qid: int64 (46k – 74.7M)
question: string (length 54 – 37.8k)
date: string (length 10)
metadata: list (length 3)
response_j: string (length 17 – 26k)
response_k: string (length 26 – 26k)
28,031,210
I have code that looks like this:

```
# step 1 remove from switch
for server in server_list:
    remove_server_from_switch(server)
    logger.info("OK : Removed %s", server)

# step 2 remove port
for port in port_list:
    remove_ports_from_switch(port)
    logger.info("OK : Removed port %s", port)

# step 3 execute the other operations
for descr in pairs:
    move_descr(descr)

# step 4 add server back to switch
for server in server_list:
    add_server_to_switch(server)
    logger.info("OK : server added %s", server)

# step 5 add back port
for port in port_list:
    add_ports_to_switch(port)
    logger.info("OK : Added port %s", port)
```

The functions inside the for loops can raise exceptions, or the user can interrupt the script with Ctrl+C. I would like to enter a roll-back mode that undoes the changes already made if an exception is raised during execution. I mean, if an exception is raised during step 3, I have to roll back steps 1 and 2 (by executing the actions in steps 4 and 5). Or if a user tries to stop the script with Ctrl+C in the middle of the for loop in step 1, I would like to roll back the action and add back the servers that were removed. How can this be done in a good Pythonic way with the use of exceptions, please? :)
2015/01/19
[ "https://Stackoverflow.com/questions/28031210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4471200/" ]
This is what context managers are for. Read up on the [with statement](https://docs.python.org/2/reference/compound_stmts.html#with) for details, but the general idea is that you write context manager classes whose `__enter__` and `__exit__` functions do the removal/re-addition of your servers/ports. Then your code structure becomes something like:

```
with RemoveServers(server_list):
    with RemovePorts(port_list):
        do_stuff
# exiting the with blocks will undo the actions
```
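A minimal sketch of one such class, assuming the question's `remove_server_from_switch`/`add_server_to_switch` functions (stubbed out here so the example runs on its own):

```python
# Stand-ins for the question's real switch API, so the sketch is runnable.
switch = set(["srv1", "srv2"])

def remove_server_from_switch(server):
    switch.remove(server)

def add_server_to_switch(server):
    switch.add(server)

class RemoveServers(object):
    """Remove servers on entry; add back whatever was removed on exit."""

    def __init__(self, servers):
        self.servers = servers
        self.removed = []  # only ever undo actions that actually succeeded

    def __enter__(self):
        try:
            for server in self.servers:
                remove_server_from_switch(server)
                self.removed.append(server)
        except BaseException:  # includes KeyboardInterrupt (Ctrl+C)
            # __exit__ is NOT called when __enter__ raises, so roll back here.
            self.__exit__(None, None, None)
            raise
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on normal exit, on any exception, and on Ctrl+C inside the block.
        for server in reversed(self.removed):
            add_server_to_switch(server)
        return False  # never suppress the exception

try:
    with RemoveServers(["srv1", "srv2"]):
        raise RuntimeError("step 3 blew up")  # simulate a failure mid-way
except RuntimeError:
    pass

print(sorted(switch))  # ['srv1', 'srv2'] -- both servers were added back
```

With one class per rollback step, nesting the `with` blocks as shown above unwinds the steps in reverse order automatically.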
Maybe something like this will work:

```
undo_dict = {remove_server_from_switch: add_server_to_switch,
             remove_ports_from_switch: add_ports_to_switch,
             add_server_to_switch: remove_server_from_switch,
             add_ports_to_switch: remove_ports_from_switch}

def undo_action(action):
    func = action[0]
    args = action[1:]
    undo_dict[func](*args)

try:
    # keep track of all successfully executed actions
    action_list = []

    # step 1 remove from switch
    for server in server_list:
        remove_server_from_switch(server)
        logger.info("OK : Removed %s", server)
        action_list.append((remove_server_from_switch, server))

    # step 2 remove port
    for port in port_list:
        remove_ports_from_switch(port)
        logger.info("OK : Removed port %s", port)
        action_list.append((remove_ports_from_switch, port))

    # step 3 execute the other operations
    for descr in pairs:
        move_descr(descr)

    # step 4 add server back to switch
    for server in server_list:
        add_server_to_switch(server)
        logger.info("OK : server added %s", server)
        action_list.append((add_server_to_switch, server))

    # step 5 add back port
    for port in port_list:
        add_ports_to_switch(port)
        logger.info("OK : Added port %s", port)
        action_list.append((add_ports_to_switch, port))
except (Exception, KeyboardInterrupt):  # KeyboardInterrupt is not a subclass of Exception
    for action in reversed(action_list):
        undo_action(action)
        logger.info("ERROR Recovery : undoing %s%s", action[0].__name__, action[1:])
finally:
    del action_list
```

EDIT: [As tzaman said below](https://stackoverflow.com/a/28031889/2437514), the best thing to do in a situation like this is to wrap the entire thing in a context manager and use the `with` statement. Then it doesn't matter whether or not an error was encountered: all your actions are undone at the end of the `with` block.
Here's what it might look like:

```
class ActionManager(object):
    def __init__(self, undo_dict):
        self.action_list = []
        self.undo_dict = undo_dict

    def action_pop(self):
        return self.action_list.pop()

    def action_add(self, *args):
        self.action_list.append(args)

    def undo_action(self, action):
        func = action[0]
        args = action[1:]
        self.undo_dict[func](*args)

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        # undo in reverse order of execution
        for action in reversed(self.action_list):
            self.undo_action(action)
            logger.info("Action Manager Cleanup : undoing %s%s",
                        action[0].__name__, action[1:])
```

Now you can just do this:

```
# same undo_dict as before
with ActionManager(undo_dict) as am:
    # step 1 remove from switch
    for server in server_list:
        remove_server_from_switch(server)
        logger.info("OK : Removed %s", server)
        am.action_add(remove_server_from_switch, server)

    # step 2 remove port
    for port in port_list:
        remove_ports_from_switch(port)
        logger.info("OK : Removed port %s", port)
        am.action_add(remove_ports_from_switch, port)

    # step 3 execute the other operations
    for descr in pairs:
        move_descr(descr)

    # steps 4 and 5 occur automatically
```

Another way to do it - and probably a lot better - would be to add the servers/ports in the `__enter__` method. You could subclass the `ActionManager` above and add the port addition and removal logic inside of it. The `__enter__` method doesn't even have to return an instance of the `ActionManager` class - if it makes sense to do so, you could even write it so that `with SwitchManager(servers, ports)` returns your pairs object, and you could end up doing this:

```
with SwitchManager(servers, ports) as pairs:
    for descr in pairs:
        move_descr(descr)
```
28,031,210
I have code that looks like this:

```
# step 1 remove from switch
for server in server_list:
    remove_server_from_switch(server)
    logger.info("OK : Removed %s", server)

# step 2 remove port
for port in port_list:
    remove_ports_from_switch(port)
    logger.info("OK : Removed port %s", port)

# step 3 execute the other operations
for descr in pairs:
    move_descr(descr)

# step 4 add server back to switch
for server in server_list:
    add_server_to_switch(server)
    logger.info("OK : server added %s", server)

# step 5 add back port
for port in port_list:
    add_ports_to_switch(port)
    logger.info("OK : Added port %s", port)
```

The functions inside the for loops can raise exceptions, or the user can interrupt the script with Ctrl+C. I would like to enter a roll-back mode that undoes the changes already made if an exception is raised during execution. I mean, if an exception is raised during step 3, I have to roll back steps 1 and 2 (by executing the actions in steps 4 and 5). Or if a user tries to stop the script with Ctrl+C in the middle of the for loop in step 1, I would like to roll back the action and add back the servers that were removed. How can this be done in a good Pythonic way with the use of exceptions, please? :)
2015/01/19
[ "https://Stackoverflow.com/questions/28031210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4471200/" ]
You should use a [with](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement) construct. As [this link](https://realpython.com/python-with-statement/#the-with-statement-approach) explains:

```
with expression as target_var:
    do_something(target_var)
```

> The context manager object results from evaluating the expression after `with`. In other words, the expression must return an object that implements the context management protocol. This protocol consists of two special methods:
>
> `.__enter__()` is called by the `with` statement to enter the runtime context.
>
> `.__exit__()` is called when the execution leaves the `with` code block.
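A minimal illustration of that protocol (the class name and events list are just for demonstration):

```python
class Managed(object):
    """Smallest useful context manager: record entry and exit."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self  # this is what `as target_var` binds to

    def __exit__(self, exc_type, exc_value, traceback):
        # Called even if the block raised; exc_type is None on a clean exit.
        self.events.append("exit")
        return False  # don't suppress exceptions

with Managed() as m:
    m.events.append("body")

print(m.events)  # ['enter', 'body', 'exit']
```

Because `__exit__` runs on exceptions and Ctrl+C as well as on normal exit, it is the natural place for rollback logic.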
Maybe something like this will work:

```
undo_dict = {remove_server_from_switch: add_server_to_switch,
             remove_ports_from_switch: add_ports_to_switch,
             add_server_to_switch: remove_server_from_switch,
             add_ports_to_switch: remove_ports_from_switch}

def undo_action(action):
    func = action[0]
    args = action[1:]
    undo_dict[func](*args)

try:
    # keep track of all successfully executed actions
    action_list = []

    # step 1 remove from switch
    for server in server_list:
        remove_server_from_switch(server)
        logger.info("OK : Removed %s", server)
        action_list.append((remove_server_from_switch, server))

    # step 2 remove port
    for port in port_list:
        remove_ports_from_switch(port)
        logger.info("OK : Removed port %s", port)
        action_list.append((remove_ports_from_switch, port))

    # step 3 execute the other operations
    for descr in pairs:
        move_descr(descr)

    # step 4 add server back to switch
    for server in server_list:
        add_server_to_switch(server)
        logger.info("OK : server added %s", server)
        action_list.append((add_server_to_switch, server))

    # step 5 add back port
    for port in port_list:
        add_ports_to_switch(port)
        logger.info("OK : Added port %s", port)
        action_list.append((add_ports_to_switch, port))
except (Exception, KeyboardInterrupt):  # KeyboardInterrupt is not a subclass of Exception
    for action in reversed(action_list):
        undo_action(action)
        logger.info("ERROR Recovery : undoing %s%s", action[0].__name__, action[1:])
finally:
    del action_list
```

EDIT: [As tzaman said below](https://stackoverflow.com/a/28031889/2437514), the best thing to do in a situation like this is to wrap the entire thing in a context manager and use the `with` statement. Then it doesn't matter whether or not an error was encountered: all your actions are undone at the end of the `with` block.
Here's what it might look like:

```
class ActionManager(object):
    def __init__(self, undo_dict):
        self.action_list = []
        self.undo_dict = undo_dict

    def action_pop(self):
        return self.action_list.pop()

    def action_add(self, *args):
        self.action_list.append(args)

    def undo_action(self, action):
        func = action[0]
        args = action[1:]
        self.undo_dict[func](*args)

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        # undo in reverse order of execution
        for action in reversed(self.action_list):
            self.undo_action(action)
            logger.info("Action Manager Cleanup : undoing %s%s",
                        action[0].__name__, action[1:])
```

Now you can just do this:

```
# same undo_dict as before
with ActionManager(undo_dict) as am:
    # step 1 remove from switch
    for server in server_list:
        remove_server_from_switch(server)
        logger.info("OK : Removed %s", server)
        am.action_add(remove_server_from_switch, server)

    # step 2 remove port
    for port in port_list:
        remove_ports_from_switch(port)
        logger.info("OK : Removed port %s", port)
        am.action_add(remove_ports_from_switch, port)

    # step 3 execute the other operations
    for descr in pairs:
        move_descr(descr)

    # steps 4 and 5 occur automatically
```

Another way to do it - and probably a lot better - would be to add the servers/ports in the `__enter__` method. You could subclass the `ActionManager` above and add the port addition and removal logic inside of it. The `__enter__` method doesn't even have to return an instance of the `ActionManager` class - if it makes sense to do so, you could even write it so that `with SwitchManager(servers, ports)` returns your pairs object, and you could end up doing this:

```
with SwitchManager(servers, ports) as pairs:
    for descr in pairs:
        move_descr(descr)
```
28,031,210
I have code that looks like this:

```
# step 1 remove from switch
for server in server_list:
    remove_server_from_switch(server)
    logger.info("OK : Removed %s", server)

# step 2 remove port
for port in port_list:
    remove_ports_from_switch(port)
    logger.info("OK : Removed port %s", port)

# step 3 execute the other operations
for descr in pairs:
    move_descr(descr)

# step 4 add server back to switch
for server in server_list:
    add_server_to_switch(server)
    logger.info("OK : server added %s", server)

# step 5 add back port
for port in port_list:
    add_ports_to_switch(port)
    logger.info("OK : Added port %s", port)
```

The functions inside the for loops can raise exceptions, or the user can interrupt the script with Ctrl+C. I would like to enter a roll-back mode that undoes the changes already made if an exception is raised during execution. I mean, if an exception is raised during step 3, I have to roll back steps 1 and 2 (by executing the actions in steps 4 and 5). Or if a user tries to stop the script with Ctrl+C in the middle of the for loop in step 1, I would like to roll back the action and add back the servers that were removed. How can this be done in a good Pythonic way with the use of exceptions, please? :)
2015/01/19
[ "https://Stackoverflow.com/questions/28031210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4471200/" ]
This is what context managers are for. Read up on the [with statement](https://docs.python.org/2/reference/compound_stmts.html#with) for details, but the general idea is that you write context manager classes whose `__enter__` and `__exit__` functions do the removal/re-addition of your servers/ports. Then your code structure becomes something like:

```
with RemoveServers(server_list):
    with RemovePorts(port_list):
        do_stuff
# exiting the with blocks will undo the actions
```
You should use a [with](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement) construct. As [this link](https://realpython.com/python-with-statement/#the-with-statement-approach) explains:

```
with expression as target_var:
    do_something(target_var)
```

> The context manager object results from evaluating the expression after `with`. In other words, the expression must return an object that implements the context management protocol. This protocol consists of two special methods:
>
> `.__enter__()` is called by the `with` statement to enter the runtime context.
>
> `.__exit__()` is called when the execution leaves the `with` code block.
2,157,665
I have created a template tag that loads a YAML document into a Python list. In my template I have `{% get_content_set %}`, and this dumps the raw list data. What I want to be able to do is something like:

```
{% for items in get_content_list %}
    <h2>{{ items.title }}</h2>
{% endfor %}
```
2010/01/28
[ "https://Stackoverflow.com/questions/2157665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245889/" ]
If the list is in a Python variable `X`, then add it to the template context (`context['X'] = X`) and you can do:

```
{% for items in X %}
    {{ items.title }}
{% endfor %}
```

A template tag is designed to render output, so it won't provide an iterable list for you to use. But you don't need that, as the normal context plus a for loop are fine.
Since writing complex template tags is not an easy task (well documented, though), I would take the `{% with %}` tag source and adapt it for my needs, so it looks like:

```
{% get_content_list as content %}
{% for items in content %}
    <h2>{{ items.title }}</h2>
{% endfor %}
```
58,578,181
I'm trying to create a Python package in **3.6**, but I also want backward compatibility with **2.7**. How can I write code for both **3.6** and **2.7**? For example, I have a method called `geo_point()`:

```
def geo_point(lat: float, lng: float):
    pass
```

This function works fine in **3.6** but not in **2.7**, where it raises a syntax error; I think **2.7** does not support type hinting. So I want to write another function which is supported by **2.7**, and when a user runs my package on **2.7** it should ignore all the functions that are not supported. For example:

```
@python_version(2.7)
def geo_point(lat, lng):
    pass
```

Is it possible to have both functions and let Python decide which one is compatible?
2019/10/27
[ "https://Stackoverflow.com/questions/58578181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12280920/" ]
If type hinting is the only issue you have with your code, then look at the SO question [Type hinting in Python 2](https://stackoverflow.com/questions/35230635/type-hinting-in-python-2). It says that Python 3 also respects type hints in comment lines: Python 2 will ignore the comments, and Python 3 respects this alternative syntax, which was specifically designed for code that still has to be Python 2 compatible. However, please note that just because the code compiles with Python 2, it doesn't mean it will yield the correct result.

If you have more compatibility issues, I strongly propose looking at the `future` module (not to be mixed up with the `from __future__ import xxx` statement). You can install future (<https://pypi.org/project/future/>) with `pip install future`. As you don't show any other code that causes problems, I cannot advise on specific issues, but <https://python-future.org/compatible_idioms.html> shows a quite extensive list of potential Python 2 / Python 3 issues and how you might resolve them. For example, opening files in Python 2 with fewer encoding/unicode issues can be done by importing an alternative version of `open` with the line:

```
from io import open
```

**Addendum**: If you really need to declare one function for Python 3 and one function for Python 2, you can do:

```
import sys

if sys.version_info < (3, 0):
    def myfunc(a, b):
        # python 2 compatible implementation
        pass
else:
    def myfunc(a, b):
        # python 3 compatible implementation
        pass
```

**However:** Both functions **must** be syntactically correct for Python 2 and Python 3. If you really want to have functions which are only Python 2 or only Python 3 syntactically correct (e.g.
a `print` statement or `await`), then you could do something like:

```
import sys

if sys.version_info < (3, 0):
    # the file myfunc_py2.py does not have to be python3 compatible
    from myfunc_py2 import myfunc
else:
    # the file myfunc_py3.py does not have to be python2 compatible
    from myfunc_py3 import myfunc
```
I doubt it's worth the trouble, but as a proof of concept: you could use a combination of a decorator and the built-in `exec()` function. Using `exec()` is a way to avoid syntax errors due to language differences. Here's what I mean:

```
import sys

sys_vers_major, sys_vers_minor, sys_vers_micro = sys.version_info[:3]
sys_vers = sys_vers_major + sys_vers_minor * .1  # + sys_vers_micro*.01
print('sys_vers: %s' % sys_vers)

def python_version(vers, code1, code2):
    lcls = {}  # Dictionary to temporarily store version of function defined.
    if sys_vers == vers:
        exec(code1, globals(), lcls)
    else:
        exec(code2, globals(), lcls)

    def decorator(func):
        return lcls[func.__name__]
    return decorator

@python_version(2.7, """
def geo_point(lat, lng):
    print('code1 version')
""", """
def geo_point(lat: float, lng: float):
    print('code2 version')
""")
def geo_point():
    pass  # Needed to know the name of the function that's being defined.

geo_point(121, 47)  # Show which version was defined.
```
58,578,181
I'm trying to create a Python package in **3.6**, but I also want backward compatibility with **2.7**. How can I write code for both **3.6** and **2.7**? For example, I have a method called `geo_point()`:

```
def geo_point(lat: float, lng: float):
    pass
```

This function works fine in **3.6** but not in **2.7**, where it raises a syntax error; I think **2.7** does not support type hinting. So I want to write another function which is supported by **2.7**, and when a user runs my package on **2.7** it should ignore all the functions that are not supported. For example:

```
@python_version(2.7)
def geo_point(lat, lng):
    pass
```

Is it possible to have both functions and let Python decide which one is compatible?
2019/10/27
[ "https://Stackoverflow.com/questions/58578181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12280920/" ]
If type hinting is the only issue you have with your code, then look at the SO question [Type hinting in Python 2](https://stackoverflow.com/questions/35230635/type-hinting-in-python-2). It says that Python 3 also respects type hints in comment lines: Python 2 will ignore the comments, and Python 3 respects this alternative syntax, which was specifically designed for code that still has to be Python 2 compatible. However, please note that just because the code compiles with Python 2, it doesn't mean it will yield the correct result.

If you have more compatibility issues, I strongly propose looking at the `future` module (not to be mixed up with the `from __future__ import xxx` statement). You can install future (<https://pypi.org/project/future/>) with `pip install future`. As you don't show any other code that causes problems, I cannot advise on specific issues, but <https://python-future.org/compatible_idioms.html> shows a quite extensive list of potential Python 2 / Python 3 issues and how you might resolve them. For example, opening files in Python 2 with fewer encoding/unicode issues can be done by importing an alternative version of `open` with the line:

```
from io import open
```

**Addendum**: If you really need to declare one function for Python 3 and one function for Python 2, you can do:

```
import sys

if sys.version_info < (3, 0):
    def myfunc(a, b):
        # python 2 compatible implementation
        pass
else:
    def myfunc(a, b):
        # python 3 compatible implementation
        pass
```

**However:** Both functions **must** be syntactically correct for Python 2 and Python 3. If you really want to have functions which are only Python 2 or only Python 3 syntactically correct (e.g.
a `print` statement or `await`), then you could do something like:

```
import sys

if sys.version_info < (3, 0):
    # the file myfunc_py2.py does not have to be python3 compatible
    from myfunc_py2 import myfunc
else:
    # the file myfunc_py3.py does not have to be python2 compatible
    from myfunc_py3 import myfunc
```
> I think 2.7 not support type hinting

Actually, Python 2 supports type hinting, and you can write backwards-compatible code. See [the answer about Python 2 type hinting on StackOverflow](https://stackoverflow.com/a/35230792/3694363).
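For reference, the comment-based form of the question's `geo_point` might look like this (a sketch; the type comment is ignored at runtime by both interpreters but is read by type checkers such as mypy):

```python
def geo_point(lat, lng):
    # type: (float, float) -> None
    """Annotated via a type comment: valid syntax on Python 2 and 3."""
    pass

geo_point(48.8584, 2.2945)  # runs on either interpreter
```

This keeps a single definition, so there is no need for a per-version decorator at all.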
58,578,181
I'm trying to create a Python package in **3.6**, but I also want backward compatibility with **2.7**. How can I write code for both **3.6** and **2.7**? For example, I have a method called `geo_point()`:

```
def geo_point(lat: float, lng: float):
    pass
```

This function works fine in **3.6** but not in **2.7**, where it raises a syntax error; I think **2.7** does not support type hinting. So I want to write another function which is supported by **2.7**, and when a user runs my package on **2.7** it should ignore all the functions that are not supported. For example:

```
@python_version(2.7)
def geo_point(lat, lng):
    pass
```

Is it possible to have both functions and let Python decide which one is compatible?
2019/10/27
[ "https://Stackoverflow.com/questions/58578181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12280920/" ]
If type hinting is the only issue you have with your code, then look at the SO question [Type hinting in Python 2](https://stackoverflow.com/questions/35230635/type-hinting-in-python-2). It says that Python 3 also respects type hints in comment lines: Python 2 will ignore the comments, and Python 3 respects this alternative syntax, which was specifically designed for code that still has to be Python 2 compatible. However, please note that just because the code compiles with Python 2, it doesn't mean it will yield the correct result.

If you have more compatibility issues, I strongly propose looking at the `future` module (not to be mixed up with the `from __future__ import xxx` statement). You can install future (<https://pypi.org/project/future/>) with `pip install future`. As you don't show any other code that causes problems, I cannot advise on specific issues, but <https://python-future.org/compatible_idioms.html> shows a quite extensive list of potential Python 2 / Python 3 issues and how you might resolve them. For example, opening files in Python 2 with fewer encoding/unicode issues can be done by importing an alternative version of `open` with the line:

```
from io import open
```

**Addendum**: If you really need to declare one function for Python 3 and one function for Python 2, you can do:

```
import sys

if sys.version_info < (3, 0):
    def myfunc(a, b):
        # python 2 compatible implementation
        pass
else:
    def myfunc(a, b):
        # python 3 compatible implementation
        pass
```

**However:** Both functions **must** be syntactically correct for Python 2 and Python 3. If you really want to have functions which are only Python 2 or only Python 3 syntactically correct (e.g.
a `print` statement or `await`), then you could do something like:

```
import sys

if sys.version_info < (3, 0):
    # the file myfunc_py2.py does not have to be python3 compatible
    from myfunc_py2 import myfunc
else:
    # the file myfunc_py3.py does not have to be python2 compatible
    from myfunc_py3 import myfunc
```
While the other answers emphasize type hinting, I think the **six** package may be of help. The project page, with a link to the documentation, is at <https://pypi.org/project/six/>.

> Six is a Python 2 and 3 compatibility library. It provides utility functions for smoothing over the differences between the Python versions with the goal of writing Python code that is compatible on both Python versions. See the documentation for more information on what is provided.

One of the notable differences between Python 2 and Python 3 is that the lack of parens in a print statement will cause a syntax error in Python 3, for example:

```
print "Hello, World"  # OK in Python 2, syntax error in Python 3.
```

Six addresses these differences.
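A small sketch of what six gives you (assuming `six` is installed): `six.PY2`/`six.PY3` for explicit branching, `six.print_` as a function-call form of print, and `six.text_type` for the native text type:

```python
import six

# Booleans for explicit version branching.
if six.PY3:
    IntTypes = (int,)
else:
    IntTypes = (int, long)  # noqa: F821 -- `long` only exists on Python 2

# six.print_ behaves like Python 3's print() on both interpreters.
six.print_("Hello,", "World", sep=" ")

# six.text_type is unicode on Python 2 and str on Python 3.
label = six.text_type(42)
print(label)  # 42
```

Because the Python-2-only names sit in a branch that Python 3 never executes, the file stays syntactically valid on both versions.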
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ...). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with the location where it is (if it has one). Something like this:

```
ID | Location
1  | "Secretary's office"
2  |
3  | "Reception"
4  |
```

With this I can know that there are two printers that are working (1 and 3), and others offline (2 and 4). The first approach for the model would be something like this:

```
class Printer(models.Model):
    brand = models.CharField( ...
    ...
    location = models.CharField(
        max_length=100,
        unique=True,
        blank=True
    )
```

But this doesn't work properly: you can only store one record with one blank location. It is stored as an empty string in the database, and the database doesn't allow you to insert it more than once (it says there is already another empty string for that field). If you add the `null=True` parameter, it behaves the same way. This is because, instead of inserting a NULL value in the corresponding column, the default value is an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses the CharField class and overrides some methods to store different values in the database.
Here is the code:

```
from django.db import models

class NullableCharField(models.CharField):
    description = "CharField that obeys null=True"

    def to_python(self, value):
        if isinstance(value, models.CharField):
            return value
        return value or ""

    def get_db_prep_value(self, value):
        return value or None
```

This works fine: you can store multiple records with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as `None` instead of an empty string:

```
ID | Location
1  | "Secretary's office"
2  | None
3  | "Reception"
4  | None
```

I suppose there is a method (or several) in which you must specify how the data is converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks.
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
There is a [`Queue`](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue) class within the `multiprocessing` module specifically for this purpose.

Edit: If you are looking for a complete framework for parallel computing which features a `map()` function using a task queue, have a look at the parallel computing facilities of [IPython](http://ipython.scipy.org/). In particular, you can use the [`TaskClient.map()`](http://ipython.scipy.org/doc/stable/html/parallel/parallel_task.html#parallel-map) function to get a load-balanced mapping onto the available processors.
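For a plain-stdlib version of that idea, `multiprocessing.Pool` gives you the load-balanced `map()` directly (a sketch; `convert` stands in for a real conversion job):

```python
from multiprocessing import Pool

def convert(path):
    # Stand-in for a real conversion job (resize, re-encode, ...).
    return path.upper()

def convert_all(paths, workers=2):
    # Pool.map farms the jobs out to worker processes and returns
    # the results in input order once every job has finished.
    with Pool(processes=workers) as pool:
        return pool.map(convert, paths)

if __name__ == "__main__":
    print(convert_all(["a.jpg", "b.jpg", "c.jpg"]))  # ['A.JPG', 'B.JPG', 'C.JPG']
```

The pool hands a new job to each worker as soon as it becomes free, which is exactly the "start the next conversion when one finishes" behaviour described here.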
About queue implementations: there are some. Look at the Celery project: <http://celeryproject.org/>

In your case, you can run 12 conversions (one on each CPU) as Celery tasks, add a callback function (to the conversion or to the task), and in that callback start a new conversion task when one of the previous conversions finishes.
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ...). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with the location where it is (if it has one). Something like this:

```
ID | Location
1  | "Secretary's office"
2  |
3  | "Reception"
4  |
```

With this I can know that there are two printers that are working (1 and 3), and others offline (2 and 4). The first approach for the model would be something like this:

```
class Printer(models.Model):
    brand = models.CharField( ...
    ...
    location = models.CharField(
        max_length=100,
        unique=True,
        blank=True
    )
```

But this doesn't work properly: you can only store one record with one blank location. It is stored as an empty string in the database, and the database doesn't allow you to insert it more than once (it says there is already another empty string for that field). If you add the `null=True` parameter, it behaves the same way. This is because, instead of inserting a NULL value in the corresponding column, the default value is an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses the CharField class and overrides some methods to store different values in the database.
Here is the code:

```
from django.db import models

class NullableCharField(models.CharField):
    description = "CharField that obeys null=True"

    def to_python(self, value):
        if isinstance(value, models.CharField):
            return value
        return value or ""

    def get_db_prep_value(self, value):
        return value or None
```

This works fine: you can store multiple records with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as `None` instead of an empty string:

```
ID | Location
1  | "Secretary's office"
2  | None
3  | "Reception"
4  | None
```

I suppose there is a method (or several) in which you must specify how the data is converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks.
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
This is trivial to do with [jug](http://luispedro.org/software/jug):

```
from glob import glob
from jug import Task

def process_image(img):
    ...

images = glob('*.jpg')
for im in images:
    Task(process_image, im)
```

Now, just run `jug execute` a few times to spawn worker processes.
About queue implementations: there are some. Look at the Celery project: <http://celeryproject.org/>

In your case, you can run 12 conversions (one on each CPU) as Celery tasks, add a callback function (to the conversion or to the task), and in that callback start a new conversion task when one of the previous conversions finishes.
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ...). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with the location where it is (if it has one). Something like this:

```
ID | Location
1  | "Secretary's office"
2  |
3  | "Reception"
4  |
```

With this I can know that there are two printers that are working (1 and 3), and others offline (2 and 4). The first approach for the model would be something like this:

```
class Printer(models.Model):
    brand = models.CharField( ...
    ...
    location = models.CharField(
        max_length=100,
        unique=True,
        blank=True
    )
```

But this doesn't work properly: you can only store one record with one blank location. It is stored as an empty string in the database, and the database doesn't allow you to insert it more than once (it says there is already another empty string for that field). If you add the `null=True` parameter, it behaves the same way. This is because, instead of inserting a NULL value in the corresponding column, the default value is an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses the CharField class and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem of this is that it shows the blank locations with Nones instead of empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I supposed that there is a method (or multiple) in which must be specify how the data must be converted, between the model and the database class manager in the two ways (database to model and model to database). Is this the best way to have an unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
About queue implementations. There are some. Look at the Celery project. <http://celeryproject.org/> So, in your case, you can run 12 conversions (one on each CPU) as Celery tasks, add a callback function (to the conversion or to the task) and in that callback function add a new conversion task running when one of the previous conversions is finished.
This is not the case if you use [`Pool.imap_unordered`](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered).
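For context, `Pool.imap_unordered` hands results back in completion order rather than submission order, so a slow conversion never blocks the ones already finished. A minimal sketch of this (the `convert` body here is a stand-in for the real image conversion, not the asker's code):

```python
from multiprocessing import Pool

def convert(name):
    # Stand-in for the real image conversion.
    return name.upper()

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Results arrive as each worker finishes; a slow image never
        # holds up the ones that are already done.
        for result in pool.imap_unordered(convert, ["a.jpg", "b.jpg", "c.jpg"]):
            print(result)
```

Note that the iteration order of the results is nondeterministic, which is exactly the trade-off `imap_unordered` makes for better CPU utilisation.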
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
There is a [`Queue`](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue) class within the `multiprocessing` module specifically for this purpose. Edit: If you are looking for a complete framework for parallel computing which features a `map()` function using a task queue, have a look at the parallel computing facilities of [IPython](http://ipython.scipy.org/). In particular, you can use the [`TaskClient.map()`](http://ipython.scipy.org/doc/stable/html/parallel/parallel_task.html#parallel-map) function to get a load-balanced mapping to the available processors.
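A minimal sketch of the queue approach this answer describes: a fixed pool of worker processes pulling file names off a shared `multiprocessing.Queue` (the `worker` body is a stand-in for the real image conversion; `convert_all` and its names are illustrative, not from the original):

```python
from multiprocessing import Process, Queue

def worker(tasks, results):
    # Each worker pulls file names until it sees the None sentinel.
    for name in iter(tasks.get, None):
        results.put(name.upper())  # stand-in for the real conversion

def convert_all(images, n_workers=2):
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for name in images:
        tasks.put(name)
    for _ in workers:          # one sentinel per worker shuts them all down
        tasks.put(None)
    out = [results.get() for _ in images]
    for w in workers:
        w.join()
    return out

if __name__ == "__main__":
    print(sorted(convert_all(["a.jpg", "b.jpg", "c.jpg"])))
```

Because each idle worker grabs the next task as soon as it finishes, the CPUs stay busy even when individual conversions take very different amounts of time.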
The Python threading library that has brought me most joy is [Parallel Python (PP)](http://www.parallelpython.com/). It is trivial with PP to use a thread pool approach with a single queue to achieve what you need.
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
There is a [`Queue`](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue) class within the `multiprocessing` module specifically for this purpose. Edit: If you are looking for a complete framework for parallel computing which features a `map()` function using a task queue, have a look at the parallel computing facilities of [IPython](http://ipython.scipy.org/). In particular, you can use the [`TaskClient.map()`](http://ipython.scipy.org/doc/stable/html/parallel/parallel_task.html#parallel-map) function to get a load-balanced mapping to the available processors.
This is trivial to do with [jug](http://luispedro.org/software/jug): ``` def process_image(img): .... images = glob('*.jpg') for im in images: Task(process_image, im) ``` Now, just run `jug execute` a few times to spawn worker processes.
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
There is a [`Queue`](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue) class within the `multiprocessing` module specifically for this purpose. Edit: If you are looking for a complete framework for parallel computing which features a `map()` function using a task queue, have a look at the parallel computing facilities of [IPython](http://ipython.scipy.org/). In particular, you can use the [`TaskClient.map()`](http://ipython.scipy.org/doc/stable/html/parallel/parallel_task.html#parallel-map) function to get a load-balanced mapping to the available processors.
This is not the case if you use [`Pool.imap_unordered`](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered).
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
This is trivial to do with [jug](http://luispedro.org/software/jug): ``` def process_image(img): .... images = glob('*.jpg') for im in images: Task(process_image, im) ``` Now, just run `jug execute` a few times to spawn worker processes.
The Python threading library that has brought me most joy is [Parallel Python (PP)](http://www.parallelpython.com/). It is trivial with PP to use a thread pool approach with a single queue to achieve what you need.
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
The Python threading library that has brought me most joy is [Parallel Python (PP)](http://www.parallelpython.com/). It is trivial with PP to use a thread pool approach with a single queue to achieve what you need.
This is not the case if you use [`Pool.imap_unordered`](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered).
4,838,740
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ... ). There cannot be two repeated locations, and if a printer is not working it should not have a location. I want to have a list in which all printers appear, each with its location (if it has one). Something like this: ``` ID | Location 1 | "Secretary's office" 2 | 3 | "Reception" 4 | ``` With this I can know that two printers are working (1 and 3) and the others are offline (2 and 4). The first approach for the model would be something like this: ``` class Printer(models.Model): brand = models.CharField( ... ... location = models.CharField( max_length=100, unique=True, blank=True ) ``` But this doesn't work properly. You can only store one register with a blank location. It is stored as an empty string in the database, and the database won't let you insert it more than once (it says there is already an empty string for that field). If you add the "null=True" parameter, it behaves the same way, because instead of inserting NULL in the corresponding column, the default value is still an empty string. Searching the web I found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses CharField and overrides some methods to store different values in the database.
Here is the code: ``` from django.db import models class NullableCharField(models.CharField): description = "CharField that obeys null=True" def to_python(self, value): if isinstance(value, models.CharField): return value return value or "" def get_db_prep_value(self, value): return value or None ``` This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem is that it shows the blank locations as None instead of an empty string. ``` ID | Location 1 | "Secretary's office" 2 | None 3 | "Reception" 4 | None ``` I suppose there is a method (or several) where one must specify how the data should be converted between the model and the database, in both directions (database to model and model to database). Is this the best way to have a unique, blank CharField? Thanks,
2011/01/29
[ "https://Stackoverflow.com/questions/4838740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/454760/" ]
This is trivial to do with [jug](http://luispedro.org/software/jug): ``` def process_image(img): .... images = glob('*.jpg') for im in images: Task(process_image, im) ``` Now, just run `jug execute` a few times to spawn worker processes.
This is not the case if you use [`Pool.imap_unordered`](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered).
4,240,266
I have a little module that creates a window (program1). I've imported it into another Python program of mine (program2). How do I make program2 get the self.x and x that are in program1? This is program1: ``` import Tkinter class Class(Tkinter.Tk): def __init__(self, parent): Tkinter.Tk.__init__(self, parent) self.parent = parent self.Main() def Main(self): self.button= Tkinter.Button(self,text='hello') self.button.pack() self.x = 34 x = 62 def run(): app = Class(None) app.mainloop() if __name__ == "__main__": run() ```
2010/11/21
[ "https://Stackoverflow.com/questions/4240266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/433417/" ]
You can access the variable `self.x` as a member of an instance of `Class`: ``` c = Class(parent) print(c.x) ``` You cannot access the local variable - it goes out of scope when the method call ends.
I'm not sure exactly what the purpose of 'self.x' and 'x' is, but one thing to note in the 'Main' method of class Class ``` def Main(self): self.button= Tkinter.Button(self,text='hello') self.button.pack() self.x = 34 x = 62 ``` is that 'x' and 'self.x' are two different variables. The variable 'x' is a local variable of the method 'Main', and 'self.x' is an instance variable. As Mark says, you can access the instance variable 'self.x' as an attribute of an instance of Class, but the method variable 'x' is only accessible from within the 'Main' method. If you would like to be able to access the method variable 'x', you could change the signature of the 'Main' method as follows. ``` def Main(self,x=62): self.button= Tkinter.Button(self,text='hello') self.button.pack() self.x = 34 return x ``` This way you can set the value of the method variable 'x' when you call the 'Main' method from an instance of Class ``` >>> c = Class(None) >>> c.Main(4) 4 ``` or just keep the default ``` >>> c.Main() 62 ``` and, as before (like Mark said), you will have access to the instance variable 'self.x' ``` >>> c.x 34 ```
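The local-versus-instance distinction the answer draws can be shown without Tkinter. This is a hypothetical minimal class, not the asker's code:

```python
class Example:
    def main(self):
        self.x = 34   # instance attribute: lives on the object after the call
        x = 62        # local variable: destroyed when main() returns
        return x      # the only way the local escapes is as a return value

e = Example()
e.main()
print(e.x)            # the instance attribute is reachable from outside
```

Once `main()` has run, `e.x` is an ordinary attribute of the instance, while the local `x` exists only for the duration of the call.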
6,372,159
Can anyone suggest the most pythonic way to import modules in Python? Let me explain - I have read a lot of Python code and found several different ways of importing modules, or, to be more precise, of when to import: 1. Use one module (or several) that includes all the imports (third-party modules) necessary for the entire project, so the imports are concentrated in a few modules and are easy to maintain. When any module requires another module to be imported, it asks the references module for it. For example, in our project we have a separate level named 'references' containing modules like 'system.py' (references to all system libraries), 'platform.py' (references to all platform libraries), 'devexpress.py' (references to all devexpress libraries), and so on. These modules look like: 2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project 3. Each function/class imports locally, e.g. right after its definition, and imports only the things it really needs. Please find samples below. 1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes): ======================================================================================================= ``` #references.py import re import clr import math import System import System.Text.RegularExpressions import System.Random import System.Threading import System.DateTime # System assemblies clr.AddReference("System.Core") clr.AddReference("System.Data") clr.AddReference("System.Drawing") ... #test.py from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter, OleDbCommand, OleDbSchemaGuid) def get_dict_from_data_table(dataTable): pass ``` 2 module with 'import' and 'from ... import ...' 
as well as methods and classes: ================================================================================ ``` from ... import ... from ... import ... def Generate(param, param1 ...): pass ``` 3 module with 'import' and 'from ... import ...' statements which are used inside methods and classes: ========================================================================================================= ``` import clr clr.AddReference("assembly") from ... import ... ... def generate_(txt, param1, param2): from ... import ... from ... import ... from ... import ... if not cond(param1): res = "text" if not cond(param2): name = "default" ``` So what is the most pythonic way to import modules in Python?
2011/06/16
[ "https://Stackoverflow.com/questions/6372159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781150/" ]
It *really* doesn't matter, so long as you don't `from ... import *`. The rest is all taste and getting around cyclic import issues. [PEP 8](http://www.python.org/dev/peps/pep-0008/) states that you should import at the top of the script, but even that isn't set in stone.
Python's "import" loads a module into its own namespace, so you have to prefix references to any names from the imported module with the module name followed by a dot ``` import animals animals.Elephant() ``` "from" loads names from a module into the current namespace, so you can refer to them without mentioning the module name again ``` from animals import Elephant Elephant() ``` or ``` from animals import * Elephant() ``` Using from is good (but wildcard imports are discouraged), but if you have a big project, importing from different modules may cause naming conflicts. Importing an **Elephant()** function from two different modules will cause a problem (as will wildcard imports with **\***). So, if you have a large project where you import many different things from other modules, it is better to use import and refer to imported things as **module\_name.your\_class\_or\_function**. Otherwise, use the from notation...
6,372,159
Can anyone suggest the most pythonic way to import modules in Python? Let me explain - I have read a lot of Python code and found several different ways of importing modules, or, to be more precise, of when to import: 1. Use one module (or several) that includes all the imports (third-party modules) necessary for the entire project, so the imports are concentrated in a few modules and are easy to maintain. When any module requires another module to be imported, it asks the references module for it. For example, in our project we have a separate level named 'references' containing modules like 'system.py' (references to all system libraries), 'platform.py' (references to all platform libraries), 'devexpress.py' (references to all devexpress libraries), and so on. These modules look like: 2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project 3. Each function/class imports locally, e.g. right after its definition, and imports only the things it really needs. Please find samples below. 1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes): ======================================================================================================= ``` #references.py import re import clr import math import System import System.Text.RegularExpressions import System.Random import System.Threading import System.DateTime # System assemblies clr.AddReference("System.Core") clr.AddReference("System.Data") clr.AddReference("System.Drawing") ... #test.py from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter, OleDbCommand, OleDbSchemaGuid) def get_dict_from_data_table(dataTable): pass ``` 2 module with 'import' and 'from ... import ...' 
as well as methods and classes: ================================================================================ ``` from ... import ... from ... import ... def Generate(param, param1 ...): pass ``` 3 module with 'import' and 'from ... import ...' statements which are used inside methods and classes: ========================================================================================================= ``` import clr clr.AddReference("assembly") from ... import ... ... def generate_(txt, param1, param2): from ... import ... from ... import ... from ... import ... if not cond(param1): res = "text" if not cond(param2): name = "default" ``` So what is the most pythonic way to import modules in Python?
2011/06/16
[ "https://Stackoverflow.com/questions/6372159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781150/" ]
It *really* doesn't matter, so long as you don't `from ... import *`. The rest is all taste and getting around cyclic import issues. [PEP 8](http://www.python.org/dev/peps/pep-0008/) states that you should import at the top of the script, but even that isn't set in stone.
People have already commented on the major style issues (at the top of the script, etc), so I'll skip that. For my imports, I usually have them ordered alphabetically by module name (regardless of whether it's 'import' or 'from ... import ...'). I split them into groups: standard lib; third-party modules (from PyPI or elsewhere); internal modules. ``` import os import sys import twisted import zope import mymodule_1 import mymodule_2 ```
6,372,159
Can anyone suggest the most pythonic way to import modules in Python? Let me explain - I have read a lot of Python code and found several different ways of importing modules, or, to be more precise, of when to import: 1. Use one module (or several) that includes all the imports (third-party modules) necessary for the entire project, so the imports are concentrated in a few modules and are easy to maintain. When any module requires another module to be imported, it asks the references module for it. For example, in our project we have a separate level named 'references' containing modules like 'system.py' (references to all system libraries), 'platform.py' (references to all platform libraries), 'devexpress.py' (references to all devexpress libraries), and so on. These modules look like: 2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project 3. Each function/class imports locally, e.g. right after its definition, and imports only the things it really needs. Please find samples below. 1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes): ======================================================================================================= ``` #references.py import re import clr import math import System import System.Text.RegularExpressions import System.Random import System.Threading import System.DateTime # System assemblies clr.AddReference("System.Core") clr.AddReference("System.Data") clr.AddReference("System.Drawing") ... #test.py from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter, OleDbCommand, OleDbSchemaGuid) def get_dict_from_data_table(dataTable): pass ``` 2 module with 'import' and 'from ... import ...' 
as well as methods and classes: ================================================================================ ``` from ... import ... from ... import ... def Generate(param, param1 ...): pass ``` 3 module with 'import' and 'from ... import ...' statements which are used inside methods and classes: ========================================================================================================= ``` import clr clr.AddReference("assembly") from ... import ... ... def generate_(txt, param1, param2): from ... import ... from ... import ... from ... import ... if not cond(param1): res = "text" if not cond(param2): name = "default" ``` So what is the most pythonic way to import modules in Python?
2011/06/16
[ "https://Stackoverflow.com/questions/6372159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781150/" ]
It *really* doesn't matter, so long as you don't use `from ... import *`. The rest is all taste and getting around cyclic import issues. [PEP 8](http://www.python.org/dev/peps/pep-0008/) states that you should import at the top of the script, but even that isn't set in stone.
Do not use `from module import *`. This will pollute the namespace and is highly frowned upon. However, you can import specific things using from; `from module import something`. This keeps the namespace clean. On larger projects if you use a wildcard you could be importing 2 foo or 2 bar into the same namespace. [PEP 8](http://www.python.org/dev/peps/pep-0008/) says to have imports on separate lines. For instance: ``` import os import sys import yourmodule from yourmodule import specific_stuff ``` One thing I do is alphabetize my imports into two groups. One is std/third party and the second is internal modules.
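A tiny illustration of the collision the answer warns about, using two standard-library modules that both define `sqrt`:

```python
from math import *   # brings in sqrt() returning floats
from cmath import *  # silently shadows it with the complex version

# The call below now hits cmath.sqrt, not math.sqrt:
print(sqrt(4))  # (2+0j) instead of 2.0
```

With explicit `import math` / `import cmath` and qualified access, both versions would stay usable and the shadowing could not happen.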
6,372,159
Can anyone suggest what is the most pythonic way to import modules in Python? Let me explain - I have read a lot of Python code and found several different ways to import modules, or to be more precise - when to import: 1. Use one module/several modules which include all the imports (third-party modules) which are necessary for the entire project, so all of the imports are concentrated within a few modules and it is easy to maintain them. When any single module requires a module to be imported, it asks the references module for it. For example, in our project we have a separate level named 'references' which contains modules like 'system.py' (contains references to all system libraries), 'platform.py' (contains references to all platform libraries), 'devexpress.py' (contains references to all devexpress libraries) and so on. These modules look like: 2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project 3. Each function/class uses imports locally, e.g. right after the definition, and imports only the things it really needs. Please find samples below. 1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes): ======================================================================================================= ``` #references.py import re import clr import math import System import System.Text.RegularExpressions import System.Random import System.Threading import System.DateTime # System assemblies clr.AddReference("System.Core") clr.AddReference("System.Data") clr.AddReference("System.Drawing") ... #test.py from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter, OleDbCommand, OleDbSchemaGuid) def get_dict_from_data_table(dataTable): pass ``` 2 module with 'import' and 'from ... import ...' 
as well as methods and classes: ================================================================================ ``` from ... import ... from ... import ... def Generate(param, param1 ...): pass ``` 3 module with 'import' and 'from ... import ...' statements which are used inside of methods and classes: ========================================================================================================= ``` import clr clr.AddReference("assembly") from ... import ... ... def generate_(txt, param1, param2): from ... import ... from ... import ... from ... import ... if not cond(param1): res = "text" if not cond(param2): name = "default" ``` So what is the most pythonic way to import modules in python?
2011/06/16
[ "https://Stackoverflow.com/questions/6372159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781150/" ]
People have already commented on the major style issues (imports at the top of the script, etc.), so I'll skip that. For my imports, I usually have them ordered alphabetically by module name (regardless of whether it's 'import' or 'from ... import ...'). I split them into groups of: standard lib; third-party modules (from PyPI or elsewhere); internal modules. ``` import os import sys import twisted import zope import mymodule_1 import mymodule_2 ```
Python's "import" loads a Python module into its own namespace, so that you have to add the module name followed by a dot in front of references to any names from the imported module ``` import animals animals.Elephant() ``` "from" loads names from a Python module into the current namespace, so that you can refer to them without mentioning the module name again ``` from animals import Elephant Elephant() ``` or ``` from animals import * Elephant() ``` Using from is fine (but using a wildcard import is discouraged). However, if you have a large project, importing from different modules may cause naming conflicts: importing an **Elephant()** function from two different modules will cause a problem (just as wildcard imports with **\*** do). So, if you have a large project where you import many different things from other modules, it is better to use import and refer to imported things as **module\_name.your\_class\_or\_function**. Otherwise, use the from notation...
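A minimal sketch of the two access styles side by side (using `os.path` from the standard library purely for illustration; the `animals` module above is the answer's hypothetical example):

```python
import os.path            # qualified access: os.path.join(...)
from os.path import join  # direct access: join(...)

# Both names point at the same function; the qualified form just keeps
# the module name visible at the call site, avoiding collisions.
print(os.path.join("animals", "elephant.py") == join("animals", "elephant.py"))  # True
```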
6,372,159
Can anyone suggest what is the most pythonic way to import modules in Python? Let me explain - I have read a lot of Python code and found several different ways to import modules, or to be more precise - when to import: 1. Use one module/several modules which include all the imports (third-party modules) which are necessary for the entire project, so all of the imports are concentrated within a few modules and it is easy to maintain them. When any single module requires a module to be imported, it asks the references module for it. For example, in our project we have a separate level named 'references' which contains modules like 'system.py' (contains references to all system libraries), 'platform.py' (contains references to all platform libraries), 'devexpress.py' (contains references to all devexpress libraries) and so on. These modules look like: 2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project 3. Each function/class uses imports locally, e.g. right after the definition, and imports only the things it really needs. Please find samples below. 1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes): ======================================================================================================= ``` #references.py import re import clr import math import System import System.Text.RegularExpressions import System.Random import System.Threading import System.DateTime # System assemblies clr.AddReference("System.Core") clr.AddReference("System.Data") clr.AddReference("System.Drawing") ... #test.py from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter, OleDbCommand, OleDbSchemaGuid) def get_dict_from_data_table(dataTable): pass ``` 2 module with 'import' and 'from ... import ...' 
as well as methods and classes: ================================================================================ ``` from ... import ... from ... import ... def Generate(param, param1 ...): pass ``` 3 module with 'import' and 'from ... import ...' statements which are used inside of methods and classes: ========================================================================================================= ``` import clr clr.AddReference("assembly") from ... import ... ... def generate_(txt, param1, param2): from ... import ... from ... import ... from ... import ... if not cond(param1): res = "text" if not cond(param2): name = "default" ``` So what is the most pythonic way to import modules in python?
2011/06/16
[ "https://Stackoverflow.com/questions/6372159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/781150/" ]
People have already commented on the major style issues (imports at the top of the script, etc.), so I'll skip that. For my imports, I usually have them ordered alphabetically by module name (regardless of whether it's 'import' or 'from ... import ...'). I split them into groups of: standard lib; third-party modules (from PyPI or elsewhere); internal modules. ``` import os import sys import twisted import zope import mymodule_1 import mymodule_2 ```
Do not use `from module import *`. This will pollute the namespace and is highly frowned upon. However, you can import specific things using from; `from module import something`. This keeps the namespace clean. On larger projects if you use a wildcard you could be importing 2 foo or 2 bar into the same namespace. [PEP 8](http://www.python.org/dev/peps/pep-0008/) says to have imports on separate lines. For instance: ``` import os import sys import yourmodule from yourmodule import specific_stuff ``` One thing I do is alphabetize my imports into two groups. One is std/third party and the second is internal modules.
50,105,459
Hello, I have been playing around with Python recently and have been trying to learn how to control external peripherals and I/O ports on my laptop. I have been trying to disable USB ports and disable my network adapter. However, when I run my program it does not work. The code does not have a specific syntax error, but when it is run nothing happens. ``` import subprocess def main(): print("PROGRAM STARTED") subprocess.call(["runas", "/user:Administrator", "cmd.exe /c netsh interface set interface '*' admin=disable"]) print("Program Exited") if __name__ == "__main__": main() ```
2018/04/30
[ "https://Stackoverflow.com/questions/50105459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7802263/" ]
I think you should try to run such commands as admin on Windows. This might help: <https://social.technet.microsoft.com/Forums/windows/en-US/05cce5f6-3c3a-4bb8-8b72-8c1ce4b5eff1/how-to-run-a-program-as-adminitrator-via-the-command-line?forum=w7itproappcompat> You can also modify your command to print the output to stdout to debug easily. `print(subprocess.check_output(["runas", "/user:Bradley", "cmd.exe /c netsh interface set interface '*' admin=disable"]))`
I found the issue with the code. To start with, I was using the `subprocess.call` function; however, rather than trying to elevate to Administrator through Python, run the script from an elevated command prompt and use this line of code instead: ``` subprocess.run(["powershell","Disable-NetAdapter -Name '*'"]) ``` Note: yes, I changed from cmd to PowerShell because the command was easier to use.
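When debugging calls like this, capturing the output and return code makes failures visible instead of silent. A hedged sketch of the pattern (the harmless `python -c` command below is a stand-in for the PowerShell call, which needs admin rights and a real adapter):

```python
import subprocess
import sys

# Pattern: subprocess.run() with captured output, so a failing command
# (e.g. one denied for lack of Administrator rights) reports *why* it failed.
result = subprocess.run(
    [sys.executable, "-c", "print('adapter disabled')"],  # stand-in command
    capture_output=True,
    text=True,
)
print(result.returncode)       # 0 on success
print(result.stdout.strip())   # adapter disabled
```

Checking `result.returncode` and `result.stderr` after the real PowerShell invocation would have shown the permission error instead of "nothing happens".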
67,915,722
I am fighting with listing all the possibilities of a command with optional and mandatory parameters in Python. I need it to generate an autocomplete script in bash based on the help output from another script. E.g. fictional command: ``` add disk -pool <name> { -diskid <diskid> | -diskid auto [-fx | -tdr] } [-fx] [-status { enable | disable } ] ``` *Where: {} mandatory, [] optional, | or* Expected result: all 24 possibilities of the above command: ``` add disk -pool <name> -diskid <diskid> add disk -pool <name> -diskid <diskid> -capacity_saving add disk -pool <name> -diskid <diskid> -capacity_saving -status enable add disk -pool <name> -diskid <diskid> -capacity_saving -status disable add disk -pool <name> -diskid <diskid> -status enable add disk -pool <name> -diskid <diskid> -status disable add disk -pool <name> -diskid auto add disk -pool <name> -diskid auto -capacity_saving add disk -pool <name> -diskid auto -capacity_saving -status enable add disk -pool <name> -diskid auto -capacity_saving -status disable add disk -pool <name> -diskid auto -status enable add disk -pool <name> -diskid auto -status disable add disk -pool <name> -diskid auto -fx add disk -pool <name> -diskid auto -fx -capacity_saving add disk -pool <name> -diskid auto -fx -capacity_saving -status enable add disk -pool <name> -diskid auto -fx -capacity_saving -status disable add disk -pool <name> -diskid auto -fx -status enable add disk -pool <name> -diskid auto -fx -status disable add disk -pool <name> -diskid auto -tdr add disk -pool <name> -diskid auto -tdr -capacity_saving add disk -pool <name> -diskid auto -tdr -capacity_saving -status enable add disk -pool <name> -diskid auto -tdr -capacity_saving -status disable add disk -pool <name> -diskid auto -tdr -status enable add disk -pool <name> -diskid auto -tdr -status disable ``` I've tried `import itertools` + `product()`, but it only works for less complex commands like `{ -diskid <diskid> | -diskid auto }`, so it only works when there are no parentheses nested inside other
parentheses, like the command below with its output: ``` # add disk -pool <name> { -diskid <diskid> | -diskid auto } [-fx] command = [ ['add'], ['disk'], ['-pool <name>'], ['-diskid <diskid>', '-diskid auto'], ['', '-fx']] print(list(itertools.product(*command))) print(len(list(itertools.product(*command)))) ``` Output: ``` [('add', 'disk', '-pool <name>', '-diskid <diskid>', ''), ('add', 'disk', '-pool <name>', '-diskid <diskid>', '-fx'), ('add', 'disk', '-pool <name>', '-diskid auto', ''), ('add', 'disk', '-pool <name>', '-diskid auto', '-fx')] 4 ``` How can I get the expected result? :c
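One way to handle groups nested inside other groups is to make the expansion recursive. A hedged sketch (the helper names `alternatives`/`expand` and the nested-list layout are illustrative, not from the original question):

```python
from itertools import product

# Represent each group as a list of alternatives. An alternative is either
# a plain string, or a list of sub-groups that is expanded recursively,
# so an alternative can carry its own optional sub-groups.

def alternatives(group):
    out = []
    for alt in group:
        if isinstance(alt, str):
            out.append(alt)
        else:  # a sequence of sub-groups: expand it recursively
            out.extend(expand(alt))
    return out

def expand(groups):
    lines = []
    for combo in product(*(alternatives(g) for g in groups)):
        # empty strings model the "absent" branch of an optional group
        lines.append(" ".join(part for part in combo if part))
    return lines

# add disk -pool <name> { -diskid <diskid> | -diskid auto [-fx | -tdr] }
#                       [-status { enable | disable }]
command = [
    ["add"], ["disk"], ["-pool <name>"],
    ["-diskid <diskid>",
     [["-diskid auto"], ["", "-fx", "-tdr"]]],  # alternative with a nested optional group
    ["", "-status enable", "-status disable"],
]
for line in expand(command):
    print(line)
```

This prints 12 lines, including the nested-group combinations such as `add disk -pool <name> -diskid auto -fx -status enable`; parsing the help text into this nested-list shape is a separate step.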
2021/06/10
[ "https://Stackoverflow.com/questions/67915722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11387742/" ]
The question is pretty lacking on what exactly wants to be retrieved from Kubernetes but I think I can provide a good baseline. When you use Kubernetes, you are most probably using `kubectl` to interact with `kubeapi-server`. Some of the commands you can use to retrieve the information from the cluster: * `$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME` * `$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME` --- ### Example: Let's assume that you have a `Service` of type `LoadBalancer` (I've redacted some output to be more readable): * `$ kubectl get service nginx -o yaml` ``` apiVersion: v1 kind: Service metadata: name: nginx namespace: default spec: clusterIP: 10.2.151.123 externalTrafficPolicy: Cluster ports: - nodePort: 30531 port: 80 protocol: TCP targetPort: 80 selector: app: nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: A.B.C.D ``` Getting a `nodePort` from this output could be done like this: * `kubectl get svc nginx -o jsonpath='{.spec.ports[].nodePort}'` ```sh 30531 ``` Getting a `loadBalancer IP` from this output could be done like this: * `kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"` ``` A.B.C.D ``` You can also use `kubectl` with `custom-columns`: * `kubectl get service -o=custom-columns=NAME:metadata.name,IP:.spec.clusterIP` ```sh NAME IP kubernetes 10.2.0.1 nginx 10.2.151.123 ``` --- There are a lot of possible ways to retrieve data with `kubectl` which you can read more by following the: * `kubectl get --help`: > > -o, --output='': Output format. One of: > json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... > See [custom columns](http://kubernetes.io/docs/user-guide/kubectl-overview/#custom-columns), [golang template](http://golang.org/pkg/text/template/#pkg-overview) and [jsonpath template](http://kubernetes.io/docs/user-guide/jsonpath). 
> > > * *[Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output)* --- Additional resources: * *[Kubernetes.io: Docs: Reference: Kubectl: Overview](https://kubernetes.io/docs/reference/kubectl/overview/)* * *[Github.com: Kubernetes client: Python](https://github.com/kubernetes-client/python)* - if you would like to retrieve this information with Python * *[Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value](https://stackoverflow.com/a/53669973/12257134)*
If you want to extract just single values, perhaps as part of scripts, then what you are searching for is `-ojsonpath`, as in this example: ``` kubectl get svc service-name -ojsonpath='{.spec.ports[0].port}' ``` which will extract just the value of the first port listed in the service **spec**. docs - <https://kubernetes.io/docs/reference/kubectl/jsonpath/> --- If you want to extract the whole definition of an object, such as a service, then what you are searching for is `-oyaml`, as in this example: ``` kubectl get svc service-name -oyaml ``` which will output the whole service definition, all in YAML format. --- If you want a more user-friendly description of a resource, such as a service, then you are searching for the describe command, as in this example: ``` kubectl describe svc service-name ``` --- docs - <https://kubernetes.io/docs/reference/kubectl/overview/#output-options>
35,934,735
I'd like to bind a class method to the object instance so that when the method is invoked as a callback it can still access the object instance. I am using an event emitter to generate and fire events. This is my code: ``` #!/usr/bin/env python3 from pyee import EventEmitter class Component(object): _emiter = EventEmitter() def emit(self, event_type, event): Component._emiter.emit(event_type, event) def listen_on(event): def listen_on_decorator(func): print("set event") Component._emiter.on(event, func) def method_wrapper(*args, **kwargs): return func(*args, **kwargs) return method_wrapper return listen_on_decorator class TestComponent(Component): @listen_on('test') def on_test(self, event): print("self is " + str(self)) print("FF" + str(event)) if __name__ == '__main__': t = TestComponent() t.emit('test', { 'a': 'dfdsf' }) ``` If you run this code, an error is thrown: ``` File "component.py", line 29, in <module> [0/1889] t.emit('test', { 'a': 'dfdsf' }) File "component.py", line 8, in emit Component._emiter.emit('test', event) File "/Users/giuseppe/.virtualenvs/Forex/lib/python3.4/site-packages/pyee/__init__.py", line 117, in emit f(*args, **kwargs) File "component.py", line 14, in method_wrapper return func(*args, **kwargs) TypeError: on_test() missing 1 required positional argument: 'event' ``` This is caused by the missing `self` when the method `on_test` is called.
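A hedged sketch of one way around this: have the decorator only *mark* the method, and register the bound method per instance in `__init__`, so the emitter always receives a callable that already carries `self`. A minimal stand-in emitter is used here instead of pyee; all helper names are illustrative.

```python
class EventEmitter:
    """Minimal stand-in for pyee.EventEmitter, just for this sketch."""
    def __init__(self):
        self._handlers = {}

    def on(self, event, func):
        self._handlers.setdefault(event, []).append(func)

    def emit(self, event, *args):
        for func in self._handlers.get(event, []):
            func(*args)


def listen_on(event):
    def decorator(func):
        func._listen_event = event  # only mark the function; bind later
        return func
    return decorator


class Component:
    _emitter = EventEmitter()

    def __init__(self):
        # Register *bound* methods, so `self` is supplied automatically.
        for name in dir(type(self)):
            attr = getattr(type(self), name)
            event = getattr(attr, "_listen_event", None)
            if event is not None:
                Component._emitter.on(event, getattr(self, name))

    def emit(self, event_type, event):
        Component._emitter.emit(event_type, event)


class TestComponent(Component):
    @listen_on("test")
    def on_test(self, event):
        print("self is " + str(self))
        print("FF" + str(event))


t = TestComponent()
t.emit("test", {"a": "dfdsf"})
```

Note that each instance registers its own handler at construction time, so creating several instances makes the callback fire once per instance; with the real pyee, the same `__init__`-time `on(event, getattr(self, name))` registration should apply.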
2016/03/11
[ "https://Stackoverflow.com/questions/35934735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1022525/" ]
Get the start and end of the day: ``` $date = date('Y-m-d'); $startDate = new \DateTime($date); $endDate = new \DateTime($date); $endDate->modify("+1 day -1 second"); echo $startDate->format('Y-m-d H:i:s'); return dd($endDate); ```
change your output to: ``` echo $StartDate->format('Y-m-d H:i:s'); ``` Here's a list of all the formatting characters that can be used to customize your output [Link](http://www.w3schools.com/php/func_date_date.asp)
63,345,326
I am new to OPC-UA and Eclipse Milo and I am trying to construct a client that can connect to the OPC-UA server of a machine we have just acquired. I have been able to set up a simple OPC-UA server on my laptop by using this python tutorial series: <https://www.youtube.com/watch?v=NbKeBfK3pfk>. Additionally, I have been able to use the Eclipse Milo examples to run the subscription example successfully to read some values from this server. However, I have been having difficulty connecting to the OPC-UA server of the machine we have just received. I have successfully connected to this server using the UaExpert client, but we want to build our own client using Eclipse Milo. I can see that some warnings come up when using UaExpert to connect to the server which appear to give clues about the issue but I have too little experience in server-client communications/OPC-UA and would appreciate some help. I will explain what happens when I use the UaExpert client as I have been using this to try and diagnose what is going on. I notice that when I first launch UaExpert I get the following errors which could be relevant: ``` Discovery FindServersOnNetwork on opc.tcp://localhost:4840 failed (BadTimeout), falling back to FindServers Discovery FindServers on opc.tpc://localhost:4840 failed (BadTimeout) Discovery GetEndpoints on opc.tcp://localhost:4840 failed ``` I am really new to networking so not sure exactly what this means. I will outline the process I have followed when trying to get the SubscriptionExample of Eclipse Milo working with this machine's server. Firstly, I change the getEndpointUrl() method to the ip address of the device we are using: return "opc.tcp://11.23.1.1:4840". I can successfully ping the device using ping 11.23.1.1 from my laptop. When I try to run the SubscriptionExample with this address I get the following error: ``` [NonceUtilSecureRandom] INFO o.e.m.o.stack.core.util.NonceUtil - SecureRandom seeded in 0ms. 
18:36:23.879 [main] ERROR o.e.m.e.client.ClientExampleRunner - Error running client example: java.net.UnknownHostException: br-automation java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation at java.util.concurrent.CompletableFuture.reportGet(Unknown Source) at java.util.concurrent.CompletableFuture.get(Unknown Source) at org.eclipse.milo.examples.client.SubscriptionExample.run(SubscriptionExample.java:50) at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:120) at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42) Caused by: java.net.UnknownHostException: br-automation at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source) at java.net.InetAddress.getAddressesFromNameService(Unknown Source) at java.net.InetAddress.getAllByName0(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getByName(Unknown Source) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145) at java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46) at 
io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at java.lang.Thread.run(Unknown Source) 18:36:23.881 [ForkJoinPool.commonPool-worker-1] ERROR o.e.m.e.client.ClientExampleRunner - Error running example: java.net.UnknownHostException: br-automation java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation at java.util.concurrent.CompletableFuture.reportGet(Unknown Source) at java.util.concurrent.CompletableFuture.get(Unknown Source) at 
org.eclipse.milo.examples.client.SubscriptionExample.run(SubscriptionExample.java:50) at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:120) at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42) Caused by: java.net.UnknownHostException: br-automation at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source) at java.net.InetAddress.getAddressesFromNameService(Unknown Source) at java.net.InetAddress.getAllByName0(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getByName(Unknown Source) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145) at java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at 
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at java.lang.Thread.run(Unknown Source) ``` When using UaExpert, "opc.tcp://11.23.1.1:4840" is the address of the server that I input when adding a new server when using Custom Discovery. When I enter this, a device appears as a dropdown of this server called B&R Embedded OPC-UA Server as the OPC-UA server is hosted on a B&R device in the machine. When I select this device to connect to, I get the following message: *The hostname of the discovery URL used to call GetEndpoints (br-automation) was replaced by the hostname used to call FindServers (11.23.1.1). Do you also want to replace the hostnames of the EndpointURLs with this hostname?* I have to accept this message for the server to be found, but I am confused exactly what is going on. 
I assume there is a difference in the endpoint used to find the server and the endpoint used for something else? I have found the resources online very difficult to understand. In the UaExpert logs there are three lines of logs in a row which report "Adding Url: opc.tcp://br-automation:4840". It also then reports the endpoint: "opc.tcp://br-automation:4840", the application Uri and the security policy (none). If I try to change the address in the client's getEndpointUrl method to opc.tcp://br-automation:4840 then I get the following error: ``` [main] INFO o.e.m.opcua.sdk.client.OpcUaClient - Eclipse Milo OPC UA Client SDK version: 0.4.3-SNAPSHOT 18:37:46.035 [main] ERROR o.e.m.e.client.ClientExampleRunner - Error getting client: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation org.eclipse.milo.opcua.stack.core.UaException: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation at org.eclipse.milo.opcua.sdk.client.OpcUaClient.lambda$create$1(OpcUaClient.java:204) at java.util.Optional.orElseGet(Unknown Source) at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:204) at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:201) at org.eclipse.milo.examples.client.ClientExampleRunner.createClient(ClientExampleRunner.java:73) at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:94) at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42) Caused by: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation at java.util.concurrent.CompletableFuture.reportGet(Unknown Source) at java.util.concurrent.CompletableFuture.get(Unknown Source) at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:180) ... 
4 common frames omitted Caused by: java.net.UnknownHostException: br-automation at java.net.InetAddress.getAllByName0(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getAllByName(Unknown Source) at java.net.InetAddress.getByName(Unknown Source) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145) at java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at 
io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at java.lang.Thread.run(Unknown Source) ``` I don't know if this is enough information to diagnose the problem, but I would appreciate any help on how I can get the Eclipse Milo server to perform the same process and connect to the machine's server.
2020/08/10
[ "https://Stackoverflow.com/questions/63345326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8381207/" ]
I see a couple things to try. First, make sure to set your custom js file to have 'slick-js' as a dependency, so it loads *after* slick slider does. Also, jQuery is already part of WordPress, so you **do not** need to enqueue it again; however, it should be a dependency for both your custom script and slick:

```
wp_enqueue_script( 'main', get_stylesheet_directory_uri() . '/js/custom.js', array( 'slick-js', 'jquery' ), NULL, true );
```

Second, I'm not sure what val-slider is, but it could be conflicting with slick slider. I suggest only using one JavaScript slider for your theme. Slick is very powerful and customizable, so it's a good choice.

Third, slick slider typically also has a theme-styles.css file that you should include. This pretties up the slider and puts the arrows/dots in the right place.

Fourth, I'm not sure what your HTML looks like, but make sure the div with class `.slideshow` is the immediate parent of your slides (typically a for or foreach loop). If there is another surrounding div in there, slick will interpret that as one slide. Here's an example:

```
<div class="slideshow">
    <?php foreach ($slides as $slide) {
        echo '<div class="slide">'; // this class name is unimportant
        // slide content here
        echo '</div>';
    }; ?>
</div>
```

Fifth, not sure if this is a copy/paste error, but you're missing the closing `});` in your JavaScript.

Last thing: this won't break slick slider, but it could cause some weird things to happen. You have `slidesToScroll: 10` but are only showing one slide (`slidesToShow: 1`). I think it's a good practice to make these numbers the same.
Thank you very much, now it finally works. One other thing I was not aware of was that I had to replace the $ with jQuery, so my custom.js looks like this:

```
jQuery('.slider').slick({
    arrows: false,
    slidesToShow: 1,
    slidesToScroll: 1,
    focusOnSelect: false,
});
```

Now it is finally running.
43,037,588
I have a CSV file in the same directory as my Python script, and I would like to take that data and turn it into a list that I can use later. I would prefer to use Python's CSV module. After reading the module's documentation and questions regarding it, I have still not found any help.

### Code

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import csv

inputfile = 'inputfile.csv'
inputm = []

reader = csv.reader(inputfile)
for row in reader:
    inputm.append(row)
```

### inputfile.csv

```
point1,point2,point3,point4
0,1,4,1
0,1,1,1
0,1,4,1
0,1,0,0
0,1,2,1
0,0,0,0
0,0,0,0
0,1,3,1
0,1,4,1
```

It only returns the string of the filename I provide it: `[['i'], ['n'], ['p'], ['u'], ['t'], ['f'], ['i'], ['l'], ['e']]`. I would like it to return each row of the CSV file as a sub-list instead of each letter of the filename as a sub-list.
2017/03/27
[ "https://Stackoverflow.com/questions/43037588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6417530/" ]
You need to open the file and pass the file object to `csv.reader`; passing the filename string makes the reader iterate over the characters of the name. Also leave the delimiter at its default (comma) for a comma-separated file. That is:

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import csv

inputfile = 'inputfile.csv'
inputm = []

with open(inputfile, newline='') as f:
    reader = csv.reader(f)
    for row in reader:
        inputm.append(row)
```

Output:

```
[['point1', 'point2', 'point3', 'point4'], ['0', '1', '4', '1'], ['0', '1', '1', '1'], ['0', '1', '4', '1'], ['0', '1', '0', '0'], ['0', '1', '2', '1'], ['0', '0', '0', '0'], ['0', '0', '0', '0'], ['0', '1', '3', '1'], ['0', '1', '4', '1']]
```
You actually need to `open()` the file:

```
inputfile = open('inputfile.csv')
```

You may want to look at the `with` statement:

```
with open('inputfile.csv') as inputfile:
    reader = csv.reader(inputfile)
    inputm = list(reader)
```
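As an aside, the requirement that `csv.reader` receive a file-like object rather than a string can be demonstrated without touching the disk; this is an illustrative sketch using `io.StringIO` as a stand-in for the opened file:

```python
import csv
import io

# csv.reader needs a file-like object; a bare string is iterated
# character by character, which is why passing the filename fails.
data = "point1,point2,point3,point4\n0,1,4,1\n0,1,1,1\n"

rows = list(csv.reader(io.StringIO(data)))
print(rows[0])  # ['point1', 'point2', 'point3', 'point4']
print(rows[1])  # ['0', '1', '4', '1']
```

The same `list(reader)` idiom shown above applies unchanged once you have a real file object from `open()`.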
11,306,641
Here on SO people sometimes say something like "you cannot parse X with regular expressions, because X is not a regular language". From my understanding, however, modern regular expression engines can match more than just regular languages in [Chomsky's sense](http://en.wikipedia.org/wiki/Chomsky_hierarchy). My question: given a regular expression engine that supports

* backreferences
* lookaround assertions of unlimited width
* recursion, like `(?R)`

what kind of languages can it parse? Can it parse any context-free language, and if not, what would be a counterexample? (To be precise, by "parse" I mean "build a single regular expression that would accept all strings generated by the grammar X and reject all other strings".)

Add.: I'm particularly interested to see an example of a context-free language that modern regex engines (Perl, .NET, the Python regex module) would be unable to parse.
2012/07/03
[ "https://Stackoverflow.com/questions/11306641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/989121/" ]
Modern regex engines can certainly parse a bigger set of languages than the set of regular languages. That said, none of the four classic Chomsky classes is exactly recognized by regexes. All regular languages are clearly recognized by regexes. There are some classic context-free languages that cannot be recognized by regexes, such as the balanced-parenthesis language `a^n b^n`, unless backreferences with counting are available. However, a regex can parse the language `ww`, which is context-sensitive. Actually, regular expressions in formal language theory are only loosely related to regexes. Matching regexes with unlimited backreferences is NP-complete in the most general case, so all pattern-matching algorithms for powerful enough regexes are exponential, at least in the general case; however, most of the time, for most input, they are quite fast. It is known that matching context-free languages can be done in somewhat better than `n^3` time, so there are some languages matchable by regexes that are not context-free (like `ww`), but not all context-free languages can be parsed by regexes. Type 0 languages are undecidable in general, so regexes don't get there. So, as a not very conclusive conclusion: regexes can parse a broad set of languages that includes all regular languages and some context-free and context-sensitive languages, but that set is not exactly equal to any of the classic ones. There are other categories of languages, and other taxonomies, where you could find a more precise answer, but no taxonomy that includes the context-free languages as a proper subset in a hierarchy of languages can provide a single class exactly recognized by regexes, because regexes only partially intersect the context-free languages, and neither is a proper subset of the other.
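The `ww` claim is easy to check in Python's standard `re` module, which supports backreferences (though not recursive patterns); a small sketch:

```python
import re

# { ww : w a non-empty string } is a classic context-sensitive language,
# yet a single backreference suffices to recognize it.
pattern = re.compile(r'^(.+)\1$')

print(bool(pattern.match('abcabc')))  # True: 'abc' repeated
print(bool(pattern.match('abcabd')))  # False: the two halves differ
print(bool(pattern.match('aa')))      # True
```

Note that the engine finds the split by backtracking, which is exactly where the exponential worst-case behaviour mentioned above comes from.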
You can read about regexes in *[An Introduction to Language And Linguistics by Ralph W. Fasold, Jeff Connor-Linton, p. 477](http://books.google.com/books?id=dlzthEZGkmsC&pg=PA477#v=onepage&q&f=false)*.

**Chomsky Hierarchy**: Type0 >= Type1 >= Type2 >= Type3

Computational linguistics mainly features Type 2 & 3 grammars:

* *Type 3 grammars*: include **regular expressions** and finite state automata (aka finite state machines); the focal point of the rest of this talk.
* *Type 2 grammars*: commonly used for natural language parsers; used to model syntactic structure in many linguistic theories (often supplemented by other mechanisms); will play a key role in the next talk, on parsing.

---

Most XML formats with inter-relational links, like [Microsoft DGML](http://en.wikipedia.org/wiki/DGML) (Directed Graph Markup Language), are examples where regexes are useless.

---

And these three answers may be useful:

1 - [does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions](https://stackoverflow.com/questions/2974210/does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions/2991587#2991587)

2 - [regular-expressions-arent](https://cstheory.stackexchange.com/questions/448/regular-expressions-arent)

3 - [where-do-most-regex-implementations-fall-on-the-complexity-scale](https://cstheory.stackexchange.com/questions/1047/where-do-most-regex-implementations-fall-on-the-complexity-scale)
11,306,641
Here on SO people sometimes say something like "you cannot parse X with regular expressions, because X is not a regular language". From my understanding, however, modern regular expression engines can match more than just regular languages in [Chomsky's sense](http://en.wikipedia.org/wiki/Chomsky_hierarchy). My question: given a regular expression engine that supports

* backreferences
* lookaround assertions of unlimited width
* recursion, like `(?R)`

what kind of languages can it parse? Can it parse any context-free language, and if not, what would be a counterexample? (To be precise, by "parse" I mean "build a single regular expression that would accept all strings generated by the grammar X and reject all other strings".)

Add.: I'm particularly interested to see an example of a context-free language that modern regex engines (Perl, .NET, the Python regex module) would be unable to parse.
2012/07/03
[ "https://Stackoverflow.com/questions/11306641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/989121/" ]
I recently wrote a rather long article on this topic: [The true power of regular expressions](http://nikic.github.com/2012/06/15/The-true-power-of-regular-expressions.html). To summarize:

* Regular expressions with support for recursive subpattern references can match *all* context-free languages (e.g. `a^n b^n`).
* Regular expressions with lookaround assertions and subpattern references can match at least *some* context-sensitive languages (e.g. `ww` and `a^n b^n c^n`).
* If the assertions have unlimited width (as you say), then *all* context-sensitive grammars can be matched. I don't know any regex flavor, though, that does not have fixed-width restrictions on lookbehind (and at the same time supports subpattern references).
* Regular expressions with backreferences are NP-complete, so any other NP problem can be solved using regular expressions (after applying a polynomial-time transformation).

Some examples:

* Matching the context-free language `{a^n b^n, n>0}`:

  ```
  /^(a(?1)?b)$/
  # or
  /^ (?: a (?= a* (\1?+ b) ) )+ \1 $/x
  ```

* Matching the context-sensitive language `{a^n b^n c^n, n>0}`:

  ```
  /^ (?=(a(?-1)?b)c) a+(b(?-1)?c) $/x
  # or
  /^ (?: a (?= a* (\1?+ b) b* (\2?+ c) ) )+ \1 \2 $/x
  ```
Modern regex engines can certainly parse a bigger set of languages than the set of regular languages. That said, none of the four classic Chomsky classes is exactly recognized by regexes. All regular languages are clearly recognized by regexes. There are some classic context-free languages that cannot be recognized by regexes, such as the balanced-parenthesis language `a^n b^n`, unless backreferences with counting are available. However, a regex can parse the language `ww`, which is context-sensitive. Actually, regular expressions in formal language theory are only loosely related to regexes. Matching regexes with unlimited backreferences is NP-complete in the most general case, so all pattern-matching algorithms for powerful enough regexes are exponential, at least in the general case; however, most of the time, for most input, they are quite fast. It is known that matching context-free languages can be done in somewhat better than `n^3` time, so there are some languages matchable by regexes that are not context-free (like `ww`), but not all context-free languages can be parsed by regexes. Type 0 languages are undecidable in general, so regexes don't get there. So, as a not very conclusive conclusion: regexes can parse a broad set of languages that includes all regular languages and some context-free and context-sensitive languages, but that set is not exactly equal to any of the classic ones. There are other categories of languages, and other taxonomies, where you could find a more precise answer, but no taxonomy that includes the context-free languages as a proper subset in a hierarchy of languages can provide a single class exactly recognized by regexes, because regexes only partially intersect the context-free languages, and neither is a proper subset of the other.
11,306,641
Here on SO people sometimes say something like "you cannot parse X with regular expressions, because X is not a regular language". From my understanding, however, modern regular expression engines can match more than just regular languages in [Chomsky's sense](http://en.wikipedia.org/wiki/Chomsky_hierarchy). My question: given a regular expression engine that supports

* backreferences
* lookaround assertions of unlimited width
* recursion, like `(?R)`

what kind of languages can it parse? Can it parse any context-free language, and if not, what would be a counterexample? (To be precise, by "parse" I mean "build a single regular expression that would accept all strings generated by the grammar X and reject all other strings".)

Add.: I'm particularly interested to see an example of a context-free language that modern regex engines (Perl, .NET, the Python regex module) would be unable to parse.
2012/07/03
[ "https://Stackoverflow.com/questions/11306641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/989121/" ]
I recently wrote a rather long article on this topic: [The true power of regular expressions](http://nikic.github.com/2012/06/15/The-true-power-of-regular-expressions.html). To summarize:

* Regular expressions with support for recursive subpattern references can match *all* context-free languages (e.g. `a^n b^n`).
* Regular expressions with lookaround assertions and subpattern references can match at least *some* context-sensitive languages (e.g. `ww` and `a^n b^n c^n`).
* If the assertions have unlimited width (as you say), then *all* context-sensitive grammars can be matched. I don't know any regex flavor, though, that does not have fixed-width restrictions on lookbehind (and at the same time supports subpattern references).
* Regular expressions with backreferences are NP-complete, so any other NP problem can be solved using regular expressions (after applying a polynomial-time transformation).

Some examples:

* Matching the context-free language `{a^n b^n, n>0}`:

  ```
  /^(a(?1)?b)$/
  # or
  /^ (?: a (?= a* (\1?+ b) ) )+ \1 $/x
  ```

* Matching the context-sensitive language `{a^n b^n c^n, n>0}`:

  ```
  /^ (?=(a(?-1)?b)c) a+(b(?-1)?c) $/x
  # or
  /^ (?: a (?= a* (\1?+ b) b* (\2?+ c) ) )+ \1 \2 $/x
  ```
You can read about regexes in *[An Introduction to Language And Linguistics by Ralph W. Fasold, Jeff Connor-Linton, p. 477](http://books.google.com/books?id=dlzthEZGkmsC&pg=PA477#v=onepage&q&f=false)*.

**Chomsky Hierarchy**: Type0 >= Type1 >= Type2 >= Type3

Computational linguistics mainly features Type 2 & 3 grammars:

* *Type 3 grammars*: include **regular expressions** and finite state automata (aka finite state machines); the focal point of the rest of this talk.
* *Type 2 grammars*: commonly used for natural language parsers; used to model syntactic structure in many linguistic theories (often supplemented by other mechanisms); will play a key role in the next talk, on parsing.

---

Most XML formats with inter-relational links, like [Microsoft DGML](http://en.wikipedia.org/wiki/DGML) (Directed Graph Markup Language), are examples where regexes are useless.

---

And these three answers may be useful:

1 - [does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions](https://stackoverflow.com/questions/2974210/does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions/2991587#2991587)

2 - [regular-expressions-arent](https://cstheory.stackexchange.com/questions/448/regular-expressions-arent)

3 - [where-do-most-regex-implementations-fall-on-the-complexity-scale](https://cstheory.stackexchange.com/questions/1047/where-do-most-regex-implementations-fall-on-the-complexity-scale)
34,722,459
Is there a way to generate a file on HDFS directly? I want to avoid generating a local file and then copying it to HDFS with a command line call like `hdfs dfs -put - "file_name.csv"`. Or is there any Python library for this?
2016/01/11
[ "https://Stackoverflow.com/questions/34722459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5773478/" ]
Have you tried with [HdfsCli](http://hdfscli.readthedocs.org/en/latest/quickstart.html)? To quote the paragraph [Reading and Writing files](http://hdfscli.readthedocs.org/en/latest/quickstart.html#reading-and-writing-files):

```
# Loading a file in memory.
with client.read('features') as reader:
    features = reader.read()

# Directly deserializing a JSON object.
with client.read('model.json', encoding='utf-8') as reader:
    from json import load
    model = load(reader)
```
The write method is extremely slow when I use hdfscli. Is there any way to speed it up?

```
with client.write(conf.hdfs_location + '/' + conf.filename, encoding='utf-8', buffersize=10000000) as f:
    writer = csv.writer(f, delimiter=conf.separator)
    for i in tqdm(range(10000000000)):
        row = [column.get_value() for column in conf.columns]
        writer.writerow(row)
```

Thanks a lot.
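One pattern that often helps with slow row-at-a-time writers is to serialize a large batch of rows into an in-memory buffer and hand the underlying file object one big write per batch instead of one tiny write per row. This is a general sketch, independent of hdfscli; `serialize_batch` is an illustrative helper, not part of any library:

```python
import csv
import io

def serialize_batch(rows, separator=','):
    """Serialize a batch of rows to a single CSV string in memory,
    so the underlying writer sees one large write per batch."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=separator)
    writer.writerows(rows)
    return buf.getvalue()

# Usage idea: inside `with client.write(...) as f`, accumulate rows into
# batches (say 100k rows) and call f.write(serialize_batch(batch)).
chunk = serialize_batch([[1, 2], [3, 4]])
print(repr(chunk))  # '1,2\r\n3,4\r\n' (csv.writer defaults to \r\n line endings)
```

Whether this helps in practice depends on where the time is actually going (serialization vs. network round-trips), so it is worth profiling first.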
34,722,459
Is there a way to generate a file on HDFS directly? I want to avoid generating a local file and then copying it to HDFS with a command line call like `hdfs dfs -put - "file_name.csv"`. Or is there any Python library for this?
2016/01/11
[ "https://Stackoverflow.com/questions/34722459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5773478/" ]
Have you tried with [HdfsCli](http://hdfscli.readthedocs.org/en/latest/quickstart.html)? To quote the paragraph [Reading and Writing files](http://hdfscli.readthedocs.org/en/latest/quickstart.html#reading-and-writing-files):

```
# Loading a file in memory.
with client.read('features') as reader:
    features = reader.read()

# Directly deserializing a JSON object.
with client.read('model.json', encoding='utf-8') as reader:
    from json import load
    model = load(reader)
```
`hdfs dfs -put` does not require you to create a local file first. There is also no need to create a zero-byte file on HDFS (`touchz`) and append to it (`appendToFile`). You can directly write a file on HDFS as:

```
hadoop fs -put - /user/myuser/testfile
```

Hit enter. On the command prompt, enter the text you want to put in the file. Once you are finished, press `Ctrl+D`.
34,722,459
Is there a way to generate a file on HDFS directly? I want to avoid generating a local file and then copying it to HDFS with a command line call like `hdfs dfs -put - "file_name.csv"`. Or is there any Python library for this?
2016/01/11
[ "https://Stackoverflow.com/questions/34722459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5773478/" ]
Have you tried with [HdfsCli](http://hdfscli.readthedocs.org/en/latest/quickstart.html)? To quote the paragraph [Reading and Writing files](http://hdfscli.readthedocs.org/en/latest/quickstart.html#reading-and-writing-files):

```
# Loading a file in memory.
with client.read('features') as reader:
    features = reader.read()

# Directly deserializing a JSON object.
with client.read('model.json', encoding='utf-8') as reader:
    from json import load
    model = load(reader)
```
Two ways to write local files to HDFS using Python.

One way is using the `hdfs` Python package:

**Code snippet:**

```
from hdfs import InsecureClient

hdfsclient = InsecureClient('http://localhost:50070', user='madhuc')
hdfspath = "/user/madhuc/hdfswritedata/"
localpath = "/home/madhuc/sample.csv"
hdfsclient.upload(hdfspath, localpath)
```

Output location: **'/user/madhuc/hdfswritedata/sample.csv'**

The other way is the `subprocess` package, using PIPE:

**Code snippet:**

```
from subprocess import PIPE, Popen

# put file into hdfs
put = Popen(["hadoop", "fs", "-put", localpath, hdfspath], stdin=PIPE, bufsize=-1)
put.communicate()
print("File Saved Successfully")
```
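The Popen+PIPE pattern above can be exercised without a Hadoop cluster at all. Here is a hedged, self-contained sketch that streams bytes to a child process over stdin, using Python itself as the child in place of the `hadoop` CLI (the child program is illustrative; it just reports how many bytes it received):

```python
import subprocess
import sys

# Stream data to a child process over stdin -- the same shape as
# piping into `hadoop fs -put -`. The child counts the bytes it reads.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(len(sys.stdin.buffer.read()))"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = child.communicate(b"a,b,c\n1,2,3\n")
print(out.decode().strip())  # 12
```

Swapping the child command for `["hadoop", "fs", "-put", "-", hdfspath]` gives the HDFS version, assuming the `hadoop` CLI is on the PATH.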
63,170,922
Is there a way to **try to** decode a bytearray without raising an error if the encoding fails?

**EDIT**: The solution needn't use bytearray.decode(...). Any library (preferably standard) that does the job would be great.

**Note**: I don't want to ignore errors (which I could do using `bytearray.decode(errors='ignore')`). I also don't want an exception to be raised. Preferably, I would like the function to return None, for example.

```py
my_bytearray = bytearray('', encoding='utf-8')

# ...
# Read some stream of bytes into my_bytearray.
# ...

text = my_bytearray.decode()
```

If my_bytearray doesn't contain valid UTF-8 text, the last line will raise an error.

**Question**: Is there a way to perform the validation but without raising an error? (I realize that raising an error is considered "pythonic". Let's assume this is undesirable for some or other good reason.)

I don't want to use a try-except block because this code gets called thousands of times, and I don't want my IDE to stop every time this exception is raised (whereas I do want it to pause on other errors).
2020/07/30
[ "https://Stackoverflow.com/questions/63170922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4093278/" ]
You could use the [suppress](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) context manager to suppress the exception and have slightly prettier code than with try/except/pass:

```py
import contextlib

...

return_val = None
with contextlib.suppress(UnicodeDecodeError):
    return_val = my_bytearray.decode('utf-8')
```
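Wrapped in a small helper (the function name here is illustrative), the same idea returns `None` instead of raising, which is exactly what the question asks for:

```python
import contextlib

def decode_or_none(data, encoding='utf-8'):
    """Return the decoded string, or None if the bytes are not
    valid in the given encoding -- no exception escapes."""
    result = None
    with contextlib.suppress(UnicodeDecodeError):
        result = data.decode(encoding)
    return result

print(decode_or_none(b'hello'))         # hello
print(decode_or_none(b'\xff\xfe\xfd'))  # None (not valid UTF-8)
```

It works on both `bytes` and `bytearray`, since both have a `.decode()` method.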
The `chardet` module can be used to detect the encoding of a bytearray before calling `bytearray.decode(...)`.

**The Code:**

```py
import chardet

analysis = chardet.detect(my_bytearray)
```

The method `chardet.detect(...)` returns a dictionary with the following format:

```
{
    'confidence': 0.99,
    'encoding': 'ascii',
    'language': ''
}
```

One could check `analysis['encoding']` to confirm that `my_bytearray` is compatible with an expected set of text encodings before calling `my_bytearray.decode()`. One consideration of using this approach is that the detected encoding might be one of a number of equivalent encodings. In this case, for instance, the analysis indicates that the encoding is ASCII, whereas it could equivalently be UTF-8. (Credit to @simon, who pointed this out on StackOverflow [here](https://stackoverflow.com/a/49480024/4093278).)
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file:

```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```

Dependencies as follows:

```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```
I have just used the Maven Google repository in build.gradle (project):

```
// Top-level build file where you can add configuration options common to all sub-projects/modules.

buildscript {
    repositories {
        jcenter()
        google()
        maven { url 'https://maven.google.com' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
        maven { url 'https://maven.google.com' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
```
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
Update your Android Studio to the latest version.
Maybe it is because of your minimum SDK version; you may need to migrate to AndroidX, as newer SDKs do not support the legacy support components.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
Follow these steps:

1. Update the version of the build tools and the dependencies in build.gradle (Module):

```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```

and dependencies:

```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```

2. Use the Maven Google repository instead of google() in build.gradle (project):

```
buildscript {
    repositories {
        jcenter()
        maven { url 'https://maven.google.com' }
    }
    .....

allprojects {
    repositories {
        maven { url 'https://maven.google.com' }
        jcenter()
    }
}
```

**Note:** Android Studio 4.1.1, Gradle 6.5

[![enter image description here](https://i.stack.imgur.com/UDGlB.jpg)](https://i.stack.imgur.com/UDGlB.jpg)

[![enter image description here](https://i.stack.imgur.com/85Cr1.jpg)](https://i.stack.imgur.com/85Cr1.jpg)
I have just used the Maven Google repository in build.gradle (project):

```
// Top-level build file where you can add configuration options common to all sub-projects/modules.

buildscript {
    repositories {
        jcenter()
        google()
        maven { url 'https://maven.google.com' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
        maven { url 'https://maven.google.com' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
```
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file:

```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```

Dependencies as follows:

```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```
Make sure you have `compile 'com.android.support:appcompat-v7:25.1.0'` in your `app/build.gradle`.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file:

```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```

Dependencies as follows:

```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```
Maybe it is because of your minimum SDK version; you may need to migrate to AndroidX, as newer SDKs do not support the legacy support components.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not as I want. The code is as follows:

```
import numpy as np

n_zero = input('Insert the amount of 0: ')
n_one = input('Insert the amount of 1: ')
n_two = input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')

data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```

The output is as follows:

```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```

I want the following output:

```
0031032030202110000
```

Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file:

```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```

Dependencies as follows:

```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```
Update your Android Studio to the latest version to resolve this.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not what I want. The code is as follows: ``` import numpy as np n_zero=input('Insert the amount of 0: ') n_one =input('Insert the amount of 1: ') n_two =input('Insert the amount of 2: ') n_three = input('Insert the amount of 3: ') data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three np.random.shuffle(data) print(data) ``` The output is as follows: ``` Insert the amount of 0: 10 Insert the amount of 1: 3 Insert the amount of 2: 3 Insert the amount of 3: 3 [0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0] ``` I want the following output: ``` 0031032030202110000 ``` Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
I have just used the Maven Google repository in `build.gradle` (project): ``` // Top-level build file where you can add configuration options common to all sub-projects/modules. buildscript { repositories { jcenter() google() maven { url 'https://maven.google.com' } } dependencies { classpath 'com.android.tools.build:gradle:4.0.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { jcenter() maven { url 'https://maven.google.com' } } } task clean(type: Delete) { delete rootProject.buildDir } ```
Make sure you have `compile 'com.android.support:appcompat-v7:25.1.0'` in your `app/build.gradle`.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not what I want. The code is as follows: ``` import numpy as np n_zero=input('Insert the amount of 0: ') n_one =input('Insert the amount of 1: ') n_two =input('Insert the amount of 2: ') n_three = input('Insert the amount of 3: ') data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three np.random.shuffle(data) print(data) ``` The output is as follows: ``` Insert the amount of 0: 10 Insert the amount of 1: 3 Insert the amount of 2: 3 Insert the amount of 3: 3 [0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0] ``` I want the following output: ``` 0031032030202110000 ``` Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
Follow these steps: 1. Update the version of the build tools and the dependencies in build.gradle (Module) ``` android { compileSdkVersion 24 buildToolsVersion "24.2.1" defaultConfig { minSdkVersion 19 targetSdkVersion 24 ... } ``` and the dependencies ``` compile 'com.android.support:appcompat-v7:24.2.1' compile 'com.android.support:support-v4:24.2.1' compile 'com.android.support:design:24.2.1' ``` 2. Use the Maven Google repository instead of google() in build.gradle (project) ``` buildscript { repositories { jcenter() maven { url 'https://maven.google.com' } } ..... allprojects { repositories { maven { url 'https://maven.google.com' } jcenter() } } ``` **Note:** Android Studio 4.1.1, Gradle 6.5 [![enter image description here](https://i.stack.imgur.com/UDGlB.jpg)](https://i.stack.imgur.com/UDGlB.jpg) [![enter image description here](https://i.stack.imgur.com/85Cr1.jpg)](https://i.stack.imgur.com/85Cr1.jpg)
Update your Android Studio to the latest version to resolve this.
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not what I want. The code is as follows: ``` import numpy as np n_zero=input('Insert the amount of 0: ') n_one =input('Insert the amount of 1: ') n_two =input('Insert the amount of 2: ') n_three = input('Insert the amount of 3: ') data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three np.random.shuffle(data) print(data) ``` The output is as follows: ``` Insert the amount of 0: 10 Insert the amount of 1: 3 Insert the amount of 2: 3 Insert the amount of 3: 3 [0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0] ``` I want the following output: ``` 0031032030202110000 ``` Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
I have just used the Maven Google repository in `build.gradle` (project): ``` // Top-level build file where you can add configuration options common to all sub-projects/modules. buildscript { repositories { jcenter() google() maven { url 'https://maven.google.com' } } dependencies { classpath 'com.android.tools.build:gradle:4.0.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { jcenter() maven { url 'https://maven.google.com' } } } task clean(type: Delete) { delete rootProject.buildDir } ```
Maybe it is because of your minimum SDK version, and you need to migrate to AndroidX, as newer SDKs do not support the legacy support components.
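If the AndroidX route is the fix, here is a minimal sketch of the migration flags — an assumption on my part, presuming Android Gradle plugin 3.2+ (Android Studio's Refactor > Migrate to AndroidX sets these for you):

```properties
# gradle.properties — opt in to AndroidX and rewrite legacy
# support-library dependencies (Jetifier) at build time
android.useAndroidX=true
android.enableJetifier=true
```

After that, references such as `android.support.v7.app.AppCompatActivity` become `androidx.appcompat.app.AppCompatActivity`.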
41,801,225
I'm very new to Python. I am writing code to generate an array of numbers, but the output is not what I want. The code is as follows: ``` import numpy as np n_zero=input('Insert the amount of 0: ') n_one =input('Insert the amount of 1: ') n_two =input('Insert the amount of 2: ') n_three = input('Insert the amount of 3: ') data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three np.random.shuffle(data) print(data) ``` The output is as follows: ``` Insert the amount of 0: 10 Insert the amount of 1: 3 Insert the amount of 2: 3 Insert the amount of 3: 3 [0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0] ``` I want the following output: ``` 0031032030202110000 ``` Thank you
2017/01/23
[ "https://Stackoverflow.com/questions/41801225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
Update your Android Studio to the latest version.
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file ``` android { compileSdkVersion 24 buildToolsVersion "24.2.1" defaultConfig { minSdkVersion 19 targetSdkVersion 24 ... } ``` Dependencies as follows: ``` compile 'com.android.support:appcompat-v7:24.2.1' compile 'com.android.support:support-v4:24.2.1' compile 'com.android.support:design:24.2.1' ```
70,771,156
I've converted my python script with tkinter module to a standalone executable file with PyInstaller but it doesn't work without image.png file in the same patch. How I can add this .png file to my app. And why .exe file have an enormous weight of ~350 Mb?
2022/01/19
[ "https://Stackoverflow.com/questions/70771156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17971397/" ]
I had the exact same situation with Tkinter and a single image needed in the GUI. I combined Aleksandr Tyshkevich's answer here and Jonathon Reinhart's answer from [Pyinstaller adding data files](https://stackoverflow.com/questions/41870727/pyinstaller-adding-data-files), as I need to send just the exe file to others, so it needs to work from any location. I have one Python file named gui.py. In gui.py I included the code: ``` import sys import os from PIL import Image def resource_path(relative_path): """ Get absolute path to resource, works for dev and for PyInstaller """ base_path = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__))) return os.path.join(base_path, relative_path) path = resource_path("logo.png") # When opening image logo = Image.open(path) ``` In the terminal I used: ``` pyinstaller --onefile --add-data "logo.png;." gui.py ``` I ran into a problem when I tried to use a colon in the line above; as I'm on Windows, I needed to use a semicolon instead.
It works (note that `sys` and `tkinter` must be imported for `sys._MEIPASS` and `tk.PhotoImage` to resolve): ```py import os import sys import tkinter as tk def resource_path(relative_path): try: base_path = sys._MEIPASS except Exception: base_path = os.path.abspath(".") return os.path.join(base_path, relative_path) path = resource_path("image.png") photo = tk.PhotoImage(file=path) ```
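As a quick, PyInstaller-free sanity check of the helper above: outside a frozen bundle `sys` has no `_MEIPASS` attribute, so the path falls back to the current directory (the Tkinter call is left out so this sketch runs anywhere):

```python
import os
import sys

def resource_path(relative_path):
    # In a PyInstaller onefile build, sys._MEIPASS points at the unpacked
    # temp folder; in a plain interpreter the attribute does not exist.
    base_path = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base_path, relative_path)

print(resource_path("image.png"))  # absolute path ending in image.png
```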
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
Okay, let's start simple. First you need to get the unique user\_id/dev\_id combinations ``` select distinct dev_id,user_id from reports ``` The result will be ``` dev_id user_id ------------------ 111 1 222 2 111 2 333 3 ``` After that you should get the number of different user\_ids per dev\_id ``` select dev_id,c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts ``` The result of that query will be ``` dev_id c ----------------- 111 1 222 0 333 0 ``` Now you should show the users which have such a dev\_id. For that, you should join that dev\_id list with the table from step 1 (which shows which user\_id, dev\_id pairs exist) ``` select distinct fixed_reports2.user_id,counts.c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts join (select distinct user_id,dev_id from reports) as fixed_reports2 on fixed_reports2.dev_id=counts.dev_id where counts.c>0 and counts.c is not null ``` The "distinct" here is needed to skip identical rows. The result of the internal query should be ``` dev_id c ----------------- 111 1 ``` and of the whole query ``` user_id c ------------------ 1 1 2 1 ``` If you are sure you also need the rows with c=0, then you need to do a "left join" of fixed\_reports2 and the large query; that way you will get all rows, and the rows with c=null will be the rows with 0 (this can be changed with a case/when statement).
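As a runnable check, this sketch loads a hypothetical `reports` table into in-memory SQLite (the schema and rows are assumptions, chosen so the distinct pairs match the walkthrough above) and runs the final query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows; the distinct (dev_id, user_id) pairs are
# (111,1), (222,2), (111,2), (333,3), as in the walkthrough.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

rows = conn.execute("""
    SELECT DISTINCT fixed_reports2.user_id, counts.c
    FROM (SELECT dev_id, COUNT(*) - 1 AS c
          FROM (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports
          GROUP BY dev_id) AS counts
    JOIN (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports2
      ON fixed_reports2.dev_id = counts.dev_id
    WHERE counts.c > 0 AND counts.c IS NOT NULL
    ORDER BY 1
""").fetchall()
print(rows)  # [(1, 1), (2, 1)] — users 1 and 2 each share one device
```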
I think the following SQL query should solve your problem: ``` SELECT t1.user_id, t1.dev_id, count(t2.user_id) as qu FROM (Select Distinct * from reports) t1 Left Join (Select Distinct * from reports) t2 on t1.user_id != t2.user_id and t2.dev_id = t1.dev_id group by t1.user_Id, t1.dev_id ``` [SQL Fiddle Link](http://sqlfiddle.com/#!9/86d89ff/9)
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
I think the following SQL query should solve your problem: ``` SELECT t1.user_id, t1.dev_id, count(t2.user_id) as qu FROM (Select Distinct * from reports) t1 Left Join (Select Distinct * from reports) t2 on t1.user_id != t2.user_id and t2.dev_id = t1.dev_id group by t1.user_Id, t1.dev_id ``` [SQL Fiddle Link](http://sqlfiddle.com/#!9/86d89ff/9)
Try ``` SELECT user_id, SUM(qu) AS qu FROM ( SELECT user_id, count(*)-1 AS qu FROM reports GROUP BY user_id, dev_id ) AS r GROUP BY user_id ``` No need to do a join if all the data you need is in one table. Edit: changed the group by to dev\_id instead of user\_id. Edit2: I think you need both dev\_id and user\_id in the group by clause. Edit3: Added a subquery to get the desired result. This might be a little cumbersome; perhaps someone has a way to improve this?
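To see concretely what this returns, here is a sketch running it against a hypothetical `reports` table in in-memory SQLite; note that the inner `COUNT(*)-1` counts duplicate rows per user/device pair, and the schema and rows below are assumptions for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows; user 1 has a duplicated report for device 111.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

rows = conn.execute("""
    SELECT user_id, SUM(qu) AS qu
    FROM (SELECT user_id, COUNT(*) - 1 AS qu
          FROM reports
          GROUP BY user_id, dev_id) AS r
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 1), (2, 0), (3, 0)] — the 1 comes from the duplicated row
```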
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
You can get results by doing: ``` select r.user_id, count(*) - 1 from reports r group by r.user_id; ``` Is this the calculation that you want?
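A quick sketch of what that computes on hypothetical data — it is simply rows per user minus one, duplicates included (the table and rows below are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows; user 1 has a duplicated report for device 111.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

rows = conn.execute("""
    SELECT r.user_id, COUNT(*) - 1
    FROM reports r
    GROUP BY r.user_id
    ORDER BY r.user_id
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 0)]
```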
I think the following SQL query should solve your problem: ``` SELECT t1.user_id, t1.dev_id, count(t2.user_id) as qu FROM (Select Distinct * from reports) t1 Left Join (Select Distinct * from reports) t2 on t1.user_id != t2.user_id and t2.dev_id = t1.dev_id group by t1.user_Id, t1.dev_id ``` [SQL Fiddle Link](http://sqlfiddle.com/#!9/86d89ff/9)
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
Your query is broken and would not run on many systems. The problem is that the group with `user_id` of 2 has two different `dev_id`s. If you run the "broken query" below you can see that the `min()` and `max()` are distinct but the subquery only sees one of those values which is randomly chosen. The last query is corrected by adding `dev_id` to the groupings which shows you where the "missing" row went in the counts. ``` SELECT -- broken query t1.user_id, min(t1.dev_id), max(t1.dev_id), (select distinct t1.dev_id from reports) as should_have_errored, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id; -- On SQL Server that query returns an error -- Msg 8120, Level 16, State 1, Line 7 -- Column 'reports.dev_id' is invalid in the select list because it is -- not contained in either an aggregate function or the GROUP BY clause. SELECT -- query that duplicates your original query t1.user_id, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = max(t1.dev_id) /* <-- see here */ ) AS qu FROM reports t1 GROUP BY t1.user_id; SELECT t1.user_id, t1.dev_id, -- fixed query (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id, t1.dev_id ``` <http://sqlfiddle.com/#!9/6576e3/20> Here are some queries that might be useful: Which `dev_id`s have multiple `user_id`s associated with them? ``` select dev_id from reports group by dev_id having count(distinct user_id) > 1 ``` Which other `user_id`s share a `dev_id` with this `user_id`? ``` select user_id from reports r1 where exists ( select 1 from reports r2 where r2.dev_id = r1.dev_id and r2.user_id <> ? ) ``` Or really that's just equivalent to an inner join which also makes it easy to list everybody at once. 
Note that each pair will be listed twice: ``` select r1.user_id, r1.dev_id, r2.user_id as common_user_id from reports r1 inner join reports r2 on r2.dev_id = r1.dev_id where r1.user_id <> r2.user_id order by r1.user_id, r1.dev_id, r2.user_id ``` And since you've got duplicate rows in your table you'd need to make it `select distinct` to get unique rows.
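To illustrate the double listing and the effect of `select distinct`, a sketch against hypothetical data in in-memory SQLite (the schema and rows are assumptions; user 1's duplicated row inflates the plain join even further):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows: users 1 and 2 share device 111; user 1's row is duplicated.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

join_sql = """
    SELECT {distinct} r1.user_id, r1.dev_id, r2.user_id AS common_user_id
    FROM reports r1
    INNER JOIN reports r2 ON r2.dev_id = r1.dev_id
    WHERE r1.user_id <> r2.user_id
    ORDER BY 1, 2, 3
"""
plain = conn.execute(join_sql.format(distinct="")).fetchall()
dedup = conn.execute(join_sql.format(distinct="DISTINCT")).fetchall()
print(len(plain), dedup)  # 4 rows plain; DISTINCT keeps one row per direction
```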
Try ``` SELECT user_id, SUM(qu) AS qu FROM ( SELECT user_id, count(*)-1 AS qu FROM reports GROUP BY user_id, dev_id ) AS r GROUP BY user_id ``` No need to do a join if all the data you need is in one table. Edit: changed the group by to dev\_id instead of user\_id. Edit2: I think you need both dev\_id and user\_id in the group by clause. Edit3: Added a subquery to get the desired result. This might be a little cumbersome; perhaps someone has a way to improve this?
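To see concretely what this returns, here is a sketch running it against a hypothetical `reports` table in in-memory SQLite; note that the inner `COUNT(*)-1` counts duplicate rows per user/device pair, and the schema and rows below are assumptions for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows; user 1 has a duplicated report for device 111.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

rows = conn.execute("""
    SELECT user_id, SUM(qu) AS qu
    FROM (SELECT user_id, COUNT(*) - 1 AS qu
          FROM reports
          GROUP BY user_id, dev_id) AS r
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 1), (2, 0), (3, 0)] — the 1 comes from the duplicated row
```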
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
Okay, let's start simple. First you need to get the unique user\_id/dev\_id combinations ``` select distinct dev_id,user_id from reports ``` The result will be ``` dev_id user_id ------------------ 111 1 222 2 111 2 333 3 ``` After that you should get the number of different user\_ids per dev\_id ``` select dev_id,c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts ``` The result of that query will be ``` dev_id c ----------------- 111 1 222 0 333 0 ``` Now you should show the users which have such a dev\_id. For that, you should join that dev\_id list with the table from step 1 (which shows which user\_id, dev\_id pairs exist) ``` select distinct fixed_reports2.user_id,counts.c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts join (select distinct user_id,dev_id from reports) as fixed_reports2 on fixed_reports2.dev_id=counts.dev_id where counts.c>0 and counts.c is not null ``` The "distinct" here is needed to skip identical rows. The result of the internal query should be ``` dev_id c ----------------- 111 1 ``` and of the whole query ``` user_id c ------------------ 1 1 2 1 ``` If you are sure you also need the rows with c=0, then you need to do a "left join" of fixed\_reports2 and the large query; that way you will get all rows, and the rows with c=null will be the rows with 0 (this can be changed with a case/when statement).
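As a runnable check, this sketch loads a hypothetical `reports` table into in-memory SQLite (the schema and rows are assumptions, chosen so the distinct pairs match the walkthrough above) and runs the final query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
# Hypothetical rows; the distinct (dev_id, user_id) pairs are
# (111,1), (222,2), (111,2), (333,3), as in the walkthrough.
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (1, 111), (2, 222), (2, 111), (3, 333)])

rows = conn.execute("""
    SELECT DISTINCT fixed_reports2.user_id, counts.c
    FROM (SELECT dev_id, COUNT(*) - 1 AS c
          FROM (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports
          GROUP BY dev_id) AS counts
    JOIN (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports2
      ON fixed_reports2.dev_id = counts.dev_id
    WHERE counts.c > 0 AND counts.c IS NOT NULL
    ORDER BY 1
""").fetchall()
print(rows)  # [(1, 1), (2, 1)] — users 1 and 2 each share one device
```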
Your query is broken and would not run on many systems. The problem is that the group with `user_id` of 2 has two different `dev_id`s. If you run the "broken query" below you can see that the `min()` and `max()` are distinct but the subquery only sees one of those values which is randomly chosen. The last query is corrected by adding `dev_id` to the groupings which shows you where the "missing" row went in the counts. ``` SELECT -- broken query t1.user_id, min(t1.dev_id), max(t1.dev_id), (select distinct t1.dev_id from reports) as should_have_errored, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id; -- On SQL Server that query returns an error -- Msg 8120, Level 16, State 1, Line 7 -- Column 'reports.dev_id' is invalid in the select list because it is -- not contained in either an aggregate function or the GROUP BY clause. SELECT -- query that duplicates your original query t1.user_id, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = max(t1.dev_id) /* <-- see here */ ) AS qu FROM reports t1 GROUP BY t1.user_id; SELECT t1.user_id, t1.dev_id, -- fixed query (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id, t1.dev_id ``` <http://sqlfiddle.com/#!9/6576e3/20> Here are some queries that might be useful: Which `dev_id`s have multiple `user_id`s associated with them? ``` select dev_id from reports group by dev_id having count(distinct user_id) > 1 ``` Which other `user_id`s share a `dev_id` with this `user_id`? ``` select user_id from reports r1 where exists ( select 1 from reports r2 where r2.dev_id = r1.dev_id and r2.user_id <> ? ) ``` Or really that's just equivalent to an inner join which also makes it easy to list everybody at once. 
Note that each pair will be listed twice: ``` select r1.user_id, r1.dev_id, r2.user_id as common_user_id from reports r1 inner join reports r2 on r2.dev_id = r1.dev_id where r1.user_id <> r2.user_id order by r1.user_id, r1.dev_id, r2.user_id ``` And since you've got duplicate rows in your table you'd need to make it `select distinct` to get unique rows.
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
Okay, let's start simple. First you need to get the unique user\_id/dev\_id combinations ``` select distinct dev_id,user_id from reports ``` The result will be ``` dev_id user_id ------------------ 111 1 222 2 111 2 333 3 ``` After that you should get the number of other user\_ids per dev\_id (the count minus one) ``` select dev_id,c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts ``` The result of that query will be ``` dev_id c ----------------- 111 1 222 0 333 0 ``` Now you should show the users which have such a dev\_id. For that, join this dev\_id list with the table from step 1 (which shows which user\_id/dev\_id pairs exist) ``` select distinct fixed_reports2.user_id,counts.c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts join (select distinct user_id,dev_id from reports) as fixed_reports2 on fixed_reports2.dev_id=counts.dev_id where counts.c>0 and counts.c is not null ``` The "distinct" here is needed to skip duplicate rows. The result of the inner query should be ``` dev_id c ----------------- 111 1 ``` and of the whole query ``` user_id c ------------------ 1 1 2 1 ``` If you are sure you also need the rows with c=0, do a "left join" of fixed\_reports2 with the large query; that way you will get all rows, and rows with c=null stand for 0 (which can be converted with a case/when statement)
``` SELECT user_id, (COUNT(user_id) -1) as qu FROM reports GROUP BY user_id ``` This would give the desired result in your case; however, you can improve it a lot more. Cheers.
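The step-by-step query above can be checked end-to-end with Python's stdlib `sqlite3` standing in for MySQL (a sketch; the sample rows come from the answer's result tables, and SQLite's dialect accepts the same query):

```python
import sqlite3

# In-memory stand-in for the `reports` table, including the duplicate row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
conn.executemany(
    "INSERT INTO reports VALUES (?, ?)",
    [(1, 111), (2, 222), (2, 111), (3, 333), (2, 111)],
)

# Final query from the answer: users whose dev_id is shared with another user.
rows = conn.execute("""
    SELECT DISTINCT fixed_reports2.user_id, counts.c
    FROM (
        SELECT dev_id, COUNT(*) - 1 AS c
        FROM (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports
        GROUP BY dev_id
    ) AS counts
    JOIN (SELECT DISTINCT user_id, dev_id FROM reports) AS fixed_reports2
        ON fixed_reports2.dev_id = counts.dev_id
    WHERE counts.c > 0 AND counts.c IS NOT NULL
""").fetchall()
print(sorted(rows))  # [(1, 1), (2, 1)], matching the answer's final table
```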
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
Okay, let's start simple. First you need to get the unique user\_id/dev\_id combinations ``` select distinct dev_id,user_id from reports ``` The result will be ``` dev_id user_id ------------------ 111 1 222 2 111 2 333 3 ``` After that you should get the number of other user\_ids per dev\_id (the count minus one) ``` select dev_id,c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts ``` The result of that query will be ``` dev_id c ----------------- 111 1 222 0 333 0 ``` Now you should show the users which have such a dev\_id. For that, join this dev\_id list with the table from step 1 (which shows which user\_id/dev\_id pairs exist) ``` select distinct fixed_reports2.user_id,counts.c from ( SELECT dev_id, count(*)-1 AS c FROM (select distinct user_id,dev_id from reports) as fixed_reports GROUP BY dev_id ) as counts join (select distinct user_id,dev_id from reports) as fixed_reports2 on fixed_reports2.dev_id=counts.dev_id where counts.c>0 and counts.c is not null ``` The "distinct" here is needed to skip duplicate rows. The result of the inner query should be ``` dev_id c ----------------- 111 1 ``` and of the whole query ``` user_id c ------------------ 1 1 2 1 ``` If you are sure you also need the rows with c=0, do a "left join" of fixed\_reports2 with the large query; that way you will get all rows, and rows with c=null stand for 0 (which can be converted with a case/when statement)
Try ``` SELECT user_id, SUM(qu) AS qu FROM ( SELECT user_id, count(*)-1 AS qu FROM reports GROUP BY user_id, dev_id ) AS r GROUP BY user_id ``` No need to do a join if all the data you need is in one table. Edit: changed the group by to dev\_id instead of user\_id. Edit2: I think you need both dev\_id and user\_id in the group by clause. Edit3: Added a subquery to get the desired result. This might be a little cumbersome; perhaps someone has a way to improve it?
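Run against the same sample data with sqlite3 as a stand-in (a sketch), the subquery above sums per-(user\_id, dev\_id) duplicate counts, which is worth checking against the intent of counting other users on a shared device:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (2, 222), (2, 111), (3, 333), (2, 111)])

# COUNT(*)-1 per (user_id, dev_id) counts duplicate report rows,
# not other users sharing the same dev_id.
result = conn.execute("""
    SELECT user_id, SUM(qu) AS qu
    FROM (
        SELECT user_id, COUNT(*) - 1 AS qu
        FROM reports
        GROUP BY user_id, dev_id
    ) AS r
    GROUP BY user_id
""").fetchall()
print(sorted(result))  # [(1, 0), (2, 1), (3, 0)]
```

With this data, only the duplicated (2, 111) row contributes, so the output differs from the shared-device counts produced by the step-by-step answer.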
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
``` SELECT user_id, (COUNT(user_id) -1) as qu FROM reports GROUP BY user_id ``` This would give the desired result in your case; however, you can improve it a lot more. Cheers.
Try ``` SELECT user_id, SUM(qu) AS qu FROM ( SELECT user_id, count(*)-1 AS qu FROM reports GROUP BY user_id, dev_id ) AS r GROUP BY user_id ``` No need to do a join if all the data you need is in one table. Edit: changed the group by to dev\_id instead of user\_id. Edit2: I think you need both dev\_id and user\_id in the group by clause. Edit3: Added a subquery to get the desired result. This might be a little cumbersome; perhaps someone has a way to improve it?
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
You can get results by doing: ``` select r.user_id, count(*) - 1 from reports r group by r.user_id; ``` Is this the calculation that you want?
Your query is broken and would not run on many systems. The problem is that the group with `user_id` of 2 has two different `dev_id`s. If you run the "broken query" below you can see that the `min()` and `max()` are distinct but the subquery only sees one of those values which is randomly chosen. The last query is corrected by adding `dev_id` to the groupings which shows you where the "missing" row went in the counts. ``` SELECT -- broken query t1.user_id, min(t1.dev_id), max(t1.dev_id), (select distinct t1.dev_id from reports) as should_have_errored, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id; -- On SQL Server that query returns an error -- Msg 8120, Level 16, State 1, Line 7 -- Column 'reports.dev_id' is invalid in the select list because it is -- not contained in either an aggregate function or the GROUP BY clause. SELECT -- query that duplicates your original query t1.user_id, (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = max(t1.dev_id) /* <-- see here */ ) AS qu FROM reports t1 GROUP BY t1.user_id; SELECT t1.user_id, t1.dev_id, -- fixed query (SELECT Count(DISTINCT t2.dev_id) FROM reports t2 WHERE t2.user_id != t1.user_id AND t2.dev_id = t1.dev_id ) AS qu FROM reports t1 GROUP BY t1.user_id, t1.dev_id ``` <http://sqlfiddle.com/#!9/6576e3/20> Here are some queries that might be useful: Which `dev_id`s have multiple `user_id`s associated with them? ``` select dev_id from reports group by dev_id having count(distinct user_id) > 1 ``` Which other `user_id`s share a `dev_id` with this `user_id`? ``` select user_id from reports r1 where exists ( select 1 from reports r2 where r2.dev_id = r1.dev_id and r2.user_id <> ? ) ``` Or really that's just equivalent to an inner join which also makes it easy to list everybody at once. 
Note that each pair will be listed twice: ``` select r1.user_id, r1.dev_id, r2.user_id as common_user_id from reports r1 inner join reports r2 on r2.dev_id = r1.dev_id where r1.user_id <> r2.user_id order by r1.user_id, r1.dev_id, r2.user_id ``` And since you've got duplicate rows in your table you'd need to make it `select distinct` to get unique rows.
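The self-join can likewise be sketched with sqlite3 and the same sample rows; `select distinct` collapses the duplicated report rows, and each sharing pair indeed shows up once per direction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (user_id INTEGER, dev_id INTEGER)")
conn.executemany("INSERT INTO reports VALUES (?, ?)",
                 [(1, 111), (2, 222), (2, 111), (3, 333), (2, 111)])

# Self-join on dev_id; DISTINCT is needed because of the duplicate row.
pairs = conn.execute("""
    SELECT DISTINCT r1.user_id, r1.dev_id, r2.user_id AS common_user_id
    FROM reports r1
    INNER JOIN reports r2 ON r2.dev_id = r1.dev_id
    WHERE r1.user_id <> r2.user_id
    ORDER BY r1.user_id, r1.dev_id, r2.user_id
""").fetchall()
print(pairs)  # [(1, 111, 2), (2, 111, 1)] -- users 1 and 2 share device 111
```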
34,712,248
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, save result to another file on my hard disk. Having trouble on step 1. Other steps were working until some strange SSL error. I am using python 2.7 ``` import urllib testsite = urllib.URLopener() testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") ``` This is what happens: ``` Traceback (most recent call last): File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module> testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html") File "C:\Python27\lib\urllib.py", line 237, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 205, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 435, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 940, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 803, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 755, in send self.connect() File "C:\Python27\lib\httplib.py", line 1156, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "C:\Python27\lib\ssl.py", line 342, in wrap_socket ciphers=ciphers) File "C:\Python27\lib\ssl.py", line 121, in __init__ self.do_handshake() File "C:\Python27\lib\ssl.py", line 281, in do_handshake self._sslobj.do_handshake() IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error ``` Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. 
>\_< Generates the same error: ``` import urllib2 import os import datetime import time from urllib import FancyURLopener from random import choice today = datetime.datetime.today() today = today.strftime('%Y.%m.%d') user_agents = [ 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11', 'Opera/9.25 (Windows NT 5.1; U; en)', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)', 'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12'] class MyOpener(FancyURLopener, object): version = choice(user_agents) myopener = MyOpener() page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html') ``` Is anyone out there able to do this successfully?
2016/01/10
[ "https://Stackoverflow.com/questions/34712248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5771257/" ]
You can get results by doing: ``` select r.user_id, count(*) - 1 from reports r group by r.user_id; ``` Is this the calculation that you want?
Try ``` SELECT user_id, SUM(qu) AS qu FROM ( SELECT user_id, count(*)-1 AS qu FROM reports GROUP BY user_id, dev_id ) AS r GROUP BY user_id ``` No need to do a join if all the data you need is in one table. Edit: changed the group by to dev\_id instead of user\_id. Edit2: I think you need both dev\_id and user\_id in the group by clause. Edit3: Added a subquery to get the desired result. This might be a little cumbersome; perhaps someone has a way to improve it?
45,510,287
I want to split a long math equation by its multipliers. The expression is given as a string in which whitespace is allowed. For example: ``` "((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)" ``` should return: ``` ['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1'] ``` If division is used, things get even more complicated, but let's assume there is no division for a start. What would be the most Pythonic way to solve this?
2017/08/04
[ "https://Stackoverflow.com/questions/45510287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3152072/" ]
``` import re string = "((a-b>0) * (e + 10>500)) * (abs(j)>2.0) * (n>1)" signals = {'+','*','/','-'} ### ## def splitString(string): arr_equations = re.split(r'([\)]+(\*|\-|\+|\/)+[\(])', string.replace(" ", "")) new_array = [] for each_equa in arr_equations: each_equa = each_equa.strip("()") if each_equa not in signals: new_array.append(each_equa) return new_array ### ## print(splitString(string)) ```
You can simply use the `split()` function: ``` ans_list = your_string.split(" * ") ``` Note the spaces around the multiplier sign. This assumes that your string is exactly as you say.
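Running the plain `split(" * ")` from the answer above on the question's example shows it leaves the grouping parentheses behind; a follow-up `strip("()")` pass recovers the desired list for this input, though it would mangle a factor that ends in a function call such as `f(x)` (a sketch, not a general parser):

```python
expr = "((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)"

parts = expr.split(" * ")
print(parts)  # ['((a*b>0)', '(e>500))', '(abs(j)>2.0)', '(n>1)']

# str.strip only removes characters from the ends, so the inner
# parentheses of abs(j) survive.
cleaned = [p.strip("()") for p in parts]
print(cleaned)  # ['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1']
```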
45,510,287
I want to split a long math equation by its multipliers. The expression is given as a string in which whitespace is allowed. For example: ``` "((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)" ``` should return: ``` ['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1'] ``` If division is used, things get even more complicated, but let's assume there is no division for a start. What would be the most Pythonic way to solve this?
2017/08/04
[ "https://Stackoverflow.com/questions/45510287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3152072/" ]
``` import re string = "((a-b>0) * (e + 10>500)) * (abs(j)>2.0) * (n>1)" signals = {'+','*','/','-'} ### ## def splitString(string): arr_equations = re.split(r'([\)]+(\*|\-|\+|\/)+[\(])', string.replace(" ", "")) new_array = [] for each_equa in arr_equations: each_equa = each_equa.strip("()") if each_equa not in signals: new_array.append(each_equa) return new_array ### ## print(splitString(string)) ```
you can use regex: ``` s = "((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)" s = ''.join(s.split()) s = re.split(r'([\)]+[\*\+\-/\^]+[\(])', s) res = [] for x in s: x = re.sub(r'(^[\(\)\*\+\-\/]+)', '', x) x = re.sub(r'([\(\)]+$)', '', x) if len(x) > 0: res.append(x) print(res) ```
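Wrapped in a function, the regex approach above reproduces the output requested in the question (a sketch; the `^` in the operator class is carried over from the answer even though the example never exercises it):

```python
import re

def split_factors(expr):
    # Remove all whitespace, then split on ")op(" boundaries between factors.
    s = "".join(expr.split())
    parts = re.split(r'([\)]+[\*\+\-/\^]+[\(])', s)
    res = []
    for x in parts:
        x = re.sub(r'^[\(\)\*\+\-\/]+', '', x)  # strip leading parens/operators
        x = re.sub(r'[\(\)]+$', '', x)          # strip trailing parens
        if x:
            res.append(x)
    return res

print(split_factors("((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)"))
# ['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1']
```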
57,302,048
How to Fix this error? I tried visiting all the forums searching for answers to rectify this issue. Here i am trying to perform multi-label classification using keras ``` from keras.preprocessing.text import Tokenizer from keras.models import Sequential from keras.layers import Dense from keras.preprocessing.sequence import pad_sequences from keras.layers import Input, Dense, Dropout, Embedding, LSTM, Flatten from keras.models import Model from keras.utils import to_categorical from keras.callbacks import ModelCheckpoint MAX_LENGTH = 500 tokenizer = Tokenizer() tokenizer.fit_on_texts(df.overview.values) post_seq = tokenizer.texts_to_sequences(df.overview.values) post_seq_padded = pad_sequences(post_seq, maxlen=MAX_LENGTH) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(post_seq_padded, train_encoded, test_size=0.3) vocab_size = len(tokenizer.word_index) + 1 inputs = Input(shape=(MAX_LENGTH, )) embedding_layer = Embedding(vocab_size, 128, input_length=MAX_LENGTH)(inputs) x = Dense(64, input_shape=(None,), activation='relu')(embedding_layer) predictions = Dense(num_classes, activation='softmax')(x) model = Model(inputs=[inputs], outputs=predictions) model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc']) model.summary() filepath="weights.hdf5" checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max') history = model.fit(X_train, batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.25, shuffle=True, epochs=10, callbacks=[checkpointer]) ``` **ValueError Traceback (most recent call last)** ``` <ipython-input-11-7fdc4bff9648> in <module> 2 checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max') 3 history = model.fit(X_train, batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.25, **---->** 4 shuffle=True, epochs=10, callbacks=[checkpointer]) ``` ***ValueError: 
Error when checking target: expected dense\_3 to have shape (500, 4) but got array with shape (4, 2)*** I expected the output shape to be (500,3), but I am getting (4,2), which does not match, so I cannot proceed further.
2019/08/01
[ "https://Stackoverflow.com/questions/57302048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11050535/" ]
Adding an index along the lines of the following might help the performance: ``` CREATE INDEX idx ON table_e (Phone_number, bill_date, col1, col2); ``` Here `col1` and `col2` are the other two columns which might appear in the `SELECT` clause. The strategy of this index, if used, would be to scan the relatively small `table_a`, which only has 309 records. For each record in `a`, MySQL would then use the above index to rapidly find the matching records in the `e` table.
If you can make changes to the schema, it is better to create an `index` on `a.invoice_date`. Please see the [link](https://dev.mysql.com/doc/refman/8.0/en/index-hints.html) on index hints for the same.
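The effect of the composite index suggested above can be sketched with sqlite3 standing in for MySQL (table and column names are taken from the answer and are illustrative; `EXPLAIN QUERY PLAN` plays the role of MySQL's `EXPLAIN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table_e (Phone_number TEXT, bill_date TEXT, col1 TEXT, col2 TEXT)"
)
conn.execute("CREATE INDEX idx ON table_e (Phone_number, bill_date, col1, col2)")

# Because col1/col2 are in the index too, the engine can answer this
# from the index alone (a covering index).
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT col1, col2 FROM table_e WHERE Phone_number = ? AND bill_date = ?",
    ("555-0100", "2019-07-01"),
).fetchall()
print(plan[0][-1])  # e.g. "SEARCH table_e USING COVERING INDEX idx (...)"
```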
26,535,493
I wrote a custom python module. It consists of several functions divided thematically between 3 .py files, which are all in the same directory called `microbiome` in my home directory. So the whole path to my custom module directory is: ``` /Users/drosophila/microbiome ``` I'm working on OsX Mavericks. I want to import this module in python scripts which I run from a different directory. I tried adding the `microbiome` directory to the path by editing `/etc/paths`: ``` sudo nano /etc/paths ``` Then in `/etc/paths` I write: ``` /usr/bin /bin /usr/sbin /sbin /usr/local/bin /Users/drosophila/blast-2.2.22/bin /Users/drosophila/blast-2.2.22/ /Users/drosophila/microbiome ``` I also tried editing `.bash_profile` as follows: ``` export PATH="/Users/drosophila/microbiome:/Users/drosophila/anaconda/bin:$PATH" ``` It seems that the 'microbiome' directory is added to the path successfully, since `echo $PATH` shows the directory is in there: ``` /Users/drosophilq/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usearch:/Users/drosophila/blast-2.2.22/bin:/Users/drosophila/blast-2.2.22/:/Users/drosophila/microbiome:/opt/X11/bin:/usr/texbin ``` However, when I try to import the microbiome module in python, it insists that such a module doesn't exist. I have Python 3.4.1 |Anaconda 2.0.1 The 'microbiome' directory contains an empty `__init__.py` file. What am I doing wrong?
2014/10/23
[ "https://Stackoverflow.com/questions/26535493", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1954277/" ]
The *right* way to do this, as explained in the [Python Packaging User Guide](https://packaging.python.org/en/latest/), is to create a `setup.py`-based project. Then, you can just install your code for any particular Python installation (or virtual environment) by using, e.g., `pip3 install .` from the root directory of the project. That makes sure everything gets copied, with the proper layout, into some appropriate site-packages directory, where it will be available for that Python installation to import. Trying to do what the standard tools do yourself is just making things harder on yourself. --- That being said, if you really, really want to, the key is that you need to get your new directory into the `sys.path` for the Python installation you want. Modifying `PATH` or `/etc/paths` won't do that. Modifying `PYTHONPATH` will, but it will affect *every* installation. The way to do this is to add a `.pth` file and/or a `sitecustomize.py` file, as described in the docs for the [`site`](https://docs.python.org/3/library/site.html) module. I don't know where your Anaconda site-packages is (you can find out by `import sys; print(sys.path)` from within Python), but let's say it's `/usr/local/anaconda/lib/python3.4/site-packages`. So, you can create a `microbiome.pth` file in that directory, with the absolute path to your `/Users/drosophilia/microbiome` directory. Then, every time you start Python, that directory will be added to `sys.path`, and your import will work. --- It's also worth noting that if you just want to reuse a directory as if it were part of a handful of different projects, and you don't want to even think about "installation" or anything like that, there are even simpler ways to do it: Symlink the directory into your different projects. Or, if you're using version control, create a git submodule. Or various other similar equivalents. 
Then, it looks like each project just includes `microbiome` as part of that project, and you don't have to worry about paths or anything else.
As you've discovered, `/etc/paths` affects `$PATH`. But `$PATH` does not affect where Python looks for modules. Try `$PYTHONPATH` instead. See `man python` for details.
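The distinction above can be demonstrated with a small script: a package directory added only to `$PATH` stays invisible to `import`, while putting it on `$PYTHONPATH` (which feeds `sys.path`) makes it importable (the `microbiome` name mirrors the question and is created in a temp directory):

```python
import os
import subprocess
import sys
import tempfile

# Build an empty package like the one in the question.
parent = tempfile.mkdtemp()
os.mkdir(os.path.join(parent, "microbiome"))
open(os.path.join(parent, "microbiome", "__init__.py"), "w").close()

env = dict(os.environ)
env["PATH"] = env.get("PATH", "") + os.pathsep + parent
env.pop("PYTHONPATH", None)
fails = subprocess.run([sys.executable, "-c", "import microbiome"],
                       env=env, stderr=subprocess.DEVNULL)
print(fails.returncode)  # non-zero: $PATH does not influence imports

env["PYTHONPATH"] = parent
works = subprocess.run([sys.executable, "-c", "import microbiome"],
                       env=env, stderr=subprocess.DEVNULL)
print(works.returncode)  # 0: PYTHONPATH ends up on sys.path
```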
56,973,032
So I am trying to install plaidML-keras so I can do tensor-flow stuff on my MacBookPro's gpu (radeon pro 560x). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it, or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the radeon pro 560x? Also, I did check, and my mac is compatible with openCL 1.2 (required for plaidML).
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
Disclaimer: I'm on the PlaidML team, and we're actively working to improve the setup experience and documentation around it. We're sorry you were stuck on this. For now, here are some instructions to get you back on track. 1. Find out where plaidml-setup was installed. Typically, this is some variant of `/usr/local/bin` or a path to your virtual environment. The prefix of this path (i.e. `/usr/local`) is important to note for the next step. 2. Find the plaidml `share` directory. It's within the same prefix as plaidml-setup, i.e. `/usr/local/share/plaidml`. 3. Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` should be in there. If they're not in there, you can copy [the files here](https://github.com/plaidml/plaidml/tree/master/plaidml/configs) to your plaidml `share` directory. After copying those json files over, you should be able to run `plaidml-setup` with no issue.
I'm facing the same problem, and the answers online are not very helpful. In this case, I'd suggest debugging it yourself. Since this is where the problem is: ``` File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) ``` you can `vim /usr/local/lib/python3.7/site-packages/plaidml/settings.py` and read the code. Basically it's trying to use the function `_find_config` to locate the config files. After `cfg_path = os.path.join(prefix, 'share', 'plaidml', name)`, I added `print(cfg_path)` to see what path it's looking for, and I got: ``` /usr/local/Caskroom/miniconda/base/share/plaidml/experimental.json /usr/local/Caskroom/miniconda/base/share/plaidml/config.json ``` This is why it's hard to tell you where to put the files: it depends on your system setup. Not everyone is using `cask` and `conda` like me, so the paths will likely be different on your OS. @Denise Kutnick: thanks for your hard work; maybe either print `cfg_path` when there's a problem, or try adding `.` as a search path, so that it would be easier for users to get a clue?
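The lookup described in both answers above can be sketched in a few lines: a helper that mirrors the `os.path.join(prefix, 'share', 'plaidml', name)` pattern from `settings.py` and reports which candidate locations exist (the prefix list is illustrative, not PlaidML's actual search order):

```python
import os
import sys

def candidate_config_paths(name, prefixes=(sys.prefix, "/usr/local")):
    # Mirrors cfg_path = os.path.join(prefix, 'share', 'plaidml', name)
    return [os.path.join(p, "share", "plaidml", name) for p in prefixes]

for path in candidate_config_paths("experimental.json"):
    print(path, "->", "found" if os.path.exists(path) else "missing")
```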
56,973,032
So I am trying to install plaidML-keras so I can do TensorFlow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check, and my Mac is compatible with OpenCL 1.2 (required for PlaidML)
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
Disclaimer: I'm on the PlaidML team, and we're actively working to improve the setup experience and documentation around it. We're sorry you were stuck on this. For now, here's some instructions to get you back on track. 1. Find out where plaidml-setup was installed. Typically, this is some variant of `/usr/local/bin` or a path to your virtual environment. The prefix of this path (i.e. `/usr/local`) is important to note for the next step. 2. Find the plaidml `share` directory. It's within the same prefix as plaidml-setup, i.e. `/usr/local/share/plaidml`. 3. Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` should be in there. If they're not in there, you can copy [the files here](https://github.com/plaidml/plaidml/tree/master/plaidml/configs) to your plaidml `share` directory. After copying those json files over, you should be able to run `plaidml-setup` with no issue.
As I wrote here: <https://superuser.com/questions/1404114/traceback-error-during-plaidml-installation/1488059#1488059> the file `plaidml/settings.py` uses the variable `sys.prefix`, which for some reason has the wrong value on my system: it contains `/usr` instead of `~/.local`, so it tries to load `/usr/share/plaidml/experimental.json` instead of `~/.local/share/plaidml/experimental.json`. I don't know how to fix the value of `sys.prefix` yet, nor whether plaidml will then be able to find its `.so` file...
56,973,032
So I am trying to install plaidML-keras so I can do TensorFlow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check, and my Mac is compatible with OpenCL 1.2 (required for PlaidML)
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
Disclaimer: I'm on the PlaidML team, and we're actively working to improve the setup experience and documentation around it. We're sorry you were stuck on this. For now, here's some instructions to get you back on track. 1. Find out where plaidml-setup was installed. Typically, this is some variant of `/usr/local/bin` or a path to your virtual environment. The prefix of this path (i.e. `/usr/local`) is important to note for the next step. 2. Find the plaidml `share` directory. It's within the same prefix as plaidml-setup, i.e. `/usr/local/share/plaidml`. 3. Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` should be in there. If they're not in there, you can copy [the files here](https://github.com/plaidml/plaidml/tree/master/plaidml/configs) to your plaidml `share` directory. After copying those json files over, you should be able to run `plaidml-setup` with no issue.
Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` (in my case under `/usr/local/lib/python3.8/site-packages/plaidml`). In addition, do the following: `export PLAIDML_NATIVE_PATH=/usr/local/lib/libplaidml.dylib` and `export RUNFILES_DIR=/usr/local/share/plaidml`.
56,973,032
So I am trying to install plaidML-keras so I can do TensorFlow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check, and my Mac is compatible with OpenCL 1.2 (required for PlaidML)
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
You need to set **plaidml** and **libplaidml.dylib** path correctly in environment. Possible paths for **plaidml** 1. `/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml` 2. `/usr/local/share/plaidml` 3. Some other location. Search it. Possible paths for **libplaidml.dylib** 1. `/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib` 2. `/usr/local/lib/libplaidml.dylib` 3. Some other location. Search it. ### Example: In your code set environment variable: ``` import os os.environ["KERAS_BACKEND"] = "plaidml.keras.backend" os.environ["RUNFILES_DIR"] = "/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml" os.environ["PLAIDML_NATIVE_PATH"] = "/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib" ``` For complete steps to setup opencl and plaidml. See [this](https://stackoverflow.com/a/60016869/6117565). ### Note: This is for macOS/Linux
I'm facing the same problem and answers online are not very helpful. In this case, I'd suggest debugging yourself. Since this is where the problem is: ``` File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) ``` You can `vim /usr/local/lib/python3.7/site-packages/plaidml/settings.py`, and read the code. Basically it's trying to use function `_find_config` to get config files. After `cfg_path = os.path.join(prefix, 'share', 'plaidml', name)`, I added `print(cfg_path)` to see what path it's looking for. And I got: ``` /usr/local/Caskroom/miniconda/base/share/plaidml/experimental.json /usr/local/Caskroom/miniconda/base/share/plaidml/config.json ``` This is why it's hard to tell you where to put the files: it depends on your system setup. Not everyone is using `cask` and `conda` like me, so I assume it should be different in your OS. @Denise Kutnick: thanks for your hard work, maybe either print cfg\_path when there's a problem, or try to add `.` as a search path, so that it would be easier for users to get some clue?
56,973,032
So I am trying to install plaidML-keras so I can do TensorFlow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check, and my Mac is compatible with OpenCL 1.2 (required for PlaidML)
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
You need to set **plaidml** and **libplaidml.dylib** path correctly in environment. Possible paths for **plaidml** 1. `/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml` 2. `/usr/local/share/plaidml` 3. Some other location. Search it. Possible paths for **libplaidml.dylib** 1. `/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib` 2. `/usr/local/lib/libplaidml.dylib` 3. Some other location. Search it. ### Example: In your code set environment variable: ``` import os os.environ["KERAS_BACKEND"] = "plaidml.keras.backend" os.environ["RUNFILES_DIR"] = "/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml" os.environ["PLAIDML_NATIVE_PATH"] = "/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib" ``` For complete steps to setup opencl and plaidml. See [this](https://stackoverflow.com/a/60016869/6117565). ### Note: This is for macOS/Linux
As I wrote here: <https://superuser.com/questions/1404114/traceback-error-during-plaidml-installation/1488059#1488059> the file `plaidml/settings.py` uses the variable `sys.prefix`, which for some reason has the wrong value on my system: it contains `/usr` instead of `~/.local`, so it tries to load `/usr/share/plaidml/experimental.json` instead of `~/.local/share/plaidml/experimental.json`. I don't know how to fix the value of `sys.prefix` yet, nor whether plaidml will then be able to find its `.so` file...
56,973,032
So I am trying to install plaidML-keras so I can do TensorFlow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error. ``` Traceback (most recent call last): File "/usr/local/bin/plaidml-setup", line 6, in <module> from plaidml.plaidml_setup import main File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module> import plaidml.settings File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module> _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json') File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config 'Could not find PlaidML configuration file: "{}".'.format(filename)) plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json". ``` From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions. > > Finally, set up PlaidML to use a preferred computing device > > > But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check, and my Mac is compatible with OpenCL 1.2 (required for PlaidML)
2019/07/10
[ "https://Stackoverflow.com/questions/56973032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8565630/" ]
You need to set **plaidml** and **libplaidml.dylib** path correctly in environment. Possible paths for **plaidml** 1. `/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml` 2. `/usr/local/share/plaidml` 3. Some other location. Search it. Possible paths for **libplaidml.dylib** 1. `/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib` 2. `/usr/local/lib/libplaidml.dylib` 3. Some other location. Search it. ### Example: In your code set environment variable: ``` import os os.environ["KERAS_BACKEND"] = "plaidml.keras.backend" os.environ["RUNFILES_DIR"] = "/Library/Frameworks/Python.framework/Versions/3.7/share/plaidml" os.environ["PLAIDML_NATIVE_PATH"] = "/Library/Frameworks/Python.framework/Versions/3.7/lib/libplaidml.dylib" ``` For complete steps to setup opencl and plaidml. See [this](https://stackoverflow.com/a/60016869/6117565). ### Note: This is for macOS/Linux
Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` (in my case under `/usr/local/lib/python3.8/site-packages/plaidml`). In addition, do the following: `export PLAIDML_NATIVE_PATH=/usr/local/lib/libplaidml.dylib` and `export RUNFILES_DIR=/usr/local/share/plaidml`.
16,580,285
I am writing a Python script to keep a buggy program open, and I need to figure out when the program is not responding so that I can close it on Windows. I can't quite figure out how to do this.
2013/05/16
[ "https://Stackoverflow.com/questions/16580285", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2125510/" ]
On Windows you can do this: ``` import os def isresponding(name): os.system('tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running" > tmp.txt' % name) tmp = open('tmp.txt', 'r') a = tmp.readlines() tmp.close() if a[-1].split()[0] == name: return True else: return False ``` It is more robust to use the PID though: ``` def isrespondingPID(PID): os.system('tasklist /FI "PID eq %d" /FI "STATUS eq running" > tmp.txt' % PID) tmp = open('tmp.txt', 'r') a = tmp.readlines() tmp.close() if int(a[-1].split()[1]) == PID: return True else: return False ``` From `tasklist` you can get more information than that. To get the "NOT RESPONDING" processes directly, just change "running" by "not responding" in the functions given. [See more info here](http://www.gossamer-threads.com/lists/python/python/796145).
Piling up on the awesome answer from @Saullo GP Castro, this is a version using `subprocess.Popen` instead of `os.system` to avoid creating a temporary file. ```py import subprocess def isresponding(name): """Check if a program (based on its name) is responding""" cmd = 'tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running"' % name status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read() return name in str(status) ``` The corresponding PID version is: ``` def isresponding_PID(pid): """Check if a program (based on its PID) is responding""" cmd = 'tasklist /FI "PID eq %d" /FI "STATUS eq running"' % pid status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read() return str(pid) in str(status) ``` Timing with `timeit` showed that the `subprocess.Popen` implementation is about twice as fast (mainly because we don't need to go through a file): ``` +-----------------------------+---------------------------+ | Function | Time in s (10 iterations) | +-----------------------------+---------------------------+ | isresponding_os | 8.902 | +-----------------------------+---------------------------+ | isrespondingPID_os | 8.318 | +-----------------------------+---------------------------+ | isresponding_subprocess | 4.852 | +-----------------------------+---------------------------+ | isresponding_PID_subprocess | 4.868 | +-----------------------------+---------------------------+ ``` Surprisingly, the `os.system` implementation is a bit slower when we use the PID, while the two `subprocess.Popen` variants are about the same. Hope it can help.
5,230,699
``` gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1, 'previous': 1, 'detectives': 1, 'republican': 1, 'group': 1, 'monitor': 1, 'clashes': 1, 'civil': 1, 'charge': 1, 'breaches': 1, 'travelling': 1, 'main': 1, 'disrupt': 1, 'real': 1, 'policing': 3, 'march': 6, 'finance': 1, 'drawn': 1, 'assistant': 1, 'protesters': 1, 'emphasised': 1, 'department': 1, 'traffic': 2, 'outbreak': 1, 'culprits': 1, 'proportionate': 1, 'instructions': 1, 'warned': 2, 'commanders': 1, 'michael': 2, 'exploit': 1, 'culminating': 1, 'large': 2, 'continue': 1, 'team': 1, 'hijack': 1, 'disorder': 1, 'square': 1, 'leaders': 1, 'deal': 2, 'people': 3, 'streets': 1, 'demonstrations': 2, 'observed': 1, 'street': 2, 'college': 1, 'organised': 1, 'operation': 1, 'special': 1, 'shown': 1, 'attendance': 1, 'normal': 1, 'unions': 2, 'individuals': 1, 'safety': 2, 'prosecuted': 1, 'ira': 1, 'ground': 1, 'public': 2, 'told': 1, 'body': 1, 'stewards': 2, 'obey': 1, 'business': 1, 'gathered': 1, 'assemble': 1, 'garda': 5, 'sinn': 1, 'broken': 1, 'fachtna': 1, 'management': 2, 'possibility': 1, 'groups': 3, 'put': 1, 'affiliated': 1, 'strong': 2, 'security': 1, 'stage': 1, 'behaviour': 1, 'involved': 1, 'route': 2, 'violence': 1, 'dublin': 3, 'fein': 1, 'ensure': 2, 'stand': 1, 'act': 2, 'contingency': 1, 'troublemakers': 2, 'facilitate': 2, 'road': 1, 'members': 1, 'prepared': 1, 'presence': 1, 'sullivan': 2, 'reassure': 1, 'number': 3, 'community': 1, 'strategic': 1, 'visible': 2, 'addressed': 1, 'notify': 1, 'trained': 1, 'eirigi': 1, 'city': 4, 'gpo': 1, 'from': 3, 'crowd': 1, 'visit': 1, 'wood': 1, 'editor': 1, 'peaceful': 4, 'expected': 2, 'today': 1, 'commissioner': 4, 'quay': 1, 'ictu': 1, 'advance': 1, 'murphy': 2, 'gardai': 6, 'aware': 1, 'closures': 1, 'courts': 1, 'branch': 1, 'deployed': 1, 'made': 1, 'thousands': 1, 'socialist': 1, 'work': 1, 'supt': 2, 'feehan': 1, 'mr': 1, 'briefing': 1, 
'visited': 1, 'manner': 1, 'irish': 2, 'metropolitan': 1, 'spotters': 1, 'organisers': 1, 'in': 13, 'dissident': 1, 'evidence': 1, 'tom': 1, 'arrangements': 3, 'experience': 1, 'allowed': 1, 'sought': 1, 'rally': 1, 'connell': 1, 'officers': 3, 'potential': 1, 'holding': 1, 'units': 1, 'place': 2, 'events': 1, 'dignified': 1, 'planned': 1, 'independent': 1, 'added': 2, 'plans': 1, 'congress': 1, 'centre': 3, 'comprehensive': 1, 'measures': 1, 'yesterday': 2, 'alert': 1, 'important': 1, 'moving': 1, 'plan': 2, 'highly': 1, 'law': 2, 'senior': 2, 'fair': 1, 'recent': 1, 'refuse': 1, 'attempt': 1, 'brady': 1, 'liaising': 1, 'conscious': 1, 'light': 1, 'clear': 1, 'headquarters': 1, 'wing': 1, 'chief': 2, 'maintain': 1, 'harcourt': 1, 'order': 2, 'left': 1}} ``` I have a python script that extracts words from text files and counts the number of times they occur in the file. I want to add them to an ".ARFF" file to use for weka classification. Above is an example output of my python script. How do I go about inserting them into an ARFF file, keeping each text file separate. Each file is differentiated by {"with their words in here!!"}
2011/03/08
[ "https://Stackoverflow.com/questions/5230699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/515263/" ]
There are details on the [ARFF file format here](http://www.cs.waikato.ac.nz/~ml/weka/arff.html) and it's very simple to generate. For example, using a cut-down version of your Python dictionary, the following script: ``` import re d = { 'gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1 }} for original_filename in d.keys(): m = re.search('^(.*)\.html$',original_filename,) if not m: print "Ignoring the file:", original_filename continue output_filename = m.group(1)+'.arff' with open(output_filename,"w") as fp: fp.write('''@RELATION wordcounts @ATTRIBUTE word string @ATTRIBUTE count numeric @DATA ''') for word_and_count in d[original_filename].items(): fp.write("%s,%d\n" % word_and_count) ``` Generates output of the form: ``` @RELATION wordcounts @ATTRIBUTE word string @ATTRIBUTE count numeric @DATA dail,1 focus,1 actions,1 trade,2 protest,1 identify,1 ``` ... in a file called `gardai-plan-crackdown-on-troublemakers-at-protest-2438316.arff`. If that's not exactly what you want, I'm sure you can easily alter it. (For example, if the "words" might have spaces or other punctuation in them, you probably want to quote them.)
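The answer's closing caveat about quoting can be sketched concretely. This helper is not part of the answer's script (the names are illustrative), and it assumes Weka's convention of single-quoting string values that contain special characters and backslash-escaping embedded quotes:

```python
def arff_quote(value):
    """Quote an ARFF string value if it contains characters that need it."""
    needs_quoting = any(c in value for c in " ,'\"%{}")
    if needs_quoting:
        return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"
    return value

def arff_lines(relation, counts):
    """Yield the lines of a word-count ARFF file from a {word: count} dict."""
    yield "@RELATION " + relation
    yield "@ATTRIBUTE word string"
    yield "@ATTRIBUTE count numeric"
    yield "@DATA"
    for word, count in sorted(counts.items()):
        yield "%s,%d" % (arff_quote(word), count)
```

Writing a file then reduces to `open(name, "w").write("\n".join(arff_lines(...)) + "\n")`, and words with spaces or commas no longer break the `@DATA` section.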
I know it's pretty easy to generate an arff file on your own, but I still wanted to make it simpler so I wrote a python package <https://github.com/ubershmekel/arff> It's also on pypi so `easy_install arff`
5,230,699
``` gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1, 'previous': 1, 'detectives': 1, 'republican': 1, 'group': 1, 'monitor': 1, 'clashes': 1, 'civil': 1, 'charge': 1, 'breaches': 1, 'travelling': 1, 'main': 1, 'disrupt': 1, 'real': 1, 'policing': 3, 'march': 6, 'finance': 1, 'drawn': 1, 'assistant': 1, 'protesters': 1, 'emphasised': 1, 'department': 1, 'traffic': 2, 'outbreak': 1, 'culprits': 1, 'proportionate': 1, 'instructions': 1, 'warned': 2, 'commanders': 1, 'michael': 2, 'exploit': 1, 'culminating': 1, 'large': 2, 'continue': 1, 'team': 1, 'hijack': 1, 'disorder': 1, 'square': 1, 'leaders': 1, 'deal': 2, 'people': 3, 'streets': 1, 'demonstrations': 2, 'observed': 1, 'street': 2, 'college': 1, 'organised': 1, 'operation': 1, 'special': 1, 'shown': 1, 'attendance': 1, 'normal': 1, 'unions': 2, 'individuals': 1, 'safety': 2, 'prosecuted': 1, 'ira': 1, 'ground': 1, 'public': 2, 'told': 1, 'body': 1, 'stewards': 2, 'obey': 1, 'business': 1, 'gathered': 1, 'assemble': 1, 'garda': 5, 'sinn': 1, 'broken': 1, 'fachtna': 1, 'management': 2, 'possibility': 1, 'groups': 3, 'put': 1, 'affiliated': 1, 'strong': 2, 'security': 1, 'stage': 1, 'behaviour': 1, 'involved': 1, 'route': 2, 'violence': 1, 'dublin': 3, 'fein': 1, 'ensure': 2, 'stand': 1, 'act': 2, 'contingency': 1, 'troublemakers': 2, 'facilitate': 2, 'road': 1, 'members': 1, 'prepared': 1, 'presence': 1, 'sullivan': 2, 'reassure': 1, 'number': 3, 'community': 1, 'strategic': 1, 'visible': 2, 'addressed': 1, 'notify': 1, 'trained': 1, 'eirigi': 1, 'city': 4, 'gpo': 1, 'from': 3, 'crowd': 1, 'visit': 1, 'wood': 1, 'editor': 1, 'peaceful': 4, 'expected': 2, 'today': 1, 'commissioner': 4, 'quay': 1, 'ictu': 1, 'advance': 1, 'murphy': 2, 'gardai': 6, 'aware': 1, 'closures': 1, 'courts': 1, 'branch': 1, 'deployed': 1, 'made': 1, 'thousands': 1, 'socialist': 1, 'work': 1, 'supt': 2, 'feehan': 1, 'mr': 1, 'briefing': 1, 
'visited': 1, 'manner': 1, 'irish': 2, 'metropolitan': 1, 'spotters': 1, 'organisers': 1, 'in': 13, 'dissident': 1, 'evidence': 1, 'tom': 1, 'arrangements': 3, 'experience': 1, 'allowed': 1, 'sought': 1, 'rally': 1, 'connell': 1, 'officers': 3, 'potential': 1, 'holding': 1, 'units': 1, 'place': 2, 'events': 1, 'dignified': 1, 'planned': 1, 'independent': 1, 'added': 2, 'plans': 1, 'congress': 1, 'centre': 3, 'comprehensive': 1, 'measures': 1, 'yesterday': 2, 'alert': 1, 'important': 1, 'moving': 1, 'plan': 2, 'highly': 1, 'law': 2, 'senior': 2, 'fair': 1, 'recent': 1, 'refuse': 1, 'attempt': 1, 'brady': 1, 'liaising': 1, 'conscious': 1, 'light': 1, 'clear': 1, 'headquarters': 1, 'wing': 1, 'chief': 2, 'maintain': 1, 'harcourt': 1, 'order': 2, 'left': 1}} ``` I have a python script that extracts words from text files and counts the number of times they occur in the file. I want to add them to an ".ARFF" file to use for weka classification. Above is an example output of my python script. How do I go about inserting them into an ARFF file, keeping each text file separate. Each file is differentiated by {"with their words in here!!"}
2011/03/08
[ "https://Stackoverflow.com/questions/5230699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/515263/" ]
There are details on the [ARFF file format here](http://www.cs.waikato.ac.nz/~ml/weka/arff.html) and it's very simple to generate. For example, using a cut-down version of your Python dictionary, the following script: ``` import re d = { 'gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1 }} for original_filename in d.keys(): m = re.search('^(.*)\.html$',original_filename,) if not m: print "Ignoring the file:", original_filename continue output_filename = m.group(1)+'.arff' with open(output_filename,"w") as fp: fp.write('''@RELATION wordcounts @ATTRIBUTE word string @ATTRIBUTE count numeric @DATA ''') for word_and_count in d[original_filename].items(): fp.write("%s,%d\n" % word_and_count) ``` Generates output of the form: ``` @RELATION wordcounts @ATTRIBUTE word string @ATTRIBUTE count numeric @DATA dail,1 focus,1 actions,1 trade,2 protest,1 identify,1 ``` ... in a file called `gardai-plan-crackdown-on-troublemakers-at-protest-2438316.arff`. If that's not exactly what you want, I'm sure you can easily alter it. (For example, if the "words" might have spaces or other punctuation in them, you probably want to quote them.)
[This project](https://github.com/renatopp/liac-arff) seems to be a bit more up to date. You can install it via pip: ``` $ pip install liac-arff ``` or easy\_install: ``` $ easy_install liac-arff ```
5,230,699
``` gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1, 'previous': 1, 'detectives': 1, 'republican': 1, 'group': 1, 'monitor': 1, 'clashes': 1, 'civil': 1, 'charge': 1, 'breaches': 1, 'travelling': 1, 'main': 1, 'disrupt': 1, 'real': 1, 'policing': 3, 'march': 6, 'finance': 1, 'drawn': 1, 'assistant': 1, 'protesters': 1, 'emphasised': 1, 'department': 1, 'traffic': 2, 'outbreak': 1, 'culprits': 1, 'proportionate': 1, 'instructions': 1, 'warned': 2, 'commanders': 1, 'michael': 2, 'exploit': 1, 'culminating': 1, 'large': 2, 'continue': 1, 'team': 1, 'hijack': 1, 'disorder': 1, 'square': 1, 'leaders': 1, 'deal': 2, 'people': 3, 'streets': 1, 'demonstrations': 2, 'observed': 1, 'street': 2, 'college': 1, 'organised': 1, 'operation': 1, 'special': 1, 'shown': 1, 'attendance': 1, 'normal': 1, 'unions': 2, 'individuals': 1, 'safety': 2, 'prosecuted': 1, 'ira': 1, 'ground': 1, 'public': 2, 'told': 1, 'body': 1, 'stewards': 2, 'obey': 1, 'business': 1, 'gathered': 1, 'assemble': 1, 'garda': 5, 'sinn': 1, 'broken': 1, 'fachtna': 1, 'management': 2, 'possibility': 1, 'groups': 3, 'put': 1, 'affiliated': 1, 'strong': 2, 'security': 1, 'stage': 1, 'behaviour': 1, 'involved': 1, 'route': 2, 'violence': 1, 'dublin': 3, 'fein': 1, 'ensure': 2, 'stand': 1, 'act': 2, 'contingency': 1, 'troublemakers': 2, 'facilitate': 2, 'road': 1, 'members': 1, 'prepared': 1, 'presence': 1, 'sullivan': 2, 'reassure': 1, 'number': 3, 'community': 1, 'strategic': 1, 'visible': 2, 'addressed': 1, 'notify': 1, 'trained': 1, 'eirigi': 1, 'city': 4, 'gpo': 1, 'from': 3, 'crowd': 1, 'visit': 1, 'wood': 1, 'editor': 1, 'peaceful': 4, 'expected': 2, 'today': 1, 'commissioner': 4, 'quay': 1, 'ictu': 1, 'advance': 1, 'murphy': 2, 'gardai': 6, 'aware': 1, 'closures': 1, 'courts': 1, 'branch': 1, 'deployed': 1, 'made': 1, 'thousands': 1, 'socialist': 1, 'work': 1, 'supt': 2, 'feehan': 1, 'mr': 1, 'briefing': 1, 
'visited': 1, 'manner': 1, 'irish': 2, 'metropolitan': 1, 'spotters': 1, 'organisers': 1, 'in': 13, 'dissident': 1, 'evidence': 1, 'tom': 1, 'arrangements': 3, 'experience': 1, 'allowed': 1, 'sought': 1, 'rally': 1, 'connell': 1, 'officers': 3, 'potential': 1, 'holding': 1, 'units': 1, 'place': 2, 'events': 1, 'dignified': 1, 'planned': 1, 'independent': 1, 'added': 2, 'plans': 1, 'congress': 1, 'centre': 3, 'comprehensive': 1, 'measures': 1, 'yesterday': 2, 'alert': 1, 'important': 1, 'moving': 1, 'plan': 2, 'highly': 1, 'law': 2, 'senior': 2, 'fair': 1, 'recent': 1, 'refuse': 1, 'attempt': 1, 'brady': 1, 'liaising': 1, 'conscious': 1, 'light': 1, 'clear': 1, 'headquarters': 1, 'wing': 1, 'chief': 2, 'maintain': 1, 'harcourt': 1, 'order': 2, 'left': 1}} ``` I have a python script that extracts words from text files and counts the number of times they occur in the file. I want to add them to an ".ARFF" file to use for weka classification. Above is an example output of my python script. How do I go about inserting them into an ARFF file, keeping each text file separate. Each file is differentiated by {"with their words in here!!"}
2011/03/08
[ "https://Stackoverflow.com/questions/5230699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/515263/" ]
I know it's pretty easy to generate an arff file on your own, but I still wanted to make it simpler so I wrote a python package <https://github.com/ubershmekel/arff> It's also on pypi so `easy_install arff`
[This project](https://github.com/renatopp/liac-arff) seems to be a bit more up to date. You can install it via pip: ``` $ pip install liac-arff ``` or easy\_install: ``` $ easy_install liac-arff ```
18,507,559
Since I have also seen this question on SO, this might be a duplicate for many, but I've not found an answer to it. I want to select an item from the navigation bar and show the content inside another tag by replacing the current data with AJAX-generated data. Currently I'm able to post the data to the Python service; it processes it and finally returns it back to the client. The last part, loading the returned data into the `div`, is not happening. Here is my code. **python service** ``` def dashboard(request): if request.is_ajax(): which_nav_bar = request.POST['which_nav_bar'] print which_nav_bar # prints which_nav_bar inside the terminal ctx = {'result' : "hellloooo"} return render_to_response('dashboard/dashboard1.html',ctx, context_instance = RequestContext(request)) ``` **JS file** ``` $(function (){ $("#nav_bar li").click(function(){ alert($(this).text()); // works fine $.ajax({ type: "POST", //able to post the data behind the scenes url: "/dashboard/", data : { 'which_nav_bar' : $(this).text() }, success: function(result){ $(".container-fluid").html(result); } }); }); }); ``` **HTML** ``` <div class="navbar"> <div class="navbar-inner"> <ul class="nav" id="nav_bar"> <li class="active"><a href="#">Home</a></li> <li><a href="#">Device</a></li> <li><a href="#">Content</a></li> <li><a href="#">About</a></li> <li><a href="#">Help</a></li> </ul> </div> </div> <div class="container-fluid"></div> ``` **OUTPUT** On `$(".container-fluid").html(result);`, [this](http://www.flickr.com/photos/37280036@N04/9618885201/) is what I actually get. I instead want my Python code to return something (in this case `ctx`) and print the `ctx` variable.
2013/08/29
[ "https://Stackoverflow.com/questions/18507559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1162512/" ]
Change it to an id: ``` <div id="container-fluid"></div> ``` `$("#container-fluid")` is an id selector ([id-selector](http://api.jquery.com/id-selector/)). If you want to access the element by class instead, use `$(".container-fluid")` ([class-selector](http://api.jquery.com/class-selector/)).
try ``` $("#nav_bar li").click(function(){ var text_input = $(this).text(); // works fine $.ajax({ type: "POST", //able to post the data behind the scenes url: "/dashboard/", data : { 'which_nav_bar' : text_input }, success: function(result){ $("#container-fluid").text(result); } }); }); ``` in your code you use ``` data : { 'which_nav_bar' : $(this).text()}, ``` but in your ajax request `$(this).text()` would be undefined, so assign it to a variable and use it inside the `data{}`. Note that the comma after the `data` object is required to separate it from the `success` option.
18,507,559
Since I too have seen this question on SO, this might be a duplicate for many, but I've not found an answer to it. I want to select an item from the navigation bar and show the content inside another tag by replacing the current data with AJAX-generated data. Currently I'm able to post the data to the Python service; it processes it and finally returns it back to the client. This last part of putting the data into the `div` is not happening. Here is my code. **python service** ``` def dashboard(request): if request.is_ajax(): which_nav_bar = request.POST['which_nav_bar'] print which_nav_bar # prints which_nav_bar inside the terminal ctx = {'result' : "hellloooo"} return render_to_response('dashboard/dashboard1.html',ctx, context_instance = RequestContext(request)) ``` **JS file** ``` $(function (){ $("#nav_bar li").click(function(){ alert($(this).text()); // works fine $.ajax({ type: "POST", //able to post the data behind the scenes url: "/dashboard/", data : { 'which_nav_bar' : $(this).text() }, success: function(result){ $(".container-fluid").html(result); } }); }); }); ``` **HTML** ``` <div class="navbar"> <div class="navbar-inner"> <ul class="nav" id="nav_bar"> <li class="active"><a href="#">Home</a></li> <li><a href="#">Device</a></li> <li><a href="#">Content</a></li> <li><a href="#">About</a></li> <li><a href="#">Help</a></li> </ul> </div> </div> <div class="container-fluid"></div> ``` **OUTPUT** On `$(".container-fluid").html(result);` [this](http://www.flickr.com/photos/37280036@N04/9618885201/) is what I actually get. I instead want my Python code to return something (in this case `ctx`) and print the `ctx` variable.
2013/08/29
[ "https://Stackoverflow.com/questions/18507559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1162512/" ]
Change ``` $("#container-fluid").text(result); ``` to ``` $(".container-fluid").text(result); ``` `#` is used to access by `id` and `.` is used to access by `class`
try ``` $("#nav_bar li").click(function(){ var text_input = $(this).text(); // works fine $.ajax({ type: "POST", //able to post the data behind the scenes url: "/dashboard/", data : { 'which_nav_bar' : text_input }, success: function(result){ $("#container-fluid").text(result); } }); }); ``` in your code you use ``` data : { 'which_nav_bar' : $(this).text()}, ``` but in your ajax request `$(this).text()` would be undefined, so assign it to a variable and use it inside the `data{}`. Note that the comma after the `data` object is required to separate it from the `success` option.
44,830,396
I am parsing a websocket message and due to a bug in a specific socket.io version (unfortunately I don't have control over the server side), some of the payload is double encoded as utf-8: The correct value would be **Wrocławskiej** (note the l letter which is LATIN SMALL LETTER L WITH STROKE) but I actually get back **WrocÅawskiej**. I already tried to decode/encode it again with Java ``` String str = new String(wrongEncoded.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8); ``` Unfortunately the string stays the same. Any idea on how to do a double decoding in Java? I saw a Python version where they convert it to `raw_unicode` first and then parse it again, but I don't know if this works or if there is a similar solution for Java. I already read through a couple of posts on that topic, but none helped. Edit: To clarify, in Fiddler I receive the following byte sequence for the above mentioned word: ``` WrocÃÂawskiej byte[] arrOutput = { 0x57, 0x72, 0x6F, 0x63, 0xC3, 0x85, 0xC2, 0x82, 0x61, 0x77, 0x73, 0x6B, 0x69, 0x65, 0x6A }; ```
2017/06/29
[ "https://Stackoverflow.com/questions/44830396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3450689/" ]
Your text was encoded to UTF-8; those bytes were then interpreted as ISO-8859-1 and re-encoded to UTF-8. `Wrocławskiej` is unicode: 0057 0072 006f 0063 **0142** 0061 0077 0073 006b 0069 0065 006a Encoded to UTF-8 it is: 57 72 6f 63 **c5 82** 61 77 73 6b 69 65 6a In [ISO-8859-1](https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Codepage_layout), `c5` is `Å` and `82` is *undefined*. As ISO-8859-1, those bytes are: `WrocÅawskiej` Encoded to UTF-8 it is: 57 72 6f 63 **c3 85 c2 82** 61 77 73 6b 69 65 6a Those are likely the bytes you are receiving. So, to undo that, you need: ``` String s = new String(bytes, StandardCharsets.UTF_8); // fix "double encoding" s = new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); ```
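The byte-level walkthrough in this answer can be verified end to end. Since the question mentions a Python fix, here is a minimal Python sketch (an editorial illustration, not part of the original answer) that reproduces the double encoding and undoes it. Note that Python's `iso-8859-1` codec maps all 256 byte values, so the "undefined" `82` byte still round-trips:

```python
# Reproduce the mojibake described above, then undo it.
s = "Wrocławskiej"

utf8_once = s.encode("utf-8")
assert utf8_once == b"Wroc\xc5\x82awskiej"          # c5 82 = ł

# Misread the bytes as ISO-8859-1, then re-encode to UTF-8:
mojibake = utf8_once.decode("iso-8859-1").encode("utf-8")
assert mojibake == b"Wroc\xc3\x85\xc2\x82awskiej"   # c3 85 c2 82, as received

# The fix mirrors the Java snippet:
# decode UTF-8, encode ISO-8859-1, decode UTF-8 again.
fixed = mojibake.decode("utf-8").encode("iso-8859-1").decode("utf-8")
assert fixed == "Wrocławskiej"
```

The `mojibake` bytes match the `arrOutput` sequence shown in the question's edit.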
Well, double encoding may not be the only issue to deal with. Here is a solution that accounts for more than one cause: ``` String myString = "heartbroken ð"; myString = new String(myString.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); String cleanedText = StringEscapeUtils.unescapeJava(myString); byte[] bytes = cleanedText.getBytes(StandardCharsets.UTF_8); String text = new String(bytes, StandardCharsets.UTF_8); Charset charset = Charset.forName("UTF-8"); CharsetDecoder decoder = charset.newDecoder(); decoder.onMalformedInput(CodingErrorAction.IGNORE); decoder.onUnmappableCharacter(CodingErrorAction.IGNORE); CharsetEncoder encoder = charset.newEncoder(); encoder.onMalformedInput(CodingErrorAction.IGNORE); encoder.onUnmappableCharacter(CodingErrorAction.IGNORE); try { // Encode the cleaned text, ignoring unmappable characters. ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(text)); // Decode it back into a CharBuffer. CharBuffer cbuf = decoder.decode(bbuf); String str = cbuf.toString(); } catch (CharacterCodingException e) { logger.error("Error Message if you want to"); } ```
44,830,396
I am parsing a websocket message and due to a bug in a specific socket.io version (unfortunately I don't have control over the server side), some of the payload is double encoded as utf-8: The correct value would be **Wrocławskiej** (note the l letter which is LATIN SMALL LETTER L WITH STROKE) but I actually get back **WrocÅawskiej**. I already tried to decode/encode it again with Java ``` String str = new String(wrongEncoded.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8); ``` Unfortunately the string stays the same. Any idea on how to do a double decoding in Java? I saw a Python version where they convert it to `raw_unicode` first and then parse it again, but I don't know if this works or if there is a similar solution for Java. I already read through a couple of posts on that topic, but none helped. Edit: To clarify, in Fiddler I receive the following byte sequence for the above mentioned word: ``` WrocÃÂawskiej byte[] arrOutput = { 0x57, 0x72, 0x6F, 0x63, 0xC3, 0x85, 0xC2, 0x82, 0x61, 0x77, 0x73, 0x6B, 0x69, 0x65, 0x6A }; ```
2017/06/29
[ "https://Stackoverflow.com/questions/44830396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3450689/" ]
Your text was encoded to UTF-8; those bytes were then interpreted as ISO-8859-1 and re-encoded to UTF-8. `Wrocławskiej` is unicode: 0057 0072 006f 0063 **0142** 0061 0077 0073 006b 0069 0065 006a Encoded to UTF-8 it is: 57 72 6f 63 **c5 82** 61 77 73 6b 69 65 6a In [ISO-8859-1](https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Codepage_layout), `c5` is `Å` and `82` is *undefined*. As ISO-8859-1, those bytes are: `WrocÅawskiej` Encoded to UTF-8 it is: 57 72 6f 63 **c3 85 c2 82** 61 77 73 6b 69 65 6a Those are likely the bytes you are receiving. So, to undo that, you need: ``` String s = new String(bytes, StandardCharsets.UTF_8); // fix "double encoding" s = new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); ```
I had the problem that sometimes I received double encoded strings and sometimes proper encoded strings. The following method fixDoubleUTF8Encoding will handle both properly: ```java public static void main(String[] args) { String input = "werewräüèö"; String result = fixDoubleUTF8Encoding(input); System.out.println(result); // werewräüèö input = "üäöé"; result = fixDoubleUTF8Encoding(input); System.out.println(result); // üäöé } private static String fixDoubleUTF8Encoding(String s) { // interpret the string as UTF_8 byte[] bytes = s.getBytes(StandardCharsets.UTF_8); // now check if the bytes contain 0x83 0xC2, meaning double encoded garbage if(isDoubleEncoded(bytes)) { // if so, lets fix the string by assuming it is ASCII extended and recode it once s = new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); } return s; } private static boolean isDoubleEncoded(byte[] bytes) { for (int i = 0; i < bytes.length; i++) { if(bytes[i] == -125 && i+1 < bytes.length && bytes[i+1] == -62) { return true; } } return false; } ```
44,830,396
I am parsing a websocket message and due to a bug in a specific socket.io version (unfortunately I don't have control over the server side), some of the payload is double encoded as utf-8: The correct value would be **Wrocławskiej** (note the l letter which is LATIN SMALL LETTER L WITH STROKE) but I actually get back **WrocÅawskiej**. I already tried to decode/encode it again with Java ``` String str = new String(wrongEncoded.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8); ``` Unfortunately the string stays the same. Any idea on how to do a double decoding in Java? I saw a Python version where they convert it to `raw_unicode` first and then parse it again, but I don't know if this works or if there is a similar solution for Java. I already read through a couple of posts on that topic, but none helped. Edit: To clarify, in Fiddler I receive the following byte sequence for the above mentioned word: ``` WrocÃÂawskiej byte[] arrOutput = { 0x57, 0x72, 0x6F, 0x63, 0xC3, 0x85, 0xC2, 0x82, 0x61, 0x77, 0x73, 0x6B, 0x69, 0x65, 0x6A }; ```
2017/06/29
[ "https://Stackoverflow.com/questions/44830396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3450689/" ]
I had the problem that sometimes I received double encoded strings and sometimes proper encoded strings. The following method fixDoubleUTF8Encoding will handle both properly: ```java public static void main(String[] args) { String input = "werewräüèö"; String result = fixDoubleUTF8Encoding(input); System.out.println(result); // werewräüèö input = "üäöé"; result = fixDoubleUTF8Encoding(input); System.out.println(result); // üäöé } private static String fixDoubleUTF8Encoding(String s) { // interpret the string as UTF_8 byte[] bytes = s.getBytes(StandardCharsets.UTF_8); // now check if the bytes contain 0x83 0xC2, meaning double encoded garbage if(isDoubleEncoded(bytes)) { // if so, lets fix the string by assuming it is ASCII extended and recode it once s = new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); } return s; } private static boolean isDoubleEncoded(byte[] bytes) { for (int i = 0; i < bytes.length; i++) { if(bytes[i] == -125 && i+1 < bytes.length && bytes[i+1] == -62) { return true; } } return false; } ```
Well, double encoding may not be the only issue to deal with. Here is a solution that accounts for more than one cause: ``` String myString = "heartbroken ð"; myString = new String(myString.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8); String cleanedText = StringEscapeUtils.unescapeJava(myString); byte[] bytes = cleanedText.getBytes(StandardCharsets.UTF_8); String text = new String(bytes, StandardCharsets.UTF_8); Charset charset = Charset.forName("UTF-8"); CharsetDecoder decoder = charset.newDecoder(); decoder.onMalformedInput(CodingErrorAction.IGNORE); decoder.onUnmappableCharacter(CodingErrorAction.IGNORE); CharsetEncoder encoder = charset.newEncoder(); encoder.onMalformedInput(CodingErrorAction.IGNORE); encoder.onUnmappableCharacter(CodingErrorAction.IGNORE); try { // Encode the cleaned text, ignoring unmappable characters. ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(text)); // Decode it back into a CharBuffer. CharBuffer cbuf = decoder.decode(bbuf); String str = cbuf.toString(); } catch (CharacterCodingException e) { logger.error("Error Message if you want to"); } ```
63,550,237
Here `z` is a list of dicts. ``` z = [{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]}, {'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},] ``` I want to append all dictionary values of 'loss' into a separate list, and similarly for 'val\_loss', 'accuracy', and 'val\_accuracy'. For that, I tried to write the Python code below: ``` a = b = c = d = [] for lis in z: a.append(lis['loss']) b.append(lis['val_loss']) c.append(lis['accuracy']) d.append(lis['val_accuracy']) ``` But when I try to print the length of the list with `print(len(a))`, the output is 40 instead of 10. Why? I just want to append all the `'loss'` values into `a`.
2020/08/23
[ "https://Stackoverflow.com/questions/63550237", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3168982/" ]
The notation `a = b = c = d = []` creates one new list and binds all four variables to that same object, so you have 4 variables pointing to one list, and you end up putting 4\*10 items into it. Do: ``` a, b, c, d = [], [], [], [] ``` Using `map` and `itemgetter` you can do ``` from operator import itemgetter loss = list(map(itemgetter("loss"), z)) val_loss = list(map(itemgetter("val_loss"), z)) accuracy = list(map(itemgetter("accuracy"), z)) val_accuracy = list(map(itemgetter("val_accuracy"), z)) ```
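The aliasing this answer describes is easy to verify with `is`; the following is a small editorial sketch, not part of the original answer:

```python
# a = b = c = d = [] evaluates [] once and binds four names to that one object.
a = b = c = d = []
assert a is b and b is c and c is d     # four names, one list

a.append("loss")
assert d == ["loss"]                    # the append is visible through every alias

# Four separate literals create four independent lists:
a, b, c, d = [], [], [], []
assert a is not b
a.append("loss")
assert b == []                          # b is untouched
```

This is exactly why the question's loop produced 40 items in one shared list instead of 10 items in each of four lists.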
In python, when you create a variable, you are just creating a pointer to an object, and not a copy of the object. In this case, when you are initializing your list with `a = b = c = d = []`, you are actually making a, b, c, and d point to the same list instead of creating four different lists. Take the following example: ``` In [1]: a = b = c = d = [] In [2]: a.append("a") In [3]: b Out[3]: ['a'] ``` What you actually need to do is initialize four different lists: ``` a = [] b = [] c = [] d = [] for lis in z: a.append(lis['loss']) b.append(lis['val_loss']) c.append(lis['accuracy']) d.append(lis['val_accuracy']) ```
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Another alternative, with which I have no experience, is [ProMesh](http://www.codeplex.com/promesh). Personally, I'm moving to ASP.NET MVC.
One alternative that seems interesting is [MonoRail](http://www.castleproject.org/monorail/index.html) although I haven't tested it out fully.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Personally I have tried both ASP.NET MVC and MonoRail by CastleProject. Although I really enjoy the other CastleProject libraries, I have found that I enjoy the ASP.NET MVC implementation model better than the CastleProject MonoRail model. Now that ASP.NET MVC has announced that jQuery will be included with the releases, I am truly excited. Ultimately, I think it depends on what other libraries you use. If you use NHibernate, ActiveRecord, and Castle Windsor then you will probably enjoy the MonoRail libraries. If you don't use any of those libraries, or prefer the Microsoft Enterprise Libraries (currently the company standard where I work), then you will probably find that ASP.NET MVC fits your needs better. With the focus on ASP.NET MVC coming from Scott Guthrie himself, I doubt it will be going away any time soon. In fact, the more people using it and singing its praises, the more likely it is to become the de facto standard.
One alternative that seems interesting is [MonoRail](http://www.castleproject.org/monorail/index.html) although I haven't tested it out fully.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
We have used [MonoRail](http://www.castleproject.org/projects/monorail) RC2 for our small business's online store for the past 18 months. It replaced a 7-year-old disaster of classic ASP pages. MonoRail RC2 has worked well for us, serving an average of ~14,000 page requests per day. It enabled me to develop the site very quickly, was free, and holds up well. For that, I am thankful to the MonoRail team. I just used the MonoRail bit. I chose iBATIS.NET over ActiveRecord as I had to write creative SQL to maintain compatibility with a 7-year-old turd of a database. So I can't speak for some of the other Castle libraries. Some advantages of MonoRail include the following: * It's pretty easy to get in there and modify things. For example, the default routing implementation didn't preserve the query string when rerouting (I needed this to preserve backwards compatibility with the old URL formats used by the now defunct classic ASP site for SEO reasons) and it didn't support issuing HTTP 301 Permanent Redirect headers for that scenario. So I implemented whatever the MonoRail interface for that was, plugged it into my config file, and off I went. * It's still ASP.NET, so you can still use Forms Authentication, the totally awesome [HTTP/HTTPS Switcher at Codeproject](http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx), and caching. * Relatively few surprises. After working with it for over a year, I haven't had too many days where I've cursed at MonoRail. That's a pretty good litmus test. Some of the helper classes (FormHelper) can behave a little strangely, the wizard framework is outright bizarre, and parameter binding can sometimes throw you for a loop, but it doesn't happen often. * Choice of View Engines (templates). I put this here because most people seem to think choice here is a good thing although I usually think that it is not. MonoRail, however, is not without its problems: * Lack of direction in development. 
The number of changes between RC2 and RC3 was beyond ridiculous; a lot of protected virtual methods disappeared, a lot of helpers changed (which is a big deal when your view engine isn't statically typed), even the mechanism for unit testing controllers and views changed. For this reason, we're probably just going to stay on RC2 forever. Now that ASP.NET MVC is out, it's unclear how healthy the community behind MonoRail will stay (though ayende and hammett are as enthusiastic and active as ever). * NVelocity, the "de facto" view engine for MonoRail (at least at the time that we started development), is a promising templating language with an unhealthy implementation and maintenance outlook. (Does it work? well enough. But being a CTRL+C CTRL+V port from the Java version, don't read the source for that library because your eyes will bleed.) * NVelocity and RC2 shipped with an extremely severe threading bug where multiple users accessing the site at the same time could get served pages intended for the other. It's fixed in the latest release (which, due to the release nature of the Castle projects, is very difficult to upgrade to), and we managed to work around it. But it was a very disturbing and unexpected issue to encounter, one that would be highly unlikely to be encountered on a Microsoft framework. Caveat emptor. MonoRail provided an excellent opportunity for us in June 2007 by giving us a way to migrate an existing site on the Microsoft stack to the .NET platform in a way that avoided WebForms (which is great for intranet sites, but not so great when you need fine-grained control of your HTML output on a public-facing Web site, in my opinion). (Fine, the real reason is I just despise the WebForms postback model.) ASP.NET MVC was not even a gleam in Microsoft's eye at that point in time. 
Now that ASP.NET MVC does exist, however, and given that Microsoft is positioning it as an alternative to WebForms, I know that I personally will strongly consider it for any future project. MonoRail is a great project, it served us well, and I'm grateful to the open source community for it, but I think of it fondly as a heavily used, worn tool that is retired to a lower drawer in my workbench. Without it, ASP.NET MVC might not exist today.
One alternative that seems interesting is [MonoRail](http://www.castleproject.org/monorail/index.html) although I haven't tested it out fully.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Personally I have tried both ASP.NET MVC and MonoRail by CastleProject. Although I really enjoy the other CastleProject libraries, I have found that I enjoy the ASP.NET MVC implementation model better than the CastleProject MonoRail model. Now that ASP.NET MVC has announced that jQuery will be included with the releases, I am truly excited. Ultimately, I think it depends on what other libraries you use. If you use NHibernate, ActiveRecord, and Castle Windsor then you will probably enjoy the MonoRail libraries. If you don't use any of those libraries, or prefer the Microsoft Enterprise Libraries (currently the company standard where I work), then you will probably find that ASP.NET MVC fits your needs better. With the focus on ASP.NET MVC coming from Scott Guthrie himself, I doubt it will be going away any time soon. In fact, the more people using it and singing its praises, the more likely it is to become the de facto standard.
Another alternative, with which I have no experience, is [ProMesh](http://www.codeplex.com/promesh). Personally, I'm moving to ASP.NET MVC.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
We have used [MonoRail](http://www.castleproject.org/projects/monorail) RC2 for our small business's online store for the past 18 months. It replaced a 7-year-old disaster of classic ASP pages. MonoRail RC2 has worked well for us, serving an average of ~14,000 page requests per day. It enabled me to develop the site very quickly, was free, and holds up well. For that, I am thankful to the MonoRail team. I just used the MonoRail bit. I chose iBATIS.NET over ActiveRecord as I had to write creative SQL to maintain compatibility with a 7-year-old turd of a database. So I can't speak for some of the other Castle libraries. Some advantages of MonoRail include the following: * It's pretty easy to get in there and modify things. For example, the default routing implementation didn't preserve the query string when rerouting (I needed this to preserve backwards compatibility with the old URL formats used by the now defunct classic ASP site for SEO reasons) and it didn't support issuing HTTP 301 Permanent Redirect headers for that scenario. So I implemented whatever the MonoRail interface for that was, plugged it into my config file, and off I went. * It's still ASP.NET, so you can still use Forms Authentication, the totally awesome [HTTP/HTTPS Switcher at Codeproject](http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx), and caching. * Relatively few surprises. After working with it for over a year, I haven't had too many days where I've cursed at MonoRail. That's a pretty good litmus test. Some of the helper classes (FormHelper) can behave a little strangely, the wizard framework is outright bizarre, and parameter binding can sometimes throw you for a loop, but it doesn't happen often. * Choice of View Engines (templates). I put this here because most people seem to think choice here is a good thing although I usually think that it is not. MonoRail, however, is not without its problems: * Lack of direction in development. 
The number of changes between RC2 and RC3 was beyond ridiculous; a lot of protected virtual methods disappeared, a lot of helpers changed (which is a big deal when your view engine isn't statically typed), even the mechanism for unit testing controllers and views changed. For this reason, we're probably just going to stay on RC2 forever. Now that ASP.NET MVC is out, it's unclear how healthy the community behind MonoRail will stay (though ayende and hammett are as enthusiastic and active as ever). * NVelocity, the "de facto" view engine for MonoRail (at least at the time that we started development), is a promising templating language with an unhealthy implementation and maintenance outlook. (Does it work? well enough. But being a CTRL+C CTRL+V port from the Java version, don't read the source for that library because your eyes will bleed.) * NVelocity and RC2 shipped with an extremely severe threading bug where multiple users accessing the site at the same time could get served pages intended for the other. It's fixed in the latest release (which, due to the release nature of the Castle projects, is very difficult to upgrade to), and we managed to work around it. But it was a very disturbing and unexpected issue to encounter, one that would be highly unlikely to be encountered on a Microsoft framework. Caveat emptor. MonoRail provided an excellent opportunity for us in June 2007 by giving us a way to migrate an existing site on the Microsoft stack to the .NET platform in a way that avoided WebForms (which is great for intranet sites, but not so great when you need fine-grained control of your HTML output on a public-facing Web site, in my opinion). (Fine, the real reason is I just despise the WebForms postback model.) ASP.NET MVC was not even a gleam in Microsoft's eye at that point in time. 
Now that ASP.NET MVC does exist, however, and given that Microsoft is positioning it as an alternative to WebForms, I know that I personally will strongly consider it for any future project. MonoRail is a great project, it served us well, and I'm grateful to the open source community for it, but I think of it fondly as a heavily used, worn tool that is retired to a lower drawer in my workbench. Without it, ASP.NET MVC might not exist today.
Another alternative, with which I have no experience, is [ProMesh](http://www.codeplex.com/promesh). Personally, I'm moving to ASP.NET MVC.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Another alternative, with which I have no experience, is [ProMesh](http://www.codeplex.com/promesh). Personally, I'm moving to ASP.NET MVC.
ASP.NET MVC should start to become more accepted as being mature in rapid fashion. Now that it is beta, and supposedly nearly feature complete, the rate at which people adopt it will continue to grow, and likely more sharply. With the RTM/RTW release promised to be in the near future, now is the best time to start to adopt it so that you can hit the ground running with it. If there are specific shortcomings that you see in ASP.NET MVC, you should definitely let Microsoft know about it. [Scott Guthrie](http://weblogs.asp.net/scottgu/about.aspx) is very receptive to feedback and the [MVC Contrib](http://www.codeplex.com/MVCContrib) project is both open to suggestions and has a great collection of enhancements available through their library.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
We have used [MonoRail](http://www.castleproject.org/projects/monorail) RC2 for our small business's online store for the past 18 months. It replaced a 7 year old disaster of classic ASP pages. MonoRail RC2 has worked well for us, serving an average of ~14,000 page requests per day. It enabled me to develop the site very quickly, was free, and holds up well. For that, I am thankful to the MonoRail team. I just used the MonoRail bit. I chose iBATIS.NET over ActiveRecord as I had to write creative SQL to maintain compatibility with a 7-year-old turd of a database. So I can't speak for some of the other Castle libraries. Some advantages of MonoRail include the following: * It's pretty easy to get in there and modify things. For example, the default routing implementation didn't preserve the query string when rerouting (I needed this to preserve backwards compatibility with the old URL formats used by the now defunct classic ASP site for SEO reasons) and it didn't support issuing HTTP 301 Permanent Redirect headers for that scenario. So I implemented whatever the MonoRail interface for that was, plugged it into my config file, and off I went. * It's still ASP.NET, so you can still use Forms Authentication, the totally awesome [HTTP/HTTPS Switcher at Codeproject](http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx), and caching. * Relatively few surprises. After working with it for over a year, I haven't had too many days where I've cursed at MonoRail. That's a pretty good litmus test. Some of the helper classes (FormHelper) can behave a little strangely, the wizard framework is outright bizarre, and parameter binding can sometimes throw you for a loop, but it doesn't happen often. * Choice of View Engines (templates). I put this here because most people seem to think choice here is a good thing although I usually think that it is not. MonoRail, however, is not without its problems: * Lack of direction in development. 
The number of changes between RC2 and RC3 was beyond ridiculous; a lot of protected virtual methods disappeared, a lot of helpers changed (which is a big deal when your view engine isn't statically typed), even the mechanism for unit testing controllers and views changed. For this reason, we're probably just going to stay on RC2 forever. Now that ASP.NET MVC is out, it's unclear how healthy the community behind MonoRail will stay (though ayende and hammett are as enthusiastic and active as ever). * NVelocity, the "de facto" view engine for MonoRail (at least at the time that we started development), is a promising templating language with an unhealthy implementation and maintenance outlook. (Does it work? Well enough. But being a CTRL+C CTRL+V port from the Java version, don't read the source for that library because your eyes will bleed.) * NVelocity and RC2 shipped with an extremely severe threading bug where multiple users accessing the site at the same time could get served pages intended for the other user. It's fixed in the latest release (which, due to the release nature of the Castle projects, is very difficult to upgrade to), and we managed to work around it. But it was a very disturbing and unexpected issue to encounter, one that would be highly unlikely to be encountered on a Microsoft framework. Caveat emptor. MonoRail provided an excellent opportunity for us in June 2007 by giving us a way to migrate an existing site on the Microsoft stack to the .NET platform in a way that avoided WebForms (which is great for intranet sites, but not so great when you need fine-grained control of your HTML output on a public-facing Web site, in my opinion). (Fine, the real reason is I just despise the WebForms postback model.) ASP.NET MVC was not even a gleam in Microsoft's eye at that point in time. 
Now that ASP.NET MVC does exist, however, and given that Microsoft is positioning it as an alternative to WebForms, I know that I personally will strongly consider it for any future project. MonoRail is a great project, it served us well, and I'm grateful to the open source community for it, but I think of it fondly as a heavily used, worn tool that is retired to a lower drawer in my workbench. Without it, ASP.NET MVC might not exist today.
Personally I have tried both ASP.NET MVC and MonoRail by CastleProject. Although I really enjoy the other CastleProject libraries, I have found that I enjoy the ASP.NET MVC implementation model better than the CastleProject MonoRail model. Now that the ASP.NET MVC team has announced that jQuery will be included with the releases, I am truly excited. Ultimately, I think it depends on what other libraries you use. If you use NHibernate, ActiveRecord, and Castle Windsor then you will probably enjoy the MonoRail libraries. If you don't use any of those libraries, or prefer the Microsoft Enterprise Libraries (currently the company standard where I work) then you will probably find that ASP.NET MVC fits your needs better. With the focus on ASP.NET MVC coming from Scott Guthrie himself I doubt it will be going away any time soon. In fact, the more people using it and singing its praises the more likely it is to become the de facto standard.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
Personally I have tried both ASP.NET MVC and MonoRail by CastleProject. Although I really enjoy the other CastleProject libraries, I have found that I enjoy the ASP.NET MVC implementation model better than the CastleProject MonoRail model. Now that the ASP.NET MVC team has announced that jQuery will be included with the releases, I am truly excited. Ultimately, I think it depends on what other libraries you use. If you use NHibernate, ActiveRecord, and Castle Windsor then you will probably enjoy the MonoRail libraries. If you don't use any of those libraries, or prefer the Microsoft Enterprise Libraries (currently the company standard where I work) then you will probably find that ASP.NET MVC fits your needs better. With the focus on ASP.NET MVC coming from Scott Guthrie himself I doubt it will be going away any time soon. In fact, the more people using it and singing its praises the more likely it is to become the de facto standard.
ASP.NET MVC should start to become more accepted as being mature in rapid fashion. Now that it is beta, and supposedly nearly feature complete, the rate at which people adopt it will continue to grow, and likely more sharply. With the RTM/RTW release promised to be in the near future, now is the best time to start to adopt it so that you can hit the ground running with it. If there are specific shortcomings that you see in ASP.NET MVC, you should definitely let Microsoft know about it. [Scott Guthrie](http://weblogs.asp.net/scottgu/about.aspx) is very receptive to feedback and the [MVC Contrib](http://www.codeplex.com/MVCContrib) project is both open to suggestions and has a great collection of enhancements available through their library.
300,925
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives? The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
2008/11/19
[ "https://Stackoverflow.com/questions/300925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ]
We have used [MonoRail](http://www.castleproject.org/projects/monorail) RC2 for our small business's online store for the past 18 months. It replaced a 7 year old disaster of classic ASP pages. MonoRail RC2 has worked well for us, serving an average of ~14,000 page requests per day. It enabled me to develop the site very quickly, was free, and holds up well. For that, I am thankful to the MonoRail team. I just used the MonoRail bit. I chose iBATIS.NET over ActiveRecord as I had to write creative SQL to maintain compatibility with a 7-year-old turd of a database. So I can't speak for some of the other Castle libraries. Some advantages of MonoRail include the following: * It's pretty easy to get in there and modify things. For example, the default routing implementation didn't preserve the query string when rerouting (I needed this to preserve backwards compatibility with the old URL formats used by the now defunct classic ASP site for SEO reasons) and it didn't support issuing HTTP 301 Permanent Redirect headers for that scenario. So I implemented whatever the MonoRail interface for that was, plugged it into my config file, and off I went. * It's still ASP.NET, so you can still use Forms Authentication, the totally awesome [HTTP/HTTPS Switcher at Codeproject](http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx), and caching. * Relatively few surprises. After working with it for over a year, I haven't had too many days where I've cursed at MonoRail. That's a pretty good litmus test. Some of the helper classes (FormHelper) can behave a little strangely, the wizard framework is outright bizarre, and parameter binding can sometimes throw you for a loop, but it doesn't happen often. * Choice of View Engines (templates). I put this here because most people seem to think choice here is a good thing although I usually think that it is not. MonoRail, however, is not without its problems: * Lack of direction in development. 
The number of changes between RC2 and RC3 was beyond ridiculous; a lot of protected virtual methods disappeared, a lot of helpers changed (which is a big deal when your view engine isn't statically typed), even the mechanism for unit testing controllers and views changed. For this reason, we're probably just going to stay on RC2 forever. Now that ASP.NET MVC is out, it's unclear how healthy the community behind MonoRail will stay (though ayende and hammett are as enthusiastic and active as ever). * NVelocity, the "de facto" view engine for MonoRail (at least at the time that we started development), is a promising templating language with an unhealthy implementation and maintenance outlook. (Does it work? Well enough. But being a CTRL+C CTRL+V port from the Java version, don't read the source for that library because your eyes will bleed.) * NVelocity and RC2 shipped with an extremely severe threading bug where multiple users accessing the site at the same time could get served pages intended for the other user. It's fixed in the latest release (which, due to the release nature of the Castle projects, is very difficult to upgrade to), and we managed to work around it. But it was a very disturbing and unexpected issue to encounter, one that would be highly unlikely to be encountered on a Microsoft framework. Caveat emptor. MonoRail provided an excellent opportunity for us in June 2007 by giving us a way to migrate an existing site on the Microsoft stack to the .NET platform in a way that avoided WebForms (which is great for intranet sites, but not so great when you need fine-grained control of your HTML output on a public-facing Web site, in my opinion). (Fine, the real reason is I just despise the WebForms postback model.) ASP.NET MVC was not even a gleam in Microsoft's eye at that point in time. 
Now that ASP.NET MVC does exist, however, and given that Microsoft is positioning it as an alternative to WebForms, I know that I personally will strongly consider it for any future project. MonoRail is a great project, it served us well, and I'm grateful to the open source community for it, but I think of it fondly as a heavily used, worn tool that is retired to a lower drawer in my workbench. Without it, ASP.NET MVC might not exist today.
ASP.NET MVC should start to become more accepted as being mature in rapid fashion. Now that it is beta, and supposedly nearly feature complete, the rate at which people adopt it will continue to grow, and likely more sharply. With the RTM/RTW release promised to be in the near future, now is the best time to start to adopt it so that you can hit the ground running with it. If there are specific shortcomings that you see in ASP.NET MVC, you should definitely let Microsoft know about it. [Scott Guthrie](http://weblogs.asp.net/scottgu/about.aspx) is very receptive to feedback and the [MVC Contrib](http://www.codeplex.com/MVCContrib) project is both open to suggestions and has a great collection of enhancements available through their library.
44,183,891
Hello community / developers, I am currently trying to install SCIP with Python and found that there is Windows support and a pip installer, based on <https://github.com/SCIP-Interfaces/PySCIPOpt/blob/master/INSTALL.md>. Nevertheless, I run into a "Cannot open include file" problem. Below is a list of the things I performed to get to this step. 1. Download Python Anaconda 2.7 64 bit 2. Install with all checkboxes as they are 3. Download PyCharm Community edition 4. Click 64 bit desktop link, and associate with .py checkboxes 5. Open CMD > write: easy\_install -U pip 6. Download Visual C++ Compiler for Python 2.7 7. Set up folder structure and downloaded header files 8. CMD > pip install pyscipopt leads to the error: C:\Users\UserName\Downloads\SCIPOPTDIR\include\scip/def.h(32) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory error: command 'C:\Users\UserName\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status 2 My environment variables and folder directory can be found here: [http://imgur.com/a/mJRva](https://imgur.com/a/mJRva) Help is very much appreciated, Kind regards
2017/05/25
[ "https://Stackoverflow.com/questions/44183891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7646564/" ]
This looks like a `UNION` of two `INNER JOIN`s. One gets the information from `stock` and has `NULL` values in the columns from `sales_item`, the other gets information from `sales_item` and has `NULL` for the columns from `stock`. ``` SELECT i.item_id, i.name, s.stock_id, s.quantity, NULL AS sales_item_id, NULL AS sales_id, NULL AS price FROM items AS i JOIN stocks AS s ON s.item_id = i.item_id UNION SELECT i.item_id, i.name, NULL, NULL, si.sales_item_id, si.sales_id, si.price FROM items AS i JOIN sales_item AS si ON i.item_id = si.item_id ```
All the nulls in your example show that you are trying to join together two completely different result sets: join the items with the stock and get all that data, then join the items with the sales and return all that data. The trickiness is that you have two different kinds of results in your desired join table. The only thing both sets have in common is an item id and name. A union (which is what you are going to have to use) requires both sets to have the same number of columns. However, your sets have a different number of columns, and very little overlap. As a result you have to explicitly "select" some non-existent columns as placeholders. So your final query should look something like this. Working through the parts, you have two join statements: ``` SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity FROM items JOIN stocks ON stocks.item_id=items.id ``` That gets your items + stock information. Then another join to get sales data: ``` SELECT items.item_id, items.name, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id ``` These are your two queries that need to be joined. Of course, you could write these a bit shorter by leaving out some table references in the select and using a different join syntax, but you get the idea. The exact type of join you want to do here will depend on your underlying problem, so I'm just guessing at a plain JOIN (which is an alias for INNER JOIN). 
Now you need to put them together with a UNION, and when you do that you have to add in some more columns as placeholders: ``` SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity, null AS sales_item_id, null AS sales_id, null AS price FROM items JOIN stocks ON stocks.item_id=items.id UNION ALL SELECT items.item_id, items.name, null AS stock_id, null AS quantity, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id ``` And then (if needed) you can throw in a sort: ``` SELECT * FROM ( SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity, null AS sales_item_id, null AS sales_id, null AS price FROM items JOIN stocks ON stocks.item_id=items.id UNION ALL SELECT items.item_id, items.name, null AS stock_id, null AS quantity, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id ) U order by item_id ASC ```
17,053,103
I saw [this question](https://stackoverflow.com/questions/903557/pythons-with-statement-versus-with-as), and I understand when you would want to use `with foo() as bar:`, but I don't understand when you would just want to do: ``` bar = foo() with bar: .... ``` Doesn't that just remove the tear-down benefits of using `with ... as`, or am I misunderstanding what is happening? Why would someone want to use just `with`?
2013/06/11
[ "https://Stackoverflow.com/questions/17053103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1388603/" ]
For example when you want to use `Lock()`: ``` from threading import Lock myLock = Lock() with myLock: ... ``` You don't really need the `Lock()` object. You just need to know that it is on.
Using `with` without `as` still gets you the exact same teardown; it just doesn't get you a new local object representing the context. The reason you want this is that sometimes the context itself isn't directly useful—in other words, you're only using it for the side effects of its context enter and exit. For example, with a `Lock` object, you have to already have the object for the `with` block to be useful—so, even if you need it within the block, there's no reason to rebind it to another name. The same is true when you use `contextlib.closing` on an object that isn't a context manager—you already have the object itself, so who cares what `closing` yields? With something like `sh.sudo`, there isn't even an object that you'd have any use for, period. There are also cases where the point of the context manager is just to stash and auto-restore some state. For example, you might want to write a `termios.tcsetattr`-stasher, so you can call `tty.setraw()` inside the block. You don't care what the stash object looks like, all you care about is that it gets auto-restored. `decimal.localcontext` can work in any of these ways—you can pass it an object you already have (and therefore don't need a new name for), or pass it an unnamed temporary object, or have it just stash the current context to be auto-restored. But in any of those cases, it's the enter/exit side effects you're after, so there's nothing that needs binding with `as`. There are some hybrid cases where sometimes you want the context, sometimes you don't. For example, if you just want a database transaction to auto-commit, you might write `with autocommit(db.begin()):`, because you aren't going to access it inside the block. But if you want it to auto-rollback unless you explicitly commit it, you'd probably write `with autorollback(db.begin()) as trans:`, so you can `trans.commit()` inside the block. (Of course often, you'd actually want a transaction that commits on normal exit and rolls back on exception, as in [PEP 343](http://www.python.org/dev/peps/pep-0343/)'s `transaction` example. 
But I couldn't think of a better hybrid example here…) [PEP 343](http://www.python.org/dev/peps/pep-0343/) and its predecessors (PEP 310, PEP 340, and other things linked from 343) explain all of this to some extent, but it's understandable that you wouldn't pick that up on a casual read—there's so much information that isn't relevant, and it mainly just explains the mile-high overview and then the implementation-level details, skipping over everything in between.
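The lock-versus-stream contrast described above can be sketched concretely. This is a minimal, self-contained example (using `io.StringIO` as a stand-in for a real file so nothing touches disk): the lock is used purely for its acquire/release side effects, so no `as` is needed, while the stream context yields an object worth binding.

```python
import io
import threading

lock = threading.Lock()
log = []

# "with" without "as": the lock matters only for its enter/exit side
# effects (acquire on entry, release on exit); nothing new to name.
with lock:
    log.append("inside critical section")

# "with ... as": the context entry hands back something useful (the
# stream), so binding it with "as" is the natural choice.
buf = io.StringIO("hello")
with buf as f:
    log.append(f.read())

print(log)
```

Either way the teardown is guaranteed: the lock is released and the stream is closed when the block exits.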
17,053,103
I saw [this question](https://stackoverflow.com/questions/903557/pythons-with-statement-versus-with-as), and I understand when you would want to use `with foo() as bar:`, but I don't understand when you would just want to do: ``` bar = foo() with bar: .... ``` Doesn't that just remove the tear-down benefits of using `with ... as`, or am I misunderstanding what is happening? Why would someone want to use just `with`?
2013/06/11
[ "https://Stackoverflow.com/questions/17053103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1388603/" ]
To expand a bit on @freakish's answer, `with` guarantees entry into and then exit from a "context". What the heck is a context? Well, it's "whatever the thing you're with-ing makes it". Some obvious ones are: * locks: you take a lock, manipulate some data, and release the lock. * external files/streams: you open a file, read or write it, and close it. * database records: you find a record (which generally also locks it), use some fields and maybe change them, and release the record (which also unlocks it). Less obvious ones might even include certain kinds of exception trapping: you might catch divide by zero, do some arithmetic, and then stop catching it. Of course that's built into the Python syntax: `try` ... `except` as a block! And, in fact, `with` is simply a special case of Python's try/except/finally mechanisms (technically, `try/finally` wrapped around another `try` block; see comments). The `as` part of a `with` block is useful when the context-entry provides some value(s) that you want to use inside the block. In the case of the file or database record, it's obvious that you need the newly-opened stream, or the just-obtained record. In the case of catching an exception, or holding a lock on a data structure, there may be no need to obtain a value from the context-entry.
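The enter/exit mechanics behind all of these contexts can be illustrated with a hand-rolled stash-and-restore manager. The class and names below are invented purely for this sketch (they come from no library): on entry it saves a setting and overrides it, on exit it restores the saved value, even if the block raises.

```python
# A minimal context manager in the same shape as the lock/file/record
# examples: stash state on entry, restore it on exit.
class StashSetting(object):
    def __init__(self, settings, key, temp_value):
        self.settings = settings
        self.key = key
        self.temp_value = temp_value

    def __enter__(self):
        self.saved = self.settings[self.key]
        self.settings[self.key] = self.temp_value
        return self  # this is what "as" would bind, if the caller cares

    def __exit__(self, exc_type, exc, tb):
        self.settings[self.key] = self.saved
        return False  # False means exceptions propagate normally

settings = {"verbose": False}
with StashSetting(settings, "verbose", True):
    inside = settings["verbose"]   # True while the context is active
after = settings["verbose"]        # False again: __exit__ restored it
print(inside, after)
```

Since the caller usually only wants the restore side effect, this is exactly the kind of context that is naturally used without an `as` clause.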
For example when you want to use `Lock()`: ``` from threading import Lock myLock = Lock() with myLock: ... ``` You don't really need the `Lock()` object. You just need to know that it is on.
17,053,103
I saw [this question](https://stackoverflow.com/questions/903557/pythons-with-statement-versus-with-as), and I understand when you would want to use `with foo() as bar:`, but I don't understand when you would just want to do: ``` bar = foo() with bar: .... ``` Doesn't that just remove the tear-down benefits of using `with ... as`, or am I misunderstanding what is happening? Why would someone want to use just `with`?
2013/06/11
[ "https://Stackoverflow.com/questions/17053103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1388603/" ]
To expand a bit on @freakish's answer, `with` guarantees entry into and then exit from a "context". What the heck is a context? Well, it's "whatever the thing you're with-ing makes it". Some obvious ones are: * locks: you take a lock, manipulate some data, and release the lock. * external files/streams: you open a file, read or write it, and close it. * database records: you find a record (which generally also locks it), use some fields and maybe change them, and release the record (which also unlocks it). Less obvious ones might even include certain kinds of exception trapping: you might catch divide by zero, do some arithmetic, and then stop catching it. Of course that's built into the Python syntax: `try` ... `except` as a block! And, in fact, `with` is simply a special case of Python's try/except/finally mechanisms (technically, `try/finally` wrapped around another `try` block; see comments). The `as` part of a `with` block is useful when the context-entry provides some value(s) that you want to use inside the block. In the case of the file or database record, it's obvious that you need the newly-opened stream, or the just-obtained record. In the case of catching an exception, or holding a lock on a data structure, there may be no need to obtain a value from the context-entry.
Using `with` without `as` still gets you the exact same teardown; it just doesn't get you a new local object representing the context. The reason you want this is that sometimes the context itself isn't directly useful—in other words, you're only using it for the side effects of its context enter and exit. For example, with a `Lock` object, you have to already have the object for the `with` block to be useful—so, even if you need it within the block, there's no reason to rebind it to another name. The same is true when you use `contextlib.closing` on an object that isn't a context manager—you already have the object itself, so who cares what `closing` yields? With something like `sh.sudo`, there isn't even an object that you'd have any use for, period. There are also cases where the point of the context manager is just to stash and auto-restore some state. For example, you might want to write a `termios.tcsetattr`-stasher, so you can call `tty.setraw()` inside the block. You don't care what the stash object looks like, all you care about is that it gets auto-restored. `decimal.localcontext` can work in any of these ways—you can pass it an object you already have (and therefore don't need a new name for), or pass it an unnamed temporary object, or have it just stash the current context to be auto-restored. But in any of those cases, it's the enter/exit side effects you're after, so there's nothing that needs binding with `as`. There are some hybrid cases where sometimes you want the context, sometimes you don't. For example, if you just want a database transaction to auto-commit, you might write `with autocommit(db.begin()):`, because you aren't going to access it inside the block. But if you want it to auto-rollback unless you explicitly commit it, you'd probably write `with autorollback(db.begin()) as trans:`, so you can `trans.commit()` inside the block. (Of course often, you'd actually want a transaction that commits on normal exit and rolls back on exception, as in [PEP 343](http://www.python.org/dev/peps/pep-0343/)'s `transaction` example. 
But I couldn't think of a better hybrid example here…) [PEP 343](http://www.python.org/dev/peps/pep-0343/) and its predecessors (PEP 310, PEP 340, and other things linked from 343) explain all of this to some extent, but it's understandable that you wouldn't pick that up on a casual read—there's so much information that isn't relevant, and it mainly just explains the mile-high overview and then the implementation-level details, skipping over everything in between.
52,844,036
I have been trying to create a folder inside a Jenkins pipeline with the following code: ``` pipeline { agent { node { label 'python' } } stages{ stage('Folder'){ steps{ folder 'New Folder' } } } } ``` But I get the following error message: `java.lang.NoSuchMethodError: No such DSL method 'folder' found among steps` Jenkins already has the Cloudbees-Folder plugin installed, so I'm not sure why this is happening.
2018/10/16
[ "https://Stackoverflow.com/questions/52844036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9131570/" ]
You can pass an updater function to setState. Here's an example of how this might work. The object returned from the updater function will be merged into the previous state. ``` const updateBubble = ({y, vy, ...props}) => ({y: y + vy, vy: vy + 0.1, ...props}) this.setState(state => ({bubbles: state.bubbles.map(updateBubble)})) ``` Change the updateBubble function to add bouncing and so on.
I would suggest changing your approach. You should only manage the state as in the first example you showed, and then, in another component, render multiple instances of the component you currently have. You could use something like this: ``` import React from 'react' import Box from './box' export default class Boxes extends React.Component { constructor(props) { super(props); this.state = { started:[false,false,false,false] /*these flags will be changed on your "onClick" event*/ } } render() { const {started} = this.state return( <div> {started[0] && <Box />} {started[1] && <Box />} {started[2] && <Box />} {started[3] && <Box />} </div> ) } } ```
52,844,036
I have been trying to create a folder inside a Jenkins pipeline with the following code: ``` pipeline { agent { node { label 'python' } } stages{ stage('Folder'){ steps{ folder 'New Folder' } } } } ``` But I get the following error message: `java.lang.NoSuchMethodError: No such DSL method 'folder' found among steps` Jenkins already has the Cloudbees-Folder plugin installed, so I'm not sure why this is happening.
2018/10/16
[ "https://Stackoverflow.com/questions/52844036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9131570/" ]
You can pass an updater function to setState. Here's an example of how this might work. The object returned from the updater function will be merged into the previous state. ``` const updateBubble = ({y, vy, ...props}) => ({y: y + vy, vy: vy + 0.1, ...props}) this.setState(state => ({bubbles: state.bubbles.map(updateBubble)})) ``` Change the updateBubble function to add bouncing and so on.
You can do this with map indeed. Here's how I would have done it. One liner ``` this.setState({bubbles: this.state.bubbles.map(x => ({...x, vy: x.vy + 1}))}) ``` More explicit ```js this.state = {bubbles: history} this.setState({ bubbles: this.state.bubbles.map(x => { // Option 1 - more verbose let newBubble = x; // or even let newBubble = {...x} newBubble.vy = x.vy + 1; // Option 2 - directly update x x.vy = x.vy + 1 // Then return x; })}) ```
52,844,036
I have been trying to create a folder inside a Jenkins pipeline with the following code: ``` pipeline { agent { node { label 'python' } } stages{ stage('Folder'){ steps{ folder 'New Folder' } } } } ``` But I get the following error message: `java.lang.NoSuchMethodError: No such DSL method 'folder' found among steps` Jenkins already has the Cloudbees-Folder plugin installed, so I'm not sure why this is happening.
2018/10/16
[ "https://Stackoverflow.com/questions/52844036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9131570/" ]
You can do this with map indeed. Here's how I would have done it. One liner ``` this.setState({bubbles: this.state.bubbles.map(x => ({...x, vy: x.vy + 1}))}) ``` More explicit ```js this.state = {bubbles: history} this.setState({ bubbles: this.state.bubbles.map(x => { // Option 1 - more verbose let newBubble = x; // or even let newBubble = {...x} newBubble.vy = x.vy + 1; // Option 2 - directly update x x.vy = x.vy + 1 // Then return x; })}) ```
I would suggest changing your approach. You should only manage the state as in the first example you showed, and then, in another component, render multiple instances of the component you currently have. You could use something like this: ``` import React from 'react' import Box from './box' export default class Boxes extends React.Component { constructor(props) { super(props); this.state = { started:[false,false,false,false] /*these flags will be changed on your "onClick" event*/ } } render() { const {started} = this.state return( <div> {started[0] && <Box />} {started[1] && <Box />} {started[2] && <Box />} {started[3] && <Box />} </div> ) } } ```
39,427,946
I'm wondering how I can import the six library into Python 2.5.2? It's not possible for me to install using pip, as it's a closed system I'm using. I have tried adding the six.py file into the lib path and then using "import six". However, it doesn't seem to be picking up the library from this path.
2016/09/10
[ "https://Stackoverflow.com/questions/39427946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264975/" ]
According to project history, [version 1.9.0](https://bitbucket.org/gutworth/six/src/a9b120c9c49734c1bd7a95e7f371fd3bf308f107?at=1.9.0) supports Python 2.5. Compatibility broke with the 1.10.0 release. > > Six supports every Python version since 2.5. It is contained in only > one Python file, so it can be easily copied into your project. (The > copyright and license notice must be retained.) > > > [There is a commit in the version control system which mentions the change of minimum supported version](https://bitbucket.org/gutworth/six/commits/2dfeb4ba983d8d5985b5efae3859417d2a57e487). Note that pip is able to install a fixed version of the package if you want to. ``` pip install six==1.9.0 ```
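Since six is a single file, the "copy it into your project" route the quoted docs mention can be sketched as follows. This is an illustrative Python sketch only: the vendor directory and the stand-in `sixlike` module are made up so the example is self-contained; in practice you would copy the real `six.py` (version 1.9.0 or earlier for Python 2.5 support) into a directory that you prepend to `sys.path`.

```python
import os
import sys
import tempfile

# Hypothetical vendor directory; in a real project you would copy six.py
# into it instead of writing a stand-in module on the fly.
vendor_dir = tempfile.mkdtemp()
with open(os.path.join(vendor_dir, "sixlike.py"), "w") as f:
    f.write("PY2 = (2, 5)\n")

# Prepending the vendor directory makes the vendored copy win over any
# system-wide installation of the same module.
sys.path.insert(0, vendor_dir)

import sixlike
print(sixlike.PY2)
```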
You can't use `six` on Python 2.5; it requires Python 2.6 or newer. From the [`six` project homepage](https://bitbucket.org/gutworth/six): > > Six supports every Python version since 2.6. > > > Trying to install `six` on Python 2.5 anyway fails as the included `setup.py` tries to import the `six` module, which tries to access objects not available in Python 2.5: ``` Traceback (most recent call last): File "<string>", line 16, in <module> File "/private/tmp/test/build/six/setup.py", line 8, in <module> import six File "six.py", line 604, in <module> viewkeys = operator.methodcaller("viewkeys") AttributeError: 'module' object has no attribute 'methodcaller' ```
36,610,806
Here is a [link](https://drive.google.com/folderview?id=0B0bHr4crS9cpaWlockpxcmJxelE&usp=drive_web) to a project and output that you can use to reproduce the problem I describe below. I'm using **coverage** with **tox** against multiple versions of python. My tox.ini file looks something like this: ``` [tox] envlist = py27 py34 [testenv] deps = coverage commands = coverage run --source=modules/ -m pytest coverage report -m ``` My problem is that coverage will run using only one version of python (in my case, py27), not both py27 and py34. This is a problem whenever I have code execution dependent on the python version, e.g.: ``` def add(a, b): import sys if sys.version.startswith('2.7'): print('2.7') if sys.version.startswith('3'): print('3') return a + b ``` Running coverage against the above code will incorrectly report that line 6 ("print('3')") is "Missing" for both py27 and py34. It should only be Missing for py34. I know why this is happening: coverage is installed on my base OS (which uses python2.7). Thus, when **tox** is run, it notices that coverage is already installed and inherits coverage from the base OS rather than installing it in the virtualenv it creates. This is fine and dandy for py27, but causes incorrect results in the coverage report for py34. I have a hacky, temporary work-around: I require a slightly earlier version of coverage (relative to the one installed on my base OS) so that tox will be forced to install a separate copy of coverage in the virtualenv. E.g. ``` [testenv] deps = coverage==4.0.2 pytest==2.9.0 py==1.4.30 ``` I don't like this workaround, but it's the best I've found for now. Any suggestions on a way to force tox to install the current version of coverage in its virtualenvs, even when I already have it installed on my base OS?
2016/04/13
[ "https://Stackoverflow.com/questions/36610806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3188632/" ]
I came upon this problem today, but couldn't find an easy answer. So, for future reference, here is the solution that I came up with. 1. Create an `envlist` that contains each version of Python that will be tested and a custom env for `cov`. 2. For all versions of Python, set `COVERAGE_FILE` environment varible to store the `.coverage` file in `{envdir}`. 3. For the `cov` env I use two commands. 1. `coverage combine` that combines the reports, and 2. `coverage html` to generate the report and, if necessary, fail the test. 4. Create a `.coveragerc` file that contains a `[paths]` section to lists the `source=` locations. 1. The first line is where the actual source code is found. 2. The subsequent lines are the subpaths that will be eliminated by `coverage combine'. **tox.ini:** ``` [tox] envlist=py27,py36,py35,py34,py33,cov [testenv] deps= pytest pytest-cov pytest-xdist setenv= py{27,36,35,34,33}: COVERAGE_FILE={envdir}/.coverage commands= py{27,36,35,34,33}: python -m pytest --cov=my_project --cov-report=term-missing --no-cov-on-fail cov: /usr/bin/env bash -c '{envpython} -m coverage combine {toxworkdir}/py*/.coverage' cov: coverage html --fail-under=85 ``` **.coveragerc:** ``` [paths] source= src/ .tox/py*/lib/python*/site-packages/ ``` The most peculiar part of the configuration is the invocation of `coverage combine`. Here's a breakdown of the command: * `tox` does not handle Shell expansions `{toxworkdir}/py*/.coverage`, so we need to invoke a shell (`bash -c`) to get the necessary expansion. + If one were inclined, you could just type out all the paths individually and not jump through all of these hoops, but that would add maintenance and `.coverage` file dependency for each `pyNN` env. * `/usr/bin/env bash -c '...'` to ensure we get the correct version of `bash`. Using the fullpath to `env` avoids the need for setting `whitelist_externals`. * `'{envpython} -m coverage ...'` ensures that we invoke the correct `python` and `coverage` for the `cov` env. 
* **NOTE:** The unfortunate problem of this solution is that the `cov` env is dependent on the invocation of `py{27,36,35,34,33}` which has some not so desirable side effects. + My suggestion would be to only invoke `cov` through `tox`. + Never invoke `tox -ecov` because, either - It will likely fail due to a missing `.coverage` file, or - It could give bizarre results (combining differing tests). + If you must invoke it as a subset (`tox -epy27,py36,cov`), then wipe out the `.tox` directory first (`rm -rf .tox`) to avoid the missing `.coverage` file problem.
I don't understand why tox wouldn't install coverage in each virtualenv properly. You should get two different coverage reports, one for py27 and one for py34. A nicer option might be to produce one combined report. Use `coverage run -p` to record separate data for each run, and then `coverage combine` to combine them before reporting.
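The `-p`/`combine` suggestion could be sketched in `tox.ini` roughly like this. Untested sketch: the env names and the `--source=modules/` path are carried over from the question as assumptions, and the separate `report` env is a made-up name for the combining step.

```ini
[tox]
envlist = py27, py34, report

[testenv]
deps = coverage
commands = coverage run -p --source=modules/ -m pytest

[testenv:report]
deps = coverage
commands =
    coverage combine
    coverage report -m
```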
29,419,322
I am getting the following error while executing the code snippet below, exactly at the line `if uID in repo.git.log():`; the problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`. How do I decode `repo.git.log()` with `decode("utf-8")`? ``` UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ``` Relevant code: ``` .................. uID = gerritInfo['id'].decode("utf-8") if uID in repo.git.log(): inwslist.append(gerritpatch) ..................... Traceback (most recent call last): File "/prj/host_script/script.py", line 1417, in <module> result=main() File "/prj/host_script/script.py", line 1028, in main if uID in repo.git.log(): File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process return self.execute(make_call(), **_kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute stdout_value = stdout_value.decode(defenc) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ```
2015/04/02
[ "https://Stackoverflow.com/questions/29419322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
0x92 is the Windows-1252 encoding of the right single (smart) quote (’). That byte is not a valid start byte in UTF-8, therefore it can't be decoded as UTF-8. Maybe your file was edited on a Windows machine, which basically caused this problem?
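To see why this byte fails as UTF-8 but decodes fine as Windows-1252, here is a small self-contained sketch (Python 3 syntax, purely illustrative; the byte string is made up to reproduce the error from the question):

```python
data = b"It\x92s"  # 0x92 is the Windows-1252 right single quote

# As UTF-8 this sequence is invalid: 0x92 is a continuation byte that
# appears without a lead byte, so decoding raises "invalid start byte".
try:
    data.decode("utf-8")
except UnicodeDecodeError as exc:
    print("utf-8 failed:", exc.reason)

# Decoded as Windows-1252 (cp1252), the byte maps to U+2019.
print(data.decode("cp1252"))
```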
After good research, I got the solution. In my case, the **`datadump.json`** file was causing the issue. * Open the file in Notepad * Click on the "Save As" option * In the Encoding dropdown at the bottom, select "UTF-8" * Save the file. Now you can try running the command. You are good to go :) For your reference, I have attached images below. [Step1](https://i.stack.imgur.com/tsLiH.png) [Step2](https://i.stack.imgur.com/usyPh.png) [Step3](https://i.stack.imgur.com/L0K19.png)
29,419,322
I am getting the following error while executing the code snippet below, exactly at the line `if uID in repo.git.log():`; the problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`. How do I decode `repo.git.log()` with `decode("utf-8")`? ``` UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ``` Relevant code: ``` .................. uID = gerritInfo['id'].decode("utf-8") if uID in repo.git.log(): inwslist.append(gerritpatch) ..................... Traceback (most recent call last): File "/prj/host_script/script.py", line 1417, in <module> result=main() File "/prj/host_script/script.py", line 1028, in main if uID in repo.git.log(): File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process return self.execute(make_call(), **_kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute stdout_value = stdout_value.decode(defenc) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ```
2015/04/02
[ "https://Stackoverflow.com/questions/29419322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
0x92 is the Windows-1252 encoding of the right single (smart) quote (’). That byte is not a valid start byte in UTF-8, therefore it can't be decoded as UTF-8. Maybe your file was edited on a Windows machine, which basically caused this problem?
0x92 is not a valid start byte in UTF-8. As Exceen stated in his answer, 0x92 is used in Windows-1252 as a smart quote. The way to resolve this is to decode with the Windows-1252 encoding, or to replace the smart quote with a plain quote.
29,419,322
I am getting the following error while executing the code snippet below, exactly at the line `if uID in repo.git.log():`; the problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`. How do I decode `repo.git.log()` with `decode("utf-8")`? ``` UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ``` Relevant code: ``` .................. uID = gerritInfo['id'].decode("utf-8") if uID in repo.git.log(): inwslist.append(gerritpatch) ..................... Traceback (most recent call last): File "/prj/host_script/script.py", line 1417, in <module> result=main() File "/prj/host_script/script.py", line 1028, in main if uID in repo.git.log(): File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process return self.execute(make_call(), **_kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute stdout_value = stdout_value.decode(defenc) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ```
2015/04/02
[ "https://Stackoverflow.com/questions/29419322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Using `encoding='cp1252'` will solve the issue.
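In Python 3 the encoding can be passed straight to `open()`. A self-contained sketch (the file name and contents are made up for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

# Write raw Windows-1252 bytes; 0x92 is the right single quote in cp1252.
with open(path, "wb") as f:
    f.write(b"It\x92s done")

# Reading with encoding='cp1252' decodes the byte correctly instead of
# raising the UnicodeDecodeError a utf-8 read would produce.
with open(path, encoding="cp1252") as f:
    text = f.read()

print(text)
```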
After good research, I got the solution. In my case, the **`datadump.json`** file was causing the issue. * Open the file in Notepad * Click on the "Save As" option * In the Encoding dropdown at the bottom, select "UTF-8" * Save the file. Now you can try running the command. You are good to go :) For your reference, I have attached images below. [Step1](https://i.stack.imgur.com/tsLiH.png) [Step2](https://i.stack.imgur.com/usyPh.png) [Step3](https://i.stack.imgur.com/L0K19.png)
29,419,322
I am getting the following error while executing the code snippet below, exactly at the line `if uID in repo.git.log():`; the problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`. How do I decode `repo.git.log()` with `decode("utf-8")`? ``` UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ``` Relevant code: ``` .................. uID = gerritInfo['id'].decode("utf-8") if uID in repo.git.log(): inwslist.append(gerritpatch) ..................... Traceback (most recent call last): File "/prj/host_script/script.py", line 1417, in <module> result=main() File "/prj/host_script/script.py", line 1028, in main if uID in repo.git.log(): File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process return self.execute(make_call(), **_kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute stdout_value = stdout_value.decode(defenc) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ```
2015/04/02
[ "https://Stackoverflow.com/questions/29419322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Using `encoding='cp1252'` will solve the issue.
0x92 is not a valid start byte in UTF-8. As Exceen stated in his answer, 0x92 is used in Windows-1252 as a smart quote. The way to resolve this is to decode with the Windows-1252 encoding, or to replace the smart quote with a plain quote.
29,419,322
I am getting the following error while executing the code snippet below, exactly at the line `if uID in repo.git.log():`; the problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`. How do I decode `repo.git.log()` with `decode("utf-8")`? ``` UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ``` Relevant code: ``` .................. uID = gerritInfo['id'].decode("utf-8") if uID in repo.git.log(): inwslist.append(gerritpatch) ..................... Traceback (most recent call last): File "/prj/host_script/script.py", line 1417, in <module> result=main() File "/prj/host_script/script.py", line 1028, in main if uID in repo.git.log(): File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda> return lambda *args, **kwargs: self._call_process(name, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process return self.execute(make_call(), **_kwargs) File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute stdout_value = stdout_value.decode(defenc) File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte ```
2015/04/02
[ "https://Stackoverflow.com/questions/29419322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
After good research, I got the solution. In my case, the **`datadump.json`** file was causing the issue. * Open the file in Notepad * Click on the "Save As" option * In the Encoding dropdown at the bottom, select "UTF-8" * Save the file. Now you can try running the command. You are good to go :) For your reference, I have attached images below. [Step1](https://i.stack.imgur.com/tsLiH.png) [Step2](https://i.stack.imgur.com/usyPh.png) [Step3](https://i.stack.imgur.com/L0K19.png)
0x92 is not a valid start byte in UTF-8. As Exceen stated in his answer, 0x92 is used in Windows-1252 as a smart quote. The way to resolve this is to decode with the Windows-1252 encoding, or to replace the smart quote with a plain quote.
32,586,612
I was getting started with **AWS' Elastic Beanstalk**. I am following this [tutorial](https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/) to **deploy a Django/PostgreSQL app**. I did everything before the 'Configuring a Database' section. The deployment was also successful but I am getting an Internal Server Error. Here's the traceback from the logs: ``` mod_wsgi (pid=30226): Target WSGI script '/opt/python/current/app/polly/wsgi.py' cannot be loaded as Python module. [Tue Sep 15 12:06:43.472954 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] mod_wsgi (pid=30226): Exception occurred processing WSGI script '/opt/python/current/app/polly/wsgi.py'. [Tue Sep 15 12:06:43.474702 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] Traceback (most recent call last): [Tue Sep 15 12:06:43.474727 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] File "/opt/python/current/app/polly/wsgi.py", line 12, in <module> [Tue Sep 15 12:06:43.474777 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] from django.core.wsgi import get_wsgi_application [Tue Sep 15 12:06:43.474799 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] ImportError: No module named django.core.wsgi ``` Any idea what's wrong?
2015/09/15
[ "https://Stackoverflow.com/questions/32586612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4201498/" ]
Have you created a `requirements.txt` in the root of your application? [Elastic Beanstalk will automatically install the packages from this file upon deployment.](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/python-configuration-requirements.html) (Note it might need to be checked into source control to be deployed.) `pip freeze > requirements.txt` (You will probably want to do that from within a virtualenv so that you only pick up the packages your app actually needs to run. Doing that with your system Python will pick up every package you've ever installed system-wide.)
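For reference, a minimal hypothetical `requirements.txt` generated this way for a Django project might contain little more than a pinned Django (the version shown is illustrative, not prescriptive; Elastic Beanstalk simply runs `pip install -r requirements.txt` against whatever is listed):

```
# hypothetical requirements.txt at the repository root
Django==1.9.9
```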
The answer (<https://stackoverflow.com/a/47209268/6169225>) by [carl-g](https://stackoverflow.com/users/39396/carl-g) is correct. One thing that got me was that `requirements.txt` was in the wrong directory. Let's say you created a Django project called `mysite`; this is the directory in which you run the `eb` command(s), so make sure `requirements.txt` is in this directory.
32,586,612
I was getting started with **AWS' Elastic Beanstalk**. I am following this [tutorial](https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/) to **deploy a Django/PostgreSQL app**. I did everything before the 'Configuring a Database' section. The deployment was also successful but I am getting an Internal Server Error. Here's the traceback from the logs: ``` mod_wsgi (pid=30226): Target WSGI script '/opt/python/current/app/polly/wsgi.py' cannot be loaded as Python module. [Tue Sep 15 12:06:43.472954 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] mod_wsgi (pid=30226): Exception occurred processing WSGI script '/opt/python/current/app/polly/wsgi.py'. [Tue Sep 15 12:06:43.474702 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] Traceback (most recent call last): [Tue Sep 15 12:06:43.474727 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] File "/opt/python/current/app/polly/wsgi.py", line 12, in <module> [Tue Sep 15 12:06:43.474777 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] from django.core.wsgi import get_wsgi_application [Tue Sep 15 12:06:43.474799 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] ImportError: No module named django.core.wsgi ``` Any idea what's wrong?
2015/09/15
[ "https://Stackoverflow.com/questions/32586612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4201498/" ]
Have you created a `requirements.txt` in the root of your application? [Elastic Beanstalk will automatically install the packages from this file upon deployment.](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/python-configuration-requirements.html) (Note it might need to be checked into source control to be deployed.) `pip freeze > requirements.txt` (You will probably want to do that from within a virtualenv so that you only pick up the packages your app actually needs to run. Doing that with your system Python will pick up every package you've ever installed system-wide.)
If you forget the **.ebextensions** folder you will get the same error. I was following along with a good, simple (non-Elastic-Beanstalk) [tutorial](https://scotch.io/tutorials/build-your-first-python-and-django-application) and missed steps 3 and 4 of [Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). I was using Django 1.11 and Python 2.7.