22,385,108
In the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`). However, I can't seem to find the logical or (`or`). The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions). I'm wondering why it isn't included. Is it not considered an operator? Is there a builtin function that provides its behaviour?
2014/03/13
[ "https://Stackoverflow.com/questions/22385108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1595865/" ]
It's not possible: ------------------ [This can explicitly be found in the docs](http://docs.python.org/2/reference/expressions.html#boolean-operations): > > The expression x or y first evaluates x; if x is true, its value is > returned; otherwise, y is evaluated and the resulting value is > returned. > > > It does not exist as an `operator` function because it cannot be implemented: by the language specification, arguments are evaluated before the function is called, so there is no way to delay evaluation of the second operand. Here is an example of `or` in action: ``` def foo(): return 'foo' def bar(): raise RuntimeError ``` If `bar` is called, we get a `RuntimeError`. And looking at the following line, we see that Python short-circuits the evaluation of the line, since `foo` returns a `True`-ish value: ``` >>> foo() or bar() 'foo' ``` We can approximate: ------------------- We can simulate this behavior by passing in uncalled functions, and then calling them inside our `or` function: ``` def my_or(*funcs): for func in funcs: call = func() if call: return call return call >>> my_or(foo, bar) 'foo' ``` But you *cannot* short-circuit execution of called callables that are passed to a function: ``` >>> my_or(foo, bar()) Traceback (most recent call last): File "<pyshell#28>", line 1, in <module> my_or(foo, bar()) File "<pyshell#24>", line 2, in bar raise RuntimeError RuntimeError ``` *So it would be improper to include such a function in the built-ins or standard library, because users would expect an `or` function to work just like a boolean `or` test, which, again, is impossible.*
The reason you are getting `1` after executing `>>> 1 or 2` is that `1` is truthy, so the expression is already satisfied. It might make more sense if you try executing `>>> 0 or 2`. It will return `2` because the first operand is `0`, which is falsy, so the second operand gets evaluated; in this case `2` evaluates to a true boolean value. The `and` and `or` operators evaluate the first value and then plug the result into the AND and OR truth tables. In the case of `and`, the second value is only considered if the first evaluates to true. In the case of `or`, if the first value evaluates to true, the expression is true and can return; if it isn't, the second value is evaluated.
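The return-the-operand behaviour described above can be verified directly:

```python
# `or` returns the first truthy operand, otherwise the last operand;
# `and` returns the first falsy operand, otherwise the last operand.
print(1 or 2)   # 1 -- first operand is truthy, second is never evaluated
print(0 or 2)   # 2 -- first operand is falsy, so the second is returned
print(0 and 2)  # 0 -- first operand is falsy, second is never evaluated
print(1 and 2)  # 2 -- first operand is truthy, so the second is returned
```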
22,385,108
In the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`). However, I can't seem to find the logical or (`or`). The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions). I'm wondering why it isn't included. Is it not considered an operator? Is there a builtin function that provides its behaviour?
2014/03/13
[ "https://Stackoverflow.com/questions/22385108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1595865/" ]
The closest thing to a built-in `or` function is [any](http://docs.python.org/2.7/library/functions.html#any): ``` >>> any((1, 2)) True ``` If you wanted to duplicate `or`'s functionality of returning non-boolean operands, you could use [next](http://docs.python.org/2.7/library/functions.html#next) with a filter: ``` >>> next(operand for operand in (1, 2) if operand) 1 ``` But like Martijn said, neither is a true drop-in replacement for `or`, because `or` short-circuits. A true `or` function would have to accept functions to be able to avoid evaluating all the operands: ``` logical_or(lambda: 1, lambda: 2) ``` This is somewhat unwieldy, and would be inconsistent with the rest of the `operator` module, so it's probably best that it's left out and you use other explicit methods instead.
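The `logical_or(lambda: 1, lambda: 2)` call above is only hypothetical; a minimal sketch of such a function (the name and signature are assumptions, not anything in the standard library) could look like this:

```python
def logical_or(*thunks):
    """Short-circuiting `or` over zero-argument callables (thunks)."""
    result = False
    for thunk in thunks:
        result = thunk()
        if result:       # first truthy value wins; later thunks never run
            return result
    return result        # all falsy: return the last value, like `a or b`

print(logical_or(lambda: 0, lambda: 2))      # 2
print(logical_or(lambda: 1, lambda: 1 / 0))  # 1 -- the failing thunk is never called
```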
It's not possible: ------------------ [This can explicitly be found in the docs](http://docs.python.org/2/reference/expressions.html#boolean-operations): > > The expression x or y first evaluates x; if x is true, its value is > returned; otherwise, y is evaluated and the resulting value is > returned. > > > It does not exist as an `operator` function because it cannot be implemented: by the language specification, arguments are evaluated before the function is called, so there is no way to delay evaluation of the second operand. Here is an example of `or` in action: ``` def foo(): return 'foo' def bar(): raise RuntimeError ``` If `bar` is called, we get a `RuntimeError`. And looking at the following line, we see that Python short-circuits the evaluation of the line, since `foo` returns a `True`-ish value: ``` >>> foo() or bar() 'foo' ``` We can approximate: ------------------- We can simulate this behavior by passing in uncalled functions, and then calling them inside our `or` function: ``` def my_or(*funcs): for func in funcs: call = func() if call: return call return call >>> my_or(foo, bar) 'foo' ``` But you *cannot* short-circuit execution of called callables that are passed to a function: ``` >>> my_or(foo, bar()) Traceback (most recent call last): File "<pyshell#28>", line 1, in <module> my_or(foo, bar()) File "<pyshell#24>", line 2, in bar raise RuntimeError RuntimeError ``` *So it would be improper to include such a function in the built-ins or standard library, because users would expect an `or` function to work just like a boolean `or` test, which, again, is impossible.*
12,866,612
I need to get distribute version 0.6.28 running on Heroku. I updated my requirements.txt, but that seems to have no effect. I'm trying to install a module from a tarball that requires this later version of the distribute package. During deploy I only get this: ``` Running setup.py egg_info for package from http://downloads.sourceforge.net/project/mysql-python/mysql-python-test/1.2.4b4/MySQL-python-1.2.4b4.tar.gz The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.27 (/tmp/build_ibj6h3in4vgp/.heroku/venv/lib/python2.7/site-packages/distribute-0.6.27-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, ```
2012/10/12
[ "https://Stackoverflow.com/questions/12866612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/591222/" ]
Ok, here's a solution that does work for the moment. Long-term fix I think is Heroku upgrading their version of distribute. 1. Fork the python buildpack: <https://github.com/heroku/heroku-buildpack-python/> 2. Add the requirements you need to the pack (I put them in bin/compile, just before the other pip install requirements step). See <https://github.com/buildingenergy/heroku-buildpack-python/commit/12635e22aa3a3651f9bedb3b326e2cb4fd1d2a4b> for that diff. 3. Change the buildpack for your app: heroku config:add BUILDPACK\_URL=git://github.com/heroku/heroku-buildpack-python.git 4. Push again. It should work.
Try adding `distribute` with the specific version to your dependencies for a first push, then add the dependency that requires it and push again. ``` cat requirements.txt ... distribute==0.6.28 ... git push heroku master ... cat requirements.txt ... your deps here ```
37,192,957
I have a numpy array and a boolean array: ``` nparray = [ 12.66 12.75 12.01 13.51 13.67 ] bool = [ True False False True True ] ``` I would like to replace all the values in `nparray` by the same value divided by 3 where `bool` is False. I am a student, and I'm reasonably new to Python indexing. Any advice or suggestions are greatly appreciated!
2016/05/12
[ "https://Stackoverflow.com/questions/37192957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6141885/" ]
Naming an array `bool` might not be the best idea, since it shadows the built-in `bool`. As ayhan suggested, try renaming it to `bl` or something else. You can use `numpy.where`; see the docs [here](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html) ``` nparray2 = np.where(bl == False, nparray/3, nparray) ```
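A self-contained run of the `np.where` approach (a sketch using the values from the question; `~bl` is the boolean negation, equivalent to `bl == False` for a boolean array):

```python
import numpy as np

nparray = np.array([12.66, 12.75, 12.01, 13.51, 13.67])
bl = np.array([True, False, False, True, True])

# Where the mask is False, take nparray / 3; elsewhere keep the original value.
nparray2 = np.where(~bl, nparray / 3, nparray)
print(nparray2)  # 12.75 and 12.01 are divided by 3, the rest are unchanged
```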
Use [`boolean indexing`](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.indexing.html#boolean-or-mask-index-arrays) with `~` as the negation operator: ``` arr = np.array([12.66, 12.75, 12.01, 13.51, 13.67 ]) bl = np.array([True, False, False, True ,True]) arr[~bl] = arr[~bl] / 3 array([ 12.66 , 4.25 , 4.00333333, 13.51 , 13.67 ]) ```
37,192,957
I have a numpy array and a boolean array: ``` nparray = [ 12.66 12.75 12.01 13.51 13.67 ] bool = [ True False False True True ] ``` I would like to replace all the values in `nparray` by the same value divided by 3 where `bool` is False. I am a student, and I'm reasonably new to Python indexing. Any advice or suggestions are greatly appreciated!
2016/05/12
[ "https://Stackoverflow.com/questions/37192957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6141885/" ]
Use [`boolean indexing`](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.indexing.html#boolean-or-mask-index-arrays) with `~` as the negation operator: ``` arr = np.array([12.66, 12.75, 12.01, 13.51, 13.67 ]) bl = np.array([True, False, False, True ,True]) arr[~bl] = arr[~bl] / 3 array([ 12.66 , 4.25 , 4.00333333, 13.51 , 13.67 ]) ```
With plain Python you can do it like this (note that in Python 3, `map` returns an iterator, so wrap the call in `list(...)` to get a list): ``` nparray = [12.66, 12.75, 12.01, 13.51, 13.67] bool = [True, False, False, True, True] map(lambda x, y: x if y else x/3.0, nparray, bool) ``` And the result is: ``` [12.66, 4.25, 4.003333333333333, 13.51, 13.67] ```
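The same idea as a list comprehension, which sidesteps any `map` iterator-vs-list differences between Python versions (a sketch using the question's values, with the mask renamed so it doesn't shadow the built-in `bool`):

```python
values = [12.66, 12.75, 12.01, 13.51, 13.67]
keep = [True, False, False, True, True]

# Keep x where the mask is True, otherwise divide it by 3.
result = [x if k else x / 3.0 for x, k in zip(values, keep)]
print(result)  # [12.66, 4.25, 4.003333333333333, 13.51, 13.67]
```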
37,192,957
I have a numpy array and a boolean array: ``` nparray = [ 12.66 12.75 12.01 13.51 13.67 ] bool = [ True False False True True ] ``` I would like to replace all the values in `nparray` by the same value divided by 3 where `bool` is False. I am a student, and I'm reasonably new to Python indexing. Any advice or suggestions are greatly appreciated!
2016/05/12
[ "https://Stackoverflow.com/questions/37192957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6141885/" ]
Naming an array `bool` might not be the best idea, since it shadows the built-in `bool`. As ayhan suggested, try renaming it to `bl` or something else. You can use `numpy.where`; see the docs [here](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html) ``` nparray2 = np.where(bl == False, nparray/3, nparray) ```
With plain Python you can do it like this (note that in Python 3, `map` returns an iterator, so wrap the call in `list(...)` to get a list): ``` nparray = [12.66, 12.75, 12.01, 13.51, 13.67] bool = [True, False, False, True, True] map(lambda x, y: x if y else x/3.0, nparray, bool) ``` And the result is: ``` [12.66, 4.25, 4.003333333333333, 13.51, 13.67] ```
38,638,180
Running the interpreter: ``` >>> x = 5000 >>> y = 5000 >>> print(x is y) False ``` Running the same in a script using `python test.py` returns `True`. What the heck?
2016/07/28
[ "https://Stackoverflow.com/questions/38638180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3337819/" ]
You need to use the [EnhancedPatternLayout](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/EnhancedPatternLayout.html); then you can use the `{GMT+0}` specifier as per the documentation. If instead you want to run your application in the UTC timezone, you can add the following JVM parameter: ``` -Duser.timezone=UTC ```
``` log4j.appender.A1.layout=org.apache.log4j.EnhancedPatternLayout log4j.appender.A1.layout.ConversionPattern=[%d{ISO8601}{GMT}] %-4r [%t] %-5p %c %x - %m%n ```
64,926,963
I used the official docker image of flask, and installed the rpi.gpio library in the container: ``` pip install rpi.gpio ``` It succeeded: ``` root@e31ba5814e51:/app# pip install rpi.gpio Collecting rpi.gpio Downloading RPi.GPIO-0.7.0.tar.gz (30 kB) Building wheels for collected packages: rpi.gpio Building wheel for rpi.gpio (setup.py) ... done Created wheel for rpi.gpio: filename=RPi.GPIO-0.7.0-cp39-cp39-linux_armv7l.whl size=68495 sha256=0c2c43867c304f2ca21da6cc923b13e4ba22a60a77f7309be72d449c51c669db Stored in directory: /root/.cache/pip/wheels/09/be/52/39b324bfaf72ab9a47e81519994b2be5ddae1e99ddacd7a18e Successfully built rpi.gpio Installing collected packages: rpi.gpio Successfully installed rpi.gpio-0.7.0 ``` But it prompted the following error: ``` Traceback (most recent call last): File "/app/hello/app2.py", line 2, in <module> import RPi.GPIO as GPIO File "/usr/local/lib/python3.9/site-packages/RPi/GPIO/__init__.py", line 23, in <module> from RPi._GPIO import * RuntimeError: This module can only be run on a Raspberry Pi! ``` I tried the method in this link, but it didn't work: [Docker Access to Raspberry Pi GPIO Pins](https://stackoverflow.com/questions/30059784/docker-access-to-raspberry-pi-gpio-pins) I want to know if this can be done, and if so, how to proceed.
2020/11/20
[ "https://Stackoverflow.com/questions/64926963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12192583/" ]
You need to close the second file. You were missing the `()` at the end of `f2.close`, so the close method was never actually executed. In the example below, I am using `with`, which creates a context manager to automatically close the file. ``` with open('BestillingNr.txt', 'r') as f: bestillingNr = int(f.read()) bestillingNr += 1 with open('BestillingNr.txt', 'w') as f2: f2.write(f'{str(bestillingNr)}') ```
After the line `f2.write(f'{str(bestillingNr)}')`, you should add a flush call: `f2.flush()`. This code works well: ``` f = open('BestillingNr.txt', 'r') bestillingNr = int(f.read()) f.close() bestillingNr += 1 f2 = open('BestillingNr.txt', 'w') f2.write(f'{str(bestillingNr)}') f2.flush() f2.close() ```
10,505,851
I have a complex class and I want to simplify it by implementing a facade class (assume I have no control over the complex class). My problem is that the complex class has many methods; I will simplify only some of them, and the rest will stay as they are. What I mean by "simplify" is explained below. I want to find a way so that if a method is implemented in the facade it is called, and if not, the method in the complex class is called. The reason I want this is to write less code :) [less is more] **Example:** ``` Facade facade = // initialize facade.simplified(); // defined in Facade class so call it // not defined in Facade but exists in the complex class // so call the one in the complex class facade.alreadySimple(); ``` The options that come to mind are: **Option 1:** Write a class holding an instance of the complex class, implement the simplified versions of the complex methods, and implement the simple ones with direct delegation: ``` class Facade { private PowerfulButComplexClass realWorker = // initialize public void simplified() { // do stuff } public void alreadySimple() { realWorker.alreadySimple(); } // more stuff } ``` But with this approach I will need to implement all the simple methods with just a single delegation statement. So I need to write more code (it is simple though). **Option 2:** Extend the complex class and implement the simplified methods, but then both the simple and complex versions of these methods will be visible. In Python I can achieve similar behaviour like this: ``` class PowerfulButComplexClass(object): def alreadySimple(self): # really simple # some very complex methods class Facade(object): def __init__(self): self.realworker = PowerfulButComplexClass() def simplified(self): # simplified version of complex methods in PowerfulButComplexClass def __getattribute__(self, key): """For the rest of PowerfulButComplexClass' methods, use them as they are because they are simple enough. 
""" try: # return simplified version if we did attr = object.__getattribute__(self, key) except AttributeError: # return PowerfulButComplexClass' version because it is already simple attr = object.__getattribute__(self.data, key) return attr obj = Facace() obj.simplified() # call the one we have defined obj.alreadySimple( # call the one defined in PowerfulButComplexClass ``` *So what is the Java way to achieve this?* Edit: What I mean by "simplify": A complex method can be either a method with too many arguments ``` void complex method(arg1, arg2, ..., argn) // n is sufficiently large ``` or a set of related methods that will almost always called together to achieve a single task ``` outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); actualResult = someAnotherMethod(outArg2); ``` so we want to have something like this: ``` String simplified(arg1, arg2, arg3) { outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); return someAnotherMethod(outArg2); } ```
2012/05/08
[ "https://Stackoverflow.com/questions/10505851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932278/" ]
It's called inheritance. Consider the following code: ``` class Complex { public void complex() { /* Complex Function */ } public void simple() { /* Already simple function */ } } class SimplifiedComplex extends Complex { @Override public void complex() { /* Simplify Here */ } } ``` The `simple()` method will work on a `SimplifiedComplex` object.
Depending on your use case, you might want to create a facade in front of some of the functionality in complex class by delegating (Option 1), and instead of providing support for the rest of the functionality in complex class, you provide means to access complex class directly (getComplexClass). This might make the design clearer. Consider, for example, a complex class handling most of the features of a bank system. Creating a class named "Account" that has access to the complex class but only uses the methods relevant for a bank account helps the design. For convenience, the Account class could have a method getBankSystemForAccount or something similar.
10,505,851
I have a complex class and I want to simplify it by implementing a facade class (assume I have no control over the complex class). My problem is that the complex class has many methods; I will simplify only some of them, and the rest will stay as they are. What I mean by "simplify" is explained below. I want to find a way so that if a method is implemented in the facade it is called, and if not, the method in the complex class is called. The reason I want this is to write less code :) [less is more] **Example:** ``` Facade facade = // initialize facade.simplified(); // defined in Facade class so call it // not defined in Facade but exists in the complex class // so call the one in the complex class facade.alreadySimple(); ``` The options that come to mind are: **Option 1:** Write a class holding an instance of the complex class, implement the simplified versions of the complex methods, and implement the simple ones with direct delegation: ``` class Facade { private PowerfulButComplexClass realWorker = // initialize public void simplified() { // do stuff } public void alreadySimple() { realWorker.alreadySimple(); } // more stuff } ``` But with this approach I will need to implement all the simple methods with just a single delegation statement. So I need to write more code (it is simple though). **Option 2:** Extend the complex class and implement the simplified methods, but then both the simple and complex versions of these methods will be visible. In Python I can achieve similar behaviour like this: ``` class PowerfulButComplexClass(object): def alreadySimple(self): # really simple # some very complex methods class Facade(object): def __init__(self): self.realworker = PowerfulButComplexClass() def simplified(self): # simplified version of complex methods in PowerfulButComplexClass def __getattribute__(self, key): """For the rest of PowerfulButComplexClass' methods, use them as they are because they are simple enough. 
""" try: # return simplified version if we did attr = object.__getattribute__(self, key) except AttributeError: # return PowerfulButComplexClass' version because it is already simple attr = object.__getattribute__(self.data, key) return attr obj = Facace() obj.simplified() # call the one we have defined obj.alreadySimple( # call the one defined in PowerfulButComplexClass ``` *So what is the Java way to achieve this?* Edit: What I mean by "simplify": A complex method can be either a method with too many arguments ``` void complex method(arg1, arg2, ..., argn) // n is sufficiently large ``` or a set of related methods that will almost always called together to achieve a single task ``` outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); actualResult = someAnotherMethod(outArg2); ``` so we want to have something like this: ``` String simplified(arg1, arg2, arg3) { outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); return someAnotherMethod(outArg2); } ```
2012/05/08
[ "https://Stackoverflow.com/questions/10505851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932278/" ]
It's called inheritance. Consider the following code: ``` class Complex { public void complex() { /* Complex Function */ } public void simple() { /* Already simple function */ } } class SimplifiedComplex extends Complex { @Override public void complex() { /* Simplify Here */ } } ``` The `simple()` method will work on a `SimplifiedComplex` object.
This is ugly in Java, but you can write a function that takes the name of the method you want to call, and a parameter list, and use reflection to find the appropriate method to call. This will be conceptually similar to how you'd do it in Python, except much uglier.
10,505,851
I have a complex class and I want to simplify it by implementing a facade class (assume I have no control over the complex class). My problem is that the complex class has many methods; I will simplify only some of them, and the rest will stay as they are. What I mean by "simplify" is explained below. I want to find a way so that if a method is implemented in the facade it is called, and if not, the method in the complex class is called. The reason I want this is to write less code :) [less is more] **Example:** ``` Facade facade = // initialize facade.simplified(); // defined in Facade class so call it // not defined in Facade but exists in the complex class // so call the one in the complex class facade.alreadySimple(); ``` The options that come to mind are: **Option 1:** Write a class holding an instance of the complex class, implement the simplified versions of the complex methods, and implement the simple ones with direct delegation: ``` class Facade { private PowerfulButComplexClass realWorker = // initialize public void simplified() { // do stuff } public void alreadySimple() { realWorker.alreadySimple(); } // more stuff } ``` But with this approach I will need to implement all the simple methods with just a single delegation statement. So I need to write more code (it is simple though). **Option 2:** Extend the complex class and implement the simplified methods, but then both the simple and complex versions of these methods will be visible. In Python I can achieve similar behaviour like this: ``` class PowerfulButComplexClass(object): def alreadySimple(self): # really simple # some very complex methods class Facade(object): def __init__(self): self.realworker = PowerfulButComplexClass() def simplified(self): # simplified version of complex methods in PowerfulButComplexClass def __getattribute__(self, key): """For the rest of PowerfulButComplexClass' methods, use them as they are because they are simple enough. 
""" try: # return simplified version if we did attr = object.__getattribute__(self, key) except AttributeError: # return PowerfulButComplexClass' version because it is already simple attr = object.__getattribute__(self.data, key) return attr obj = Facace() obj.simplified() # call the one we have defined obj.alreadySimple( # call the one defined in PowerfulButComplexClass ``` *So what is the Java way to achieve this?* Edit: What I mean by "simplify": A complex method can be either a method with too many arguments ``` void complex method(arg1, arg2, ..., argn) // n is sufficiently large ``` or a set of related methods that will almost always called together to achieve a single task ``` outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); actualResult = someAnotherMethod(outArg2); ``` so we want to have something like this: ``` String simplified(arg1, arg2, arg3) { outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); return someAnotherMethod(outArg2); } ```
2012/05/08
[ "https://Stackoverflow.com/questions/10505851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932278/" ]
I think you've already called it. I would go with Option 1. It provides the most flexibility given the rigidity of Java. I prefer it because it favors composition over inheritance. Although this creates more code, I find designs like this are generally easier to modify in the long run. Inheritance should only be used to model strict "is a" relationships where the subclass necessarily has **all the properties and behaviors of the base class**. If you're using inheritance for anything else, you're asking for trouble. Finally, I don't buy into the idea of "Less Is More" (insert incredibly concise, indecipherable Perl example here). I buy into the principle of "Code should be as simple as it needs to be and no simpler".
Depending on your use case, you might want to create a facade in front of some of the functionality in complex class by delegating (Option 1), and instead of providing support for the rest of the functionality in complex class, you provide means to access complex class directly (getComplexClass). This might make the design clearer. Consider, for example, a complex class handling most of the features of a bank system. Creating a class named "Account" that has access to the complex class but only uses the methods relevant for a bank account helps the design. For convenience, the Account class could have a method getBankSystemForAccount or something similar.
10,505,851
I have a complex class and I want to simplify it by implementing a facade class (assume I have no control over the complex class). My problem is that the complex class has many methods; I will simplify only some of them, and the rest will stay as they are. What I mean by "simplify" is explained below. I want to find a way so that if a method is implemented in the facade it is called, and if not, the method in the complex class is called. The reason I want this is to write less code :) [less is more] **Example:** ``` Facade facade = // initialize facade.simplified(); // defined in Facade class so call it // not defined in Facade but exists in the complex class // so call the one in the complex class facade.alreadySimple(); ``` The options that come to mind are: **Option 1:** Write a class holding an instance of the complex class, implement the simplified versions of the complex methods, and implement the simple ones with direct delegation: ``` class Facade { private PowerfulButComplexClass realWorker = // initialize public void simplified() { // do stuff } public void alreadySimple() { realWorker.alreadySimple(); } // more stuff } ``` But with this approach I will need to implement all the simple methods with just a single delegation statement. So I need to write more code (it is simple though). **Option 2:** Extend the complex class and implement the simplified methods, but then both the simple and complex versions of these methods will be visible. In Python I can achieve similar behaviour like this: ``` class PowerfulButComplexClass(object): def alreadySimple(self): # really simple # some very complex methods class Facade(object): def __init__(self): self.realworker = PowerfulButComplexClass() def simplified(self): # simplified version of complex methods in PowerfulButComplexClass def __getattribute__(self, key): """For the rest of PowerfulButComplexClass' methods, use them as they are because they are simple enough. 
""" try: # return simplified version if we did attr = object.__getattribute__(self, key) except AttributeError: # return PowerfulButComplexClass' version because it is already simple attr = object.__getattribute__(self.data, key) return attr obj = Facace() obj.simplified() # call the one we have defined obj.alreadySimple( # call the one defined in PowerfulButComplexClass ``` *So what is the Java way to achieve this?* Edit: What I mean by "simplify": A complex method can be either a method with too many arguments ``` void complex method(arg1, arg2, ..., argn) // n is sufficiently large ``` or a set of related methods that will almost always called together to achieve a single task ``` outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); actualResult = someAnotherMethod(outArg2); ``` so we want to have something like this: ``` String simplified(arg1, arg2, arg3) { outArg1 = someMethod(arg1, arg2); outArg2 = someOtherMethod(outArg1, arg3); return someAnotherMethod(outArg2); } ```
2012/05/08
[ "https://Stackoverflow.com/questions/10505851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932278/" ]
I think you've already called it. I would go with Option 1. It provides the most flexibility given the rigidity of Java. I prefer it because it favors composition over inheritance. Although this creates more code, I find designs like this are generally easier to modify in the long run. Inheritance should only be used to model strict "is a" relationships where the subclass necessarily has **all the properties and behaviors of the base class**. If you're using inheritance for anything else, you're asking for trouble. Finally, I don't buy into the idea of "Less Is More" (insert incredibly concise, indecipherable Perl example here). I buy into the principle of "Code should be as simple as it needs to be and no simpler".
This is ugly in Java, but you can write a function that takes the name of the method you want to call, and a parameter list, and use reflection to find the appropriate method to call. This will be conceptually similar to how you'd do it in Python, except much uglier.
26,747,308
I have a Python script, say `myscript.py`, that uses relative module imports, ie `from .. import module1`, where my project layout is as follows: ``` project + outer_module - __init__.py - module1.py + inner_module - __init__.py - myscript.py - myscript.sh ``` And I have a Bash script, say `myscript.sh`, which is a wrapper for my python script, shown below: ``` #!/bin/bash python -m outer_module.inner_module.myscript $@ ``` This works to execute `myscript.py` and forwards the arguments to my script as desired, but it only works when I call `./outer_module/inner_module/myscript.sh` from within the `project` directory shown above. How can I make this script work from anywhere? For example, how can I make this work for a call like `bash /root/to/my/project/outer_module/inner_module/myscript.sh`? Here are my attempts: When executing `myscript.sh` from anywhere else, I get the error: `No module named outer_module.inner_module`. Then I tried another approach to execute the bash script from anywhere, by replacing `myscript.sh` with: ``` #!/bin/bash scriptdir=`dirname "$BASH_SOURCE"` python $scriptdir/myscript.py $@ ``` When I execute `myscript.sh` as shown above, I get the following: ``` Traceback (most recent call last): File "./inner_module/myscript.py", line 10, in <module> from .. import module1 ValueError: Attempted relative import in non-package ``` Which is due to the relative import on the first line in `myscript.py`, as mentioned earlier, which is `from .. import module1`.
2014/11/04
[ "https://Stackoverflow.com/questions/26747308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1884158/" ]
You want to create a custom filter. It can be done very easily. This filter tests the name of each friend against the search string; if the search is empty, all friends are returned. ``` app.filter('filterFriends', function() { return function(friends, search) { if (search === "") return friends; return friends.filter(function(friend) { return friend.name.match(search); }); } }); ``` This should give you a good idea of how to create filters, and you can customize it as you like. In your HTML you would use it like this: ``` <input ng-model="search"> <tr ng-repeat="friend in friends | filterFriends:search"> <td>{{friend.name}}</td> <td>{{friend.phone}}</td> </tr> ``` Here is a [JSFiddle](http://jsfiddle.net/zargyle/dxzfku44/).
I think it would be easier to just do: ``` <tr ng-repeat="friend in friends" ng-show="([friend] | filter:search).length > 0"> <td>{{friend.name}}</td> <td>{{friend.phone}}</td> </tr> ``` Also this way the filter will apply to all the properties, not just name.
55,374,909
I am new to Python, so I'm trying to read a CSV with 700 lines, including a header, and get a list with the unique values of the first CSV column. Sample CSV: ``` SKU;PRICE;SUPPLIER X100;100;ABC X100;120;ADD X101;110;ABV X102;100;ABC X102;105;ABV X100;119;ABG ``` I used the example here [How to create a list in Python with the unique values of a CSV file?](https://stackoverflow.com/questions/24441606/how-to-create-a-list-in-python-with-the-unique-values-of-a-csv-file) so I did the following: ``` import csv mainlist=[] with open('final_csv.csv', 'r', encoding='utf-8') as csvf: rows = csv.reader(csvf, delimiter=";") for row in rows: if row[0] not in rows: mainlist.append(row[0]) print(mainlist) ``` I noticed while debugging that rows is 1 line, not 700, and I get only the ['SKU'] field. What did I do wrong? Thank you
2019/03/27
[ "https://Stackoverflow.com/questions/55374909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
A solution using pandas. You'll need to call the `unique` method on the correct column; this will return a pandas Series with the unique values in that column, which you then convert to a list using the `tolist` method. An example on the `SKU` column is below. ``` import pandas as pd df = pd.read_csv('final_csv.csv', sep=";") sku_unique = df['SKU'].unique().tolist() ``` If you don't know / care for the column name you can use `iloc` with the column's position. Note that column numbering starts at 0: ``` df.iloc[:,0].unique().tolist() ``` If the intent is to get only the values occurring once, you can use the `value_counts` method. This will create a Series with the values of `SKU` as the index and the counts as values; you must then convert the index of the Series to a list in a similar manner. Using the first example: ``` import pandas as pd df = pd.read_csv('final_csv.csv', sep=";") sku_counts = df['SKU'].value_counts() sku_single_counts = sku_counts[sku_counts == 1].index.tolist() ```
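To make the pandas approach above concrete, here is a self-contained run on the question's sample data, with `io.StringIO` standing in for `final_csv.csv`:

```python
import io

import pandas as pd

# The sample CSV from the question, inlined so this runs as-is.
sample = (
    "SKU;PRICE;SUPPLIER\n"
    "X100;100;ABC\n"
    "X100;120;ADD\n"
    "X101;110;ABV\n"
    "X102;100;ABC\n"
    "X102;105;ABV\n"
    "X100;119;ABG\n"
)
df = pd.read_csv(io.StringIO(sample), sep=";")

# Unique values of the first column:
print(df["SKU"].unique().tolist())  # → ['X100', 'X101', 'X102']

# Values occurring exactly once:
counts = df["SKU"].value_counts()
print(counts[counts == 1].index.tolist())  # → ['X101']
```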
A solution using neither `pandas` nor `csv`: ```py lines = open('file.csv', 'r').read().splitlines()[1:] col0 = [v.split(';')[0] for v in lines] uniques = filter(lambda x: col0.count(x) == 1, col0) ``` or, using `map` (but less readable): ```py col0 = list(map(lambda line: line.split(';')[0], open('file.csv', 'r').read().splitlines()[1:])) uniques = filter(lambda x: col0.count(x) == 1, col0) ```
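For what it's worth, the bug in the asker's own loop is the membership test `if row[0] not in rows`: it checks against (and exhausts) the reader iterator itself rather than the result list, which is why only the header's `'SKU'` was collected. A fixed version using only the standard library, with the sample data inlined so it runs standalone:

```python
import csv
import io

# Stand-in for final_csv.csv, taken from the question's sample.
sample = (
    "SKU;PRICE;SUPPLIER\n"
    "X100;100;ABC\n"
    "X100;120;ADD\n"
    "X101;110;ABV\n"
    "X102;100;ABC\n"
    "X102;105;ABV\n"
    "X100;119;ABG\n"
)
mainlist = []
rows = csv.reader(io.StringIO(sample), delimiter=";")
next(rows)  # skip the header row
for row in rows:
    if row[0] not in mainlist:  # compare against the list, not the reader
        mainlist.append(row[0])
print(mainlist)  # → ['X100', 'X101', 'X102']
```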
55,374,909
I am new to Python, so I'm trying to read a CSV with 700 lines, including a header, and get a list with the unique values of the first CSV column. Sample CSV: ``` SKU;PRICE;SUPPLIER X100;100;ABC X100;120;ADD X101;110;ABV X102;100;ABC X102;105;ABV X100;119;ABG ``` I used the example here [How to create a list in Python with the unique values of a CSV file?](https://stackoverflow.com/questions/24441606/how-to-create-a-list-in-python-with-the-unique-values-of-a-csv-file) so I did the following: ``` import csv mainlist=[] with open('final_csv.csv', 'r', encoding='utf-8') as csvf: rows = csv.reader(csvf, delimiter=";") for row in rows: if row[0] not in rows: mainlist.append(row[0]) print(mainlist) ``` I noticed while debugging that rows is 1 line, not 700, and I get only the ['SKU'] field. What did I do wrong? Thank you
2019/03/27
[ "https://Stackoverflow.com/questions/55374909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
If you want the unique values of the first column, you could modify your code to use a `set` instead of a `list`. Maybe like this: ``` import collections import csv filename = 'final_csv.csv' sku_list = [] with open(filename, 'r', encoding='utf-8') as f: csv_reader = csv.reader(f, delimiter=";") for i, row in enumerate(csv_reader): if i == 0: # skip the header continue try: sku = row[0] sku_list.append(sku) except IndexError: pass print('All SKUs:') print(sku_list) sku_set = set(sku_list) print('SKUs after removing duplicates:') print(sku_set) c = collections.Counter(sku_list) sku_list_2 = [k for k, v in c.items() if v == 1] print('SKUs that appear only once:') print(sku_list_2) with open('output.csv', 'w') as f: for sku in sorted(sku_set): f.write('{}\n'.format(sku)) ```
A solution using neither `pandas` nor `csv`: ```py lines = open('file.csv', 'r').read().splitlines()[1:] col0 = [v.split(';')[0] for v in lines] uniques = filter(lambda x: col0.count(x) == 1, col0) ``` or, using `map` (but less readable): ```py col0 = list(map(lambda line: line.split(';')[0], open('file.csv', 'r').read().splitlines()[1:])) uniques = filter(lambda x: col0.count(x) == 1, col0) ```
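Note that the answers above solve two slightly different problems: `col0.count(x) == 1` keeps SKUs that occur *exactly once*, while de-duplicating keeps *every* SKU once. A short standard-library illustration of the difference, with the sample data inlined:

```python
# First column of the question's sample CSV, header already dropped.
lines = [
    "X100;100;ABC",
    "X100;120;ADD",
    "X101;110;ABV",
    "X102;100;ABC",
    "X102;105;ABV",
    "X100;119;ABG",
]
col0 = [line.split(";")[0] for line in lines]

# De-duplicated, preserving first-seen order:
print(list(dict.fromkeys(col0)))  # → ['X100', 'X101', 'X102']

# Only the values occurring exactly once:
print([x for x in dict.fromkeys(col0) if col0.count(x) == 1])  # → ['X101']
```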
42,854,598
If I reset the index of my pandas DataFrame with "inplace=True" (following [the documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html)) it returns a class 'NoneType'. If I reset the index with "inplace=False" it returns the DataFrame with the new index. Why? ``` print(type(testDataframe)) print(testDataframe.head()) ``` returns: ``` <class 'pandas.core.frame.DataFrame'> ALandbouwBosbouwEnVisserij AantalInkomensontvangers AantalInwoners \ 0 73780.0 None 16979120 1 290.0 None 25243 2 20.0 None 3555 ``` Set\_index returns a new index: ``` testDataframe = testDataframe.set_index(['Codering']) print(type(testDataframe)) print(testDataframe.head()) ``` returns ``` <class 'pandas.core.frame.DataFrame'> ALandbouwBosbouwEnVisserij AantalInkomensontvangers \ Codering NL00 73780.0 None GM1680 290.0 None WK168000 20.0 None BU16800000 15.0 None ``` But the same set\_index with "inplace=True": ``` testDataframe = testDataframe.set_index(['Codering'], inplace=True) print(type(testDataframe)) print(testDataframe.head()) ``` returns ``` <class 'NoneType'> --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-0d6304ebaae1> in <module>() ``` Version info: ``` python: 3.4.4.final.0 python-bits: 64 pandas: 0.18.1 numpy: 1.11.1 IPython: 5.2.2 ```
2017/03/17
[ "https://Stackoverflow.com/questions/42854598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6421290/" ]
Ok, now I understand, thanks for the comments! So inplace=True should return None **and** make the change in the original object. It seemed that on listing the dataframe again, no changes were present. But of course I should not have **assigned the return value** to the dataframe, i.e. ``` testDataframe = testDataframe.set_index(['Codering'], inplace=True) ``` should just be ``` testDataframe.set_index(['Codering'], inplace=True) ``` or ``` testDataframe = testDataframe.set_index(['Codering'], inplace=False) ``` otherwise the return value of the inplace index change (None) is the new content of the dataframe, which is of course not the intent. I am sure this is obvious to many and now it is to me as well, but it wasn't without your help, thanks!!!
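The point above is easy to reproduce on a toy frame (the column names here are just illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Codering": ["NL00", "GM1680"], "x": [1, 2]})

# inplace=True mutates df and returns None ...
ret = df.set_index("Codering", inplace=True)
print(ret)                # → None
print(df.index.tolist())  # → ['NL00', 'GM1680']

# ... so reassigning that return value would clobber the frame:
# df = df.set_index("Codering", inplace=True)  # df would become None
```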
With `inplace=True`, the change is always made to the original DataFrame and `None` is returned. If you want a returned DataFrame, remove the second parameter, i.e. `inplace=True`: ``` new_data_frame = testDataframe.set_index(['Codering']) ``` Then ``` print(new_data_frame) ```
32,303,006
I need to compute the integral of the following function within ranges that start as low as `-150`: ``` import numpy as np from scipy.special import ndtr def my_func(x): return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2)) ``` The problem is that this part of the function ``` np.exp(x ** 2) ``` tends toward infinity -- I get `inf` for values of `x` less than approximately `-26`. And this part of the function ``` 2 * ndtr(x * np.sqrt(2)) ``` which is equivalent to ``` from scipy.special import erf 1 + erf(x) ``` tends toward 0. So, a very, very large number times a very, very small number should give me a reasonably sized number -- but, instead of that, `python` is giving me `nan`. What can I do to circumvent this problem?
2015/08/31
[ "https://Stackoverflow.com/questions/32303006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2623899/" ]
I think a combination of @askewchan's solution and [`scipy.special.log_ndtr`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.log_ndtr.html) will do the trick: ``` from scipy.special import log_ndtr _log2 = np.log(2) _sqrt2 = np.sqrt(2) def my_func(x): return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2)) def my_func2(x): return np.exp(x * x + _log2 + log_ndtr(x * _sqrt2)) print(my_func(-150)) # nan print(my_func2(-150) # 0.0037611803122451198 ``` For `x <= -20`, `log_ndtr(x)` [uses a Taylor series expansion of the error function to iteratively compute the log CDF directly](https://github.com/scipy/scipy/blob/07c263874ca93312d9a54c1249ff3a7b9a88226e/scipy/special/cephes/ndtr.c#L296-L338), which is much more numerically stable than simply taking `log(ndtr(x))`. --- Update ====== As you mentioned in the comments, the `exp` can also overflow if `x` is sufficiently large. Whilst you could work around this using `mpmath.exp`, a simpler and faster method is to cast up to a `np.longdouble` which, on my machine, can represent values up to 1.189731495357231765e+4932: ``` import mpmath def my_func3(x): return mpmath.exp(x * x + _log2 + log_ndtr(x * _sqrt2)) def my_func4(x): return np.exp(np.float128(x * x + _log2 + log_ndtr(x * _sqrt2))) print(my_func2(50)) # inf print(my_func3(50)) # mpf('1.0895188633566085e+1086') print(my_func4(50)) # 1.0895188633566084842e+1086 %timeit my_func3(50) # The slowest run took 8.01 times longer than the fastest. This could mean that # an intermediate result is being cached 100000 loops, best of 3: 15.5 µs per # loop %timeit my_func4(50) # The slowest run took 11.11 times longer than the fastest. This could mean # that an intermediate result is being cached 100000 loops, best of 3: 2.9 µs # per loop ```
Not sure how helpful this will be, but here are a couple of thoughts that are too long for a comment. You need to calculate the integral of [![2 \cdot e^{x^2} \cdot f(\sqrt{2}x)](https://i.stack.imgur.com/Tt5t4.gif)](https://i.stack.imgur.com/Tt5t4.gif), which you [correctly identified](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ndtr.html#scipy.special.ndtr) would be [![e^{x^2}*(1 + erf(x))](https://i.stack.imgur.com/qgWkq.gif)](https://i.stack.imgur.com/qgWkq.gif). Opening the brackets, you can integrate both parts of the summation. [![enter image description here](https://i.stack.imgur.com/EcbfJ.png)](https://i.stack.imgur.com/EcbfJ.png) Scipy has this [imaginary error function implemented](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfi.html#scipy.special.erfi). The second part is harder: [![enter image description here](https://i.stack.imgur.com/nGtc7.png)](https://i.stack.imgur.com/nGtc7.png) This is a [generalized hypergeometric function](https://en.wikipedia.org/wiki/Generalized_hypergeometric_function). Sadly it looks like scipy [does not have an implementation of it](http://docs.scipy.org/doc/scipy/reference/special.html#hypergeometric-functions), but [this package](https://code.google.com/p/mpmath/) claims it does. Here I used indefinite integrals without constants; knowing the `from`/`to` values, it is clear how to use definite ones.
32,303,006
I need to compute the integral of the following function within ranges that start as low as `-150`: ``` import numpy as np from scipy.special import ndtr def my_func(x): return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2)) ``` The problem is that this part of the function ``` np.exp(x ** 2) ``` tends toward infinity -- I get `inf` for values of `x` less than approximately `-26`. And this part of the function ``` 2 * ndtr(x * np.sqrt(2)) ``` which is equivalent to ``` from scipy.special import erf 1 + erf(x) ``` tends toward 0. So, a very, very large number times a very, very small number should give me a reasonably sized number -- but, instead of that, `python` is giving me `nan`. What can I do to circumvent this problem?
2015/08/31
[ "https://Stackoverflow.com/questions/32303006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2623899/" ]
There already is such a function: [`erfcx`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfcx.html). I think `erfcx(-x)` should give you the integrand you want (note that `1+erf(x)=erfc(-x)`).
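A quick numeric check of this suggestion: `erfcx(z)` computes `exp(z**2) * erfc(z)`, so `erfcx(-x)` equals the integrand `exp(x**2) * (1 + erf(x))` exactly, without ever forming the overflowing and underflowing factors separately.

```python
import numpy as np
from scipy.special import erfcx, ndtr

def naive(x):
    # The original formulation: inf * 0 -> nan for large negative x.
    return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2))

print(naive(-150.0))  # → nan
print(erfcx(150.0))   # stable, ≈ 0.00376118 (matches the log_ndtr answer above)
```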
Not sure how helpful this will be, but here are a couple of thoughts that are too long for a comment. You need to calculate the integral of [![2 \cdot e^{x^2} \cdot f(\sqrt{2}x)](https://i.stack.imgur.com/Tt5t4.gif)](https://i.stack.imgur.com/Tt5t4.gif), which you [correctly identified](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ndtr.html#scipy.special.ndtr) would be [![e^{x^2}*(1 + erf(x))](https://i.stack.imgur.com/qgWkq.gif)](https://i.stack.imgur.com/qgWkq.gif). Opening the brackets, you can integrate both parts of the summation. [![enter image description here](https://i.stack.imgur.com/EcbfJ.png)](https://i.stack.imgur.com/EcbfJ.png) Scipy has this [imaginary error function implemented](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfi.html#scipy.special.erfi). The second part is harder: [![enter image description here](https://i.stack.imgur.com/nGtc7.png)](https://i.stack.imgur.com/nGtc7.png) This is a [generalized hypergeometric function](https://en.wikipedia.org/wiki/Generalized_hypergeometric_function). Sadly it looks like scipy [does not have an implementation of it](http://docs.scipy.org/doc/scipy/reference/special.html#hypergeometric-functions), but [this package](https://code.google.com/p/mpmath/) claims it does. Here I used indefinite integrals without constants; knowing the `from`/`to` values, it is clear how to use definite ones.
46,409,846
I am trying to implement the `Kmeans` algorithm in Python using `cosine distance` instead of Euclidean distance as the distance metric. I understand that using a different distance function can be fatal and should be done carefully. Using cosine distance as the metric forces me to change the average function (the average in accordance with cosine distance must be an element-by-element average of the normalized vectors). I have seen [this](https://gist.github.com/jaganadhg/b3f6af86ad99bf6e9bb7be21e5baa1b5) elegant solution of manually overriding the distance function of sklearn, and I want to use the same technique to override the averaging section of the code, but I couldn't find it. Does anyone know how it can be done? How critical is it that the distance metric doesn't satisfy the triangle inequality? If anyone knows a different efficient implementation of k-means that uses the cosine metric or accepts arbitrary distance and averaging functions, it would also be really helpful. Thank you very much! **Edit:** After using the angular distance instead of cosine distance, the code looks something like this: ``` def KMeans_cosine_fit(sparse_data, nclust = 10, njobs=-1, randomstate=None): # Manually override euclidean def euc_dist(X, Y = None, Y_norm_squared = None, squared = False): #return pairwise_distances(X, Y, metric = 'cosine', n_jobs = 10) return np.arccos(cosine_similarity(X, Y))/np.pi k_means_.euclidean_distances = euc_dist kmeans = k_means_.KMeans(n_clusters = nclust, n_jobs = njobs, random_state = randomstate) _ = kmeans.fit(sparse_data) return kmeans ``` I noticed (by mathematical calculation) that if the vectors are normalized, the standard average works well for the angular metric. As far as I understand, I have to change `_mini_batch_step()` in [k\_means\_.py](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cluster/k_means_.py). But the function is pretty complicated and I couldn't understand how to do it. Does anyone know of an alternative solution?
Or maybe, does anyone know how I can edit this function so that it always forces the centroids to be normalized?
2017/09/25
[ "https://Stackoverflow.com/questions/46409846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8671126/" ]
So it turns out you can just normalise X to be of unit length and use K-means as normal. The reason being if X1 and X2 are unit vectors, looking at the following equation, the term inside the brackets in the last line is cosine distance. [![vect_dist](https://i.stack.imgur.com/7amCq.png)](https://i.stack.imgur.com/7amCq.png) So in terms of using k-means, simply do: ```py length = np.sqrt((X**2).sum(axis=1))[:,None] X = X / length kmeans = KMeans(n_clusters=10, random_state=0).fit(X) ``` And if you need the centroids and distance matrix do: ```py len_ = np.sqrt(np.square(kmeans.cluster_centers_).sum(axis=1)[:,None]) centers = kmeans.cluster_centers_ / len_ dist = 1 - np.dot(centers, X.T) # K x N matrix of cosine distances ``` Notes: ------ * Just realised that you are trying to minimise the distance between the mean vector of the cluster, and its constituents. The mean vector has length of less than one when you simply average the vectors. But in practice, it's still worth running the normal sklearn algorithm and checking the length of the mean vector. In my case the mean vectors were close to unit length (averaging around 0.9, but this depends on how dense your data is). TLDR: Use the [spherecluster](https://github.com/jasonlaska/spherecluster) package as @σηγ pointed out.
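The algebra in the image above reduces to the identity ‖u − v‖² = 2·(1 − cos(u, v)) for unit vectors u, v, which is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
U = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-length rows

for i in range(len(U) - 1):
    lhs = np.sum((U[i] - U[i + 1]) ** 2)  # squared Euclidean distance
    rhs = 2.0 * (1.0 - U[i] @ U[i + 1])   # 2 * cosine distance
    assert abs(lhs - rhs) < 1e-12
print("identity holds")  # → identity holds
```

Because the two quantities are monotonically related, Euclidean k-means on the normalized rows makes the same assignment decisions as cosine-distance k-means would.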
Unfortunately, no. Sklearn's current implementation of k-means uses only Euclidean distances. The reason is that k-means involves computing cluster centers and assigning each sample to the closest center, and only with Euclidean distance does the mean of the samples act as the center. If you want to use k-means with cosine distance, you need to make your own function or class. Or, try another clustering algorithm, such as DBSCAN.
46,409,846
I am trying to implement the `Kmeans` algorithm in Python using `cosine distance` instead of Euclidean distance as the distance metric. I understand that using a different distance function can be fatal and should be done carefully. Using cosine distance as the metric forces me to change the average function (the average in accordance with cosine distance must be an element-by-element average of the normalized vectors). I have seen [this](https://gist.github.com/jaganadhg/b3f6af86ad99bf6e9bb7be21e5baa1b5) elegant solution of manually overriding the distance function of sklearn, and I want to use the same technique to override the averaging section of the code, but I couldn't find it. Does anyone know how it can be done? How critical is it that the distance metric doesn't satisfy the triangle inequality? If anyone knows a different efficient implementation of k-means that uses the cosine metric or accepts arbitrary distance and averaging functions, it would also be really helpful. Thank you very much! **Edit:** After using the angular distance instead of cosine distance, the code looks something like this: ``` def KMeans_cosine_fit(sparse_data, nclust = 10, njobs=-1, randomstate=None): # Manually override euclidean def euc_dist(X, Y = None, Y_norm_squared = None, squared = False): #return pairwise_distances(X, Y, metric = 'cosine', n_jobs = 10) return np.arccos(cosine_similarity(X, Y))/np.pi k_means_.euclidean_distances = euc_dist kmeans = k_means_.KMeans(n_clusters = nclust, n_jobs = njobs, random_state = randomstate) _ = kmeans.fit(sparse_data) return kmeans ``` I noticed (by mathematical calculation) that if the vectors are normalized, the standard average works well for the angular metric. As far as I understand, I have to change `_mini_batch_step()` in [k\_means\_.py](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cluster/k_means_.py). But the function is pretty complicated and I couldn't understand how to do it. Does anyone know of an alternative solution?
Or maybe, does anyone know how I can edit this function so that it always forces the centroids to be normalized?
2017/09/25
[ "https://Stackoverflow.com/questions/46409846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8671126/" ]
You can normalize your data and then use KMeans. ``` from sklearn import preprocessing from sklearn.cluster import KMeans kmeans = KMeans().fit(preprocessing.normalize(X)) ```
Unfortunately, no. Sklearn's current implementation of k-means uses only Euclidean distances. The reason is that k-means involves computing cluster centers and assigning each sample to the closest center, and only with Euclidean distance does the mean of the samples act as the center. If you want to use k-means with cosine distance, you need to make your own function or class. Or, try another clustering algorithm, such as DBSCAN.
46,409,846
I am trying to implement the `Kmeans` algorithm in Python using `cosine distance` instead of Euclidean distance as the distance metric. I understand that using a different distance function can be fatal and should be done carefully. Using cosine distance as the metric forces me to change the average function (the average in accordance with cosine distance must be an element-by-element average of the normalized vectors). I have seen [this](https://gist.github.com/jaganadhg/b3f6af86ad99bf6e9bb7be21e5baa1b5) elegant solution of manually overriding the distance function of sklearn, and I want to use the same technique to override the averaging section of the code, but I couldn't find it. Does anyone know how it can be done? How critical is it that the distance metric doesn't satisfy the triangle inequality? If anyone knows a different efficient implementation of k-means that uses the cosine metric or accepts arbitrary distance and averaging functions, it would also be really helpful. Thank you very much! **Edit:** After using the angular distance instead of cosine distance, the code looks something like this: ``` def KMeans_cosine_fit(sparse_data, nclust = 10, njobs=-1, randomstate=None): # Manually override euclidean def euc_dist(X, Y = None, Y_norm_squared = None, squared = False): #return pairwise_distances(X, Y, metric = 'cosine', n_jobs = 10) return np.arccos(cosine_similarity(X, Y))/np.pi k_means_.euclidean_distances = euc_dist kmeans = k_means_.KMeans(n_clusters = nclust, n_jobs = njobs, random_state = randomstate) _ = kmeans.fit(sparse_data) return kmeans ``` I noticed (by mathematical calculation) that if the vectors are normalized, the standard average works well for the angular metric. As far as I understand, I have to change `_mini_batch_step()` in [k\_means\_.py](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cluster/k_means_.py). But the function is pretty complicated and I couldn't understand how to do it. Does anyone know of an alternative solution?
Or maybe, does anyone know how I can edit this function so that it always forces the centroids to be normalized?
2017/09/25
[ "https://Stackoverflow.com/questions/46409846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8671126/" ]
So it turns out you can just normalise X to be of unit length and use K-means as normal. The reason being if X1 and X2 are unit vectors, looking at the following equation, the term inside the brackets in the last line is cosine distance. [![vect_dist](https://i.stack.imgur.com/7amCq.png)](https://i.stack.imgur.com/7amCq.png) So in terms of using k-means, simply do: ```py length = np.sqrt((X**2).sum(axis=1))[:,None] X = X / length kmeans = KMeans(n_clusters=10, random_state=0).fit(X) ``` And if you need the centroids and distance matrix do: ```py len_ = np.sqrt(np.square(kmeans.cluster_centers_).sum(axis=1)[:,None]) centers = kmeans.cluster_centers_ / len_ dist = 1 - np.dot(centers, X.T) # K x N matrix of cosine distances ``` Notes: ------ * Just realised that you are trying to minimise the distance between the mean vector of the cluster, and its constituents. The mean vector has length of less than one when you simply average the vectors. But in practice, it's still worth running the normal sklearn algorithm and checking the length of the mean vector. In my case the mean vectors were close to unit length (averaging around 0.9, but this depends on how dense your data is). TLDR: Use the [spherecluster](https://github.com/jasonlaska/spherecluster) package as @σηγ pointed out.
You can normalize your data and then use KMeans. ``` from sklearn import preprocessing from sklearn.cluster import KMeans kmeans = KMeans().fit(preprocessing.normalize(X)) ```
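On the asker's follow-up about forcing the centroids to stay normalized: rather than patching `_mini_batch_step()`, one option is a small spherical k-means loop that re-projects each centroid onto the unit sphere after every update. The sketch below is plain NumPy, not sklearn's implementation; the farthest-point initialization and the toy two-cluster data are assumptions made for the demo.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50):
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Deterministic farthest-point initialization on the sphere.
    C = [U[0]]
    for _ in range(k - 1):
        sims = np.max(U @ np.array(C).T, axis=1)
        C.append(U[np.argmin(sims)])
    C = np.array(C)
    for _ in range(iters):
        labels = np.argmax(U @ C.T, axis=1)  # assign by max cosine similarity
        for j in range(k):
            members = U[labels == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)  # centroid forced back to unit norm
    return labels, C

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([5.0, 0.0], 0.1, size=(20, 2)),
               rng.normal([0.0, 5.0], 0.1, size=(20, 2))])
labels, C = spherical_kmeans(X, 2)
print(np.linalg.norm(C, axis=1))  # → [1. 1.]
print(labels[:3], labels[-3:])    # → [0 0 0] [1 1 1]
```

The centroid update (mean of the members, then renormalize) is exactly the "element-by-element average of the normalized vectors" the question describes.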
73,326,374
I want to solve an NP-hard combinatorial optimization problem using quantum optimization. In this regard, I am using the "classiq" Python library, which is a high-level API for making hardware-compatible quantum circuits, with an IBMQ backend. To use "classiq", you first have to authenticate your machine (according to the official "classiq" website: <https://docs.classiq.io/latest/getting-started/python-sdk/>). Unfortunately, whenever I ran the statement (classiq.authenticate()), I got a runtime error as shown in the attached figure (with the full traceback). [enter image description here](https://i.stack.imgur.com/vmqe2.png)
2022/08/11
[ "https://Stackoverflow.com/questions/73326374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17990100/" ]
The quotes around the `?` placeholders made the prepared statement treat them as string literals rather than bind parameters: ```java private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) " + "VALUES ('?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?')"; ``` It needed to be replaced with: ```java private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) " + "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"; ```
The answer to this question is that the `?` placeholders must not be wrapped in `'` quotes in the insert SQL. This is wrong: ``` private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) VALUES ('?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?') "; ``` The correct one is: ``` private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) "; ```
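The same mistake can be reproduced in any parameterized-query API. Here is the Python `sqlite3` equivalent of the JDBC fix above (the table and column names are illustrative, not the asker's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo2 (name TEXT)")

# Wrong: '?' in quotes is a string literal, so no bind slot remains
# and supplying a parameter raises a bindings-mismatch error.
try:
    con.execute("INSERT INTO demo2 (name) VALUES ('?')", ("Alice",))
except sqlite3.ProgrammingError as exc:
    print(exc)  # the statement uses 0 parameters, but 1 was supplied

# Right: a bare ? is a placeholder the driver binds safely.
con.execute("INSERT INTO demo2 (name) VALUES (?)", ("Alice",))
print(con.execute("SELECT name FROM demo2").fetchall())  # → [('Alice',)]
```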
63,532,068
I have my python3 path under `/usr/bin`, and my pip path under `/.local/bin` of my local repository. With some pip modules installed, I can run my code successfully through `python3 mycode.py`. But I tried to run the shell script: ``` #!/usr/bin echo "starting now..." nohup python3 mycode.py > log.txt 2>&1 & echo $! > pid echo "mycode.py started at pid: " cat pid ``` and my Python code, mycode.py: ``` #!/usr/bin/env python3 from aiocqhttp import CQHttp, Event, Message, MessageSegment ... ... ... ``` It gives me: ``` Traceback (most recent call last): File "mycode.py", line 2, in <module> from aiocqhttp import CQHttp, Event, Message, MessageSegment ModuleNotFoundError: No module named 'aiocqhttp' ``` I tried to replace the interpreter of the shell script with my local path, but it doesn't help. How should I run my Python code using the shell script?
2020/08/22
[ "https://Stackoverflow.com/questions/63532068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9207332/" ]
Just use: ``` MsgBox Range("B1:B19").SpecialCells(xlCellTypeBlanks).Address ```
This solution adapts your code. ``` Dim cell As Range Dim emptyStr As String emptyStr = "" For Each cell In Range("B1:B19") If IsEmpty(cell) Then _ emptyStr = emptyStr & cell.Address(0, 0) & ", " Next cell If emptyStr <> "" Then MsgBox Left(emptyStr, Len(emptyStr) - 2) ``` If the `cell` is empty, it stores the address in `emptyStr`. The `if` condition can be condensed as `isEmpty` returns a Boolean.
63,532,068
I have my python3 path under `/usr/bin`, and my pip path under `/.local/bin` of my local repository. With some pip modules installed, I can run my code successfully through `python3 mycode.py`. But I tried to run the shell script: ``` #!/usr/bin echo "starting now..." nohup python3 mycode.py > log.txt 2>&1 & echo $! > pid echo "mycode.py started at pid: " cat pid ``` and my Python code, mycode.py: ``` #!/usr/bin/env python3 from aiocqhttp import CQHttp, Event, Message, MessageSegment ... ... ... ``` It gives me: ``` Traceback (most recent call last): File "mycode.py", line 2, in <module> from aiocqhttp import CQHttp, Event, Message, MessageSegment ModuleNotFoundError: No module named 'aiocqhttp' ``` I tried to replace the interpreter of the shell script with my local path, but it doesn't help. How should I run my Python code using the shell script?
2020/08/22
[ "https://Stackoverflow.com/questions/63532068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9207332/" ]
Just use: ``` MsgBox Range("B1:B19").SpecialCells(xlCellTypeBlanks).Address ```
Please try this code. ``` Sub ListEmptyCells() Dim Rng As Range Dim List As Variant Dim Txt As String Set Rng = Range("B1:B19") On Error Resume Next List = Rng.SpecialCells(xlCellTypeBlanks).Address(0, 0) If Err Then Txt = "There are no empty cells in" & vbCr & _ "the examined range." Else Txt = "The following cells are empty." & vbCr & _ Join(Split(List, ","), vbCr) End If MsgBox Txt, vbInformation, "Range " & Rng.Address(0, 0) Err.Clear End Sub ``` It uses Excel's own SpecialCells(xlCellTypeBlanks), avoids the error which must occur if this method returns nothing, and presents the result in a legible format created by manipulating the range address if one is returned.
63,532,068
I have my python3 path under `/usr/bin`, and my pip path under `/.local/bin` of my local repository. With some pip modules installed, I can run my code successfully through `python3 mycode.py`. But I tried to run the shell script: ``` #!/usr/bin echo "starting now..." nohup python3 mycode.py > log.txt 2>&1 & echo $! > pid echo "mycode.py started at pid: " cat pid ``` and my Python code, mycode.py: ``` #!/usr/bin/env python3 from aiocqhttp import CQHttp, Event, Message, MessageSegment ... ... ... ``` It gives me: ``` Traceback (most recent call last): File "mycode.py", line 2, in <module> from aiocqhttp import CQHttp, Event, Message, MessageSegment ModuleNotFoundError: No module named 'aiocqhttp' ``` I tried to replace the interpreter of the shell script with my local path, but it doesn't help. How should I run my Python code using the shell script?
2020/08/22
[ "https://Stackoverflow.com/questions/63532068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9207332/" ]
Just use: ``` MsgBox Range("B1:B19").SpecialCells(xlCellTypeBlanks).Address ```
Find Blank Cells Using 'SpecialCells' ------------------------------------- * The 2nd Sub (`listBlanks`) is the main Sub. * The 1st Sub shows how to use the main Sub. * The 3rd Sub shows how `SpecialCells` works, which on one hand might be considered unreliable or on the other hand could be used to one's advantage. * After using the 3rd Sub, one could conclude that `SpecialCells` 'considers' only cells at the intersection of the `UsedRange` and the 'supplied' range. **The Code** ``` Option Explicit Sub testListBlanks() Const RangeAddress As String = "B1:B19" Dim rng As Range: Set rng = Range(RangeAddress) listBlanks rng listBlanks rng, True End Sub Sub listBlanks(SourceRange As Range, _ Optional useList As Boolean = False) Const proc As String = "'listBlanks'" On Error GoTo clearError Dim rng As Range: Set rng = SourceRange.SpecialCells(xlCellTypeBlanks) Dim msgString As String GoSub writeMsg MsgBox msgString, vbInformation, "Blank Cells Found ('" & proc & "')" Exit Sub writeMsg: msgString = "Blank Cells in Range '" & SourceRange.Address(False, False) _ & "'" & vbLf & vbLf & "The cells in range '" _ & rng.Address(False, False) & "' are blank." If useList Then GoSub writeList Return writeList: Dim cel As Range, i As Long, CellList As String For Each cel In rng.Cells CellList = CellList & vbLf & cel.Address(False, False) Next cel msgString = msgString & vbLf & vbLf _ & "The range contains the following " & rng.Cells.Count _ & " empty cells:" & vbLf & CellList Return clearError: If Err.Number = 1004 And Err.Description = "No cells were found." Then MsgBox "No blank cells in range '" & SourceRange.Address(False, False) _ & "' were found.", vbInformation, "No Blanks ('" & proc & "')" Exit Sub Else MsgBox "An unexpected error occurred." & vbLf _ & "Run-time error '" & Err.Number & "': " & Err.Description, _ vbCritical, "Error in " & proc End If End Sub Sub testUsedRangeAndSpecialCells() Const wsName As String = "Sheet2" Dim wb As Workbook: Set wb = ThisWorkbook Dim ws As Worksheet: Set ws = wb.Worksheets(wsName) With ws .Range("A:B").ClearContents Debug.Print .UsedRange.Address .Cells(1, 1).Value = 1 Debug.Print .UsedRange.Address .Cells(1, 2).Value = 2 Debug.Print .UsedRange.Address .Cells(2, 1).Value = 1 Debug.Print .UsedRange.Address .Cells(2, 2).Value = 2 Debug.Print .UsedRange.Address .Cells(2, 3).Value = 3 Debug.Print .UsedRange.Address .Cells(2, 3).ClearContents Debug.Print .UsedRange.Address .Cells(1, 2).ClearContents Debug.Print .Columns("B").SpecialCells(xlCellTypeBlanks).Address Dim rng As Range: Set rng = .Columns("C") Debug.Print rng.Address On Error Resume Next Set rng = rng.SpecialCells(xlCellTypeBlanks) If Err.Number <> 0 Then MsgBox "We know that all cells are blank in range '" _ & rng.Address(False, False) & "', but 'SpecialCells' " _ & "doesn't consider them since they are not part of 'UsedRange'." Debug.Print "No blank cells (not quite)" Else Debug.Print rng.Address End If On Error GoTo 0 .Cells(3, 4).Value = 4 Set rng = rng.SpecialCells(xlCellTypeBlanks) Debug.Print rng.Address(False, False) End With End Sub ``` **The result of the 3rd Sub (`testUsedRangeAndSpecialCells`)** ``` $A$1 $A$1 $A$1:$B$1 $A$1:$B$2 $A$1:$B$2 $A$1:$C$2 $A$1:$B$2 $B$1 $C:$C No blank cells (not quite) C1:C3 ```
63,532,068
I have my python3 path under `/usr/bin`, and my pip path under `/.local/bin` of my local repository. With some pip modules installed, I can run my code successfully through `python3 mycode.py`. But I tried to run the shell script: ``` #!/usr/bin echo "starting now..." nohup python3 mycode.py > log.txt 2>&1 & echo $! > pid echo "mycode.py started at pid: " cat pid ``` and my python code: mycode.py ``` #!/usr/bin/env python3 from aiocqhttp import CQHttp, Event, Message, MessageSegment ... ... ... ``` It gives me: ``` Traceback (most recent call last): File "mycode.py", line 2, in <module> from aiocqhttp import CQHttp, Event, Message, MessageSegment ModuleNotFoundError: No module named 'aiocqhttp' ``` I tried to replace the interpreter of the shell script with my local path, but it doesn't help. How should I run my Python code using the shell script?
2020/08/22
[ "https://Stackoverflow.com/questions/63532068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9207332/" ]
Just use: ``` MsgBox Range("B1:B19").SpecialCells(xlCellTypeBlanks).Address ```
**List blanks via dynamic arrays and spill range reference** Using the new dynamic array possibilities of **Microsoft 365** (writing e.g. to target `C1:C?` in section `b)`) ``` =$B$1:$B$19="" ``` and a so called ►**spill range** reference (as argument in the function `Textjoin()`, vers. 2019+ in section `c)`) ``` C1# ' note the `#` suffix! ``` you could code as follows: ``` Sub TestSpillRange() With Sheet1 'a) define range Dim rng As Range Set rng = .Range("B1:B19") 'b) check empty cell condition and enter boolean values into spill range C1# .Range("C1").Formula2 = "=" & rng.Address & "=""""" 'c) choose wanted values in spill range and connect them to result string Dim msg As Variant msg = Evaluate("TextJoin("","",true,if(C1#=true,""B""&row(C1#),""""))") MsgBox msg, vbInformation, "Empty cells" End With End Sub ```
52,302,767
On my PC, how can I listen for incoming connections as a DLNA server? On my TV there is the possibility to get media from a DLNA server; I would like to write a simple Python script that grants the TV access to my files. The two ends are: * LG webOS smartTV * macOS
2018/09/12
[ "https://Stackoverflow.com/questions/52302767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8954551/" ]
One project I've found is Cohen3. It does not look too active, but at least it supports Python 3: <https://github.com/opacam/Cohen3> There's also the Coherence project (it seems Cohen3 is somehow derived from it, not sure): <https://gitlab.digitalcourage.de/coherence>
There is already a server app that does this: [Serviio](http://serviio.org/). I am already using it on Windows 10 and on Raspberry Pi 2 and 3 and it works well; it is easy to set up through its web app, and you'll save the time of making your own solution.
10,496,649
Continued from [How to use wxSpellCheckerDialog in Django?](https://stackoverflow.com/questions/10474971/how-to-use-wxspellcheckerdialog-python-django/10476811#comment13562681_10476811) I have added spell checking to Django application using pyenchant. It works correctly when first run. But when I call it again (or after several runs) it gives following error. ![enter image description here](https://i.stack.imgur.com/IjzSy.png) > > PyAssertionError at /quiz/submit/ > > > C++ assertion "wxThread::IsMain()" > failed at ....\src\msw\evtloop.cpp(244) in wxEventLoop::Dispatch(): > only the main thread can process Windows messages > > > How to fix this?
2012/05/08
[ "https://Stackoverflow.com/questions/10496649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110394/" ]
You don't need wxPython to use pyEnchant. And you certainly shouldn't be using wx stuff with django. wxPython is for desktop GUIs, while django is a web app framework. As "uhz" pointed out, you can't call wxPython methods outside the main thread that wxPython runs in unless you use its threadsafe methods, such as wx.CallAfter. I don't know why you'd call wxPython from Django though.
It seems you are trying to use wx controls from inside Django code, is that correct? If so you are doing a very weird thing :) When you write a GUI application with wxPython there is one main thread which can process window messages; the main thread is defined as the one where wx.App was created. You are trying to do a UI thing from a non-UI thread. So probably at first run everything works (everything is performed in the GUI thread) but on the second attempt a different Python thread (spawned by Django?) is performing some illegal GUI actions. You could try using wx.CallAfter, which would execute the function passed as an argument in the GUI thread, but this is non-blocking. Also I've found something you might consider: [wxAnyThread](http://pypi.python.org/pypi/wxAnyThread). But I didn't use it and I don't know if it applies in your case.
55,585,079
I tried running this code in TensorFlow 2.0 (alpha): ```py import tensorflow_hub as hub @tf.function def elmo(texts): elmo_module = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True) return elmo_module(texts, signature="default", as_dict=True) embeds = elmo(tf.constant(["the cat is on the mat", "dogs are in the fog"])) ``` But I got this error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-c7f14c7ed0e9> in <module> 9 10 elmo(tf.constant(["the cat is on the mat", ---> 11 "dogs are in the fog"])) .../tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 417 # This is the first call of __call__, so we have to initialize. 418 initializer_map = {} --> 419 self._initialize(args, kwds, add_initializers_to=initializer_map) 420 if self._created_variables: 421 try: .../tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 361 self._concrete_stateful_fn = ( 362 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 363 *args, **kwds)) 364 365 def invalid_creator_scope(*unused_args, **unused_kwds): .../tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 1322 if self.input_signature: 1323 args, kwargs = None, None -> 1324 graph_function, _, _ = self._maybe_define_function(args, kwargs) 1325 return graph_function 1326 .../tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 1585 or call_context_key not in self._function_cache.missed): 1586 self._function_cache.missed.add(call_context_key) -> 1587 graph_function = self._create_graph_function(args, kwargs) 1588 self._function_cache.primary[cache_key] = graph_function 1589 return graph_function, args, kwargs .../tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 1518 
arg_names=arg_names, 1519 override_flat_arg_shapes=override_flat_arg_shapes, -> 1520 capture_by_value=self._capture_by_value), 1521 self._function_attributes) 1522 .../tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 705 converted_func) 706 --> 707 func_outputs = python_func(*func_args, **func_kwargs) 708 709 # invariant: `func_outputs` contains only Tensors, IndexedSlices, .../tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 314 # __wrapped__ allows AutoGraph to swap in a converted function. We give 315 # the function a weak reference to itself to avoid a reference cycle. --> 316 return weak_wrapped_fn().__wrapped__(*args, **kwds) 317 weak_wrapped_fn = weakref.ref(wrapped_fn) 318 .../tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 697 optional_features=autograph_options, 698 force_conversion=True, --> 699 ), args, kwargs) 700 701 # Wrapping around a decorator allows checks like tf_inspect.getargspec .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 355 356 if kwargs is not None: --> 357 result = converted_f(*effective_args, **kwargs) 358 else: 359 result = converted_f(*effective_args) /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp4v3g2d_1.py in tf__elmo(texts) 11 retval_ = None 12 print('Eager:', ag__.converted_call('executing_eagerly', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (), None)) ---> 13 elmo_module = ag__.converted_call('Module', hub, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), ('https://tfhub.dev/google/elmo/2',), {'trainable': True}) 14 do_return = True 15 retval_ = 
ag__.converted_call(elmo_module, None, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (texts,), {'signature': 'default', 'as_dict': True}) .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 252 if tf_inspect.isclass(f): 253 logging.log(2, 'Permanently whitelisted: %s: constructor', f) --> 254 return _call_unconverted(f, args, kwargs) 255 256 # Other built-in modules are permanently whitelisted. .../tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs) 174 175 if kwargs is not None: --> 176 return f(*args, **kwargs) 177 else: 178 return f(*args) .../tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags) 167 name=self._name, 168 trainable=self._trainable, --> 169 tags=self._tags) 170 # pylint: enable=protected-access 171 .../tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags) 338 trainable=trainable, 339 checkpoint_path=self._checkpoint_variables_path, --> 340 name=name) 341 342 def _export(self, path, variables_saver): .../tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name) 389 # TPU training code. 
390 with tf.init_scope(): --> 391 self._init_state(name) 392 393 def _init_state(self, name): .../tensorflow_hub/native_module.py in _init_state(self, name) 392 393 def _init_state(self, name): --> 394 variable_tensor_map, self._state_map = self._create_state_graph(name) 395 self._variable_map = recover_partitioned_variable_map( 396 get_node_map_from_tensor_map(variable_tensor_map)) .../tensorflow_hub/native_module.py in _create_state_graph(self, name) 449 meta_graph, 450 input_map={}, --> 451 import_scope=relative_scope_name) 452 453 # Build a list from the variable name in the module definition to the actual .../tensorflow/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs) 1443 """ # pylint: disable=g-doc-exception 1444 return _import_meta_graph_with_return_elements( -> 1445 meta_graph_or_file, clear_devices, import_scope, **kwargs)[0] 1446 1447 .../tensorflow/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs) 1451 """Import MetaGraph, and return both a saver and returned elements.""" 1452 if context.executing_eagerly(): -> 1453 raise RuntimeError("Exporting/importing meta graphs is not supported when " 1454 "eager execution is enabled. No graph exists when eager " 1455 "execution is enabled.") RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled. ```
2019/04/09
[ "https://Stackoverflow.com/questions/55585079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38626/" ]
In TensorFlow 2.0 you should be using `hub.load()` or `hub.KerasLayer()`. **[April 2019]** - For now only TensorFlow 2.0 modules are loadable via them. In the future many of the 1.x Hub modules should be loadable as well. For the 2.x-only modules you can see examples in the notebooks created for the modules [here](https://tfhub.dev/s?q=tf2-preview).
In TensorFlow 2, `hub.load` will work: ``` embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/3") ``` instead of ``` embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3") ``` (`hub.Module` is not accepted in TF2), so use `hub.load()`.
55,585,079
I tried running this code in TensorFlow 2.0 (alpha): ```py import tensorflow_hub as hub @tf.function def elmo(texts): elmo_module = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True) return elmo_module(texts, signature="default", as_dict=True) embeds = elmo(tf.constant(["the cat is on the mat", "dogs are in the fog"])) ``` But I got this error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-c7f14c7ed0e9> in <module> 9 10 elmo(tf.constant(["the cat is on the mat", ---> 11 "dogs are in the fog"])) .../tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 417 # This is the first call of __call__, so we have to initialize. 418 initializer_map = {} --> 419 self._initialize(args, kwds, add_initializers_to=initializer_map) 420 if self._created_variables: 421 try: .../tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 361 self._concrete_stateful_fn = ( 362 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 363 *args, **kwds)) 364 365 def invalid_creator_scope(*unused_args, **unused_kwds): .../tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 1322 if self.input_signature: 1323 args, kwargs = None, None -> 1324 graph_function, _, _ = self._maybe_define_function(args, kwargs) 1325 return graph_function 1326 .../tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 1585 or call_context_key not in self._function_cache.missed): 1586 self._function_cache.missed.add(call_context_key) -> 1587 graph_function = self._create_graph_function(args, kwargs) 1588 self._function_cache.primary[cache_key] = graph_function 1589 return graph_function, args, kwargs .../tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 1518 
arg_names=arg_names, 1519 override_flat_arg_shapes=override_flat_arg_shapes, -> 1520 capture_by_value=self._capture_by_value), 1521 self._function_attributes) 1522 .../tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 705 converted_func) 706 --> 707 func_outputs = python_func(*func_args, **func_kwargs) 708 709 # invariant: `func_outputs` contains only Tensors, IndexedSlices, .../tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 314 # __wrapped__ allows AutoGraph to swap in a converted function. We give 315 # the function a weak reference to itself to avoid a reference cycle. --> 316 return weak_wrapped_fn().__wrapped__(*args, **kwds) 317 weak_wrapped_fn = weakref.ref(wrapped_fn) 318 .../tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 697 optional_features=autograph_options, 698 force_conversion=True, --> 699 ), args, kwargs) 700 701 # Wrapping around a decorator allows checks like tf_inspect.getargspec .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 355 356 if kwargs is not None: --> 357 result = converted_f(*effective_args, **kwargs) 358 else: 359 result = converted_f(*effective_args) /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp4v3g2d_1.py in tf__elmo(texts) 11 retval_ = None 12 print('Eager:', ag__.converted_call('executing_eagerly', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (), None)) ---> 13 elmo_module = ag__.converted_call('Module', hub, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), ('https://tfhub.dev/google/elmo/2',), {'trainable': True}) 14 do_return = True 15 retval_ = 
ag__.converted_call(elmo_module, None, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (texts,), {'signature': 'default', 'as_dict': True}) .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 252 if tf_inspect.isclass(f): 253 logging.log(2, 'Permanently whitelisted: %s: constructor', f) --> 254 return _call_unconverted(f, args, kwargs) 255 256 # Other built-in modules are permanently whitelisted. .../tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs) 174 175 if kwargs is not None: --> 176 return f(*args, **kwargs) 177 else: 178 return f(*args) .../tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags) 167 name=self._name, 168 trainable=self._trainable, --> 169 tags=self._tags) 170 # pylint: enable=protected-access 171 .../tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags) 338 trainable=trainable, 339 checkpoint_path=self._checkpoint_variables_path, --> 340 name=name) 341 342 def _export(self, path, variables_saver): .../tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name) 389 # TPU training code. 
390 with tf.init_scope(): --> 391 self._init_state(name) 392 393 def _init_state(self, name): .../tensorflow_hub/native_module.py in _init_state(self, name) 392 393 def _init_state(self, name): --> 394 variable_tensor_map, self._state_map = self._create_state_graph(name) 395 self._variable_map = recover_partitioned_variable_map( 396 get_node_map_from_tensor_map(variable_tensor_map)) .../tensorflow_hub/native_module.py in _create_state_graph(self, name) 449 meta_graph, 450 input_map={}, --> 451 import_scope=relative_scope_name) 452 453 # Build a list from the variable name in the module definition to the actual .../tensorflow/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs) 1443 """ # pylint: disable=g-doc-exception 1444 return _import_meta_graph_with_return_elements( -> 1445 meta_graph_or_file, clear_devices, import_scope, **kwargs)[0] 1446 1447 .../tensorflow/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs) 1451 """Import MetaGraph, and return both a saver and returned elements.""" 1452 if context.executing_eagerly(): -> 1453 raise RuntimeError("Exporting/importing meta graphs is not supported when " 1454 "eager execution is enabled. No graph exists when eager " 1455 "execution is enabled.") RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled. ```
2019/04/09
[ "https://Stackoverflow.com/questions/55585079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38626/" ]
In TensorFlow 2.0 you should be using `hub.load()` or `hub.KerasLayer()`. **[April 2019]** - For now only TensorFlow 2.0 modules are loadable via them. In the future many of the 1.x Hub modules should be loadable as well. For the 2.x-only modules you can see examples in the notebooks created for the modules [here](https://tfhub.dev/s?q=tf2-preview).
**January 2021** To use a model from TF Hub (e.g. ELMo) with TensorFlow 2.x, download and unpack the model locally: ``` cd ~/tfhub/elmo3 model_link='https://tfhub.dev/google/elmo/3' model_link=$model_link'?tf-hub-format=compressed' wget $model_link -O model tar xvzf model rm model ``` Then use `hub.load()`: ``` import tensorflow as tf import tensorflow_hub as hub elmo = hub.load("/home/user/tfhub/elmo3") embeddings = elmo.signatures["default"](tf.constant([ "i like green eggs and ham", "i like green ham and eggs" ]) )["elmo"] ``` This function [can handle](https://www.tensorflow.org/hub/api_docs/python/hub/load) the deprecated TF1 Hub format.
16,138,090
What is the most efficient and portable way to generate a random number in `[0,1]` in Cython? One approach is to use `INT_MAX` and `rand()` from the C library: ``` from libc.stdlib cimport rand cdef extern from "limits.h": int INT_MAX cdef float randnum = rand() / float(INT_MAX) ``` Is it OK to use `INT_MAX` in this way? I noticed that it's quite different from the constant you get from Python's max int: ``` import sys print INT_MAX print sys.maxint ``` yields: ``` 2147483647 (C max int) 9223372036854775807 (python max int) ``` Which is the right "normalization" number for `rand()`? **EDIT** Additionally, how can the random seed be set (e.g. seeded based on current time) if one uses the C approach of calling `rand()` from libc?
2013/04/22
[ "https://Stackoverflow.com/questions/16138090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The C standard says `rand` returns an `int` in the range 0 to RAND\_MAX inclusive, so dividing it by RAND\_MAX (from `stdlib.h`) is the proper way to normalise it. In practice, RAND\_MAX will often be equal to INT\_MAX, but don't rely on that. Because `rand` has been part of ISO C since C89, it's guaranteed to be available everywhere, but no guarantees are made regarding the quality of its random numbers. If portability is your main concern, though, it's your best option, unless you're willing to use Python's `random` module. Python's [`sys.maxint`](http://docs.python.org/2/library/sys.html#sys.maxint) is a different concept entirely; it's just the largest positive number Python can represent in *its own* int type; larger ones will have to be longs. Python's ints and longs aren't particularly related to C's.
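To address the seeding question in the **EDIT** as well: in Cython you can `from libc.stdlib cimport rand, srand, RAND_MAX` and seed with `srand(<unsigned int>time(NULL))` from `libc.time`. The same normalisation can be sanity-checked from plain Python via `ctypes`; this is a sketch, and the hard-coded `RAND_MAX` value and the `libc.so.6` fallback assume a glibc system:

```python
import ctypes
import ctypes.util
import time

# Load the C standard library; the "libc.so.6" fallback assumes glibc.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Seed from the current time, mirroring srand(time(NULL)) in C.
libc.srand(int(time.time()))

# rand() returns an int in [0, RAND_MAX]; normalise by dividing.
# 2147483647 is glibc's RAND_MAX (check <stdlib.h> on your platform).
RAND_MAX = 2147483647
randnum = libc.rand() / RAND_MAX

assert 0.0 <= randnum <= 1.0
```

In Cython itself you would use the `cimport`ed `RAND_MAX` rather than a hard-coded constant, which keeps the normalisation correct on every platform.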
The C stdlib `rand()` returns a number between 0 and RAND\_MAX, which is guaranteed to be at least 32767 (on some implementations it is exactly that; on glibc it equals INT\_MAX). Is there any reason not to use Python's `random()`? [Generate random integers between 0 and 9](https://stackoverflow.com/questions/3996904/python-generate-random-integers-between-0-and-9)
16,138,090
What is the most efficient and portable way to generate a random number in `[0,1]` in Cython? One approach is to use `INT_MAX` and `rand()` from the C library: ``` from libc.stdlib cimport rand cdef extern from "limits.h": int INT_MAX cdef float randnum = rand() / float(INT_MAX) ``` Is it OK to use `INT_MAX` in this way? I noticed that it's quite different from the constant you get from Python's max int: ``` import sys print INT_MAX print sys.maxint ``` yields: ``` 2147483647 (C max int) 9223372036854775807 (python max int) ``` Which is the right "normalization" number for `rand()`? **EDIT** Additionally, how can the random seed be set (e.g. seeded based on current time) if one uses the C approach of calling `rand()` from libc?
2013/04/22
[ "https://Stackoverflow.com/questions/16138090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The C standard says `rand` returns an `int` in the range 0 to RAND\_MAX inclusive, so dividing it by RAND\_MAX (from `stdlib.h`) is the proper way to normalise it. In practice, RAND\_MAX will often be equal to INT\_MAX, but don't rely on that. Because `rand` has been part of ISO C since C89, it's guaranteed to be available everywhere, but no guarantees are made regarding the quality of its random numbers. If portability is your main concern, though, it's your best option, unless you're willing to use Python's `random` module. Python's [`sys.maxint`](http://docs.python.org/2/library/sys.html#sys.maxint) is a different concept entirely; it's just the largest positive number Python can represent in *its own* int type; larger ones will have to be longs. Python's ints and longs aren't particularly related to C's.
I'm not sure if `drand48` is a new addition but it seems to do exactly what you want while avoiding the costly division. ``` cdef extern from "stdlib.h": double drand48() void srand48(long int seedval) cdef extern from "time.h": long int time(int) srand48(100) # fixed seed to reproduce bugs; use srand48(time(0)) in production drand48() # this gives a float in range [0,1) ``` I came across [this idea](https://xor0110.wordpress.com/2010/09/24/how-to-generate-floating-point-random-numbers-efficiently/) while researching whether your division method generated sufficient randomness. The source I found makes the good point that in my case I am comparing the random number to a decimal with two digits, so I really only need 3 decimal points of precision. So INT\_MAX is more than adequate. But it seems like drand48 saves the cost of division, so it might be worth using.
16,138,090
What is the most efficient and portable way to generate a random number in `[0,1]` in Cython? One approach is to use `INT_MAX` and `rand()` from the C library: ``` from libc.stdlib cimport rand cdef extern from "limits.h": int INT_MAX cdef float randnum = rand() / float(INT_MAX) ``` Is it OK to use `INT_MAX` in this way? I noticed that it's quite different from the constant you get from Python's max int: ``` import sys print INT_MAX print sys.maxint ``` yields: ``` 2147483647 (C max int) 9223372036854775807 (python max int) ``` Which is the right "normalization" number for `rand()`? **EDIT** Additionally, how can the random seed be set (e.g. seeded based on current time) if one uses the C approach of calling `rand()` from libc?
2013/04/22
[ "https://Stackoverflow.com/questions/16138090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The C standard says `rand` returns an `int` in the range 0 to RAND\_MAX inclusive, so dividing it by RAND\_MAX (from `stdlib.h`) is the proper way to normalise it. In practice, RAND\_MAX will often be equal to INT\_MAX, but don't rely on that. Because `rand` has been part of ISO C since C89, it's guaranteed to be available everywhere, but no guarantees are made regarding the quality of its random numbers. If portability is your main concern, though, it's your best option, unless you're willing to use Python's `random` module. Python's [`sys.maxint`](http://docs.python.org/2/library/sys.html#sys.maxint) is a different concept entirely; it's just the largest positive number Python can represent in *its own* int type; larger ones will have to be longs. Python's ints and longs aren't particularly related to C's.
All of the above answers are correct, but I'd like to add a note that took me way too long to catch. The C `rand()` function is NOT thread-safe. So if you are running Cython in parallel without the GIL, the standard C `rand()` function has a chance of causing enormous slowdowns while it tries to handle all of the kernel calls. Just a warning.
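From the Python side of a Cython project, one way to avoid the shared-state problem altogether is to give each thread its own generator instead of calling the global C `rand()`. This is an alternative technique, not a fix for `rand()` itself; inside a `nogil` block you would instead need a reentrant generator such as POSIX `rand_r`:

```python
import random
import threading

def worker(results, i):
    # Each thread gets its own generator, so no state is shared
    # and no locking or slowdown occurs.
    rng = random.Random(i)
    results[i] = [rng.random() for _ in range(3)]

results = {}
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every generated value lies in [0, 1).
assert all(0.0 <= x < 1.0 for vals in results.values() for x in vals)
```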
52,397,563
I've been pulling my hair out trying to figure this one out, hoping someone else has already encountered this and knows how to solve it :) I'm trying to build a very simple Flask endpoint that just needs to call a long running, blocking `php` script (think `while true {...}`). I've tried a few different methods to async launch the script, but the problem is my browser never actually receives the response back, even though the code for generating the response after running the script is executed. I've tried using both `multiprocessing` and `threading`, neither seem to work: ``` # multiprocessing attempt @app.route('/endpoint') def endpoint(): def worker(): subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp) p = multiprocessing.Process(target=worker) print '111111' p.start() print '222222' return json.dumps({ 'success': True }) # threading attempt @app.route('/endpoint') def endpoint(): def thread_func(): subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp) t = threading.Thread(target=thread_func) print '111111' t.start() print '222222' return json.dumps({ 'success': True }) ``` In both scenarios I see the `111111` and `222222`, yet my browser still hangs on the response from the endpoint. I've tried `p.daemon = True` for the process, as well as `p.terminate()` but no luck. I had hoped launching a script with nohup in a different shell and separate processs/thread would just work, but somehow Flask or uWSGI is impacted by it. ### Update Since this does work locally on my Mac when I start my Flask app directly with `python app.py` and hit it directly without going through my Nginx proxy and uWSGI, I'm starting to believe it may not be the code itself that is having issues. And because my Nginx just forwards the request to uWSGI, I believe it may possibly be something there that's causing it. 
Here is my ini configuration for the domain for uWSGI, which I'm running in emperor mode: ``` [uwsgi] protocol = uwsgi max-requests = 5000 chmod-socket = 660 master = True vacuum = True enable-threads = True auto-procname = True procname-prefix = michael- chdir = /srv/www/mysite.com module = app callable = app socket = /tmp/mysite.com.sock ```
2018/09/19
[ "https://Stackoverflow.com/questions/52397563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407296/" ]
This kind of task is the actual, and probably main, use case for [Celery](https://docs.celeryproject.org/). As a general rule, do not run long-running, CPU-bound jobs in the `wsgi` process. It's tricky, it's inefficient, and most importantly, it's more complicated than setting up an *async* task in a Celery worker. If you just want to prototype, you can set the broker to `memory` and not use an external server, or run a single-threaded `redis` on the very same machine. This way you can launch the task and call `task.result()`, which is blocking, but it blocks in an **IO-bound fashion**; **or even better, you can just return immediately by retrieving the `task_id` and build a second endpoint** `/result?task_id=<task_id>` that checks if the result is available: ``` result = AsyncResult(task_id, app=app) if result.state == "SUCCESS": return result.get() else: return result.state # or do something else depending on the state ``` This way you have a non-blocking `wsgi` app that does what it is best suited for: short, CPU-unbound calls that have IO calls at most, with OS-level scheduling. You can then rely directly on the `wsgi` server's `workers|processes|threads` (or whatever you need) to scale the API in any wsgi server like uwsgi, gunicorn, etc. for 99% of workloads, as Celery scales horizontally by increasing the number of worker processes.
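The submit-then-poll pattern described above can be sketched with only the standard library. This is a toy illustration of the pattern itself, not of Celery's API; a real deployment would use a broker and separate worker processes:

```python
import concurrent.futures
import time
import uuid

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)
tasks = {}  # task_id -> Future, standing in for Celery's result backend

def submit(fn, *args):
    # Like the /endpoint route: start the job and return an id immediately.
    task_id = str(uuid.uuid4())
    tasks[task_id] = executor.submit(fn, *args)
    return task_id

def result(task_id):
    # Like the /result?task_id=... route: non-blocking status check.
    fut = tasks[task_id]
    return fut.result() if fut.done() else "PENDING"

def slow_job():
    time.sleep(0.2)
    return 42

tid = submit(slow_job)
assert result(tid) in ("PENDING", 42)  # control returned right away
time.sleep(0.5)
assert result(tid) == 42               # job finished in the background
executor.shutdown(wait=True)
```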
This approach works for me: it runs the `timeout` command (a 10-second sleep) on the command line and lets it work in the background, returning the response immediately.

```
@app.route('/endpoint1')
def endpoint1():
    subprocess.Popen('timeout 10', shell=True)
    return 'success1'
```

However, I have only tested this locally, not on a WSGI server.
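The reason this returns immediately is that `Popen` does not wait for the child process. A small sketch (using `sleep` as a stand-in for the Windows `timeout` command, so it also runs on a POSIX system):

```python
import subprocess
import time

start = time.time()
# Popen returns as soon as the child is spawned; the child keeps running on its own.
p = subprocess.Popen(["sleep", "2"])  # stand-in for the long-running script
launch_seconds = time.time() - start

print(launch_seconds < 1.0)  # True: we did not block for the 2-second child
print(p.poll() is None)      # True: the child is still running
p.terminate()                # clean up so the example exits promptly
p.wait()
```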
52,397,563
I've been pulling my hair out trying to figure this one out, hoping someone else has already encountered this and knows how to solve it :) I'm trying to build a very simple Flask endpoint that just needs to call a long-running, blocking `php` script (think `while true {...}`). I've tried a few different methods to launch the script asynchronously, but the problem is my browser never actually receives the response back, even though the code for generating the response after running the script is executed. I've tried using both `multiprocessing` and `threading`, and neither seems to work:

```
# multiprocessing attempt
@app.route('/endpoint')
def endpoint():
    def worker():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

    p = multiprocessing.Process(target=worker)

    print '111111'
    p.start()
    print '222222'

    return json.dumps({
        'success': True
    })

# threading attempt
@app.route('/endpoint')
def endpoint():
    def thread_func():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

    t = threading.Thread(target=thread_func)

    print '111111'
    t.start()
    print '222222'

    return json.dumps({
        'success': True
    })
```

In both scenarios I see the `111111` and `222222`, yet my browser still hangs on the response from the endpoint. I've tried `p.daemon = True` for the process, as well as `p.terminate()`, but no luck. I had hoped launching a script with nohup in a different shell and a separate process/thread would just work, but somehow Flask or uWSGI is impacted by it.

### Update

Since this does work locally on my Mac when I start my Flask app directly with `python app.py` and hit it directly without going through my Nginx proxy and uWSGI, I'm starting to believe it may not be the code itself that is having issues. And because my Nginx just forwards the request to uWSGI, I believe it may possibly be something there that's causing it.
Here is my ini configuration for the domain for uWSGI, which I'm running in emperor mode: ``` [uwsgi] protocol = uwsgi max-requests = 5000 chmod-socket = 660 master = True vacuum = True enable-threads = True auto-procname = True procname-prefix = michael- chdir = /srv/www/mysite.com module = app callable = app socket = /tmp/mysite.com.sock ```
2018/09/19
[ "https://Stackoverflow.com/questions/52397563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407296/" ]
This kind of stuff is the actual, and probably main, use case for `Python Celery` (<https://docs.celeryproject.org/>). As a general rule, do not run long-running, CPU-bound jobs in the `wsgi` process. It's tricky, it's inefficient, and most importantly, it's more complicated than setting up an *async* task in a celery worker. If you just want to prototype, you can set the broker to `memory` and avoid using an external server, or run a single-threaded `redis` on the very same machine. This way you can launch the task and call `task.result()`, which is blocking; but it blocks in an **IO-bound fashion**, **or, even better, you can just return immediately by retrieving the `task_id` and build a second endpoint** `/result?task_id=<task_id>` that checks whether the result is available:

```
result = AsyncResult(task_id, app=app)
if result.state == "SUCCESS":
    return result.get()
else:
    return result.state  # or do something else depending on the state
```

This way you have a non-blocking `wsgi` app that does what it is best suited for: short, CPU-unbound calls that have IO calls at most, with OS-level scheduling. You can then rely directly on the `wsgi` server's `workers|processes|threads` (or whatever you need) to scale the API with whatever wsgi server you like (uwsgi, gunicorn, etc.) for 99% of workloads, as celery scales horizontally by increasing the number of worker processes.
Would it be enough to use a background task? Then you only need to import `threading`, e.g.

```
import threading

def endpoint():
    """My endpoint."""
    try:
        t = BackgroundTasks()
        t.start()
    except RuntimeError as exception:
        return f"An error occurred during endpoint: {exception}", 400
    return "successfully started.", 200

class BackgroundTasks(threading.Thread):
    def run(self, *args, **kwargs):
        ...  # do long running stuff
```
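A runnable sketch of that idea (the long-running work is simulated with a short sleep, and the `result` attribute is illustrative, not part of `threading.Thread`):

```python
import threading
import time

class BackgroundTask(threading.Thread):
    def run(self):
        time.sleep(0.2)           # stand-in for the long-running work
        self.result = "finished"

t = BackgroundTask()
t.start()
print(t.is_alive())  # True: an endpoint could already return its response here
t.join()
print(t.result)      # finished
```

The request handler returns as soon as `start()` is called; the thread keeps running in the same process until `run()` finishes.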
28,150,433
I'm trying to install Scrapy on windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install> I’ve downloaded and installed python-2.7.5.msi for windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error: ``` Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\>python ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> python ez_setup.py install ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> ``` Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
`ur` is Python 2 syntax; you are trying to install an [incompatible](http://doc.scrapy.org/en/latest/faq.html#does-scrapy-work-with-python-3) package meant for Python 2, not Python 3:

```
_ajax_crawlable_re = re.compile(ur'<meta\s+name=["\']fragment["\']\s+content=["\']!["\']/?>')
                                ^^ python2 syntax
```

Also, pip is installed by default with Python 3.4.
Step-by-step way to install Scrapy on Windows 7:

1. Install Python 2.7 from the [Python download link](https://www.python.org/downloads) (be sure to install Python 2.7 only, because at the time of writing Scrapy is not available for Python 3 on Windows).
2. During the Python install there is a checkbox to add the Python path to the system variables; tick that option. Otherwise you can add the path variable manually: you need to adjust the PATH environment variable to include the paths to the Python executable and additional scripts. The following paths need to be added to PATH: `C:\Python27\;C:\Python27\Scripts\;` [![windows add path variable](https://i.stack.imgur.com/KvD96.jpg)](https://i.stack.imgur.com/KvD96.jpg) If you have any other problem adding the path variable, please refer to this [link](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7).
3. To update the PATH, open a command prompt in administrator mode and run: `c:\python27\python.exe c:\python27\tools\scripts\win_add2path.py`. Close the command prompt window and reopen it so the changes take effect, then run the following commands to check that everything was added to the path variable: `python --version`, which will give output such as `Python 2.7.12` (your version might be different than mine), and `pip --version`, which will give output such as `pip 9.0.1` (again, your version might differ).
4. You need to install the Microsoft Visual C++ Compiler for Python. You can download it from this [download link](http://www.microsoft.com/en-us/download/details.aspx?id=44266).
5. Then you need to install lxml, which is a Python library used by Scrapy. You can install it by running `pip install lxml` in the command prompt; but if you face a problem with the pip installation you can download it from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> (*download the **lxml** package according to your system architecture*). Then open a command prompt in the download directory and run `pip install NAME_OF_PACKAGE.whl`.
6. Install pywin32 from this [download link](http://sourceforge.net/projects/pywin32/). *Be sure you download the architecture (win32 or amd64) that matches your system.*
7. Then open a command prompt and run `pip install scrapy`. I hope this helps you install Scrapy successfully.
8. For reference you can use these links: the [Scrapy official page](https://doc.scrapy.org/en/latest/intro/install.html) and this [blog on how to install Scrapy on Windows](http://www.blog.thoughttoast.com/crawling/scrapy/install-scrapy-in-windows/).
28,150,433
I'm trying to install Scrapy on windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install> I’ve downloaded and installed python-2.7.5.msi for windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error: ``` Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\>python ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> python ez_setup.py install ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> ``` Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
`ur` is Python 2 syntax; you are trying to install an [incompatible](http://doc.scrapy.org/en/latest/faq.html#does-scrapy-work-with-python-3) package meant for Python 2, not Python 3:

```
_ajax_crawlable_re = re.compile(ur'<meta\s+name=["\']fragment["\']\s+content=["\']!["\']/?>')
                                ^^ python2 syntax
```

Also, pip is installed by default with Python 3.4.
**How to install Scrapy 1.4 on Python 3.6 on Windows 8.1 Pro x64**

```
pip install virtualenv
pip install virtualenvwrapper
pip install virtualenvwrapper-win
mkvirtualenv my_scrapy_project
```

I advise using virtualenv. In my example I am using the name *my\_scrapy\_project* for my virtual environment. If you want to get out of the virtualenv, simply type **deactivate**; if you want to go back into it, simply type **workon *my\_scrapy\_project***.

* Go to: <http://landinghub.visualstudio.com/visual-cpp-build-tools>
* Click on the button: Download Visual C++ Build Tools 2015
* Install these tools.
* Go to: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml>
* Find and download: lxml-4.1.1-cp36-cp36m-win32.whl
* Move this file to your active directory in the command line and install it: `pip install lxml-4.1.1-cp36-cp36m-win32.whl`

```
pip install scrapy
```

And that is all, it should work.
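As a quick sanity check after activating the environment, Python itself can tell you whether a virtualenv is active (a standard-library sketch; older virtualenv releases set `sys.real_prefix` rather than `base_prefix`):

```python
import sys

# In a virtualenv/venv, sys.prefix differs from the base interpreter's prefix.
in_venv = (sys.prefix != getattr(sys, "base_prefix", sys.prefix)
           or hasattr(sys, "real_prefix"))
print(isinstance(in_venv, bool))  # True
```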
28,150,433
I'm trying to install Scrapy on windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install> I’ve downloaded and installed python-2.7.5.msi for windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error: ``` Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\>python ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> python ez_setup.py install ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> ``` Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
Scrapy doesn't work with Python 3, as mentioned in their [FAQ](http://doc.scrapy.org/en/latest/faq.html#faq-python-versions); you should install Python 2.7 instead.
Step-by-step way to install Scrapy on Windows 7:

1. Install Python 2.7 from the [Python download link](https://www.python.org/downloads) (be sure to install Python 2.7 only, because at the time of writing Scrapy is not available for Python 3 on Windows).
2. During the Python install there is a checkbox to add the Python path to the system variables; tick that option. Otherwise you can add the path variable manually: you need to adjust the PATH environment variable to include the paths to the Python executable and additional scripts. The following paths need to be added to PATH: `C:\Python27\;C:\Python27\Scripts\;` [![windows add path variable](https://i.stack.imgur.com/KvD96.jpg)](https://i.stack.imgur.com/KvD96.jpg) If you have any other problem adding the path variable, please refer to this [link](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7).
3. To update the PATH, open a command prompt in administrator mode and run: `c:\python27\python.exe c:\python27\tools\scripts\win_add2path.py`. Close the command prompt window and reopen it so the changes take effect, then run the following commands to check that everything was added to the path variable: `python --version`, which will give output such as `Python 2.7.12` (your version might be different than mine), and `pip --version`, which will give output such as `pip 9.0.1` (again, your version might differ).
4. You need to install the Microsoft Visual C++ Compiler for Python. You can download it from this [download link](http://www.microsoft.com/en-us/download/details.aspx?id=44266).
5. Then you need to install lxml, which is a Python library used by Scrapy. You can install it by running `pip install lxml` in the command prompt; but if you face a problem with the pip installation you can download it from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> (*download the **lxml** package according to your system architecture*). Then open a command prompt in the download directory and run `pip install NAME_OF_PACKAGE.whl`.
6. Install pywin32 from this [download link](http://sourceforge.net/projects/pywin32/). *Be sure you download the architecture (win32 or amd64) that matches your system.*
7. Then open a command prompt and run `pip install scrapy`. I hope this helps you install Scrapy successfully.
8. For reference you can use these links: the [Scrapy official page](https://doc.scrapy.org/en/latest/intro/install.html) and this [blog on how to install Scrapy on Windows](http://www.blog.thoughttoast.com/crawling/scrapy/install-scrapy-in-windows/).
28,150,433
I'm trying to install Scrapy on windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install> I’ve downloaded and installed python-2.7.5.msi for windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error: ``` Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\>python ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> python ez_setup.py install ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> ``` Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
Scrapy doesn't work with Python 3, as mentioned in their [FAQ](http://doc.scrapy.org/en/latest/faq.html#faq-python-versions); you should install Python 2.7 instead.
**How to install Scrapy 1.4 on Python 3.6 on Windows 8.1 Pro x64**

```
pip install virtualenv
pip install virtualenvwrapper
pip install virtualenvwrapper-win
mkvirtualenv my_scrapy_project
```

I advise using virtualenv. In my example I am using the name *my\_scrapy\_project* for my virtual environment. If you want to get out of the virtualenv, simply type **deactivate**; if you want to go back into it, simply type **workon *my\_scrapy\_project***.

* Go to: <http://landinghub.visualstudio.com/visual-cpp-build-tools>
* Click on the button: Download Visual C++ Build Tools 2015
* Install these tools.
* Go to: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml>
* Find and download: lxml-4.1.1-cp36-cp36m-win32.whl
* Move this file to your active directory in the command line and install it: `pip install lxml-4.1.1-cp36-cp36m-win32.whl`

```
pip install scrapy
```

And that is all, it should work.
28,150,433
I'm trying to install Scrapy on windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install> I’ve downloaded and installed python-2.7.5.msi for windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error: ``` Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\>python ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> python ez_setup.py install ‘python’ is not recognized as an internal or external command, operable program or batch file. C:\> ``` Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
Step-by-step way to install Scrapy on Windows 7:

1. Install Python 2.7 from the [Python download link](https://www.python.org/downloads) (be sure to install Python 2.7 only, because at the time of writing Scrapy is not available for Python 3 on Windows).
2. During the Python install there is a checkbox to add the Python path to the system variables; tick that option. Otherwise you can add the path variable manually: you need to adjust the PATH environment variable to include the paths to the Python executable and additional scripts. The following paths need to be added to PATH: `C:\Python27\;C:\Python27\Scripts\;` [![windows add path variable](https://i.stack.imgur.com/KvD96.jpg)](https://i.stack.imgur.com/KvD96.jpg) If you have any other problem adding the path variable, please refer to this [link](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7).
3. To update the PATH, open a command prompt in administrator mode and run: `c:\python27\python.exe c:\python27\tools\scripts\win_add2path.py`. Close the command prompt window and reopen it so the changes take effect, then run the following commands to check that everything was added to the path variable: `python --version`, which will give output such as `Python 2.7.12` (your version might be different than mine), and `pip --version`, which will give output such as `pip 9.0.1` (again, your version might differ).
4. You need to install the Microsoft Visual C++ Compiler for Python. You can download it from this [download link](http://www.microsoft.com/en-us/download/details.aspx?id=44266).
5. Then you need to install lxml, which is a Python library used by Scrapy. You can install it by running `pip install lxml` in the command prompt; but if you face a problem with the pip installation you can download it from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> (*download the **lxml** package according to your system architecture*). Then open a command prompt in the download directory and run `pip install NAME_OF_PACKAGE.whl`.
6. Install pywin32 from this [download link](http://sourceforge.net/projects/pywin32/). *Be sure you download the architecture (win32 or amd64) that matches your system.*
7. Then open a command prompt and run `pip install scrapy`. I hope this helps you install Scrapy successfully.
8. For reference you can use these links: the [Scrapy official page](https://doc.scrapy.org/en/latest/intro/install.html) and this [blog on how to install Scrapy on Windows](http://www.blog.thoughttoast.com/crawling/scrapy/install-scrapy-in-windows/).
**How to install Scrapy 1.4 on Python 3.6 on Windows 8.1 Pro x64**

```
pip install virtualenv
pip install virtualenvwrapper
pip install virtualenvwrapper-win
mkvirtualenv my_scrapy_project
```

I advise using virtualenv. In my example I am using the name *my\_scrapy\_project* for my virtual environment. If you want to get out of the virtualenv, simply type **deactivate**; if you want to go back into it, simply type **workon *my\_scrapy\_project***.

* Go to: <http://landinghub.visualstudio.com/visual-cpp-build-tools>
* Click on the button: Download Visual C++ Build Tools 2015
* Install these tools.
* Go to: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml>
* Find and download: lxml-4.1.1-cp36-cp36m-win32.whl
* Move this file to your active directory in the command line and install it: `pip install lxml-4.1.1-cp36-cp36m-win32.whl`

```
pip install scrapy
```

And that is all, it should work.
71,102,179
I have tried everything on the internet and nothing works. My code:

```py
@bot.command()
async def bug(ctx, bug):
    deleted_message_id = ctx.id
    await ctx.channel.send(str(deleted_message_id))
    await ctx.send(ctx.author.mention+", you're bug report hs been reported!")
```

I use Python version 3.10.
2022/02/13
[ "https://Stackoverflow.com/questions/71102179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18196675/" ]
If you don't want to store the translation in the elements' HTML itself, you'll need an object mapping the values to the names to be output. Here's a minimalist example:

```js
const numberNameMap = {
  1: 'one',
  2: 'two',
  3: 'three',
  4: 'four',
  5: 'five'
};

const select = document.querySelector('select');
const output = document.getElementById('output');

select.addEventListener('change', ({ target: { value } }) => {
  output.textContent = numberNameMap[value];
});
```

```html
<select>
  <option>1</option>
  <option>2</option>
  <option>3</option>
  <option>4</option>
  <option>5</option>
</select>
<div id="output"></div>
```
I've simplified your code to do exactly what you want. Try this ```js $(window).keydown(function(e) { if( [37, 38, 39, 40, 13].includes(e.which) ){ e.preventDefault(); let $col = $('.multiple li.selected'); let $row = $col.closest('.multiple'); let $nextCol = $col; let $nextRow = $row; if( [37, 38, 39, 40].includes(e.which) && $col.length === 0 ){ $('.multiple:first-child').find('li:first').addClass('selected'); return; } switch (e.which) { case 38: // up key $nextRow = $row.is(':first-child') ? $row.siblings(":last-child") : $row.prev(); break; case 40: // down key $nextRow = $row.is(':last-child') ? $row.siblings(":first-child") : $row.next(); break; case 37: // left key $nextCol = $col.is(':first-child') ? $col.siblings(":last-child") : $col.prev(); break; case 39: // right key $nextCol = $col.is(':last-child') ? $col.siblings(":first-child") : $col.next(); break; case 13: // enter key if($col.length > 0) $('#screen').text($col.find('a').trigger('click').text()); break; } $('.multiple li').removeClass('selected'); $nextRow.find('li').eq($nextCol.index()).addClass('selected'); } }); ``` ```css li.selected { background: green; } .multiple, .allRows li{ text-align:center; position:relative; display: flex; flex-wrap: nowrap; } .allRows a { text-decoration:none; padding: 3px; border: 1px solid #555; color: #222; margin: 0px; text-align: center; font-size:20px; } #screen { background:#222; color:#ddd; position:absolute; top: 100px; padding:30px 150px; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ul class="allRows"> <div class="multiple"> <li><a href="#" patterns="one" id="one">One</a></li> <li><a href="#" patterns="two" id="two">Two</a></li> <li><a href="#" patterns="three" id="three">Three</a></li> <li><a href="#" patterns="four" id="four">Four</a></li> <li><a href="#" patterns="five" id="five">Five</a></li> <li><a href="#" patterns="six" id="six">Six</a></li> </div> <div class="multiple"> <li><a href="#" 
patterns="seven" id="seven">Seven</a></li> <li><a href="#" patterns="eight" id="eight">Eight</a></li> <li><a href="#" patterns="nine" id="nine">Nine</a></li> <li><a href="#" patterns="ten" id="ten">Ten</a></li> <li><a href="#" patterns="eleven" id="eleven">Eleven</a></li> <li><a href="#" patterns="twelve" id="twelve">Twelve</a></li> </div> </ul> <div id="screen">Numbers</div> ```
71,102,179
I have tried everything on the internet and nothing works. My code:

```py
@bot.command()
async def bug(ctx, bug):
    deleted_message_id = ctx.id
    await ctx.channel.send(str(deleted_message_id))
    await ctx.send(ctx.author.mention+", you're bug report hs been reported!")
```

I use Python version 3.10.
2022/02/13
[ "https://Stackoverflow.com/questions/71102179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18196675/" ]
You mean like this? ```js var columns = 0; var rowButtons = 0; var $rows = $('.multiple'); var liSelected; var arrows = { left: 37, up: 38, right: 39, down: 40, enter: 13 }; $(window).keydown(function(e) { if (Object.values(arrows).indexOf(e.which) > -1) { e.preventDefault(); $('.multiple li').removeClass('selected'); switch (e.which) { case arrows.up: rowButtons = rowButtons == 0 ? $rows.length - 1 : rowButtons - 1; break; case arrows.down: rowButtons = rowButtons == $rows.length - 1 ? 0 : rowButtons + 1; break; case arrows.left: $buttonsInRow = $('.multiple:eq(' + rowButtons + ') li'); columns = columns == 0 ? $buttonsInRow.length - 1 : columns - 1; break; case arrows.right: $buttonsInRow = $('.multiple:eq(' + rowButtons + ') li'); columns = columns == $buttonsInRow.length - 1 ? 0 : columns + 1; break; } buttonSelected = $('.multiple:eq(' + rowButtons + ') li:eq(' + columns + ')'); buttonSelected.addClass('selected'); } }); $(window).keydown(function(e){ if(e.which == 13){ //get id of the selected li>a element var elementId = document.getElementsByClassName('selected')[0].children[0].id; document.getElementById('screen').innerHTML = elementId; } }) ``` ```css li.selected { background: green; } .multiple, .allRows li{ text-align:center; position:relative; display: flex; flex-wrap: nowrap; } .allRows a { text-decoration:none; padding: 3px; border: 1px solid #555; color: #222; margin: 0px; text-align: center; font-size:20px; } #screen { background:#222; color:#ddd; position:absolute; top: 100px; padding:30px 150px; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ul class="allRows"> <div class="multiple"> <li><a href="#" patterns="one" id="one">One</a></li> <li><a href="#" patterns="two" id="two">Two</a></li> <li><a href="#" patterns="three" id="three">Three</a></li> <li><a href="#" patterns="four" id="four">Four</a></li> <li><a href="#" patterns="five" id="five">Five</a></li> <li><a href="#" patterns="six" 
id="six">Six</a></li> </div> <div class="multiple"> <li><a href="#" patterns="seven" id="seven">Seven</a></li> <li><a href="#" patterns="eight" id="eight">Eight</a></li> <li><a href="#" patterns="nine" id="nine">Nine</a></li> <li><a href="#" patterns="ten" id="ten">Ten</a></li> <li><a href="#" patterns="eleven" id="eleven">Eleven</a></li> <li><a href="#" patterns="twelve" id="twelve">Twelve</a></li></div> </ul> <div id="screen">Numbers</div> ``` I've removed ``` case arrows.enter: // find the column and row position: $buttonsInRow = $('.multiple:eq(' + rowButtons + ') li'); var position = (rowButtons * 6 ) + columns + 1; rowButtons = rowButtons == $rows.click; document.getElementById('screen').innerHTML = columns; break; ``` and added ``` $(window).keydown(function(e){ if(e.which == 13){ //get id of the selected li>a element var elementId = document.getElementsByClassName('selected')[0].children[0].id; document.getElementById('screen').innerHTML = elementId; } }) ``` after your `window.keydown` handler, as a dedicated `Enter` key event listener. I also removed all the onClick functions, which I assume you had added to get these values before seeing how much work that logic would be. Instead, I look at the `li` element with the `selected` class and use its child `a` tag's ID, since that is already the name of the numeric value, so there is no need to create an array of values for a lookup.
I've simplified your code to do exactly what you want. Try this ```js $(window).keydown(function(e) { if( [37, 38, 39, 40, 13].includes(e.which) ){ e.preventDefault(); let $col = $('.multiple li.selected'); let $row = $col.closest('.multiple'); let $nextCol = $col; let $nextRow = $row; if( [37, 38, 39, 40].includes(e.which) && $col.length === 0 ){ $('.multiple:first-child').find('li:first').addClass('selected'); return; } switch (e.which) { case 38: // up key $nextRow = $row.is(':first-child') ? $row.siblings(":last-child") : $row.prev(); break; case 40: // down key $nextRow = $row.is(':last-child') ? $row.siblings(":first-child") : $row.next(); break; case 37: // left key $nextCol = $col.is(':first-child') ? $col.siblings(":last-child") : $col.prev(); break; case 39: // right key $nextCol = $col.is(':last-child') ? $col.siblings(":first-child") : $col.next(); break; case 13: // enter key if($col.length > 0) $('#screen').text($col.find('a').trigger('click').text()); break; } $('.multiple li').removeClass('selected'); $nextRow.find('li').eq($nextCol.index()).addClass('selected'); } }); ``` ```css li.selected { background: green; } .multiple, .allRows li{ text-align:center; position:relative; display: flex; flex-wrap: nowrap; } .allRows a { text-decoration:none; padding: 3px; border: 1px solid #555; color: #222; margin: 0px; text-align: center; font-size:20px; } #screen { background:#222; color:#ddd; position:absolute; top: 100px; padding:30px 150px; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ul class="allRows"> <div class="multiple"> <li><a href="#" patterns="one" id="one">One</a></li> <li><a href="#" patterns="two" id="two">Two</a></li> <li><a href="#" patterns="three" id="three">Three</a></li> <li><a href="#" patterns="four" id="four">Four</a></li> <li><a href="#" patterns="five" id="five">Five</a></li> <li><a href="#" patterns="six" id="six">Six</a></li> </div> <div class="multiple"> <li><a href="#" 
patterns="seven" id="seven">Seven</a></li> <li><a href="#" patterns="eight" id="eight">Eight</a></li> <li><a href="#" patterns="nine" id="nine">Nine</a></li> <li><a href="#" patterns="ten" id="ten">Ten</a></li> <li><a href="#" patterns="eleven" id="eleven">Eleven</a></li> <li><a href="#" patterns="twelve" id="twelve">Twelve</a></li> </div> </ul> <div id="screen">Numbers</div> ```
19,721,027
After reading the [Software Carpentry](http://software-carpentry.org/) essay on [Handling Configuration Files](http://software-carpentry.org/v4/essays/config.html) I'm interested in their *Method #5: put parameters in a dynamically-loaded code module*. Basically I want the power to do calculations within my input files to create my variables. Based on this SO answer for [how to import a string as a module](https://stackoverflow.com/a/7548190/2530083) I've written the following function to import a string or oen fileobject or STringIO as a module. I can then access varibales using the . operator: ``` import imp def make_module_from_text(reader): """make a module from file,StringIO, text etc Parameters ---------- reader : file_like object object to get text from Returns ------- m: module text as module """ #for making module out of strings/files see https://stackoverflow.com/a/7548190/2530083 mymodule = imp.new_module('mymodule') #may need to randomise the name; not sure exec reader in mymodule.__dict__ return mymodule ``` then ``` import textwrap reader = textwrap.dedent("""\ import numpy as np a = np.array([0,4,6,7], dtype=float) a_normalise = a/a[-1] """) mymod = make_module_from_text(reader) print(mymod.a_normalise) ``` gives ``` [ 0. 0.57142857 0.85714286 1. ] ``` All well and good so far, but having looked around it seems using python `eval` and `exec` introduces security holes if I don't trust the input. A common response is "Never use `eval or`exec; they are evil", but I really like the power and flexibility of executing the code. Using `{'__builtins__': None}` I don't think will work for me as I will want to import other modules (e.g. `import numpy as np` in my above code). A number of people (e.g. [here](https://stackoverflow.com/a/9558001/2530083)) suggest using the `ast` module but I am not at all clear on how to use it(can `ast` be used with `exec`?). Is there simple ways to whitelist/allow specific functionality (e.g. 
[here](https://stackoverflow.com/questions/10661079/restricting-pythons-syntax-to-execute-user-code-safely-is-this-a-safe-approach))? Are there simple ways to blacklist/disallow specific functionality? Is there a magic way to say "execute this, but don't do anything nasty"? Basically, what are the options for making sure `exec` doesn't run any nasty malicious code? **EDIT:** My example above of normalising an array within my input/configuration file is perhaps a bit simplistic as to what computations I would want to perform within my input/configuration file (I could easily write a method/function in my program to do that). But say my program calculates a property at various times. The user needs to specify the times in some way. Should I only accept a list of explicit time values, so the user has to do some calculations before preparing the input file? (note: even using a list as a configuration variable is not trivial [see here](https://stackoverflow.com/a/6759256/2530083)). I think that is very limiting. Should I allow start-end-step values and then use `numpy.linspace` within my program? I think that is limiting too; what if I want to use `numpy.logspace` instead? What if I have some function that can accept a list of important time values and then nicely fills in other times to get well-spaced time values? Wouldn't it be good for the user to be able to import that function and use it? What if I want to input a list of user-defined objects? The thing is, I don't want to code for all these specific cases when the functionality of Python is already there for me and my user to use. Once I accept that I do indeed want the power and functionality of executing code in my input/configuration file, I wonder if there is actually any difference, security-wise, in using `exec` vs using `importlib` vs [imp.load\_source](https://stackoverflow.com/a/67692/2530083) and so on. To me there is the limited standard configparser or the all-powerful, all-dangerous exec. 
I just wish there was some middle ground with which I could say 'execute this... without stuffing up my computer'.
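One note on the helper in the question: the `imp` module it relies on is deprecated in Python 3. A minimal Python 3 sketch of the same idea (names are illustrative, and it carries exactly the same security caveats) can use `types.ModuleType` instead:

```python
import types

def make_module_from_text(text, name="mymodule"):
    """Python 3 sketch of the question's helper: execute source text
    inside a fresh module namespace and return the module object."""
    module = types.ModuleType(name)
    # Same caveat as the original: this runs whatever the text contains.
    exec(compile(text, "<config>", "exec"), module.__dict__)
    return module
```

As before, anything in the text (including imports) really executes, which is precisely the security question being asked.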
2013/11/01
[ "https://Stackoverflow.com/questions/19721027", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2530083/" ]
"Never use eval or exec; they are evil". This is the only answer that works here, I think. There is no fully safe way to use exec/eval on an untrusted string string or file. The best you can do is to come up with your own language, and either interpret it yourself or turn it into safe Python code before handling it to exec. Be careful to proceed from the ground up --- if you allow the whole Python language minus specific things you thought of as dangerous, it would never be really safe. For example, you can use the ast module if you want Python-like syntax; and then write a small custom ast interpreter that only recognizes a small subset of all possible nodes. That's the safest solution.
If you are willing to use PyPy, then its [sandboxing feature](http://doc.pypy.org/en/latest/sandbox.html) is specifically designed for running untrusted code, so it may be useful in your case. Note that there are some issues with CPython interoperability mentioned that you may need to check. Additionally, there is a link on this page to an abandoned project called pysandbox, explaining the [problems with sandboxing](https://mail.python.org/pipermail/python-dev/2013-November/130132.html) directly within python.
69,607,117
I am trying to train a DL model with tf.keras. I have 67 classes of images inside the image directory, like airports, bookstore, casino. And for each class I have at least 100 images. The data is from the [mit indoor scene](http://web.mit.edu/torralba/www/indoor.html) dataset. But when I am trying to train the model, I constantly get this error. ``` tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] (1) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [[IteratorGetNext/_7]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_1570] Function call stack: train_function -> train_function ``` I tried to resolve the problem by resizing the image with the resizing layer, and also included `labels='inferred'` and `label_mode='categorical'` in the `image_dataset_from_directory` method and `loss='categorical_crossentropy'` in the model compile method. Previously labels and label_mode were not set and the loss was sparse_categorical_crossentropy, which I think is not right, so I changed them as described above. But I am still having problems. There is one question related to this on [stackoverflow](https://stackoverflow.com/questions/65517669/invalidargumenterror-with-model-fit-in-tensorflow) but the person did not mention how he solved the problem, just updated that - My suggestion is to check the metadata of the dataset. It helped to fix my problem. But he did not mention what metadata to look for or what he did to solve the problem. 
The code that I am using to train the model - ``` import os import PIL import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras.layers import Conv2D, Dense, MaxPooling2D, GlobalAveragePooling2D from tensorflow.keras.layers import Flatten, Dropout, BatchNormalization, Rescaling from tensorflow.keras.models import Sequential from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping from tensorflow.keras.regularizers import l1, l2 import matplotlib.pyplot as plt import matplotlib.image as mpimg from pathlib import Path os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # define directory paths PROJECT_PATH = Path.cwd() DATA_PATH = PROJECT_PATH.joinpath('data', 'Images') # create a dataset batch_size = 32 img_height = 180 img_width = 180 train = tf.keras.utils.image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="training", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) valid = tf.keras.utils.image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="validation", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) class_names = train.class_names for image_batch, label_batch in train.take(1): print("\nImage shape:", image_batch.shape) print("Label Shape", label_batch.shape) # resize image resize_layer = tf.keras.layers.Resizing(img_height, img_width) train = train.map(lambda x, y: (resize_layer(x), y)) valid = valid.map(lambda x, y: (resize_layer(x), y)) # standardize the data normalization_layer = tf.keras.layers.Rescaling(1./255) train = train.map(lambda x, y: (normalization_layer(x), y)) valid = valid.map(lambda x, y: (normalization_layer(x), y)) image_batch, labels_batch = next(iter(train)) first_image = image_batch[0] print("\nImage (min, max) value:", (np.min(first_image), np.max(first_image))) print() # configure the dataset for performance AUTOTUNE = tf.data.AUTOTUNE train = 
train.cache().prefetch(buffer_size=AUTOTUNE) valid = valid.cache().prefetch(buffer_size=AUTOTUNE) # create a basic model architecture num_classes = len(class_names) # initiate a sequential model model = Sequential() # CONV1 model.add(Conv2D(filters=64, kernel_size=3, activation="relu", input_shape=(img_height, img_width, 3))) model.add(BatchNormalization()) # CONV2 model.add(Conv2D(filters=64, kernel_size=3, activation="relu")) model.add(BatchNormalization()) # Pool + Dropout model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.3)) # CONV3 model.add(Conv2D(filters=128, kernel_size=3, activation="relu")) model.add(BatchNormalization()) # CONV4 model.add(Conv2D(filters=128, kernel_size=3, activation="relu")) model.add(BatchNormalization()) # POOL + Dropout model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.3)) # FC5 model.add(Flatten()) model.add(Dense(128, activation="relu")) model.add(Dense(num_classes, activation="softmax")) # compile the model model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy']) # train the model epochs = 25 early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True) history = model.fit(train, validation_data=valid, epochs=epochs, callbacks=[early_stopping_cb], verbose=2) result = pd.DataFrame(history.history) print() print(result.head()) ``` **Note -** I just modified the code to make it as simple as possible to reduce the error. The model run for few batches than again got the above error. ``` Epoch 1/10 732/781 [===========================>..] 
- ETA: 22s - loss: 3.7882Traceback (most recent call last): File ".\02_model1.py", line 139, in <module> model.fit(train, epochs=10, validation_data=valid) File "C:\Users\BHOLA\anaconda3\lib\site-packages\keras\engine\training.py", line 1184, in fit tmp_logs = self.train_function(iterator) File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 885, in __call__ result = self._call(*args, **kwds) File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 917, in _call return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 3039, in __call__ return graph_function._call_flat( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1963, in _call_flat return self._build_call_outputs(self._inference_function.call( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call outputs = execute.execute( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] (1) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [[IteratorGetNext/_2]] 0 successful operations. 0 derived errors ignored. 
[Op:__inference_train_function_11840] Function call stack: train_function -> train_function ``` **Modified code -** ``` # create a dataset batch_size = 16 img_height = 256 img_width = 256 train = image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="training", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) valid = image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="validation", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) model = tf.keras.applications.Xception( weights=None, input_shape=(img_height, img_width, 3), classes=67) model.compile(optimizer='rmsprop', loss='categorical_crossentropy') model.fit(train, epochs=10, validation_data=valid) ```
2021/10/17
[ "https://Stackoverflow.com/questions/69607117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12195048/" ]
I think it might be a corrupted file. It is throwing an exception after a data integrity check in the `DecodeBMPv2` function (<https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L594>). If that's the issue and you want to find out which file(s) are throwing the exception, you can try something like the snippet below on the directory containing the files. Remove/replace any files it finds and it should train normally.

```py
import os
import glob

import tensorflow as tf

# Assuming <path_to_dataset> points to the directory containing the label folders.
img_paths = glob.glob(os.path.join('<path_to_dataset>', '*/*.*'))

bad_paths = []

for image_path in img_paths:
    try:
        img_bytes = tf.io.read_file(image_path)
        decoded_img = tf.io.decode_image(img_bytes)
    except tf.errors.InvalidArgumentError as e:
        print(f"Found bad path {image_path}...{e}")
        bad_paths.append(image_path)
    else:
        print(f"{image_path}: OK")

print("BAD PATHS:")
for bad_path in bad_paths:
    print(bad_path)
```
This is in fact a **corrupted file problem**. However, the underlying issue is far more subtle. Here is an explanation of what is going on and how to circumvent this obstacle. I encountered the very same problem on the very same [MIT Indoor Scene Classification](https://www.kaggle.com/datasets/itsahmad/indoor-scenes-cvpr-2019) dataset. All the images are JPEG files (*spoiler alert: well, are they?*). It has been correctly noted that the exception is raised exactly [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L594), in a C++ file related to the [`tf.io.decode_image()`](https://www.tensorflow.org/api_docs/python/tf/io/decode_image) function. It is the `decode_image()` function where the issue lies, which is called by the [`tf.keras.utils.image_dataset_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/image_dataset_from_directory). On the other hand, [`tf.keras.preprocessing.image.ImageDataGenerator().flow_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator#flow_from_directory) relies on [Pillow](https://pillow.readthedocs.io/en/stable/) under the hood (shown [here](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/utils/image_utils.py#L340), which is called from [here](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/preprocessing/image.py#L324)). This is the reason why adopting the `ImageDataGenerator` class works. After closer inspection of the corresponding C++ source file, one can observe that the function is actually called `DecodeBmpV2(...)`, as defined [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L519). This raises the question of why a JPEG image is being treated as a BMP one. 
The aforementioned function is actually called [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L211), as part of a basic switch statement the aim of which is further direct data conversion according to the determined type. Thus, the piece of code that determines the file type should be subjected to deeper analysis. The file type is determined according to the value of starting bytes (see [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L62)). Long story short, a simple comparison of so-called [magic bytes](https://en.wikipedia.org/wiki/List_of_file_signatures) that signify file type is performed. Here is a code extract with the corresponding magic bytes. ``` static const char kPngMagicBytes[] = "\x89\x50\x4E\x47\x0D\x0A\x1A\x0A"; static const char kGifMagicBytes[] = "\x47\x49\x46\x38"; static const char kBmpMagicBytes[] = "\x42\x4d"; static const char kJpegMagicBytes[] = "\xff\xd8\xff"; ``` After identifying which files raise the exception, I saw that they were supposed to be JPEG files, however, their starting bytes indicated a BMP format instead. Here is an example of 3 files and their first 10 bytes. ``` laundromat\laundry_room_area.jpg b'ffd8ffe000104a464946' laundromat\Laundry_Room_Edens1A.jpg b'ffd8ffe000104a464946' laundromat\Laundry_Room_bmp.jpg b'424d3800030000000000' ``` Look at the last one. It even contains the word *bmp* in the file name. Why is that so? I do not know. The dataset **does contain corrupted image files**. Someone probably converted the file from BMP to JPEG, yet the tool used did not work correctly. We can just guess the real reason, but that is now irrelevant. The method by which the file type is determined is different from the one performed by the Pillow package, thus, there is nothing we can do about it. 
The recommendation is to identify the corrupted files, which is actually easy, or to rely on the `ImageDataGenerator`; however, I would advise against the latter, as that class has been marked as deprecated. It is not a bug in the code per se, but rather bad data inadvertently introduced into the dataset.
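The magic-byte comparison described above is easy to reproduce in plain Python to hunt down the mislabeled files. A sketch (the dataset root is a placeholder; the signatures mirror the ones quoted from `decode_image_op.cc`):

```python
import pathlib

# Signatures TensorFlow compares against when deciding how to decode.
MAGIC_BYTES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF8": "gif",
    b"BM": "bmp",
}

def sniff_image_type(header: bytes) -> str:
    """Return the format TensorFlow would infer from the leading bytes."""
    for magic, fmt in MAGIC_BYTES.items():
        if header.startswith(magic):
            return fmt
    return "unknown"

def find_mislabeled_jpegs(root):
    """Yield (*.jpg path, actual format) pairs whose signature is not JPEG."""
    for path in pathlib.Path(root).rglob("*.jpg"):
        fmt = sniff_image_type(path.read_bytes()[:10])
        if fmt != "jpeg":
            yield path, fmt
```

Running `find_mislabeled_jpegs` over the dataset root should flag files such as the `Laundry_Room_bmp.jpg` example above.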
61,183,427
I am a beginner in Python and am trying to insert a percentage sign in the output. Below is the code that I've got. ``` print('accuracy :', accuracy_score(y_true, y_pred)*100) ``` When I run this code I get 50.0001, and I would like to have a % sign at the end of the number, so I tried to do as below ``` print('Macro average precision :', precision_score(y_true, y_pred, average='macro')*100"%\n") ``` I got an error saying `SyntaxError: invalid syntax` Can anyone help with this? Thank you!
2020/04/13
[ "https://Stackoverflow.com/questions/61183427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10267370/" ]
Use f strings: ``` print(f"Macro average precision : {precision_score(y_true, y_pred, average='macro')*100}%\n") ``` Or convert the value to string, and add (*concatenate*) the strings: ``` print('Macro average precision : ' + str(precision_score(y_true, y_pred, average='macro')*100) + "%\n") ``` See [the discussion here](https://stackoverflow.com/questions/59180574/string-concatenation-with-vs-f-string_) of the merits of each; basically the first is more convenient; and the second is computationally faster, and perhaps more simple to understand.
The simple, "low-tech" way is to correct your (lack of) output expression. Convert the float to a string and concatenate. To make it easy to follow: ``` pct = precision_score(y_true, y_pred, average='macro')*100 print('Macro average precision : ' + str(pct) + "%\n") ``` This is inelegant, but easy to follow.
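Since the sample output was `50.0001`, rounding may also be wanted; a format spec inside an f-string handles the rounding and the `%` sign in one go (a sketch with a stand-in value rather than the scikit-learn call):

```python
accuracy = 50.0001  # stand-in for accuracy_score(y_true, y_pred) * 100
print(f"accuracy : {accuracy:.2f}%")  # .2f rounds to two decimal places
```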
61,183,427
I am a beginner in Python and am trying to insert a percentage sign in the output. Below is the code that I've got. ``` print('accuracy :', accuracy_score(y_true, y_pred)*100) ``` When I run this code I get 50.0001, and I would like to have a % sign at the end of the number, so I tried to do as below ``` print('Macro average precision :', precision_score(y_true, y_pred, average='macro')*100"%\n") ``` I got an error saying `SyntaxError: invalid syntax` Can anyone help with this? Thank you!
2020/04/13
[ "https://Stackoverflow.com/questions/61183427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10267370/" ]
Use f strings: ``` print(f"Macro average precision : {precision_score(y_true, y_pred, average='macro')*100}%\n") ``` Or convert the value to string, and add (*concatenate*) the strings: ``` print('Macro average precision : ' + str(precision_score(y_true, y_pred, average='macro')*100) + "%\n") ``` See [the discussion here](https://stackoverflow.com/questions/59180574/string-concatenation-with-vs-f-string_) of the merits of each; basically the first is more convenient; and the second is computationally faster, and perhaps more simple to understand.
You can try this: ``` print('accuracy: {:.2f}%'.format(100*accuracy_score(y_true, y_pred))) ```
61,183,427
I am a beginner in Python and am trying to insert a percentage sign in the output. Below is the code that I've got. ``` print('accuracy :', accuracy_score(y_true, y_pred)*100) ``` When I run this code I get 50.0001, and I would like to have a % sign at the end of the number, so I tried to do as below ``` print('Macro average precision :', precision_score(y_true, y_pred, average='macro')*100"%\n") ``` I got an error saying `SyntaxError: invalid syntax` Can anyone help with this? Thank you!
2020/04/13
[ "https://Stackoverflow.com/questions/61183427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10267370/" ]
Use f strings: ``` print(f"Macro average precision : {precision_score(y_true, y_pred, average='macro')*100}%\n") ``` Or convert the value to string, and add (*concatenate*) the strings: ``` print('Macro average precision : ' + str(precision_score(y_true, y_pred, average='macro')*100) + "%\n") ``` See [the discussion here](https://stackoverflow.com/questions/59180574/string-concatenation-with-vs-f-string_) of the merits of each; basically the first is more convenient; and the second is computationally faster, and perhaps more simple to understand.
One of the ways to go about fixing this is by using string concatenation. You can add the percent symbol to your output from your function using a simple + operator. However, the output of your function needs to be cast to a string data type in order to be able to concatenate it with a string. To cast something to a string, use str() So the correct way to fix your print statement using this explanation would be: ``` print('Macro average precision : ' + str(precision_score(y_true, y_pred, average='macro')*100) + "%\n") ```
52,418,698
I have a Spark and Airflow cluster, and I want to send a Spark job from the Airflow container to the Spark container. But I am new to Airflow and I don't know which configuration I need to perform. I copied `spark_submit_operator.py` under the plugins folder. ```py from airflow import DAG from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator from datetime import datetime, timedelta args = { 'owner': 'airflow', 'start_date': datetime(2018, 7, 31) } dag = DAG('spark_example_new', default_args=args, schedule_interval="*/10 * * * *") operator = SparkSubmitOperator( task_id='spark_submit_job', conn_id='spark_default', java_class='Simple', application='/spark/abc.jar', total_executor_cores='1', executor_cores='1', executor_memory='2g', num_executors='1', name='airflow-spark-example', verbose=False, driver_memory='1g', application_args=["1000"], conf={'master':'spark://master:7077'}, dag=dag, ) ``` **master** is the hostname of our Spark Master container. When I run the DAG, it produces the following error: ``` [2018-09-20 05:57:46,637] {{models.py:1569}} INFO - Executing <Task(SparkSubmitOperator): spark_submit_job> on 2018-09-20T05:57:36.756154+00:00 [2018-09-20 05:57:46,637] {{base_task_runner.py:124}} INFO - Running: ['bash', '-c', 'airflow run spark_example_new spark_submit_job 2018-09-20T05:57:36.756154+00:00 --job_id 4 --raw -sd DAGS_FOLDER/firstJob.py --cfg_path /tmp/tmpn2hznb5_'] [2018-09-20 05:57:47,002] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,001] {{settings.py:174}} INFO - setting.configure_orm(): Using pool settings. 
pool_size=5, pool_recycle=1800 [2018-09-20 05:57:47,312] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,311] {{__init__.py:51}} INFO - Using executor CeleryExecutor [2018-09-20 05:57:47,428] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,428] {{models.py:258}} INFO - Filling up the DagBag from /usr/local/airflow/dags/firstJob.py [2018-09-20 05:57:47,447] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,447] {{cli.py:492}} INFO - Running <TaskInstance: spark_example_new.spark_submit_job 2018-09-20T05:57:36.756154+00:00 [running]> on host e6dd59dc595f [2018-09-20 05:57:47,471] {{logging_mixin.py:95}} INFO - [2018-09-20 05:57:47,470] {{spark_submit_hook.py:283}} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'yarn', '--conf', 'master=spark://master:7077', '--num-executors', '1', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'airflow-spark-example', '--class', 'Simple', '/spark/ugur.jar', '1000'] [2018-09-20 05:57:47,473] {{models.py:1736}} ERROR - [Errno 2] No such file or directory: 'spark-submit': 'spark-submit' Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1633, in _run_raw_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute self._hook.submit(self._application) File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit **kwargs) File "/usr/local/lib/python3.6/subprocess.py", line 709, in __init__ restore_signals, start_new_session) File "/usr/local/lib/python3.6/subprocess.py", line 1344, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit': 
'spark-submit' ``` It's running command: ``` Spark-Submit cmd: ['spark-submit', '--master', 'yarn', '--conf', 'master=spark://master:7077', '--num-executors', '1', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'airflow-spark-example', '--class', 'Simple', '/spark/ugur.jar', '1000'] ``` but I didn't use yarn.
2018/09/20
[ "https://Stackoverflow.com/questions/52418698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4230665/" ]
If you use SparkSubmitOperator, the master will be set to "yarn" by default, regardless of the master you set in your Python code; however, you can override the master by specifying conn\_id through the operator's constructor, provided you have already created that `conn_id` under the "Admin -> Connections" menu in the Airflow web interface. I hope this helps.
I guess you need to set the master in the extra options of your connection for this Spark conn id (spark\_default). By default it is yarn, so try `{"master": "your-con"}`. Also, does your airflow user have `spark-submit` on its PATH? From the logs it looks like it doesn't.
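To make the mismatch in the log concrete: the hook takes `--master` from the Airflow connection, while the `conf` dict only ever becomes `--conf key=value` pairs, which spark-submit does not treat as the master. A toy reconstruction of that command assembly (illustrative only, not the real hook code in `spark_submit_hook.py`):

```python
def build_spark_submit_cmd(connection_master, conf):
    """Toy model of the command visible in the log: --master comes from
    the connection, conf entries only become --conf key=value pairs."""
    cmd = ["spark-submit", "--master", connection_master]
    for key, value in conf.items():
        cmd += ["--conf", f"{key}={value}"]
    return cmd

# Mirrors the failing setup: connection says yarn, conf tries to override it.
cmd = build_spark_submit_cmd("yarn", {"master": "spark://master:7077"})
```

This is why `conf={'master': ...}` in the DAG has no effect and the connection itself has to point at `spark://master:7077`.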
28,034,947
So I'm trying to do something like this ``` #include <stdio.h> int main(void) { char string[] = "bobgetbob"; int i = 0, count = 0; for(i; i < 10; ++i) { if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b') count++; } printf("Number of 'bobs' is: %d\n",count); } ``` but in Python terms, which works like this ``` count = 0 s = "bobgetbob" for i in range(0,len(s)): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ``` Any time I get a string that happens to end with a 'b', or the second to last is 'b' followed by an 'o', I get an index out of range error. Now in C this is not an issue because it will still perform the comparison with a garbage value, I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range? Could I iterate through the string itself like so? ``` for letter in s: #compare stuff ``` How would I compare specific indexes in the string using the above method? If I try to use ``` letter == 'b' and letter + 1 == 'o' ``` this is invalid syntax in Python. My issue is I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing like so ``` for i in range(0,len(s)): if s[i:i+3] == "bob": count += 1 ``` this solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
> > Could i iterate through the string itself like so? > > > > ``` > for letter in s: > #compare stuff > > ``` > > How would I compare specific indexes in the string using the above method? > > > The pythonic way of doing these kind of comparisons without specifically referring to indexes would be: ``` for curr, nextt, nexttt in zip(s, s[1:], s[2:]): if curr == 'b' and nextt == 'o' and nexttt == 'b': count += 1 ``` This avoids out of index errors. You can also use a comprehension and in this way you eliminate the need to initialize and update the `count` variable. This line will do the same as your C code: ``` >>> sum(1 for curr, nextt, nexttt in zip(s, s[1:], s[2:]) if curr == 'b' and nextt == 'o' and nexttt == 'b') 2 ``` **How it works:** This is the result of the zip between the lists: ``` >>> s 'bobgetbob' >>> s[1:] 'obgetbob' >>> s[2:] 'bgetbob' >>> zip(s, s[1:], s[2:]) [('b', 'o', 'b'), ('o', 'b', 'g'), ('b', 'g', 'e'), ('g', 'e', 't'), ('e', 't', 'b'), ('t', 'b', 'o'), ('b', 'o', 'b')] ``` In the loop you iterate the list unpacking each of the tuples to the three variables. Finally, if you really need the index you can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate): ``` >>> for i, c in enumerate(s): print i, c 0 b 1 o 2 b 3 g 4 e 5 t 6 b 7 o 8 b ```
Try this - i.e. go to len(s)-2 as you won't ever get a bob starting after that point ``` count = 0 s = "bobgetbob" for i in range(len(s) - 2): if s[i] == 'b' and s[i + 1] == 'o' and s[i + 2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
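One caveat worth noting alongside these approaches: the built-in `str.count` looks tempting for this task, but it only counts non-overlapping matches, so it undercounts strings like `"bobob"`. A small Python 3 sketch that slides `str.find` forward one character at a time handles overlaps:

```python
def count_overlapping(s, sub):
    """Count occurrences of sub in s, allowing overlaps, by advancing
    str.find only one character past each hit."""
    count, start = 0, 0
    while True:
        start = s.find(sub, start)
        if start == -1:
            return count
        count += 1
        start += 1  # advance by 1 (not len(sub)) so overlapping hits are seen
```

For "bobgetbob" this agrees with the answers above; for "bobob" it finds 2 where `str.count` would report 1.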
28,034,947
So I'm trying to do something like this ``` #include <stdio.h> int main(void) { char string[] = "bobgetbob"; int i = 0, count = 0; for(i; i < 10; ++i) { if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b') count++; } printf("Number of 'bobs' is: %d\n",count); } ``` but in Python terms, which works like this ``` count = 0 s = "bobgetbob" for i in range(0,len(s)): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ``` Any time I get a string that happens to end with a 'b', or the second to last is 'b' followed by an 'o', I get an index out of range error. Now in C this is not an issue because it will still perform the comparison with a garbage value, I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range? Could I iterate through the string itself like so? ``` for letter in s: #compare stuff ``` How would I compare specific indexes in the string using the above method? If I try to use ``` letter == 'b' and letter + 1 == 'o' ``` this is invalid syntax in Python. My issue is I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing like so ``` for i in range(0,len(s)): if s[i:i+3] == "bob": count += 1 ``` this solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
> > Could i iterate through the string itself like so? > > > > ``` > for letter in s: > #compare stuff > > ``` > > How would I compare specific indexes in the string using the above method? > > > The pythonic way of doing these kind of comparisons without specifically referring to indexes would be: ``` for curr, nextt, nexttt in zip(s, s[1:], s[2:]): if curr == 'b' and nextt == 'o' and nexttt == 'b': count += 1 ``` This avoids out of index errors. You can also use a comprehension and in this way you eliminate the need to initialize and update the `count` variable. This line will do the same as your C code: ``` >>> sum(1 for curr, nextt, nexttt in zip(s, s[1:], s[2:]) if curr == 'b' and nextt == 'o' and nexttt == 'b') 2 ``` **How it works:** This is the result of the zip between the lists: ``` >>> s 'bobgetbob' >>> s[1:] 'obgetbob' >>> s[2:] 'bgetbob' >>> zip(s, s[1:], s[2:]) [('b', 'o', 'b'), ('o', 'b', 'g'), ('b', 'g', 'e'), ('g', 'e', 't'), ('e', 't', 'b'), ('t', 'b', 'o'), ('b', 'o', 'b')] ``` In the loop you iterate the list unpacking each of the tuples to the three variables. Finally, if you really need the index you can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate): ``` >>> for i, c in enumerate(s): print i, c 0 b 1 o 2 b 3 g 4 e 5 t 6 b 7 o 8 b ```
A generator expression and `sum` would be a better way to solve it:

```
print("number of bobs {}".format(sum(s[i:i+3] == "bob" for i in xrange(len(s)))))
```

You can also cheat a bit with slicing, i.e. `s[i+2:i+3]` will not throw an `IndexError`:

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1:i+2] == 'o' and s[i+2:i+3] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count

Number of 'bobs' is: 2
```
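A small demonstration of the property this answer relies on: slicing past the end of a string quietly returns `''` instead of raising, so the comparisons simply become `False` near the end (Python 3 syntax; the helper name `count_bobs` is mine):

```python
s = "bobgetbob"

# Slicing out of range is safe: it yields '' rather than raising IndexError
assert s[100:103] == ''
assert s[len(s):len(s) + 1] == ''

def count_bobs(s):
    # s[i+1:i+2] and s[i+2:i+3] are '' near the end, so those tests just fail
    return sum(
        1
        for i in range(len(s))
        if s[i] == 'b' and s[i + 1:i + 2] == 'o' and s[i + 2:i + 3] == 'b'
    )

print(count_bobs(s))  # -> 2
```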
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
> Could I iterate through the string itself like so?
>
> ```
> for letter in s:
>     #compare stuff
> ```
>
> How would I compare specific indexes in the string using the above method?

The Pythonic way of doing this kind of comparison without referring to indexes explicitly would be:

```
for curr, nextt, nexttt in zip(s, s[1:], s[2:]):
    if curr == 'b' and nextt == 'o' and nexttt == 'b':
        count += 1
```

This avoids out-of-range index errors.

You can also use a generator expression, and in this way you eliminate the need to initialize and update the `count` variable. This line will do the same as your C code:

```
>>> sum(1 for curr, nextt, nexttt in zip(s, s[1:], s[2:]) if curr == 'b' and nextt == 'o' and nexttt == 'b')
2
```

**How it works:** this is the result of zipping the three slices:

```
>>> s
'bobgetbob'
>>> s[1:]
'obgetbob'
>>> s[2:]
'bgetbob'
>>> zip(s, s[1:], s[2:])
[('b', 'o', 'b'), ('o', 'b', 'g'), ('b', 'g', 'e'), ('g', 'e', 't'), ('e', 't', 'b'), ('t', 'b', 'o'), ('b', 'o', 'b')]
```

In the loop you iterate over that list, unpacking each of the tuples into the three variables.

Finally, if you really need the index you can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate):

```
>>> for i, c in enumerate(s):
        print i, c

0 b
1 o
2 b
3 g
4 e
5 t
6 b
7 o
8 b
```
In general, that is the slow way to do it; you are better off delegating as much as possible to higher-performance object methods like `str.find`: ``` def how_many(needle, haystack): """ Given needle: str to search for haystack: str to search in Return the number of (possibly overlapping) occurrences of needle which appear in haystack ex, how_many("bb", "bbbbb") => 4 """ count = 0 i = 0 # starting search index while True: ni = haystack.find(needle, i) if ni != -1: count += 1 i = ni + 1 else: return count how_many("bob", "bobgetbob") # => 2 ``` --- `haystack.find(needle, i)` returns the start index of the next occurrence of `needle` beginning on or after index `i`, or `-1` if there is no such occurrence. So ``` "bobgetbob".find("bob", 0) # returns 0 => found 1 "bobgetbob".find("bob", 1) # returns 6 => found 1 "bobgetbob".find("bob", 7) # returns -1 => no more ```
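For comparison, the same overlapping count can be delegated to the `re` module with a zero-width lookahead; this is an alternative to the answer's `str.find` loop, not part of it:

```python
import re

def how_many_re(needle, haystack):
    # The lookahead (?=...) matches without consuming characters,
    # so overlapping occurrences are all counted
    return len(re.findall('(?=' + re.escape(needle) + ')', haystack))

print(how_many_re("bob", "bobgetbob"))  # -> 2
print(how_many_re("bb", "bbbbb"))       # -> 4
```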
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
> Could I iterate through the string itself like so?
>
> ```
> for letter in s:
>     #compare stuff
> ```
>
> How would I compare specific indexes in the string using the above method?

The Pythonic way of doing this kind of comparison without referring to indexes explicitly would be:

```
for curr, nextt, nexttt in zip(s, s[1:], s[2:]):
    if curr == 'b' and nextt == 'o' and nexttt == 'b':
        count += 1
```

This avoids out-of-range index errors.

You can also use a generator expression, and in this way you eliminate the need to initialize and update the `count` variable. This line will do the same as your C code:

```
>>> sum(1 for curr, nextt, nexttt in zip(s, s[1:], s[2:]) if curr == 'b' and nextt == 'o' and nexttt == 'b')
2
```

**How it works:** this is the result of zipping the three slices:

```
>>> s
'bobgetbob'
>>> s[1:]
'obgetbob'
>>> s[2:]
'bgetbob'
>>> zip(s, s[1:], s[2:])
[('b', 'o', 'b'), ('o', 'b', 'g'), ('b', 'g', 'e'), ('g', 'e', 't'), ('e', 't', 'b'), ('t', 'b', 'o'), ('b', 'o', 'b')]
```

In the loop you iterate over that list, unpacking each of the tuples into the three variables.

Finally, if you really need the index you can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate):

```
>>> for i, c in enumerate(s):
        print i, c

0 b
1 o
2 b
3 g
4 e
5 t
6 b
7 o
8 b
```
``` count = 0 for i in range(0,len(s)-2): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
In general, that is the slow way to do it; you are better off delegating as much as possible to higher-performance object methods like `str.find`: ``` def how_many(needle, haystack): """ Given needle: str to search for haystack: str to search in Return the number of (possibly overlapping) occurrences of needle which appear in haystack ex, how_many("bb", "bbbbb") => 4 """ count = 0 i = 0 # starting search index while True: ni = haystack.find(needle, i) if ni != -1: count += 1 i = ni + 1 else: return count how_many("bob", "bobgetbob") # => 2 ``` --- `haystack.find(needle, i)` returns the start index of the next occurrence of `needle` beginning on or after index `i`, or `-1` if there is no such occurrence. So ``` "bobgetbob".find("bob", 0) # returns 0 => found 1 "bobgetbob".find("bob", 1) # returns 6 => found 1 "bobgetbob".find("bob", 7) # returns -1 => no more ```
Try this, i.e. only go up to `len(s) - 2`, as you won't ever get a "bob" starting after that point:

```
count = 0
s = "bobgetbob"

for i in range(len(s) - 2):
    if s[i] == 'b' and s[i + 1] == 'o' and s[i + 2] == 'b':
        count += 1

print "Number of 'bobs' is: %d" % count
```
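The bound generalizes to any pattern length: a needle of length `n` can start no later than index `len(s) - n`, so the loop runs over `range(len(s) - n + 1)`. A sketch of that generalization (my own, not part of the answer):

```python
def count_overlapping(needle, haystack):
    n = len(needle)
    # Last legal start position is len(haystack) - n, hence the +1 below;
    # if the haystack is shorter than the needle the range is empty.
    return sum(
        1
        for i in range(len(haystack) - n + 1)
        if haystack[i:i + n] == needle
    )

print(count_overlapping("bob", "bobgetbob"))  # -> 2
```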
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
Try this, i.e. only go up to `len(s) - 2`, as you won't ever get a "bob" starting after that point:

```
count = 0
s = "bobgetbob"

for i in range(len(s) - 2):
    if s[i] == 'b' and s[i + 1] == 'o' and s[i + 2] == 'b':
        count += 1

print "Number of 'bobs' is: %d" % count
```
``` count = 0 for i in range(0,len(s)-2): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
In general, that is the slow way to do it; you are better off delegating as much as possible to higher-performance object methods like `str.find`: ``` def how_many(needle, haystack): """ Given needle: str to search for haystack: str to search in Return the number of (possibly overlapping) occurrences of needle which appear in haystack ex, how_many("bb", "bbbbb") => 4 """ count = 0 i = 0 # starting search index while True: ni = haystack.find(needle, i) if ni != -1: count += 1 i = ni + 1 else: return count how_many("bob", "bobgetbob") # => 2 ``` --- `haystack.find(needle, i)` returns the start index of the next occurrence of `needle` beginning on or after index `i`, or `-1` if there is no such occurrence. So ``` "bobgetbob".find("bob", 0) # returns 0 => found 1 "bobgetbob".find("bob", 1) # returns 6 => found 1 "bobgetbob".find("bob", 7) # returns -1 => no more ```
A generator expression and `sum` would be a better way to solve it:

```
print("number of bobs {}".format(sum(s[i:i+3] == "bob" for i in xrange(len(s)))))
```

You can also cheat a bit with slicing, i.e. `s[i+2:i+3]` will not throw an `IndexError`:

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1:i+2] == 'o' and s[i+2:i+3] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count

Number of 'bobs' is: 2
```
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
A generator expression and `sum` would be a better way to solve it:

```
print("number of bobs {}".format(sum(s[i:i+3] == "bob" for i in xrange(len(s)))))
```

You can also cheat a bit with slicing, i.e. `s[i+2:i+3]` will not throw an `IndexError`:

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1:i+2] == 'o' and s[i+2:i+3] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count

Number of 'bobs' is: 2
```
``` count = 0 for i in range(0,len(s)-2): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
28,034,947
So I'm trying to do something like this

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;

    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or whose second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?

Could I iterate through the string itself like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid in Python. My issue is that I'm thinking in terms of C and I'm not completely sure of the right syntax to tackle this situation. I know about string slicing, like so

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without having some situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
In general, that is the slow way to do it; you are better off delegating as much as possible to higher-performance object methods like `str.find`: ``` def how_many(needle, haystack): """ Given needle: str to search for haystack: str to search in Return the number of (possibly overlapping) occurrences of needle which appear in haystack ex, how_many("bb", "bbbbb") => 4 """ count = 0 i = 0 # starting search index while True: ni = haystack.find(needle, i) if ni != -1: count += 1 i = ni + 1 else: return count how_many("bob", "bobgetbob") # => 2 ``` --- `haystack.find(needle, i)` returns the start index of the next occurrence of `needle` beginning on or after index `i`, or `-1` if there is no such occurrence. So ``` "bobgetbob".find("bob", 0) # returns 0 => found 1 "bobgetbob".find("bob", 1) # returns 6 => found 1 "bobgetbob".find("bob", 7) # returns -1 => no more ```
``` count = 0 for i in range(0,len(s)-2): if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
11,464,750
I am working on a Python script to check whether URLs are working. The script writes each URL and its response code to a log file. To speed up the checks, I am using threading and a queue.

The script works well when the number of URLs to check is small, but when it increases to hundreds, some URLs are simply missing from the log file. Is there anything I need to fix?

My script is

```
#!/usr/bin/env python
import Queue
import threading
import urllib2,urllib,sys,cx_Oracle,os
import time
from urllib2 import HTTPError, URLError

queue = Queue.Queue()
##print_queue = Queue.Queue()

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_302(self, req, fp, code, msg, headers):
        infourl = urllib.addinfourl(fp, headers, req.get_full_url())
        infourl.status = code
        infourl.code = code
        return infourl
    http_error_300 = http_error_302
    http_error_301 = http_error_302
    http_error_303 = http_error_302
    http_error_307 = http_error_302

class ThreadUrl(threading.Thread):
    #Threaded Url Grab
##    def __init__(self, queue, print_queue):
    def __init__(self, queue,error_log):
        threading.Thread.__init__(self)
        self.queue = queue
##        self.print_queue = print_queue
        self.error_log = error_log

    def do_something_with_exception(self,idx,url,error_log):
        exc_type, exc_value = sys.exc_info()[:2]
##        self.print_queue.put([idx,url,exc_type.__name__])
        with open( error_log, 'a') as err_log_f:
            err_log_f.write("{0},{1},{2}\n".format(idx,url,exc_type.__name__))

    def openUrl(self,pair):
        try:
            idx = pair[1]
            url = 'http://'+pair[2]
            opener = urllib2.build_opener(NoRedirectHandler())
            urllib2.install_opener(opener)
            request = urllib2.Request(url)
            request.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1')

            #open urls of hosts
            resp = urllib2.urlopen(request, timeout=10)
##            self.print_queue.put([idx,url,resp.code])
            with open( self.error_log, 'a') as err_log_f:
                err_log_f.write("{0},{1},{2}\n".format(idx,url,resp.code))
        except:
            self.do_something_with_exception(idx,url,self.error_log)

    def run(self):
        while True:
            #grabs host from queue
            pair = self.queue.get()
            self.openUrl(pair)
            #signals to queue job is done
            self.queue.task_done()

def readUrlFromDB(queue,connect_string,column_name,table_name):
    try:
        connection = cx_Oracle.Connection(connect_string)
        cursor = cx_Oracle.Cursor(connection)
        query = 'select ' + column_name + ' from ' + table_name
        cursor.execute(query)
        #Count lines in the file
        rows = cursor.fetchall()
        total = cursor.rowcount
        #Loop through returned urls
        for row in rows:
            #print row[1],row[2]
##            url = 'http://'+row[2]
            queue.put(row)
        cursor.close()
        connection.close()
        return total
    except cx_Oracle.DatabaseError, e:
        print e[0].context
        raise

def main():
    start = time.time()
    error_log = "D:\\chkWebsite_Error_Log.txt"

    #Check if error_log file exists
    #If exists then deletes it
    if os.path.isfile(error_log):
        os.remove(error_log)

    #spawn a pool of threads, and pass them queue instance
    for i in range(10):
        t = ThreadUrl(queue,error_log)
        t.setDaemon(True)
        t.start()

    connect_string,column_name,table_name = "user/pass@db","*","T_URL_TEST"
    tn = readUrlFromDB(queue,connect_string,column_name,table_name)

    #wait on the queue until everything has been processed
    queue.join()
##    print_queue.join()
    print "Total retrived: {0}".format(tn)
    print "Elapsed Time: %s" % (time.time() - start)

main()
```
2012/07/13
[ "https://Stackoverflow.com/questions/11464750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1516432/" ]
Python threads don't run Python bytecode in parallel because of the global interpreter lock (<http://wiki.python.org/moin/GlobalInterpreterLock>), so you should use `multiprocessing` (<http://docs.python.org/library/multiprocessing.html>) if you really want to take advantage of multiple cores.

Also, you seem to be accessing a file from several threads simultaneously:

```
with open( self.error_log, 'a') as err_log_f:
    err_log_f.write("{0},{1},{2}\n".format(idx,url,resp.code))
```

This is really bad AFAIK: if two threads try to write to the same file at the same time, or almost at the same time, the behavior tends to be undefined; imagine one thread writing while another has just closed the file...

Anyway, you would need a third queue to handle writing to the file.
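A minimal sketch of the "third queue" idea from this answer: the worker threads only put log records on a queue, and a single writer thread is the only one that performs writes. A list stands in for the log file here, and all names are mine:

```python
import queue
import threading

log_queue = queue.Queue()
written = []  # stands in for the log file

def writer():
    # Single consumer: only this thread ever "writes", so records
    # from different workers can never interleave or get lost.
    while True:
        record = log_queue.get()
        if record is None:      # sentinel value: shut down
            log_queue.task_done()
            break
        written.append("{0},{1},{2}".format(*record))
        log_queue.task_done()

writer_thread = threading.Thread(target=writer)
writer_thread.start()

def worker(idx):
    # Many producers can put onto the queue concurrently; Queue is thread-safe
    log_queue.put((idx, "http://example.com/%d" % idx, 200))

workers = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in workers:
    t.start()
for t in workers:
    t.join()

log_queue.put(None)   # tell the writer to stop
writer_thread.join()

print(len(written))   # -> 10, one record per worker
```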
At first glance this looks like a race condition, since many threads are trying to write to the log file at the same time. See [this question](https://stackoverflow.com/q/489861/520779) for some pointers on how to lock a file for writing (so only one thread can access it at a time).
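The simplest way to serialize the writes, along the lines this answer suggests, is a single `threading.Lock` shared by all workers and held around each write. A sketch (a list stands in for the file; the names are mine):

```python
import threading

lock = threading.Lock()
lines = []  # stands in for the shared log file

def log(idx, url, code):
    # Only one thread at a time may append, so records never interleave
    with lock:
        lines.append("{0},{1},{2}".format(idx, url, code))

threads = [
    threading.Thread(target=log, args=(i, "http://example.com", 200))
    for i in range(50)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(lines))  # -> 50, no record lost
```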
13,350,427
I am learning to use JavaScript, Ajax, Python and Django together. In my project, a user selects a language from a drop-down list. The selection is then sent back to the server, and the server sends a response back to the Django template. This is done with JavaScript. In the Django template, I need the response, for example, German, to update the HTML code. How do I pass the response to the HTML code? The response can be seen in the range of .... How can I do it without reloading the HTML page? Thanks.
2012/11/12
[ "https://Stackoverflow.com/questions/13350427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1165201/" ]
You could use jQuery to send the Ajax request, and the server could send the response with HTML content.

For example:

Server: when the server receives the Ajax request, this view returns the HTML content, i.e. a rendered template that can be inserted on the client side via Ajax.

```
def update_html_on_client(request):
    language = request.GET.get('language', None)
    #for the selected language do something
    cal_1 = ...
    return render_to_response('my_template.html', {'cal': cal_1},
                              context_instance=template.RequestContext(request))
```

Template: this is an example of the Ajax function you would use to generate the Ajax request. You can select the div which you populate with the HTML response returned by the server.

```
function getServerResponse(){
    $.ajax({
        url: 'your_url_here',
        data: {language: 'German'},
        dataType: 'html',
        success: function(data, status, xhr){
            $('#server_response').html(data);
        }
    });
}
```
Because your templates are rendered on the server, your best bet is to simply reload the page (which re-renders the page with your newly selected language). An alternative to using ajax would be to store the language in a cookie, that way you don't have to maintain state on the client. Either way, the reload is still necessary. You could investigate client side templating. [Handlebars](http://handlebarsjs.com/) is a good choice.
66,889,445
I am trying to push my app to heroku, I am following the tutorial from udemy - everything goes smooth as explained. Once I am at the last step - doing `git push heroku master` I get the following error in the console: ``` (flaskdeploy) C:\Users\dmitr\Desktop\jose\FLASK_heroku>git push heroku master Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads remote: ! remote: ! git push heroku <branchname>:main remote: ! remote: ! This article goes into details on the behavior: remote: ! https://devcenter.heroku.com/articles/duplicate-build-version remote: remote: Verifying deploy... remote: remote: ! Push rejected to mitya-test-app. remote: To https://git.heroku.com/mitya-test-app.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to 'https://git.heroku.com/mitya-test-app.git' ``` On a heroku site in the log area I have the following explanation of this error: ``` -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> Installing python-3.6.13 -----> Installing pip 20.1.1, setuptools 47.1.1 and wheel 0.34.2 -----> Installing SQLite3 -----> Installing requirements with pip Collecting certifi==2020.12.5 Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB) Processing /home/linux1/recipes/ci/click_1610990599742/work ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/linux1/recipes/ci/click_1610990599742/work' ! Push rejected, failed to compile Python app. ! 
Push failed ``` In my requirements.txt I have the following: ``` certifi==2020.12.5 click @ file:///home/linux1/recipes/ci/click_1610990599742/work Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work gunicorn==20.0.4 itsdangerous @ file:///home/ktietz/src/ci/itsdangerous_1611932585308/work Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work MarkupSafe @ file:///C:/ci/markupsafe_1607027406824/work Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work wincertstore==0.2 ``` This is, in fact, only a super simple test app that consists of `app.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "THE SITE IS OK" if __name__ == '__main__': app.run() ``` and `Procfile` with this inside: ``` web: gunicorn app:app ```
2021/03/31
[ "https://Stackoverflow.com/questions/66889445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11803958/" ]
You could use [repr](https://docs.python.org/3/library/functions.html#repr) ```py print(repr(var)) ```
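A quick check of what `repr` actually produces: it adds whichever quote style avoids escaping, so a plain string comes back single-quoted:

```python
var = "textlol"
quoted = repr(var)
print(quoted)  # -> 'textlol'  (the single quotes are part of the string)

assert quoted == "'textlol'"
# If the string itself contains a single quote, repr switches to double quotes:
assert repr("it's") == '"it\'s"'
```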
Maybe you want to use an f-string?

```
var = "textlol"
print(f"\"{var}\"")
```
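The backslashes can also be avoided by mixing quote characters; since the f-string itself is single-quoted, the inner double quotes need no escaping (Python 3.6+):

```python
var = "textlol"

# Single-quoted f-string, so the double quotes are plain characters
wrapped = f'"{var}"'
print(wrapped)  # -> "textlol"

assert wrapped == '"textlol"'
```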
66,889,445
I am trying to push my app to heroku, I am following the tutorial from udemy - everything goes smooth as explained. Once I am at the last step - doing `git push heroku master` I get the following error in the console: ``` (flaskdeploy) C:\Users\dmitr\Desktop\jose\FLASK_heroku>git push heroku master Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads remote: ! remote: ! git push heroku <branchname>:main remote: ! remote: ! This article goes into details on the behavior: remote: ! https://devcenter.heroku.com/articles/duplicate-build-version remote: remote: Verifying deploy... remote: remote: ! Push rejected to mitya-test-app. remote: To https://git.heroku.com/mitya-test-app.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to 'https://git.heroku.com/mitya-test-app.git' ``` On a heroku site in the log area I have the following explanation of this error: ``` -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> Installing python-3.6.13 -----> Installing pip 20.1.1, setuptools 47.1.1 and wheel 0.34.2 -----> Installing SQLite3 -----> Installing requirements with pip Collecting certifi==2020.12.5 Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB) Processing /home/linux1/recipes/ci/click_1610990599742/work ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/linux1/recipes/ci/click_1610990599742/work' ! Push rejected, failed to compile Python app. ! 
Push failed ``` In my requirements.txt I have the following: ``` certifi==2020.12.5 click @ file:///home/linux1/recipes/ci/click_1610990599742/work Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work gunicorn==20.0.4 itsdangerous @ file:///home/ktietz/src/ci/itsdangerous_1611932585308/work Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work MarkupSafe @ file:///C:/ci/markupsafe_1607027406824/work Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work wincertstore==0.2 ``` This is, in fact, only a super simple test app that consists of `app.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "THE SITE IS OK" if __name__ == '__main__': app.run() ``` and `Procfile` with this inside: ``` web: gunicorn app:app ```
2021/03/31
[ "https://Stackoverflow.com/questions/66889445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11803958/" ]
You could use [repr](https://docs.python.org/3/library/functions.html#repr) ```py print(repr(var)) ```
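A quick illustration — `repr` returns the printable representation of the string, which includes surrounding quotes (Python happens to use single quotes here):

```python
var = "text"
print(repr(var))  # prints 'text' (with the quotes)
```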
It's easy, as you can see below — just embed the quotation marks in the string itself: ``` var = '"text"' print(var) ```
66,889,445
I am trying to push my app to heroku, I am following the tutorial from udemy - everything goes smooth as explained. Once I am at the last step - doing `git push heroku master` I get the following error in the console: ``` (flaskdeploy) C:\Users\dmitr\Desktop\jose\FLASK_heroku>git push heroku master Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads remote: ! remote: ! git push heroku <branchname>:main remote: ! remote: ! This article goes into details on the behavior: remote: ! https://devcenter.heroku.com/articles/duplicate-build-version remote: remote: Verifying deploy... remote: remote: ! Push rejected to mitya-test-app. remote: To https://git.heroku.com/mitya-test-app.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to 'https://git.heroku.com/mitya-test-app.git' ``` On a heroku site in the log area I have the following explanation of this error: ``` -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> Installing python-3.6.13 -----> Installing pip 20.1.1, setuptools 47.1.1 and wheel 0.34.2 -----> Installing SQLite3 -----> Installing requirements with pip Collecting certifi==2020.12.5 Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB) Processing /home/linux1/recipes/ci/click_1610990599742/work ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/linux1/recipes/ci/click_1610990599742/work' ! Push rejected, failed to compile Python app. ! 
Push failed ``` In my requirements.txt I have the following: ``` certifi==2020.12.5 click @ file:///home/linux1/recipes/ci/click_1610990599742/work Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work gunicorn==20.0.4 itsdangerous @ file:///home/ktietz/src/ci/itsdangerous_1611932585308/work Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work MarkupSafe @ file:///C:/ci/markupsafe_1607027406824/work Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work wincertstore==0.2 ``` This is, in fact, only a super simple test app that consists of `app.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "THE SITE IS OK" if __name__ == '__main__': app.run() ``` and `Procfile` with this inside: ``` web: gunicorn app:app ```
2021/03/31
[ "https://Stackoverflow.com/questions/66889445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11803958/" ]
You could use [repr](https://docs.python.org/3/library/functions.html#repr) ```py print(repr(var)) ```
You'll need to insert your string value into the string that is printed. There are a few ways to do this, take a look at [this answer on a similar question](https://stackoverflow.com/a/52155770/15469537). In your case, you want to print `"` around the string. As `"` is a special character, you'll need to insert a backslash before it to 'escape' it, and print the `"` instead of ending the string. The end result would be something like ``` var = "text" print("\"{}\"".format(var)) ```
66,889,445
I am trying to push my app to heroku, I am following the tutorial from udemy - everything goes smooth as explained. Once I am at the last step - doing `git push heroku master` I get the following error in the console: ``` (flaskdeploy) C:\Users\dmitr\Desktop\jose\FLASK_heroku>git push heroku master Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads remote: ! remote: ! git push heroku <branchname>:main remote: ! remote: ! This article goes into details on the behavior: remote: ! https://devcenter.heroku.com/articles/duplicate-build-version remote: remote: Verifying deploy... remote: remote: ! Push rejected to mitya-test-app. remote: To https://git.heroku.com/mitya-test-app.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to 'https://git.heroku.com/mitya-test-app.git' ``` On a heroku site in the log area I have the following explanation of this error: ``` -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> Installing python-3.6.13 -----> Installing pip 20.1.1, setuptools 47.1.1 and wheel 0.34.2 -----> Installing SQLite3 -----> Installing requirements with pip Collecting certifi==2020.12.5 Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB) Processing /home/linux1/recipes/ci/click_1610990599742/work ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/linux1/recipes/ci/click_1610990599742/work' ! Push rejected, failed to compile Python app. ! 
Push failed ``` In my requirements.txt I have the following: ``` certifi==2020.12.5 click @ file:///home/linux1/recipes/ci/click_1610990599742/work Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work gunicorn==20.0.4 itsdangerous @ file:///home/ktietz/src/ci/itsdangerous_1611932585308/work Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work MarkupSafe @ file:///C:/ci/markupsafe_1607027406824/work Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work wincertstore==0.2 ``` This is, in fact, only a super simple test app that consists of `app.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "THE SITE IS OK" if __name__ == '__main__': app.run() ``` and `Procfile` with this inside: ``` web: gunicorn app:app ```
2021/03/31
[ "https://Stackoverflow.com/questions/66889445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11803958/" ]
You could use [repr](https://docs.python.org/3/library/functions.html#repr) ```py print(repr(var)) ```
Add the quotation marks when you print it. This way you don't have to change the variable. Example: ``` var = "text" print('"' + var + '"') ``` Output: "text"
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
For a `list` (call it `seq`) and a target row length (call it `rowsize`), you would do something like this: ``` split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)] ``` You could then use the writer's `writerows` method to write elements to the file: ``` writer.writerows(split_into_rows) ``` For lazy evaluation, use a generator expression instead: ``` split_into_rows = (seq[i: i + rowsize] for i in range(0, len(seq), rowsize)) ```
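For instance, putting the pieces together with the numbers from the question — writing to an in-memory `io.StringIO` buffer here (an assumption, standing in for the real file) so the sketch stays self-contained:

```python
import csv
import io

seq = ["1", "2", "3", "4", "5", "6", "7", "8"]
rowsize = 4

# Slice the flat list into rowsize-length rows
split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(split_into_rows)
print(buf.getvalue())
```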
I suggest using a temp array B. 1. Get the length of the original array A 2. Copy the first A.length/2 to array B 3. Add new line character to array B. 4. Append the rest of array A to B.
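A minimal sketch of those four steps, assuming the list from the question; the final comma-join and cleanup around the newline marker are my own addition, since the answer doesn't say how B gets written out:

```python
A = ["1", "2", "3", "4", "5", "6", "7", "8"]

half = len(A) // 2        # step 1: length of A, split at the midpoint
B = A[:half]              # step 2: copy the first half into B
B = B + ["\n"]            # step 3: add a newline character to B
B = B + A[half:]          # step 4: append the rest of A

# Join on commas; the stray commas around the newline marker are cleaned up
text = ",".join(B).replace(",\n,", "\n")
print(text)
```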
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
For a `list` (call it `seq`) and a target row length (call it `rowsize`), you would do something like this: ``` split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)] ``` You could then use the writer's `writerows` method to write elements to the file: ``` writer.writerows(split_into_rows) ``` For lazy evaluation, use a generator expression instead: ``` split_into_rows = (seq[i: i + rowsize] for i in range(0, len(seq), rowsize)) ```
When dealing with lists in this manner, it's best to use the numpy module: ``` import csv import numpy as np A = ["1", "2", "3", "4", "5", "6", "7", "8"] #now to convert your list into a 2-dimensional numpy array A = np.array(A).reshape((2, 4)) result = open("newfile.csv", 'wb') writer = csv.writer(result, dialect = 'excel') #you use writer.writerows() instead of .writerow because you have multiple rows rather than one writer.writerows(A) result.close() ``` numpy provides many ways of manipulating lists. Look at some documentation [here](http://docs.scipy.org/doc/numpy/reference/). Yes, there are many ways to do what you want, but you'll have to deal with lists that are not simply equivalent to `range(1,9)`, and with numpy you can not only do what you want concisely, you can also manipulate your list in many different ways once you convert it to a numpy array.
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
For a `list` (call it `seq`) and a target row length (call it `rowsize`), you would do something like this: ``` split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)] ``` You could then use the writer's `writerows` method to write elements to the file: ``` writer.writerows(split_into_rows) ``` For lazy evaluation, use a generator expression instead: ``` split_into_rows = (seq[i: i + rowsize] for i in range(0, len(seq), rowsize)) ```
This ``` import itertools l = ['a1','a2','b1','b2'] def toCSV(fname,l): with open(fname+'.csv','w') as f: f.write('\n'.join([','.join(list(g)) for k,g in itertools.groupby(l,key=lambda k: k[0])])) toCSV('mycsv',l) ``` Would yield ``` a1,a2 b1,b2 ``` It relies heavily on the itertools `groupby` function ([Python docs](http://docs.python.org/2/library/itertools.html#itertools.groupby)). Essentially it will group the elements based on a key. The key can be computed using a lambda expression, which in this case extracts the letter before the number.
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
For a `list` (call it `seq`) and a target row length (call it `rowsize`), you would do something like this: ``` split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)] ``` You could then use the writer's `writerows` method to write elements to the file: ``` writer.writerows(split_into_rows) ``` For lazy evaluation, use a generator expression instead: ``` split_into_rows = (seq[i: i + rowsize] for i in range(0, len(seq), rowsize)) ```
``` >>> import csv >>> A = ["1", "2", "3", "4", "5", "6", "7", "8"] >>> with open("newfile.csv",'wb') as f: w = csv.writer(f) w.writerows(zip(*[iter(A)]*4)) ```
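Why this works: `[iter(A)]*4` puts the *same* iterator object in all four argument positions, so `zip` pulls four consecutive items for each output tuple:

```python
A = ["1", "2", "3", "4", "5", "6", "7", "8"]

it = iter(A)
# zip advances the one shared iterator four times per output tuple,
# so consecutive items end up grouped together
rows = list(zip(it, it, it, it))   # equivalent to zip(*[iter(A)]*4)
print(rows)  # [('1', '2', '3', '4'), ('5', '6', '7', '8')]
```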
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
I suggest using a temp array B. 1. Get the length of the original array A 2. Copy the first A.length/2 to array B 3. Add new line character to array B. 4. Append the rest of array A to B.
This ``` import itertools l = ['a1','a2','b1','b2'] def toCSV(fname,l): with open(fname+'.csv','w') as f: f.write('\n'.join([','.join(list(g)) for k,g in itertools.groupby(l,key=lambda k: k[0])])) toCSV('mycsv',l) ``` Would yield ``` a1,a2 b1,b2 ``` It relies heavily on itertool's groupby function. [Python Doc](http://docs.python.org/2/library/itertools.html#itertools.groupby).Essentially it will group the element based on a key. The key can be calculated using a lambda expression, which in this case would be the algebraic letter before the number.
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
When dealing with lists in this manner, it's best to use the numpy module: ``` import csv import numpy as np A = ["1", "2", "3", "4", "5", "6", "7", "8"] #now to convert your list into a 2-dimensional numpy array A = np.array(A).reshape((2, 4)) result = open("newfile.csv", 'wb') writer = csv.writer(result, dialect = 'excel') #you use writer.writerows() instead of .writerow because you have multiple rows rather than one writer.writerows(A) result.close() ``` numpy provides many ways of manipulating lists. Look at some documentation [here](http://docs.scipy.org/doc/numpy/reference/). Yes, there are many ways to do what you want, but you'll have to deal with lists that are not simply equivalent to `range(1,9)`, and with numpy you can not only do what you want concisely, you can also manipulate your list in many different ways once you convert it to a numpy array.
This ``` import itertools l = ['a1','a2','b1','b2'] def toCSV(fname,l): with open(fname+'.csv','w') as f: f.write('\n'.join([','.join(list(g)) for k,g in itertools.groupby(l,key=lambda k: k[0])])) toCSV('mycsv',l) ``` Would yield ``` a1,a2 b1,b2 ``` It relies heavily on the itertools `groupby` function ([Python docs](http://docs.python.org/2/library/itertools.html#itertools.groupby)). Essentially it will group the elements based on a key. The key can be computed using a lambda expression, which in this case extracts the letter before the number.
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
``` >>> import csv >>> A = ["1", "2", "3", "4", "5", "6", "7", "8"] >>> with open("newfile.csv",'wb') as f: w = csv.writer(f) w.writerows(zip(*[iter(A)]*4)) ```
This ``` import itertools l = ['a1','a2','b1','b2'] def toCSV(fname,l): with open(fname+'.csv','w') as f: f.write('\n'.join([','.join(list(g)) for k,g in itertools.groupby(l,key=lambda k: k[0])])) toCSV('mycsv',l) ``` Would yield ``` a1,a2 b1,b2 ``` It relies heavily on the itertools `groupby` function ([Python docs](http://docs.python.org/2/library/itertools.html#itertools.groupby)). Essentially it will group the elements based on a key. The key can be computed using a lambda expression, which in this case extracts the letter before the number.
13,219,585
I'm teaching myself Python and I'm a bit confused ``` #!/usr/bin/python def Age(): age_ = int(input("How old are you? ")) def Name(): name_ = raw_input("What is your name? ") def Sex(): sex_ = raw_input("Are you a man(1) or a woman(2)? ") if sex_ == 1: man = 1 elif sex_ == 2: woman = 2 else: print "Please enter 1 for man or 2 for woman " Age() Name() Sex() print "Your name is " + name_ + " you are " + age_ + " years old and you are a " + sex_ + "." ``` **Error** > > File "./functions2.py", line 25, in print "Your name is " + > name\_ + " you are " + age\_ + " years old and you are a " + sex\_ + "." > NameError: name 'name\_' is not defined > > > Surely it is defined in the `Name()` function? I'm confused :( --- Ah, I am now, thanks for making it a bit more newb-proof. I've now got a problem in the Sex() function. It was returning a number in the print rather than the word "man" or "woman", so I changed the code to try to fix it. But I'm getting the following error now: File ``` "./functions2.py", line 16 2 = woman SyntaxError: can't assign to literal ``` I tried to make the 2 a str(2) but it gave me another error. Thanks for your help so far, appreciated ``` #!/usr/bin/python def Age(): age_ = raw_input("How old are you? ") return age_ def Name(): name_ = raw_input("What is your name? ") return name_ def Sex(): sex_ = str(raw_input("Are you a man or a woman? ")) if sex_ == 1: 1 = man return sex_ elif sex_ == 2: 2 = woman return sex_ else: print "Please enter man or woman " age_ = Age() name_ = Name() sex_ = Sex() print "Your name is " + name_ + " you are " + age_ + " years old and you are a " + sex_ + "." ```
2012/11/04
[ "https://Stackoverflow.com/questions/13219585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1222298/" ]
Programming Lua has an example of [non-preemptive multithreading](http://www.lua.org/pil/9.4.html) (using coroutines), which you can probably use almost directly. In my opinion coroutines will be a better solution for your use case. There is also the [copas library](http://keplerproject.github.com/copas/index.html), which describes itself as "a dispatcher based on coroutines that can be used by TCP/IP servers", but you can actually use it for sending requests asynchronously too (using a combination of `addthread` and `step` calls).
Lua Lanes creates an entirely new (but minimal) Lua state for each lane. Any upvalues or arguments passed are copied, not referenced; this means your allClients table is being copied, along with the sockets it contains. The error is occurring because the sockets are userdata, which Lua Lanes does not know how to copy without some advice from a C module. Unfortunately, LuaSocket offers no such advice. (There are ways around this, but be cautious: LuaSocket is *not* thread-safe and synchronisation bugs are very difficult to track down.) Although it will not solve your problem, you should note that you need to require LuaSocket in your spawned lane; it is not copied by default. Solutions! ========== These are ordered from easy to hard (and mostly transcribed from my other answer [here](https://stackoverflow.com/a/12890914/837856)). Single-threaded Polling ----------------------- Repeatedly call polling functions in LuaSocket: * Blocking: call `socket.select` with no time argument and wait for the socket to be readable. * Non-blocking: call `socket.select` with a timeout argument of `0`, and use `sock:settimeout(0)` on the socket you're reading from. Simply call these repeatedly. I would suggest using a [coroutine scheduler](https://gist.github.com/1818054) for the non-blocking version, to allow other parts of the program to continue executing without causing too much delay. (If you go for this solution, I suggest reviewing [Paul's answer](https://stackoverflow.com/questions/13219570/lualanes-and-luasockets/13223515#13223515).) Dual-threaded Polling --------------------- Your main thread does not deal with sockets at all. Instead, it spawns another lane which requires LuaSocket, handles all clients and communicates with the main thread via a [linda](http://kotisivu.dnainternet.net/askok/bin/lanes/#lindas). This is probably the most viable option for you. 
Multi-threaded Polling ---------------------- Same as above, except each thread handles a subset of all the clients (1 to 1 is possible, but diminishing return will set in with large quantities of clients). I've made a simple example of this, available [here](https://gist.github.com/4026258). It relies on Lua Lanes 3.4.0 ([GitHub repo](https://github.com/LuaLanes/lanes/)) and a patched LuaSocket 2.0.2 ([source](http://luarocks.org/repositories/rocks/#luasocket), [patch](http://www.net-core.org/dl/luasocket-2.0.2-acceptfd.patch), [blog post re' patch](http://www.net-core.org/39/lua/patching-luasocket-to-make-it-compatible)) The results are promising, though you should definitely refactor my example code if you derive from it. [LuaJIT](http://luajit.org/luajit.html) + ENet ---------------------------------------------- [ENet](http://enet.bespin.org/) is a great library. It provides the perfect mix between TCP and UDP: reliable when desired, unreliable otherwise. It also abstracts operating system specific details, much like LuaSocket does. You can use the Lua API to bind it, or directly access it via [LuaJIT's FFI](http://luajit.org/ext_ffi.html) (recommended).
17,664,841
I have a set and dictionary and a value = 5 ``` v = s = {'a', 'b', 'c'} d = {'b':5 # <-- new value} ``` If the key 'b' in dictionary d for example is in set s then I want to make that value equal to the new value when I return a dict comprehension, or 0 if the key in set s is not in the dictionary d. So this is my code to do it where s['b'] = 5 and my new dictionary is ... ``` {'a':0, 'b':5, 'c':0} ``` I wrote a dict comprehension ``` { k:d[k] if k in d else k:0 for k in s} ^ SyntaxError: invalid syntax ``` Why?! I'm so furious it doesn't work. This is how you do if else in Python, isn't it? So sorry everyone. For those who visited this page, I originally put { k:d[k] if k in v else k:0 for k in v} and s['b'] = 5 was just a representation that the new dictionary I created would have a key 'b' equaling 5, but it isn't correct because you can't index a set like that. So to reiterate, v and s are equal. They just mean vector and set.
2013/07/15
[ "https://Stackoverflow.com/questions/17664841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509483/" ]
The expanded form of what you're trying to achieve is ``` a = {} for k in v: a[k] = d[k] if k in d else 0 ``` where `d[k] if k in d else 0` is [Python's version of the ternary operator](https://docs.python.org/dev/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator). See? You need to drop `k:` from the right part of the expression: ``` {k: d[k] if k in d else 0 for k in v} # ≡ {k: (d[k] if k in d else 0) for k in v} ``` You can write it concisely like ``` a = {k: d.get(k, 0) for k in v} ```
You can't use a ternary `if` expression for a name:value pair, because a name:value pair isn't a value. You *can* use an `if` expression for the value or key separately, which seems to be exactly what you want here: ``` {k: (d[k] if k in v else 0) for k in v} ``` However, this is kind of silly. You're doing `for k in v`, so every `k` is in `v` by definition. Maybe you wanted this: ``` {k: (d[k] if k in d else 0) for k in v} ```
17,664,841
I have a set and dictionary and a value = 5 ``` v = s = {'a', 'b', 'c'} d = {'b':5 # <-- new value} ``` If the key 'b' in dictionary d for example is in set s then I want to make that value equal to the new value when I return a dict comprehension, or 0 if the key in set s is not in the dictionary d. So this is my code to do it where s['b'] = 5 and my new dictionary is ... ``` {'a':0, 'b':5, 'c':0} ``` I wrote a dict comprehension ``` { k:d[k] if k in d else k:0 for k in s} ^ SyntaxError: invalid syntax ``` Why?! I'm so furious it doesn't work. This is how you do if else in Python, isn't it? So sorry everyone. For those who visited this page, I originally put { k:d[k] if k in v else k:0 for k in v} and s['b'] = 5 was just a representation that the new dictionary I created would have a key 'b' equaling 5, but it isn't correct because you can't index a set like that. So to reiterate, v and s are equal. They just mean vector and set.
2013/07/15
[ "https://Stackoverflow.com/questions/17664841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509483/" ]
The expanded form of what you're trying to achieve is ``` a = {} for k in v: a[k] = d[k] if k in d else 0 ``` where `d[k] if k in d else 0` is [Python's version of the ternary operator](https://docs.python.org/dev/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator). See? You need to drop `k:` from the right part of the expression: ``` {k: d[k] if k in d else 0 for k in v} # ≡ {k: (d[k] if k in d else 0) for k in v} ``` You can write it concisely like ``` a = {k: d.get(k, 0) for k in v} ```
``` In [82]: s = {'a', 'b', 'c'} In [83]: d = {'b':5 } In [85]: {key: d.get(key, 0) for key in s} Out[85]: {'a': 0, 'b': 5, 'c': 0} ```
17,664,841
I have a set and dictionary and a value = 5 ``` v = s = {'a', 'b', 'c'} d = {'b':5 # <-- new value} ``` If the key 'b' in dictionary d for example is in set s then I want to make that value equal to the new value when I return a dict comprehension, or 0 if the key in set s is not in the dictionary d. So this is my code to do it where s['b'] = 5 and my new dictionary is ... ``` {'a':0, 'b':5, 'c':0} ``` I wrote a dict comprehension ``` { k:d[k] if k in d else k:0 for k in s} ^ SyntaxError: invalid syntax ``` Why?! I'm so furious it doesn't work. This is how you do if else in Python, isn't it? So sorry everyone. For those who visited this page, I originally put { k:d[k] if k in v else k:0 for k in v} and s['b'] = 5 was just a representation that the new dictionary I created would have a key 'b' equaling 5, but it isn't correct because you can't index a set like that. So to reiterate, v and s are equal. They just mean vector and set.
2013/07/15
[ "https://Stackoverflow.com/questions/17664841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509483/" ]
The expanded form of what you're trying to achieve is ``` a = {} for k in v: a[k] = d[k] if k in d else 0 ``` where `d[k] if k in d else 0` is [Python's version of the ternary operator](https://docs.python.org/dev/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator). See? You need to drop `k:` from the right part of the expression: ``` {k: d[k] if k in d else 0 for k in v} # ≡ {k: (d[k] if k in d else 0) for k in v} ``` You can write it concisely like ``` a = {k: d.get(k, 0) for k in v} ```
This should solve your problem: ``` >>> dict((k, d.get(k, 0)) for k in s) {'a': 0, 'c': 0, 'b': 5} ```
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
In C# 6.0 you can now do ``` string username = SomeUserObject?.Username; ``` username will be set to null if SomeUserObject is null. If you want it to get the value "", you can do ``` string username = SomeUserObject?.Username ?? ""; ```
It is called null coalescing and is performed as follows: ``` string username = SomeUserObject.Username ?? "" ```
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
You can use ? : as others have suggested, but you might want to consider the Null Object pattern, where you create a special static User `User.NotLoggedIn` and use that instead of null everywhere. Then it becomes easy to always do `.Username`. Other benefits: you get / can generate different exceptions for the case (null) where you didn't assign a variable and (not logged in) where that user isn't allowed to do something. Your NotLoggedIn user can be a class derived from User, say NotLoggedIn, that overrides methods and throws exceptions on things you can't do when not logged in, like make payments, send emails, ... As a derived class from User you get some fairly nice syntactic sugar, as you can do things like `if (someuser is NotLoggedIn) ...`
It is called null coalescing and is performed as follows: ``` string username = SomeUserObject.Username ?? "" ```
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
You can use ? : as others have suggested, but you might want to consider the Null Object pattern, where you create a special static User `User.NotLoggedIn` and use that instead of null everywhere. Then it becomes easy to always do `.Username`. Other benefits: you get / can generate different exceptions for the case (null) where you didn't assign a variable and (not logged in) where that user isn't allowed to do something. Your NotLoggedIn user can be a class derived from User, say NotLoggedIn, that overrides methods and throws exceptions on things you can't do when not logged in, like make payments, send emails, ... As a derived class from User you get some fairly nice syntactic sugar, as you can do things like `if (someuser is NotLoggedIn) ...`
You're thinking of the ternary operator. ``` string username = SomeUserObject == null ? "" : SomeUserObject.Username; ``` See <http://msdn.microsoft.com/en-us/library/ty67wk28.aspx> for more details.
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
The closest you're going to get, I think, is: ``` string username = SomeUserObject == null ? null : SomeUserObject.Username; ```
This is probably as close as you are going to get: ``` string username = (SomeUserObject != null) ? SomeUserObject.Username : null; ```
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
The closest you're going to get, I think, is: ``` string username = SomeUserObject == null ? null : SomeUserObject.Username; ```
You're thinking of the ternary operator. ``` string username = SomeUserObject == null ? "" : SomeUserObject.Username; ``` See <http://msdn.microsoft.com/en-us/library/ty67wk28.aspx> for more details.
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
In C# 6.0 you can now do ``` string username = SomeUserObject?.Username; ``` `username` will be set to null if `SomeUserObject` is null. If you want it to get the value "", you can do ``` string username = SomeUserObject?.Username ?? ""; ```
This is probably as close as you are going to get: ``` string username = (SomeUserObject != null) ? SomeUserObject.Username : null; ```
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
The closest you're going to get, I think, is: ``` string username = SomeUserObject == null ? null : SomeUserObject.Username; ```
It is called null coalescing and is performed as follows: ``` string username = SomeUserObject.Username ?? ""; ``` Note that this only handles a null `Username`; it will still throw if `SomeUserObject` itself is null.
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
The closest you're going to get, I think, is: ``` string username = SomeUserObject == null ? null : SomeUserObject.Username; ```
You can use `? :` as others have suggested, but you might want to consider the Null Object pattern: create a special static user, `User.NotLoggedIn`, and use that instead of `null` everywhere. Then it is always safe to call `.Username`. Other benefits: you get (or can generate) different exceptions for the case where you didn't assign a variable (`null`) versus where that user isn't allowed to do something (not logged in). Your not-logged-in user can be a class derived from `User`, say `NotLoggedIn`, that overrides methods and throws exceptions on things you can't do when not logged in, like making payments or sending emails. As a class derived from `User` you also get some fairly nice syntactic sugar, since you can do things like `if (someuser is NotLoggedIn) ...`
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
This is probably as close as you are going to get: ``` string username = (SomeUserObject != null) ? SomeUserObject.Username : null; ```
You're thinking of the ternary operator. ``` string username = SomeUserObject == null ? "" : SomeUserObject.Username; ``` See <http://msdn.microsoft.com/en-us/library/ty67wk28.aspx> for more details.
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
Your syntax on the second is slightly off. ``` string name = SomeUserObject != null ? SomeUserObject.Username : string.Empty; ```
This is probably as close as you are going to get: ``` string username = (SomeUserObject != null) ? SomeUserObject.Username : null; ```
70,279,160
Trying to find out the correct syntax to traverse through the list to get all values and insert them into Oracle. Edit: below is the JSON structure: ``` [{ "publishTime" : "2021-11-02T20:18:36.223Z", "data" : { "DateTime" : "2021-11-01T15:10:17Z", "Name" : "x1", "frtb" : { "code" : "set1" }, "mainAccounts" : [ { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST1" }, "Account" : "acc1" }, { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST2" }, "Account" : "acc2" }, { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST3" }, "Account" : "acc3" }], "sub" : { "ind" : false, "identifier" : { "domain" : "ops", "id" : "1", "version" : "1" }] ``` My Python code: ``` insert_statement = """INSERT INTO mytable VALUES (:1,:2)""" r = requests.get(url, cert=(auth_certificate, priv_key), verify=root_cert, timeout=3600) data = json.loads(r.text) myrows = [] for item in data: try: name = (item.get("data").get("Name")) except AttributeError: name = '' try: account = (item.get("data").get("mainAccounts")[0].get("Account")) except TypeError: account = '' rows = (name, account) myrows.append(rows) cursor.executemany(insert_statement, myrows) connection_target.commit() ``` With the above I only get the first value for 'account' in the list, i.e. ("acc1"). How do I get all the values, i.e. (acc1, acc2, acc3)? I have tried the below with no success: ``` try: Account = (item.get("data").get("mainAccounts")[0].get("Account") for item in data["mainAccounts") except TypeError: Account = '' ``` Please advise. Appreciate your help always.
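For reference, a stdlib-only sketch (on a trimmed-down, hypothetical version of the structure above) of collecting every `Account` value rather than only index 0:

```python
# trimmed-down stand-in for the JSON payload in the question
data = [{
    "data": {
        "Name": "x1",
        "mainAccounts": [
            {"Account": "acc1"},
            {"Account": "acc2"},
            {"Account": "acc3"},
        ],
    },
}]

myrows = []
for item in data:
    d = item.get("data") or {}
    name = d.get("Name", "")
    accounts = d.get("mainAccounts") or []
    # join every Account value into one string, e.g. for a single VARCHAR column
    account = ",".join(a.get("Account", "") for a in accounts)
    myrows.append((name, account))

print(myrows)  # [('x1', 'acc1,acc2,acc3')]
```

Whether the accounts are joined into one column value, as here, or inserted as one row per account depends on the target table design.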
2021/12/08
[ "https://Stackoverflow.com/questions/70279160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17544527/" ]
Knative Serving does not support such a policy as of today. However, the community thinks that Kubernetes' Descheduler should work just fine, as is discussed in <https://github.com/knative/serving/issues/6176>.
Not Knative-specific, but you could try adding `activeDeadlineSeconds` to the podSpec.
47,351,274
In summary, when provisioning my vagrant box using Ansible, I get thrown a mysterious error when trying to clone my bitbucket private repo using ssh. The error states that the "Host key verification failed". Yet if I vagrant ssh and then run the '*git clone*' command, the private repo is successfully cloned. This indicates that the ssh forward agent is indeed working and the vagrant box can access my private key associated with the bitbucket repo. I have been struggling for two days on this issue and am losing my mind! Please, somebody help me!!! **Vagrantfile:** ``` Vagrant.configure("2") do |config| config.vm.box = "ubuntu/xenial64" config.vm.network "private_network", ip: "192.168.33.10" config.ssh.forward_agent = true # Only contains ansible dependencies config.vm.provision "shell", inline: "sudo apt-get install python-minimal -y" # Use ansible for all provisioning: config.vm.provision "ansible" do |ansible| ansible.playbook = "provisioning/playbook.yml" end end ``` My **playbook.yml** is as follows: ``` --- - hosts: all become: true tasks: - name: create /var/www/ directory file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0755 - name: Add the user 'ubuntu' to group 'www-data' user: name: ubuntu shell: /bin/bash groups: www-data append: yes - name: Clone bitbucket repo git: repo: git@bitbucket.org:gustavmahler/example.com.git dest: /var/www/poo version: master accept_hostkey: yes ``` **Error Message:** `vagrant provision` > > TASK [common : Clone bitbucket repo] \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* > > > fatal: [default]: FAILED! 
=> {"changed": false, "cmd": "/usr/bin/git clone --origin origin '' /var/www/poo", "failed": true, "msg": "Cloning into '/var/www/poo'...\nWarning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.\r\nPermission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.", "rc": 128, "stderr": "Cloning into '/var/www/poo'...\nWarning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.\r\nPermission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n", "stderr\_lines": ["Cloning into '/var/www/poo'...", "Warning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.", "Permission denied (publickey).", "fatal: Could not read from remote repository.", "", "Please make sure you have the correct access rights", "and the repository exists."], "stdout": "", "stdout\_lines": []} > > > **Additional Info:** * *ssh-add -l* on my machine does contain the associated bitbucket repo key. * *ssh-add -l* inside the vagrant box does also contain the associated bitbucket repo key (through ssh-forwarding). **Yet cloning works** if done manually inside the vagrant box **?**: ``` vagrant ssh git clone git@bitbucket.org:myusername/myprivaterepo.com.git Then type "yes" to allow the RSA fingerprint to be added to ~/.ssh/known_hosts (as its first connection with bitbucket) ``` **Possible solution?** I have seen in the Ansible documentation that there is a *[key\_file](http://docs.ansible.com/ansible/latest/git_module.html#options):* option. How would I reference the private key which is located outside the vagrant box and is passed in using ssh forwarding? 
I do have multiple ssh keys for different entities inside my ~/.ssh/. Perhaps the git clone command, when run by Ansible provisioning, isn't selecting the correct key? Any help is greatly appreciated, and thanks for reading my nightmare.
2017/11/17
[ "https://Stackoverflow.com/questions/47351274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4515324/" ]
Since you run the whole playbook with `become: true`, SSH key-forwarding (as well as troubleshooting) becomes irrelevant, because the user connecting to BitBucket from your play is `root`. Run the task connecting to BitBucket as the `ubuntu` user: * either specifying `become: false` in the `Clone bitbucket repo` task, * or removing `become: true` from the play and adding it only to tasks that require elevated permissions.
This answer comes direct from techraf's helpful comments. * I changed the owner of the /var/www directory from 'www-data' to 'ubuntu' (the username I use to login via ssh). * I also added "become: false" above the git task. NOTE: I have since been dealing with the following issue so this answer does not fully resolve my problems: [Ansible bitbucket clone repo provisioning ssh error](https://stackoverflow.com/questions/47784489/ansible-bitbucket-clone-repo-provisioning-ssh-error) **Updated working playbook.yml file**: ``` --- - hosts: all become: true tasks: - name: create /var/www/ directory file: dest=/var/www/ state=directory owner=ubuntu group=www-data mode=0755 - name: Add the user 'ubuntu' to group 'www-data' user: name: ubuntu shell: /bin/bash groups: www-data append: yes - name: Clone bitbucket repo become: false git: repo: git@bitbucket.org:[username]/example.com.git dest: /var/www/poo version: master accept_hostkey: yes ```
40,477,687
I'm getting `ImportError: No module named django` when trying to up my containers running `docker-compose up`. Here's my scenario: Dockerfile ``` FROM python:2.7 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose.yml ``` version: '2' services: db: image: postgres web: build: . command: bash -c "python manage.py runserver" volumes: - .:/code expose: - "8000" links: - db:db ``` manage.py ``` import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings") from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) ``` * The containers were built successfully * My packages are present in the container (I get them when I run `pip freeze`) I've changed the manage.py content for ``` import django print django.get_version() ``` and it worked. This is basically the same example as in this [tutorial](https://docs.docker.com/compose/django/), except for starting a new project.
2016/11/08
[ "https://Stackoverflow.com/questions/40477687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2812350/" ]
Maybe what you want is to do this: ``` void printHandResults(struct Card (*hand)[]); ``` and this: ``` void printHandResults(struct Card (*hand)[]) { } ``` What you were doing was passing a pointer to an array of struct variables from `main`, BUT the function was declared to receive **an array of pointers to struct variables**, not **a pointer to an array of struct variables**! With the declaration above the type mismatch no longer occurs, and thus no warning. Note that the (square) brackets `[]` have higher precedence than the (unary) dereferencing operator `*`, so a set of parentheses `()` enclosing the `*` operator and the parameter name is needed to declare a pointer to an array rather than an array of pointers.
An easier way: ``` #define HAND_SIZE 5 typedef struct Cards{ char suit; char face; }card_t; void printHandResults( card_t*); int main(void) { card_t hand[HAND_SIZE]; card_t* p_hand = hand; ... printHandResults(p_hand); } void printHandResults( card_t *hand) { // Here you can use pointer arithmetic ... } ```
40,477,687
I'm getting `ImportError: No module named django` when trying to up my containers running `docker-compose up`. Here's my scenario: Dockerfile ``` FROM python:2.7 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose.yml ``` version: '2' services: db: image: postgres web: build: . command: bash -c "python manage.py runserver" volumes: - .:/code expose: - "8000" links: - db:db ``` manage.py ``` import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings") from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) ``` * The containers were built successfully * My packages are present in the container (I get them when I run `pip freeze`) I've changed the manage.py content for ``` import django print django.get_version() ``` and it worked. This is basically the same example as in this [tutorial](https://docs.docker.com/compose/django/), except for starting a new project.
2016/11/08
[ "https://Stackoverflow.com/questions/40477687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2812350/" ]
An array decays into a raw pointer to the first array element. So you can do something more like this instead: ``` #define HAND_SIZE 5 struct Card { char suit; char face; }; void printHandResults(struct Card *hand); int main(void) { struct Card hand[HAND_SIZE]; ... printHandResults(hand); } void printHandResults(struct Card *hand) { for (int i = 0; i < HAND_SIZE; ++i) { // print hand[i].suit and hand[i].face as needed... } } ``` Alternatively: ``` #define HAND_SIZE 5 struct Card { char suit; char face; }; void printHandResults(struct Card *hand, int numInHand); int main(void) { struct Card hand[HAND_SIZE]; ... printHandResults(hand, HAND_SIZE); } void printHandResults(struct Card *hand, int numInHand) { for (int i = 0; i < numInHand; ++i) { // print hand[i].suit and hand[i].face as needed... } } ``` Alternatively, create a new typedef for the card array, and then you can create variables and pointers of that type (note the extra dereference, since the parameter is now a pointer to the whole array): ``` #define HAND_SIZE 5 struct Card { char suit; char face; }; typedef struct Card Hand[HAND_SIZE]; void printHandResults(Hand *hand); int main(void) { Hand hand; ... printHandResults(&hand); } void printHandResults(Hand *hand) { for (int i = 0; i < HAND_SIZE; ++i) { // print (*hand)[i].suit and (*hand)[i].face as needed... } } ```
An easier way: ``` #define HAND_SIZE 5 typedef struct Cards{ char suit; char face; }card_t; void printHandResults( card_t*); int main(void) { card_t hand[HAND_SIZE]; card_t* p_hand = hand; ... printHandResults(p_hand); } void printHandResults( card_t *hand) { // Here you can use pointer arithmetic ... } ```
57,498,262
It seems so basic, but I can't work out how to achieve the following... Consider the scenario where I have the following data: ```py all_columns = ['A','B','C','D'] first_columns = ['A','B'] second_columns = ['C','D'] new_columns = ['E','F'] values = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] df = pd.DataFrame(data = values, columns = all_columns) df A B C D 0 1 2 3 4 1 5 6 7 8 2 9 10 11 12 3 13 14 15 16 ``` How can I, using this data, subtract, say, column C - column A, then column D - column B, and return two new columns E and F respectively to my Pandas dataframe `df`? I have multiple columns, so writing the formula one by one is not an option. I imagine it should be something like the following, but Python thinks that I am trying to subtract the list names rather than the values in the actual lists... ```py df[new_columns] = df[second_columns] - df[first_columns] ``` Expected output: ```py A B C D E F 0 1 2 3 4 2 2 1 5 6 7 8 2 2 2 9 10 11 12 2 2 3 13 14 15 16 2 2 ```
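A short, hedged sketch of one way around this: when two DataFrames are subtracted, pandas aligns them on column labels (so `df[second_columns] - df[first_columns]` tries to match `C`/`D` against `A`/`B` and yields NaN columns); subtracting the columns pairwise avoids the alignment entirely.

```python
import pandas as pd

all_columns = ['A', 'B', 'C', 'D']
first_columns = ['A', 'B']
second_columns = ['C', 'D']
new_columns = ['E', 'F']
values = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
df = pd.DataFrame(data=values, columns=all_columns)

# pair up the column names by position and subtract one pair at a time,
# so no label alignment is attempted
for new, second, first in zip(new_columns, second_columns, first_columns):
    df[new] = df[second] - df[first]

print(df[new_columns])
```

An equivalent one-liner on recent pandas versions is `df[new_columns] = df[second_columns].to_numpy() - df[first_columns].to_numpy()`, since dropping to NumPy discards the labels.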
2019/08/14
[ "https://Stackoverflow.com/questions/57498262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10750173/" ]
An option is `pivot_longer` from the dev version of `tidyr` ``` library(tidyr) library(dplyr) library(tibble) nm1 <- sub("\\.?\\d+$", "", names(df)) names(df) <- paste0(nm1, ":", ave(seq_along(nm1), nm1, FUN = seq_along)) df %>% rownames_to_column('rn') %>% pivot_longer(-rn, names_to= c(".value", "Group"), names_sep= ":") ```
You may use base R's `reshape`. The trick is the nested `varying` and to `cbind` an `id` column. ``` reshape(cbind(id=1:nrow(DF), DF), varying=list(c(2, 4), c(3, 5)), direction="long", timevar="Group", v.names=c("Gene", "AVG_EXPR"), times=c("Group1", "Group2")) # id Group Gene AVG_EXPR # 1.Group1 1 Group1 EMX1 0.55748718 # 2.Group1 2 Group1 EXO_C3L4 -0.71399573 # 3.Group1 3 Group1 FAF2P1 0.72595733 # 4.Group1 4 Group1 FAM224A 0.78106399 # 5.Group1 5 Group1 GATC 0.33296728 # 6.Group1 6 Group1 FAM43A -1.14699031 # 7.Group1 7 Group1 FAT4 0.64475462 # 8.Group1 8 Group1 EXO_FEZF1-AS1 0.88621529 # 1.Group2 1 Group2 EXO_BRPF3 -0.62445044 # 2.Group2 2 Group2 AFS 0.14577195 # 3.Group2 3 Group2 IJAS -1.15083794 # 4.Group2 4 Group2 CCDC187 0.95483371 # 5.Group2 5 Group2 CCDC200 -1.55217630 # 6.Group2 6 Group2 CCDC7 -0.51225644 # 7.Group2 7 Group2 CCL27 0.52270643 # 8.Group2 8 Group2 CD6 -0.08751736 ``` ### Data ``` DF <- structure(list(Group1 = structure(c(1L, 2L, 4L, 5L, 8L, 6L, 7L, 3L), .Label = c("EMX1", "EXO_C3L4", "EXO_FEZF1-AS1", "FAF2P1", "FAM224A", "FAM43A", "FAT4", "GATC"), class = "factor"), AVG_EXPR = c(0.557487175664929, -0.713995729705457, 0.725957334982896, 0.781063988053138, 0.332967281017435, -1.14699031084438, 0.644754618475526, 0.88621528791546), Group2 = structure(c(7L, 1L, 8L, 2L, 3L, 4L, 5L, 6L), .Label = c("AFS", "CCDC187", "CCDC200", "CCDC7", "CCL27", "CD6", "EXO_BRPF3", "IJAS"), class = "factor"), AVG_EXPR.1 = c(-0.624450436709626, 0.145771950049532, -1.15083793547685, 0.954833708527147, -1.55217630379434, -0.512256436764118, 0.522706429197282, -0.0875173629601027)), class = "data.frame", row.names = c(NA, -8L)) ```
57,498,262
It seems so basic, but I can't work out how to achieve the following... Consider the scenario where I have the following data: ```py all_columns = ['A','B','C','D'] first_columns = ['A','B'] second_columns = ['C','D'] new_columns = ['E','F'] values = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] df = pd.DataFrame(data = values, columns = all_columns) df A B C D 0 1 2 3 4 1 5 6 7 8 2 9 10 11 12 3 13 14 15 16 ``` How can I, using this data, subtract, say, column C - column A, then column D - column B, and return two new columns E and F respectively to my Pandas dataframe `df`? I have multiple columns, so writing the formula one by one is not an option. I imagine it should be something like the following, but Python thinks that I am trying to subtract the list names rather than the values in the actual lists... ```py df[new_columns] = df[second_columns] - df[first_columns] ``` Expected output: ```py A B C D E F 0 1 2 3 4 2 2 1 5 6 7 8 2 2 2 9 10 11 12 2 2 3 13 14 15 16 2 2 ```
2019/08/14
[ "https://Stackoverflow.com/questions/57498262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10750173/" ]
You can combine two `dplyr` calls with `bind_rows`. ```r df %>% select(gene=Group1,AVG_EXPR) %>% mutate(group='group 1') %>% bind_rows(.,df %>% select(gene=Group2,AVG_EXPR=AVG_EXPR.1) %>% mutate(group='group 2')) ```
You may use base R's `reshape`. The trick is the nested `varying` and to `cbind` an `id` column. ``` reshape(cbind(id=1:nrow(DF), DF), varying=list(c(2, 4), c(3, 5)), direction="long", timevar="Group", v.names=c("Gene", "AVG_EXPR"), times=c("Group1", "Group2")) # id Group Gene AVG_EXPR # 1.Group1 1 Group1 EMX1 0.55748718 # 2.Group1 2 Group1 EXO_C3L4 -0.71399573 # 3.Group1 3 Group1 FAF2P1 0.72595733 # 4.Group1 4 Group1 FAM224A 0.78106399 # 5.Group1 5 Group1 GATC 0.33296728 # 6.Group1 6 Group1 FAM43A -1.14699031 # 7.Group1 7 Group1 FAT4 0.64475462 # 8.Group1 8 Group1 EXO_FEZF1-AS1 0.88621529 # 1.Group2 1 Group2 EXO_BRPF3 -0.62445044 # 2.Group2 2 Group2 AFS 0.14577195 # 3.Group2 3 Group2 IJAS -1.15083794 # 4.Group2 4 Group2 CCDC187 0.95483371 # 5.Group2 5 Group2 CCDC200 -1.55217630 # 6.Group2 6 Group2 CCDC7 -0.51225644 # 7.Group2 7 Group2 CCL27 0.52270643 # 8.Group2 8 Group2 CD6 -0.08751736 ``` ### Data ``` DF <- structure(list(Group1 = structure(c(1L, 2L, 4L, 5L, 8L, 6L, 7L, 3L), .Label = c("EMX1", "EXO_C3L4", "EXO_FEZF1-AS1", "FAF2P1", "FAM224A", "FAM43A", "FAT4", "GATC"), class = "factor"), AVG_EXPR = c(0.557487175664929, -0.713995729705457, 0.725957334982896, 0.781063988053138, 0.332967281017435, -1.14699031084438, 0.644754618475526, 0.88621528791546), Group2 = structure(c(7L, 1L, 8L, 2L, 3L, 4L, 5L, 6L), .Label = c("AFS", "CCDC187", "CCDC200", "CCDC7", "CCL27", "CD6", "EXO_BRPF3", "IJAS"), class = "factor"), AVG_EXPR.1 = c(-0.624450436709626, 0.145771950049532, -1.15083793547685, 0.954833708527147, -1.55217630379434, -0.512256436764118, 0.522706429197282, -0.0875173629601027)), class = "data.frame", row.names = c(NA, -8L)) ```
67,080,686
So I need to enforce the type of a class variable, but more importantly, I need to enforce a list with its types. So if I have some code which looks like this: ``` class foo: def __init__(self, value: list[int]): self.value = value ``` Btw I'm using python version 3.9.4 So essentially my question is, how can I make sure that value is a list of integers. Thanks in advance * Zac
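Type annotations like `list[int]` are not enforced at runtime by Python itself; they are checked statically by tools such as mypy. If a runtime guarantee is really needed, one possible sketch (class name capitalized and error message invented for illustration) is an explicit check in `__init__`:

```python
class Foo:
    def __init__(self, value: list[int]):
        # bool is a subclass of int, so exclude it explicitly
        if not isinstance(value, list) or not all(
            isinstance(v, int) and not isinstance(v, bool) for v in value
        ):
            raise TypeError("value must be a list of integers")
        self.value = value


f = Foo([1, 2, 3])          # fine
try:
    Foo([1, "2", 3])        # rejected at runtime
except TypeError as exc:
    print(exc)              # value must be a list of integers
```

For heavier use, libraries such as pydantic or the `typeguard` package automate this kind of runtime validation from the annotations themselves.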
2021/04/13
[ "https://Stackoverflow.com/questions/67080686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15017177/" ]
You can enable the system keyboard by configuring `OVRCameraRig` Oculus features. 1. From the Hierarchy view, select `OVRCameraRig` to open settings in the Inspector view. 2. Under `OVR Manager`, in the `Quest Features` section, select `Require System Keyboard`. [![enter image description here](https://i.stack.imgur.com/X84TD.png)](https://i.stack.imgur.com/X84TD.png) [![enter image description here](https://i.stack.imgur.com/DME4s.png)](https://i.stack.imgur.com/DME4s.png) For more: <https://developer.oculus.com/documentation/unity/unity-keyboard-overlay/>
What @Codemaker answered here is for Oculus Integration. For the XR Interaction Toolkit, you can manually enable a keyboard instance by doing the following. ``` private TouchScreenKeyboard keyboard; public void ShowKeyboard() { keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.Default); } ``` Just add this `ShowKeyboard()` under the Inputfield's `On Select ()` event, and you are done. You can find the source [here](https://docs.unity3d.com/ScriptReference/TouchScreenKeyboard.Open.html). I tried it and it works for me.
1,399,727
Assuming the text is typed at the same time in the same (Israeli) timezone, The following free text lines are equivalent: ``` Wed Sep 9 16:26:57 IDT 2009 2009-09-09 16:26:57 16:26:57 September 9th, 16:26:57 ``` Is there a python module that would convert all these text-dates to an (identical) `datetime.datetime` instance? I would like to use it in a command-line tool that would get a freetext date and time as an argument, and return the equivalent date and time in different time zones, e.g.: ``` ~$wdate 16:00 Israel Israel: 16:00 San Francisco: 06:00 UTC: 13:00 ``` or: ``` ~$wdate 18:00 SanFran San Francisco 18:00:22 Israel: 01:00:22 (Day after) UTC: 22:00:22 ``` Any Ideas? Thanks, Udi
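For the timezone half of the tool, the conversion itself is straightforward with the stdlib `zoneinfo` module (Python 3.9+). A minimal sketch, with a made-up mapping from the tool's short names to IANA zone names:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# hypothetical short-name table for the wdate tool
ZONES = {"Israel": "Asia/Jerusalem", "SanFran": "America/Los_Angeles", "UTC": "UTC"}

def wdate(hhmm, zone_name, date=(2009, 9, 9)):
    hour, minute = map(int, hhmm.split(":"))
    local = datetime(*date, hour, minute, tzinfo=ZoneInfo(ZONES[zone_name]))
    # render the same instant in every known zone
    return {name: local.astimezone(ZoneInfo(z)).strftime("%H:%M")
            for name, z in ZONES.items()}

for name, hhmm in wdate("16:00", "Israel").items():
    print(f"{name}: {hhmm}")
```

On the question's date (September 9, 2009, during IDT) this yields Israel 16:00, San Francisco 06:00, UTC 13:00, matching the example; the date matters because DST changes the offsets.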
2009/09/09
[ "https://Stackoverflow.com/questions/1399727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51197/" ]
The [python-dateutil](http://labix.org/python-dateutil) package sounds like it would be helpful. Your examples only use simple HH:MM timestamps with a (magically shortened) city identifier, but it seems able to handle more complicated formats like those earlier in the question, too.
You could use [`time.strptime`](http://docs.python.org/library/time.html#time.strptime) Like this: ``` time.strptime("2009-09-09 16:26:57", "%Y-%m-%d %H:%M:%S") ``` It will return a `struct_time` value (more info on the Python doc page).
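Since the question's inputs come in several shapes, one hedged extension of this idea is to try a list of `strptime` patterns in turn (the pattern list here is illustrative, and `strptime` alone will not handle timezone names like IDT or ordinals like "9th"):

```python
import time

# candidate formats, tried in order; extend as needed
FORMATS = [
    "%Y-%m-%d %H:%M:%S",   # 2009-09-09 16:26:57
    "%H:%M:%S",            # 16:26:57
    "%H:%M",               # 16:00
]

def parse_freetext(text):
    for fmt in FORMATS:
        try:
            return time.strptime(text, fmt)
        except ValueError:
            continue  # this format did not match; try the next one
    raise ValueError(f"unrecognized date/time: {text!r}")

t = parse_freetext("2009-09-09 16:26:57")
print(t.tm_hour, t.tm_min)  # 16 26
```

For genuinely free-form text (month names, ordinals, weekday prefixes), a dedicated parser such as `dateutil` or `parsedatetime`, mentioned in the other answers, is the more robust route.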
1,399,727
Assuming the text is typed at the same time in the same (Israeli) timezone, The following free text lines are equivalent: ``` Wed Sep 9 16:26:57 IDT 2009 2009-09-09 16:26:57 16:26:57 September 9th, 16:26:57 ``` Is there a python module that would convert all these text-dates to an (identical) `datetime.datetime` instance? I would like to use it in a command-line tool that would get a freetext date and time as an argument, and return the equivalent date and time in different time zones, e.g.: ``` ~$wdate 16:00 Israel Israel: 16:00 San Francisco: 06:00 UTC: 13:00 ``` or: ``` ~$wdate 18:00 SanFran San Francisco 18:00:22 Israel: 01:00:22 (Day after) UTC: 22:00:22 ``` Any Ideas? Thanks, Udi
2009/09/09
[ "https://Stackoverflow.com/questions/1399727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51197/" ]
[parsedatetime](http://code.google.com/p/parsedatetime/) seems to be a very capable module for this specific task.
You could use [`time.strptime`](http://docs.python.org/library/time.html#time.strptime) Like this: ``` time.strptime("2009-09-09 16:26:57", "%Y-%m-%d %H:%M:%S") ``` It will return a `struct_time` value (more info on the Python doc page).
62,767,387
I'm a noob with the Python BeautifulSoup library, and I'm trying to scrape data from a website's Highcharts. I found that all the data I need is located in a script tag; however, I don't know how to scrape it (please see the attached image). Is there a way to get the data from this script tag using Python and BeautifulSoup? [script](https://i.stack.imgur.com/1Ogao.png)
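In general the trick is that a `<script>` tag's body comes back as plain text (via `tag.string` or `tag.get_text()` in BeautifulSoup), so the chart's series data has to be pulled out of that text with a regex or a JSON parse. A stdlib-only sketch on a made-up Highcharts-style snippet, with the BeautifulSoup step omitted:

```python
import json
import re

# hypothetical script text, similar to what Highcharts pages embed
script_text = """
Highcharts.chart('container', {
    series: [{ name: 'Price', data: [1.5, 2.25, 3.0] }]
});
"""

# grab the bracketed list after "data:" and parse it as JSON
match = re.search(r"data:\s*(\[[^\]]*\])", script_text)
data = json.loads(match.group(1)) if match else []
print(data)  # [1.5, 2.25, 3.0]
```

With BeautifulSoup, `script_text` would come from something like `soup.find("script", string=re.compile("series")).string` (a hypothetical selector; adjust it to the actual page).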
2020/07/07
[ "https://Stackoverflow.com/questions/62767387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13881565/" ]
Check out this working sandbox: <https://codesandbox.io/s/confident-taussig-tg3my?file=/src/App.js> One thing to note is you shouldn't base keys off of linear sequences [or indexes] because that can mess up React's tracking. Instead, pass the index value from map into the IssueRow component to track order, and give each issue its own serialized id. ``` const issueRows = issues.map((issue, index) => ( <IssueRow key={issue.id} issue={issue} index={index} /> )); ``` Also, the correct way to update state that depends on previous state, is to pass a function rather than an object: ``` setIssues(state => [...state, issue]); ``` You essentially iterate previous state, plus add the new issue.
This should be inside another `React.useEffect()`: ``` setTimeout(() => { createIssue(sampleIssue); }, 2000); ``` Otherwise, it will run every render in a loop. As a general rule of thumb there should be no side effects outside of useEffect.
62,767,387
I'm a noob with the Python BeautifulSoup library, and I'm trying to scrape data from a website's Highcharts. I found that all the data I need is located in a script tag; however, I don't know how to scrape it (please see the attached image). Is there a way to get the data from this script tag using Python and BeautifulSoup? [script](https://i.stack.imgur.com/1Ogao.png)
2020/07/07
[ "https://Stackoverflow.com/questions/62767387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13881565/" ]
Check out this working sandbox: <https://codesandbox.io/s/confident-taussig-tg3my?file=/src/App.js> One thing to note is you shouldn't base keys off of linear sequences [or indexes] because that can mess up React's tracking. Instead, pass the index value from map into the IssueRow component to track order, and give each issue its own serialized id. ``` const issueRows = issues.map((issue, index) => ( <IssueRow key={issue.id} issue={issue} index={index} /> )); ``` Also, the correct way to update state that depends on previous state, is to pass a function rather than an object: ``` setIssues(state => [...state, issue]); ``` You essentially iterate previous state, plus add the new issue.
Functional component can look like this: ``` function IssueTable() { const [issue1, setIssues] = React.useState([]); const createIssue = React.useCallback( function createIssue(issue) { //passing callback to setIssues // so useCallback does not depend // on issue1 and does not have stale // closure for issue1 setIssues((issues) => { issue.id = issues.length + 1; issue.created = new Date(); const newIssueList = issues.slice(); newIssueList.push(issue); return newIssueList; }); }, [] ); React.useEffect(() => { setTimeout(() => { createIssue(sampleIssue); }, 2000); }, [createIssue]); React.useEffect(() => { setTimeout(() => { setIssues(initialIssues); }, 500); }, []); const issueRows = issue1.map((issue) => ( <IssueRow key={issue.id} issue={issue} /> )); return ( <table className="bordered-table"> <thead> <tr> <th>ID</th> <th>Status</th> <th>Owner</th> <th>Created</th> <th>Effort</th> <th>Due Date</th> <th>Title</th> </tr> </thead> <tbody>{issueRows}</tbody> </table> ); } ```
63,438,737
I have an image in which the text is light and not clearly visible. I want to enhance the text and lighten the background using Python PIL or cv2. Help is needed: I need to print this image, so it has to be clearer. What I have done till now is below: ``` from PIL import Image, ImageEnhance im = Image.open('t.jpg') im2 = im.point(lambda p:p * 0.8) im2.save('t1.jpg') ``` The above has slightly darkened the text, but I still need to make the text clearer and the background whiter or lighter. The image is a photo of a printed document with black text on white paper, but that is not clearly visible. Below is a snap of the photo: [![enter image description here](https://i.stack.imgur.com/cRYkY.jpg)](https://i.stack.imgur.com/cRYkY.jpg) In the image above, behind and surrounding the letters and words there is some kind of noise: white and silver dots. How can I completely remove them so that the text borders are clearly visible? I have also inverted the image colours; a sample is given below: [![enter image description here](https://i.stack.imgur.com/R4tex.jpg)](https://i.stack.imgur.com/R4tex.jpg) I have tried the following: adaptive thresholding (failed), the Otsu method (failed), Gaussian blurring (failed), colour segmentation (failed), connected component removal (failed), and denoising (failed, because it also removed text), but nothing worked. All of the above methods made it even worse, because their results gave me the image below. The text is also affected when the connected component method is used: [![enter image description here](https://i.stack.imgur.com/RT5uk.jpg)](https://i.stack.imgur.com/RT5uk.jpg) Please, help is needed. I also tried ImageMagick morphology editing, but that failed too. How the noise actually appears is shown in the image below: it is a continuous rectangular bar filled with dots surrounding the text. ![enter image description here](https://i.stack.imgur.com/ZMHim.jpg)
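For what it's worth, the `point` idea can be pushed further: stretch the contrast first, then threshold, so the background is forced to pure white and the text to pure black. A minimal Pillow sketch on a synthetic stand-in image (the threshold of 128 and the `autocontrast` settings are guesses to tune per photo):

```python
from PIL import Image, ImageOps

# synthetic stand-in for the photo: light-gray background, darker-gray "text"
im = Image.new('L', (8, 8), 180)
for x in range(2, 6):
    im.putpixel((x, 4), 90)

im = ImageOps.autocontrast(im)                  # stretch the histogram to 0..255
im = im.point(lambda p: 0 if p < 128 else 255)  # hard threshold to black/white
print(sorted(set(im.getdata())))  # [0, 255]
```

On a real scan, this is essentially global thresholding, so it suffers from the same uneven-lighting issues as the attempts listed above; it mainly helps as a final cleanup pass after the illumination has been evened out.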
2020/08/16
[ "https://Stackoverflow.com/questions/63438737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Save first column in an array, too. Insert before your `END` section `{dates[FNR] = $1}` and replace both `$1` with `dates[n]`.
Could you please try the following; I have written it on mobile so couldn't test it as of now, but it should work. ``` awk ' { diff=$2-$3 a[$1]=(a[$1]!="" && a[$1]>diff?a[$1]:diff) } END{ for(i in a){ print i,a[i] } }' *.txt ```
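The grouping idea in the awk answer above (for each key in column 1, keep the largest `column2 - column3`) can also be sketched in Python. The sample rows below are hypothetical stand-ins, since the original `*.txt` data isn't shown:

```python
from collections import defaultdict

def max_diff_per_key(lines):
    """For each key in column 1, keep the largest column2 - column3."""
    best = defaultdict(lambda: float("-inf"))  # start lower than any real diff
    for line in lines:
        key, a, b = line.split()
        diff = float(a) - float(b)
        if diff > best[key]:
            best[key] = diff
    return dict(best)

# Hypothetical rows standing in for the contents of the *.txt files
rows = ["05Jan2020 7 3", "05Jan2020 6 2", "06Jan2020 1 2"]
print(max_diff_per_key(rows))  # {'05Jan2020': 4.0, '06Jan2020': -1.0}
```

Like the awk version, this makes a single pass and keeps only one running maximum per key, so memory stays proportional to the number of distinct keys rather than the number of rows.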
63,438,737
I have an image in which the text is lighter and not clearly visible; I want to enhance the text and lighten the background using Python PIL or cv2. Help is needed. I need to print this image and hence it has to be clearer. What I have done till now is below ``` from PIL import Image, ImageEnhance im = Image.open('t.jpg') im2 = im.point(lambda p:p * 0.8) im2.save('t1.jpg') ``` The above has slightly darkened the text, but I still need to make the text clear and the background whiter or lighter. The image is a photo of a printed document with the text in black on white paper, but it is not clearly visible. Below is a snap of the photo [![enter image description here](https://i.stack.imgur.com/cRYkY.jpg)](https://i.stack.imgur.com/cRYkY.jpg) In the image above, behind and surrounding the letters and words there is some kind of noise of white and silver dots. How can I completely remove it so that the text borders are clearly visible? I have also inverted the image colours; a sample is given below [![enter image description here](https://i.stack.imgur.com/R4tex.jpg)](https://i.stack.imgur.com/R4tex.jpg) I have tried the following: adaptive thresholding (failed), Otsu method (failed), Gaussian blurring (failed), colour segmentation (failed), connected component removal (failed), denoising (failed because it also removed text), but nothing worked. All of the above methods made it even worse, because their result gave me the image below. The text is also affected when the connected component method is used [![enter image description here](https://i.stack.imgur.com/RT5uk.jpg)](https://i.stack.imgur.com/RT5uk.jpg) Please, help is needed. I also tried ImageMagick morphology editing but failed. How the noise actually appears is shown in the image below: it is a continuous rectangular bar filled with dots surrounding the text ![enter image description here](https://i.stack.imgur.com/ZMHim.jpg)
2020/08/16
[ "https://Stackoverflow.com/questions/63438737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Save first column in an array, too. Insert before your `END` section `{dates[FNR] = $1}` and replace both `$1` with `dates[n]`.
You may use Perl, which you'll find on any Linux distribution. ``` perl -e ' $bad = -99999; while(<ARGV>){ chomp(@cols=split /\h+/); $diff = $cols[1]<0 || $cols[2]<0 ? $bad : $cols[1]-$cols[2]; $old = $result{$cols[0]} || $bad; $result{$cols[0]} = $old < $diff ? $diff : $old; } foreach $k (sort keys %result) {print "$k $result{$k}\n";} ' file*.txt ``` As a one-liner: ``` perl -e '$bad=-99999;while(<ARGV>){chomp(@c=split /\h+/);$diff=$c[1]<0||$c[2]<0?$bad:$c[1]-$c[2];$old=$result{$c[0]}||$bad;$result{$c[0]}=$old<$diff?$diff:$old;}foreach $k(sort keys %result){print "$k $result{$k}\n";}' file* ``` Result: ``` 05Jan2020 4 06Jan2020 -1 07Jan2020 -2 08Jan2020 -99999 09Jan2020 -1 10Jan2020 9 11Jan2020 17 12Jan2020 18 ```
58,771,456
I am installing the Psycopg2 connector but I am unable to install it. How do I install it successfully? I am using this command line to install: ``` pip install psycopg2 (test) C:\Users\Shree.Shree-PC\Desktop\projects\purchasepo>pip install psycopg2 Collecting psycopg2 Using cached https://files.pythonhosted.org/packages/84/d7/6a93c99b5ba4d4d22daa3928b983cec66df4536ca50b22ce5dcac65e4e71/psycopg2-2.8.4.tar .gz ERROR: Command errored out with exit status 1: command: 'c:\users\shree.shree-pc\envs\test\scripts\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Shr ee.Shree-PC\\AppData\\Local\\Temp\\pip-install-_bgz1zs9\\psycopg2\\setup.py'"'"'; __file__='"'"'C:\\Users\\Shree.Shree-PC\\AppData\\Local\\T emp\\pip-install-_bgz1zs9\\psycopg2\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip- install-_bgz1zs9\psycopg2\pip-egg-info' cwd: C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\ Complete output (23 lines): running egg_info creating C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info writing C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\PKG-INFO writing dependency_links to C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\depe ndency_links.txt writing top-level names to C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\top_l evel.txt writing manifest file 'C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\SOURCES.t xt' Error: pg_config executable not found. pg_config is required to build psycopg2 from source. 
Please add the directory containing pg_config to the $PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. For further information please check the 'doc/src/install.rst' file (also at <http://initd.org/psycopg/docs/install.html>). ---------------------------------------- ``` ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
2019/11/08
[ "https://Stackoverflow.com/questions/58771456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10123621/" ]
It was such a dumb solution; it took me hours to find this: when I get the file from DocumentPicker I had to add the type of the file, because DocumentPicker returns an odd type called "success". When I changed it to 'image/jpeg' it worked :D It's not a solution at all, because I will need to find a way to know what type of file each file a user chooses is; anyway, this code works c: ```js let response = await DocumentPicker.getDocumentAsync({type: 'image/jpeg'}) response.type = 'image/jpeg' // <- the important line const data = new FormData(); data.append('file', response); axios.post('http://192.168.0.3:8000/api/file', data , {headers: { 'Content-type': 'application/x-www-form-urlencoded' }} ) .then(res => { console.log("gooosh", res.data) }) .catch(error => { console.log("error", error, JSON.stringify(error)) }); ```
You should try to modify the content type to: ``` fetch("http://192.168.0.3:8000/api/file", { method: "POST", headers:{ 'Content-Type': 'multipart/form-data', }, body: data }) ``` As for `x-www-form-urlencoded`, fetch does not build that body for you; you have to encode it yourself. You can see this [answer](https://stackoverflow.com/questions/35325370/post-a-x-www-form-urlencoded-request-from-react-native).
1,817,183
I'm trying to learn the super() function in Python. I thought I had a grasp of it until I came across this example (2.6) and found myself stuck. [http://www.cafepy.com/article/python\_attributes\_and\_methods/python\_attributes\_and\_methods.html#super-with-classmethod-example](https://web.archive.org/web/20170820065901/http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#super-with-classmethod-example) ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "test.py", line 9, in do_something do_something = classmethod(do_something) TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) >>> ``` It wasn't what I expected when I read this line right before the example: > If we're using a class method, we don't have an instance to call super with. Fortunately for us, super works even with a type as the second argument. --- The type can be passed directly to super as shown below. Which is exactly what Python tells me is not possible, by saying that `do_something()` should be called with an instance of B.
2009/11/29
[ "https://Stackoverflow.com/questions/1817183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169854/" ]
Sometimes texts have to be read more for the flavor of the idea rather than for the details. This is one of those cases. In the [linked page](http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#a-super-solution), Examples 2.5, 2.6 and 2.7 should all use one method, `do_your_stuff`. (That is, `do_something` should be changed to `do_your_stuff`.) In addition, as [Ned Deily pointed out](https://stackoverflow.com/a/1817242/190597), `A.do_your_stuff` has to be a class method. ``` class A(object): @classmethod def do_your_stuff(cls): print 'This is A' class B(A): @classmethod def do_your_stuff(cls): super(B, cls).do_your_stuff() B.do_your_stuff() ``` `super(B, cls).do_your_stuff` returns a *bound* method (see [footnote 2](http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#a-super-solution)). Since `cls` was passed as the second argument to `super()`, it is `cls` that gets bound to the returned method. In other words, `cls` gets passed as the first argument to the method `do_your_stuff()` of class A. To reiterate: `super(B, cls).do_your_stuff()` causes `A`'s `do_your_stuff` method to be called with `cls` passed as the first argument. In order for that to work, `A`'s `do_your_stuff` has to be a class method. The linked page doesn't mention that, but that is definitively the case. PS. `do_something = classmethod(do_something)` is the old way of making a classmethod. The new(er) way is to use the @classmethod decorator. --- Note that `super(B, cls)` can not be replaced by `super(cls, cls)`. Doing so could lead to infinite loops. 
For example, ``` class A(object): @classmethod def do_your_stuff(cls): print('This is A') class B(A): @classmethod def do_your_stuff(cls): print('This is B') # super(B, cls).do_your_stuff() # CORRECT super(cls, cls).do_your_stuff() # WRONG class C(B): @classmethod def do_your_stuff(cls): print('This is C') # super(C, cls).do_your_stuff() # CORRECT super(cls, cls).do_your_stuff() # WRONG C.do_your_stuff() ``` will raise `RuntimeError: maximum recursion depth exceeded while calling a Python object`. If `cls` is `C`, then `super(cls, cls)` searches `C.mro()` for the class that comes after `C`. ``` In [161]: C.mro() Out[161]: [__main__.C, __main__.B, __main__.A, object] ``` Since that class is `B`, when `cls` is `C`, `super(cls, cls).do_your_stuff()` **always** calls `B.do_your_stuff`. Since `super(cls, cls).do_your_stuff()` is called inside `B.do_your_stuff`, you end up calling `B.do_your_stuff` in an infinite loop. In Python3, the [0-argument form of `super`](https://docs.python.org/3/library/functions.html#super) was added so `super(B, cls)` could be replaced by `super()`, and Python3 will figure out from context that `super()` in the definition of `class B` should be equivalent to `super(B, cls)`. But in no circumstance is `super(cls, cls)` (or for similar reasons, `super(type(self), self)`) ever correct.
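In Python 3, the zero-argument form of `super()` sidesteps the `super(cls, cls)` trap described above entirely, because it always compiles to `super(<enclosing class>, cls)` regardless of which subclass the call starts from. A minimal sketch (returning strings instead of printing, so the result is easy to check):

```python
class A:
    @classmethod
    def do_your_stuff(cls):
        return "A sees " + cls.__name__

class B(A):
    @classmethod
    def do_your_stuff(cls):
        # Zero-argument super() compiles to super(B, cls): the MRO search
        # starts after B even when cls is C, so there is no infinite loop.
        return super().do_your_stuff()

class C(B):
    @classmethod
    def do_your_stuff(cls):
        return super().do_your_stuff()

print(C.do_your_stuff())  # A sees C
```

Note that `cls` is still `C` all the way up the chain; only the starting point of the MRO search changes at each level, which is exactly what `super(cls, cls)` gets wrong.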
The example from the web page seems to work as published. Did you create a `do_something` method for the superclass as well but not make it into a classmethod? Something like this will give you that error: ``` >>> class A(object): ... def do_something(cls): ... print cls ... # do_something = classmethod(do_something) ... >>> class B(A): ... def do_something(cls): ... super(B, cls).do_something() ... do_something = classmethod(do_something) ... >>> B().do_something() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in do_something TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) ```
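For contrast, here is a sketch of the same hierarchy with the commented-out `classmethod` line restored in the superclass (and returning the class name instead of printing, so it also runs under Python 3). With `A.do_something` made a classmethod, the super call succeeds:

```python
class A(object):
    def do_something(cls):
        return cls.__name__
    do_something = classmethod(do_something)  # the line that was commented out

class B(A):
    def do_something(cls):
        # super(B, cls) finds A's classmethod, already bound to cls
        return super(B, cls).do_something()
    do_something = classmethod(do_something)

print(B().do_something())  # B
```

Because both methods are classmethods, `A.do_something` receives the class `B` as `cls`, and no instance is required at all: `B.do_something()` works just as well as `B().do_something()`.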
1,817,183
I'm trying to learn the super() function in Python. I thought I had a grasp of it until I came across this example (2.6) and found myself stuck. [http://www.cafepy.com/article/python\_attributes\_and\_methods/python\_attributes\_and\_methods.html#super-with-classmethod-example](https://web.archive.org/web/20170820065901/http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#super-with-classmethod-example) ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "test.py", line 9, in do_something do_something = classmethod(do_something) TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) >>> ``` It wasn't what I expected when I read this line right before the example: > If we're using a class method, we don't have an instance to call super with. Fortunately for us, super works even with a type as the second argument. --- The type can be passed directly to super as shown below. Which is exactly what Python tells me is not possible, by saying that `do_something()` should be called with an instance of B.
2009/11/29
[ "https://Stackoverflow.com/questions/1817183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169854/" ]
The example from the web page seems to work as published. Did you create a `do_something` method for the superclass as well but not make it into a classmethod? Something like this will give you that error: ``` >>> class A(object): ... def do_something(cls): ... print cls ... # do_something = classmethod(do_something) ... >>> class B(A): ... def do_something(cls): ... super(B, cls).do_something() ... do_something = classmethod(do_something) ... >>> B().do_something() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in do_something TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) ```
I think I've understood the point now thanks to this beautiful site and lovely community. If you don't mind, please correct me if I'm wrong on classmethods (which I am now trying to understand fully): ``` # EXAMPLE #1 >>> class A(object): ... def foo(cls): ... print cls ... foo = classmethod(foo) ... >>> a = A() >>> a.foo() # THIS IS THE CLASS ITSELF (__class__) class '__main__.A' # EXAMPLE #2 # SAME AS ABOVE (With new @decorator) >>> class A(object): ... @classmethod ... def foo(cls): ... print cls ... >>> a = A() >>> a.foo() class '__main__.A' # EXAMPLE #3 >>> class B(object): ... def foo(self): ... print self ... >>> b = B() >>> b.foo() # THIS IS THE INSTANCE WITH ADDRESS (self) __main__.B object at 0xb747a8ec >>> ``` I hope this illustration shows it.