| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
71,398,447
|
In the code below, 21 is the hour, 53 is the minute, and 10 is the wait time.
In this code I want to send a message in a loop repeatedly, but I failed. I also tried a `for` loop, but it is not working. Does anybody know how to send 100 messages on WhatsApp using Python? Please help me.
```
import pywhatkit
from flask import Flask
while 1:
    pywhatkit.sendwhatmsg("+9198xxxxxxxx", "Hi", 21, 53, 10)
```
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71398447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18126137/"
] |
Use
```
final response = await http.post(Uri.parse(url), body: {
"fecha_inicio": _fechaInicioBBDD,
"fecha_fin": _fechaFinalBBDD,
"latitud": _controllerLatitud.text,
"longitud": _controllerLongitud.text,
"calle": _controllerDireccion.text,
"descripcion": _controllerDescripcion.text,
"tipo_aviso": tipoAviso,
"activar_x_antes": _horas.toString()
});
```
that is, `_horas.toString()` instead of `double.parse(_horas.toString())`.
|
The documentation for [`http.post`](https://pub.dev/documentation/http/latest/http/post.html) states:
> `body` sets the body of the request. It can be a `String`, a `List<int>` or a `Map<String, String>`.
Since you are passing a `Map`, all keys and values in your `Map` are required to be `String`s. You should not be converting a `String` to a `double` as one of the `Map` values.
|
71,398,447
|
In the code below, 21 is the hour, 53 is the minute, and 10 is the wait time.
In this code I want to send a message in a loop repeatedly, but I failed. I also tried a `for` loop, but it is not working. Does anybody know how to send 100 messages on WhatsApp using Python? Please help me.
```
import pywhatkit
from flask import Flask
while 1:
    pywhatkit.sendwhatmsg("+9198xxxxxxxx", "Hi", 21, 53, 10)
```
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71398447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18126137/"
] |
When you are doing:
```
double.parse(_horas.toString())
```
this is converting `_horas` to a `String` and then *again* to a `double`.
Instead, only call `toString()`:
```
"activar_x_antes": _horas.toString()
```
|
The documentation for [`http.post`](https://pub.dev/documentation/http/latest/http/post.html) states:
> `body` sets the body of the request. It can be a `String`, a `List<int>` or a `Map<String, String>`.
Since you are passing a `Map`, all keys and values in your `Map` are required to be `String`s. You should not be converting a `String` to a `double` as one of the `Map` values.
|
71,398,447
|
In the code below, 21 is the hour, 53 is the minute, and 10 is the wait time.
In this code I want to send a message in a loop repeatedly, but I failed. I also tried a `for` loop, but it is not working. Does anybody know how to send 100 messages on WhatsApp using Python? Please help me.
```
import pywhatkit
from flask import Flask
while 1:
    pywhatkit.sendwhatmsg("+9198xxxxxxxx", "Hi", 21, 53, 10)
```
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71398447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18126137/"
] |
Let's take a closer look at why this is happening.
You have the value `_horas = 4.0`, which is a `double`, and you are trying to convert it to a `String` to send it to the server. However, you are using `double.parse`.
Let's look at the [parse](https://api.flutter.dev/flutter/dart-core/double/parse.html) docs. They say: "Parse source as a double literal and return its value." In other words, you are parsing the number's `String` representation back into a `double` again!
What you can do is either
```
"activar_x_antes": _horas.toString()
```
or
```
"activar_x_antes": '$_horas'
```
I hope the explanation helps.
|
The documentation for [`http.post`](https://pub.dev/documentation/http/latest/http/post.html) states:
> `body` sets the body of the request. It can be a `String`, a `List<int>` or a `Map<String, String>`.
Since you are passing a `Map`, all keys and values in your `Map` are required to be `String`s. You should not be converting a `String` to a `double` as one of the `Map` values.
|
53,913,303
|
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:
```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```
Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53913303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10829002/"
] |
Switch cases can be implemented using dictionary mapping in Python like so:
```
def Choice(i):
switcher = {1: subdomain, 2: reverseLookup}
func = switcher.get(i, 'Invalid')
if func != 'Invalid':
print(func(host))
```
The dictionary `switcher` maps the input of `Choice` to the right function. The default case is handled by `switcher.get(i, 'Invalid')`: if this returns `'Invalid'`, you can show the user an error message or ignore the input.
The call goes like this:
```
Choice(2) # For example
```
Remember to set value of `host` prior to calling `Choice`.
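A self-contained sketch of this approach, with stub `subdomain` and `reverseLookup` functions standing in for the real ones from the question (their bodies here are invented purely for illustration):

```python
def subdomain(host):
    # stub: the real function would enumerate subdomains
    return f"subdomains of {host}"

def reverseLookup(host):
    # stub: the real function would do a reverse DNS lookup
    return f"reverse lookup of {host}"

host = "example.com"

def Choice(i):
    # store the functions themselves; they are only called after the lookup
    switcher = {1: subdomain, 2: reverseLookup}
    func = switcher.get(i, 'Invalid')
    if func != 'Invalid':
        print(func(host))

Choice(2)  # prints: reverse lookup of example.com
```

The key point is that the dictionary holds function objects, not call results, so nothing runs until the selected function is invoked with `host`.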
|
**try this ....**
```
def Choice(i):
switcher = {
1: subdomain(host),
2: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda:'Invalid')
print(func())
if __name__ == "__main__":
argument=0
print Choice(argument)
```
|
53,913,303
|
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:
```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```
Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53913303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10829002/"
] |
I think the problem is that the first two functions get called when the `switcher` dictionary is created. You can avoid that by making all of the values `lambda` function definitions, as shown below:
```
def choice(i):
switcher = {
1: lambda: subdomain(host),
2: lambda: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda: 'Invalid')
print(func())
```
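As a quick check, here is the same pattern with stub implementations of `subdomain` and `reverseLookup` (the stub bodies are invented for illustration):

```python
def subdomain(host):
    return f"subdomains of {host}"  # stub

def reverseLookup(host):
    return f"reverse lookup of {host}"  # stub

host = "example.com"

def choice(i):
    # every value is a zero-argument lambda, so nothing runs while the dict is built
    switcher = {
        1: lambda: subdomain(host),
        2: lambda: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())

choice(1)   # prints: subdomains of example.com
choice(99)  # prints: Invalid
```

Only the branch selected by `.get()` is ever executed, which is exactly what a switch statement would do.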
|
**try this ....**
```
def Choice(i):
switcher = {
1: subdomain(host),
2: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda:'Invalid')
print(func())
if __name__ == "__main__":
argument=0
print Choice(argument)
```
|
53,913,303
|
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:
```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```
Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53913303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10829002/"
] |
There is one option that is obviously correct:
```
def choice(i, host): # you should normally pass all variables used in the function
if i == 1:
print(subdomain(host))
elif i == 2:
print(reverseLookup(host))
elif i == 3:
print('two')
else:
print('Invalid')
```
If you're using a dictionary, it is important that all right-hand sides have the same type, i.e. a function taking zero arguments. I prefer to put the dict where it's used when I use a dict to emulate a switch statement:
```
def choice(i, host):
print({
1: lambda: subdomain(host),
2: lambda: reverseLookup(host),
3: lambda: 'two',
}.get(i, lambda: 'Invalid')()) # note the () at the end, which calls the zero-argument function returned from .get(..)
```
|
**try this ....**
```
def Choice(i):
switcher = {
1: subdomain(host),
2: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda:'Invalid')
print(func())
if __name__ == "__main__":
argument=0
print Choice(argument)
```
|
53,913,303
|
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:
```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```
Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53913303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10829002/"
] |
I think the problem is that the first two functions get called when the `switcher` dictionary is created. You can avoid that by making all of the values `lambda` function definitions, as shown below:
```
def choice(i):
switcher = {
1: lambda: subdomain(host),
2: lambda: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda: 'Invalid')
print(func())
```
|
Switch cases can be implemented using dictionary mapping in Python like so:
```
def Choice(i):
switcher = {1: subdomain, 2: reverseLookup}
func = switcher.get(i, 'Invalid')
if func != 'Invalid':
print(func(host))
```
The dictionary `switcher` maps the input of `Choice` to the right function. The default case is handled by `switcher.get(i, 'Invalid')`: if this returns `'Invalid'`, you can show the user an error message or ignore the input.
The call goes like this:
```
Choice(2) # For example
```
Remember to set value of `host` prior to calling `Choice`.
|
53,913,303
|
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:
```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```
Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53913303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10829002/"
] |
I think the problem is that the first two functions get called when the `switcher` dictionary is created. You can avoid that by making all of the values `lambda` function definitions, as shown below:
```
def choice(i):
switcher = {
1: lambda: subdomain(host),
2: lambda: reverseLookup(host),
3: lambda: 'two'
}
func = switcher.get(i, lambda: 'Invalid')
print(func())
```
|
There is one option that is obviously correct:
```
def choice(i, host): # you should normally pass all variables used in the function
if i == 1:
print(subdomain(host))
elif i == 2:
print(reverseLookup(host))
elif i == 3:
print('two')
else:
print('Invalid')
```
If you're using a dictionary, it is important that all right-hand sides have the same type, i.e. a function taking zero arguments. I prefer to put the dict where it's used when I use a dict to emulate a switch statement:
```
def choice(i, host):
print({
1: lambda: subdomain(host),
2: lambda: reverseLookup(host),
3: lambda: 'two',
}.get(i, lambda: 'Invalid')()) # note the () at the end, which calls the zero-argument function returned from .get(..)
```
|
40,527,051
|
I'm trying to make my program allow the user to input vectors of the form (x,y,z) using Python's built-in `input()` function.
If entered directly in Python, without using the `input()` function, each vector can be indexed separately. For example,
```
>>> z = (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
(4, 5, 6)
```
But when I try to use the input function I run into the following problem.
```
>>> z = input('What are the vectors? ')
What are the vectors? (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
'1'
```
Why does using the input function turn it into a string and is there a way around this?
Thanks
|
2016/11/10
|
[
"https://Stackoverflow.com/questions/40527051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7141092/"
] |
In Python 3, `input` always returns a string, so you need to convert it. For this type of input I recommend using `literal_eval` from the `ast` module:
```
import ast
vectors = ast.literal_eval('(1,2,3), (4,5,6), (7,8,9)')
vectors[1] #(4, 5, 6)
```
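Hooked up to `input()`, that looks like the sketch below (`read_vectors` is a helper name made up for this example; `literal_eval` accepts only Python literals, which is what makes it safer than `eval`):

```python
import ast

def read_vectors(text):
    # text is what the user would type, e.g. "(1,2,3), (4,5,6), (7,8,9)"
    return ast.literal_eval(text)

# in a real program this would be: z = read_vectors(input('What are the vectors? '))
z = read_vectors("(1,2,3), (4,5,6), (7,8,9)")
print(z[1])  # (4, 5, 6)
```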
|
In your example, `z` is just a string as input by the user.
This string is: `"(1,2,3), (4,5,6), (7,8,9)"` so the second element, `z[1]` is just giving you `"1"`.
If you want an actual vector object you would have to write code to parse the string input by the user. For example, you could delimit based on parentheses and convert the numbers individually.
Hope this helps!
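A minimal sketch of that manual parsing, assuming well-formed `(x,y,z)` groups (it uses a regular expression to delimit on the parentheses, then converts the numbers individually as suggested above):

```python
import re

s = "(1,2,3), (4,5,6), (7,8,9)"
# grab each parenthesised group, then split its contents on commas
vectors = tuple(
    tuple(int(n) for n in group.split(','))
    for group in re.findall(r'\(([^)]*)\)', s)
)
print(vectors[1])  # (4, 5, 6)
```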
|
40,527,051
|
I'm trying to make my program allow the user to input vectors of the form (x,y,z) using Python's built-in `input()` function.
If entered directly in Python, without using the `input()` function, each vector can be indexed separately. For example,
```
>>> z = (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
(4, 5, 6)
```
But when I try to use the input function I run into the following problem.
```
>>> z = input('What are the vectors? ')
What are the vectors? (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
'1'
```
Why does using the input function turn it into a string and is there a way around this?
Thanks
|
2016/11/10
|
[
"https://Stackoverflow.com/questions/40527051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7141092/"
] |
In Python 3, `input` always returns a string, so you need to convert it. For this type of input I recommend using `literal_eval` from the `ast` module:
```
import ast
vectors = ast.literal_eval('(1,2,3), (4,5,6), (7,8,9)')
vectors[1] #(4, 5, 6)
```
|
You can use `eval` with the `input` function to get the same result as in the first example:
```
eval(input('Please enter the vector'))
```
|
40,527,051
|
I'm trying to make my program allow the user to input vectors of the form (x,y,z) using Python's built-in `input()` function.
If entered directly in Python, without using the `input()` function, each vector can be indexed separately. For example,
```
>>> z = (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
(4, 5, 6)
```
But when I try to use the input function I run into the following problem.
```
>>> z = input('What are the vectors? ')
What are the vectors? (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
'1'
```
Why does using the input function turn it into a string and is there a way around this?
Thanks
|
2016/11/10
|
[
"https://Stackoverflow.com/questions/40527051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7141092/"
] |
In your example, `z` is just a string as input by the user.
This string is: `"(1,2,3), (4,5,6), (7,8,9)"` so the second element, `z[1]` is just giving you `"1"`.
If you want an actual vector object you would have to write code to parse the string input by the user. For example, you could delimit based on parentheses and convert the numbers individually.
Hope this helps!
|
You can use `eval` with the `input` function to get the same result as in the first example:
```
eval(input('Please enter the vector'))
```
|
44,155,564
|
When trying to plot a graph with pyplot I am running the following code:
```
from matplotlib import pyplot as plt
x = [6, 5, 4]
y = [3, 4, 5]
plt.plot(x, y)
plt.show()
```
This is returning the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-59955f73b463> in <module>()
4 y = [3, 4, 5]
5
----> 6 plt.plot(x, y)
7 plt.show()
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in plot(*args, **kwargs)
3304 @_autogen_docstring(Axes.plot)
3305 def plot(*args, **kwargs):
-> 3306 ax = gca()
3307 # Deprecated: allow callers to override the hold state
3308 # by passing hold=True|False
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in gca(**kwargs)
948 matplotlib.figure.Figure.gca : The figure's gca method.
949 """
--> 950 return gcf().gca(**kwargs)
951
952 # More ways of creating axes:
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in gca(self, **kwargs)
1367
1368 # no axes found, so create one which spans the figure
-> 1369 return self.add_subplot(1, 1, 1, **kwargs)
1370
1371 def sca(self, a):
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in add_subplot(self, *args, **kwargs)
1019 self._axstack.remove(ax)
1020
-> 1021 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1022
1023 self._axstack.add(key, a)
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_subplots.pyc in __init__(self, fig, *args, **kwargs)
71
72 # _axes_class is set in the subplot_class_factory
---> 73 self._axes_class.__init__(self, fig, self.figbox, **kwargs)
74
75 def __reduce__(self):
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in __init__(self, fig, rect, facecolor, frameon, sharex, sharey, label, xscale, yscale, axisbg, **kwargs)
527
528 # this call may differ for non-sep axes, e.g., polar
--> 529 self._init_axis()
530 if axisbg is not None and facecolor is not None:
531 raise TypeError('Both axisbg and facecolor are not None. '
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in _init_axis(self)
620 def _init_axis(self):
621 "move this out of __init__ because non-separable axes don't use it"
--> 622 self.xaxis = maxis.XAxis(self)
623 self.spines['bottom'].register_axis(self.xaxis)
624 self.spines['top'].register_axis(self.xaxis)
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in __init__(self, axes, pickradius)
674 self._minor_tick_kw = dict()
675
--> 676 self.cla()
677 self._set_scale('linear')
678
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in cla(self)
758 self._set_artist_props(self.label)
759
--> 760 self.reset_ticks()
761
762 self.converter = None
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in reset_ticks(self)
769 # define 1 so properties set on ticks will be copied as they
770 # grow
--> 771 cbook.popall(self.majorTicks)
772 cbook.popall(self.minorTicks)
773
AttributeError: 'module' object has no attribute 'popall'
```
My matplotlib has always worked fine, but this error popped up after I reinstalled it using homebrew and pip yesterday. I am running the following:
```
OS: Mac OS Sierra 10.12.5
Python: 2.7.13
Matplotlib: 2.0.2
```
I have since tried a complete reinstall of both matplotlib and Python, but I am still getting the same error. I have also tried multiple editors (Jupyter, Sublime, Terminal).
Any help would be very appreciated!
|
2017/05/24
|
[
"https://Stackoverflow.com/questions/44155564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8058705/"
] |
I had this exact error and in my case it turned out to be that both `pip` and `conda` had installed copies of `matplotlib`. In a 'mixed' environment with `pip` used to fill gaps in Anaconda, `pip` can automatically install upgrades to (already-installed) dependencies of the package you asked to install, creating duplication.
To test for this:
```
$ conda list matplotlib
# packages in environment at /home/ec2-user/anaconda3:
#
matplotlib 2.0.2 np113py35_0
matplotlib 2.1.1 <pip>
```
Problem! Fix:
```
$ pip uninstall matplotlib
```
Probably a good idea to force `matplotlib` upgrade to the version `pip` wanted:
```
$ conda install matplotlib=2.1.1
```
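A quick way to see which copy of `matplotlib` a given interpreter would actually import (a diagnostic sketch; it works even when the package is missing):

```python
import importlib.util

# locate the matplotlib this interpreter would import, without importing it
spec = importlib.util.find_spec("matplotlib")
print(spec.origin if spec else "matplotlib not found on this interpreter")
```

Running this under each interpreter on your `PATH` shows immediately whether `pip` and `conda` are resolving to different installs.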
|
I have solved my problem although I am not entirely sure why this has solved it.
I used `pip uninstall matplotlib`, to remove the python install, and also updated my `~/.zshrc` and `~/.bash_profile` paths to contain:
HomeBrew:
`export PATH=/usr/local/bin:$PATH`
Python:
`export PATH=/usr/local/share/python:$PATH`
This has solved the issue. I am guessing it was caused by having two installs of matplotlib, and by having the path in `~/.bash_profile` but not in `~/.zshrc`.
|
44,155,564
|
When trying to plot a graph with pyplot I am running the following code:
```
from matplotlib import pyplot as plt
x = [6, 5, 4]
y = [3, 4, 5]
plt.plot(x, y)
plt.show()
```
This is returning the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-59955f73b463> in <module>()
4 y = [3, 4, 5]
5
----> 6 plt.plot(x, y)
7 plt.show()
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in plot(*args, **kwargs)
3304 @_autogen_docstring(Axes.plot)
3305 def plot(*args, **kwargs):
-> 3306 ax = gca()
3307 # Deprecated: allow callers to override the hold state
3308 # by passing hold=True|False
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in gca(**kwargs)
948 matplotlib.figure.Figure.gca : The figure's gca method.
949 """
--> 950 return gcf().gca(**kwargs)
951
952 # More ways of creating axes:
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in gca(self, **kwargs)
1367
1368 # no axes found, so create one which spans the figure
-> 1369 return self.add_subplot(1, 1, 1, **kwargs)
1370
1371 def sca(self, a):
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in add_subplot(self, *args, **kwargs)
1019 self._axstack.remove(ax)
1020
-> 1021 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1022
1023 self._axstack.add(key, a)
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_subplots.pyc in __init__(self, fig, *args, **kwargs)
71
72 # _axes_class is set in the subplot_class_factory
---> 73 self._axes_class.__init__(self, fig, self.figbox, **kwargs)
74
75 def __reduce__(self):
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in __init__(self, fig, rect, facecolor, frameon, sharex, sharey, label, xscale, yscale, axisbg, **kwargs)
527
528 # this call may differ for non-sep axes, e.g., polar
--> 529 self._init_axis()
530 if axisbg is not None and facecolor is not None:
531 raise TypeError('Both axisbg and facecolor are not None. '
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in _init_axis(self)
620 def _init_axis(self):
621 "move this out of __init__ because non-separable axes don't use it"
--> 622 self.xaxis = maxis.XAxis(self)
623 self.spines['bottom'].register_axis(self.xaxis)
624 self.spines['top'].register_axis(self.xaxis)
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in __init__(self, axes, pickradius)
674 self._minor_tick_kw = dict()
675
--> 676 self.cla()
677 self._set_scale('linear')
678
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in cla(self)
758 self._set_artist_props(self.label)
759
--> 760 self.reset_ticks()
761
762 self.converter = None
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in reset_ticks(self)
769 # define 1 so properties set on ticks will be copied as they
770 # grow
--> 771 cbook.popall(self.majorTicks)
772 cbook.popall(self.minorTicks)
773
AttributeError: 'module' object has no attribute 'popall'
```
My matplotlib has always worked fine, but this error popped up after I reinstalled it using homebrew and pip yesterday. I am running the following:
```
OS: Mac OS Sierra 10.12.5
Python: 2.7.13
Matplotlib: 2.0.2
```
I have since tried a complete reinstall of both matplotlib and Python, but I am still getting the same error. I have also tried multiple editors (Jupyter, Sublime, Terminal).
Any help would be very appreciated!
|
2017/05/24
|
[
"https://Stackoverflow.com/questions/44155564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8058705/"
] |
I have solved my problem although I am not entirely sure why this has solved it.
I used `pip uninstall matplotlib`, to remove the python install, and also updated my `~/.zshrc` and `~/.bash_profile` paths to contain:
HomeBrew:
`export PATH=/usr/local/bin:$PATH`
Python:
`export PATH=/usr/local/share/python:$PATH`
This has solved the issue. I am guessing it was caused by having two installs of matplotlib, and by having the path in `~/.bash_profile` but not in `~/.zshrc`.
|
I had a similar kind of problem. What I did was upgrade my matplotlib using
```
pip install -U matplotlib
```
and then reopened Anaconda to see it working.
|
44,155,564
|
When trying to plot a graph with pyplot I am running the following code:
```
from matplotlib import pyplot as plt
x = [6, 5, 4]
y = [3, 4, 5]
plt.plot(x, y)
plt.show()
```
This is returning the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-59955f73b463> in <module>()
4 y = [3, 4, 5]
5
----> 6 plt.plot(x, y)
7 plt.show()
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in plot(*args, **kwargs)
3304 @_autogen_docstring(Axes.plot)
3305 def plot(*args, **kwargs):
-> 3306 ax = gca()
3307 # Deprecated: allow callers to override the hold state
3308 # by passing hold=True|False
/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in gca(**kwargs)
948 matplotlib.figure.Figure.gca : The figure's gca method.
949 """
--> 950 return gcf().gca(**kwargs)
951
952 # More ways of creating axes:
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in gca(self, **kwargs)
1367
1368 # no axes found, so create one which spans the figure
-> 1369 return self.add_subplot(1, 1, 1, **kwargs)
1370
1371 def sca(self, a):
/usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in add_subplot(self, *args, **kwargs)
1019 self._axstack.remove(ax)
1020
-> 1021 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1022
1023 self._axstack.add(key, a)
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_subplots.pyc in __init__(self, fig, *args, **kwargs)
71
72 # _axes_class is set in the subplot_class_factory
---> 73 self._axes_class.__init__(self, fig, self.figbox, **kwargs)
74
75 def __reduce__(self):
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in __init__(self, fig, rect, facecolor, frameon, sharex, sharey, label, xscale, yscale, axisbg, **kwargs)
527
528 # this call may differ for non-sep axes, e.g., polar
--> 529 self._init_axis()
530 if axisbg is not None and facecolor is not None:
531 raise TypeError('Both axisbg and facecolor are not None. '
/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in _init_axis(self)
620 def _init_axis(self):
621 "move this out of __init__ because non-separable axes don't use it"
--> 622 self.xaxis = maxis.XAxis(self)
623 self.spines['bottom'].register_axis(self.xaxis)
624 self.spines['top'].register_axis(self.xaxis)
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in __init__(self, axes, pickradius)
674 self._minor_tick_kw = dict()
675
--> 676 self.cla()
677 self._set_scale('linear')
678
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in cla(self)
758 self._set_artist_props(self.label)
759
--> 760 self.reset_ticks()
761
762 self.converter = None
/usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in reset_ticks(self)
769 # define 1 so properties set on ticks will be copied as they
770 # grow
--> 771 cbook.popall(self.majorTicks)
772 cbook.popall(self.minorTicks)
773
AttributeError: 'module' object has no attribute 'popall'
```
My matplotlib has always worked fine, but this error popped up after I reinstalled it using homebrew and pip yesterday. I am running the following:
```
OS: Mac OS Sierra 10.12.5
Python: 2.7.13
Matplotlib: 2.0.2
```
I have since tried a complete reinstall of both matplotlib and Python, but I am still getting the same error. I have also tried multiple editors (Jupyter, Sublime, Terminal).
Any help would be very appreciated!
|
2017/05/24
|
[
"https://Stackoverflow.com/questions/44155564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8058705/"
] |
I had this exact error and in my case it turned out to be that both `pip` and `conda` had installed copies of `matplotlib`. In a 'mixed' environment with `pip` used to fill gaps in Anaconda, `pip` can automatically install upgrades to (already-installed) dependencies of the package you asked to install, creating duplication.
To test for this:
```
$ conda list matplotlib
# packages in environment at /home/ec2-user/anaconda3:
#
matplotlib 2.0.2 np113py35_0
matplotlib 2.1.1 <pip>
```
Problem! Fix:
```
$ pip uninstall matplotlib
```
Probably a good idea to force `matplotlib` upgrade to the version `pip` wanted:
```
$ conda install matplotlib=2.1.1
```
|
I had a similar kind of problem. What I did was upgrade my matplotlib using
```
pip install -U matplotlib
```
and then reopened Anaconda to see it working.
|
18,153,913
|
I used `python -mtimeit` to test and found out that `from Module import Sth` takes more time than `import Module`.
E.g.
```
$ python -mtimeit "import math; math.sqrt(4)"
1000000 loops, best of 3: 0.618 usec per loop
$ python -mtimeit "from math import sqrt; sqrt(4)"
1000000 loops, best of 3: 1.11 usec per loop
```
The same holds for other cases. Could someone please explain the rationale behind this? Thank you!
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18153913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2325350/"
] |
There are two issues here. The first step is to figure out which part is faster: the import statement, or the call.
So, let's do that:
```
$ python -mtimeit 'import math'
1000000 loops, best of 3: 0.555 usec per loop
$ python -mtimeit 'from math import sqrt'
1000000 loops, best of 3: 1.22 usec per loop
$ python -mtimeit -s 'from math import sqrt' 'sqrt(10)'
10000000 loops, best of 3: 0.0879 usec per loop
$ python -mtimeit -s 'import math' 'math.sqrt(10)'
10000000 loops, best of 3: 0.122 usec per loop
```
(That's with Apple CPython 2.7.2 64-bit on OS X 10.6.4 on my laptop. But python.org 3.4 dev on the same laptop and 3.3.1 on a linux box give roughly similar results. With PyPy, the smarter caching makes it impossible to test, since everything finishes in 1ns… Anyway, I think these results are probably about as portable as microbenchmarks ever can be.)
So it turns out that the `import` statement is more than twice as fast; after that, calling the function is a little slower, but not nearly enough to make up for the cheaper `import`. (Keep in mind that your test was doing an `import` for each call. In real-life code, of course, you tend to call things a lot more than once per `import`. So, we're really looking at an edge case that will rarely affect real code. But as long as you keep that in mind, we proceed.)
---
Conceptually, you can understand why the `from … import` statement takes longer: it has more work to do. The first version has to find the module, compile it if necessary, and execute it. The second version has to do all of that, and then *also* extract `sqrt` and insert it into your current module's globals. So, it has to be at least a little slower.
If you look at the bytecode (e.g., by using the [`dis`](http://docs.python.org/2/library/dis.html) module and calling `dis.dis('import math')`), this is exactly the difference. Compare:
```
0 LOAD_CONST 0 (0)
3 LOAD_CONST 1 (None)
6 IMPORT_NAME 0 (math)
9 STORE_NAME 0 (math)
12 LOAD_CONST 1 (None)
15 RETURN_VALUE
```
… to:
```
0 LOAD_CONST 0 (0)
3 LOAD_CONST 1 (('sqrt',))
6 IMPORT_NAME 0 (math)
9 IMPORT_FROM 1 (sqrt)
12 STORE_NAME 1 (sqrt)
15 POP_TOP
16 LOAD_CONST 2 (None)
19 RETURN_VALUE
```
The extra stack manipulation (the `LOAD_CONST` and `POP_TOP`) probably doesn't make much difference, and using a different argument to `STORE_NAME` is unlikely to matter at all… but the `IMPORT_FROM` is a significant extra step.
---
Surprisingly, a quick&dirty attempt to profile the `IMPORT_FROM` code shows that the majority of the cost is actually looking up the appropriate globals to import into. I'm not sure why, but… that implies that importing a whole slew of names should be not much slower than importing just one. And, as you pointed out in a comment, that's exactly what you see. (But don't read too much into that. There are many reasons that `IMPORT_FROM` might have a large constant factor and only a small linear one, and we're not exactly throwing a huge number of names at it.)
---
One last thing: If this ever really does matter in real code, and you want to get the best of both worlds, `import math; sqrt = math.sqrt` is faster than `from math import sqrt`, but gives you the same small speedup to lookup/call time. (But again, I can't imagine any real code where this would matter. The only time you'll ever care how long `sqrt` takes is when you're calling it a billion times, at which point you won't care how long the import takes. Plus, if you really do need to optimize that, create a local scope and bind `sqrt` there to avoid the global lookup entirely.)
|
This is not an answer, but some information. It needed formatting so I didn't include it as a comment. Here is the bytecode for 'from math import sqrt':
```
>>> from math import sqrt
>>> import dis
>>> def f(n): return sqrt(n)
...
>>> dis.dis(f)
1 0 LOAD_GLOBAL 0 (sqrt)
3 LOAD_FAST 0 (n)
6 CALL_FUNCTION 1
9 RETURN_VALUE
```
And for 'import math'
```
>>> import math
>>> import dis
>>> dis.dis(math.sqrt)
>>> def f(n): return math.sqrt(n)
...
>>> dis.dis(f)
1 0 LOAD_GLOBAL 0 (math)
3 LOAD_ATTR 1 (sqrt)
6 LOAD_FAST 0 (n)
9 CALL_FUNCTION 1
12 RETURN_VALUE
```
Interestingly, the faster method has one more instruction.
|
51,481,021
|
My friend told me about [Josephus problem](https://en.wikipedia.org/wiki/Josephus_problem), where you have `41` people sitting in the circle. Person number `1` has a sword, kills person on the right and passes the sword to the next person. This goes on until there is only one person left alive. I came up with this solution in python:
```
print('''There are n people in the circle. You give the knife to one of
them, he stabs person on the right and
gives the knife to the next person. What will be the number of whoever
will be left alive?''')
pplList = []
numOfPeople = int(input('How many people are there in the circle?'))
for i in range(1, (numOfPeople + 1)):
pplList.append(i)
print(pplList)
while len(pplList) > 1:
for i in pplList:
if i % 2 == 0:
del pplList[::i]
print(f'The number of person which survived is {pplList[0]+1}')
break
```
But it only works up to `42` people. What should I do, or how should I change the code so it would work for, for example, `100, 1000` and more people in the circle?
I've looked up Josephus problem and seen different solutions but I'm curious if my answer could be correct after some minor adjustment or should I start from scratch.
|
2018/07/23
|
[
"https://Stackoverflow.com/questions/51481021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9274224/"
] |
I see two serious bugs.
1. I guarantee that `del ppList[::i]` does nothing resembling what you hope it does.
2. When you wrap around the circle, it is important to know if you killed the last person in the list (first in list kills again) or didn't (first person in list dies).
And contrary to your assertion that it works up to 42, it does not work for many smaller numbers. The first that it doesn't work for is 2. (It gives 3 as an answer instead of 1.)
|
The problem is you are not considering the guy in the end if he is not killed. Example, if there are 9 people, after killing 8, 9 has the sword, but you are just starting with 1, instead of 9 in the next loop. As someone mentioned already, it is not working for smaller numbers also. Actually if you look close, you're killing odd numbers in the very first loop, instead of even numbers. which is very wrong.
You can correct your code as followed
```py
while len(pplList )>1:
if len(pplList )%2 == 0:
pplList = pplList [::2] #omitting every second number
elif len(pplList )%2 ==1:
last = pplList [-1] #last one won't be killed
pplList = pplList [:-2:2]
pplList .insert(0,last) # adding the last to the start
```
There are very effective methods to solve the problem other than this method. check [this link](https://medium.com/@sudheernaidu53/the-josephus-problem-36cbf94f3a64) to know more
|
61,975,308
|
I am trying to read the url code using URLLIB. Here is my code:
```
import urllib
url = "https://www.facebook.com/fads0000fass"
r = urllib.request.urlopen(url)
p = r.code
if(p == "HTTP Error 404: Not Found" ):
print("hello")
else:
print("null")
```
The url I am using will show error code 404 but I am not able to read it.
I also tried `if(p == 404)` but I get the same issue.
I Can read other codes i.e. 200, 201 etc.
Can you please help me fix it?
traceback:
```
Traceback (most recent call last):
File "gd.py", line 7, in <module>
r = urllib.request.urlopen(url)
File "/usr/lib64/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python3.7/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib64/python3.7/urllib/request.py", line 641, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python3.7/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib64/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/lib64/python3.7/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
```
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61975308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13288913/"
] |
I'm not sure that's what you're asking.
```
import urllib.request
url = "https://www.facebook.com/fads0000fass"
try:
r = urllib.request.urlopen(url)
p = r.code
except urllib.error.HTTPError:
print("hello")
```
|
In order to reach your if statement, your code needs exception handling. An exception is being raised when you call `urlopen` on line 7. See the first step of your traceback.
```
File "gd.py", line 7, in <module>
r = urllib.request.urlopen(url)
```
The exception happens here, which causes your code to exit, so further statements aren't evaluated. To get past this, you must [handle the exception](https://docs.python.org/3/tutorial/errors.html#handling-exceptions).
```py
import urllib.request
url = "https://www.facebook.com/fads0000fass"
try:
r = urllib.request.urlopen(url)
except urllib.error.HTTPError as e:
# More useful
# print(f"{e.code}: {e.reason}\n\n{e.headers}")
if e.code in [404]:
print("hello")
else:
print("null")
```
---
Going beyond this, if you want something more like your original logic, I'd recommend using the [requests](https://pypi.org/project/requests/) library. I'd actually recommend using requests for all of your HTTP needs whenever possible, it's exceptionally good.
```py
import requests
r = requests.get(url)
p = r.status_code
if r.status_code == 404:
print("hello")
else:
print("null")
```
|
71,902,562
|
I have started the Yolo5 Training with custom data
The command I have used:
```
!python train.py --img 640 --batch-size 32 --epochs 5 --data /content/drive/MyDrive/yolov5_dataset/dataset_Trafic/data.yaml --cfg /content/drive/MyDrive/yolov5/models/yolov5s.yaml --name Model
```
Training started as below & completed:
[](https://i.stack.imgur.com/Iddtf.png)
For resuming/continue for more epoch I have below command
```
!python train.py --img 640 --batch-size 32 --epochs 6 --data /content/drive/MyDrive/yolov5_dataset/dataset_Trafic/data.yaml --weights /content/drive/MyDrive/yolov5/runs/train/Model/weights/best.pt --cache --exist-ok
```
[](https://i.stack.imgur.com/lax4X.png)
But still the Training start from the scratch. How to continue from previous epoch.
Also I tried with resume command
```
!python train.py --epochs 10 --resume
```
but I am getting below error message
[](https://i.stack.imgur.com/4Fb2A.png)
|
2022/04/17
|
[
"https://Stackoverflow.com/questions/71902562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2707200/"
] |
#### Case 1
Here we consider the statement:
```
Animal a1 = func1();
```
The call expression `func1()` is an **rvlalue** of type `Animal`. And from C++17 onwards, due to [mandatory copy elison](https://en.cppreference.com/w/cpp/language/copy_elision#Mandatory_elision_of_copy/move_operations):
>
> Under the following circumstances, the compilers are required to omit the copy and move construction of class objects, even if the copy/move constructor and the destructor have observable side-effects. The objects are constructed directly into the storage where they would otherwise be copied/moved to. The **copy/move constructors need not be present or accessible**:
>
>
>
>
> * In a return statement, when the operand is a prvalue of the same class type (ignoring cv-qualification) as the function return type:
>
>
>
That is, the object is constructed directly into the storage where they would otherwise be copied/moved to. That is, in this case(for C++17), there is no need of a copy/move constructor to be available. And so this statement works.
#### Case 2
Here we consider the statement:
```
Animal a2 = func2();
```
Here from [non mandatory copy elison](https://en.cppreference.com/w/cpp/language/copy_elision#Mandatory_elision_of_copy/move_operations),
>
> Under the following circumstances, the compilers are permitted, but not required to omit the copy and move (since C++11) construction of class objects even if the copy/move (since C++11) constructor and the destructor have observable side-effects. The objects are constructed directly into the storage where they would otherwise be copied/moved to. This is an optimization: even when it takes place and the copy/move (since C++11) constructor is not called, it still **must be present and accessible** (as if no optimization happened at all), otherwise the program is ill-formed:
>
>
>
>
> * In a return statement, when the operand is the name of a non-volatile object with automatic storage duration, which isn't a function parameter or a catch clause parameter, and which is of the same class type (ignoring cv-qualification) as the function return type.
>
>
>
That is, the copy/move constructors are required to exist(that is these ctors must be present and accessible) but since you've explicitly marked them as **deleted** this statement fails with the [error](https://onlinegdb.com/p_Ws9ITJV):
```
error: use of deleted function ‘Animal::Animal(Animal&&)’
```
The error can also be seen [here](https://onlinegdb.com/p_Ws9ITJV)
|
In C++17 there is a new set of rules about temporary materialization.
In a simple explanation an expression that evaluates to a prvalue (a temporary) doesn't immediately create an object, but is instead a recipe for creating an object. So in your example `Animal()` doesn't create an object straight away so that's why you can return it even if the copy and move constructors are deleted.
In your main assigning the prvalue to `a` triggers the temporary materialization so the object is only now created directly in the scope of `main`. Throught all of this there is a single object so there is no operation of copy or move.
|
4,175,697
|
I am writting a daemon server using python, sometimes there are python runtime errors, for example some variable type is not correct. That error will not cause the process to exit.
Is it possible for me to redirect such runtime error to a log file?
|
2010/11/14
|
[
"https://Stackoverflow.com/questions/4175697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/197036/"
] |
It looks like you are asking two questions.
To prevent your process from exiting on errors, you need to catch all [`exception`](http://www.python.org/doc//current/library/exceptions.html)s that are raised using [`try...except...finally`](http://www.python.org/doc//current/reference/compound_stmts.html#try).
You also wish to redirect all output to a log. Happily, Python provides a comprehensive [`logging`](http://www.python.org/doc//current/library/logging.html) module for your convenience.
An example, for your delight and delectation:
```
#!/usr/bin/env python
import logging
logging.basicConfig(filename='warning.log', level=logging.WARNING)
try:
1/0
except ZeroDivisionError, e:
logging.warning('The following error occurred, yet I shall carry on regardless: %s', e)
```
This graciously emits:
```
% cat warning.log
WARNING:root:The following error occurred, yet I shall carry on regardless: integer division or modulo by zero
```
|
Look at the `traceback` module. If you catch a `RuntimeError`, you can write it to the log (look at the `logging` module for that).
|
17,860,717
|
Spent almost more than 30 mins of my time in trying all different possibly. Finally now I'm exhausted. Can someone please help me on this quote problem
```
def remote_shell_func_execute():
with settings(host_string='user@XXX.yyy.com',warn_only=True):
process = run("subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
process.wait()
for line in process.stdout.readlines():
print(line)
```
when run the fab, I get
```
fab remote_shell_func_execute
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/Fabric-1.6.1-py2.7.egg/fabric/main.py",line 654, in main
docstring, callables, default = load_fabfile(fabfile)
File "/usr/local/lib/python2.7/site-packages/Fabric-1.6.1-py2.7.egg/fabric/main.py",line 165, in load_fabfile
imported = importer(os.path.splitext(fabfile)[0])
File "/home/fabfile.py", line 18
process = run("subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
^
SyntaxError: invalid syntax
```
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17860717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2436055/"
] |
Just use a single quoted string.
```
run('subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)')
```
Or escape the inner `"`.
```
run("subprocess.Popen(\[\"/root/test/shell_script_for_test.sh func2\"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
```
|
When you escape quotes, the escape backslash must go directly before the quote character:
```
"[\"/..."
```
Alternatively, use single quotes for the string, this avoids the need for escaping at all:
```
'["/...'
```
|
7,314,925
|
I'm saving data to a PostgreSQL backend through Django. Many of the fields in my models are DecimalFields set to arbitrarily high max\_digits and decimal\_places, corresponding to numeric columns in the database backend. The data in each column have a precision (or number of decimal places) that is not known *a priori*, and each datum in a given column need not have the same precision.
For example, arguments to a model may look like:
```
{'dist': Decimal("94.3"), 'dist_e': Decimal("1.2")}
{'dist': Decimal("117"), 'dist_e': Decimal("4")}
```
where the keys are database column names.
Upon output, I need to preserve and redisplay those data with the precision with which they were read in. In other words, after the database is queried, the displayed data need to look exactly like the data in that were read in, with no additional or missing trailing 0's in the decimals. When queried, however, either in a django shell or in the admin interface, all of the DecimalField data come back with many trailing 0's.
I have seen similar questions answered for money values, where the precision (2 decimal places) is both known and the same for all data in a given column. However, how might one best preserve the exact precision represented by Decimal values in Django and numeric values in PostgreSQL when the precision is not the same and not known beforehand?
EDIT:
Possibly an additional useful piece of information: When viewing the table to which the data are saved in a Django dbshell, the many trailing 0's are also present. The python Decimal value is apparently converted to the maximum precision value specified in the models.py file upon being saved to the PostgreSQL backend.
|
2011/09/06
|
[
"https://Stackoverflow.com/questions/7314925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/929783/"
] |
If you need perfect parity forwards and backwards, you'll need to use a CharField. Any number-based database field is going to interact with your data muxing it in some way or another. Now, I know you mentioned not being able to know the digit length of the data points, and a CharField requires some length. You can either set it arbitrarily high (1000, 2000, etc) or I suppose you could use a TextField, instead.
However, with either approach, you're going to be wasting a lot database resources in most scenarios. I would suggest modifying your approach such that extra zeros at the end don't matter (for display purpose you could always chop them off), or such that the precision is not longer arbitrary.
|
Since I asked this question awhile ago and the answer remains the same, I'll share what I found should it be helpful to anyone in a similar position. Django doesn't have the ability to take advantage of the PostgreSQL Numerical column type with arbitrary precision. In order to preserve the display precision of data I upload to my database, and in order to be able to perform mathematical calculations on values obtained from database queries without first recasting strings into python Decimal types, I opted to add an extra precision column for every numerical column in the database.
The precision value is an integer indicating how many digits after the decimal point are required. The datum `4.350` is assigned a value of `3` in its corresponding precision column. Normally displayed integers (e.g. `2531`) have a precision entry of `0`. However, large integers reported in scientific notation are assigned a negative integer to preserve their display precision. The value `4.320E+33`, for example, gets the precision entry `-3`. The database recognizes that all objects with negative precision values should be re-displayed in scientific notation.
This solution adds some complexity to the structure and code surrounding the database, but it has proven effective. It also allows me to accurately preserve precision through calculations like converting to/from log and linear values.
|
46,258,924
|
I am trying to recreate some code from MatLab using numpy, and I cannot find out how to store a variable amount of matrices. In MatLab I used the following code:
```
for i = 1:rows
K{i} = zeros(5,4); %create 5x4 matrix
K{i}(1,1)= ET(i,1); %put knoop i in table
K{i}(1,3)= ET(i,2); %put knoop j in table
... *do some stuff with it*
end
```
What i assumed i needed to do was to create a List of matrices, but I've only been able to store single arrays in list, not matrices. Something like this, but then working:
```
for i in range(ET.shape[0]):
K[[i]] = np.zeros((5, 4))
K[[i]][1, 2] = ET[i, 2]
```
I've tried looking on
<https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html>
but it didn't help me.
Looking through somewhat simular questions a dirty method seems to be using globals, and than changing the variable name, like this:
```
for x in range(0, 9):
globals()['string%s' % x] = 'Hello'
print(string3)
```
Is this the best way for me to achieve my goal, or is there a proper way of storing multiple matrices in a variable? Or am i wanting something that I shouldnt want to do because python has a different way of handeling it?
|
2017/09/16
|
[
"https://Stackoverflow.com/questions/46258924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8620430/"
] |
In the MATLAB code you are using a Cell array. Cells are generic containers. The equivalent in Python is a regular [list](https://docs.python.org/2/tutorial/introduction.html#lists) - not a numpy structure. You can create your numpy arrays and then store them in a list like so:
```
import numpy as np
array1 = np.array([1, 2, 3, 4]) # Numpy array (1D)
array2 = np.matrix([[4,5],[6,7]]) # Numpy matrix
array3 = np.zeros((3,4)) # 2D numpy array
array_list = [a1, a2, a3] # List containing the numpy objects
```
So your code would need to be modified to look more like this:
```
K = []
for i in range(rows):
K.append(np.zeros((5,4))) # create 5x4 matrix
K[i][1,1]= ET[i,1] # put knoop i in table
K[i][1,3]= ET[i,2] # put knoop j in table
... *do some stuff with it*
```
If you are just getting started with scientific computing in Python this [article](http://engineeringterminal.com/electrical-engineering/tutorials/intro-to-scipy-for-matlab-users.html) is helpful.
|
How about something like this:
```
import numpy as np
myList = []
for i in range(100):
mtrx = np.zeros((5,4))
mtrx[1,2] = 7
mtrx[3,0] = -5
myList.append(mtrx)
```
|
1,242,904
|
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python.
CMake has to be able to find the path on all three major OS as I plan to deploy my application on Linux, Mac and Windows.
I tried using
```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```
however that does not work.
I can also obtain the path by running
```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```
on the shell, but how would I invoke that from CMake ?
SOLUTION:
Thanks, Alex.
So the command that gives me the site-package dir is:
```
execute_process ( COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" OUTPUT_VARIABLE PYTHON_SITE_PACKAGES OUTPUT_STRIP_TRAILING_WHITESPACE)
```
The OUTPUT\_STRIP\_TRAILING\_WHITESPACE command is needed to remove the trailing new line.
|
2009/08/07
|
[
"https://Stackoverflow.com/questions/1242904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
You can execute external processes in cmake with [execute\_process](http://www.cmake.org/cmake/help/cmake2.6docs.html#command:execute_process) (and get the output into a variable if needed, as it would be here).
|
I suggest to use `get_python_lib(True)` if you are making this extension as a dynamic library. This first parameter should be true if you need the platform specific location (in 64bit linux machines, this could be `/usr/lib64` instead of `/usr/lib`)
|
1,242,904
|
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python.
CMake has to be able to find the path on all three major OS as I plan to deploy my application on Linux, Mac and Windows.
I tried using
```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```
however that does not work.
I can also obtain the path by running
```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```
on the shell, but how would I invoke that from CMake ?
SOLUTION:
Thanks, Alex.
So the command that gives me the site-package dir is:
```
execute_process ( COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" OUTPUT_VARIABLE PYTHON_SITE_PACKAGES OUTPUT_STRIP_TRAILING_WHITESPACE)
```
The OUTPUT\_STRIP\_TRAILING\_WHITESPACE command is needed to remove the trailing new line.
|
2009/08/07
|
[
"https://Stackoverflow.com/questions/1242904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
You can execute external processes in cmake with [execute\_process](http://www.cmake.org/cmake/help/cmake2.6docs.html#command:execute_process) (and get the output into a variable if needed, as it would be here).
|
Slightly updated version that I used for [lcm](https://github.com/lcm-proj-lcm):
```
execute_process(
COMMAND "${PYTHON_EXECUTABLE}" -c "if True:
from distutils import sysconfig as sc
print(sc.get_python_lib(prefix='', plat_specific=True))"
OUTPUT_VARIABLE PYTHON_SITE
OUTPUT_STRIP_TRAILING_WHITESPACE)
```
This sets `PYTHON_SITE` to the appropriate prefix-relative path, suitable for use like:
```
install(
FILES ${mypackage_python_files}
DESTINATION ${PYTHON_SITE}/mypackage)
```
(Please don't install to an absolute path! Doing so bypasses `CMAKE_INSTALL_PREFIX`.)
|
1,242,904
|
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python.
CMake has to be able to find the path on all three major OS as I plan to deploy my application on Linux, Mac and Windows.
I tried using
```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```
however that does not work.
I can also obtain the path by running
```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```
on the shell, but how would I invoke that from CMake ?
SOLUTION:
Thanks, Alex.
So the command that gives me the site-package dir is:
```
execute_process ( COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" OUTPUT_VARIABLE PYTHON_SITE_PACKAGES OUTPUT_STRIP_TRAILING_WHITESPACE)
```
The OUTPUT\_STRIP\_TRAILING\_WHITESPACE command is needed to remove the trailing new line.
|
2009/08/07
|
[
"https://Stackoverflow.com/questions/1242904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
You can execute external processes in cmake with [execute\_process](http://www.cmake.org/cmake/help/cmake2.6docs.html#command:execute_process) (and get the output into a variable if needed, as it would be here).
|
Since CMake 3.12 you can use [FindPython](https://cmake.org/cmake/help/latest/module/FindPython.html) module which populates `Python_SITELIB` and `Python_SITEARCH` variables for architecture independent and specific libraries, respectively.
Example:
```
find_package(Python ${PYTHON_VERSION} REQUIRED COMPONENTS Development)
Python_add_library(foo MODULE
src/foo.cc src/python_interface.cc
)
install(TARGETS foo DESTINATION ${Python_SITEARCH}/foo)
```
|
1,242,904
|
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python.
CMake has to be able to find the path on all three major OS as I plan to deploy my application on Linux, Mac and Windows.
I tried using
```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```
however that does not work.
I can also obtain the path by running
```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```
on the shell, but how would I invoke that from CMake ?
SOLUTION:
Thanks, Alex.
So the command that gives me the site-package dir is:
```
execute_process ( COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" OUTPUT_VARIABLE PYTHON_SITE_PACKAGES OUTPUT_STRIP_TRAILING_WHITESPACE)
```
The OUTPUT\_STRIP\_TRAILING\_WHITESPACE command is needed to remove the trailing new line.
|
2009/08/07
|
[
"https://Stackoverflow.com/questions/1242904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
Slightly updated version that I used for [lcm](https://github.com/lcm-proj-lcm):
```
execute_process(
COMMAND "${PYTHON_EXECUTABLE}" -c "if True:
from distutils import sysconfig as sc
print(sc.get_python_lib(prefix='', plat_specific=True))"
OUTPUT_VARIABLE PYTHON_SITE
OUTPUT_STRIP_TRAILING_WHITESPACE)
```
This sets `PYTHON_SITE` to the appropriate prefix-relative path, suitable for use like:
```
install(
FILES ${mypackage_python_files}
DESTINATION ${PYTHON_SITE}/mypackage)
```
(Please don't install to an absolute path! Doing so bypasses `CMAKE_INSTALL_PREFIX`.)
|
I suggest to use `get_python_lib(True)` if you are making this extension as a dynamic library. This first parameter should be true if you need the platform specific location (in 64bit linux machines, this could be `/usr/lib64` instead of `/usr/lib`)
|
1,242,904
|
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python.
CMake has to be able to find the path on all three major OS as I plan to deploy my application on Linux, Mac and Windows.
I tried using
```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```
however that does not work.
I can also obtain the path by running
```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```
on the shell, but how would I invoke that from CMake ?
SOLUTION:
Thanks, Alex.
So the command that gives me the site-package dir is:
```
execute_process ( COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" OUTPUT_VARIABLE PYTHON_SITE_PACKAGES OUTPUT_STRIP_TRAILING_WHITESPACE)
```
The OUTPUT\_STRIP\_TRAILING\_WHITESPACE command is needed to remove the trailing new line.
|
2009/08/07
|
[
"https://Stackoverflow.com/questions/1242904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
Since CMake 3.12 you can use [FindPython](https://cmake.org/cmake/help/latest/module/FindPython.html) module which populates `Python_SITELIB` and `Python_SITEARCH` variables for architecture independent and specific libraries, respectively.
Example:
```
find_package(Python ${PYTHON_VERSION} REQUIRED COMPONENTS Development)
Python_add_library(foo MODULE
src/foo.cc src/python_interface.cc
)
install(TARGETS foo DESTINATION ${Python_SITEARCH}/foo)
```
|
I suggest to use `get_python_lib(True)` if you are making this extension as a dynamic library. This first parameter should be true if you need the platform specific location (in 64bit linux machines, this could be `/usr/lib64` instead of `/usr/lib`)
|
55,598,548
|
I'm trying to update my Spyder to fix some error in my Spyder 3.2.3.
But when I called `conda update spyder` mentioned in (<https://github.com/spyder-ide/spyder/issues/9019#event-2225858161>), the Anaconda prompt showed as follow:

and the Spyder wasn't updated to the latest version (3.3.3).
I guessed the reason I couldn't update Spyder is because my Conda isn't the latest version, so I ran
`conda update -n base -c defaults conda`
However after that (update conda to latest version 4.6.11) I found that all my Spyder and my Anaconda Navigator could not be opened. It seems that the commands not only update the Conda, but also update some other packages to py3.7.
When I called `conda update spyder` again, the prompt showed as follow:
```
WARNING: The conda.compat module is deprecated and will be removed in a future release.
Collecting package metadata: done
Solving environment: |
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- defaults/win-64::anaconda==5.3.1=py37_0
- https://mirrors.ustc.edu.cn/anaconda/pkgs/free/win-64::anaconda-navigator==1.6.4=py36_0
- defaults/win-64::astropy==3.0.4=py37hfa6e2cd_0
- defaults/win-64::blaze==0.11.3=py37_0
- defaults/win-64::bottleneck==1.2.1=py37h452e1ab_1
- defaults/win-64::dask==0.19.1=py37_0
- defaults/win-64::datashape==0.5.4=py37_1
- defaults/win-64::h5py==2.8.0=py37h3bdd7fb_2
- defaults/win-64::imageio==2.4.1=py37_0
- defaults/win-64::matplotlib==2.2.3=py37hd159220_0
- defaults/win-64::mkl-service==1.1.2=py37hb217b18_5
- defaults/win-64::mkl_fft==1.0.4=py37h1e22a9b_1
- defaults/win-64::mkl_random==1.0.1=py37h77b88f5_1
- defaults/win-64::numba==0.39.0=py37h830ac7b_0
- defaults/win-64::numexpr==2.6.8=py37h9ef55f4_0
- defaults/win-64::numpy-base==1.15.1=py37h8128ebf_0
- defaults/win-64::odo==0.5.1=py37_0
- defaults/win-64::pandas==0.23.4=py37h830ac7b_0
- defaults/win-64::patsy==0.5.0=py37_0
- defaults/win-64::pytables==3.4.4=py37he6f6034_0
- defaults/win-64::pytest-arraydiff==0.2=py37h39e3cac_0
- defaults/win-64::pytest-astropy==0.4.0=py37_0
- defaults/win-64::pytest-doctestplus==0.1.3=py37_0
- defaults/win-64::pywavelets==1.0.0=py37h452e1ab_0
- defaults/win-64::scikit-image==0.14.0=py37h6538335_1
- defaults/win-64::scikit-learn==0.19.2=py37heebcf9a_0
- defaults/win-64::scipy==1.1.0=py37h4f6bf74_1
- defaults/win-64::seaborn==0.9.0=py37_0
- defaults/win-64::statsmodels==0.9.0=py37h452e1ab_0
done
# All requested packages already installed.
```
I guess a Python version conflict (my Python version is 3.6.2) may be causing the failures in Spyder and the Navigator, so I tried to restore these packages to their py3.6 versions by calling `conda install python=3.6`, but it doesn't work.
This is the result of `conda list --revisions` (the last 2 revisions):
```
2019-04-09 22:59:08 (rev 3)
certifi {2016.2.28 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2019.3.9}
conda {4.5.13 -> 4.6.11}
cryptography {1.8.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2.6.1}
curl {7.52.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 7.64.0}
libcurl {7.61.0 -> 7.64.0}
libpng {1.6.34 -> 1.6.36}
libprotobuf {3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.1}
libssh2 {1.8.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.8.0}
menuinst {1.4.7 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.4.16}
openssl {1.0.2l (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.1.1b}
protobuf {3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.1}
pycurl {7.43.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 7.43.0.2}
pyqt {5.6.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 5.9.2}
python {3.6.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.8}
qt {5.6.2 -> 5.9.7}
requests {2.14.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2.21.0}
sip {4.18 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 4.19.8}
sqlite {3.24.0 -> 3.27.2}
vc {14 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 14.1}
+krb5-1.16.1
2019-04-09 23:02:48 (rev 4)
cryptography {2.6.1 -> 1.8.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
curl {7.64.0 -> 7.52.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
krb5 {1.16.1 -> 1.13.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
libcurl {7.64.0 -> 7.61.1}
libpng {1.6.36 -> 1.6.34}
libprotobuf {3.6.1 -> 3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
libssh2 {1.8.0 -> 1.8.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
menuinst {1.4.16 -> 1.4.14}
openssl {1.1.1b -> 1.0.2l (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
protobuf {3.6.1 -> 3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
pycurl {7.43.0.2 -> 7.43.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
pyqt {5.9.2 -> 5.6.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
python {3.6.8 -> 3.6.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
qt {5.9.7 -> 5.6.2}
sqlite {3.27.2 -> 3.25.2}
vc {14.1 -> 14 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)}
```
This is the result of `conda info`
```
active environment : base
active env location : C:\Users\lenovo\Anaconda3
shell level : 1
user config file : C:\Users\lenovo\.condarc
populated config files : C:\Users\lenovo\.condarc
conda version : 4.6.11
conda-build version : 3.0.19
python version : 3.6.2.final.0
base environment : C:\Users\lenovo\Anaconda3 (writable)
channel URLs : https://mirrors.ustc.edu.cn/anaconda/pkgs/free/win-64
https://mirrors.ustc.edu.cn/anaconda/pkgs/free/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/win-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Users\lenovo\Anaconda3\pkgs
C:\Users\lenovo\.conda\pkgs
C:\Users\lenovo\AppData\Local\conda\conda\pkgs
envs directories : C:\Users\lenovo\Anaconda3\envs
C:\Users\lenovo\.conda\envs
C:\Users\lenovo\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.6.11 requests/2.21.0 CPython/3.6.2 Windows/10 Windows/10.0.17134
administrator : False
netrc file : None
offline mode : False
```
What is the best way to fix the issue?
How can I get my Spyder to work again?
|
2019/04/09
|
[
"https://Stackoverflow.com/questions/55598548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11335756/"
] |
Fortunately, I fixed my Spyder by using the command `conda install --revision 2`, and then updated Spyder to version 3.3.4 in the Anaconda Navigator.
`conda list --revisions` shows each saved revision, so I used `conda install --revision 2` to restore the environment to what it was before I updated conda. After that my Spyder and Anaconda Navigator worked normally again. Then I updated Spyder to version 3.3.4 in the Anaconda Navigator.
Here is the documentation for [`conda install`](https://docs.conda.io/projects/conda/en/latest/commands/install.html?highlight=revision).
|
I reinstalled the package that was causing the inconsistency, and the problem went away.
My inconsistency error:
[](https://i.stack.imgur.com/ACgYe.png)
What I did:
```
conda install -c conda-forge mkl-service
```
|
55,598,548
|
I'm trying to update my Spyder to fix some errors in my Spyder 3.2.3.
But when I called `conda update spyder` as mentioned in (<https://github.com/spyder-ide/spyder/issues/9019#event-2225858161>), the Anaconda prompt showed as follows:

and Spyder wasn't updated to the latest version (3.3.3).
|
2019/04/09
|
[
"https://Stackoverflow.com/questions/55598548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11335756/"
] |
This worked with Python 3.8 and Spyder 4.1.5:
```
conda install pyqt --force-reinstall
```
|
Fortunately, I fixed my Spyder by using the command `conda install --revision 2`, and then updated Spyder to version 3.3.4 in the Anaconda Navigator.
`conda list --revisions` shows each saved revision, so I used `conda install --revision 2` to restore the environment to what it was before I updated conda. After that my Spyder and Anaconda Navigator worked normally again. Then I updated Spyder to version 3.3.4 in the Anaconda Navigator.
Here is the documentation for [`conda install`](https://docs.conda.io/projects/conda/en/latest/commands/install.html?highlight=revision).
|
55,598,548
|
|
2019/04/09
|
[
"https://Stackoverflow.com/questions/55598548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11335756/"
] |
This worked with Python 3.8 and Spyder 4.1.5:
```
conda install pyqt --force-reinstall
```
|
I reinstalled the package that was causing the inconsistency, and the problem went away.
My inconsistency error:
[](https://i.stack.imgur.com/ACgYe.png)
What I did:
```
conda install -c conda-forge mkl-service
```
|
64,832,243
|
I'm trying to install Apache Airflow with pip, so I enter `pip install apache-airflow`.
But somehow I got an error that I don't understand. Could you please help me with this?
For a little bit of context, I'm using macOS Catalina and Python 3.8.2.
I have tried to upgrade pip, but the error is still there.
This is the error that appears:
```
ERROR: Command errored out with exit status 1:
command: /Users/muhammadsyamsularifin/airflow/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8qj0qv0h/install-record.txt --single-version-externally-managed --compile --install-headers /Users/muhammadsyamsularifin/airflow/venv/include/site/python3.8/setproctitle
cwd: /private/tmp/pip-install-ruyjyg1t/setproctitle/
Complete output (119 lines):
running install
running build
running build_ext
building 'setproctitle' extension
creating build
creating build/temp.macosx-10.14.6-x86_64-3.8
creating build/temp.macosx-10.14.6-x86_64-3.8/src
xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -DSPT_VERSION=1.1.10 -D__darwin__=1 -I/Users/muhammadsyamsularifin/airflow/venv/include -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c src/setproctitle.c -o build/temp.macosx-10.14.6-x86_64-3.8/src/setproctitle.o
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11:
In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:63:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture
#error Unsupported architecture
^
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11:
In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:64:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported
#error architecture not supported
^
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:33:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported
#error architecture not supported
^
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t'
typedef __int64_t __darwin_blkcnt_t; /* total blocks */
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_blksize_t; /* preferred block size */
^
note: '__int128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_dev_t; /* dev_t */
^
note: '__int128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */
^
note: '__uint128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/
^
note: '__uint128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t'
typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t'
typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'?
typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */
^
note: '__uint128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t'
typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */
^
note: '__int128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_sigset_t; /* [???] signal set */
^
note: '__uint128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */
^
note: '__int128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_uid_t; /* [???] user IDs */
^
note: '__uint128_t' declared here
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */
^
note: '__uint128_t' declared here
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_wctype_t;
^
note: '__uint128_t' declared here
In file included from src/setproctitle.c:14:
In file included from src/spt.h:15:
In file included from src/spt_python.h:14:
In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:75:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_va_list.h:31:
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/types.h:37:2: error: architecture not supported
#error architecture not supported
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
error: command 'xcrun' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/muhammadsyamsularifin/airflow/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8qj0qv0h/install-record.txt --single-version-externally-managed --compile --install-headers /Users/muhammadsyamsularifin/airflow/venv/include/site/python3.8/setproctitle Check the logs for full command output.
```
|
2020/11/14
|
[
"https://Stackoverflow.com/questions/64832243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14607085/"
] |
I looked into similar errors and here are a few possible fixes:
1. If you installed Python 3.8 via Homebrew, try uninstalling it and installing a new version that you build from source.
2. Try `sudo python3.8 -m pip install apache-airflow`.
3. Upgrade to Python 3.8.5 as per [this post](https://stackoverflow.com/questions/64111015/pip-install-psutil-is-throwing-error-unsupported-architecture-any-workarou).
4. Export the environment variable `export ARCHFLAGS="-arch x86_64"` as per [this post](https://github.com/giampaolo/psutil/issues/1832#issuecomment-704596756).
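As a quick diagnostic related to fix 4 (a side note, not one of the fixes above; `platform` and `sysconfig` are standard library, and the printed values depend on your machine), you can check which architecture your interpreter reports. The clang errors in the question suggest a mismatch between the architectures the compiler targets (`-arch arm64 -arch x86_64`) and what the SDK headers support:

```python
import platform
import sysconfig

# Which CPU architecture is this interpreter running on?
print(platform.machine())        # e.g. "x86_64" or "arm64"

# Which platform tag will C extensions be built for?
print(sysconfig.get_platform())  # e.g. "macosx-10.14-x86_64"
```

If `platform.machine()` disagrees with the `-arch` flags in the failing compile line, pinning `ARCHFLAGS` as in fix 4 forces the build to a single matching architecture.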
|
I faced the exact same issue; here's how I solved it.
I explicitly used Python 3.8.5, as suggested in [Meghdeep Ray's answer](https://stackoverflow.com/a/64867996/2508468).
I also exported the `export ARCHFLAGS="-arch x86_64"` environment variable, again following [Meghdeep Ray's answer](https://stackoverflow.com/a/64867996/2508468).
But the important part for me was to install `apache-airflow` with `pip` using the following command:
```
pip install apache-airflow[]==1.10.12 \
--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt"
```
The constraint file is important, as pointed out in this [GitHub issue](https://github.com/apache/airflow/issues/12031).
|
1,417,473
|
I'm trying to call a function in a Python script from my main C++ program. The python function takes a string as the argument and returns nothing (ok.. 'None').
It works perfectly well (I never thought it would be that easy...) as long as the previous call is finished before the function is called again; otherwise there is an access violation at `pModule = PyImport_Import(pName)`.
There are a lot of tutorials on how to embed Python in C and vice versa, but I found nothing about this problem.
```
int callPython(TCHAR* title){
PyObject *pName, *pModule, *pFunc;
PyObject *pArgs, *pValue;
Py_Initialize();
pName = PyUnicode_FromString("Main");
/* Name of Pythonfile */
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule != NULL) {
pFunc = PyObject_GetAttrString(pModule, "writeLyricToFile");
/* function name. pFunc is a new reference */
if (pFunc && PyCallable_Check(pFunc)) {
pArgs = PyTuple_New(1);
pValue = PyUnicode_FromWideChar(title, -1);
if (!pValue) {
Py_DECREF(pArgs);
Py_DECREF(pModule);
showErrorBox(_T("pValue is false"));
return 1;
}
PyTuple_SetItem(pArgs, 0, pValue);
pValue = PyObject_CallObject(pFunc, pArgs);
Py_DECREF(pArgs);
if (pValue != NULL) {
//worked as it should!
Py_DECREF(pValue);
}
else {
Py_DECREF(pFunc);
Py_DECREF(pModule);
PyErr_Print();
showErrorBox(_T("pValue is null"));
return 1;
}
}
else {
if (PyErr_Occurred()) PyErr_Print();
showErrorBox(_T("pFunc null or not callable"));
return 1;
}
Py_XDECREF(pFunc);
Py_DECREF(pModule);
}
else {
PyErr_Print();
showErrorBox(_T("pModule is null"));
return 1;
}
Py_Finalize();
return 0;
}
```
|
2009/09/13
|
[
"https://Stackoverflow.com/questions/1417473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144746/"
] |
When you say "as long as the previous call is finished before the function is called again", I can only assume that you have multiple threads calling from C++ into Python. The Python C API is not thread-safe unless the calling thread holds the Global Interpreter Lock, so this is going to fail!
Read up on the Global Interpreter Lock (GIL) in the Python manual. Perhaps the following links will help:
* <http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock>
* <http://docs.python.org/c-api/init.html#PyEval_InitThreads>
* <http://docs.python.org/c-api/init.html#PyEval_AcquireLock>
* <http://docs.python.org/c-api/init.html#PyEval_ReleaseLock>
The GIL is mentioned on Wikipedia:
* <http://en.wikipedia.org/wiki/Global_Interpreter_Lock>
|
Thank you for your help!
Yes, you're right, there are several C threads. I never thought I'd need a mutex for the interpreter itself; the GIL is a completely new concept for me (and isn't mentioned even once in the whole tutorial).
After reading the reference (for sure not the easiest part of it, although the PyGILState\_\* functions simplify the whole thing a lot), I added an
```
void initPython(){
PyEval_InitThreads();
Py_Initialize();
PyEval_ReleaseLock();
}
```
function to initialise the interpreter correctly.
Every thread creates its data structure, acquires the lock and releases it afterwards as shown in the reference.
Works as it should, but when calling Py\_Finalize() before terminating the process I get a segfault. Are there any problems with just leaving it out?
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you're always accessing the variables in config.py like this:
```
import config
...
config.VAR1
```
You can replace the `config` module imported by whatever module you're actually trying to test. So, if you're testing a module called `foo`, and it imports and uses `config`, you can say:
```
from mock import patch
import foo
import config_test
....
with patch('foo.config', new=config_test):
foo.whatever()
```
But this isn't actually replacing the module globally, it's only replacing it within the `foo` module's namespace. So you would need to patch it everywhere it's imported. It also wouldn't work if `foo` does this instead of `import config`:
```
from config import VAR1
```
You can also mess with `sys.modules` to do this:
```
import config_test
import sys
sys.modules["config"] = config_test
# import modules that uses "import config" here, and they'll actually get config_test
```
But generally it's not a good idea to mess with `sys.modules`, and I don't think this case is any different. I would favor all of the other suggestions made over it.
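If you do go the `sys.modules` route despite that caveat, wrapping the swap in a context manager at least guarantees the original state is restored. A minimal sketch (the `swap_module` helper is an illustrative name, not part of `mock`):

```python
import contextlib
import sys


@contextlib.contextmanager
def swap_module(name, replacement):
    """Temporarily install `replacement` in sys.modules, restoring on exit."""
    original = sys.modules.get(name)
    sys.modules[name] = replacement
    try:
        yield replacement
    finally:
        if original is None:
            del sys.modules[name]  # the module wasn't loaded before; remove it again
        else:
            sys.modules[name] = original
```

Any `import config` executed inside the `with` block then resolves to the replacement, and the real module (or its absence) is restored afterwards.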
|
Consider this following setup
configuration.py:
```
import os
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
class TestConfig(object):
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
if os.getenv("TEST"):
config = TestConfig
else:
config = Config
```
now everywhere else in your code you can use:
```
from configuration import config
print config.CONF_VAR1, config.CONF_VAR2
```
And when you want to mock your configuration file, just set the environment variable "TEST".
Extra credit:
If you have lots of configuration variables that are shared between your testing and non-testing code, then you can derive TestConfig from Config and simply overwrite the variables that need changing:
```
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
CONF_VAR3 = "VAR3"
class TestConfig(Config):
CONF_VAR2 = "test_VAR2"
# CONF_VAR1, CONF_VAR3 remain unchanged
```
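One way to make the environment switch above unit-testable is to factor the decision into a function that takes the environment mapping as a parameter. A sketch (`select_config` is an illustrative name):

```python
import os


class Config(object):
    CONF_VAR1 = "VAR1"
    CONF_VAR2 = "VAR2"


class TestConfig(Config):
    CONF_VAR1 = "test_VAR1"  # CONF_VAR2 is inherited unchanged


def select_config(environ=os.environ):
    """Return TestConfig when a TEST variable is set, Config otherwise."""
    return TestConfig if environ.get("TEST") else Config
```

Module-level code can still do `config = select_config()`, but tests can now exercise both branches by passing an explicit dict instead of mutating `os.environ`.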
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
**foo.py:**
```
import config
VAR1 = config.CONF_VAR1
def bar():
return VAR1
```
**test.py:**
```
import unittest
import unittest.mock as mock
import test_config
class Test(unittest.TestCase):
def test_one(self):
with mock.patch.dict('sys.modules', config=test_config):
import foo
self.assertEqual(foo.bar(), 'test_VAR1')
```
As you can see, the patch works even for code executed during `import foo`.
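The same technique can be demonstrated in a single file by constructing the module under test programmatically. A self-contained variant of the two-file example above (building `foo` via `exec` is only for demonstration purposes):

```python
import sys
import types
import unittest.mock as mock

# Stand-in for test_config.py
test_config = types.ModuleType("test_config")
test_config.CONF_VAR1 = "test_VAR1"

# Stand-in for foo.py, executed while `config` is patched away
FOO_SOURCE = (
    "import config\n"
    "VAR1 = config.CONF_VAR1\n"
    "def bar():\n"
    "    return VAR1\n"
)

with mock.patch.dict("sys.modules", config=test_config):
    foo = types.ModuleType("foo")
    exec(FOO_SOURCE, foo.__dict__)  # `import config` inside resolves to test_config

result = foo.bar()
```

`mock.patch.dict` restores `sys.modules` on exit, so the fake `config` entry disappears after the block while `foo` keeps the value it captured at import time.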
|
Consider this following setup
configuration.py:
```
import os
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
class TestConfig(object):
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
if os.getenv("TEST"):
config = TestConfig
else:
config = Config
```
now everywhere else in your code you can use:
```
from configuration import config
print config.CONF_VAR1, config.CONF_VAR2
```
And when you want to mock your configuration file, just set the environment variable "TEST".
Extra credit:
If you have lots of configuration variables that are shared between your testing and non-testing code, then you can derive TestConfig from Config and simply overwrite the variables that need changing:
```
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
CONF_VAR3 = "VAR3"
class TestConfig(Config):
CONF_VAR2 = "test_VAR2"
# CONF_VAR1, CONF_VAR3 remain unchanged
```
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you want to mock an entire module just mock the import where the module is used.
`myfile.py`
```
import urllib
```
`test_myfile.py`
```
import mock
import unittest
class MyTest(unittest.TestCase):
@mock.patch('myfile.urllib')
def test_thing(self, urllib):
urllib.whatever.return_value = 4
```
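A runnable version of this pattern, with the `myfile` module built in memory so no separate file is needed (the module construction is purely illustrative; in real tests `myfile` would be a normal import):

```python
import sys
import types
from unittest import mock

# Build "myfile" in memory so the example runs as-is
myfile = types.ModuleType("myfile")
exec(
    "import urllib\n"
    "def fetch():\n"
    "    return urllib.whatever()\n",
    myfile.__dict__,
)
sys.modules["myfile"] = myfile

with mock.patch("myfile.urllib") as fake_urllib:
    fake_urllib.whatever.return_value = 4
    result = myfile.fetch()  # calls the mocked urllib, not the real one
```

After the `with` block exits, `myfile.urllib` is the real module again; the mock only exists inside the patched scope.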
|
Consider this following setup
configuration.py:
```
import os
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
class TestConfig(object):
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
if os.getenv("TEST"):
config = TestConfig
else:
config = Config
```
now everywhere else in your code you can use:
```
from configuration import config
print config.CONF_VAR1, config.CONF_VAR2
```
And when you want to mock your configuration file, just set the environment variable "TEST".
Extra credit:
If you have lots of configuration variables that are shared between your testing and non-testing code, then you can derive TestConfig from Config and simply overwrite the variables that need changing:
```
class Config(object):
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
CONF_VAR3 = "VAR3"
class TestConfig(Config):
CONF_VAR2 = "test_VAR2"
# CONF_VAR1, CONF_VAR3 remain unchanged
```
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you're always accessing the variables in config.py like this:
```
import config
...
config.VAR1
```
You can replace the `config` module imported by whatever module you're actually trying to test. So, if you're testing a module called `foo`, and it imports and uses `config`, you can say:
```
from mock import patch
import foo
import config_test
....
with patch('foo.config', new=config_test):
foo.whatever()
```
But this isn't actually replacing the module globally, it's only replacing it within the `foo` module's namespace. So you would need to patch it everywhere it's imported. It also wouldn't work if `foo` does this instead of `import config`:
```
from config import VAR1
```
You can also mess with `sys.modules` to do this:
```
import config_test
import sys
sys.modules["config"] = config_test
# import modules that uses "import config" here, and they'll actually get config_test
```
But generally it's not a good idea to mess with `sys.modules`, and I don't think this case is any different. I would favor all of the other suggestions made over it.
|
If your application ("app.py" say) looks like
```
import config
print config.var1, config.var2
```
And gives the output:
```
$ python app.py
VAR1 VAR2
```
You can use `mock.patch` to patch the individual config variables:
```
from mock import patch
with patch('config.var1', 'test_VAR1'):
import app
```
This results in:
```
$ python mockimport.py
test_VAR1 VAR2
```
Though I'm not sure if this is possible at the module level.
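Patching individual attributes this way can be checked without separate files by creating the `config` module in memory first (a sketch; the in-memory module stands in for a real `config.py`):

```python
import sys
import types
from unittest import mock

# In-memory stand-in for config.py so the sketch is runnable as-is
config = types.ModuleType("config")
config.var1 = "VAR1"
config.var2 = "VAR2"
sys.modules["config"] = config

with mock.patch("config.var1", "test_VAR1"):
    patched = (config.var1, config.var2)  # var1 replaced, var2 untouched

restored = config.var1  # mock.patch undoes the change on exit
```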
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
**foo.py:**
```
import config
VAR1 = config.CONF_VAR1
def bar():
return VAR1
```
**test.py:**
```
import unittest
import unittest.mock as mock
import test_config
class Test(unittest.TestCase):
def test_one(self):
with mock.patch.dict('sys.modules', config=test_config):
import foo
self.assertEqual(foo.bar(), 'test_VAR1')
```
As you can see, the patch works even for code executed during `import foo`.
|
If your application ("app.py" say) looks like
```
import config
print config.var1, config.var2
```
And gives the output:
```
$ python app.py
VAR1 VAR2
```
You can use `mock.patch` to patch the individual config variables:
```
from mock import patch
with patch('config.var1', 'test_VAR1'):
import app
```
This results in:
```
$ python mockimport.py
test_VAR1 VAR2
```
Though I'm not sure if this is possible at the module level.
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you want to mock an entire module just mock the import where the module is used.
`myfile.py`
```
import urllib
```
`test_myfile.py`
```
import mock
import unittest
class MyTest(unittest.TestCase):
@mock.patch('myfile.urllib')
def test_thing(self, urllib):
urllib.whatever.return_value = 4
```
|
If your application ("app.py" say) looks like
```
import config
print config.var1, config.var2
```
And gives the output:
```
$ python app.py
VAR1 VAR2
```
You can use `mock.patch` to patch the individual config variables:
```
from mock import patch
with patch('config.var1', 'test_VAR1'):
import app
```
This results in:
```
$ python mockimport.py
test_VAR1 VAR2
```
Though I'm not sure if this is possible at the module level.
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you're always accessing the variables in config.py like this:
```
import config
...
config.VAR1
```
You can replace the `config` module imported by whatever module you're actually trying to test. So, if you're testing a module called `foo`, and it imports and uses `config`, you can say:
```
from mock import patch
import foo
import config_test
....
with patch('foo.config', new=config_test):
foo.whatever()
```
But this isn't actually replacing the module globally, it's only replacing it within the `foo` module's namespace. So you would need to patch it everywhere it's imported. It also wouldn't work if `foo` does this instead of `import config`:
```
from config import VAR1
```
You can also mess with `sys.modules` to do this:
```
import config_test
import sys
sys.modules["config"] = config_test
# import modules that uses "import config" here, and they'll actually get config_test
```
But generally it's not a good idea to mess with `sys.modules`, and I don't think this case is any different. I would favor all of the other suggestions made over it.
|
**foo.py:**
```
import config
VAR1 = config.CONF_VAR1
def bar():
return VAR1
```
**test.py:**
```
import unittest
import unittest.mock as mock
import test_config
class Test(unittest.TestCase):
def test_one(self):
with mock.patch.dict('sys.modules', config=test_config):
import foo
self.assertEqual(foo.bar(), 'test_VAR1')
```
As you can see, the patch works even for code executed during `import foo`.
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
If you're always accessing the variables in config.py like this:
```
import config
...
config.VAR1
```
You can replace the `config` module imported by whatever module you're actually trying to test. So, if you're testing a module called `foo`, and it imports and uses `config`, you can say:
```
from mock import patch
import foo
import config_test
....
with patch('foo.config', new=config_test):
foo.whatever()
```
But this isn't actually replacing the module globally, it's only replacing it within the `foo` module's namespace. So you would need to patch it everywhere it's imported. It also wouldn't work if `foo` does this instead of `import config`:
```
from config import VAR1
```
You can also mess with `sys.modules` to do this:
```
import config_test
import sys
sys.modules["config"] = config_test
# import modules that uses "import config" here, and they'll actually get config_test
```
But generally it's not a good idea to mess with `sys.modules`, and I don't think this case is any different. I would favor all of the other suggestions made over it.
|
If you want to mock an entire module just mock the import where the module is used.
`myfile.py`
```
import urllib
```
`test_myfile.py`
```
import mock
import unittest
class MyTest(unittest.TestCase):
@mock.patch('myfile.urllib')
def test_thing(self, urllib):
urllib.whatever.return_value = 4
```
|
23,900,878
|
Is it possible to mock a module in python using `unittest.mock`? I have a module named `config`, while running tests I want to mock it by another module `test_config`. how can I do that ? Thanks.
config.py:
```
CONF_VAR1 = "VAR1"
CONF_VAR2 = "VAR2"
```
test\_config.py:
```
CONF_VAR1 = "test_VAR1"
CONF_VAR2 = "test_VAR2"
```
All other modules read config variables from the `config` module. While running tests I want them to read config variables from `test_config` module instead.
|
2014/05/28
|
[
"https://Stackoverflow.com/questions/23900878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2085665/"
] |
**foo.py:**
```
import config
VAR1 = config.CONF_VAR1
def bar():
return VAR1
```
**test.py:**
```
import unittest
import unittest.mock as mock
import test_config
class Test(unittest.TestCase):
def test_one(self):
with mock.patch.dict('sys.modules', config=test_config):
import foo
self.assertEqual(foo.bar(), 'test_VAR1')
```
As you can see, the patch works even for code executed during `import foo`.
|
If you want to mock an entire module just mock the import where the module is used.
`myfile.py`
```
import urllib
```
`test_myfile.py`
```
import mock
import unittest
class MyTest(unittest.TestCase):
@mock.patch('myfile.urllib')
def test_thing(self, urllib):
urllib.whatever.return_value = 4
```
|
50,601,935
|
I'm debugging my python application using VSCode.
I have a main python file from where I start the debugger. I'm able to put breakpoints in this file, but if I want to put breakpoints in other files which are called by the main file, I get them as 'Unverified breakpoint' and the debugger ignores them.
How can I change my `launch.json` so that I'm able to put breakpoints on all the files in my project?
Here's my current `launch.json`:
```js
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}"
},
{
"name": "Python: Attach",
"type": "python",
"request": "attach",
"localRoot": "${workspaceFolder}",
"remoteRoot": "${workspaceFolder}",
"port": 3000,
"secret": "my_secret",
"host": "localhost"
},
{
"name": "Python: Terminal (integrated)",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "Python: Terminal (external)",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "externalTerminal"
},
{
"name": "Python: Django",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/manage.py",
"args": [
"runserver",
"--noreload",
"--nothreading"
],
"debugOptions": [
"RedirectOutput",
"Django"
]
},
{
"name": "Python: Flask (0.11.x or later)",
"type": "python",
"request": "launch",
"module": "flask",
"env": {
"FLASK_APP": "${workspaceFolder}/app.py"
},
"args": [
"run",
"--no-debugger",
"--no-reload"
]
},
{
"name": "Python: Module",
"type": "python",
"request": "launch",
"module": "nf.session.session"
},
{
"name": "Python: Pyramid",
"type": "python",
"request": "launch",
"args": [
"${workspaceFolder}/development.ini"
],
"debugOptions": [
"RedirectOutput",
"Pyramid"
]
},
{
"name": "Python: Watson",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/console.py",
"args": [
"dev",
"runserver",
"--noreload=True"
]
},
{
"name": "Python: All debug Options",
"type": "python",
"request": "launch",
"pythonPath": "${config:python.pythonPath}",
"program": "${file}",
"module": "module.name",
"env": {
"VAR1": "1",
"VAR2": "2"
},
"envFile": "${workspaceFolder}/.env",
"args": [
"arg1",
"arg2"
],
"debugOptions": [
"RedirectOutput"
]
}
]
}
```
Thanks
|
2018/05/30
|
[
"https://Stackoverflow.com/questions/50601935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1616955/"
] |
This may be a result of the "[justMyCode](https://code.visualstudio.com/docs/python/debugging#_justmycode)" configuration option, as it defaults to true.
While the description from the provider is "...restricts debugging to user-written code only. Set to False to also enable debugging of standard library functions.", I think what they mean is that anything in the site-packages for the current python environment will not be debugged.
I wanted to debug the Django stack to determine where an exception originating in my code was being eaten-up and not surfaced, so I did this to my VSCode's debugger's launch configuration:
```
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Control Center",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/manage.py",
"args": [
"runserver",
"--noreload"
],
"justMyCode": false, // I want to debug through Django framework code sometimes
"django": true
}
]
}
```
As soon as I did that, the debugger removed the "Unverified breakpoint" message and allowed me to debug files in the site-packages for the virtual environment I was working on, which contained Django.
I should note that the Breakpoint section of the debugger gave me a hint as to why the breakpoint was unverified and what tweaks to make to the launch config:
[](https://i.stack.imgur.com/H6KYn.png)
One additional note: that "--noreload" argument is also important. In debugging a different application using Flask, the debugger wouldn't stop at any breakpoint until I added it to the launch config.
|
The imports for the other modules were inside strings and executed with the `exec` function. That's why VSCode couldn't verify the breakpoints in the other files, as it didn't know that these other files were used by the main file.
|
16,068,532
|
So this happened to me:
```
thing = ModelClass()
thing.foo = bar()
thing.do_Stuff()
thing.save() #works fine
thing.decimal_field = decimal_value
thing.save() #error here
```
Traceback follows:
```
TypeError at /journey/collaborators/2/
unsupported operand type(s) for ** or pow(): 'Decimal' and 'str'
274. oH.save()
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save
460. self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save_base
543. for f in meta.local_fields if not isinstance(f, AutoField)]
File "/usr/lib/python2.7/dist-packages/django/db/models/fields/subclassing.py" in inner
28. return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/db/models/fields/__init__.py" in get_db_prep_save
787. self.max_digits, self.decimal_places)
File "/usr/lib/python2.7/dist-packages/django/db/backends/__init__.py" in value_to_db_decimal
705. return util.format_number(value, max_digits, decimal_places)
File "/usr/lib/python2.7/dist-packages/django/db/backends/util.py" in format_number
145. return u'%s' % str(value.quantize(decimal.Decimal(".1") ** decimal_places, context=context))
```
I've tried setting `decimal_value` to a `decimal.Decimal` instance, a float, an int and a string. It seems I can't save my model instance unless I leave that field blank.
Any ideas how to fix this?
|
2013/04/17
|
[
"https://Stackoverflow.com/questions/16068532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742082/"
] |
Linking turned on will throw away any methods, attributes, properties... that are not used at the time of compilation. This is a problem, for example, with the reflection approach.
Your problem (a very large package) can be solved by:
1. reducing library code - removing unused code manually and linking off - probably don't want to do that
2. using DataContractJsonSerializer - application using this class is significantly smaller
3. live with a 17 MB app; after all, it is still bearable :) And linking "Sdk assemblies only" could also help a little bit
|
Use the Json.Net provided by the [Xamarin Component Store](http://components.xamarin.com/view/json.net/). I have used this component for multiple projects and my Release builds with linking enabled come in between 4-8 MB.
|
16,068,532
|
So this happened to me:
```
thing = ModelClass()
thing.foo = bar()
thing.do_Stuff()
thing.save() #works fine
thing.decimal_field = decimal_value
thing.save() #error here
```
Traceback follows:
```
TypeError at /journey/collaborators/2/
unsupported operand type(s) for ** or pow(): 'Decimal' and 'str'
274. oH.save()
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save
460. self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save_base
543. for f in meta.local_fields if not isinstance(f, AutoField)]
File "/usr/lib/python2.7/dist-packages/django/db/models/fields/subclassing.py" in inner
28. return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/db/models/fields/__init__.py" in get_db_prep_save
787. self.max_digits, self.decimal_places)
File "/usr/lib/python2.7/dist-packages/django/db/backends/__init__.py" in value_to_db_decimal
705. return util.format_number(value, max_digits, decimal_places)
File "/usr/lib/python2.7/dist-packages/django/db/backends/util.py" in format_number
145. return u'%s' % str(value.quantize(decimal.Decimal(".1") ** decimal_places, context=context))
```
I've tried setting `decimal_value` to a `decimal.Decimal` instance, a float, an int and a string. It seems I can't save my model instance unless I leave that field blank.
Any ideas how to fix this?
|
2013/04/17
|
[
"https://Stackoverflow.com/questions/16068532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742082/"
] |
Linking turned on will throw away any methods, attributes, properties... that are not used at the time of compilation. This is a problem, for example, with the reflection approach.
Your problem (a very large package) can be solved by:
1. reducing library code - removing unused code manually and linking off - probably don't want to do that
2. using DataContractJsonSerializer - application using this class is significantly smaller
3. live with a 17 MB app; after all, it is still bearable :) And linking "Sdk assemblies only" could also help a little bit
|
This problem can be solved by adding the System assembly to the "Skip linking assemblies" list under Mono Android Options for the project. This added < 1 Mb to the size of the APK for me.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
The `print` statement in Python 2.x has some special syntax which would not be available for an ordinary function. For example you can use a trailing `,` to suppress the output of a final newline or you can use `>>` to redirect the output to a file. But all this wasn't convincing enough even to Guido van Rossum himself to keep it a statement -- he turned `print` into a function in Python 3.x.
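For reference, Python 3's `print()` spells both pieces of special syntax as keyword arguments: `end` replaces the trailing comma and `file` replaces `>>`. A sketch using an in-memory stream:

```python
import io

buf = io.StringIO()
print("redirected", file=buf)           # Python 2: print >> buf, "redirected"
print("no newline", end="", file=buf)   # Python 2: print "no newline",  (roughly)
output = buf.getvalue()
```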
|
I will throw in my thoughts on this:
In Python 2.x `print` is not a statement by mistake, or because printing to `stdout` is such a basic thing to do. Everything else is so thought-through or has at least understandable reasons that a mistake of that order would seem odd. If communicating with `stdout` had been considered so basic, communicating with `stdin` would have to be just as important, yet `input()` is a function.
If you look at the [list of reserved keywords](https://docs.python.org/2/reference/lexical_analysis.html#keywords) and the [list of statements](https://docs.python.org/2/reference/simple_stmts.html) which are not expressions, `print` clearly stands out which is another hint that there must be very specific reasons.
I think `print` *had* to be a statement and not an expression, to avoid a security breach in `input()`. Remember that `input()` in Python 2 evaluates whatever the user types into `stdin`. If the user typed `print a` and `a` held a list of all passwords, that would be quite catastrophic.
Apparently, the ability of `input()` to evaluate expressions was considered more important than `print` being a normal built-in function.
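To make the claim concrete: Python 2's `input()` behaved like `eval(raw_input())`, which the following Python 3 sketch simulates (the variable names are illustrative):

```python
passwords = ["hunter2", "s3cret"]


def py2_style_input(typed_text):
    """Simulate Python 2 input(): evaluate whatever the user typed."""
    return eval(typed_text)


# A "user" typing the name of an in-scope variable leaks its value
leaked = py2_style_input("passwords")
```

Because the typed text is evaluated as an expression, any reachable name (or arbitrary expression) the user types is executed, which is the security concern the answer describes.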
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
Because Guido has decided that he made a mistake. :)
It has since been corrected: try Python 3, which dedicates a [section of its release notes](http://docs.python.org/release/3.0.1/whatsnew/3.0.html#print-is-a-function) to describing the change to a function.
For the whole background, see [PEP 3105](https://www.python.org/dev/peps/pep-3105/) and the several links provided in its References section!
|
`print` **was** a statement in Python because it was a statement in ABC, the main inspiration for Python (although it was called `WRITE` there). That in turn probably had a statement instead of a function because it was a teaching language, and as such inspired by BASIC. Python, on the other hand, turned out to be more than a teaching language (although it's good for that too).
However, nowadays `print` **is** a function. Yes, in Python 2 as well, you can do
```
from __future__ import print_function
```
and you are all set. Works since Python 2.6.
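One practical payoff of `print` being a function is that it becomes a first-class value you can pass around or substitute. A sketch:

```python
def render(items, emit=print):
    """Emit each item; `emit` defaults to print but can be any callable."""
    for item in items:
        emit(item)


collected = []
render(["x", "y"], emit=collected.append)  # swap print for a list collector
```

This kind of substitution (here redirecting output into a list) is impossible with the Python 2 statement form.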
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
It is now a function in Python 3.
|
The `print` statement in Python 2.x has some special syntax which would not be available for an ordinary function. For example you can use a trailing `,` to suppress the output of a final newline or you can use `>>` to redirect the output to a file. But all this wasn't convincing enough even to Guido van Rossum himself to keep it a statement -- he turned `print` into a function in Python 3.x.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
Because Guido has decided that he made a mistake. :)
It has since been corrected: try Python 3, which dedicates a [section of its release notes](http://docs.python.org/release/3.0.1/whatsnew/3.0.html#print-is-a-function) to describing the change to a function.
For the whole background, see [PEP 3105](https://www.python.org/dev/peps/pep-3105/) and the several links provided in its References section!
|
An answer that draws from what I appreciate about the `print` statement, but not necessarily from the official Python history...
Python is, to some extent, a *scripting language*. Now, there are lots of definitions of "scripting language", but the one I'll use here is: a language designed for efficient use of short or interactive programs. Such languages tend to allow one-line programs without excessive boilerplate; make keyboard input easier (for instance, by avoiding excessive punctuation); and provide built-in syntax for common tasks (convenience at the possible expense of purity). In Python's case, printing a value is a very common thing to do, especially in interactive mode. Requiring `print` to be a function seems unnecessarily inconvenient here. There's a significantly lower risk of error with the special syntax that does the right thing 99% of the time.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
`print` **was** a statement in Python because it was a statement in ABC, the main inspiration for Python (although it was called `WRITE` there). That in turn probably had a statement instead of a function because it was a teaching language and as such inspired by BASIC. Python, on the other hand, turned out to be more than a teaching language (although it's good for that too).
However, nowadays `print` **is** a function. Yes, in Python 2 as well, you can do
```
from __future__ import print_function
```
and you are all set. Works since Python 2.6.
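Once `print` is a function (via the future import, or on Python 3), the statement's special syntax maps onto ordinary keyword arguments; a small sketch for illustration (the variable names here are only illustrative):

```python
import io

# With print as a function, the Python 2 statement's special syntax
# becomes keyword arguments:
#   trailing ','  ->  end=...
#   '>> f'        ->  file=f
buf = io.StringIO()
print("a", "b", sep="-", end="!\n", file=buf)
output = buf.getvalue()
```

Here `output` holds `"a-b!\n"`: the arguments joined by `sep`, terminated by `end`, written to `file` instead of stdout.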
|
The `print` statement in Python 2.x has some special syntax which would not be available for an ordinary function. For example you can use a trailing `,` to suppress the output of a final newline or you can use `>>` to redirect the output to a file. But all this wasn't convincing enough even to Guido van Rossum himself to keep it a statement -- he turned `print` into a function in Python 3.x.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
It is now a function in Python 3.
|
An answer that draws from what I appreciate about the `print` statement, but not necessarily from the official Python history...
Python is, to some extent, a *scripting language*. Now, there are lots of definitions of "scripting language", but the one I'll use here is: a language designed for efficient use of short or interactive programs. Such languages tend to allow one-line programs without excessive boilerplate; make keyboard input easier (for instance, by avoiding excessive punctuation); and provide built-in syntax for common tasks (convenience at the possible expense of purity). In Python's case, printing a value is a very common thing to do, especially in interactive mode. Requiring `print` to be a function seems unnecessarily inconvenient here. There's a significantly lower risk of error with the special syntax that does the right thing 99% of the time.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
Because Guido has decided that he made a mistake. :)
It has since been corrected: try Python 3, which dedicates a [section of its release notes](http://docs.python.org/release/3.0.1/whatsnew/3.0.html#print-is-a-function) to describing the change to a function.
For the whole background, see [PEP 3105](https://www.python.org/dev/peps/pep-3105/) and the several links provided in its References section!
|
It is now a function in Python 3.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
It is now a function in Python 3.
|
I will throw in my thoughts on this:
In Python 2.x `print` is not a statement by mistake, or because printing to `stdout` is such a basic thing to do. Everything else is so thought-through, or has at least understandable reasons, that a mistake of that order would seem odd. If communicating with `stdout` had been considered so basic, communicating with `stdin` would have to be just as important, yet `input()` is a function.
If you look at the [list of reserved keywords](https://docs.python.org/2/reference/lexical_analysis.html#keywords) and the [list of statements](https://docs.python.org/2/reference/simple_stmts.html) which are not expressions, `print` clearly stands out which is another hint that there must be very specific reasons.
I think `print` *had* to be a statement and not an expression, to avoid a security breach in `input()`. Remember that `input()` in Python 2 evaluates whatever the user types into `stdin`. If the user typed `print a` and `a` held a list of all passwords, that would be quite catastrophic.
Apparently, the ability of `input()` to evaluate expressions was considered more important than `print` being a normal built-in function.
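A rough sketch of the point, written in Python 3 (here `eval` stands in for Python 2's `input()`, and all names are purely illustrative):

```python
# Python 2's input() was roughly eval(raw_input()): it evaluated
# whatever the user typed as an expression.
def py2_style_input(typed):    # 'typed' stands in for the user's keystrokes
    return eval(typed)

a = ["hunter2"]                # imagine this holds passwords
leaked = py2_style_input("a")  # typing just 'a' leaks the whole list

# Because print was a *statement*, it could never appear in that
# expression context -- eval'ing it raises a SyntaxError:
try:
    eval("print 'a'")
    print_was_expression = True
except SyntaxError:
    print_was_expression = False
```

So `leaked` is the full password list, while the statement form of `print` is rejected at compile time inside `eval`.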
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
Because Guido has decided that he made a mistake. :)
It has since been corrected: try Python 3, which dedicates a [section of its release notes](http://docs.python.org/release/3.0.1/whatsnew/3.0.html#print-is-a-function) to describing the change to a function.
For the whole background, see [PEP 3105](https://www.python.org/dev/peps/pep-3105/) and the several links provided in its References section!
|
I will throw in my thoughts on this:
In Python 2.x `print` is not a statement by mistake, or because printing to `stdout` is such a basic thing to do. Everything else is so thought-through, or has at least understandable reasons, that a mistake of that order would seem odd. If communicating with `stdout` had been considered so basic, communicating with `stdin` would have to be just as important, yet `input()` is a function.
If you look at the [list of reserved keywords](https://docs.python.org/2/reference/lexical_analysis.html#keywords) and the [list of statements](https://docs.python.org/2/reference/simple_stmts.html) which are not expressions, `print` clearly stands out which is another hint that there must be very specific reasons.
I think `print` *had* to be a statement and not an expression, to avoid a security breach in `input()`. Remember that `input()` in Python 2 evaluates whatever the user types into `stdin`. If the user typed `print a` and `a` held a list of all passwords, that would be quite catastrophic.
Apparently, the ability of `input()` to evaluate expressions was considered more important than `print` being a normal built-in function.
|
4,413,912
|
Why is `print` a keyword in python and not a function?
|
2010/12/10
|
[
"https://Stackoverflow.com/questions/4413912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538442/"
] |
`print` **was** a statement in Python because it was a statement in ABC, the main inspiration for Python (although it was called `WRITE` there). That in turn probably had a statement instead of a function because it was a teaching language and as such inspired by BASIC. Python, on the other hand, turned out to be more than a teaching language (although it's good for that too).
However, nowadays `print` **is** a function. Yes, in Python 2 as well, you can do
```
from __future__ import print_function
```
and you are all set. Works since Python 2.6.
|
An answer that draws from what I appreciate about the `print` statement, but not necessarily from the official Python history...
Python is, to some extent, a *scripting language*. Now, there are lots of definitions of "scripting language", but the one I'll use here is: a language designed for efficient use of short or interactive programs. Such languages tend to allow one-line programs without excessive boilerplate; make keyboard input easier (for instance, by avoiding excessive punctuation); and provide built-in syntax for common tasks (convenience at the possible expense of purity). In Python's case, printing a value is a very common thing to do, especially in interactive mode. Requiring `print` to be a function seems unnecessarily inconvenient here. There's a significantly lower risk of error with the special syntax that does the right thing 99% of the time.
|
71,214,931
|
I want to access items from a new dictionary called `conversations` by implementing a for loop.
```
{" conversations": [
{"tag": "greeting",
"user": ["Hi", " What's your name?", " How are you?", "Hello", "Good day"],
"response": ["Hello, my name is Rosie. Nice to see you", "Good to see you again", " How can I help you?"],
"context_set": ""
},
{"tag": "Good-bye",
"user": ["Bye", "See you", "Goodbye"],
"response": ["Thanks for visiting our company", "Have a nice day", "Good-bye."]
},
{"tag": "thanks",
"user": ["Thanks", "Thank you", "That's helpful", "Appreciated your service" ],
"response": ["Glad to help!", "My pleasure", "You’re welcome."]
}
]
}
```
The code I use to load the dictionary in a notebook is
```
file_name = 'dialogue.txt'
with open(file_name, encoding='utf8') as f:
    for line in f:
        print(line.strip())
    dialogue_text = f.read()
```
This line of code does not return any results when trying to access the dictionary.
```
for k in dialogue_text:
    print(k)
```
My intention is to write this code by implementing tokenization and stemming, but it returned an error
```
words = []
labels = []
docs_x = []
docs_y = []
for conversation in dialogue_text["conversations"]:
    for user in dialogue_text["user"]:
        words = nltk.word_tokenize(user)
        words.extend(words)
        docs_x.append(words)
        docs_y.append(intent["tag"])
        if intent["tag"] not in labels:
            labels.append(intent["tag"])

words = [stemmer.stemWord(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
labels = sorted(labels)
```
Error Message:
```
TypeError Traceback (most recent call last)
<ipython-input-12-d42234f8e809> in <module>()
10 docs_y = []
11
---> 12 for conversation in dialogue_text["conversations"]:
13 for user in dialogue_text["user"]:
14 words = nltk.word_tokenize(user)
TypeError: string indices must be integers
```
What code should I write to resolve this issue?
|
2022/02/22
|
[
"https://Stackoverflow.com/questions/71214931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18273393/"
] |
You should use the `json` module to load in JSON data as opposed to reading in the file line-by-line. Whatever procedure you build yourself is likely to be fragile and less efficient.
Here is the looping structure that you're looking for:
```py
import json

with open('input.json') as input_file:
    data = json.load(input_file)

# Spaces were originally in the key name.
for conversation in data[' conversations']:
    for user_words in conversation['user']:
        # Do stuff with user_words ...
```
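A self-contained sketch of the tokenizing loop the asker was after, with the data inlined and `str.split` standing in for `nltk.word_tokenize` (an assumption here, since nltk may not be installed):

```python
data = {" conversations": [
    {"tag": "greeting",
     "user": ["Hi", "How are you?"]},
    {"tag": "Good-bye",
     "user": ["Bye"]},
]}

words, docs_x, docs_y, labels = [], [], [], []
for conversation in data[" conversations"]:
    for sentence in conversation["user"]:
        tokens = sentence.split()   # stand-in for nltk.word_tokenize
        words.extend(tokens)
        docs_x.append(tokens)
        docs_y.append(conversation["tag"])
    if conversation["tag"] not in labels:
        labels.append(conversation["tag"])
```

Note the inner loop iterates over `conversation["user"]` (not `dialogue_text["user"]`), which is what the original traceback was complaining about.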
|
Try this out:
```
import json
dialogue_text = json.load(open("dialogue.txt", encoding='utf8'))
for conversation in dialogue_text[" conversations"]:
    for user in conversation['user']:
        print(user)
```
Output:
```none
Hi
What's your name?
How are you?
Hello
Good day
Bye
See you
Goodbye
Thanks
Thank you
That's helpful
Appreciated your service
```
|
72,393,418
|
I am working with Python and NumPy. I have a txt file of space-separated integers, and each row of the file must become a row in an array or dataframe. The problem is that not every row has the same size. I know the size I want the rows to have, and I want to set the missing values to zero. As the file is not comma-separated, I can't find a way to do that. I was wondering if there is a way to find the length of each row of my array and add the appropriate number of zeros. Is that possible? Any other ideas? I am new to the numpy library, as you can see.
|
2022/05/26
|
[
"https://Stackoverflow.com/questions/72393418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19207300/"
] |
A 403 Forbidden response is often correlated with an SSL/TLS certificate verification failure. Please try `requests.get` with `verify=False`, as follows.
Fixing the SSL certificate issue
```
requests.get("https://www.example.com/insert.php?network=testnet&id=1245300&c=2803824&lat=7555457", verify=False)
```
Fixing the TLS certificate issue
Check out my [answer](https://stackoverflow.com/questions/72347165/python-requests-403-forbidden-error-while-downlaoding-pdf-file-from-www-resear/72349915#72349915) related to the TLS certificate verification fix.
|
Somehow I overcomplicated it; the absolute minimum below works.
```
import requests
headers = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36' }
response = requests.get("http://www.example.com/insert.php?network=testnet&id=1245200&c=2803824&lat=7555457", headers=headers)
print(response.text)
```
|
62,801,244
|
I am working on a project where I have to use the mouse as a paintbrush. I have used the `cv2.setMouseCallback()` function, but it returned the following error.
Here is the relevant part of my code:
```
import cv2
import numpy as np
# mouse callback function
def draw_circle(event,x,y,flags,param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img,(x,y),100,(255,0,0),-1)

# Create a black image, a window and bind the function to window
img = np.zeros((512,512,3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image',draw_circle)

while(1):
    cv2.imshow('image',img)
    if cv2.waitKey(20) & 0xFF == 27:
        break

cv2.destroyAllWindows()
```
When I run this, it returns the following error:
error Traceback
```
(most recent call last)
<ipython-input-1-640e54baca5f> in <module>
10 img = np.zeros((512,512,3), np.uint8)
11 cv2.namedWindow('image')
---> 12 cv2.setMouseCallback('image',draw_circle)
13
14 while(1):
error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window_QT.cpp:717: error: (-27:Null pointer) NULL window handler in function 'cvSetMouseCallback'
```
My python version - 3.8
Operating System - ubuntu 20
|
2020/07/08
|
[
"https://Stackoverflow.com/questions/62801244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7976921/"
] |
The error is resolved. I removed the previously installed OpenCV
(which was installed using `pip install opencv-python`) and reinstalled it using
`sudo apt install libopencv-dev python3-opencv`
|
For me, it also worked with a pip3 installation.
I just made a fresh virtual env and installed `opencv-python` and `opencv-contrib-python`.
|
62,801,244
|
I am working on a project where I have to use the mouse as a paintbrush. I have used the `cv2.setMouseCallback()` function, but it returned the following error.
Here is the relevant part of my code:
```
import cv2
import numpy as np
# mouse callback function
def draw_circle(event,x,y,flags,param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img,(x,y),100,(255,0,0),-1)

# Create a black image, a window and bind the function to window
img = np.zeros((512,512,3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image',draw_circle)

while(1):
    cv2.imshow('image',img)
    if cv2.waitKey(20) & 0xFF == 27:
        break

cv2.destroyAllWindows()
```
When I run this, it returns the following error:
error Traceback
```
(most recent call last)
<ipython-input-1-640e54baca5f> in <module>
10 img = np.zeros((512,512,3), np.uint8)
11 cv2.namedWindow('image')
---> 12 cv2.setMouseCallback('image',draw_circle)
13
14 while(1):
error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window_QT.cpp:717: error: (-27:Null pointer) NULL window handler in function 'cvSetMouseCallback'
```
My python version - 3.8
Operating System - ubuntu 20
|
2020/07/08
|
[
"https://Stackoverflow.com/questions/62801244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7976921/"
] |
The error is resolved. I removed the previously installed OpenCV
(which was installed using `pip install opencv-python`) and reinstalled it using
`sudo apt install libopencv-dev python3-opencv`
|
Adding the same `namedWindow` before setting the callback worked for me:
```
cv2.namedWindow("video")
cv2.imshow("video", image)
cv2.setMouseCallback("video", drawLine)
```
|
62,801,244
|
I am working on a project where I have to use the mouse as a paintbrush. I have used the `cv2.setMouseCallback()` function, but it returned the following error.
Here is the relevant part of my code:
```
import cv2
import numpy as np
# mouse callback function
def draw_circle(event,x,y,flags,param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img,(x,y),100,(255,0,0),-1)

# Create a black image, a window and bind the function to window
img = np.zeros((512,512,3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image',draw_circle)

while(1):
    cv2.imshow('image',img)
    if cv2.waitKey(20) & 0xFF == 27:
        break

cv2.destroyAllWindows()
```
When I run this, it returns the following error:
error Traceback
```
(most recent call last)
<ipython-input-1-640e54baca5f> in <module>
10 img = np.zeros((512,512,3), np.uint8)
11 cv2.namedWindow('image')
---> 12 cv2.setMouseCallback('image',draw_circle)
13
14 while(1):
error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window_QT.cpp:717: error: (-27:Null pointer) NULL window handler in function 'cvSetMouseCallback'
```
My python version - 3.8
Operating System - ubuntu 20
|
2020/07/08
|
[
"https://Stackoverflow.com/questions/62801244",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7976921/"
] |
Adding the same `namedWindow` before setting the callback worked for me:
```
cv2.namedWindow("video")
cv2.imshow("video", image)
cv2.setMouseCallback("video", drawLine)
```
|
For me, it also worked with a pip3 installation.
I just made a fresh virtual env and installed `opencv-python` and `opencv-contrib-python`.
|
30,013,383
|
I want to be able to get from `[2, 3]` and `3` to `[2, 3, 2, 3, 2, 3]`.
(Like `3 * a` in python where a is a list)
Is there a quick and efficient way to do this in Javascript ?
I do this with a `for` loop, but it lacks readability and, I guess, efficiency.
I would like for it to work with every types of element.
For instance, I used the code :
```
function dup (n, obj) {
    var ret = [];
    for (var i = 0; i < n; i++)
    {
        ret[i] = obj;
    }
    return (ret);
}
```
The problem is that it doesn't work with arrays or objects, only with primitive values.
Do I have to make conditions, or is there a clean way to duplicate a variable ?
|
2015/05/03
|
[
"https://Stackoverflow.com/questions/30013383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4859055/"
] |
You can use this (very readable :P) function:
```
function repeat(arr, n){
    var a = [];
    for (var i=0;i<n;[i++].push.apply(a,arr));
    return a;
}
```
`repeat([2,3], 3)` returns an array `[2, 3, 2, 3, 2, 3]`.
Basically, it's this:
```
function repeat(array, times){
    var newArray = [];
    for (var i=0; i < times; i++){
        Array.prototype.push.apply(newArray, array);
    }
    return newArray;
}
```
we push `array`'s values onto `newArray` `times` times. To be able to push an array as its values (so, `push(2, 3)` instead of `push([2, 3])`) I used apply, which takes an array and passes it to push as a list of arguments.
Or, extend the prototype:
```
Array.prototype.repeat = function(n){
    var a = [];
    for (var i=0;i<n;[i++].push.apply(a,this));
    return a;
}
```
`[2, 3].repeat(3)` returns an array `[2, 3, 2, 3, 2, 3]`.
If you want something reasonably readable, you can use [concat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat) within a loop:
```
function repeat(array, n){
    var newArray = [];
    for (var i = 0; i < n; i++){
        newArray = newArray.concat(array);
    }
    return newArray;
}
```
|
There is not. This is a very Pythonic idea.
You could devise a function to do it, but I doubt there is any computational benefit because you would just be using a loop or some weird misuse of string functions.
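For reference, the Python behaviour being alluded to is plain sequence repetition with `*`:

```python
a = [2, 3]
repeated = 3 * a   # sequence repetition, not element-wise multiplication
```

This is the one-liner the question is trying to reproduce in JavaScript; `repeated` is `[2, 3, 2, 3, 2, 3]`.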
|
28,314,014
|
I have written this code in python
```
import os

files = os.listdir(".")
x = ""
for file in files:
    x += "\"" + file + "\" "

f = open("files.txt", "w")
f.write(x)
f.close()
```
this works and I get a single string with all the files in a directory as `"foo.txt" "bar.txt" "baz.txt"`
but I don't like the for loop. Can't I write the code more succinctly.... like those python pros?
I tried
`"\"".join(files)`
but how do I get the `"` the end of the file name as well?
|
2015/02/04
|
[
"https://Stackoverflow.com/questions/28314014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/337134/"
] |
1. You can write string literals using both `'single'` and `"double-quotes"`; you don't have to escape one inside the other.
2. You can use the `format` function to apply quotes before you `join`.
3. You should use the `with` statement when opening files to save you from having to `close` it explicitly.
Thus:
```
import os

with open("files.txt", "w") as f:
    f.write(' '.join('"{}"'.format(file) for file in os.listdir('.')))
```
|
```
import os
files = os.listdir(".")
x = " ".join('"%s"'%f for f in files)
with open("files.txt", "w") as f:
    f.write(x)
```
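As a side note (not part of the original answer): if filenames can themselves contain spaces or quotes, `shlex.quote` from the standard library is the robust way to quote them:

```python
import shlex

files = ["foo.txt", "my file.txt"]
x = " ".join(shlex.quote(f) for f in files)
```

Plain names pass through unchanged, while names needing quoting are wrapped, so `x` becomes `foo.txt 'my file.txt'`.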
|
28,314,014
|
I have written this code in python
```
import os

files = os.listdir(".")
x = ""
for file in files:
    x += "\"" + file + "\" "

f = open("files.txt", "w")
f.write(x)
f.close()
```
this works and I get a single string with all the files in a directory as `"foo.txt" "bar.txt" "baz.txt"`
but I don't like the for loop. Can't I write the code more succinctly.... like those python pros?
I tried
`"\"".join(files)`
but how do I get the `"` the end of the file name as well?
|
2015/02/04
|
[
"https://Stackoverflow.com/questions/28314014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/337134/"
] |
```
import os
files = os.listdir(".")
x = " ".join('"%s"'%f for f in files)
with open("files.txt", "w") as f:
    f.write(x)
```
|
You can use `with` to write the file.
```
import os
files = os.listdir('.')
x = ' '.join(['"%s"'%f for f in files])
with open("files.txt", "w") as f:
    f.write(x)
```
|
28,314,014
|
I have written this code in python
```
import os

files = os.listdir(".")
x = ""
for file in files:
    x += "\"" + file + "\" "

f = open("files.txt", "w")
f.write(x)
f.close()
```
this works and I get a single string with all the files in a directory as `"foo.txt" "bar.txt" "baz.txt"`
but I don't like the for loop. Can't I write the code more succinctly.... like those python pros?
I tried
`"\"".join(files)`
but how do I get the `"` the end of the file name as well?
|
2015/02/04
|
[
"https://Stackoverflow.com/questions/28314014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/337134/"
] |
1. You can write string literals using both `'single'` and `"double-quotes"`; you don't have to escape one inside the other.
2. You can use the `format` function to apply quotes before you `join`.
3. You should use the `with` statement when opening files to save you from having to `close` it explicitly.
Thus:
```
import os

with open("files.txt", "w") as f:
    f.write(' '.join('"{}"'.format(file) for file in os.listdir('.')))
```
|
You can use `with` to write the file.
```
import os
files = os.listdir('.')
x = ' '.join(['"%s"'%f for f in files])
with open("files.txt", "w") as f:
    f.write(x)
```
|
58,223,422
|
I'm currently trying to find an effective way of running a machine learning task over a set amount of cores using `tensorflow`. From the information I found there were two main approaches to doing this.
The first of which was using the two tensorflow variables intra\_op\_parallelism\_threads and inter\_op\_parallelism\_threads and then creating a session using this configuration.
The second of which is using `OpenMP`. Setting the environment variable `OMP_NUM_THREADS` allows for manipulation of the amount of threads spawned for the process.
My problem arose when I discovered that installing tensorflow through conda and through pip gave two different environments. In the `conda install` modifying the `OpenMP` environment variables seemed to change the way the process was parallelised, whilst in the 'pip environment' the only thing which appeared to change it was the inter/intra config variables which I mentioned earlier.
This led to some difficulty in trying to compare the two installs for benchmarking reasons. If I set `OMP_NUM_THREADS` equal to 1 and inter/intra to 16 on a 48 core processor on the `conda install` I only get about 200% CPU usage as most of the threads are idle at any given time.
```py
omp_threads = 1
mkl_threads = 1
os.environ["OMP_NUM_THREADS"] = str(omp_threads)
os.environ["MKL_NUM_THREADS"] = str(mkl_threads)
config = tf.ConfigProto()
config.intra_op_parallelism_threads = 16
config.inter_op_parallelism_threads = 16
session = tf.Session(config=config)
K.set_session(session)
```
I would expect this code to spawn 32 threads, most of which would be utilized at any given time; in fact it spawns 32 threads and only 4-5 are being used at once.
Has anyone run into anything similar before when using tensorflow?
Why is it that installing through conda and through pip seems to give two different environments?
Is there any way of having comparable performance on the two installs by using some combination of the two methods discussed earlier?
Finally is there maybe an even better way to limit python to a specific number of cores?
|
2019/10/03
|
[
"https://Stackoverflow.com/questions/58223422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8776042/"
] |
**first check the index labels and columns**
```
fact.index
fact.columns
```
If you need to convert the index to columns, use:
```
fact.reset_index()
```
**Then you can use:**
```
fact.groupby(['store_id', 'month'])['quantity'].mean()
```
Output:
```
store_id month
174 8 1
354 7 1
8 1
9 1
Name: quantity, dtype: int64
```
**or better:**
```
fact['mean']=fact.groupby(['store_id', 'month'])['quantity'].transform('mean')
print(fact)
store_id sku_id date quantity city city.1 category month \
0 354 31253 2017-08-08 1 Paris Paris Shirt 8
1 354 31253 2017-08-19 1 Paris Paris Shirt 8
2 354 31258 2017-07-30 1 Paris Paris Shirt 7
3 354 277171 2017-09-28 1 Paris Paris Shirt 9
4 174 295953 2017-08-16 1 London London Shirt 8
mean
0 1
1 1
2 1
3 1
4 1
```
|
You need to add **`as_index=True`**, e.g.:

```
count_in = df.groupby(['time_in', 'id'], as_index=True)['time_in'].count()
```
|
58,791,530
|
I am making a poem generator in Python, and I am currently working on how poems are outputted to the user. I would like to make it so every line that is outputted will have a comma following it. I thought I could achieve this easily by using the .join function, but it seems to attach to letters rather than the end of the string stored in the list.
```
line1[-1]=', '.join(line1[-1])
print(*line1)
print(*line2)
```
Will output something like:
```
Moonlight is w, i, n, t, e, r
The waters scent fallen snow
```
In hindsight, I should have known join was the wrong function to use, but I'm still lost. I tried .append, but as the items in my list are strings, I get the error message "'str' object has no attribute 'append'."
I realize another fix to all this might be something like:
```
print(*line1, ",")
```
But I'm working on a function that will decide whether the correct punctuation needs to be "!", ",", "?", or "--", so I am really hoping to find something that can be attached to the string in the list itself, instead of tacked on during the output printing process.
|
2019/11/10
|
[
"https://Stackoverflow.com/questions/58791530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12334091/"
] |
Just use the `+` or `+=` operator for strings, for example:
```
trailing_punct = ',' # can be '!', '?', etc.
line1 += trailing_punct
# or
line1 = line1 + trailing_punct
```
`+=` can be used to modify the string "in place" (note that under the covers, it does create a new object and assign to it, so `id(line1)` will have changed after this operation).
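A minimal sketch of the punctuation-choosing helper the question mentions (the function name and the mood-to-mark rules here are purely illustrative assumptions):

```python
def punctuate(line, mood="neutral"):
    # Hypothetical mapping from a 'mood' to the trailing mark.
    marks = {"excited": "!", "question": "?", "pause": "--", "neutral": ","}
    return line + marks.get(mood, ",")

line1 = punctuate("Moonlight is winter")
```

`line1` ends up as `"Moonlight is winter,"`, and swapping the `mood` argument swaps the mark, so the decision logic stays in one place.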
|
It seems your `line1` and `line2` are lists of strings, so I'll start by assuming that:
```
line1 = ["Moonlight", "is", "winter"]
line2 = ["The", "waters", "scent", "fallen", "snow"]
```
You are using the default behaviour of the `print` function when given several string arguments to add the space between words: `print(*line1)` is equivalent to calling `print(line1[0], line1[1], ...)` (see [\*args and \*\*kwargs](https://stackoverflow.com/questions/3394835/use-of-args-and-kwargs)).
That makes adding the line separator to the list of words of the line insufficient, as it will have a space before it:
```
print("\n--//--\nUsing print default space between given arguments:")
line_separator = ","
line1.append(line_separator)
print(*line1)
print(*line2)
```
Results in:
```
--//--
Using print default space between given arguments:
Moonlight is winter ,
The waters scent fallen snow
```
What you want to do can be done by joining the list of words into a single string, and then joining the list of lines with the separator you want:
```
print("\n--//--\nPrinting a single string:")
line1_str = ' '.join(line1)
line2_str = ' '.join(line2)
line_separator = ",\n" # notice the new line \n
lines = line_separator.join([line1_str, line2_str])
print(lines)
```
Results in
```
--//--
Printing a single string:
Moonlight is winter,
The waters scent fallen snow
```
Consider using a list of lines for easier expansion, and maybe a list of separators to be used in order for each line.
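That closing suggestion might look like this (the per-line separator list is an assumption for illustration):

```python
lines = [["Moonlight", "is", "winter"],
         ["The", "waters", "scent", "fallen", "snow"]]
separators = [",", "."]   # one punctuation mark per line

poem = "\n".join(" ".join(words) + sep
                 for words, sep in zip(lines, separators))
```

Each line is joined from its words, given its own trailing mark, and the lines are then joined with newlines into one printable string.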
|
18,788,493
|
I would like to extract a filename from a URL in R. For now I do it as follows, but maybe it can be done more concisely, like in Python, assuming `path` is just a string.
```
path="http://www.exanple.com/foo/bar/fooXbar.xls"
```
in R:
```
tail(strsplit(path,"[/]")[[1]],1)
```
in Python:
```
path.split("/")[-1:]
```
Maybe some sub, gsub solution?
|
2013/09/13
|
[
"https://Stackoverflow.com/questions/18788493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/953553/"
] |
There's a function for that...
```
basename(path)
[1] "fooXbar.xls"
```
|
@SimonO101 has the most robust answer IMO, but some other options:
Since regular expressions are greedy, you can use that to your advantage
```
sub('.*/', '', path)
# [1] "fooXbar.xls"
```
Also, you shouldn't need the `[]` around the `/` in your `strsplit`.
```
> tail(strsplit(path,"/")[[1]],1)
[1] "fooXbar.xls"
```
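For the Python side of the comparison, the standard library also has a ready-made function, so no manual splitting is needed:

```python
import os.path

path = "http://www.exanple.com/foo/bar/fooXbar.xls"
filename = os.path.basename(path)
```

`os.path.basename` mirrors R's `basename()` here, returning `"fooXbar.xls"`.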
|
68,608,899
|
I want python to be printing a bunch of numbers like 1 to 10000. Then I want to decide when to stop the numbers from printing and the last number printed will be my number. something like.
```
for item in range(10000):
    print(item)

number = input("")
```
But the problem is that it waits for me to place the input and then continues the loop. I want it to be looping until I say so.
I'll appreciate your help. Thanks.
|
2021/08/01
|
[
"https://Stackoverflow.com/questions/68608899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13472873/"
] |
Catch the `KeyboardInterrupt` exception that is raised when you interrupt the code with `Ctrl`+`C`:
```
import time
try:
    for item in range(10000):
        print(item)
        time.sleep(1)
except KeyboardInterrupt:
    print(f'Last item: {item}')
```
|
You can use the keyboard module:
```
import keyboard # using module keyboard
i = 0
while i < 10000:  # making a loop
    try:  # used try so that if user pressed other than the given key error will not be shown
        print(i)
        if keyboard.is_pressed('q'):  # if key 'q' is pressed
            print('Last number: ', i)
            break  # finishing the loop
        else:
            i += 1
    except Exception:
        break
```
|
22,081,361
|
I'm wondering if there's a way to fill under a pyplot curve with a vertical gradient, like in this quick mockup:

I found this hack on StackOverflow, and I don't mind the polygons if I could figure out how to make the color map vertical: [How to fill rainbow color under a curve in Python matplotlib](https://stackoverflow.com/questions/18215276/how-to-fill-rainbow-color-under-a-curve-in-python-matplotlib)
|
2014/02/27
|
[
"https://Stackoverflow.com/questions/22081361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2774479/"
] |
There may be a better way, but here goes:
```
from matplotlib import pyplot as plt
x = range(10)
y = range(10)
z = [[z] * 10 for z in range(10)]
num_bars = 100 # more bars = smoother gradient
plt.contourf(x, y, z, num_bars)
background_color = 'w'
plt.fill_between(x, y, y2=max(y), color=background_color)
plt.show()
```
Shows:

|
There is an alternative solution closer to the sketch in the question. It's given on Henry Barthes' blog <http://pradhanphy.blogspot.com/2014/06/filling-between-curves-with-color.html>.
This applies an imshow to each of the patches, I've copied the code in case the link changes,
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
xx=np.arange(0,10,0.01)
yy=xx*np.exp(-xx)
path = Path(np.array([xx,yy]).transpose())
patch = PathPatch(path, facecolor='none')
plt.gca().add_patch(patch)
im = plt.imshow(xx.reshape(yy.size,1),
cmap=plt.cm.Reds,
interpolation="bicubic",
origin='lower',
extent=[0,10,-0.0,0.40],
aspect="auto",
clip_path=patch,
clip_on=True)
plt.show()
```
|
53,494,637
|
I'm trying to build a Flask app that has Kafka as an interface. I used a Python connector, [kafka-python](https://kafka-python.readthedocs.io/en/master/index.html) and a Docker image for Kafka, [spotify/kafkaproxy](https://hub.docker.com/r/spotify/kafkaproxy/) .
Below is the docker-compose file.
```
version: '3.3'
services:
kafka:
image: spotify/kafkaproxy
container_name: kafka_dev
ports:
- '9092:9092'
- '2181:2181'
environment:
- ADVERTISED_HOST=0.0.0.0
- ADVERTISED_PORT=9092
- CONSUMER_THREADS=1
- TOPICS=PROFILE_CREATED,IMG_RATED
- ZK_CONNECT=kafka7zookeeper:2181/root/path
flaskapp:
build: ./flask-app
container_name: flask_dev
ports:
- '9000:5000'
volumes:
- ./flask-app:/app
depends_on:
- kafka
```
Below is the Python snippet I used to connect to Kafka. Here, I used the Kafka container's alias `kafka` to connect, as Docker would take care of mapping the alias to its IP address.
```
from kafka import KafkaConsumer, KafkaProducer
TOPICS = ['PROFILE_CREATED', 'IMG_RATED']
BOOTSTRAP_SERVERS = ['kafka:9092']
consumer = KafkaConsumer(TOPICS, bootstrap_servers=BOOTSTRAP_SERVERS)
```
I got a `NoBrokersAvailable` error. From this, I could understand that the Flask app could not find the Kafka server.
```
Traceback (most recent call last):
File "./app.py", line 11, in <module>
consumer = KafkaConsumer("PROFILE_CREATED", bootstrap_servers=BOOTSTRAP_SERVERS)
File "/usr/local/lib/python3.6/site-packages/kafka/consumer/group.py", line 340, in __init__
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 219, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 819, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
```
**Other Observations:**
1. I was able to run `ping kafka` from the Flask container and get packets from the Kafka container.
2. When I run the Flask app locally, trying to connect to the Kafka container by setting `BOOTSTRAP_SERVERS = ['localhost:9092']`, it works fine.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53494637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4306852/"
] |
**UPDATE**
As mentioned by cricket\_007, given that you are using the docker-compose provided below, you should use `kafka:29092` to connect to Kafka from another container. So your code would look like this:
```
from kafka import KafkaConsumer, KafkaProducer
TOPICS = ['PROFILE_CREATED', 'IMG_RATED']
BOOTSTRAP_SERVERS = ['kafka:29092']
consumer = KafkaConsumer(TOPICS, bootstrap_servers=BOOTSTRAP_SERVERS)
```
**END UPDATE**
I would recommend you use the Kafka images from [Confluent Inc](https://www.confluent.io/about/#about_confluent); they have all sorts of example setups using docker-compose that are ready to use, and they are always updating them.
Try this out:
```
---
version: '2'
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-kafka:latest
depends_on:
- zookeeper
ports:
- 9092:9092
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
flaskapp:
build: ./flask-app
container_name: flask_dev
ports:
- '9000:5000'
volumes:
- ./flask-app:/app
```
I used this [docker-compose.yml](https://github.com/confluentinc/cp-docker-images/blob/5.0.1-post/examples/kafka-single-node/docker-compose.yml) and added your service on top
Please note that:
>
> The config used here exposes port 9092 for *external* connections to the broker i.e. those from *outside* the docker network. This could be from the host machine running docker, or maybe further afield if you've got a more complicated setup. If the latter is true, you will need to change the value 'localhost' in KAFKA\_ADVERTISED\_LISTENERS to one that is resolvable to the docker host from those remote clients
>
>
>
Make sure you check out the other examples; they may be useful, especially when moving to production environments: <https://github.com/confluentinc/cp-docker-images/tree/5.0.1-post/examples>
Also worth checking:
It seems that you need to specify the api\_version to avoid this error. For more details check [here](https://github.com/dpkp/kafka-python/issues/1308#issuecomment-355430042).
>
> Version 1.3.5 of this library (which is latest on pypy) only lists certain API versions 0.8.0 to 0.10.1. So unless you explicitly specify api\_version to be (0, 10, 1) the client library's attempt to discover the version will cause a NoBrokersAvailable error.
>
>
>
```
producer = KafkaProducer(
bootstrap_servers=URL,
client_id=CLIENT_ID,
value_serializer=JsonSerializer.serialize,
api_version=(0, 10, 1)
)
```
This should work; interestingly enough, setting the api\_version accidentally fixes the issue, according to this:
>
> When you set api\_version the client will not attempt to probe brokers for version information. So it is the probe operation that is failing. One large difference between the version probe connections and the general connections is that the former only attempts to connect on a single interface per connection (per broker), where as the latter -- general operation -- will cycle through all interfaces continually until a connection succeeds. #1411 fixes this by switching the version probe logic to attempt a connection on all found interfaces.
>
>
>
The actual issue is described [here](https://github.com/dpkp/kafka-python/issues/1308#issuecomment-371532689)
|
I managed to get this up-and-running using a [network](https://docs.docker.com/compose/networking/) named `stream_net` between all services.
```
# for local development
version: "3.7"
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
networks:
- stream_net
kafka:
image: confluentinc/cp-kafka:latest
depends_on:
- zookeeper
ports:
- 9092:9092
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
networks:
- stream_net
flaskapp:
build: ./flask-app
container_name: flask_dev
ports:
- "9000:5000"
volumes:
- ./flask-app:/app
networks:
- stream_net
depends_on:
- kafka
networks:
stream_net:
```
* connection from outside the containers on `localhost:9092`
* connection within the network on `kafka:29092`
Of course it is strange to put containers that are already running together into an explicitly named network. But this way the containers can be addressed by their actual names. Maybe someone can explain exactly how this works, or it helps someone else to understand the core of the problem and to solve it properly.
|
65,734,652
|
I open a binary file with Python3 and want to print byte-by-byte in hex.
However, all online resources only mention printing a "byte array" in hex.
Please tell me how to print just a single byte, thanks.
```py
#!/usr/bin/env python3
if __name__ == "__main__":
with open("./datasets/data.bin", 'rb') as file:
byte = file.read(1)
while byte:
print(byte) # how to print hex instead of ascii?
byte = file.read(1)
```
|
2021/01/15
|
[
"https://Stackoverflow.com/questions/65734652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1417929/"
] |
Try this one:
```py
print(hex(byte[0]))
```
|
Using an `f-string` like this just prints two hex digits for each byte:
```py
print(f'{byte[0]:02x}')
```
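If you don't need to stream one byte at a time, `bytes.hex()` formats a whole buffer in one call; a small sketch (the literal stands in for the file contents, and the separator argument needs Python 3.8+):

```python
data = bytes([0x00, 0xFF, 0x10, 0xAB])  # pretend this came from file.read()

# one lowercase hex pair per byte, space-separated (Python 3.8+)
print(data.hex(' '))  # 00 ff 10 ab

# or byte by byte with a format spec, as in the loop above
for b in data:
    print(f'{b:02x}')
```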
|
26,897,208
|
```
from pythonds.basic.stack import Stack
rStack = Stack()
def toStr(n,base):
convertString = "0123456789ABCDEF"
while n > 0:
if n < base:
rStack.push(convertString[n])
else:
rStack.push(convertString[n % base])
n = n // base
res = ""
while not rStack.isEmpty():
res = res + str(rStack.pop())
return res
print(toStr(1345,2))
```
I'm referring to [this tutorial](http://interactivepython.org/runestone/static/pythonds/Recursion/StackFramesImplementingRecursion.html) and also pasted the code above. The tutorial says the function is recursive but I don't see a recursive call anywhere, just a while loop. What am I missing?
|
2014/11/12
|
[
"https://Stackoverflow.com/questions/26897208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3448561/"
] |
You are right that this particular function is **not** recursive. However, the context is, that on the previous slide there was a recursive function, and in this one they want to show a glimpse of how it *behaves* internally. They later say:
>
> The previous example [i.e. the one in question - B.] gives us some insight into how Python implements a recursive function call.
>
>
>
So, yes, the title is misleading; it should rather be *Expanding a recursive function* or *Imitating recursive function behavior with a stack* or something similar.
One may say that this function employs a recursive approach/strategy, in some sense, to the problem being solved, but it is not recursive itself.
|
Because you're using a stack structure.
If you consider how function calling is implemented, recursion is essentially an easy way to get the compiler to manage a stack of invocations for you.
This function does all the stack handling manually, but it is still conceptually a recursive function, just one where the stack management is done manually instead of letting the compiler do it.
|
26,897,208
|
```
from pythonds.basic.stack import Stack
rStack = Stack()
def toStr(n,base):
convertString = "0123456789ABCDEF"
while n > 0:
if n < base:
rStack.push(convertString[n])
else:
rStack.push(convertString[n % base])
n = n // base
res = ""
while not rStack.isEmpty():
res = res + str(rStack.pop())
return res
print(toStr(1345,2))
```
I'm referring to [this tutorial](http://interactivepython.org/runestone/static/pythonds/Recursion/StackFramesImplementingRecursion.html) and also pasted the code above. The tutorial says the function is recursive but I don't see a recursive call anywhere, just a while loop. What am I missing?
|
2014/11/12
|
[
"https://Stackoverflow.com/questions/26897208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3448561/"
] |
A *recursive algorithm*, by definition, [is a method where the solution to a problem depends on solutions to smaller instances of the same problem](https://en.wikipedia.org/wiki/Recursion_%28computer_science%29).
Here, the problem is to convert a number to a string in a given notation.
The "stockpiling" of data the function does actually looks like this:
```
push(d1)
push(d2)
...
push(dn-1)
push(dn)
res+=pop(dn)
res+=pop(dn-1)
...
res+=pop(d2)
res+=pop(d1)
```
which is effectively:
```
def pushpop():
push(dx)
pushpop(dx+1...dn)
res+=pop(dx)
```
I.e. *a step that processes a specific chunk of data **encloses all the steps that process the rest of the data*** (with each chunk processed in the same way).
It can be argued if the *function* is recursive (since they tend to apply the term to subroutines in a narrower sense), but the *algorithm* it implements definitely is.
---
For you to better feel the difference, here's an *iterative* solution to the same problem:
```
def toStr(n,base):
charmap = "0123456789ABCDEF"
res=''
while n > 0:
res = charmap[n % base] + res
n = n // base
return res
```
As you can see, this method has a much lower memory footprint, as it doesn't *stockpile* tasks. This is the difference: an iterative algorithm performs each step *using the same instance of the state* by *mutating* it, while a recursive one *creates a new instance for each step*, necessarily stockpiling them if the old ones are still needed.
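For comparison, here is a genuinely recursive version of the same conversion (a sketch, not from the tutorial), where the call stack does the job of the explicit `Stack`:

```python
def to_str(n, base):
    """Convert a positive integer n to a string in the given base, recursively."""
    convert_string = "0123456789ABCDEF"
    if n < base:
        return convert_string[n]  # base case: a single digit
    # recursive case: convert the higher-order digits first, then append the last one
    return to_str(n // base, base) + convert_string[n % base]

print(to_str(1345, 2))  # 10101000001
```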
|
Because you're using a stack structure.
If you consider how function calling is implemented, recursion is essentially an easy way to get the compiler to manage a stack of invocations for you.
This function does all the stack handling manually, but it is still conceptually a recursive function, just one where the stack management is done manually instead of letting the compiler do it.
|
26,897,208
|
```
from pythonds.basic.stack import Stack
rStack = Stack()
def toStr(n,base):
convertString = "0123456789ABCDEF"
while n > 0:
if n < base:
rStack.push(convertString[n])
else:
rStack.push(convertString[n % base])
n = n // base
res = ""
while not rStack.isEmpty():
res = res + str(rStack.pop())
return res
print(toStr(1345,2))
```
I'm referring to [this tutorial](http://interactivepython.org/runestone/static/pythonds/Recursion/StackFramesImplementingRecursion.html) and also pasted the code above. The tutorial says the function is recursive but I don't see a recursive call anywhere, just a while loop. What am I missing?
|
2014/11/12
|
[
"https://Stackoverflow.com/questions/26897208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3448561/"
] |
You are right that this particular function is **not** recursive. However, the context is, that on the previous slide there was a recursive function, and in this one they want to show a glimpse of how it *behaves* internally. They later say:
>
> The previous example [i.e. the one in question - B.] gives us some insight into how Python implements a recursive function call.
>
>
>
So, yes, the title is misleading; it should rather be *Expanding a recursive function* or *Imitating recursive function behavior with a stack* or something similar.
One may say that this function employs a recursive approach/strategy, in some sense, to the problem being solved, but it is not recursive itself.
|
A *recursive algorithm*, by definition, [is a method where the solution to a problem depends on solutions to smaller instances of the same problem](https://en.wikipedia.org/wiki/Recursion_%28computer_science%29).
Here, the problem is to convert a number to a string in a given notation.
The "stockpiling" of data the function does actually looks like this:
```
push(d1)
push(d2)
...
push(dn-1)
push(dn)
res+=pop(dn)
res+=pop(dn-1)
...
res+=pop(d2)
res+=pop(d1)
```
which is effectively:
```
def pushpop():
push(dx)
pushpop(dx+1...dn)
res+=pop(dx)
```
I.e. *a step that processes a specific chunk of data **encloses all the steps that process the rest of the data*** (with each chunk processed in the same way).
It can be argued if the *function* is recursive (since they tend to apply the term to subroutines in a narrower sense), but the *algorithm* it implements definitely is.
---
For you to better feel the difference, here's an *iterative* solution to the same problem:
```
def toStr(n,base):
charmap = "0123456789ABCDEF"
res=''
while n > 0:
res = charmap[n % base] + res
n = n // base
return res
```
As you can see, this method has a much lower memory footprint, as it doesn't *stockpile* tasks. This is the difference: an iterative algorithm performs each step *using the same instance of the state* by *mutating* it, while a recursive one *creates a new instance for each step*, necessarily stockpiling them if the old ones are still needed.
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You're on the right track with your `delete_model` method. When the django admin performs an action on multiple objects at once it uses the [update function](https://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once). However, as you see in the docs these actions are performed at the database level only using SQL.
You need to add your `delete_model` method in as a [custom action](https://docs.djangoproject.com/en/dev/ref/contrib/admin/actions/) in the django admin.
```
def delete_model(modeladmin, request, queryset):
for obj in queryset:
filename=obj.profile_name+".xml"
os.remove(os.path.join(obj.type,filename))
obj.delete()
```
Then add your function to your modeladmin -
```
class profilesAdmin(admin.ModelAdmin):
list_display = ["type","username","domain_name"]
actions = [delete_model]
```
|
Your method should be
```
class profilesAdmin(admin.ModelAdmin):
#...
def _profile_delete(self, sender, instance, **kwargs):
# do something
def delete_model(self, request, object):
# do something
```
You should add a reference to the current object as the first argument of every method (conventionally called `self`). Also, `delete_model` should be implemented as a method of the class.
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You're on the right track with your `delete_model` method. When the django admin performs an action on multiple objects at once it uses the [update function](https://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once). However, as you see in the docs these actions are performed at the database level only using SQL.
You need to add your `delete_model` method in as a [custom action](https://docs.djangoproject.com/en/dev/ref/contrib/admin/actions/) in the django admin.
```
def delete_model(modeladmin, request, queryset):
for obj in queryset:
filename=obj.profile_name+".xml"
os.remove(os.path.join(obj.type,filename))
obj.delete()
```
Then add your function to your modeladmin -
```
class profilesAdmin(admin.ModelAdmin):
list_display = ["type","username","domain_name"]
actions = [delete_model]
```
|
The main issue is that the Django admin's bulk delete uses SQL, not instance.delete(), as noted elsewhere. For an admin-only solution, the following solution preserves the Django admin's "do you really want to delete these" interstitial.
The most general solution is to override the queryset returned by the model's manager to intercept delete.
```
from django.contrib.admin.actions import delete_selected
from django.utils.translation import ugettext_lazy  # needed for the action label below
class BulkDeleteMixin(object):
class SafeDeleteQuerysetWrapper(object):
def __init__(self, wrapped_queryset):
self.wrapped_queryset = wrapped_queryset
def _safe_delete(self):
for obj in self.wrapped_queryset:
obj.delete()
def __getattr__(self, attr):
if attr == 'delete':
return self._safe_delete
else:
return getattr(self.wrapped_queryset, attr)
def __iter__(self):
for obj in self.wrapped_queryset:
yield obj
def __getitem__(self, index):
return self.wrapped_queryset[index]
def __len__(self):
return len(self.wrapped_queryset)
def get_actions(self, request):
actions = super(BulkDeleteMixin, self).get_actions(request)
actions['delete_selected'] = (BulkDeleteMixin.action_safe_bulk_delete, 'delete_selected', ugettext_lazy("Delete selected %(verbose_name_plural)s"))
return actions
def action_safe_bulk_delete(self, request, queryset):
wrapped_queryset = BulkDeleteMixin.SafeDeleteQuerysetWrapper(queryset)
return delete_selected(self, request, wrapped_queryset)
class SomeAdmin(BulkDeleteMixin, admin.ModelAdmin):
...
```
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You're on the right track with your `delete_model` method. When the django admin performs an action on multiple objects at once it uses the [update function](https://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once). However, as you see in the docs these actions are performed at the database level only using SQL.
You need to add your `delete_model` method in as a [custom action](https://docs.djangoproject.com/en/dev/ref/contrib/admin/actions/) in the django admin.
```
def delete_model(modeladmin, request, queryset):
for obj in queryset:
filename=obj.profile_name+".xml"
os.remove(os.path.join(obj.type,filename))
obj.delete()
```
Then add your function to your modeladmin -
```
class profilesAdmin(admin.ModelAdmin):
list_display = ["type","username","domain_name"]
actions = [delete_model]
```
|
Your attempt to override the `delete_model` method failed because when you delete multiple objects, Django uses `QuerySet.delete()`; for efficiency reasons, your model's `delete()` method will not be called.
You can see this at <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/actions/>
Note the Warning at the beginning.
The admin's `delete_model()` is the same as the model's `delete()`:
<https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1005>
So when you delete multiple objects, your custom delete method will never be called.
You have two options:
1. Write a custom delete action that calls `Model.delete()` for each of the selected items.
2. Use signals. You can use a signal on its own, outside the class.
You can also see this question: [Django model: delete() not triggered](https://stackoverflow.com/questions/1471909/django-model-delete-not-triggered)
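A minimal, hedged sketch of the first option — a custom admin action that deletes each object individually so that each instance's `delete()` method actually runs (the function name is illustrative; `modeladmin` and `request` are the arguments Django passes to every admin action):

```python
def delete_selected_objects(modeladmin, request, queryset):
    """Admin action: delete objects one by one so each instance's
    delete() method (and any cleanup it performs) is actually called."""
    for obj in queryset:
        obj.delete()

# In the ModelAdmin you would then register the action, e.g.:
#   class ProfilesAdmin(admin.ModelAdmin):
#       actions = [delete_selected_objects]
```

Because the action only iterates and calls `delete()`, it works with any queryset-like iterable.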
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
The main issue is that the Django admin's bulk delete uses SQL, not instance.delete(), as noted elsewhere. For an admin-only solution, the following solution preserves the Django admin's "do you really want to delete these" interstitial.
The most general solution is to override the queryset returned by the model's manager to intercept delete.
```
from django.contrib.admin.actions import delete_selected
class BulkDeleteMixin(object):
class SafeDeleteQuerysetWrapper(object):
def __init__(self, wrapped_queryset):
self.wrapped_queryset = wrapped_queryset
def _safe_delete(self):
for obj in self.wrapped_queryset:
obj.delete()
def __getattr__(self, attr):
if attr == 'delete':
return self._safe_delete
else:
return getattr(self.wrapped_queryset, attr)
def __iter__(self):
for obj in self.wrapped_queryset:
yield obj
def __getitem__(self, index):
return self.wrapped_queryset[index]
def __len__(self):
return len(self.wrapped_queryset)
def get_actions(self, request):
actions = super(BulkDeleteMixin, self).get_actions(request)
actions['delete_selected'] = (BulkDeleteMixin.action_safe_bulk_delete, 'delete_selected', ugettext_lazy("Delete selected %(verbose_name_plural)s"))
return actions
def action_safe_bulk_delete(self, request, queryset):
wrapped_queryset = BulkDeleteMixin.SafeDeleteQuerysetWrapper(queryset)
return delete_selected(self, request, wrapped_queryset)
class SomeAdmin(BulkDeleteMixin, admin.ModelAdmin):
...
```
|
Your method should be
```
class profilesAdmin(admin.ModelAdmin):
#...
def _profile_delete(self, sender, instance, **kwargs):
# do something
def delete_model(self, request, object):
# do something
```
You should add a reference to the current object as the first argument of every method (conventionally called `self`). Also, `delete_model` should be implemented as a method of the class.
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You can use [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset), available from [Django 2.1](https://docs.djangoproject.com/en/2.1/) onward, for bulk-deleting objects, and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) for a single delete. Both methods let you run custom logic before (or after) deleting the object(s).
ModelAdmin.delete\_queryset(request, queryset)
==============================================
---
This is the explanation of [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) in the [release notes](https://docs.djangoproject.com/en/2.1/releases/2.1/#django-contrib-admin) of [Django 2.1](https://docs.djangoproject.com/en/2.1/).
>
> The delete\_queryset() method is given the HttpRequest and a QuerySet of objects to be deleted. Override this method to customize the deletion process for the “delete selected objects”
>
>
>
Let's look at what [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class by including a [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) method. Here you'll get the list of object(s); `queryset.delete()` deletes all the object(s) at once, or you can add a loop to delete them one by one.
```
def delete_queryset(self, request, queryset):
print('==========================delete_queryset==========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('==========================delete_queryset==========================')
```
So I'm going to delete 5 objects from the "select window"; here are those 5 objects.
[](https://i.stack.imgur.com/ZMmSI.png)
Then you'll be redirected to a confirmation page like this:
[](https://i.stack.imgur.com/uBzRP.png)
Keep the "Yes, I'm sure" button in mind; I'll explain it later. When you click that button, you will see the image below after those 5 objects are removed.
[](https://i.stack.imgur.com/BM5ly.png)
This is the terminal output,
[](https://i.stack.imgur.com/FjvQ7.png)
So you'll get those 5 objects as a QuerySet, and before deleting you can do whatever you want where the comments indicate.
ModelAdmin.delete\_model(request, obj)
======================================
---
This is the explanation of [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model).
>
> The delete\_model method is given the HttpRequest and a model instance. Overriding this method allows doing pre- or post-delete operations. Call super().delete\_model() to delete the object using Model.delete().
>
>
>
Let's look at what [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class by including a [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) method.
```
actions = ['delete_model']
def delete_model(self, request, obj):
print('============================delete_model============================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('============================delete_model============================')
```
I just clicked my 6th object to delete it from the "change window".
[](https://i.stack.imgur.com/mruW4.png)
There is another Delete button; when you click it, you'll see the window we saw earlier.
[](https://i.stack.imgur.com/ZvBRd.png)
Click "Yes, I'm sure" button to delete the single object. You'll see the following window with the notification of that deleted object.
[](https://i.stack.imgur.com/IfuRJ.png)
This is the terminal output,
[](https://i.stack.imgur.com/0uoBy.png)
So you'll get the selected object as a single instance, and before deleting you can do whatever you want where the comments indicate.
---
The **final conclusion** is that you can handle the delete event triggered by the "Yes, I'm sure" button in the "select window" or "change window" of the Django admin site using [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model). This way we don't need to handle signals such as [django.db.models.signals.pre\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.pre_delete) or [django.db.models.signals.post\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.post_delete).
Here is the full code,
```
from django.contrib import admin
from . import models
class AdminInfo(admin.ModelAdmin):
model = models.AdminInfo
actions = ['delete_model']
def delete_queryset(self, request, queryset):
print('========================delete_queryset========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('========================delete_queryset========================')
def delete_model(self, request, obj):
print('==========================delete_model==========================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('==========================delete_model==========================')
admin.site.register(models.AdminInfo, AdminInfo)
```
|
Your method should be
```
class profilesAdmin(admin.ModelAdmin):
#...
def _profile_delete(self, sender, instance, **kwargs):
# do something
def delete_model(self, request, object):
# do something
```
You should add a reference to the current object as the first argument in every method signature (usually called `self`). Also, `delete_model` should be implemented as a method.
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
The main issue is that the Django admin's bulk delete uses SQL, not instance.delete(), as noted elsewhere. For an admin-only solution, the following solution preserves the Django admin's "do you really want to delete these" interstitial.
The most general solution is to override the queryset returned by the model's manager to intercept delete.
```
from django.contrib.admin.actions import delete_selected
from django.utils.translation import ugettext_lazy
class BulkDeleteMixin(object):
class SafeDeleteQuerysetWrapper(object):
def __init__(self, wrapped_queryset):
self.wrapped_queryset = wrapped_queryset
def _safe_delete(self):
for obj in self.wrapped_queryset:
obj.delete()
def __getattr__(self, attr):
if attr == 'delete':
return self._safe_delete
else:
return getattr(self.wrapped_queryset, attr)
def __iter__(self):
for obj in self.wrapped_queryset:
yield obj
def __getitem__(self, index):
return self.wrapped_queryset[index]
def __len__(self):
return len(self.wrapped_queryset)
def get_actions(self, request):
actions = super(BulkDeleteMixin, self).get_actions(request)
actions['delete_selected'] = (BulkDeleteMixin.action_safe_bulk_delete, 'delete_selected', ugettext_lazy("Delete selected %(verbose_name_plural)s"))
return actions
def action_safe_bulk_delete(self, request, queryset):
wrapped_queryset = BulkDeleteMixin.SafeDeleteQuerysetWrapper(queryset)
return delete_selected(self, request, wrapped_queryset)
class SomeAdmin(BulkDeleteMixin, admin.ModelAdmin):
...
```
|
Your attempt to override the `delete_model` method failed because, when you delete multiple objects, Django uses `QuerySet.delete()`; for efficiency reasons your model's `delete()` method will not be called.
You can see this here: <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/actions/>
See the Warning at the beginning.
The admin's `delete_model()` is the same as the model's `delete()`:
<https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1005>
So when you delete multiple objects, your custom delete method will never be called.
You have two options:
1. Write a custom delete action: an action that calls `Model.delete()` for each of the selected items.
2. Use signals. You can use a signal on its own, outside the class.
You can also look at this question: [Django model: delete() not triggered](https://stackoverflow.com/questions/1471909/django-model-delete-not-triggered)
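A minimal sketch of option 1, a custom admin action that deletes each selected object individually so that `Model.delete()` (and any delete signals) actually run; the action name here is made up:

```python
def delete_selected_individually(modeladmin, request, queryset):
    """Admin action: delete objects one at a time so Model.delete() runs."""
    for obj in queryset:
        obj.delete()

# Register it on your ModelAdmin, e.g.:
# class SomeAdmin(admin.ModelAdmin):
#     actions = [delete_selected_individually]
```

Because the loop calls `delete()` on each instance, any custom `delete()` override and the `pre_delete`/`post_delete` signals fire for every object, at the cost of one query per object.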
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You can use [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset), available from [Django 2.1](https://docs.djangoproject.com/en/2.1/) onward, for bulk-deleting objects, and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) for single deletes. Both methods let you do something before deleting the object(s).
ModelAdmin.delete\_queryset(request, queryset)
==============================================
---
This is the explanation of [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) in the [release notes](https://docs.djangoproject.com/en/2.1/releases/2.1/#django-contrib-admin) of [Django 2.1](https://docs.djangoproject.com/en/2.1/).
>
> The delete\_queryset() method is given the HttpRequest and a QuerySet of objects to be deleted. Override this method to customize the deletion process for the “delete selected objects”
>
>
>
Let's look at what [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class in this way by including a [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) function. Here you'll get a list of object(s); `queryset.delete()` means deleting all the object(s) at once, or you can add a loop to delete them one by one.
```
def delete_queryset(self, request, queryset):
print('==========================delete_queryset==========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('==========================delete_queryset==========================')
```
So I'm going to delete 5 objects from the "select window", and here are those 5 objects.
[](https://i.stack.imgur.com/ZMmSI.png)
Then you'll be redirected to the confirmation page like this,
[](https://i.stack.imgur.com/uBzRP.png)
Keep the "Yes, I'm sure" button in mind; I'll explain it later. When you click that button you will see the image below after removing those 5 objects.
[](https://i.stack.imgur.com/BM5ly.png)
This is the terminal output,
[](https://i.stack.imgur.com/FjvQ7.png)
So you'll get those 5 objects as a QuerySet, and before deleting you can do whatever you want in the commented areas.
ModelAdmin.delete\_model(request, obj)
======================================
---
This is the explanation about [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model).
>
> The delete\_model method is given the HttpRequest and a model instance. Overriding this method allows doing pre- or post-delete operations. Call super().delete\_model() to delete the object using Model.delete().
>
>
>
Let's look at what [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class in this way by including a [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) function.
```
actions = ['delete_model']
def delete_model(self, request, obj):
print('============================delete_model============================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('============================delete_model============================')
```
I just clicked my 6th object to delete it from the "change window".
[](https://i.stack.imgur.com/mruW4.png)
There is another Delete button, when you click it you'll see the window which we saw earlier.
[](https://i.stack.imgur.com/ZvBRd.png)
Click "Yes, I'm sure" button to delete the single object. You'll see the following window with the notification of that deleted object.
[](https://i.stack.imgur.com/IfuRJ.png)
This is the terminal output,
[](https://i.stack.imgur.com/0uoBy.png)
So you'll get the selected object as a single model instance, and before deleting it you can do whatever you want in the commented areas.
---
The **final conclusion** is that you can handle the delete event triggered by the "Yes, I'm sure" button in the "select window" or "change window" of the Django Admin Site using [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model). This way we don't need to handle signals such as [django.db.models.signals.pre\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.pre_delete) or [django.db.models.signals.post\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.post_delete).
Here is the full code,
```
from django.contrib import admin
from . import models
class AdminInfo(admin.ModelAdmin):
model = models.AdminInfo
actions = ['delete_model']
def delete_queryset(self, request, queryset):
print('========================delete_queryset========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('========================delete_queryset========================')
def delete_model(self, request, obj):
print('==========================delete_model==========================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('==========================delete_model==========================')
admin.site.register(models.AdminInfo, AdminInfo)
```
|
The main issue is that the Django admin's bulk delete uses SQL, not instance.delete(), as noted elsewhere. For an admin-only solution, the following solution preserves the Django admin's "do you really want to delete these" interstitial.
The most general solution is to override the queryset returned by the model's manager to intercept delete.
```
from django.contrib.admin.actions import delete_selected
from django.utils.translation import ugettext_lazy
class BulkDeleteMixin(object):
class SafeDeleteQuerysetWrapper(object):
def __init__(self, wrapped_queryset):
self.wrapped_queryset = wrapped_queryset
def _safe_delete(self):
for obj in self.wrapped_queryset:
obj.delete()
def __getattr__(self, attr):
if attr == 'delete':
return self._safe_delete
else:
return getattr(self.wrapped_queryset, attr)
def __iter__(self):
for obj in self.wrapped_queryset:
yield obj
def __getitem__(self, index):
return self.wrapped_queryset[index]
def __len__(self):
return len(self.wrapped_queryset)
def get_actions(self, request):
actions = super(BulkDeleteMixin, self).get_actions(request)
actions['delete_selected'] = (BulkDeleteMixin.action_safe_bulk_delete, 'delete_selected', ugettext_lazy("Delete selected %(verbose_name_plural)s"))
return actions
def action_safe_bulk_delete(self, request, queryset):
wrapped_queryset = BulkDeleteMixin.SafeDeleteQuerysetWrapper(queryset)
return delete_selected(self, request, wrapped_queryset)
class SomeAdmin(BulkDeleteMixin, admin.ModelAdmin):
...
```
|
15,196,321
|
My project is to identify whether a sentiment is positive or negative (sentiment analysis) in the Arabic language. To do this task I used NLTK and Python; when I enter tweets in Arabic, an error occurs:
```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'),
('ما أحلى الثورة التونسية', 'positive'),
('أجمل طفل في العالم', 'positive'),
('الشعب يحرس', 'positive'),
('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]
Unsupported characters in input
```
how can I solve this problem?
|
2013/03/04
|
[
"https://Stackoverflow.com/questions/15196321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048995/"
] |
You can use [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset), available from [Django 2.1](https://docs.djangoproject.com/en/2.1/) onward, for bulk-deleting objects, and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) for single deletes. Both methods let you do something before deleting the object(s).
ModelAdmin.delete\_queryset(request, queryset)
==============================================
---
This is the explanation of [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) in the [release notes](https://docs.djangoproject.com/en/2.1/releases/2.1/#django-contrib-admin) of [Django 2.1](https://docs.djangoproject.com/en/2.1/).
>
> The delete\_queryset() method is given the HttpRequest and a QuerySet of objects to be deleted. Override this method to customize the deletion process for the “delete selected objects”
>
>
>
Let's look at what [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class in this way by including a [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) function. Here you'll get a list of object(s); `queryset.delete()` means deleting all the object(s) at once, or you can add a loop to delete them one by one.
```
def delete_queryset(self, request, queryset):
print('==========================delete_queryset==========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('==========================delete_queryset==========================')
```
So I'm going to delete 5 objects from the "select window", and here are those 5 objects.
[](https://i.stack.imgur.com/ZMmSI.png)
Then you'll be redirected to the confirmation page like this,
[](https://i.stack.imgur.com/uBzRP.png)
Keep the "Yes, I'm sure" button in mind; I'll explain it later. When you click that button you will see the image below after removing those 5 objects.
[](https://i.stack.imgur.com/BM5ly.png)
This is the terminal output,
[](https://i.stack.imgur.com/FjvQ7.png)
So you'll get those 5 objects as a QuerySet, and before deleting you can do whatever you want in the commented areas.
ModelAdmin.delete\_model(request, obj)
======================================
---
This is the explanation about [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model).
>
> The delete\_model method is given the HttpRequest and a model instance. Overriding this method allows doing pre- or post-delete operations. Call super().delete\_model() to delete the object using Model.delete().
>
>
>
Let's look at what [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) does. You can override the admin.[ModelAdmin](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin) class in this way by including a [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model) function.
```
actions = ['delete_model']
def delete_model(self, request, obj):
print('============================delete_model============================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('============================delete_model============================')
```
I just clicked my 6th object to delete it from the "change window".
[](https://i.stack.imgur.com/mruW4.png)
There is another Delete button, when you click it you'll see the window which we saw earlier.
[](https://i.stack.imgur.com/ZvBRd.png)
Click "Yes, I'm sure" button to delete the single object. You'll see the following window with the notification of that deleted object.
[](https://i.stack.imgur.com/IfuRJ.png)
This is the terminal output,
[](https://i.stack.imgur.com/0uoBy.png)
So you'll get the selected object as a single model instance, and before deleting it you can do whatever you want in the commented areas.
---
The **final conclusion** is that you can handle the delete event triggered by the "Yes, I'm sure" button in the "select window" or "change window" of the Django Admin Site using [delete\_queryset](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_queryset) and [delete\_model](https://docs.djangoproject.com/en/2.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.delete_model). This way we don't need to handle signals such as [django.db.models.signals.pre\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.pre_delete) or [django.db.models.signals.post\_delete](https://docs.djangoproject.com/en/2.1/ref/signals/#django.db.models.signals.post_delete).
Here is the full code,
```
from django.contrib import admin
from . import models
class AdminInfo(admin.ModelAdmin):
model = models.AdminInfo
actions = ['delete_model']
def delete_queryset(self, request, queryset):
print('========================delete_queryset========================')
print(queryset)
"""
you can do anything here BEFORE deleting the object(s)
"""
queryset.delete()
"""
you can do anything here AFTER deleting the object(s)
"""
print('========================delete_queryset========================')
def delete_model(self, request, obj):
print('==========================delete_model==========================')
print(obj)
"""
you can do anything here BEFORE deleting the object
"""
obj.delete()
"""
you can do anything here AFTER deleting the object
"""
print('==========================delete_model==========================')
admin.site.register(models.AdminInfo, AdminInfo)
```
|
Your attempt to override the `delete_model` method failed because, when you delete multiple objects, Django uses `QuerySet.delete()`; for efficiency reasons your model's `delete()` method will not be called.
You can see this here: <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/actions/>
See the Warning at the beginning.
The admin's `delete_model()` is the same as the model's `delete()`:
<https://github.com/django/django/blob/master/django/contrib/admin/options.py#L1005>
So when you delete multiple objects, your custom delete method will never be called.
You have two options:
1. Write a custom delete action: an action that calls `Model.delete()` for each of the selected items.
2. Use signals. You can use a signal on its own, outside the class.
You can also look at this question: [Django model: delete() not triggered](https://stackoverflow.com/questions/1471909/django-model-delete-not-triggered)
|
51,861,677
|
I am trying to solve the problem called [R2](https://open.kattis.com/problems/r2) on Kattis, but for some reason, while the program (written in Python) runs fine in IDLE, I get a run-time error on Kattis, with the judgement being a ValueError.
Here's my code:
```
R1 = int(input('input R1 '))
S = int(input('input S '))
R2 = (S*2)-R1
print(R2)
```
|
2018/08/15
|
[
"https://Stackoverflow.com/questions/51861677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10216731/"
] |
```
nums = input().split(' ')
r2 = 2*int(nums[1]) - int(nums[0])
print(r2)
```
The problem states that the two numbers will be input on a single line. You are attempting to capture two numbers input on two separate lines by calling `input` twice.
|
Darrahts pointed one problem out.
The 2nd problem is that `input([prompt])` writes the prompt to standard output,
but you should only write your solution to standard output.
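Putting both fixes together, a minimal sketch that parses both numbers from a single line and prints only the answer (`solve` is a hypothetical helper name):

```python
def solve(line):
    # R2 = 2*S - R1, where the line contains "R1 S"
    r1, s = map(int, line.split())
    return 2 * s - r1

print(solve("11 15"))  # → 19
```

In the real submission you would call `solve(input())` with no prompt string, so nothing but the answer reaches standard output.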
|
30,205,473
|
I have the following JSON structure. I am attempting to extract the following information from the "brow\_eventdetails" section.
* ATime
* SBTime
* CTime
My question is: is there any easy way to do this without using regular expressions? In other words, is this a nested JSON format that I can extract by some means using Python?
```
{
"AppName": "undefined",
"Event": "browser information event",
"Message": "brow_eventdetails:{\"Message\":\"for https://mysite.myspace.com/display/CORE/mydetails took too long (821 ms : ATime: 5 ms, SBTime: 391 ms, CTime: 425 ms), and exceeded threshold of 5 ms\",\"Title\":\"mydetails My Work Details\",\"Host\":\"nzmyserver.ad.mydomain.com\",\"Page URL\":\"https://nzmyserver.mydomain.com/display/CORE/mydetails\",\"PL\":821,\"ATime\":5,\"SBTime\":391,\"CTime\":425}",
"Severity": "warn",
"UserInfo": "General Info"
}
```
The program that I use is given below.
```
with open(fname, 'r+') as f:
json_data = json.load(f)
message = json_data['Message']
nt = message.split('ATime')[1].strip().split(':')[1].split(',')[0]
bt = message.split('SBTime')[1].strip().split(':')[1].split('\s')[0])
st = message.split('CTime')[1].strip().split(':')[1].split('\s')[0])
json_data["ATime"] = bt
json_data["SBTime"] = st
json_data["CTime"] = nt
f.seek(0)
json.dump(json_data,f,ensure_ascii=True)
```
There are some issues with this program. The first one is extracting ATime, SBTime and CTime: these values are repeated. I want to extract just the numeric values, 5, 391 and 425; I don't want the "ms" that follows them. How can I achieve this?
If I were to update the program to use json.loads() as below,

```
with open(fname, 'r+') as f:
    json_data = json.load(f)
    message = json_data['Message']
    message_data = json.loads(message)
    f.seek(0)
    json.dump(json_data,f,ensure_ascii=True)
```
I get
```
ValueError: No JSON object could be decoded
```
|
2015/05/13
|
[
"https://Stackoverflow.com/questions/30205473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/316082/"
] |
You need to parse the JSON string in `json_data['Message']` again (note the capital "M") and then just access the desired values. One way to do it:
```
# since the string value of `Message` itself isn't a valid JSON string,
# strip the "brow_eventdetails:" prefix and parse the rest with json again
brow_eventdetails = json.loads(json_data['Message'].replace('brow_eventdetails:', ''))
brow_eventdetails['ATime']
Out[6]: 5
brow_eventdetails['SBTime']
Out[7]: 391
brow_eventdetails['CTime']
Out[8]: 425
...
```
|
Parse this string value using json.loads as you would with every other string that contains JSON.
|
34,818,960
|
I need to highlight a specific word in a text within a tkinter frame. In order to find the word, I put a balise like in html. So in a text like "hello i'm in the |house|" I want to highlight the word "house".
My frame is defined like that:
```
class FrameCodage(Frame):
    self.t2Codage = Text(self, height=20, width=50)
```
and I insert my text with this code: `fenetre.fCodage.t2Codage.insert(END, res)` , res being a variable containing my text.
I saw this code on an other post:
```
class CustomText(tk.Text):
'''A text widget with a new method, highlight_pattern()
example:
text = CustomText()
text.tag_configure("red", foreground="#ff0000")
text.highlight_pattern("this should be red", "red")
The highlight_pattern method is a simplified python
version of the tcl code at http://wiki.tcl.tk/3246
'''
def __init__(self, *args, **kwargs):
tk.Text.__init__(self, *args, **kwargs)
def highlight_pattern(self, pattern, tag, start="1.0", end="end",
regexp=False):
'''Apply the given tag to all text that matches the given pattern
If 'regexp' is set to True, pattern will be treated as a regular
expression.
'''
start = self.index(start)
end = self.index(end)
self.mark_set("matchStart", start)
self.mark_set("matchEnd", start)
self.mark_set("searchLimit", end)
count = tk.IntVar()
while True:
index = self.search(pattern, "matchEnd","searchLimit",
count=count, regexp=regexp)
if index == "": break
self.mark_set("matchStart", index)
self.mark_set("matchEnd", "%s+%sc" % (index, count.get()))
self.tag_add(tag, "matchStart", "matchEnd")
```
But there are a few things that I don't understand: how can I apply this function to my case? When do I call this function? What are the pattern and the tag in my case? I'm a beginner in Tkinter, so don't hesitate to explain this code to me, or another.
|
2016/01/15
|
[
"https://Stackoverflow.com/questions/34818960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5464538/"
] |
Instead of this:
```
class FrameCodage(Frame):
self.t2Codage = Text(self, height=20, width=50)
```
... do this:
```
class FrameCodage(Frame):
self.t2Codage = CustomText(self, height=20, width=50)
```
Next, create a "highlight" tag, and configure it however you want:
```
self.t2Codage.tag_configure("highlight", foreground="red")
```
Finally, you can call the `highlight_pattern` method as if it were a standard method:
```
self.t2Codage.highlight_pattern(r"\|.*?\|", "highlight", regexp=True)
```
|
Below is widget I created to deal with this, hope it helps.
```
try: import tkinter as tk
except ImportError: import Tkinter as tk
class code_editor(tk.Text):
def __init__(self, parent, case_insensetive = True, current_line_colour = '', word_end_at = r""" .,{}[]()=+-*/\|<>%""", tags = {}, *args, **kwargs):
tk.Text.__init__(self, *args, **kwargs)
self.bind("<KeyRelease>", lambda e: self.highlight())
self.case_insensetive = case_insensetive
self.highlight_current_line = current_line_colour != ''
self.word_end = word_end_at
self.tags = tags
if self.case_insensetive:
for tag in self.tags:
self.tags[tag]['words'] = [word.lower() for word in self.tags[tag]['words']]
#loops through the syntax dictionary to create tags for each type
for tag in self.tags:
self.tag_config(tag, **self.tags[tag]['style'])
if self.highlight_current_line:
self.tag_configure("current_line", background = current_line_colour)
self.tag_add("current_line", "insert linestart", "insert lineend+1c")
self.tag_raise("sel")
#find what is the last word thats being typed.
def last_word(self):
line, last = self.index(tk.INSERT).split('.')
last=int(last)
#this can cause issues when the user is a fast typer
last_char = self.get(f'{line}.{int(last)-1}', f'{line}.{last}')
while last_char in self.word_end and last > 0:
last-=1
last_char = self.get(f'{line}.{int(last)-1}', f'{line}.{last}')
first = int(last)
while True:
first-=1
if first<0: break
if self.get(f"{line}.{first}", f"{line}.{first+1}") in self.word_end:
break
return {'word': self.get(f"{line}.{first+1}", f"{line}.{last}"), 'first': f"{line}.{first+1}", 'last': f"{line}.{last}"}
#highlight the last word if its a syntax, See: syntax dictionary on the top.
#this runs on every key release which is why it fails when the user is too fast.
#it also highlights the current line
def highlight(self):
if self.highlight_current_line:
self.tag_remove("current_line", 1.0, "end")
self.tag_add("current_line", "insert linestart", "insert lineend+1c")
lastword = self.last_word()
wrd = lastword['word'].lower() if self.case_insensetive else lastword['word']
for tag in self.tags:
if wrd in self.tags[tag]['words']:
self.tag_add(tag, lastword['first'], lastword['last'])
else:
self.tag_remove(tag, lastword['first'], lastword['last'])
self.tag_raise("sel")
#### example ####
if __name__ == '__main__':
# from pyFilename import code_editor
ms = tk.Tk()
example_text = code_editor(
parent = ms,
case_insensetive = True, #True by default.
current_line_colour = 'grey10', #'' by default which will not highlight the current line.
word_end_at = r""" .,{}[]()=+-*/\|<>%""", #<< by default; this tells the class where a word ends.
tags = {#'SomeTagName': {'style': {'someStyle': 'someValue', ... etc}, 'words': ['word1', 'word2' ... etc]}} this tells it to apply this style to these words
"failSynonyms": {'style': {'foreground': 'red', 'font': 'helvetica 8'}, 'words': ['fail', 'bad']},
"passSynonyms":{'style': {'foreground': 'green', 'font': 'helvetica 12'}, 'words': ['Pass', 'ok']},
"sqlSyntax":{'style': {'foreground': 'blue', 'font': 'italic'}, 'words': ['select', 'from']},
},
font='helvetica 10 bold', #Standard tkinter text arguments
background = 'black', #Standard tkinter text arguments
foreground = 'white' #Standard tkinter text arguments
)
example_text.pack()
ms.mainloop()
```
|
14,670,768
|
Hi I'm new to django and python. I want to extend django `User` model and add a `create_user` method in a model, then I call this `create_user` method from view. However, I got an error msg.
My model:
```
from django.db import models
from django.contrib.auth.models import User
class Basic(models.Model):
user = models.OneToOneField(User)
gender = models.CharField(max_length = 10)
def create_ck_user(acc_type, fb_id,fb_token):
# "Create a user and insert into auth_user table"
user = User.objects.create_user(acc_type,fb_id,fb_token)
user.save()
class External(models.Model):
user = models.OneToOneField(Basic)
external_id = models.IntegerField()
locale = models.CharField(max_length = 40)
token = models.CharField(max_length = 250)
```
and in view, I did sth like this:
```
Basic.create_ck_user('acc_type' = 'fb', 'fb_id' = fb_id, 'fb_token' = fb_token)
```
error shows that:
```
keyword can't be an expression (views.py, Basic.objects.create_ck_user('acc_type' = 'fb', 'fb_id' = fb_id, 'fb_token' = fb_token))
```
Edit:
After add @classmehtod, and change view.py to:
```
...
if (request.method == 'POST'):
acc_type = request.POST.get('acc_type')
fb_id = request.POST.get('fb_id')
fb_token = request.POST.get('fb_token')
Basic.create_ck_user(acc_type,fb_id,fb_token)
...
```
An error msg shows:
>
> create\_ck\_user() takes exactly 3 arguments (4 given)
>
>
>
I checked the error details; they include the "request" variable even though I just pass 3 variables: `acc_type`, `fb_id` and `fb_token`.
Any ideas?
|
2013/02/03
|
[
"https://Stackoverflow.com/questions/14670768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/204127/"
] |
Try
```
Basic.create_ck_user('fb', fb_id, fb_token)
```
You don't assign to strings when you call a function/method. You assign to variables. But since you are using positional arguments in your function definition then you don't even need them.
Assigning to a string will never work anyway... strings are immutable objects.
Also, you want this method to be a class method. Otherwise you would need to create an instance of `Basic` before calling it.
```
# inside class definition
@classmethod
def create_ck_user(cls, acc_type, fb_id,fb_token):
# "Create a user and insert into auth_user table"
user = User.objects.create_user(acc_type, fb_id, fb_token)
user.save()
```
|
Just look here:
<https://docs.djangoproject.com/en/dev/topics/auth/default/#creating-users>
You can find a lot of answers just looking at the documentation.
|
30,183,795
|
I know the normal way to use APScheduler is "python setup.py install". But I want to embed it into my program directly, so the user doesn't need to install it when using my program.
```
class BaseScheduler(six.with_metaclass(ABCMeta)):
_trigger_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.triggers'))
# print(_trigger_plugins, 'ddd')
_trigger_classes = {}
_executor_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.executors'))
_executor_classes = {}
_jobstore_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.jobstores'))
_jobstore_classes = {}
_stopped = True
```
Thanks.
|
2015/05/12
|
[
"https://Stackoverflow.com/questions/30183795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/620853/"
] |
You can instantiate the triggers directly, without going through their aliases. That eliminates the need to install APScheduler or setuptools. Does this answer your question?
|
I found a way to work around this problem:
1. use `pip install apscheduler` to install it locally
2. go to the installed directory and copy that directory into your lib directory
3. use `pip uninstall apscheduler` to remove it
4. make your code import apscheduler from your lib directory
5. done
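Step 4 can be done by putting your lib directory at the front of `sys.path` before the import. A minimal sketch, assuming the vendored copy lives in a directory named `lib` (an assumed name -- use whatever directory you copied the package into):

```python
import os
import sys

# Directory holding the vendored copy of apscheduler (from step 2 above).
vendor_dir = os.path.join(os.getcwd(), "lib")

# Prepend it so the vendored copy wins over any system-wide install.
if vendor_dir not in sys.path:
    sys.path.insert(0, vendor_dir)

# `import apscheduler` would now resolve to lib/apscheduler first.
```

In a real project you would usually compute `vendor_dir` relative to `__file__` instead of the working directory, so the import works no matter where the program is launched from.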
|
11,174,997
|
(Python 2.7) I need to print the BFS of a binary tree given its preorder and inorder traversals and the max length of the preorder and inorder strings.
I know how it works, for example:
preorder:ABCDE
inorder:CBDAE
max length:5
```
A
/ \
B E
/ \
C D
```
BFS:ABECD
So far I got this figured out
```
class BinaryTree:
def __init__ (self, value, parent=None):
self.parent = parent
self.left_child = None
self.right_child = None
self.value=value
def setLeftChild(self, child=None):
self.left_child = child
if child:
child.parent = self
def setRightChild(self, child=None):
self.right_child = child
if child:
child.parent = self
preorder={}
inorder={}
print "max string length?"
i=int(raw_input())
count=0
while i>count:
print"insert the preorder"
preorder[raw_input()]=count
count=count+1
print "preorder is",sorted(preorder, key=preorder.get)
count2=0
while i>count2:
print"insert the inorder"
inorder[raw_input()]=count2
count2=count2+1
print "inorder is",sorted(inorder, key=inorder.get)
root=
```
I've figured out how to create a binary tree in Python, but I don't know how to add the values of the next children. As you can see, I already have the root and figured out how to insert the first children (left and right), but I don't know how to add the next ones.
|
2012/06/24
|
[
"https://Stackoverflow.com/questions/11174997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1419828/"
] |
I guess essentially the question is how to get all the parent-leftChild pairs and parent-rightChild pairs of the tree from the given preorder and inorder.
To decide whether node1 is the left child of node2, you need to check: 1) whether node1 is right after node2 in preorder; 2) whether node1 is in front of node2 in inorder.
For your example preorder:ABCDE inorder:CBDAE
* B is right after A in preorder and B is in front of A in inorder, thus B is the left child of A.
* D is right after C in preorder, but D is also after C in inorder, thus D is not the left child of C.
You can use a similar trick to get all the parent-rightChild pairs.
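The checks above generalize into a full reconstruction. A sketch (the helper names `build_children` and `bfs` are my own) that rebuilds the child links recursively from the two orders and then prints the breadth-first traversal with a queue:

```python
from collections import deque

def build_children(preorder, inorder):
    """Map each value to its [left, right] children (None where absent)."""
    children = {}

    def build(pre, ino):
        root = pre[0]
        i = ino.index(root)                       # split inorder at the root
        left_ino, right_ino = ino[:i], ino[i + 1:]
        left_pre = pre[1:1 + len(left_ino)]       # left subtree in preorder
        right_pre = pre[1 + len(left_ino):]       # right subtree in preorder
        children[root] = [
            build(left_pre, left_ino) if left_pre else None,
            build(right_pre, right_ino) if right_pre else None,
        ]
        return root

    if preorder:
        build(preorder, inorder)
    return children

def bfs(preorder, inorder):
    """Breadth-first traversal of the tree implied by the two orders."""
    children = build_children(preorder, inorder)
    if not children:
        return ""
    out, queue = [], deque(preorder[0])
    while queue:
        node = queue.popleft()
        out.append(node)
        queue.extend(c for c in children[node] if c is not None)
    return "".join(out)

print(bfs("ABCDE", "CBDAE"))  # ABECD
```

The recursive split of the inorder string at the root found from preorder is exactly the left/right-child test described above, applied at every level.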
|
To add children to any node, just get the node that you want to add children to and call setLeftChild or setRightChild on it.
|
11,174,997
|
(Python 2.7) I need to print the BFS of a binary tree given its preorder and inorder traversals and the max length of the preorder and inorder strings.
I know how it works, for example:
preorder:ABCDE
inorder:CBDAE
max length:5
```
A
/ \
B E
/ \
C D
```
BFS:ABECD
So far I got this figured out
```
class BinaryTree:
def __init__ (self, value, parent=None):
self.parent = parent
self.left_child = None
self.right_child = None
self.value=value
def setLeftChild(self, child=None):
self.left_child = child
if child:
child.parent = self
def setRightChild(self, child=None):
self.right_child = child
if child:
child.parent = self
preorder={}
inorder={}
print "max string length?"
i=int(raw_input())
count=0
while i>count:
print"insert the preorder"
preorder[raw_input()]=count
count=count+1
print "preorder is",sorted(preorder, key=preorder.get)
count2=0
while i>count2:
print"insert the inorder"
inorder[raw_input()]=count2
count2=count2+1
print "inorder is",sorted(inorder, key=inorder.get)
root=
```
I've figured out how to create a binary tree in Python, but I don't know how to add the values of the next children. As you can see, I already have the root and figured out how to insert the first children (left and right), but I don't know how to add the next ones.
|
2012/06/24
|
[
"https://Stackoverflow.com/questions/11174997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1419828/"
] |
To add children to any node, just get the node that you want to add children to and call setLeftChild or setRightChild on it.
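Using the asker's `BinaryTree` class (reproduced here so the sketch is self-contained), building the example tree from the question looks like this:

```python
class BinaryTree:
    def __init__(self, value, parent=None):
        self.parent = parent
        self.left_child = None
        self.right_child = None
        self.value = value

    def setLeftChild(self, child=None):
        self.left_child = child
        if child:
            child.parent = self

    def setRightChild(self, child=None):
        self.right_child = child
        if child:
            child.parent = self

# Build the tree from the question: A has children B and E; B has C and D.
root = BinaryTree("A")
b, e = BinaryTree("B"), BinaryTree("E")
root.setLeftChild(b)
root.setRightChild(e)
b.setLeftChild(BinaryTree("C"))
b.setRightChild(BinaryTree("D"))

print(root.left_child.left_child.value)  # C
```

Any node reachable from the root can be extended the same way -- grab the node object and call the setter on it; the parent pointer is maintained for you.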
|
If you're using BFS - you ideally want to be using a graph - an excellent library is [networkx](http://networkx.lanl.gov/index.html)
An example:
```
import networkx as nx
g = nx.DiGraph()
g.add_edge('A', 'B')
g.add_edge('A', 'E')
g.add_edge('B', 'C')
g.add_edge('B', 'D')
print 'A' + ''.join(node[1] for node in (nx.bfs_edges(g, 'A')))
# ABECD
```
|
11,174,997
|
(Python 2.7) I need to print the BFS of a binary tree given its preorder and inorder traversals and the max length of the preorder and inorder strings.
I know how it works, for example:
preorder:ABCDE
inorder:CBDAE
max length:5
```
A
/ \
B E
/ \
C D
```
BFS:ABECD
So far I got this figured out
```
class BinaryTree:
def __init__ (self, value, parent=None):
self.parent = parent
self.left_child = None
self.right_child = None
self.value=value
def setLeftChild(self, child=None):
self.left_child = child
if child:
child.parent = self
def setRightChild(self, child=None):
self.right_child = child
if child:
child.parent = self
preorder={}
inorder={}
print "max string length?"
i=int(raw_input())
count=0
while i>count:
print"insert the preorder"
preorder[raw_input()]=count
count=count+1
print "preorder is",sorted(preorder, key=preorder.get)
count2=0
while i>count2:
print"insert the inorder"
inorder[raw_input()]=count2
count2=count2+1
print "inorder is",sorted(inorder, key=inorder.get)
root=
```
I've figured out how to create a binary tree in Python, but I don't know how to add the values of the next children. As you can see, I already have the root and figured out how to insert the first children (left and right), but I don't know how to add the next ones.
|
2012/06/24
|
[
"https://Stackoverflow.com/questions/11174997",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1419828/"
] |
I guess essentially the question is how to get all the parent-leftChild pairs and parent-rightChild pairs of the tree from the given preorder and inorder.
To decide whether node1 is the left child of node2, you need to check: 1) whether node1 is right after node2 in preorder; 2) whether node1 is in front of node2 in inorder.
For your example preorder:ABCDE inorder:CBDAE
* B is right after A in preorder and B is in front of A in inorder, thus B is the left child of A.
* D is right after C in preorder, but D is also after C in inorder, thus D is not the left child of C.
You can use a similar trick to get all the parent-rightChild pairs.
|
If you're using BFS - you ideally want to be using a graph - an excellent library is [networkx](http://networkx.lanl.gov/index.html)
An example:
```
import networkx as nx
g = nx.DiGraph()
g.add_edge('A', 'B')
g.add_edge('A', 'E')
g.add_edge('B', 'C')
g.add_edge('B', 'D')
print 'A' + ''.join(node[1] for node in (nx.bfs_edges(g, 'A')))
# ABECD
```
|
61,590,927
|
I have a rather simple program using `dask`:
```
import dask.array as darray
import numpy as np
X = np.array([[1.,2.,3.],
[4.,5.,6.],
[7.,8.,9.]])
arr = darray.from_array(X)
arr = arr[:,0]
a = darray.min(arr)
b = darray.max(arr)
quantiles = darray.linspace(a, b, 4)
print(np.array(quantiles))
```
Running this program results in an error like this:
```
Traceback (most recent call last):
File "discretization.py", line 12, in <module>
print(np.array(quantiles))
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__
x = np.array(x)
[Previous line repeated 325 more times]
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1337, in __array__
x = self.compute()
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 166, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 434, in compute
dsk = collections_to_dsk(collections, optimize_graph, **kwargs)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in collections_to_dsk
[opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()],
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in <listcomp>
[opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()],
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/optimization.py", line 42, in optimize
dsk = optimize_blockwise(dsk, keys=keys)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 547, in optimize_blockwise
out = _optimize_blockwise(graph, keys=keys)
File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 572, in _optimize_blockwise
if isinstance(layers[layer], Blockwise):
File "/anaconda3/lib/python3.7/abc.py", line 139, in __instancecheck__
return _abc_instancecheck(cls, instance)
RecursionError: maximum recursion depth exceeded in comparison
```
Python is version 3.7.1 and `dask` is version 2.15.0.
What is wrong with this program?
Thanks in advance.
|
2020/05/04
|
[
"https://Stackoverflow.com/questions/61590927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2595776/"
] |
Simply use `dnf`
```sh
dnf -y install gcc-toolset-9-gcc gcc-toolset-9-gcc-c++
source /opt/rh/gcc-toolset-9/enable
```
ref: <https://centos.pkgs.org/8/centos-appstream-x86_64/gcc-toolset-9-gcc-9.1.1-2.4.el8.x86_64.rpm.html>
Note: `source` won't work inside a Dockerfile so prefer to use:
```
ENV PATH=/opt/rh/gcc-toolset-9/root/usr/bin:$PATH
```
or better
```
RUN dnf -y install gcc-toolset-9-gcc gcc-toolset-9-gcc-c++
RUN echo "source /opt/rh/gcc-toolset-9/enable" >> /etc/bashrc
SHELL ["/bin/bash", "--login", "-c"]
RUN gcc --version
```
|
this command work for me
```
dnf install gcc --best --allowerasing
```
|
55,422,150
|
This is my first time using python and matplotlib and I'd like to plot data from a CSV file.
The CSV file is in the form of:
```
10/03/2018 00:00,454.95,594.86
```
with about 4000 rows. I'd like to plot the data from the second column vs the datetime for each row and the data from the third column vs the datetime for each row, both on the same plot.
This is my code so far but it's not working:
```
import matplotlib.pyplot as plt
import csv
import datetime
import re
T = []
X = []
Y = []
with open('Book2.csv','r') as csvfile:
plots = csv.reader(csvfile, delimiter=',')
for row in plots:
datetime_format = '%d/%m/%Y %H:%M'
date_time_data = datetime.datetime.strptime(row[0],datetime_format)
T.append(date_time_data)
X.append(float(row[1]))
Y.append(float(row[2]))
plt.plot(T,X, label='second column data vs datetime')
plt.plot(T,Y, label='third column data vs datetime')
plt.xlabel('DateTime')
plt.ylabel('Data')
plt.title('Interesting Graph\nCheck it out')
plt.legend()
plt.show()
```
Any help or guidance would be great. Many thanks! :)
|
2019/03/29
|
[
"https://Stackoverflow.com/questions/55422150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11245101/"
] |
Just use promises (or callbacks)
================================
I know that everyone hates JavaScript and so these anti-idiomatic transpilers and new language "features" exist to make JavaScript look like C# and whatnot but, honestly, it's just easier to use the language the way that it was originally designed (otherwise use Go or some other language that actually behaves the way that you want - and is more performant anyway). If you must expose async/await in your application, put it at the interface rather than littering it throughout.
My 2¢.
I'm just going to write some pseudo code to show you how easy this can be:
```
function doItAll(jobs) {
var results = [];
function next() {
var job = jobs.shift();
if (!job) {
return Promise.resolve(results);
}
return makeRequest(job.url).then(function (stuff) {
return updateDb().then(function (dbStuff) {
results.push(dbStuff);
}).then(next);
});
}
return next();
}
function makeRequest() {
return new Promise(function (resolve, reject) {
var resp = http.get('...', {...});
resp.on('end', function () {
// ... do whatever
resolve();
});
});
}
```
Simple. Easy to read. 1:1 correspondence between what the code looks like and what's actually happening. No trying to "force" JavaScript to behave counter to the way it was designed.
The longer you fight learning to understand async code, the longer it will take to understand it.
Just dive in and learn to write JavaScript "the JavaScript way"! :D
|
Here's my updated function that works properly and synchronously, getting the data one by one and adding it to the database before moving to the next one.
I made it by customizing @coolAJ86's answer, and I've marked that as the correct one, but I thought it would be helpful for people stumbling across this thread to see my final, working & tested version.
```
var geoApiUrl =
'https://maps.googleapis.com/maps/api/geocode/json?key=<<MY API KEY>>&address=';
doItAll(allJobs)
function doItAll(jobs) {
var results = [];
var errors = [];
function nextJob() {
var job = jobs.shift();
if (!job) {
return Promise.resolve(results);
}
var friendlyAddress =
geoApiUrl +
encodeURIComponent(job.addressLine1 + ' ' + job.postcode);
return makeRequest(friendlyAddress).then(function(result) {
if((result.results[0] === undefined) || (result.results[0].geometry === undefined)){
nextJob();
} else {
return knex('LOCATIONS')
.returning('*')
.insert({
UPRN: job.UPRN,
lat: result.results[0].geometry.location.lat,
lng: result.results[0].geometry.location.lng,
title: job.title,
postcode: job.postcode,
addressLine1: job.addressLine1,
theo_id: job.clientId
})
.then(function(data) {
// console.log('KNEX CALLBACK COMING')
// console.log(data[0])
console.log(data[0]);
results.push(data[0]);
nextJob();
})
.catch(function(err) {
console.log(err);
errors.push(job);
});
}
});
}
return nextJob();
}
function makeRequest(url) {
return new Promise(function(resolve, reject) {
https
.get(url, resp => {
let data = '';
resp.on('data', chunk => {
data += chunk;
});
// The whole response has been received. Print out the result.
resp.on('end', () => {
let result = JSON.parse(data);
resolve(result);
});
})
.on('error', err => {
console.log('Error: ' + err.message);
reject(err);
});
});
}
```
|
48,685,715
|
The problem is very simple:
I want to call a script from a rule and I would like that rule to both:
* Perform stdout and stderr redirection
* Access the snakemake variables from the script (variables can be both lists and literals)
If I use `shell:`, then I can perform the I/O redirection, but I cannot use the `snakemake` variables inside the script.
Note: Of course it is possible to pass the variables to the script as arguments from the shell. However, by doing so, the script cannot distinguish between a literal and a list variable.
If I instead use `script:`, then I can access my snakemake variables, but I cannot perform I/O redirection or use many other shell facilities.
---
An example to illustrate the question:
1) Using the `shell:`
```
rule create_hdf5:
input:
genes_file = OUTPUT_PATH+'/{sample}/outs/genes.tsv'
params:
# frequencies is a list!!!
frequencies = config['X_var']['freqs']
output:
HDF5_OUTPUT+'/{sample}.h5'
log:
out = LOG_FILES+'/create_hdf5/sample_{sample}.out',
err = LOG_FILES+'/create_hdf5/sample_{sample}.err'
shell:
'python scripts/create_hdf5.py {input.genes_file} {params.frequencies} {output} {threads} 2> {log.err} 1> {log.out} '
```
Problem with 1): Naturally, the python script thinks that each element in the frequencies list is a new argument. Moreover, the script cannot access the `snakemake` variable.
2) Using the `script:`
```
rule create_hdf5:
input:
genes_file = OUTPUT_PATH+'/{sample}/outs/genes.tsv'
params:
# frequencies is a list!!!
frequencies = config['X_var']['freqs']
output:
HDF5_OUTPUT+'/{sample}.h5'
log:
out = LOG_FILES+'/create_hdf5/sample_{sample}.out',
err = LOG_FILES+'/create_hdf5/sample_{sample}.err'
script:
'scripts/create_hdf5.py'
```
Problem with 2): I can access the snakemake variable inside the script. But now I cannot use the bash facilities such as I/O redirection.
I wonder if there is a way of achieving both (perhaps I am missing something in the snakemake documentation)?
Thanks in advance!
|
2018/02/08
|
[
"https://Stackoverflow.com/questions/48685715",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1935611/"
] |
If possible, I suggest you use the [argparse](https://docs.python.org/3/library/argparse.html) module to parse the input of your script, so that it can parse a list of arguments as such, using the `nargs="*"` option:
```python
def main():
"""Main function of the program."""
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
"-g", "--genes_file",
required=True,
help="Path to a file containing the genes.")
parser.add_argument(
"-o", "--output_file",
required=True,
help="Path to the output file.")
parser.add_argument(
"-f", "--frequencies",
nargs="*",
help="Space-separated list of frequencies.")
parser.add_argument(
"-t", "--threads",
type=int,
default=1,
help="Number of threads to use.")
args = parser.parse_args()
    # then use args.genes_file as a file name and args.frequencies as a list, etc.
```
And you would call this as follows:
```python
shell:
"""
python scripts/create_hdf5.py \\
-g {input.genes_file} -f {params.frequencies} \\
-o {output} -t {threads} 2> {log.err} 1> {log.out}
"""
```
|
You can access the log filenames within the python script via the `snakemake.log` variable, which is a list containing both filenames:
```
snakemake.log = [ LOG_FILES+'/create_hdf5/sample_1.out', LOG_FILES+'/create_hdf5/sample_1.err' ]
```
you can thus use this within your script to create log files for logging, e.g.
```
import logging
mylogger = logging.getLogger('My logger')
# create file handler
fh = logging.FileHandler(snakemake.log[0])
mylogger.addHandler(fh)
mylogger.error("Some error")
```
|
65,095,357
|
When I add new functions to a file, I can't import them, whether I just run the script in the terminal or launch `ipython` and try importing a function there. I have no `.pyc` files. It looks as if there is some kind of caching going on. I never faced such an issue before, even though I have been working with various projects for a while. Why could this happen?
What I see is the following: I launch `ipython` and the functions that were written a long time ago by other programmers can be imported fine. If I comment them out and save the file, they can still be imported without any issues. If I write new functions, they can't be imported.
The directory is a `git` directory; I cloned the repo, then created a new branch and switched to it. The Python version is `3.7.5`, and I am working in a virtual environment that I created some time ago and activated with `source activate py37`.
I don't know whether it's important, but I have an empty `__init__.py` in the folder where the script is located.
The code (I don't think it's relevant, but still):
```
import hail as hl
import os
class SeqrDataValidationError(Exception):
pass
# Get public properties of a class
def public_class_props(cls):
return {k: v for k, v in cls.__dict__.items() if k[:1] != '_'}
def hello():
print('hello')
```
`public_class_props` is an old function and can be imported, but `hello` can't.
|
2020/12/01
|
[
"https://Stackoverflow.com/questions/65095357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3815432/"
] |
We can use `pd.to_datetime` here with `errors='coerce'` to ignore the faulty dates. Then use the `dt.year` to calculate the difference:
```
df['date_until'] = pd.to_datetime(df['date_until'], format='%d.%m.%y', errors='coerce')
df['diff_year'] = df['date_until'].dt.year - df['year']
```
```
year date_until diff_year
0 2010 NaT NaN
1 2011 2013-06-30 2.0
2 2011 NaT NaN
3 2015 2018-06-30 3.0
4 2020 NaT NaN
```
|
For everybody who is trying to replace values just like I wanted to in the first place, here is how you could solve it:
```
for i in range(len(df)):
if pd.isna(df['date_until'].iloc[i]):
df['date_until'].iloc[i] = f'30.06.{df["year"].iloc[i] +1}'
if df['date_until'].iloc[i] == '-':
        df['date_until'].iloc[i] = f'30.06.{df["year"].iloc[i] +1}'
```
But @Erfan's approach is much cleaner
|
47,726,913
|
I am trying to run the following code to save information to the database. I have seen other answers, but it appears that their solutions are for older versions of Python/Django (they do not seem to work on the versions I am using now: Python 3.6.3 and Django 1.11.7).
```
if form.is_valid():
try:
item = form.save(commit=False)
item.tenantid = tenantid
item.save()
message = 'saving data was successful'
except DatabaseError as e:
message = 'Database Error: ' + str(e.message)
```
When doing so, I get the error message listed below. How can I fix this so I can get the message found at the DB level printed out?
```
'DatabaseError' object has no attribute 'message'
Request Method:
POST
Request URL:
http://127.0.0.1:8000/storeowner/edit/
Django Version:
1.11.7
Exception Type:
AttributeError
Exception Value:
'DatabaseError' object has no attribute 'message'
Exception Location:
C:\WORK\AppPython\ContractorsClubSubModuleDEVELOP\libmstr\storeowner\views.py in edit_basic_info, line 40
Python Executable:
C:\WORK\Software\Python64bitv3.6\python.exe
Python Version:
3.6.3
Python Path:
['C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP',
'C:\\WORK\\Software\\OracleInstantClient64Bit\\instantclient_12_2',
'C:\\WORK\\Software\\Python64bitv3.6\\python36.zip',
'C:\\WORK\\Software\\Python64bitv3.6\\DLLs',
'C:\\WORK\\Software\\Python64bitv3.6\\lib',
'C:\\WORK\\Software\\Python64bitv3.6',
'C:\\Users\\dgmufasa\\AppData\\Roaming\\Python\\Python36\\site-packages',
'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libintgr',
'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libmstr',
'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libtrans',
'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libintgr',
'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libmstr',
'C:\\WORK\\TRASH\\tempforcustomer\\tempforcustomer\\libtempmstr',
'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libtrans',
'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages',
'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages\\django-1.11.7-py3.6.egg',
'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages\\pytz-2017.3-py3.6.egg']
Server time:
Sat, 9 Dec 2017 08:42:49 +0000
```
|
2017/12/09
|
[
"https://Stackoverflow.com/questions/47726913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2707727/"
] |
Just change
```
str(e.message)
```
to
```
str(e)
```
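A minimal, Django-free sketch of why this works: in Python 3 exceptions no longer carry a `.message` attribute, but `str()` returns the text passed to the constructor (the `DatabaseError` class here is a stand-in, not Django's):

```python
class DatabaseError(Exception):
    pass

try:
    raise DatabaseError("duplicate key value violates unique constraint")
except DatabaseError as e:
    # In Python 3 the exception has no .message attribute ...
    assert not hasattr(e, "message")
    # ... but str(e) gives the text passed to the constructor.
    message = "Database Error: " + str(e)

print(message)  # Database Error: duplicate key value violates unique constraint
```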
|
**some change**
> str(e.message)

**to**
> HttpResponse(e.message)
|
60,543,957
|
I have a project folder with different cloud functions folders e.g.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
Here is what I have right now: I push the code from each Cloud Functions folder to a Source Repository, one by one (separate repos for each function folder). Each repo has a trigger enabled that triggers Cloud Build, which then deploys the function.
The cloudbuild.yaml file I have is like this below:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function
- --runtime=python37
- --source=.
- --entry-point=function_main
- --trigger-topic=Function
- --region=europe-west3
```
Now, what I would like to do is make a single source repo so that whenever I change the code of one cloud function and push it, only that function gets deployed and the rest remain as before.
---
Update
------
Now I have also tried something like this below, but it deploys all the functions at the same time even though I am working on a single function.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-Cloud-Function-Folder2
-main.py
-requirements.txt
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
-requirements.txt
```
The cloudbuild.yaml file looks like this below:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function1
- --runtime=python37
- --source=./Cloud-Function-Folder1
- --entry-point=function1_main
- --trigger-topic=Function1
- --region=europe-west3
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function2
- --runtime=python37
- --source=./Cloud-Function-Folder2
- --entry-point=function2_main
- --trigger-topic=Function2
- --region=europe-west3
```
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60543957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6508416/"
] |
It's more complex, and you have to play with the limits and constraints of Cloud Build.
I do this:
* get the directories updated since the previous commit
* loop over these directories and do what I want
---
**Hypothesis 1**: all the subfolders are deployed by using the same commands
So, for this I put a `cloudbuild.yaml` at the root of my directory, and not in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't recover the .git file. Thus checkout the repo for this
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Copy only the .git file
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory
- name: 'python:3.7'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
# No strong isolation between each function, take care of conflicts!!
pip3 install -r requirements.txt
pytest
cd ..
done
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud functions deploy .........
cd ..
done
```
---
**Hypothesis 2**: the deployment is specific by subfolder
So, for this I put a `cloudbuild.yaml` at the root of my directory, and another one in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't recover the .git file. Thus checkout the repo for this
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Copy only the .git file
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory. Here launch a cloud build
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud builds submit
cd ..
done
```
Be careful with the [timeout](https://cloud.google.com/cloud-build/docs/build-config#timeout_2) here, because you can trigger a lot of Cloud Builds and they take time.
---
If you want to run your build manually, don't forget to add $BRANCH\_NAME as a substitution variable
```
gcloud builds submit --substitutions=BRANCH_NAME=master
```
|
If you create a single source repo, you have to create a single ['cloudbuild.yaml' configuration file](https://cloud.google.com/cloud-build/docs/build-config). You need to connect this single repo to Cloud Build, then create a [build trigger](https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers#build_trigger) and select this repo as its source. You also need to [configure deployment](https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-functions#continuous_deployment); then anytime you push new code to your repository, you will automatically trigger a build and deploy on Cloud Functions.
|
60,543,957
|
I have a project folder with different cloud functions folders e.g.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
Here is what I have right now: I push the code from each Cloud Functions folder to a Source Repository, one by one (separate repos for each function folder). Each repo has a trigger enabled that triggers Cloud Build, which then deploys the function.
The cloudbuild.yaml file I have is like this below:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function
- --runtime=python37
- --source=.
- --entry-point=function_main
- --trigger-topic=Function
- --region=europe-west3
```
Now, what I would like to do is make a single source repo so that whenever I change the code of one cloud function and push it, only that function gets deployed and the rest remain as before.
---
Update
------
Now I have also tried something like this below, but it deploys all the functions at the same time even though I am working on a single function.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-Cloud-Function-Folder2
-main.py
-requirements.txt
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
-requirements.txt
```
The cloudbuild.yaml file looks like this below:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function1
- --runtime=python37
- --source=./Cloud-Function-Folder1
- --entry-point=function1_main
- --trigger-topic=Function1
- --region=europe-west3
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function2
- --runtime=python37
- --source=./Cloud-Function-Folder2
- --entry-point=function2_main
- --trigger-topic=Function2
- --region=europe-west3
```
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60543957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6508416/"
] |
This is quite straightforward, however you need to control the behavior on the Build Trigger side of things and not on the `cloudbuild.yaml`. Conceptually, you want to limit the cloud build trigger behavior and limit it to certain changes within a repo.
As such, make use of the regEx glob include filter in the Build Trigger page:

You will build one *trigger per cloud function* (or cloud run) and set the "Included files filter (glob)" as follows:
* Cloud-Function1-Trigger: `Project_Folder/Cloud-Function-Folder1/**`
* Cloud-Function2-Trigger: `Project_Folder/Cloud-Function-Folder2/**`
...
Assumptions:
1. For each trigger, the repo and branch are set such that the root of the repo contains Project\_Folder/
2. The repo and branch are set appropriately so that the trigger can locate and access files in the path Project\_Folder/Cloud-Function-Folder1/\*
When I have more than 2-3 cloud functions, I tend to use Terraform to create all the required triggers in an automated fashion.
|
If you create a single source repo, you have to create a single ['cloudbuild.yaml' configuration file](https://cloud.google.com/cloud-build/docs/build-config). You need to connect this single repo to Cloud Build, then create a [build trigger](https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers#build_trigger) and select this repo as its source. You also need to [configure deployment](https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-functions#continuous_deployment); then anytime you push new code to your repository, you will automatically trigger a build and deploy on Cloud Functions.
|
60,543,957
|
I have a project folder with different cloud functions folders e.g.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
What I have right now: I push the code from each Cloud Functions folder to its own Source Repository (a separate repo for each function folder). Each repo has a trigger enabled that runs Cloud Build and then deploys the function.
The cloudbuild.yaml file I have looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function
- --runtime=python37
- --source=.
- --entry-point=function_main
- --trigger-topic=Function
- --region=europe-west3
```
Now I would like to move to a single source repo, so that whenever I change the code of one cloud function and push it, only that function gets deployed and the rest remain as before.
---
Update
------
I have also tried the layout below, but it deploys all the functions at the same time even though I am working on a single function.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-Cloud-Function-Folder2
-main.py
-requirements.txt
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
-requirements.txt
```
The cloudbuild.yaml file looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function1
- --runtime=python37
- --source=./Cloud-Function-Folder1
- --entry-point=function1_main
- --trigger-topic=Function1
- --region=europe-west3
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function2
- --runtime=python37
- --source=./Cloud-Function-Folder2
- --entry-point=function2_main
- --trigger-topic=Function2
- --region=europe-west3
```
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60543957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6508416/"
] |
You can do this by creating a folder for each of the functions, like this:
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
and creating a `cloudbuild.yaml` in each directory, which would look like this:
```
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Cloud_Function_1
- --source=.
- --trigger-http
- --runtime=python37
- --allow-unauthenticated
dir: "Cloud-Function-Folder1"
```
In Cloud Build, create a trigger for each function with an included-files filter that only matches files from that function's folder, and manually specify `functions-folder-name/cloudbuild.yaml` as the build configuration for each trigger.
[This](https://torbjornzetterlund.com/using-cloud-build-as-you-ci-co-for-multiple-cloud-functions/) blog post by Torbjorn Zetterlund walks through the entire process of deploying multiple cloud functions from a single GitHub repo, using an include filter to deploy only the changed function.
|
If you keep your code in a single source repo, you can create a single ['cloudbuild.yaml' configuration file](https://cloud.google.com/cloud-build/docs/build-config). Connect this single repo to Cloud Build, then create a [build trigger](https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers#build_trigger) and select this repo as the source. You also need to [configure deployment](https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-functions#continuous_deployment); then, any time you push new code to your repository, you will automatically trigger a build and deploy to Cloud Functions.
|
60,543,957
|
I have a project folder with different cloud functions folders e.g.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
What I have right now: I push the code from each Cloud Functions folder to its own Source Repository (a separate repo for each function folder). Each repo has a trigger enabled that runs Cloud Build and then deploys the function.
The cloudbuild.yaml file I have looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function
- --runtime=python37
- --source=.
- --entry-point=function_main
- --trigger-topic=Function
- --region=europe-west3
```
Now I would like to move to a single source repo, so that whenever I change the code of one cloud function and push it, only that function gets deployed and the rest remain as before.
---
Update
------
I have also tried the layout below, but it deploys all the functions at the same time even though I am working on a single function.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-Cloud-Function-Folder2
-main.py
-requirements.txt
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
-requirements.txt
```
The cloudbuild.yaml file looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function1
- --runtime=python37
- --source=./Cloud-Function-Folder1
- --entry-point=function1_main
- --trigger-topic=Function1
- --region=europe-west3
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function2
- --runtime=python37
- --source=./Cloud-Function-Folder2
- --entry-point=function2_main
- --trigger-topic=Function2
- --region=europe-west3
```
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60543957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6508416/"
] |
It's more complex, and you have to work within the limits and constraints of Cloud Build.
I do this:
* get the directories updated since the previous commit
* loop over these directories and do what I want
---
**Hypothesis 1**: all the subfolders are deployed by using the same commands
So, for this I put a `cloudbuild.yaml` at the root of my directory, and not in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't check out the .git folder, so clone the repo to recover it
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Move only the .git folder
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory
- name: 'python:3.7'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
# No strong isolation between each function, take care of conflicts!!
pip3 install -r requirements.txt
pytest
cd ..
done
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud functions deploy .........
cd ..
done
```
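The core of this approach is the shell pipeline in the first step, which turns `git diff --name-only` output into a list of changed top-level folders. As a rough, testable re-implementation of the same logic (the function name is mine, for illustration only):

```python
# Mirrors: git diff --name-only ... | grep "/" | cut -d"/" -f1 | uniq
# Input: changed file paths; output: top-level folders to rebuild/redeploy.
def changed_dirs(diff_lines):
    dirs = []
    for path in diff_lines:
        if "/" not in path:              # grep "/" drops root-level files
            continue
        top = path.split("/")[0]         # cut -d"/" -f1
        if not dirs or dirs[-1] != top:  # uniq collapses adjacent repeats
            dirs.append(top)
    return dirs

diff = [
    "Cloud-Function-Folder1/main.py",
    "Cloud-Function-Folder1/requirements.txt",
    "Cloud-Function-Folder3/main.py",
    "README.md",
]
print(changed_dirs(diff))  # ['Cloud-Function-Folder1', 'Cloud-Function-Folder3']
```

Note that, like `uniq`, this only collapses *adjacent* duplicates; `git diff --name-only` lists paths sorted, so that is sufficient here.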
---
**Hypothesis 2**: the deployment is specific by subfolder
So, for this I put a `cloudbuild.yaml` at the root of my directory, and another one in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't check out the .git folder, so clone the repo to recover it
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Move only the .git folder
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory. Here launch a cloud build
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud builds submit
cd ..
done
```
Be careful with the [timeout](https://cloud.google.com/cloud-build/docs/build-config#timeout_2) here, because you can trigger a lot of Cloud Build jobs and they take time.
---
If you want to run your build manually, don't forget to add $BRANCH\_NAME as a substitution variable:
```
gcloud builds submit --substitutions=BRANCH_NAME=master
```
|
This is quite straightforward, but you need to control the behavior on the build-trigger side of things, not in the `cloudbuild.yaml`. Conceptually, you want to limit the Cloud Build trigger so that it only fires for certain changes within the repo.
As such, make use of the glob include filter on the Build Trigger page:

You will build one *trigger per cloud function* (or Cloud Run service) and set the "Included files filter (glob)" as follows:
* Cloud-Function1-Trigger
Project\_Folder/Cloud-Function-Folder1/\*\*
* Cloud-Function2-Trigger
Project\_Folder/Cloud-Function-Folder2/\*\*
...
Assumptions:
1. For each trigger, the repo and branch are set such that the root of the repo contains Project\_Folder/
2. The repo and branch are set appropriately so that the trigger can locate and access files under the path Project\_Folder/Cloud-Function-Folder1/\*
When I have more than 2-3 cloud functions, I tend to use Terraform to create all the required triggers in an automated fashion.
|
60,543,957
|
I have a project folder with different cloud functions folders e.g.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
What I have right now: I push the code from each Cloud Functions folder to its own Source Repository (a separate repo for each function folder). Each repo has a trigger enabled that runs Cloud Build and then deploys the function.
The cloudbuild.yaml file I have looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function
- --runtime=python37
- --source=.
- --entry-point=function_main
- --trigger-topic=Function
- --region=europe-west3
```
Now I would like to move to a single source repo, so that whenever I change the code of one cloud function and push it, only that function gets deployed and the rest remain as before.
---
Update
------
I have also tried the layout below, but it deploys all the functions at the same time even though I am working on a single function.
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-Cloud-Function-Folder2
-main.py
-requirements.txt
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
-requirements.txt
```
The cloudbuild.yaml file looks like this:
```
steps:
- name: 'python:3.7'
entrypoint: 'bash'
args:
- '-c'
- |
pip3 install -r requirements.txt
pytest
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function1
- --runtime=python37
- --source=./Cloud-Function-Folder1
- --entry-point=function1_main
- --trigger-topic=Function1
- --region=europe-west3
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Function2
- --runtime=python37
- --source=./Cloud-Function-Folder2
- --entry-point=function2_main
- --trigger-topic=Function2
- --region=europe-west3
```
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60543957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6508416/"
] |
It's more complex, and you have to work within the limits and constraints of Cloud Build.
I do this:
* get the directories updated since the previous commit
* loop over these directories and do what I want
---
**Hypothesis 1**: all the subfolders are deployed by using the same commands
So, for this I put a `cloudbuild.yaml` at the root of my directory, and not in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't check out the .git folder, so clone the repo to recover it
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Move only the .git folder
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory
- name: 'python:3.7'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
# No strong isolation between each function, take care of conflicts!!
pip3 install -r requirements.txt
pytest
cd ..
done
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud functions deploy .........
cd ..
done
```
---
**Hypothesis 2**: the deployment is specific by subfolder
So, for this I put a `cloudbuild.yaml` at the root of my directory, and another one in the subfolders
```
steps:
- name: 'gcr.io/cloud-builders/git'
entrypoint: /bin/bash
args:
- -c
- |
# Cloud Build doesn't check out the .git folder, so clone the repo to recover it
git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ;
# Move only the .git folder
mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file
git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff
# Do what you want, by performing a loop in to the directory. Here launch a cloud build
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: /bin/bash
args:
- -c
- |
for i in $$(cat /workspace/diff); do
cd $$i
gcloud builds submit
cd ..
done
```
Be careful with the [timeout](https://cloud.google.com/cloud-build/docs/build-config#timeout_2) here, because you can trigger a lot of Cloud Build jobs and they take time.
---
If you want to run your build manually, don't forget to add $BRANCH\_NAME as a substitution variable:
```
gcloud builds submit --substitutions=BRANCH_NAME=master
```
|
You can do this by creating a folder for each of the functions, like this:
```
Project_Folder
-Cloud-Function-Folder1
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder2
-main.py
-requirements.txt
-cloudbuild.yaml
-Cloud-Function-Folder3
-main.py
-requirements.txt
-cloudbuild.yaml
--------- and so on!
```
and creating a `cloudbuild.yaml` in each directory, which would look like this:
```
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args:
- functions
- deploy
- Cloud_Function_1
- --source=.
- --trigger-http
- --runtime=python37
- --allow-unauthenticated
dir: "Cloud-Function-Folder1"
```
In Cloud Build, create a trigger for each function with an included-files filter that only matches files from that function's folder, and manually specify `functions-folder-name/cloudbuild.yaml` as the build configuration for each trigger.
[This](https://torbjornzetterlund.com/using-cloud-build-as-you-ci-co-for-multiple-cloud-functions/) blog post by Torbjorn Zetterlund walks through the entire process of deploying multiple cloud functions from a single GitHub repo, using an include filter to deploy only the changed function.
|
30,857,579
|
To be more specific: is there a way for a Python program to continue running even after it's closed (for example, automatically opening itself at a certain time), like a Gmail notification? This is for an alarm project, and I want it to ring/open itself even if the user closes the window. Is there a way to script this? If so, how? Any help would be appreciated!
|
2015/06/16
|
[
"https://Stackoverflow.com/questions/30857579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343751/"
] |
You can implement a global `operator>>` for your `Number` class, e.g.:
```
std::istream& operator>>(std::istream &strm, Number &n)
{
int value;
strm >> value; // or however you need to read the value...
n = Number(value);
return strm;
}
```
Or:
```
class Number {
//...
friend std::istream& operator>>(std::istream &strm, Number &n);
};
std::istream& operator>>(std::istream &strm, Number &n)
{
strm >> n.value; // or however you need to read the value...
return strm;
}
```
Usually when you override the global streaming operators, you should implement member methods to handle the actual streaming, and then call those methods in the global operators. This allows the class to decide how best to stream itself:
```
class Number {
//...
void readFrom(std::istream &strm);
};
void Number::readFrom(std::istream &strm)
{
strm >> value; // or however you need to read the value...
}
std::istream& operator>>(std::istream &strm, Number &n)
{
n.readFrom(strm);
return strm;
}
```
If you are not allowed to define a custom `operator>>`, you can still use the `readFrom()` approach, at least:
```
for (int i = 0; i < length; i++) {
std::cout << "Enter the number value ";
numbers[i].readFrom(std::cin);
}
```
|
You will probably have to return `number` at the end of the function `getNumbersFromUser` to avoid a memory leak.
Secondly, the line `cin >> number[i]` tries to read input into a variable of type `Number`, which is not allowed by default: `>>` is only defined for primitive data types (int, char, double, etc.) and some built-in types like `std::string`. To read input into your own type, you have to overload the stream extraction operator `>>`, or write a member function that reads into the class data member(s) and call that function. For example, if your function looks like this:
```
void Number::takeInput () {
cin >> val;
}
```
Now, inside your loop, write `number[i].takeInput()` instead of `cin >> number[i]`.
|