14,938,541
I use matplotlib to plot a scatter chart:

![enter image description here](https://i.stack.imgur.com/Nllqp.png)

And label the bubbles using a transparent box, following the tip at [How to annotate point on a scatter automatically placed arrow](https://stackoverflow.com/q/9074996/7758804). Here is the code:

```
if show_annote:
    for i in range(len(x)):
        annote_text = annotes[i][0][0]  # STK_ID
        ax.annotate(annote_text, xy=(x[i], y[i]), xytext=(-10, 3),
                    textcoords='offset points',
                    ha='center', va='bottom',
                    bbox=dict(boxstyle='round,pad=0.2', fc='yellow', alpha=0.2),
                    fontproperties=ANNOTE_FONT)
```

and the resulting plot:

![enter image description here](https://i.stack.imgur.com/v5FET.png)

But there is still room for improvement to reduce overlap (for instance, the label box offset is fixed at (-10, 3)). Are there algorithms that can:

1. dynamically change the offset of a label box according to the crowdedness of its neighbourhood
2. dynamically place the label box remotely and add an arrow line between bubble and label box
3. somewhat change the label orientation
4. prefer a label box overlapping a bubble over a label box overlapping another label box?

I just want to make the chart easy for human eyes to comprehend, so some overlap is OK; it is not as rigid a constraint as <http://en.wikipedia.org/wiki/Automatic_label_placement> suggests, and the bubble count within the chart is less than 150 most of the time. I find the so-called force-based label placement (<http://bl.ocks.org/MoritzStefaner/1377729>) quite interesting, but I don't know if there is any Python code/package available that implements the algorithm. I am not an academic and not looking for an optimal solution, and my Python code needs to label many, many charts, so speed/memory is in scope. I am looking for a quick and effective solution. Any help (code, algorithm, tips, thoughts) on this subject? Thanks.
2013/02/18
[ "https://Stackoverflow.com/questions/14938541", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072888/" ]
Another option is my library `adjustText`, written specifically for this purpose (<https://github.com/Phlya/adjustText>).

```
import numpy as np
import matplotlib.pyplot as plt
from adjustText import adjust_text

np.random.seed(2016)
N = 50
scatter_data = np.random.rand(N, 3)

fig, ax = plt.subplots()
ax.scatter(scatter_data[:, 0], scatter_data[:, 1],
           c=scatter_data[:, 2], s=scatter_data[:, 2] * 150)
labels = ['ano_{}'.format(i) for i in range(N)]
texts = []
for x, y, text in zip(scatter_data[:, 0], scatter_data[:, 1], labels):
    texts.append(ax.text(x, y, text))
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/YipY3.png)](https://i.stack.imgur.com/YipY3.png)

Adding a single `adjust_text` call before `plt.show()` repels the labels from each other and draws connecting arrows:

```
adjust_text(texts, force_text=0.05,
            arrowprops=dict(arrowstyle="-|>", color='r', alpha=0.5))
```

[![enter image description here](https://i.stack.imgur.com/qeDW0.png)](https://i.stack.imgur.com/qeDW0.png)

Note that it doesn't repel labels from the bubbles themselves, only from their centers and from other texts.
We can use Plotly for this. It can't fully avoid overlap when there is a lot of data, but the chart is interactive, so we can zoom in and out instead.

```
import plotly.express as px

df = px.data.gapminder().query("year == 2007 and continent == 'Americas'")
fig = px.scatter(df, x="gdpPercap", y="lifeExp", text="country",
                 log_x=True, size_max=100, color="lifeExp",
                 title="Life Expectancy")
fig.update_traces(textposition='top center')
fig.show()
```

Output:

[![enter image description here](https://i.stack.imgur.com/Ei4n6.gif)](https://i.stack.imgur.com/Ei4n6.gif)
Just created another quick solution that is also very fast: [textalloc](https://github.com/ckjellson/textalloc). In this case you could do something like this:

```
import textalloc as ta
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(2022)
N = 30
scatter_data = np.random.rand(N, 3) * 10

fig, ax = plt.subplots()
ax.scatter(scatter_data[:, 0], scatter_data[:, 1],
           c=scatter_data[:, 2], s=scatter_data[:, 2] * 50,
           zorder=10, alpha=0.5)
text_list = ['ano-{}'.format(i) for i in range(N)]
ta.allocate_text(fig, ax, scatter_data[:, 0], scatter_data[:, 1],
                 text_list,
                 x_scatter=scatter_data[:, 0], y_scatter=scatter_data[:, 1],
                 max_distance=0.2, min_distance=0.04, margin=0.039,
                 linewidth=0.5, nbr_candidates=400)
plt.show()
```

[![scatterplot](https://i.stack.imgur.com/jKY77.png)](https://i.stack.imgur.com/jKY77.png)
34,314,022
The documentation linked below seems to say that top-level classes can be pickled, as well as their instances. But based on the answers to my previous [question](https://stackoverflow.com/q/34261379/3904031) that seems not to be correct. In the script I posted, pickle accepts the class object and writes a file, but this is not useful.

THIS IS MY QUESTION: Is this documentation wrong, or is there something more subtle I don't understand? Also, should pickle be generating some kind of error message in this case?

<https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled>

> The following types can be pickled:
>
> * None, True, and False
> * integers, long integers, floating point numbers, complex numbers
> * normal and Unicode strings
> * tuples, lists, sets, and dictionaries containing only picklable objects
> * functions defined at the top level of a module
> * built-in functions defined at the top level of a module
> * **classes that are defined at the top level of a module** (*my bold*)
> * instances of such classes whose **dict** or the result of calling **getstate**() is picklable (see section The pickle protocol for details).
2015/12/16
[ "https://Stackoverflow.com/questions/34314022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
Make a class that is *defined at the top level of a module*:

**foo.py**:

```
class Foo(object):
    pass
```

Then running a separate script, **script.py**:

```
import pickle
import foo

with open('/tmp/out.pkl', 'w') as f:
    pickle.dump(foo.Foo, f)

del foo

with open('/tmp/out.pkl', 'r') as f:
    cls = pickle.load(f)

print(cls)
```

prints

```
<class 'foo.Foo'>
```

---

Note that the pickle file, `out.pkl`, merely contains *strings* which name the defining module and the name of the class. It does not store the definition of the class:

```
cfoo
Foo
p0
.
```

Therefore, *at the time of unpickling*, the defining module, `foo`, must contain the definition of the class. If you delete the class from the defining module

```
del foo.Foo
```

then you'll get the error

```
AttributeError: 'module' object has no attribute 'Foo'
```
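The same experiment can be run in one self-contained Python 3 script using any class defined at a module's top level, e.g. one from the standard library: the pickle payload records only the module and class *names*, so loading it back simply looks the class up again and returns the identical object.

```python
import pickle
import collections

# Pickle the class object itself (a class defined at a module's top level)
data = pickle.dumps(collections.OrderedDict)

# The payload stores only the module and class names, not the class body
assert b"collections" in data
assert b"OrderedDict" in data

# Unpickling imports the module and looks the name up again,
# so we get back the very same class object
restored = pickle.loads(data)
assert restored is collections.OrderedDict
```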
It's totally possible to pickle a class instance in python… while also saving the code to reconstruct the class and the instance's state. If you want to hack together a solution on top of `pickle`, or use a "trojan horse" `exec` based method here's how to do it: [How to unpickle an object whose class exists in a different namespace (python)?](https://stackoverflow.com/questions/14238837/how-to-unpickle-an-object-whose-class-exists-in-a-different-namespace-python?rq=1) Or, if you use `dill`, you have a `dump` function that already knows how to store a class instance, the class code, and the instance state: [How to recover a pickled class and its instances](https://stackoverflow.com/questions/34261379/how-to-recover-a-pickled-class-and-its-instances/34397001#34397001) [Pickle python class instance plus definition](https://stackoverflow.com/questions/6726183/pickle-python-class-instance-plus-definition/28095208#28095208) I'm the `dill` author, and I created `dill` in part to be able to ship class instances and class methods across `multiprocessing`. [Can't pickle <type 'instancemethod'> when using python's multiprocessing Pool.map()](https://stackoverflow.com/questions/1816958/cant-pickle-type-instancemethod-when-using-pythons-multiprocessing-pool-ma/21345273#21345273)
22,734,148
I'm trying to check if a number is a perfect square. However, I am dealing with extraordinarily large numbers, so Python thinks the square root is infinity for some reason; it gets up to about 1.1 × 10^154 before the code returns "Inf". Is there any way to get around this? Here is the code (the `lst` variable just holds a bunch of really, really big numbers):

```
import math
from decimal import Decimal

def main():
    for i in lst:
        root = math.sqrt(Decimal(i))
        print(root)
        if int(root + 0.5) ** 2 == i:
            print(str(i) + " True")
```
2014/03/29
[ "https://Stackoverflow.com/questions/22734148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3476226/" ]
I think you need to take a look at the [BigFloat](https://pythonhosted.org/bigfloat/) module, e.g.:

```
import bigfloat as bf

b = bf.BigFloat('1e1000', bf.precision(21))
print bf.sqrt(b)
```

Prints `BigFloat.exact('9.9999993810013282e+499', precision=53)`
`math.sqrt()` converts the argument to a Python float, which has a maximum value around 10^308. You should probably look at using the [gmpy2](https://code.google.com/p/gmpy/) library; gmpy2 provides very fast multiple-precision arithmetic.

If you want to check for arbitrary powers, the function `gmpy2.is_power()` will return `True` if a number is a perfect power. Note that it may be a cube or a fifth power, so you will still need to check for the power you are interested in.

```
>>> gmpy2.is_power(456789**372)
True
```

You can use `gmpy2.isqrt_rem()` to check if a number is an exact square:

```
>>> gmpy2.isqrt_rem(9)
(mpz(3), mpz(0))
>>> gmpy2.isqrt_rem(10)
(mpz(3), mpz(1))
```

And `gmpy2.iroot_rem()` to check for arbitrary powers:

```
>>> gmpy2.iroot_rem(13**7 + 1, 7)
(mpz(13), mpz(1))
```
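If pulling in gmpy2 is more than you need, note that since Python 3.8 the standard library's `math.isqrt()` computes the exact integer square root of arbitrarily large integers without ever touching floats, which is enough for a perfect-square test:

```python
import math

def is_perfect_square(n):
    """True if n is a perfect square; exact for arbitrarily large ints."""
    if n < 0:
        return False
    r = math.isqrt(n)  # exact integer square root (Python 3.8+)
    return r * r == n

print(is_perfect_square(456789**372))      # True: an even power is a square
print(is_perfect_square(456789**372 + 1))  # False
```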
Replace `math.sqrt(Decimal(i))` with `Decimal(i).sqrt()` to prevent your `Decimal`s decaying into `float`s
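One caveat with this approach (my note, not the answerer's): `Decimal.sqrt()` honours the context precision, which defaults to 28 significant digits, so for very large inputs you must raise the precision before an exactness test on the root is meaningful:

```python
from decimal import Decimal, getcontext

n = 12345678901234567890123456789 ** 2  # a perfect square with ~58 digits

getcontext().prec = 60  # enough digits; the default 28 would round the root
root = Decimal(n).sqrt()

print(root == root.to_integral_value())  # True: the root is an exact integer
print(int(root) ** 2 == n)               # True
```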
@casevh has the right answer -- use a library that can do math on arbitrarily large integers. Since you're looking for squares, you presumably are working with integers, and one could argue that using floating-point types (including `decimal.Decimal`) is, in some sense, inelegant. You definitely *shouldn't* use Python's float type; it has limited precision (about 16 decimal places). If you do use `decimal.Decimal`, be careful to specify the precision (which will depend on how big your numbers are). Since Python has a big integer type, one can write a reasonably simple algorithm to check for squareness; see my implementation of such an algorithm, along with illustrations of problems with float, and how you could use `decimal.Decimal`, below.

```
import math
import decimal

def makendigit(n):
    """Return an arbitraryish n-digit number"""
    return sum((j%9+1)*10**i for i,j in enumerate(range(n)))

x=makendigit(30)

# it looks like float will work...
print 'math.sqrt(x*x) - x: %.17g' % (math.sqrt(x*x) - x)
# ...but actually it won't
print 'math.sqrt(x*x+1) - x: %.17g' % (math.sqrt(x*x+1) - x)

# by default Decimal won't be sufficient...
print 'decimal.Decimal(x*x).sqrt() - x:',decimal.Decimal(x*x).sqrt() - x
# ...you need to specify the precision
print 'decimal.Decimal(x*x).sqrt(decimal.Context(prec=100)) - x:',decimal.Decimal(x*x).sqrt(decimal.Context(prec=100)) - x

def issquare_decimal(y,prec=1000):
    x=decimal.Decimal(y).sqrt(decimal.Context(prec=prec))
    return x==x.to_integral_value()

print 'issquare_decimal(x*x):',issquare_decimal(x*x)
print 'issquare_decimal(x*x+1):',issquare_decimal(x*x+1)

# you can check for "squareness" without going to floating point.
# one option is a bisection search; this Newton's method approach
# should be faster.
# For "industrial use" you should use gmpy2 or some similar "big
# integer" library.
def isqrt(y):
    """Find largest integer <= sqrt(y)"""
    if not isinstance(y,(int,long)):
        raise ValueError('arg must be an integer')
    if y<0:
        raise ValueError('arg must be non-negative')
    if y in (0,1):
        return y
    x0=y//2
    while True:
        # newton's rule
        x1=(x0**2+y)//2//x0
        # we don't always converge to x0==x1, e.g., for y=3
        if abs(x1-x0)<=1:
            # nearly converged; find the biggest
            # integer satisfying our condition
            x=max(x0,x1)
            if x**2>y:
                while x**2>y:
                    x-=1
            else:
                while (x+1)**2<=y:
                    x+=1
            return x
        x0=x1

def issquare(y):
    """Return true if non-negative integer y is a perfect square"""
    return y==isqrt(y)**2

print 'isqrt(x*x)-x:',isqrt(x*x)-x
print 'issquare(x*x):',issquare(x*x)
print 'issquare(x*x+1):',issquare(x*x+1)
```
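The snippet above is Python 2 (`print` statements, `long`). A Python 3 port of the same integer-only idea, using the more common form of the Newton step (function names mine), might look like this:

```python
def isqrt3(y):
    """Largest integer x with x*x <= y, via integer Newton iteration."""
    if y < 0:
        raise ValueError("arg must be non-negative")
    if y < 2:
        return y
    x = y
    while True:
        x_next = (x + y // x) // 2
        if x_next >= x:  # the iteration decreases until it hits floor(sqrt(y))
            return x
        x = x_next

def issquare3(y):
    """Return True if non-negative integer y is a perfect square."""
    return y == isqrt3(y) ** 2

print(issquare3(12345678901234567890 ** 2))      # True
print(issquare3(12345678901234567890 ** 2 + 1))  # False
```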
30,326,654
I'm following this for the django manage.py module: <http://docs.ansible.com/django_manage_module.html>. For example, one of my tasks looks like:

```
- name: Django migrate
  django_manage: command=migrate app_path={{app_path}} settings={{django_settings}}
  tags:
    - django
```

This works perfectly fine with Python 2 (the default in Ubuntu), but when I try it with a Python 3 Django project it throws this error:

```
failed: [123.456.200.000] => (item=school) => {"cmd": "python manage.py makemigrations --noinput school --settings=myproj.settings.production", "failed": true, "item": "school", "path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games", "state": "absent", "syspath": ["/home/ubuntu/.ansible/tmp/ansible-tmp-1432039779.41-30449122707918", "/usr/lib/python2.7", "/usr/lib/python2.7/plat-x86_64-linux-gnu", "/usr/lib/python2.7/lib-tk", "/usr/lib/python2.7/lib-old", "/usr/lib/python2.7/lib-dynload", "/usr/local/lib/python2.7/dist-packages", "/usr/lib/python2.7/dist-packages"]}
msg: :stderr: Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
```

From this error it seems Ansible uses Python 2 by default. Can we change this to Python 3, or is there another workaround?

PS: `pip freeze` confirms that Django 1.8 is installed (for Python 3, using pip3).

Suggestion: when I run `ubuntu@ubuntu:/srv/myproj$ python3 manage.py migrate` it works fine, so I'm thinking of running the command directly, something like:

```
- name: Django migrate
  command: python3 manage.py migrate
  tags:
    - django
```

But how do I pass the project path or the `manage.py` file's path? There is only an option to pass settings, something like `--settings=myproject.settings.main`. Can we do it by passing a direct command?
2015/05/19
[ "https://Stackoverflow.com/questions/30326654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4414786/" ]
From Ansible website <http://docs.ansible.com/intro_installation.html> > > Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed by default. On those systems, you should install one, and set the ‘ansible\_python\_interpreter’ variable in inventory (see Inventory) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the ‘raw’ module will be able to do it remotely. > > >
Ansible uses `python` to run the django command: <https://github.com/ansible/ansible-modules-core/blob/devel/web_infrastructure/django_manage.py#L237>

Your only solution is thus to override the executable that will be run, for instance by changing your PATH:

```
- file: src=/usr/bin/python3 dest=/home/user/.local/bin/python state=link

- name: Django migrate
  django_manage: command=migrate app_path={{app_path}} settings={{django_settings}}
  environment:
    PATH: "/home/user/.local/bin/:/bin:/usr/bin:/usr/local/bin"
```
If you edit the shebang in the Django manage.py file to be `#!/usr/bin/env python3` then you can ensure that python 3 will always be used to run your Django app. Tried successfully with Ansible 2.3.0 and Django 1.10.5. YMMV
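To address the question's last point directly: if you fall back to the plain `command` module, the project path can be supplied with the module's `chdir` argument, so `manage.py` runs from the project directory. A sketch (variable and settings names follow the question):

```yaml
- name: Django migrate (explicit python3)
  command: python3 manage.py migrate --settings=myproject.settings.main
  args:
    chdir: "{{ app_path }}"
  tags:
    - django
```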
25,863,769
I have a set (or a list) of numbers {1, 2.25, 5.63, 2.12, 7.98, 4.77}, and I want to find the combination of numbers from this set/list whose sum is closest to 10. How do I accomplish that in Python using elements from this collection?
2014/09/16
[ "https://Stackoverflow.com/questions/25863769", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1578720/" ]
If the problem size permits, you can use some friends in `itertools` to quickly brute-force through it:

```
from itertools import combinations, chain

s = {1, 2.25, 5.63, 2.12, 7.98, 4.77}

res = min(((comb, abs(sum(comb) - 10))
           for comb in chain(*[combinations(s, k) for k in range(1, len(s) + 1)])),
          key=lambda x: x[1])[0]
print(res)
```

Output:

```
(2.25, 5.63, 2.12)
```
It's an NP-hard problem. If your data are not too big, you can just test every single solution with code like this:

```
def combination(itemList):
    """ Returns all the combinations of items in the list """
    def wrapped(current_pack, itemList):
        if itemList == []:
            return [current_pack]
        else:
            head, tail = itemList[0], itemList[1:]
            return wrapped(current_pack + [head], tail) + wrapped(current_pack, tail)
    return wrapped([], itemList)

def select_best(combination_list, objective):
    """ Returns the element whose sum of elements is the nearest to the objective """
    def element_sum(combination):
        result = 0.0
        for element in combination:
            result += element
        return result

    best, weight = combination_list[0], element_sum(combination_list[0])
    for combination in combination_list:
        current_weight = element_sum(combination)
        if abs(current_weight - objective) < abs(weight - objective):
            best, weight = combination, current_weight
    return best

if __name__ == "__main__":
    items = [1, 2.25, 5.63, 2.12, 7.98, 4.77]
    combinations = combination(items)
    combinations.sort()
    print(combinations, len(combinations))  # 2^6 combinations -> 64
    best = select_best(combinations, 10.0)
    print(best)
```

This code will give you the best solution whatever input you give it. But as you can see, the number of combinations is 2^n, where n is the number of elements in your list. Try this with more than 50 elements and say goodbye to your RAM. So while this is perfectly correct from an algorithmic point of view, you could wait longer than a lifetime for an answer on real-world problem sizes. Metaheuristics and constraint-satisfaction algorithms could be useful for a more efficient approach.
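Both answers enumerate all 2^n subsets. If exactness matters but n grows into the 30-50 range, a meet-in-the-middle split keeps the result optimal while reducing the cost to roughly 2^(n/2) sums per half. A sketch (function names are mine):

```python
from bisect import bisect_left
from itertools import combinations

def closest_subset_sum(items, target):
    """Meet in the middle: enumerate each half's subset sums (2^(n/2) each),
    sort one side, and match the halves with binary search."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def subset_sums(part):
        out = []
        for k in range(len(part) + 1):
            for comb in combinations(part, k):
                out.append((sum(comb), comb))
        return out

    left_sums = subset_sums(left)
    right_sums = sorted(subset_sums(right))
    right_keys = [s for s, _ in right_sums]

    best_err, best_comb = float('inf'), ()
    for s, comb in left_sums:
        i = bisect_left(right_keys, target - s)
        for j in (i - 1, i):  # closest right-half sums on either side
            if 0 <= j < len(right_sums):
                rs, rcomb = right_sums[j]
                err = abs(s + rs - target)
                if err < best_err:
                    best_err, best_comb = err, comb + rcomb
    return best_comb

print(closest_subset_sum([1, 2.25, 5.63, 2.12, 7.98, 4.77], 10))
# (2.25, 5.63, 2.12)
```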
62,787,056
I created virtual environment and installed both tensorflow and tensorflow-gpu. After that I installed keras. And then I checked in my conda terminal by importing keras and I was able to import keras in it. However, using jupyter notebook if I try to import keras then it gives me below error. ``` import keras ImportError Traceback (most recent call last) <ipython-input-5-88d96843a926> in <module> ----> 1 import keras ~\Anaconda3\lib\site-packages\keras\__init__.py in <module> 1 from __future__ import absolute_import 2 ----> 3 from . import utils 4 from . import activations 5 from . import applications ~\Anaconda3\lib\site-packages\keras\utils\__init__.py in <module> 4 from . import data_utils 5 from . import io_utils ----> 6 from . import conv_utils 7 from . import losses_utils 8 from . import metrics_utils ~\Anaconda3\lib\site-packages\keras\utils\conv_utils.py in <module> 7 from six.moves import range 8 import numpy as np ----> 9 from .. import backend as K 10 11 ~\Anaconda3\lib\site-packages\keras\backend\__init__.py in <module> ----> 1 from .load_backend import epsilon 2 from .load_backend import set_epsilon 3 from .load_backend import floatx 4 from .load_backend import set_floatx 5 from .load_backend import cast_to_floatx ~\Anaconda3\lib\site-packages\keras\backend\load_backend.py in <module> 88 elif _BACKEND == 'tensorflow': 89 sys.stderr.write('Using TensorFlow backend.\n') ---> 90 from .tensorflow_backend import * 91 else: 92 # Try and load external backend. ~\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py in <module> 4 5 import tensorflow as tf ----> 6 from tensorflow.python.eager import context 7 from tensorflow.python.framework import device as tfdev 8 from tensorflow.python.framework import ops as tf_ops ImportError: cannot import name 'context' from 'tensorflow.python.eager' (unknown location) ``` Already tried uninstalling and installing keras and tensorflow. I'm pretty new to programming so I am not sure how to go around it. 
I tried looking at other threads, but they did not help. Can anyone recommend what I can do to resolve this? Thanks
2020/07/08
[ "https://Stackoverflow.com/questions/62787056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13605650/" ]
Did you install the dependencies with conda? Like this: ``` $ conda install -c conda-forge keras $ conda install -c conda-forge tensorflow $ conda install -c anaconda tensorflow-gpu ``` If you installed them with `pip`, they will not work inside your virtual env. Look at your conda dependency list to see if tensorflow and keras are really there, using: ``` $ conda list ``` If they are, activate your virtual environment: ``` $ conda activate 'name_of_your_env' ``` And run Jupyter inside it; it should look something like this (if your env shows in parentheses, the activation worked and you are now inside the virtual env): ``` (your_env)$ jupyter notebook ```
The following solved my issue. I removed all the packages that were installed via pip and installed them through conda. I had an environment issue, so I created another environment from scratch and ran the commands below. Create a virtual environment: ``` conda create -n <env_name> ``` Install tensorflow-gpu via conda, not pip. If you skipped the create-environment command, type the below instead, since it creates the new env from scratch and pins the Python and TensorFlow versions. ``` conda create -n <env_name> python=3.6 tensorflow-gpu=2.2 ``` Then I had to make sure that Jupyter notebook opens with the environment that I want it to open with. For that, use the command below. ``` C:\Users\Adi(Your user here)\Anaconda3\envs\env_name\python.exe -m ipykernel install --user --name <env_name> --display-name "Python (env_name)" ``` When you go to the Jupyter notebook, on the top right corner you should see your virtual environment; make sure you select that. And it got resolved like that.
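Independent of the specific setup above, a quick sanity check from inside a notebook cell is to print which interpreter the kernel is actually running; the path shown should sit inside the env you registered (the env name in the comment is whatever you chose, used here only for illustration):

```python
# Run in a notebook cell: confirms which interpreter this kernel uses.
import sys

print(sys.executable)          # should live under ...\envs\<env_name>\...
print(sys.version.split()[0])  # the Python version the kernel runs
```

If the path points at the base Anaconda install instead of the env, the kernel registration did not take effect.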
24,112,445
I am using Python 3.4.0 and I have Mac OSX 10.9.2. I have the following code saved as sublimePygame in Sublime Text. ``` import pygame, sys from pygame.locals import * pygame.init() #set up the window DISPLAYSURF = pygame.display.set_mode((400, 300)) pygame.display.set_caption('Drawing') # set up the colors BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) GREEN = ( 0, 255, 0) BLUE = ( 0, 0, 255) # Draw on surface object DISPLAYSURF.fill(WHITE) pygame.draw.polygon(DISPLAYSURF, GREEN, ((146, 0), (291, 106), (236, 277), (56, 277), (0, 106))) pygame.draw.line(DISPLAYSURF, BLUE, (60, 60), (120, 60), 4) pygame.draw.line(DISPLAYSURF, BLUE, (120, 60), (60, 120)) pygame.draw.line(DISPLAYSURF, BLUE, (60, 120), (120, 120), 4) pygame.draw.circle(DISPLAYSURF, BLUE, (300, 50), 20, 0) pygame.draw.ellipse(DISPLAYSURF, RED, (300, 250, 40, 80), 1) pygame.draw.rect(DISPLAYSURF, RED, (200, 150, 100, 50)) pixObj = pygame.PixelArray(DISPLAYSURF) pixObj[480, 380] = BLACK pixObj[482, 382] = BLACK pixObj[48, 384] = BLACK pixObj[486, 386] = BLACK pixObj[488, 388] = BLACK del pixObj while True: # main game loop for event in pygame.event.get(): if event.type == QUIT: sys.exit() pygame.display.update() ``` I ran the code in my terminal, and the Python window opened for a second and then closed. I got this error in the terminal. ``` Traceback (most recent call last): File "sublimePygame", line 29, in <module> pixObj[480, 380] = BLACK IndexError: invalid index Segmentation fault: 11 ``` I checked the pygame documentation and my code seemed OK. I googled the error, and *Segmentation fault: 11* seems to be a bug in Python, but I read that it was fixed in Python 3.4.0. Does anyone know what went wrong? Thanks in advance! Edit: Marius found the bug in my program; however, when I run it, it opens a blank Python window and not what it was supposed to open. Does anyone know why this happens?
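For context on the traceback above: the display surface is 400x300, so valid `PixelArray` indices satisfy 0 <= x < 400 and 0 <= y < 300, and an index like (480, 380) is out of range. A minimal, pygame-free sketch of that bounds check (the helper name is illustrative, not from the original code):

```python
# The surface from display.set_mode((400, 300)) only accepts these indices.
WIDTH, HEIGHT = 400, 300

def in_bounds(x, y, width=WIDTH, height=HEIGHT):
    """Return True if (x, y) is a valid pixel coordinate for the surface."""
    return 0 <= x < width and 0 <= y < height

# The five points the question writes to, checked against a 400x300 surface:
points = [(480, 380), (482, 382), (48, 384), (486, 386), (488, 388)]
print([p for p in points if in_bounds(*p)])  # none of them fit -> []
```

Even (48, 384) fails, since its y coordinate exceeds the 300-pixel height.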
2014/06/09
[ "https://Stackoverflow.com/questions/24112445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3720830/" ]
Hope this link helps for part 1. You should be able to commit and push your changes using `git push heroku master`. <https://devcenter.heroku.com/articles/git#tracking-your-app-in-git> For part 2: would scaling your dynos back to 0 work for your case? [How to stop an app on Heroku?](https://stackoverflow.com/questions/2811453/how-to-stop-an-app-on-heroku)
1. When you make a change, just push the git repository to Heroku again with `git push heroku master`. The server will automatically restart with the changed system. 2. You seem to have a misconception. You can always run your local development server regardless of what Heroku is doing (unless some other service your app connects to would become confused, of course; but if that happens, there is probably something wrong with your design). Nonetheless, if you want to stop the application on Heroku, just scale it to zero web dynos: `heroku ps:scale web=0`.
24,112,445
I am using Python 3.4.0 and I have Mac OSX 10.9.2. I have the following code saved as sublimePygame in Sublime Text. ``` import pygame, sys from pygame.locals import * pygame.init() #set up the window DISPLAYSURF = pygame.display.set_mode((400, 300)) pygame.display.set_caption('Drawing') # set up the colors BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) GREEN = ( 0, 255, 0) BLUE = ( 0, 0, 255) # Draw on surface object DISPLAYSURF.fill(WHITE) pygame.draw.polygon(DISPLAYSURF, GREEN, ((146, 0), (291, 106), (236, 277), (56, 277), (0, 106))) pygame.draw.line(DISPLAYSURF, BLUE, (60, 60), (120, 60), 4) pygame.draw.line(DISPLAYSURF, BLUE, (120, 60), (60, 120)) pygame.draw.line(DISPLAYSURF, BLUE, (60, 120), (120, 120), 4) pygame.draw.circle(DISPLAYSURF, BLUE, (300, 50), 20, 0) pygame.draw.ellipse(DISPLAYSURF, RED, (300, 250, 40, 80), 1) pygame.draw.rect(DISPLAYSURF, RED, (200, 150, 100, 50)) pixObj = pygame.PixelArray(DISPLAYSURF) pixObj[480, 380] = BLACK pixObj[482, 382] = BLACK pixObj[48, 384] = BLACK pixObj[486, 386] = BLACK pixObj[488, 388] = BLACK del pixObj while True: # main game loop for event in pygame.event.get(): if event.type == QUIT: sys.exit() pygame.display.update() ``` I ran the code in my terminal, and the Python window opened for a second and then closed. I got this error in the terminal. ``` Traceback (most recent call last): File "sublimePygame", line 29, in <module> pixObj[480, 380] = BLACK IndexError: invalid index Segmentation fault: 11 ``` I checked the pygame documentation and my code seemed OK. I googled the error, and *Segmentation fault: 11* seems to be a bug in Python, but I read that it was fixed in Python 3.4.0. Does anyone know what went wrong? Thanks in advance! Edit: Marius found the bug in my program; however, when I run it, it opens a blank Python window and not what it was supposed to open. Does anyone know why this happens?
2014/06/09
[ "https://Stackoverflow.com/questions/24112445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3720830/" ]
Assuming you are using git to manage your project, you apply changes to Heroku by pushing them. This might look something like this: ``` git push heroku master ``` This pushes the master branch of your git project to Heroku. To "turn off" your app on Heroku, you can do two things: ``` $ heroku ps:scale web=0 ``` This completely turns off the application. You can also put it in maintenance mode, which simply prevents traffic to it: ``` $ heroku maintenance:on ``` That said, you don't need to turn off your app on Heroku while developing locally. The two environments are completely independent of each other.
Hope this link helps for part 1. You should be able to commit and push your changes using `git push heroku master`. <https://devcenter.heroku.com/articles/git#tracking-your-app-in-git> For part 2: would scaling your dynos back to 0 work for your case? [How to stop an app on Heroku?](https://stackoverflow.com/questions/2811453/how-to-stop-an-app-on-heroku)
24,112,445
I am using Python 3.4.0 and I have Mac OSX 10.9.2. I have the following code saved as sublimePygame in Sublime Text. ``` import pygame, sys from pygame.locals import * pygame.init() #set up the window DISPLAYSURF = pygame.display.set_mode((400, 300)) pygame.display.set_caption('Drawing') # set up the colors BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) RED = (255, 0, 0) GREEN = ( 0, 255, 0) BLUE = ( 0, 0, 255) # Draw on surface object DISPLAYSURF.fill(WHITE) pygame.draw.polygon(DISPLAYSURF, GREEN, ((146, 0), (291, 106), (236, 277), (56, 277), (0, 106))) pygame.draw.line(DISPLAYSURF, BLUE, (60, 60), (120, 60), 4) pygame.draw.line(DISPLAYSURF, BLUE, (120, 60), (60, 120)) pygame.draw.line(DISPLAYSURF, BLUE, (60, 120), (120, 120), 4) pygame.draw.circle(DISPLAYSURF, BLUE, (300, 50), 20, 0) pygame.draw.ellipse(DISPLAYSURF, RED, (300, 250, 40, 80), 1) pygame.draw.rect(DISPLAYSURF, RED, (200, 150, 100, 50)) pixObj = pygame.PixelArray(DISPLAYSURF) pixObj[480, 380] = BLACK pixObj[482, 382] = BLACK pixObj[48, 384] = BLACK pixObj[486, 386] = BLACK pixObj[488, 388] = BLACK del pixObj while True: # main game loop for event in pygame.event.get(): if event.type == QUIT: sys.exit() pygame.display.update() ``` I ran the code in my terminal, and the Python window opened for a second and then closed. I got this error in the terminal. ``` Traceback (most recent call last): File "sublimePygame", line 29, in <module> pixObj[480, 380] = BLACK IndexError: invalid index Segmentation fault: 11 ``` I checked the pygame documentation and my code seemed OK. I googled the error, and *Segmentation fault: 11* seems to be a bug in Python, but I read that it was fixed in Python 3.4.0. Does anyone know what went wrong? Thanks in advance! Edit: Marius found the bug in my program; however, when I run it, it opens a blank Python window and not what it was supposed to open. Does anyone know why this happens?
2014/06/09
[ "https://Stackoverflow.com/questions/24112445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3720830/" ]
Assuming you are using git to manage your project, you apply changes to Heroku by pushing them. This might look something like this: ``` git push heroku master ``` This pushes the master branch of your git project to Heroku. To "turn off" your app on Heroku, you can do two things: ``` $ heroku ps:scale web=0 ``` This completely turns off the application. You can also put it in maintenance mode, which simply prevents traffic to it: ``` $ heroku maintenance:on ``` That said, you don't need to turn off your app on Heroku while developing locally. The two environments are completely independent of each other.
1. When you make a change, just push the git repository to Heroku again with `git push heroku master`. The server will automatically restart with the changed system. 2. You seem to have a misconception. You can always run your local development server regardless of what Heroku is doing (unless some other service your app connects to would become confused, of course; but if that happens, there is probably something wrong with your design). Nonetheless, if you want to stop the application on Heroku, just scale it to zero web dynos: `heroku ps:scale web=0`.
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
I actually shared my solution to this exact issue. What I did there was create a pool of connections with a configurable maximum, and then queue query requests asynchronously through that channel. This way you can leave a certain number of connections open, and requests will queue and pool asynchronously while keeping the speed you are used to. This requires gevent and Postgres. [Python Postgres psycopg2 ThreadedConnectionPool exhausted](https://stackoverflow.com/questions/48532301/python-postgres-psycopg2-threadedconnectionpool-exhausted/49366850#49366850)
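The linked answer relies on gevent and Postgres, but the underlying pooling idea can be sketched with only the standard library: a bounded queue of connection objects, where acquiring blocks when the pool is exhausted. The `connect` factory below is a stand-in for a real driver call, not any particular library's API:

```python
import queue

class ConnectionPool:
    """A tiny thread-safe pool: hands out at most max_size connections."""
    def __init__(self, connect, max_size=5):
        self._pool = queue.Queue(maxsize=max_size)
        for _ in range(max_size):
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Blocks (queuing the caller) when all connections are checked out.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Stand-in for a real DB driver's connect():
pool = ConnectionPool(connect=lambda: object(), max_size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()   # reuses c1 instead of opening a new connection
print(c3 is c1)       # -> True
```

The key property is that connections are opened once at start-up and recycled thereafter, which is exactly what avoids the per-request reconnect cost.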
I'm no expert in this field, but I believe that [PgBouncer](https://pgbouncer.github.io/features.html) would do the job for you, assuming you're able to use a PostgreSQL back-end (that's one detail you didn't make clear). PgBouncer is a *connection pooler*, which allows you to re-use connections, avoiding the overhead of connecting on every request. According to their [documentation](https://pgbouncer.github.io/config.html): > > **user, password** > > > If user= is set, all connections to the destination database will be done with the specified user, meaning that there will be only one pool for this database. > > > Otherwise PgBouncer tries to log into the destination database with client username, meaning that there will be one pool per user. > > > So, you can have a single pool of connections per user, which sounds just like what you want. In MySQL land, the [mysql.connector.pooling](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html) module allows you to do some connection pooling, though I'm not sure if you can do per-user pooling. Given that you can set up the pool name, I'm guessing you could use the user's name to identify the pool. Regardless of what you use, you will likely have occasions where reconnecting is unavoidable (a user connects, does a few things, goes away for a meeting and lunch, comes back and wants to take more action).
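The "one pool per user" behaviour described above can also be outlined without any particular pooler: keep a registry of pools keyed by username and create each lazily on first use. This is a generic sketch with a stand-in pool factory, not the PgBouncer or mysql.connector API:

```python
import threading

_pools = {}
_lock = threading.Lock()

def get_pool(username, make_pool):
    """Return the pool for this user, creating it on first use."""
    with _lock:
        if username not in _pools:
            _pools[username] = make_pool(username)
        return _pools[username]

# Stand-in factory: a real app would build a driver-specific pool here.
make_pool = lambda user: {"user": user, "connections": []}

a1 = get_pool("alice", make_pool)
a2 = get_pool("alice", make_pool)  # same pool object, no reconnect
b = get_pool("bob", make_pool)
print(a1 is a2, a1 is b)  # -> True False
```

The lock matters because multiple request threads may ask for the same user's pool at once; without it, two pools could be created for one user.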
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
You could use an IoC container to store a singleton provider for you. Essentially, instead of constructing a new connection every time, it will only construct it once (the first time `ConnectionContainer.connection_provider()` is called) and thereafter it will always return the previously constructed connection. You'll need the `dependency-injector` package for my example to work: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, host, user, passwd, db, charset): self.conn = MySQLdb.connect( host=host, user=user, passwd=passwd, db=db, charset=charset ) class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, host='aaa', user='bbb', passwd='ccc', db='ddd', charset='utf8') def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) ``` I've hardcoded the connection string here, but it is also possible to make it variable depending on a changeable configuration. In that case you could also create a container for the configuration file and have the connection container read its config from there. You then set the config at runtime. 
As follows: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, connection_config): self.conn = MySQLdb.connect(**connection_config) class ConfigContainer(containers.DeclarativeContainer): connection_config = providers.Configuration("connection_config") class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, ConfigContainer.connection_config) def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) # run code my_config = { 'host':'aaa', 'user':'bbb', 'passwd':'ccc', 'db':'ddd', 'charset':'utf8' } ConfigContainer.connection_config.override(my_config) request = ... sql = ... do_queries(request, sql) ```
I'm no expert in this field, but I believe that [PgBouncer](https://pgbouncer.github.io/features.html) would do the job for you, assuming you're able to use a PostgreSQL back-end (that's one detail you didn't make clear). PgBouncer is a *connection pooler*, which allows you re-use connections avoiding the overhead of connecting on every request. According to their [documentation](https://pgbouncer.github.io/config.html): > > **user, password** > > > If user= is set, all connections to the destination database will be done with the specified user, meaning that there will be only one pool for this database. > > > Otherwise PgBouncer tries to log into the destination database with client username, meaning that there will be one pool per user. > > > So, you can have a single pool of connections per user, which sounds just like what you want. In MySQL land, the [mysql.connector.pooling](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html) module allows you to do some connection pooling, though I'm not sure if you can do per-user pooling. Given that you can set up the pool name, I'm guessing you could use the user's name to identify the pool. Regardless of what you use, you will likely have occasions where reconnecting is unavoidable (a user connects, does a few things, goes away for a meeting and lunch, comes back and wants to take more action).
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
You could use an IoC container to store a singleton provider for you. Essentially, instead of constructing a new connection every time, it will only construct it once (the first time `ConnectionContainer.connection_provider()` is called) and thereafter it will always return the previously constructed connection. You'll need the `dependency-injector` package for my example to work: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, host, user, passwd, db, charset): self.conn = MySQLdb.connect( host=host, user=user, passwd=passwd, db=db, charset=charset ) class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, host='aaa', user='bbb', passwd='ccc', db='ddd', charset='utf8') def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) ``` I've hardcoded the connection string here, but it is also possible to make it variable depending on a changeable configuration. In that case you could also create a container for the configuration file and have the connection container read its config from there. You then set the config at runtime. 
As follows: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, connection_config): self.conn = MySQLdb.connect(**connection_config) class ConfigContainer(containers.DeclarativeContainer): connection_config = providers.Configuration("connection_config") class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, ConfigContainer.connection_config) def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) # run code my_config = { 'host':'aaa', 'user':'bbb', 'passwd':'ccc', 'db':'ddd', 'charset':'utf8' } ConfigContainer.connection_config.override(my_config) request = ... sql = ... do_queries(request, sql) ```
I am just sharing my knowledge here. *Install PyMySQL to use MySQL.* **For Python 2.x** ``` pip install PyMySQL ``` **For Python 3.x** ``` pip3 install PyMySQL ``` **1.** If you are open to using the Django framework, then it's very easy to run a SQL query without any re-connection. In the settings.py file, add the lines below: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'test', 'USER': 'test', 'PASSWORD': 'test', 'HOST': 'localhost', 'OPTIONS': {'charset': 'utf8mb4'}, } } ``` In the views.py file, add these lines to get the data. You can customize your query according to your needs. ``` from django.db import connection def connect(request): cursor = connection.cursor() cursor.execute("SELECT * FROM Tablename"); results = cursor.fetchall() return results ``` You will get the desired results. Click [here](https://docs.djangoproject.com/en/2.1/topics/db/sql/) for more information. **2. For Python Tkinter** ``` from Tkinter import * import MySQLdb db = MySQLdb.connect("localhost","root","root","test") # prepare a cursor object using cursor() method cursor = db.cursor() cursor.execute("SELECT * FROM Tablename") if cursor.fetchone() is not None: print("In If") else: print("In Else") cursor.close() ``` Refer to [this](https://palewi.re/posts/2008/04/26/python-recipe-connect-to-mysql-database-execute-a-query-print-the-results/) for more information. **PS:** You can check this link about reusing a DB connection for later: [How to enable MySQL client auto re-connect with MySQLdb?](https://stackoverflow.com/questions/207981/how-to-enable-mysql-client-auto-re-connect-with-mysqldb)
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
You could use an IoC container to store a singleton provider for you. Essentially, instead of constructing a new connection every time, it will only construct it once (the first time `ConnectionContainer.connection_provider()` is called) and thereafter it will always return the previously constructed connection. You'll need the `dependency-injector` package for my example to work: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, host, user, passwd, db, charset): self.conn = MySQLdb.connect( host=host, user=user, passwd=passwd, db=db, charset=charset ) class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, host='aaa', user='bbb', passwd='ccc', db='ddd', charset='utf8') def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) ``` I've hardcoded the connection string here, but it is also possible to make it variable depending on a changeable configuration. In that case you could also create a container for the configuration file and have the connection container read its config from there. You then set the config at runtime. 
As follows: ``` import dependency_injector.containers as containers import dependency_injector.providers as providers class ConnectionProvider(): def __init__(self, connection_config): self.conn = MySQLdb.connect(**connection_config) class ConfigContainer(containers.DeclarativeContainer): connection_config = providers.Configuration("connection_config") class ConnectionContainer(containers.DeclarativeContainer): connection_provider = providers.Singleton(ConnectionProvider, ConfigContainer.connection_config) def do_queries(request, sql): user = request.user conn = ConnectionContainer.connection_provider().conn cursor = conn.cursor() cursor.execute(sql) # run code my_config = { 'host':'aaa', 'user':'bbb', 'passwd':'ccc', 'db':'ddd', 'charset':'utf8' } ConfigContainer.connection_config.override(my_config) request = ... sql = ... do_queries(request, sql) ```
It is not a good idea to do such a thing synchronously in a web app context. Remember that your application may need to work in a multi-process/thread fashion, and you cannot normally share a connection between processes. So if you create a connection for your user in one process, there is no guarantee that the next query request will be received by the same one. A better idea may be to have a single background worker process which handles connections in multiple threads (a thread per session) to run queries on the database and retrieve results for the web app. Your application should assign a unique ID to each session, and the background worker should track each thread using the session ID. You may use `celery` or any other task queue supporting async results. So the design would be something like below: ``` |<--| |<--------------| |<--| user (id: x) | | webapp | | queue | | worker (thread x) | | DB |-->| |-->| |-->| |-->| ``` You could also create a queue for each user while they have an active session; as a result, you could run a separate background process for each session.
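The design just described — one long-lived worker thread per session, fed through a queue — can be outlined with the standard library alone. The `run_query` callable here is a placeholder for real database work bound to that session's connection:

```python
import queue
import threading

class SessionWorker:
    """One long-lived thread per session; queries go through its queue."""
    def __init__(self, run_query):
        self._tasks = queue.Queue()
        self._run_query = run_query
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            sql, result = self._tasks.get()
            if sql is None:          # shutdown sentinel
                break
            result.put(self._run_query(sql))

    def execute(self, sql):
        result = queue.Queue()
        self._tasks.put((sql, result))
        return result.get()          # wait for the async result

    def close(self):
        self._tasks.put((None, None))

# Placeholder query function standing in for a session-bound connection:
worker = SessionWorker(run_query=lambda sql: f"rows for: {sql}")
print(worker.execute("SELECT 1"))   # -> rows for: SELECT 1
worker.close()
```

In a real deployment the worker would live in a separate process (e.g. behind celery), with the web app only exchanging session IDs and results over the queue.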
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
I don't see why you need a cached connection here rather than just reconnecting on every request and caching the user's credentials somewhere, but anyway I'll try to outline a solution that might fit your requirements.

I'd suggest looking into a more generic task first - caching something between subsequent requests that your app needs to handle and can't serialize into `django`'s sessions. In your particular case this shared value would be a *database connection* (or multiple connections). Let's start with the simple task of sharing a *counter variable* between requests, just to understand what's actually happening under the hood.

Amazingly, no other answer has mentioned anything regarding the web server you might use! Actually there are multiple ways to handle concurrent connections in web apps:

1. Having multiple **processes**, every request comes into **one** of them at random
2. Having multiple **threads**, every request is handled by a random **thread**
3. p.1 and p.2 combined
4. Various **async** techniques, when there's a *single* **process** + *event loop* handling requests, with the caveat that request handlers shouldn't block for a long time

From my own experience, p.1-2 are fine for the majority of typical webapps. `Apache1.x` could only work with p.1, `Apache2.x` can handle all of 1-3.

Let's start with the following `django` app and run a single-process [gunicorn](https://gunicorn.org/) webserver. I'm going to use `gunicorn` because it's fairly easy to configure, unlike `apache` (personal opinion :-)

### views.py

```
import time
from django.http import HttpResponse

c = 0

def main(self):
    global c
    c += 1
    return HttpResponse('val: {}\n'.format(c))

def heavy(self):
    time.sleep(10)
    return HttpResponse('heavy done')
```

### urls.py

```
from django.contrib import admin
from django.urls import path
from . import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', views.main, name='main'),
    path('heavy/', views.heavy, name='heavy')
]
```

### Running it in single-process mode:

```
gunicorn testpool.wsgi -w 1
```

Here's our process tree - there's only 1 worker that handles ALL requests:

```
pstree 77292
-+= 77292 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1
 \--- 77295 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1
```

Trying to use our app:

```
curl 'http://127.0.0.1:8000'
val: 1
curl 'http://127.0.0.1:8000'
val: 2
curl 'http://127.0.0.1:8000'
val: 3
```

As you can see, you can easily share the counter between subsequent requests. The problem here is that you can only serve a single request at a time. If you request **/heavy/** in one tab, **/** won't work until **/heavy** is done.

### Let's now use 2 worker processes:

```
gunicorn testpool.wsgi -w 2
```

This is how the process tree would look:

```
pstree 77285
-+= 77285 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2
 |--- 77288 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2
 \--- 77289 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2
```

Testing our app:

```
curl 'http://127.0.0.1:8000'
val: 1
curl 'http://127.0.0.1:8000'
val: 2
curl 'http://127.0.0.1:8000'
val: 1
```

The first two requests were handled by the first `worker process`, and the **3rd** one by the second worker process, which has its own memory space, so you see **1** instead of **3**. Note that your output may differ, because processes 1 and 2 are selected at random - but sooner or later you'll hit a *different* process.
That's not very helpful for us, because we need to handle *multiple* concurrent requests, and we need to somehow get our request handled by a specific process, which can't be done in the general case. Most *pooling* techniques coming out of the box would only cache connections in the scope of a *single* process; if your request gets served by a different process, a **NEW** connection would need to be made.

### Let's move to threads

```
gunicorn testpool.wsgi -w 1 --threads 2
```

Again - only 1 process:

```
pstree 77310
-+= 77310 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 --threads 2
 \--- 77313 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 --threads 2
```

Now if you run **/heavy** in one tab, you'll still be able to query **/**, and your counter will be *preserved* between requests! Even if the number of threads grows or shrinks depending on your workload, it should still work fine.

**Problems**: you'll need to *synchronize* access to the shared variable using python thread synchronization techniques ([read more](https://docs.python.org/3.4/library/threading.html#lock-objects)). Another problem is that the same user may need to issue multiple queries in parallel - i.e. open multiple tabs. To handle that you can open *multiple* connections on the first request, when you have the db credentials available. If a user needs more connections than that, your app might wait on a lock until a connection becomes available.

### Back to your question

You can create a class with the following methods:

```
from contextlib import contextmanager

class ConnectionPool(object):

    def __init__(self, max_connections=4):
        self._pool = dict()
        self._max_connections = max_connections

    def preconnect(self, session_id, user, password):
        # create multiple connections and put them into self._pool
        # ...

    @contextmanager
    def get_connection(self, session_id):
        # if there is an available connection:
        #     mark it as allocated
        #     and return it
        try:
            yield connection
        finally:
            # put it back into the pool
            # ...
            pass
        # else:
        #     wait until a connection is returned to the pool by another thread

pool = ConnectionPool(4)

def some_view(self):
    session_id = ...
    with pool.get_connection(session_id) as conn:
        conn.query(...)
```

This is not a complete solution - you'll need to somehow delete outdated connections that haven't been used for a long time. If a user comes back after a long time and his connection has been closed, he'll need to provide his credentials again - hopefully that's ok from your app's perspective.

Also keep in mind that python `threads` have their performance penalties; not sure if this is an issue for you.

I haven't checked it with `apache2` (too much configuration burden, I haven't used it for ages and generally use [uwsgi](https://uwsgi-docs.readthedocs.io/en/latest/)), but it should work there too - would be happy to hear back from you if you manage to run it )

And also don't forget about **p.4** (the async approach) - you're unlikely to be able to use it on apache, but it's worth investigating - keywords: *django + gevent*, *django + asyncio*. It has its pros/cons and may greatly affect your app's implementation, so it's hard to suggest any solution without knowing your app's requirements in detail.
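That sketch can be made runnable by filling in the blocking checkout with `threading.Condition`. In this version the connection objects come from a caller-supplied factory (a stub lambda below stands in for a real `MySQLdb.connect` call), and idle-connection expiry is still left out, as noted above:

```python
import threading
from contextlib import contextmanager


class ConnectionPool(object):
    """Per-session connection pool: blocking checkout, check-in on exit."""

    def __init__(self, connect, max_connections=4):
        self._connect = connect          # factory: session_id -> connection
        self._pool = {}                  # session_id -> list of idle conns
        self._max_connections = max_connections
        self._cond = threading.Condition()

    def preconnect(self, session_id, user, password):
        # create the session's connections up front, while the credentials
        # are at hand (user/password would be passed to the real factory)
        with self._cond:
            self._pool[session_id] = [
                self._connect(session_id)
                for _ in range(self._max_connections)
            ]

    @contextmanager
    def get_connection(self, session_id):
        with self._cond:
            # block until one of this session's connections is free
            while not self._pool[session_id]:
                self._cond.wait()
            conn = self._pool[session_id].pop()
        try:
            yield conn
        finally:
            # return the connection and wake up one waiting thread
            with self._cond:
                self._pool[session_id].append(conn)
                self._cond.notify()


# usage, with a stub connection factory
pool = ConnectionPool(connect=lambda sid: {"session": sid}, max_connections=2)
pool.preconnect("x", "bbb", "ccc")
with pool.get_connection("x") as conn:
    assert conn["session"] == "x"
```

The single condition variable guards all sessions; with many concurrent sessions you might prefer one lock per session, but this keeps the sketch short.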
I'm no expert in this field, but I believe that [PgBouncer](https://pgbouncer.github.io/features.html) would do the job for you, assuming you're able to use a PostgreSQL back-end (that's one detail you didn't make clear). PgBouncer is a *connection pooler*, which allows you re-use connections avoiding the overhead of connecting on every request. According to their [documentation](https://pgbouncer.github.io/config.html): > > **user, password** > > > If user= is set, all connections to the destination database will be done with the specified user, meaning that there will be only one pool for this database. > > > Otherwise PgBouncer tries to log into the destination database with client username, meaning that there will be one pool per user. > > > So, you can have a single pool of connections per user, which sounds just like what you want. In MySQL land, the [mysql.connector.pooling](https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html) module allows you to do some connection pooling, though I'm not sure if you can do per-user pooling. Given that you can set up the pool name, I'm guessing you could use the user's name to identify the pool. Regardless of what you use, you will likely have occasions where reconnecting is unavoidable (a user connects, does a few things, goes away for a meeting and lunch, comes back and wants to take more action).
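The "one pool per user" behaviour described above can be sketched in plain Python. The stub connection class and all names here (`PerUserPools`, `StubConnection`) are hypothetical illustrations, not part of PgBouncer or `mysql.connector`:

```python
import queue
import threading


class StubConnection:
    """Stands in for a real DB connection in this sketch."""

    def __init__(self, user):
        self.user = user


class PerUserPools:
    """One small pool of connections per user name, mirroring the
    'one pool per user' mode that PgBouncer describes."""

    def __init__(self, pool_size=2, connect=StubConnection):
        self._pools = {}        # user -> queue.Queue of idle connections
        self._lock = threading.Lock()
        self._pool_size = pool_size
        self._connect = connect

    def _pool_for(self, user):
        with self._lock:
            if user not in self._pools:
                q = queue.Queue()
                for _ in range(self._pool_size):
                    q.put(self._connect(user))
                self._pools[user] = q
            return self._pools[user]

    def acquire(self, user, timeout=5):
        # blocks until one of this user's connections is free
        return self._pool_for(user).get(timeout=timeout)

    def release(self, user, conn):
        self._pools[user].put(conn)


pools = PerUserPools(pool_size=2)
c1 = pools.acquire("alice")
c2 = pools.acquire("alice")
pools.release("alice", c1)
c3 = pools.acquire("alice")   # reuses the released connection object
```

As in the answer above, a real deployment would still need to handle reconnection when a pooled connection has gone stale.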
You could use an IoC container to store a singleton provider for you. Essentially, instead of constructing a new connection every time, it will only construct it once (the first time `ConnectionContainer.connection_provider()` is called) and thereafter it will always return the previously constructed connection.

You'll need the `dependency-injector` package for my example to work:

```
import dependency_injector.containers as containers
import dependency_injector.providers as providers

class ConnectionProvider():
    def __init__(self, host, user, passwd, db, charset):
        self.conn = MySQLdb.connect(
            host=host,
            user=user,
            passwd=passwd,
            db=db,
            charset=charset
        )

class ConnectionContainer(containers.DeclarativeContainer):
    connection_provider = providers.Singleton(ConnectionProvider,
                                              host='aaa',
                                              user='bbb',
                                              passwd='ccc',
                                              db='ddd',
                                              charset='utf8')

def do_queries(request, sql):
    user = request.user
    conn = ConnectionContainer.connection_provider().conn
    cursor = conn.cursor()
    cursor.execute(sql)
```

I've hardcoded the connection string here, but it is also possible to make it variable depending on a changeable configuration. In that case you could also create a container for the configuration file and have the connection container read its config from there. You then set the config at runtime, as follows:

```
import dependency_injector.containers as containers
import dependency_injector.providers as providers

class ConnectionProvider():
    def __init__(self, connection_config):
        self.conn = MySQLdb.connect(**connection_config)

class ConfigContainer(containers.DeclarativeContainer):
    connection_config = providers.Configuration("connection_config")

class ConnectionContainer(containers.DeclarativeContainer):
    connection_provider = providers.Singleton(ConnectionProvider,
                                              ConfigContainer.connection_config)

def do_queries(request, sql):
    user = request.user
    conn = ConnectionContainer.connection_provider().conn
    cursor = conn.cursor()
    cursor.execute(sql)

# run code
my_config = {
    'host': 'aaa',
    'user': 'bbb',
    'passwd': 'ccc',
    'db': 'ddd',
    'charset': 'utf8'
}
ConfigContainer.connection_config.override(my_config)

request = ...
sql = ...
do_queries(request, sql)
```
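If you'd rather not take on the `dependency-injector` package, the same singleton-provider idea can be sketched with the stdlib alone. The `Singleton` class and the stub factory below are illustrative (a real app would call `MySQLdb.connect` inside the factory):

```python
import threading


class Singleton:
    """Minimal stand-in for a singleton provider: build the instance on
    the first call, return the same instance on every later call."""

    def __init__(self, factory, *args, **kwargs):
        self._factory = factory
        self._args = args
        self._kwargs = kwargs
        self._instance = None
        self._lock = threading.Lock()  # safe under threaded servers

    def __call__(self):
        if self._instance is None:
            with self._lock:
                if self._instance is None:  # double-checked locking
                    self._instance = self._factory(*self._args, **self._kwargs)
        return self._instance


def make_connection(**config):
    # a real app would return MySQLdb.connect(**config) here
    return {"config": config}


connection_provider = Singleton(
    make_connection, host="aaa", user="bbb", passwd="ccc", db="ddd"
)

conn_a = connection_provider()
conn_b = connection_provider()
assert conn_a is conn_b   # constructed once, reused thereafter
```

Note this still only shares the connection within a single process, which is the limitation the multi-process discussion elsewhere on this question goes into.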
I am just sharing my knowledge over here.

*Install PyMySQL to use MySQL*

**For Python 2.x**

```
pip install PyMySQL
```

**For Python 3.x**

```
pip3 install PyMySQL
```

**1.** If you are open to using the Django framework, then it's very easy to run a SQL query without any re-connection. In the settings.py file add the lines below:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'test',
        'PASSWORD': 'test',
        'HOST': 'localhost',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}
```

In the views.py file add these lines to get the data. You can customize your query according to your needs:

```
from django.db import connection

def connect(request):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM Tablename")
    results = cursor.fetchall()
    return results
```

You will get the desired results. Click [here](https://docs.djangoproject.com/en/2.1/topics/db/sql/) for more information about it.

**2. For python Tkinter**

```
from Tkinter import *
import MySQLdb

db = MySQLdb.connect("localhost", "root", "root", "test")

# prepare a cursor object using the cursor() method
cursor = db.cursor()
cursor.execute("SELECT * FROM Tablename")

if cursor.fetchone() is not None:
    print("In If")
else:
    print("In Else")

cursor.close()
```

Refer to [this](https://palewi.re/posts/2008/04/26/python-recipe-connect-to-mysql-database-execute-a-query-print-the-results/) for more information.

**PS:** You can check this link for your question about reusing a DB connection later: [How to enable MySQL client auto re-connect with MySQLdb?](https://stackoverflow.com/questions/207981/how-to-enable-mysql-client-auto-re-connect-with-mysqldb)
53,147,752
I am saving a user's database connection. On the first time they enter in their credentials, I do something like the following: ``` self.conn = MySQLdb.connect ( host = 'aaa', user = 'bbb', passwd = 'ccc', db = 'ddd', charset='utf8' ) cursor = self.conn.cursor() cursor.execute("SET NAMES utf8") cursor.execute('SET CHARACTER SET utf8;') cursor.execute('SET character_set_connection=utf8;') ``` I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view: ``` def do_queries(request, sql): user = request.user conn = request.session['conn'] cursor = request.session['cursor'] cursor.execute(sql) ``` --- **Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do: I have a sql editor that a user can use after they enter in their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time they do this. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using python, django, or mysql? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
I don't see why do you need a cached connection here and why not just reconnect on every request caching user's credentials somewhere, but anyway I'll try to outline a solution that might fit your requirements. I'd suggest to look into a more generic task first - cache something between subsequent requests your app needs to handle and can't serialize into `django`'s sessions. In your particular case this shared value would be a *database connection* (or multiple connections). Lets start with a simple task of sharing a simple *counter variable* between requests, just to understand what's actually happening under the hood. Amaizingly but neither answer has mentioned anything regarding a web server you might use! Actually there are multiple ways to handle concurrent connections in web apps: 1. Having multiple **processes**, every request comes into **one** of them at random 2. Having multiple **threads**, every request is handled by a random **thread** 3. p.1 and p.2 combined 4. Various **async** techniques, when there's a *single* **process** + *event loop* handling requests with a caveat that request handlers shouldn't block for a long time From my own experience p.1-2 are fine for majority of typical webapps. `Apache1.x` could only work with p.1, `Apache2.x` can handle all of 1-3. Lets start with the following `django` app and run a single-process [gunicorn](https://gunicorn.org/) webserver. I'm going to use `gunicorn` because it's fairly easy to configure it unlike `apache` (personal opinion :-) ### views.py ``` import time from django.http import HttpResponse c = 0 def main(self): global c c += 1 return HttpResponse('val: {}\n'.format(c)) def heavy(self): time.sleep(10) return HttpResponse('heavy done') ``` ### urls.py ``` from django.contrib import admin from django.urls import path from . 
import views urlpatterns = [ path('admin/', admin.site.urls), path('', views.main, name='main'), path('heavy/', views.heavy, name='heavy') ] ``` ### Running it in a single process mode: ``` gunicorn testpool.wsgi -w 1 ``` Here's our process tree - there's only 1 worker that would handle ALL requests ``` pstree 77292 -+= 77292 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 \--- 77295 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 ``` Trying to use our app: ``` curl 'http://127.0.0.1:8000' val: 1 curl 'http://127.0.0.1:8000' val: 2 curl 'http://127.0.0.1:8000' val: 3 ``` As you can see you can easily share the counter between subsequent requests. The problem here is that you can only serve a single request in parallel. If you request for **/heavy/** in one tab, **/** won't work until **/heavy** is done ### Lets now use 2 worker processes: ``` gunicorn testpool.wsgi -w 2 ``` This is how the process tree would look like: ``` pstree 77285 -+= 77285 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2 |--- 77288 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2 \--- 77289 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 2 ``` Testing our app: ``` curl 'http://127.0.0.1:8000' val: 1 curl 'http://127.0.0.1:8000' val: 2 curl 'http://127.0.0.1:8000' val: 1 ``` The first two requests has been handled by the first `worker process`, and the **3rd** one - by the second worker process that has its own memory space so you see **1** instead of **3**. Notice your output may differ because process 1 and 2 are selected at random. But sooner or later you'll hit a *different* process. 
That's not very helpful for us because we need to handle *multiple* concurrent requests and we need to somehow get our request handled by a specific process that can't be done in general case. Most *pooling* technics coming out of the box would only cache connections in the scope of a *single* process, if your request gets served by a different process - a **NEW** connection would need to be made. ### Lets move to threads ``` gunicorn testpool.wsgi -w 1 --threads 2 ``` Again - only 1 process ``` pstree 77310 -+= 77310 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 --threads 2 \--- 77313 oleg /Users/oleg/.virtualenvs/test3.4/bin/python /Users/oleg/.virtualenvs/test3.4/bin/gunicorn testpool.wsgi -w 1 --threads 2 ``` Now if you run **/heavy** in one tab you'll still be able to query **/** and your counter will be *preserved* between requests! Even if the number of threads is growing or shrinking depending on your workload it should still work fine. **Problems**: you'll need to *synchronize* access to the shared variable like this using python threads synchronization technics ([read more](https://docs.python.org/3.4/library/threading.html#lock-objects)). Another problem is that the same user may need to to issue multiple queries in parallel - i.e. open multiple tabs. To handle it you can open *multiple* connections on the first request when you have db credentials available. If a user needs more connections than your app might wait on lock until a connection becomes available. ### Back to your question You can create a class that would have the following methods: ``` from contextlib import contextmanager class ConnectionPool(object): def __init__(self, max_connections=4): self._pool = dict() self._max_connections = max_connections def preconnect(self, session_id, user, password): # create multiple connections and put them into self._pool # ... 
@contextmanager def get_connection(sef, session_id): # if have an available connection: # mark it as allocated # and return it try: yield connection finally: # put it back to the pool # .... # else # wait until there's a connection returned to the pool by another thread pool = ConnectionPool(4) def some_view(self): session_id = ... with pool.get_connection(session_id) as conn: conn.query(...) ``` This is not a complete solution - you'll need to somehow delete outdated connections not used for a long time. If a user comes back after a long time and his connection have been closed, he'll need to provide his credentials again - hopefully it's ok from your app's perspective. Also keep in mind python `threads` have its performance penalties, not sure if this is an issue for you. I haven't checked it for `apache2` (too much configuration burden, I haven't used it for ages and generally use [uwsgi](https://uwsgi-docs.readthedocs.io/en/latest/)), but it should work there too - would be happy to hear back from you if you manage to run it ) And also don't forget about **p.4** (async approach) - unlikely will you be able to use it on apache, but it's worth investigation - keywords: *django + gevent*, *django + asyncio*. It has its pros/cons and may greatly affect your app implementation so it's hard to suggest any solution without knowing your app requirements in detail
I am just sharing my knowledge over here.

*Install PyMySQL to use MySQL*

**For Python 2.x**

```
pip install PyMySQL
```

**For Python 3.x**

```
pip3 install PyMySQL
```

**1.** If you are open to using the Django framework, then it's very easy to run SQL queries without any re-connection. In the settings.py file add the lines below:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'test',
        'PASSWORD': 'test',
        'HOST': 'localhost',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}
```

In the views.py file add these lines to get the data. You can customize the query according to your needs:

```
from django.db import connection


def connect(request):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM Tablename")
    results = cursor.fetchall()
    return results
```

You will get the desired results.

Click [here](https://docs.djangoproject.com/en/2.1/topics/db/sql/) for more information about it.

**2. For Python Tkinter**

```
from Tkinter import *
import MySQLdb

db = MySQLdb.connect("localhost", "root", "root", "test")

# prepare a cursor object using the cursor() method
cursor = db.cursor()
cursor.execute("SELECT * FROM Tablename")

if cursor.fetchone() is not None:
    print("In If")
else:
    print("In Else")

cursor.close()
```

Refer to [this](https://palewi.re/posts/2008/04/26/python-recipe-connect-to-mysql-database-execute-a-query-print-the-results/) for more information.

**PS:** You can check this link for your question about reusing a DB connection for later: [How to enable MySQL client auto re-connect with MySQLdb?](https://stackoverflow.com/questions/207981/how-to-enable-mysql-client-auto-re-connect-with-mysqldb)
53,147,752
I am saving a user's database connection. The first time they enter their credentials, I do something like the following:

```
self.conn = MySQLdb.connect(
    host='aaa',
    user='bbb',
    passwd='ccc',
    db='ddd',
    charset='utf8'
)
cursor = self.conn.cursor()
cursor.execute("SET NAMES utf8")
cursor.execute('SET CHARACTER SET utf8;')
cursor.execute('SET character_set_connection=utf8;')
```

I then have the `conn` ready to go for all the user's queries. However, I don't want to re-connect every time the `view` is loaded. How would I store this "open connection" so I can just do something like the following in the view:

```
def do_queries(request, sql):
    user = request.user
    conn = request.session['conn']
    cursor = request.session['cursor']
    cursor.execute(sql)
```

---

**Update**: it seems like the above is not possible and not good practice, so let me re-phrase what I'm trying to do:

I have a SQL editor that a user can use after they enter their credentials (think of something like Navicat or SequelPro). Note this is **NOT** the default Django db connection -- I do not know the credentials beforehand. Now, once the user has 'connected', I would like them to be able to do as many queries as they like without me having to reconnect every time. For example -- to re-iterate again -- something like Navicat or SequelPro. How would this be done using Python, Django, or MySQL? Perhaps I don't really understand what is necessary here (caching the connection? connection pooling? etc.), so any suggestions or help would be greatly appreciated.
2018/11/05
[ "https://Stackoverflow.com/questions/53147752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
You could use an IoC container to store a singleton provider for you. Essentially, instead of constructing a new connection every time, it will only construct it once (the first time `ConnectionContainer.connection_provider()` is called) and thereafter it will always return the previously constructed connection.

You'll need the `dependency-injector` package for my example to work:

```
import dependency_injector.containers as containers
import dependency_injector.providers as providers


class ConnectionProvider():
    def __init__(self, host, user, passwd, db, charset):
        self.conn = MySQLdb.connect(
            host=host,
            user=user,
            passwd=passwd,
            db=db,
            charset=charset
        )


class ConnectionContainer(containers.DeclarativeContainer):
    connection_provider = providers.Singleton(
        ConnectionProvider,
        host='aaa',
        user='bbb',
        passwd='ccc',
        db='ddd',
        charset='utf8'
    )


def do_queries(request, sql):
    user = request.user
    conn = ConnectionContainer.connection_provider().conn
    cursor = conn.cursor()
    cursor.execute(sql)
```

I've hardcoded the connection string here, but it is also possible to make it variable depending on a changeable configuration. In that case you could also create a container for the configuration file and have the connection container read its config from there. You then set the config at runtime.
As follows:

```
import dependency_injector.containers as containers
import dependency_injector.providers as providers


class ConnectionProvider():
    def __init__(self, connection_config):
        self.conn = MySQLdb.connect(**connection_config)


class ConfigContainer(containers.DeclarativeContainer):
    connection_config = providers.Configuration("connection_config")


class ConnectionContainer(containers.DeclarativeContainer):
    connection_provider = providers.Singleton(
        ConnectionProvider,
        ConfigContainer.connection_config
    )


def do_queries(request, sql):
    user = request.user
    conn = ConnectionContainer.connection_provider().conn
    cursor = conn.cursor()
    cursor.execute(sql)


# run code
my_config = {
    'host': 'aaa',
    'user': 'bbb',
    'passwd': 'ccc',
    'db': 'ddd',
    'charset': 'utf8'
}

ConfigContainer.connection_config.override(my_config)

request = ...
sql = ...

do_queries(request, sql)
```
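The singleton behaviour itself doesn't require the package. A minimal sketch of the same idea with a plain module-level cache (the `FakeConnection` class is an illustration-only stand-in for a real `MySQLdb` connection):

```python
class FakeConnection:
    """Illustration-only stand-in for a real MySQLdb connection object."""
    def __init__(self, **config):
        self.config = config


_connection_cache = {}


def connection_provider(**config):
    # lazily build one connection per distinct config and reuse it afterwards
    key = tuple(sorted(config.items()))
    if key not in _connection_cache:
        _connection_cache[key] = FakeConnection(**config)
    return _connection_cache[key]


first = connection_provider(host='aaa', user='bbb', passwd='ccc', db='ddd')
second = connection_provider(host='aaa', user='bbb', passwd='ccc', db='ddd')
# first and second are the same object: the connection was built only once
```

The container approach buys you override hooks and cleaner wiring, but the caching mechanism underneath is essentially this.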
I actually shared my solution to this exact issue. What I did was create a pool of connections whose maximum size you can specify, and then queue query requests asynchronously through this channel. This way you can leave a certain number of connections open, while extra queries are queued and pooled asynchronously, keeping the speed you are used to.

This requires gevent and postgres.

[Python Postgres psycopg2 ThreadedConnectionPool exhausted](https://stackoverflow.com/questions/48532301/python-postgres-psycopg2-threadedconnectionpool-exhausted/49366850#49366850)
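The "queue requests through a fixed number of workers" idea can also be sketched with the stdlib instead of gevent: a `ThreadPoolExecutor` whose worker count caps how many connections are ever in use at once (the query here is simulated; in a real app each worker thread would borrow one pooled DB connection):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONNECTIONS = 3  # cap on how many "connections" are busy at once


def run_query(sql):
    # placeholder for cursor.execute(sql) on a pooled connection
    return "result of " + sql


# extra submissions simply queue until one of the workers is free
with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as executor:
    results = list(executor.map(run_query, [f"SELECT {i}" for i in range(10)]))
```

`executor.map` preserves input order, so results line up with the submitted queries even though at most `MAX_CONNECTIONS` run concurrently.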
54,494,842
I am totally new to python and basically new to programming in general. I have a college assignment that involves scanning through a CSV file and storing each row as a list. My file is a list of football data for the premier league season, so the CSV file is structured as follows:

```
date; home; away; homegoals; awaygoals; result;
01/01/2012; Man United; Chelsea; 1; 2; A;
01/02/2012; Man City; Arsenal; 1; 1; D;
```

etc etc. At the moment each column is stored in a variable:

```
date = row[0]
home = row[1]
away = row[2]
homegoals = row[3]
awaygoals = row[4]
```

So I can currently access, for example, all games with more than three goals:

```
totalgoals = homegoals + awaygoals
if totalgoals > 3:
    print(date, home, homegoals, awaygoals, away)
```

I can access all games which featured a certain team:

```
if home == "Man United" or away == "Man United":
    print(date, home, homegoals, awaygoals, away)
```

Very basic, I know. I am looking to be able to track things more in depth. So for example I would like to be able to access results where the team has not won in 3 games etc. I would like to be able to find out if a team is on a low scoring run.

Now, from reading online for a while, it seems to me the way you do this is with a combination of a dictionary and list(s). So far:

```
import csv

with open('premier_league_data_1819.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=';')

    dates = []
    hometeams = []
    awayteams = []
    homegoals = []
    awaygoals = []
    results = []

    next(readCSV)
    for row in readCSV:
        date = row[0]
        home = row[1]
        away = row[2]
        hg = int(row[3])  # Home Goals
        ag = int(row[4])  # Away Goals
        ftr = row[5]      # Result

        dates.append(date)
        hometeams.append(home)
        awayteams.append(away)
        homegoals.append(hg)
        awaygoals.append(ag)
        results.append(ftr)
```

if anyone could point me in the right direction on this I would be grateful. It would be good to know the best way of achieving this so I am not spinning my wheels getting more confused.
I think to start I would need to first store all of a team's games in a list & then add that list to a dictionary that holds all teams' records, with the team name as a key.
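The dictionary-of-lists idea described in the last paragraph might be sketched like this (toy rows stand in for the parsed CSV lines):

```python
from collections import defaultdict

# toy rows standing in for the parsed CSV lines
rows = [
    ("01/01/2012", "Man United", "Chelsea", 1, 2, "A"),
    ("01/02/2012", "Man City", "Arsenal", 1, 1, "D"),
]

games_by_team = defaultdict(list)
for match in rows:
    date, home, away, hg, ag, result = match
    # file every match under both clubs, so a team's whole record is one lookup
    games_by_team[home].append(match)
    games_by_team[away].append(match)
```

`defaultdict(list)` removes the need to check whether a team has been seen before, and questions like "last 3 results for a team" become a slice of `games_by_team[name]`.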
2019/02/02
[ "https://Stackoverflow.com/questions/54494842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11005752/" ]
One option would be to loop through the `list` ('list1'), `filter` the 'names' column based on the 'names' vector, convert it to a single dataset while creating an identification column with `.id`, `spread` from 'long' to 'wide', and remove the 'grp' column:

```
library(tidyverse)
map_df(list1, ~ .x %>%
                   filter(names %in% !! names), .id = 'grp') %>%
    spread(names, values) %>%
    select(-grp)
#   a  b  c
#1 25 13 11
#2 12 10 NA
```

---

Or another option is to bind the datasets together with `bind_rows`, create a grouping id 'grp' to specify the `list` element, `filter` the rows by keeping only the 'names' values that match the 'names' `vector`, and `spread` from 'long' to 'wide':

```
bind_rows(list1, .id = 'grp') %>%
    filter(names %in% !! names) %>%
    spread(names, values)
```

NOTE: It is better not to use reserved keywords for specifying object names (`names`). Also, to avoid confusion, the object should be named differently from the column names of the dataframe object.

---

It can also be done with only `base R`. Create a group identifier with `Map`, `rbind` the `list` elements to a single dataset, `subset` the rows by keeping only the values from the 'names' `vector`, and `reshape` from 'long' to 'wide':

```
df1 <- subset(do.call(rbind, Map(cbind, list1, ind = seq_along(list1))),
              names %in% .GlobalEnv$names)
reshape(df1, idvar = 'ind', direction = 'wide', timevar = 'names')[-1]
```
A mix of base R and `dplyr`. For every list element we create a dataframe with 1 row. Using `dplyr`'s `rbind_list` we row-bind them together and then subset only those columns which we need using `names`.

```
library(dplyr)
rbind_list(lapply(list1, function(x)
    setNames(data.frame(t(x$values)), x$names)))[names]

#      a     b     c
#  <dbl> <dbl> <dbl>
#1    25    13    11
#2    12    10    NA
```

The output without the subset looks like this:

```
rbind_list(lapply(list1, function(x)
    setNames(data.frame(t(x$values)), x$names)))

#      a     b     c     x
#  <dbl> <dbl> <dbl> <dbl>
#1    25    13    11    NA
#2    12    10    NA     2
```
Using base R only:

```
body <- do.call('rbind', lapply(list1, function(list.element) {
    element.vals <- list.element[['values']]
    element.names <- list.element[['names']]
    names(element.vals) <- element.names
    return.vals <- element.vals[names]
    if (all(is.na(return.vals))) NULL else return.vals
}))

df <- as.data.frame(body)
names(df) <- names
df
```
In base R:

```
t(sapply(list1, function(x) setNames(x$values, names)[match(names, x$names)]))

#       a  b  c
# [1,]  25 13 11
# [2,]  12 10 NA
```
For the sake of completeness, here is a [data.table](/questions/tagged/data.table "show questions tagged 'data.table'") approach using `dcast()` and `rowid()`:

```
library(data.table)
nam <- names1   # avoid name conflict with column name
rbindlist(list1)[names1 %in% nam, dcast(.SD, rowid(names1) ~ names1)][
    , names1 := NULL][]
```

> ```
>       a  b    c
> 1: val1 13   11
> 2:   12 10 <NA>
> ```

Or, more concisely, pick columns after reshaping:

```
library(data.table)
rbindlist(list1)[, dcast(.SD, rowid(names1) ~ names1)][, .SD, .SDcols = names1]
```
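For readers coming from the Python side of this thread, the same select-and-widen operation can be sketched in plain Python, using a toy `list1` shaped like the one in these answers (parallel `names`/`values` vectors per element):

```python
# toy data shaped like the list1 in these answers
list1 = [
    {"names": ["a", "b", "c"], "values": [25, 13, 11]},
    {"names": ["a", "b", "x"], "values": [12, 10, 2]},
]
wanted = ["a", "b", "c"]

# one dict per list element, keeping only the wanted columns (missing -> None)
wide = [
    {name: dict(zip(d["names"], d["values"])).get(name) for name in wanted}
    for d in list1
]
# wide == [{'a': 25, 'b': 13, 'c': 11}, {'a': 12, 'b': 10, 'c': None}]
```

`None` plays the role of R's `NA` here; each dict corresponds to one row of the wide table.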
54,494,842
I am totally new to python and basically new to programming in general. I have a college assignment that involves scanning through a CSV file and storing each row as a list. My file is a list of football data for the premier league season so the CSV file is structured as follows: ``` date; home; away; homegoals; awaygoals; result; 01/01/2012; Man United; Chelsea; 1; 2; A; 01/02/2012; Man City; Arsenal; 1; 1; D; ``` etc etc. At the moment each column is stored in a variable: ``` date = row[0] home = row[1] away = row[2] homegoals = row[4] awaygoals = row[5] ``` So I can currently access for example, all games with more than three goals ``` totalgoals = homegoals+awaygoals if totalgoals > 3: print(date, home, homegoals, awaygoals, away) ``` I can access all games which featured a certain team: ``` if (home or away) == "Man United": print(date, home, homegoals, awaygoals, away) ``` Very basic, I know. I am looking to be able to track things more in depth. So for example I would like to be able to access results where the team has not won in 3 games etc. I would like to be able to find out if a team is on a low scoring run. Now, from reading online for a while it seems to me the way you do this is with a combination of a dictionary and list(s). So far: ``` import csv with open('premier_league_data_1819.csv') as csvfile: readCSV = csv.reader(csvfile, delimiter=';') dates = [] hometeams = [] awayteams =[] homegoals = [] awaygoals = [] results = [] next(readCSV) for row in readCSV: date = row[0] home = row[1] away = row[2] hg = int(row[3]) #Home Goals ag = int(row[4]) #Away Goals ftr = row[6] #Result dates.append(date) hometeams.append(home) awayteams.append(away) homegoals.append(hg) awaygoals.append(ag) results.append(ftr) ``` if anyone could point me in the right direction on this I would be grateful. It would be good to know the best way of achieving this so I am not spinning my wheels getting more confused. 
I think to start I would need to first store all of a teams games in a list & then add that list to a dictionary that holds all teams records with the team name as a key.
2019/02/02
[ "https://Stackoverflow.com/questions/54494842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11005752/" ]
A mix of base R and `dplyr`. For every list element we create a dataframe with 1 row. Using `dplyr`'s `rbind_list` row bind them together and then subset only those columns which we need using `names`. ``` library(dplyr) rbind_list(lapply(list1, function(x) setNames(data.frame(t(x$values)), x$names)))[names] # a b c # <dbl> <dbl> <dbl> #1 25 13 11 #2 12 10 NA ``` Output without subset looks like this ``` rbind_list(lapply(list1, function(x) setNames(data.frame(t(x$values)), x$names))) # a b c x # <dbl> <dbl> <dbl> <dbl> #1 25 13 11 NA #2 12 10 NA 2 ```
For the sake of completeness, here is a [data.table](/questions/tagged/data.table "show questions tagged 'data.table'") approach using `dcast()` and `rowid()`: ``` library(data.table) nam <- names1 # avoid name conflict with column name rbindlist(list1)[names1 %in% nam, dcast(.SD, rowid(names1) ~ names1)][, names1 := NULL][] ``` > > > ``` > a b c > 1: val1 13 11 > 2: 12 10 <NA> > > ``` > > Or, more concisely, pick columns after reshaping: ``` library(data.table) rbindlist(list1)[, dcast(.SD, rowid(names1) ~ names1)][, .SD, .SDcols = names1] ```
54,494,842
I am totally new to python and basically new to programming in general. I have a college assignment that involves scanning through a CSV file and storing each row as a list. My file is a list of football data for the premier league season so the CSV file is structured as follows: ``` date; home; away; homegoals; awaygoals; result; 01/01/2012; Man United; Chelsea; 1; 2; A; 01/02/2012; Man City; Arsenal; 1; 1; D; ``` etc etc. At the moment each column is stored in a variable: ``` date = row[0] home = row[1] away = row[2] homegoals = row[4] awaygoals = row[5] ``` So I can currently access for example, all games with more than three goals ``` totalgoals = homegoals+awaygoals if totalgoals > 3: print(date, home, homegoals, awaygoals, away) ``` I can access all games which featured a certain team: ``` if (home or away) == "Man United": print(date, home, homegoals, awaygoals, away) ``` Very basic, I know. I am looking to be able to track things more in depth. So for example I would like to be able to access results where the team has not won in 3 games etc. I would like to be able to find out if a team is on a low scoring run. Now, from reading online for a while it seems to me the way you do this is with a combination of a dictionary and list(s). So far: ``` import csv with open('premier_league_data_1819.csv') as csvfile: readCSV = csv.reader(csvfile, delimiter=';') dates = [] hometeams = [] awayteams =[] homegoals = [] awaygoals = [] results = [] next(readCSV) for row in readCSV: date = row[0] home = row[1] away = row[2] hg = int(row[3]) #Home Goals ag = int(row[4]) #Away Goals ftr = row[6] #Result dates.append(date) hometeams.append(home) awayteams.append(away) homegoals.append(hg) awaygoals.append(ag) results.append(ftr) ``` if anyone could point me in the right direction on this I would be grateful. It would be good to know the best way of achieving this so I am not spinning my wheels getting more confused. 
I think to start I would need to first store all of a teams games in a list & then add that list to a dictionary that holds all teams records with the team name as a key.
2019/02/02
[ "https://Stackoverflow.com/questions/54494842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11005752/" ]
Using base R only ``` body <- do.call('rbind', lapply(list1, function(list.element){ element.vals <- list.element[['values']] element.names <- list.element[['names']] names(element.vals) <- element.names return.vals <- element.vals[names] if(all(is.na(return.vals))) NULL else return.vals })) df <- as.data.frame(body) names(df) <- names df ```
For the sake of completeness, here is a [data.table](/questions/tagged/data.table "show questions tagged 'data.table'") approach using `dcast()` and `rowid()`: ``` library(data.table) nam <- names1 # avoid name conflict with column name rbindlist(list1)[names1 %in% nam, dcast(.SD, rowid(names1) ~ names1)][, names1 := NULL][] ``` > > > ``` > a b c > 1: val1 13 11 > 2: 12 10 <NA> > > ``` > > Or, more concisely, pick columns after reshaping: ``` library(data.table) rbindlist(list1)[, dcast(.SD, rowid(names1) ~ names1)][, .SD, .SDcols = names1] ```
54,494,842
I am totally new to Python and basically new to programming in general. I have a college assignment that involves scanning through a CSV file and storing each row as a list. My file is a list of football data for the Premier League season, so the CSV file is structured as follows: ``` date; home; away; homegoals; awaygoals; result; 01/01/2012; Man United; Chelsea; 1; 2; A; 01/02/2012; Man City; Arsenal; 1; 1; D; ``` etc. At the moment each column is stored in a variable: ``` date = row[0] home = row[1] away = row[2] homegoals = row[4] awaygoals = row[5] ``` So I can currently access, for example, all games with more than three goals ``` totalgoals = homegoals+awaygoals if totalgoals > 3: print(date, home, homegoals, awaygoals, away) ``` I can access all games which featured a certain team: ``` if (home or away) == "Man United": print(date, home, homegoals, awaygoals, away) ``` Very basic, I know. I am looking to be able to track things more in depth. So for example I would like to be able to access results where the team has not won in 3 games etc. I would like to be able to find out if a team is on a low scoring run. Now, from reading online for a while it seems to me the way you do this is with a combination of a dictionary and list(s). So far: ``` import csv with open('premier_league_data_1819.csv') as csvfile: readCSV = csv.reader(csvfile, delimiter=';') dates = [] hometeams = [] awayteams =[] homegoals = [] awaygoals = [] results = [] next(readCSV) for row in readCSV: date = row[0] home = row[1] away = row[2] hg = int(row[3]) #Home Goals ag = int(row[4]) #Away Goals ftr = row[6] #Result dates.append(date) hometeams.append(home) awayteams.append(away) homegoals.append(hg) awaygoals.append(ag) results.append(ftr) ``` If anyone could point me in the right direction on this I would be grateful. It would be good to know the best way of achieving this so I am not spinning my wheels getting more confused.
I think to start I would need to first store all of a team's games in a list and then add that list to a dictionary that holds all teams' records, with the team name as a key.
2019/02/02
[ "https://Stackoverflow.com/questions/54494842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11005752/" ]
In base R ``` t(sapply(list1, function(x) setNames(x$values, names)[match(names, x$names)])) # a b c # [1,] 25 13 11 # [2,] 12 10 NA ```
For the sake of completeness, here is a [data.table](/questions/tagged/data.table "show questions tagged 'data.table'") approach using `dcast()` and `rowid()`: ``` library(data.table) nam <- names1 # avoid name conflict with column name rbindlist(list1)[names1 %in% nam, dcast(.SD, rowid(names1) ~ names1)][, names1 := NULL][] ``` > > > ``` > a b c > 1: val1 13 11 > 2: 12 10 <NA> > > ``` > > Or, more concisely, pick columns after reshaping: ``` library(data.table) rbindlist(list1)[, dcast(.SD, rowid(names1) ~ names1)][, .SD, .SDcols = names1] ```
70,075,290
I am trying to update a Lambda with Zappa. I created and activated a virtualenv and installed the libraries, but when I run `zappa update` for the environment, I get this problem. How can I fix it? :( ``` zappa update qa (pip 18.1 (/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages), Requirement.parse('pip>=20.3'), {'pip-tools'}) Calling update for stage qa.. Downloading and installing dependencies.. Packaging project as zip. Uploading maximo-copy-customers-qa-1637639364.zip (6.0MiB).. 100%|███████████████████████████████████████████████████████████████| 6.32M/6.32M [00:09<00:00, 664kB/s] Updating Lambda function code.. Updating Lambda function configuration.. Oh no! An error occurred! :( ============== Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle sys.exit(cli.handle()) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle self.dispatch_command(self.command, stage) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 559, in dispatch_command self.update(self.vargs['zip'], self.vargs['no_upload']) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 979, in update layers=self.layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/core.py", line 1224, in update_lambda_configuration Layers=layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call return self._make_api_call(operation_name, kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ResourceConflictException: An error occurred
(ResourceConflictException) when calling the UpdateFunctionConfiguration operation: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:937280411572:function:maximo-copy-customers-qa ```
2021/11/23
[ "https://Stackoverflow.com/questions/70075290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16488584/" ]
You should wait for the function code update to complete before proceeding with the update of the function configuration. Inserting the following shell script between the two steps keeps the process waiting: ``` STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text) while [[ "$STATE" == "InProgress" ]] do echo "sleep 5sec ...." sleep 5s STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text) echo $STATE done ```
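The same wait loop can be sketched in Python. This is a generic polling helper mirroring the shell script above; the status callable is a stand-in for whatever actually queries AWS (e.g. the `get-function` call shown), so no AWS API is invoked here:

```python
import time

def wait_until_updated(get_status, poll_seconds=5, timeout_seconds=300):
    """Poll get_status() until it stops returning 'InProgress'.

    get_status is any zero-argument callable returning the Lambda's
    LastUpdateStatus string. Returns the final status, or raises
    TimeoutError if the update is still in progress at the deadline.
    """
    deadline = time.monotonic() + timeout_seconds
    while True:
        status = get_status()
        if status != "InProgress":
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("Lambda update still in progress after timeout")
        time.sleep(poll_seconds)

# Example with a stubbed status source (no AWS call is made here):
states = iter(["InProgress", "InProgress", "Successful"])
final = wait_until_updated(lambda: next(states), poll_seconds=0)
# final == "Successful"
```

In real use the callable would wrap the AWS lookup, and the 5-second poll / 5-minute timeout match the shell version's behaviour.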
Add to your zappa\_settings.json: ``` "lambda_description": "aws:states:opt-out" ``` [Zappa issue about it](https://github.com/zappa/Zappa/issues/1041)
70,075,290
I am trying to update a Lambda with Zappa. I created and activated a virtualenv and installed the libraries, but when I run `zappa update` for the environment, I get this problem. How can I fix it? :( ``` zappa update qa (pip 18.1 (/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages), Requirement.parse('pip>=20.3'), {'pip-tools'}) Calling update for stage qa.. Downloading and installing dependencies.. Packaging project as zip. Uploading maximo-copy-customers-qa-1637639364.zip (6.0MiB).. 100%|███████████████████████████████████████████████████████████████| 6.32M/6.32M [00:09<00:00, 664kB/s] Updating Lambda function code.. Updating Lambda function configuration.. Oh no! An error occurred! :( ============== Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle sys.exit(cli.handle()) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle self.dispatch_command(self.command, stage) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 559, in dispatch_command self.update(self.vargs['zip'], self.vargs['no_upload']) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 979, in update layers=self.layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/core.py", line 1224, in update_lambda_configuration Layers=layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call return self._make_api_call(operation_name, kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ResourceConflictException: An error occurred
(ResourceConflictException) when calling the UpdateFunctionConfiguration operation: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:937280411572:function:maximo-copy-customers-qa ```
2021/11/23
[ "https://Stackoverflow.com/questions/70075290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16488584/" ]
I would add a more sophisticated solution to what LiriB mentioned earlier: use the AWS Lambda CLI, which has the `function-updated` wait command ([documentation](https://docs.aws.amazon.com/cli/latest/reference/lambda/wait/function-updated.html)). Example: `aws lambda wait function-updated --function-name "$FN_NAME"` This command will wait until the function is updated. If it is not updated within 5 minutes, it will stop execution.
Add to your zappa\_settings.json: ``` "lambda_description": "aws:states:opt-out" ``` [Zappa issue about it](https://github.com/zappa/Zappa/issues/1041)
70,075,290
I am trying to update a Lambda with Zappa. I created and activated a virtualenv and installed the libraries, but when I run `zappa update` for the environment, I get this problem. How can I fix it? :( ``` zappa update qa (pip 18.1 (/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages), Requirement.parse('pip>=20.3'), {'pip-tools'}) Calling update for stage qa.. Downloading and installing dependencies.. Packaging project as zip. Uploading maximo-copy-customers-qa-1637639364.zip (6.0MiB).. 100%|███████████████████████████████████████████████████████████████| 6.32M/6.32M [00:09<00:00, 664kB/s] Updating Lambda function code.. Updating Lambda function configuration.. Oh no! An error occurred! :( ============== Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle sys.exit(cli.handle()) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle self.dispatch_command(self.command, stage) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 559, in dispatch_command self.update(self.vargs['zip'], self.vargs['no_upload']) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/cli.py", line 979, in update layers=self.layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/zappa/core.py", line 1224, in update_lambda_configuration Layers=layers File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call return self._make_api_call(operation_name, kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 676, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ResourceConflictException: An error occurred
(ResourceConflictException) when calling the UpdateFunctionConfiguration operation: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:937280411572:function:maximo-copy-customers-qa ```
2021/11/23
[ "https://Stackoverflow.com/questions/70075290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16488584/" ]
I would add a more sophisticated solution to what LiriB mentioned earlier: use the AWS Lambda CLI, which has the `function-updated` wait command ([documentation](https://docs.aws.amazon.com/cli/latest/reference/lambda/wait/function-updated.html)). Example: `aws lambda wait function-updated --function-name "$FN_NAME"` This command will wait until the function is updated. If it is not updated within 5 minutes, it will stop execution.
You should wait for the function code update to complete before proceeding with the update of the function configuration. Inserting the following shell script between the two steps keeps the process waiting: ``` STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text) while [[ "$STATE" == "InProgress" ]] do echo "sleep 5sec ...." sleep 5s STATE=$(aws lambda get-function --function-name "$FN_NAME" --query 'Configuration.LastUpdateStatus' --output text) echo $STATE done ```
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
This may be too roundabout for you, but you could use the %save magic function to save the lines in question and then copy them from the save file.
I tend to keep an open gvim window for this kind of things. Paste your class definition as is and then do something like: ``` :%s/^.*\.:// ```
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
One of the cool features of `ipython` is [session logging](http://ipython.org/ipython-doc/rel-1.1.0/interactive/reference.html#session-logging-and-restoring). If you enable it, the code you input in your session is logged to a file. It's very useful, I use it all the time. To make things even niftier for me, I have a shell alias `ipy_log_cat`, which prints the entire file. You can do something like: `ipy_log_cat | tail` to get the most recent input lines. (this is also useful for `grep`ing session history, etc.). You can also save a few keyboard/mouse strokes by piping it into [`xclip`](https://stackoverflow.com/questions/5130968/how-can-i-copy-the-output-of-a-command-directly-into-my-clipboard)!
This may be too roundabout for you, but you could use the %save magic function to save the lines in question and then copy them from the save file.
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
How about using `%hist n` to print line `n` (or a range of lines) without prompts (including line continuations), and doing your copy from that? (Simply scrolling back to that line is nearly as good). ``` In [1]: def foo(): ...: return 1+2 ...: In [6]: %history 1 def foo(): return 1+2 ```
This may be too roundabout for you, but you could use the %save magic function to save the lines in question and then copy them from the save file.
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
This QTconsole copy regression has been fixed, see <https://github.com/ipython/ipython/issues/3206> - I can confirm that the desired behavior is again present in the QtConsole in the Canopy 1.2 GUI and, I suspect, in the ipython egg installable by free users from the Enthought egg repo.
This may be too roundabout for you, but you could use the %save magic function to save the lines in question and then copy them from the save file.
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
One of the cool features of `ipython` is [session logging](http://ipython.org/ipython-doc/rel-1.1.0/interactive/reference.html#session-logging-and-restoring). If you enable it, the code you input in your session is logged to a file. It's very useful, I use it all the time. To make things even niftier for me, I have a shell alias `ipy_log_cat`, which prints the entire file. You can do something like: `ipy_log_cat | tail` to get the most recent input lines. (this is also useful for `grep`ing session history, etc.). You can also save a few keyboard/mouse strokes by piping it into [`xclip`](https://stackoverflow.com/questions/5130968/how-can-i-copy-the-output-of-a-command-directly-into-my-clipboard)!
I tend to keep an open gvim window for this kind of things. Paste your class definition as is and then do something like: ``` :%s/^.*\.:// ```
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
How about using `%hist n` to print line `n` (or a range of lines) without prompts (including line continuations), and doing your copy from that? (Simply scrolling back to that line is nearly as good). ``` In [1]: def foo(): ...: return 1+2 ...: In [6]: %history 1 def foo(): return 1+2 ```
I tend to keep an open gvim window for this kind of things. Paste your class definition as is and then do something like: ``` :%s/^.*\.:// ```
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
This QTconsole copy regression has been fixed, see <https://github.com/ipython/ipython/issues/3206> - I can confirm that the desired behavior is again present in the QtConsole in the Canopy 1.2 GUI and, I suspect, in the ipython egg installable by free users from the Enthought egg repo.
I tend to keep an open gvim window for this kind of things. Paste your class definition as is and then do something like: ``` :%s/^.*\.:// ```
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
How about using `%hist n` to print line `n` (or a range of lines) without prompts (including line continuations), and doing your copy from that? (Simply scrolling back to that line is nearly as good). ``` In [1]: def foo(): ...: return 1+2 ...: In [6]: %history 1 def foo(): return 1+2 ```
One of the cool features of `ipython` is [session logging](http://ipython.org/ipython-doc/rel-1.1.0/interactive/reference.html#session-logging-and-restoring). If you enable it, the code you input in your session is logged to a file. It's very useful, I use it all the time. To make things even niftier for me, I have a shell alias `ipy_log_cat`, which prints the entire file. You can do something like: `ipy_log_cat | tail` to get the most recent input lines. (this is also useful for `grep`ing session history, etc.). You can also save a few keyboard/mouse strokes by piping it into [`xclip`](https://stackoverflow.com/questions/5130968/how-can-i-copy-the-output-of-a-command-directly-into-my-clipboard)!
20,858,336
I'm using IPython Qt Console and when I copy code FROM IPython it comes out like this: ``` class notathing(object): ...: ...: def __init__(self): ...: pass ...: ``` Is there any way to copy them without those leading triple dots and double colon? P.S. I tried both `Copy` and `Copy Raw Text` in the context menu and it's still the same. OS: Debian Linux 7.2 (KDE).
2013/12/31
[ "https://Stackoverflow.com/questions/20858336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2022518/" ]
How about using `%hist n` to print line `n` (or a range of lines) without prompts (including line continuations), and doing your copy from that? (Simply scrolling back to that line is nearly as good). ``` In [1]: def foo(): ...: return 1+2 ...: In [6]: %history 1 def foo(): return 1+2 ```
This QTconsole copy regression has been fixed, see <https://github.com/ipython/ipython/issues/3206> - I can confirm that the desired behavior is again present in the QtConsole in the Canopy 1.2 GUI and, I suspect, in the ipython egg installable by free users from the Enthought egg repo.
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
You probably want something like: ``` my_entries_list.sort(key=lambda v: datetime.datetime.combine(v.created_date, v.created_time)) ``` Passing `datetime.datetime.combine(created_date, created_time)` tries to call `combine` immediately and breaks since `created_date` and `created_time` are not available as local variables. The `lambda` provides *delayed evaluation*: instead of executing the code immediately, it creates a *function* that will, when called, execute the specified code and return the result. The function also provides the parameter that will be used to access the `created_date` and `created_time` attributes.
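A self-contained version of that sort, with a minimal stand-in `Entry` class (the field names mirror the question; the sample dates are illustrative):

```python
import datetime

class Entry:
    def __init__(self, created_date, created_time):
        self.created_date = created_date   # datetime.date
        self.created_time = created_time   # datetime.time

entries = [
    Entry(datetime.date(2013, 1, 7), datetime.time(9, 30)),
    Entry(datetime.date(2013, 1, 6), datetime.time(23, 59)),
    Entry(datetime.date(2013, 1, 7), datetime.time(8, 0)),
]

# The lambda is called once per element during the sort, combining each
# entry's date and time into a single sortable datetime.
entries.sort(key=lambda v: datetime.datetime.combine(v.created_date, v.created_time))

# Oldest first: Jan 6 23:59, then Jan 7 08:00, then Jan 7 09:30.
```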
Use `lambda`: ``` sorted(my_entries_list, key=lambda e: datetime.combine(e.created_date, e.created_time)) ```
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
Use `lambda`: ``` sorted(my_entries_list, key=lambda e: datetime.combine(e.created_date, e.created_time)) ```
From the [documentation](http://docs.python.org/2/library/stdtypes.html#mutable-sequence-types) > > key specifies a function of one argument that is used to extract a comparison key from each list element > > > e.g. ``` def combined_date_key(elem): return datetime.datetime.combine(elem.created_date, elem.created_time) entries.sort(key=combined_date_key) ``` (this is identical to the lambda already posted, just broken out for clarity). Your other option is to use the `cmp` argument (it's not required here, but for a sufficiently expensive key function which can be short-circuited, this could be faster)
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
Use `lambda`: ``` sorted(my_entries_list, key=lambda e: datetime.combine(e.created_date, e.created_time)) ```
It may be slightly faster to use a tuple like this: ``` sorted(my_entries_list, key = lambda e: (e.created_date, e.created_time)) ``` Then things will be sorted by date, and only if the dates match will the time be considered.
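For completeness, the tuple-key variant is directly runnable; Python compares tuples element by element, so `created_time` only breaks ties between equal dates (the `Entry` stand-in here mirrors the question's class; sample dates are illustrative):

```python
import datetime

class Entry:
    def __init__(self, created_date, created_time):
        self.created_date = created_date   # datetime.date
        self.created_time = created_time   # datetime.time

entries = [
    Entry(datetime.date(2013, 1, 7), datetime.time(9, 30)),
    Entry(datetime.date(2013, 1, 7), datetime.time(8, 0)),
    Entry(datetime.date(2013, 1, 6), datetime.time(23, 59)),
]

# Tuples sort lexicographically: dates are compared first, and the time
# component is only examined when two dates are equal.
ordered = sorted(entries, key=lambda e: (e.created_date, e.created_time))
```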
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
You probably want something like: ``` my_entries_list.sort(key=lambda v: datetime.datetime.combine(v.created_date, v.created_time)) ``` Passing `datetime.datetime.combine(created_date, created_time)` tries to call `combine` immediately and breaks since `created_date` and `created_time` are not available as local variables. The `lambda` provides *delayed evaluation*: instead of executing the code immediately, it creates a *function* that will, when called, execute the specified code and return the result. The function also provides the parameter that will be used to access the `created_date` and `created_time` attributes.
From the [documentation](http://docs.python.org/2/library/stdtypes.html#mutable-sequence-types) > > key specifies a function of one argument that is used to extract a comparison key from each list element > > > eg. ``` def combined_date_key(elem): return datetime.datetime.combine(v.created_date, v.created_time) entries.sort(key=combined_date_key) ``` (this is identical to the lambda already posted, just broken out for clarity). Your other option is to use the `cmp` argument (it's not required here, but for a sufficiently expensive key function which can be short-circuited, this could be faster)
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
You probably want something like: ``` my_entries_list.sort(key=lambda v: datetime.datetime.combine(v.created_date, v.created_time)) ``` Passing `datetime.datetime.combine(created_date, created_time)` tries to call `combine` immediately and breaks since `created_date` and `created_time` are not available as local variables. The `lambda` provides *delayed evaluation*: instead of executing the code immediately, it creates a *function* that will, when called, execute the specified code and return the result. The function also provides the parameter that will be used to access the `created_date` and `created_time` attributes.
It may be slightly faster to use a tuple like this: ``` sorted(my_entries_list, key = lambda e: (e.created_date, e.created_time)) ``` Then things will be sorted by date, and only if the dates match will the time be considered.
14,198,382
I have some Entrys in a Python list. Each Entry has a creation date and creation time. The values are stored as Python datetime.date and datetime.time (as two separate fields). I need to get the list of Entrys sorted so that a previously created Entry comes before the others. I know there is a list.sort() function that accepts a key function. In this case, do I have to use the date and time to create a datetime and use that as the key for `sort()`? There is `datetime.datetime.combine(date,time)` for this. But how do I specify this inside the sort function? I tried `key = datetime.datetime.combine(created_date,created_time)` but the interpreter complains that `the name created_date is not defined` ``` class Entry: created_date = #datetime.date created_time = #datetime.time ... my_entries_list=[Entry1,Entry2...Entry10] my_entries_list.sort(key = datetime.datetime.combine(created_date,created_time)) ```
2013/01/07
[ "https://Stackoverflow.com/questions/14198382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
From the [documentation](http://docs.python.org/2/library/stdtypes.html#mutable-sequence-types) > > key specifies a function of one argument that is used to extract a comparison key from each list element > > > e.g. ``` def combined_date_key(elem): return datetime.datetime.combine(elem.created_date, elem.created_time) entries.sort(key=combined_date_key) ``` (this is identical to the lambda already posted, just broken out for clarity). Your other option is to use the `cmp` argument (it's not required here, but for a sufficiently expensive key function which can be short-circuited, this could be faster)
It may be slightly faster to use a tuple like this: ``` sorted(my_entries_list, key = lambda e: (e.created_date, e.created_time)) ``` Then things will be sorted by date, and only if the dates match will the time be considered.
17,601,602
First, I'm extremely new to coding and self-taught, so models / views / DOM fall on deaf ears (but willing to learn!) So I saved images into a database as blobs (BlobProperty), and am now trying to serve them. **Relevant Code:** (I took out a ton for ease of reading) ``` class Mentors(db.Model): id = db.StringProperty() mentor_id = db.StringProperty() name = db.StringProperty() img_file = db.BlobProperty() ``` ``` class ImageHandler (webapp2.RequestHandler): def get(self): mentor_id=self.request.get('mentor_id') mentor = db.GqlQuery("SELECT * FROM Mentors WHERE mentor_id = :1 LIMIT 1", mentor_id) if mentor.img_file: self.response.headers['Content-Type'] = "image/jpg" self.response.out.write(mentor.img_file) else: self.error(404) ``` ``` application = webapp2.WSGIApplication([ routes.DomainRoute('medhack.prebacked.com', medhack_pages), webapp2.Route(r'/', handler=HomepageHandler, name='home-main'), webapp2.Route(r'/imageit', handler=ImageHandler, name='image-handler') ], debug=True) ``` ``` class MedHackHandler(webapp2.RequestHandler): def get(self, url="/"): # ... bunch of code to serve template etc. mentors_events = db.GqlQuery("SELECT * FROM Mentors_Events WHERE event_id = :1 ORDER BY mentor_type DESC, mentor_id ASC", current_event_id) mentors = mentors_events ``` html: ``` {% for m in mentors %} #here 'mentors' refers to mentors_event query, and 'mentor' refers to the mentors table above. <img src="imageit?mentor_id={{m.mentor.mentor_id}}" alt="{{m.mentor.name}} headshot"/> {% endfor %} ``` It seems that imageit isn't actually being called, or the path is wrong, or... I don't know. So many attempts and fails. Resources I've tried but fail to understand: <https://developers.google.com/appengine/articles/python/serving_dynamic_images> This seemed to be dang close, but I can't figure out how to implement it. Need a "for dummies" translation.
[How to load Blobproperty image in Google App Engine?](https://stackoverflow.com/questions/4283001/how-to-load-blobproperty-image-in-google-app-engine)
2013/07/11
[ "https://Stackoverflow.com/questions/17601602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/646491/" ]
In the handler, you're getting the ID from `self.request.get('mentor_id')`. However, in the template you've set the image URL to `imageit?key=whatever` - so the parameter is "key" not "mentor\_id". Choose one or the other.
Finally figured it out. I'm using a subdomain, and wasn't setting up *that* route, only /img coming off of the www root. I also wasn't using the URL correctly and the 15th pass of <https://developers.google.com/appengine/articles/python/serving_dynamic_images> finally answered my problem.
47,528,696
I am new to docker, so if any of my assumptions are wrong, please point them out. Thanks~ I aim to run a web server, developed by me or a team I belong to, inside docker. So I thought of three steps: get an image, copy the web files into it, and run the container. So I did the steps below: 1- get a docker image. I tried: `docker pull centos`, so that I get an image based on centos. Here I did not care about the version of centos; it is 6.7, or just tagged: latest. I check the image with `docker images`, and I can see it like this: ``` REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/centos latest d123f4e55e12 3 weeks ago 196.6 MB ``` So I think this step succeeded. 2- try copying the files from the local system to the container. I am at the path /tornado, which has a folder named fordocker. The fordocker folder contains the web-server files. I tried a command like this (based on the guide): ``` docker cp fordocker/ d123f4e55e12:/web ``` But! Here comes the error: ``` Error response from daemon: No such container: d123f4e55e12 ``` 3- if I copy the files successfully, I could try: `docker run -d centos xxx python web.py`. Will this step raise an error? I don't know yet. I searched a lot, but nothing explains this phenomenon. It seems that everyone besides me who uses this command succeeds. So here come the questions: 1- Is the method I thought of feasible? Must I create an image through a Dockerfile? 2- Where does the error come from, if the method is feasible? Does the cp command depend on something I have not done? 3- What should I do if the method is not feasible? Create an image myself? Have a good day~
2017/11/28
[ "https://Stackoverflow.com/questions/47528696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8948738/" ]
Docker images and docker containers are different things. You pull or build images; when you launch an image, it becomes a running container. An image is not a running container, and so you will not be able to copy a file into an image with `docker cp`. I do not know if this is what you want to do, but you may 1) launch an image: `docker run ...` 2) copy or modify a file: `docker ps` shows your running containers, and so you can `docker cp ...` a file into a running container 3) maybe save this modified container as a new image with `docker commit ...` But usually, if you want to modify an image, you modify the Dockerfile, and so you can easily build a new image with a command such as `docker build -t myuser/myimage:321 .` Notice the final dot: it means you use the Dockerfile in the current directory. See for example a Dockerfile for Nginx <https://github.com/nginxinc/docker-nginx/blob/c8bfb9b5260d4ad308deac5343b73b027f684375/mainline/stretch/Dockerfile>
I aimed at deploying a python web project with docker, and the first method I thought of was: copy the server files to a container and run it with `python ***.py`. But I did not understand the difference between images and containers. I also found some other methods: 1- build a Dockerfile. This way we can run an image without an extra command like `python ***.py`, because we can write the command into the Dockerfile. 2- get an image that has the right python version, and run something like `docker run -v $PWD/myapp:/usr/src/myapp -w /usr/src/myapp python:3.5 python helloworld.py`, which does not need copying files into the container. Once I have mastered all the methods, I would choose to build a Dockerfile.
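For completeness, the Dockerfile route discussed in this thread might look like the sketch below. The `fordocker/` folder name comes from the question; the `web.py` entrypoint and `/usr/src/app` path are assumptions for illustration:

```dockerfile
# Sketch only: the files are copied into the image at build time,
# so no `docker cp` into a container is needed.
FROM python:3.5

WORKDIR /usr/src/app

# fordocker/ holds the web-server files (per the question).
COPY fordocker/ .

# The start command is baked into the image, so `docker run` needs no extra arguments.
CMD ["python", "web.py"]
```

With this in place, the image could be built with `docker build -t myweb .` and started with `docker run -d myweb` (image name `myweb` is also invented here).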
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
Here is a jsFiddle matching your expectations: [jsFiddle](https://jsfiddle.net/y277r5zL/) Since you wanted the yellow border around the whole content, it was better to extend your wrapper height. ```css #wrapper{ border: 1px solid #F68004; height: 150px; } #content{ background-color: #0075CF; height: 100px; } ``` ```html <div id="wrapper"> <div id="content"> <div id="box"></div> </div> </div> ```
try this **CSS** ``` #wrapper{ border: 1px solid #F68004; } #content{ background-color: #0075CF; height: 100px; margin-bottom: 50px; } ```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
The reason why the margin appears to be on top is due to [margin collapsing](https://developer.mozilla.org/en-US/docs/Web/CSS/margin_collapsing): **Parent and first/last child** If there is no border, padding, inline content, or clearance to separate the margin-top of a block with the margin-top of its first child block, or no border, padding, inline content, height, min-height, or max-height to separate the margin-bottom of a block with the margin-bottom of its last child, then those margins collapse. The collapsed margin ends up outside the parent. If you add a transparent border to your parent div (`#content`) then your margin will behave: ```css #wrapper{ border: 1px solid #F68004; } #content{ border:1px solid transparent; background-color: #0075CF; height: 100px; } #box{ margin-bottom: 50px; height:10px; background-color:red; } ``` ```html <div id="wrapper"> <div id="content"> <div id="box"></div> </div> </div> ``` If you want the white space at the bottom like in your expected image, just add `padding-bottom:50px` to `#wrapper` **Update** Why the `margin-bottom` is causing `margin-top`: as the collapsing margin moves outside your parent div, it becomes margin-bottom of the element outside the parent (which meets the top border of `#wrapper`) - which pushes your `#content` div down (making it look like margin-top)
Solution 1: ``` <div id="wrapper"> <div id="content"> </div> <div id="box"></div> </div> ``` Solution 2: ``` <div id="wrapper"> <div id="content"> <div id="box"></div> </div> </div> <style> #wrapper{ border: 1px solid #F68004; height: 150px; } #content{ background-color: #0075CF; height: 100px; } #box{ /*margin-bottom: 50px;*/ } </style> ```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition
```

Error details: Model not found: product.product

```
Error context:
View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near
<record model="ir.ui.view" id="product_brand_form_view">
    <field name="name">partner.brand</field>
    <field name="model">product.product</field>
    <field name="inherit_id" ref="product.product_normal_form_view"/>
    <field name="arch" type="xml">
        <notebook position="inside">
            <page string="Brands">
                <group>
                    <field name="brand"/>
                    <field name="brand_ids"/>
                </group>
            </page>
        </notebook>
    </field>
</record>
```

My model is:

```python
# -*- coding: utf-8 -*-
from openerp import fields, models


class Product(models.Model):
    _inherit = 'product.product'

    # Add a new column to the product.product model
    brand = fields.Char("brand", required=True)
    brand_ids = fields.One2many(
        'product.brand',
        string='Brand Name',
        readonly=True)
```

And my view file is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<openerp>
    <data>
        <!-- Add brand field to existing view -->
        <record model="ir.ui.view" id="product_brand_form_view">
            <field name="name">partner.brand</field>
            <field name="model">product.product</field>
            <field name="inherit_id" ref="product.product_normal_form_view"/>
            <field name="arch" type="xml">
                <notebook position="inside">
                    <page string="Brands">
                        <group>
                            <field name="brand"/>
                            <field name="brand_ids"/>
                        </group>
                    </page>
                </notebook>
            </field>
        </record>
    </data>
</openerp>
```
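For context, a "Model not found" error while a view is being parsed usually means the model's defining module was not loaded first. The sketch below shows the general shape of an addon manifest (`__openerp__.py`) that declares a dependency on `product`; the module name and data file paths are inferred from the log paths above and are assumptions for illustration, not the asker's actual files:

```python
# Hypothetical __openerp__.py manifest for an "openautoparts_erp" addon
# (names inferred from the log; this is a sketch, not the asker's file).
# The key point: 'product' must appear in `depends`, otherwise the
# product.product model is not yet registered when partner.xml is parsed.
manifest = {
    'name': 'Open Auto Parts ERP',
    'version': '1.0',
    # Loading `product` first registers product.product before
    # partner.xml tries to inherit product_normal_form_view.
    'depends': ['base', 'product'],
    'data': [
        'product_brand/product_brand_views.xml',
        'product_brand/partner.xml',
    ],
}
```

If the manifest already depends on `product`, the other common cause is that the Python file defining the inheriting class is never imported from the package's `__init__.py` (e.g. a missing `from . import product_brand`), so the extension is never registered.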
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
You should give the margin to `#content` instead of `#box`. If you want to keep the margin on `#box` only, try giving it a negative value, i.e. -(50px + the div's height). If the `#box` div's height is 150px, then use:

```css
#box {
    margin-bottom: -200px;
}
```
Try this **CSS**:

```css
#wrapper {
    border: 1px solid #F68004;
}

#content {
    background-color: #0075CF;
    height: 100px;
    margin-bottom: 50px;
}
```
35,887,597
Give `margin-bottom: 50px;` to `#content` instead of `#box`, because you have given a height to the `#content` div and the child's margin collapses here. [Here it is explained with an example](http://www.sitepoint.com/web-foundations/collapsing-margins/). **[Updated Fiddle](https://jsfiddle.net/7drzfb9x/)**
Please check the code below:

```css
#content {
    background-color: #0075cf;
    height: 100px;
    margin-bottom: 50px;
}

#box {
    /* margin-bottom: 50px; */
}
```
35,887,597
Here is a jsFiddle that matches your expectations: [jsFiddle](https://jsfiddle.net/y277r5zL/). Since you wanted the yellow border to be around the whole content, it was better to extend your wrapper's height.

```css
#wrapper {
    border: 1px solid #F68004;
    height: 150px;
}

#content {
    background-color: #0075CF;
    height: 100px;
}
```

```html
<div id="wrapper">
  <div id="content">
    <div id="box"></div>
  </div>
</div>
```
Please check below code ``` #content { background-color: #0075cf; height: 100px; margin-bottom: 50px; } #box{ /* margin-bottom: 50px; */ } ```
35,887,597
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
You should give the margin to `#content` instead of `#box`. If you want to keep the margin on `#box` only, try giving it a negative value, i.e. -(50px + the div's height). If the `#box` div's height is 150px, then use:

```css
#box {
  margin-bottom: -200px;
}
```
Solution 1:

```
<div id="wrapper">
  <div id="content">
  </div>
  <div id="box"></div>
</div>
```

Solution 2:

```
<div id="wrapper">
  <div id="content">
    <div id="box"></div>
  </div>
</div>

<style>
  #wrapper {
    border: 1px solid #F68004;
    height: 150px;
  }
  #content {
    background-color: #0075CF;
    height: 100px;
  }
  #box {
    /* margin-bottom: 50px; */
  }
</style>
```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
Give `margin-bottom: 50px;` to `#content` instead of `#box`. Because you have given a fixed height to the `#content` div, the margin on `#box` collapses here and has no visible effect. [Here it is explained with an example](http://www.sitepoint.com/web-foundations/collapsing-margins/) **[Updated Fiddle](https://jsfiddle.net/7drzfb9x/)**
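The fix above can be seen in a minimal sketch. The `#wrapper`/`#content`/`#box` ids come from the question; the border and background colors are assumed for illustration:

```html
<!-- Minimal demo: a bottom margin on the inner #box has no visible
     effect because #content has a fixed height, so the margin is
     swallowed inside it. Moving the margin onto #content itself
     creates the intended 50px gap before whatever follows. -->
<div id="wrapper">
  <div id="content">
    <div id="box"></div>
  </div>
</div>
<style>
  #wrapper { border: 1px solid #F68004; height: 150px; }
  /* margin lives on #content, not on the child #box */
  #content { background-color: #0075CF; height: 100px; margin-bottom: 50px; }
  /* #box { margin-bottom: 50px; }  <- this version does not work */
</style>
```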
You should give the margin to `#content` instead of `#box`. If you want to keep the margin on `#box` only, try giving it a negative value, i.e. -(50px + the div's height). If the `#box` div's height is 150px, then use:

```css
#box {
  margin-bottom: -200px;
}
```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
Please check the code below:

```
#content {
  background-color: #0075cf;
  height: 100px;
  margin-bottom: 50px;
}
#box {
  /* margin-bottom: 50px; */
}
```
Solution 1:

```
<div id="wrapper">
  <div id="content">
  </div>
  <div id="box"></div>
</div>
```

Solution 2:

```
<div id="wrapper">
  <div id="content">
    <div id="box"></div>
  </div>
</div>

<style>
  #wrapper {
    border: 1px solid #F68004;
    height: 150px;
  }
  #content {
    background-color: #0075CF;
    height: 100px;
  }
  #box {
    /* margin-bottom: 50px; */
  }
</style>
```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
The reason why the margin appears to be on top is due to [margin collapsing](https://developer.mozilla.org/en-US/docs/Web/CSS/margin_collapsing): **Parent and first/last child** If there is no border, padding, inline content, or clearance to separate the margin-top of a block with the margin-top of its first child block, or no border, padding, inline content, height, min-height, or max-height to separate the margin-bottom of a block with the margin-bottom of its last child, then those margins collapse. The collapsed margin ends up outside the parent. If you add a transparent border to your parent div (`#content`) then your margin will behave: ```css #wrapper{ border: 1px solid #F68004; } #content{ border:1px solid transparent; background-color: #0075CF; height: 100px; } #box{ margin-bottom: 50px; height:10px; background-color:red; } ``` ```html <div id="wrapper"> <div id="content"> <div id="box"></div> </div> </div> ``` If you want the white space at the bottom like in your expected image, just add `padding-bottom:50px` to `#wrapper` **Update** Why the `margin-bottom` is causing `margin-top`: As the collapsing margin moves outside your parent div, it becomes the margin-bottom of the element outside the parent (which is the top border of `#wrapper`) - which pushes your `#content` div down (making it look like margin-top)
Please check the code below ``` #content { background-color: #0075cf; height: 100px; margin-bottom: 50px; } #box{ /* margin-bottom: 50px; */ } ```
35,887,597
I am new in Odoo development. I want to add product brand and country for the products. I just created the form view and menu for the brand under product menu in warehouse. Now I want to add a field for the brand in product view. I am trying to extend the product.product model for it but the model not found error occurs. I have no idea what is happening. Error details: ``` 2016-03-09 09:18:15,609 2562 INFO hat_dev openerp.modules.loading: loading 1 modules... 2016-03-09 09:18:15,620 2562 INFO hat_dev openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries 2016-03-09 09:18:15,648 2562 INFO hat_dev openerp.modules.loading: loading 55 modules... 2016-03-09 09:18:15,807 2562 INFO hat_dev openerp.modules.module: module openautoparts_erp: creating or updating database tables 2016-03-09 09:18:15,838 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/product_brand_views.xml 2016-03-09 09:18:15,893 2562 INFO hat_dev openerp.modules.loading: loading openautoparts_erp/product_brand/partner.xml 2016-03-09 09:18:15,919 2562 ERROR hat_dev openerp.addons.base.ir.ir_ui_view: Model not found: product.product Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262] 2016-03-09 09:18:15,926 2562 INFO hat_dev werkzeug: 127.0.0.1 - - [09/Mar/2016 09:18:15] "POST /longpolling/poll HTTP/1.1" 500 - 2016-03-09 09:18:15,952 2562 ERROR hat_dev werkzeug: Error on request: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 177, in run_wsgi execute(self.server.app) File "/usr/lib/python2.7/dist-packages/werkzeug/serving.py", line 165, in execute application_iter = app(environ, start_response) File "/opt/odoo/odoo/openerp/service/server.py", line 290, in app return self.app(e, s) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", line 216, in application return application_unproxied(environ, start_response) File "/opt/odoo/odoo/openerp/service/wsgi_server.py", 
line 202, in application_unproxied result = handler(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1290, in __call__ return self.dispatch(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File 
"/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1264, in __call__ return self.app(environ, start_wrapped) File "/usr/lib/python2.7/dist-packages/werkzeug/wsgi.py", line 579, in __call__ return self.app(environ, start_response) File "/opt/odoo/odoo/openerp/http.py", line 1428, in dispatch ir_http = request.registry['ir.http'] File "/opt/odoo/odoo/openerp/http.py", line 346, in registry return openerp.modules.registry.RegistryManager.get(self.db) if self.db else None File "/opt/odoo/odoo/openerp/modules/registry.py", line 339, in get update_module) File "/opt/odoo/odoo/openerp/modules/registry.py", line 370, in new openerp.modules.load_modules(registry._db, force_demo, status, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 351, in load_modules force, status, report, loaded_modules, update_module) File "/opt/odoo/odoo/openerp/modules/loading.py", line 255, in 
load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/opt/odoo/odoo/openerp/modules/loading.py", line 176, in load_module_graph _load_data(cr, module_name, idref, mode, kind='data') File "/opt/odoo/odoo/openerp/modules/loading.py", line 118, in _load_data tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 901, in convert_file convert_xml_import(cr, module, fp, idref, mode, noupdate, report) File "/opt/odoo/odoo/openerp/tools/convert.py", line 987, in convert_xml_import obj.parse(doc.getroot(), mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 853, in parse self._tags[rec.tag](self.cr, rec, n, mode=mode) File "/opt/odoo/odoo/openerp/tools/convert.py", line 763, in _tag_record id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context ) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_model.py", line 1064, in _update res_id = model_obj.create(cr, uid, values, context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/addons/base/ir/ir_ui_view.py", line 255, in create context=context) File "/opt/odoo/odoo/openerp/api.py", line 268, in wrapper return old_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 372, in old_api result = method(recs, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4094, in create record = self.browse(self._create(old_vals)) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/api.py", line 508, in new_api result = 
method(self._model, cr, uid, *args, **old_kwargs) File "/opt/odoo/odoo/openerp/models.py", line 4285, in _create recs._validate_fields(vals) File "/opt/odoo/odoo/openerp/api.py", line 266, in wrapper return new_api(self, *args, **kwargs) File "/opt/odoo/odoo/openerp/models.py", line 1272, in _validate_fields raise ValidationError('\n'.join(errors)) ParseError: "ValidateError Field(s) `arch` failed against a constraint: Invalid view definition ``` Error details: Model not found: product.product ``` Error context: View `partner.brand` [view_id: 2112, xml_id: n/a, model: product.product, parent_id: 262]" while parsing /opt/odoo/custom/openautoparts_erp/product_brand/partner.xml:5, near <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> ``` My model is: ``` # -*- coding: utf-8 -*- from openerp import fields, models class Product(models.Model): _inherit = 'product.product' # Add a new column to the product.product model brand = fields.Char("brand", required=True) brand_ids = fields.One2many( 'product.brand', string='Brand Name', readonly=True) ``` And my view file is: ``` <?xml version="1.0" encoding="UTF-8"?> <openerp> <data> <!-- Add brand field to existing view --> <record model="ir.ui.view" id="product_brand_form_view"> <field name="name">partner.brand</field> <field name="model">product.product</field> <field name="inherit_id" ref="product.product_normal_form_view"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="Brands"> <group> <field name="brand"/> <field name="brand_ids"/> </group> </page> </notebook> </field> </record> </data> </openerp> ```
2016/03/09
[ "https://Stackoverflow.com/questions/35887597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5123488/" ]
Give `margin-bottom: 50px;` to `#content` instead of `#box`, because you have given a height to the `#content` div and the margin collapses here. [Here it is explained with an example](http://www.sitepoint.com/web-foundations/collapsing-margins/) **[Updated Fiddle](https://jsfiddle.net/7drzfb9x/)**
You can try like this: **[Demo](https://jsfiddle.net/gvLeto9b/1/)** Instead of setting height to `#content`, you can use it for `#box` ``` #content { background-color: #0075CF; } #box {margin-bottom: 50px; height: 100px;} ```
54,262,301
I downloaded openCV and YOLO weights, in order to implement object detection for a certain project using Python 3.5. When I run this code: ```python from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body ``` The console gives the error below: > > ImportError Traceback (most recent call > last) in () > ----> 1 from yolo\_utils import read\_classes, read\_anchors, generate\_colors, preprocess\_image, draw\_boxes, scale\_boxes > 2 from yad2k.models.keras\_yolo import yolo\_head, yolo\_boxes\_to\_corners, preprocess\_true\_boxes, yolo\_loss, yolo\_body > > > ImportError: No module named 'yolo\_utils' > > > Note that I downloaded yolo\_utils.py into the weights folder. How can I fix this issue?
2019/01/18
[ "https://Stackoverflow.com/questions/54262301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10935479/" ]
You are actually importing a user-built module. `yolo_utils` was created by the Coursera course coordinators to make things easy, so the module only exists on their machines, and you are trying to import it on yours. Here is the GitHub link to the module: <https://github.com/JudasDie/deeplearning.ai/blob/master/Convolutional%20Neural%20Networks/week3/yolo_utils.py> Save it to your local machine as a .py file and copy it into the library folder of your Python installation (Anaconda or any other)
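If copying the file into the interpreter's library folder is undesirable, an alternative (assuming the layout implied by the question, i.e. a `weights` folder holding `yolo_utils.py` under the working directory) is to put that folder on `sys.path` before importing:

```python
import sys
from pathlib import Path

# hypothetical layout: ./weights/yolo_utils.py relative to where you run Python
weights_dir = str(Path("weights").resolve())
if weights_dir not in sys.path:
    sys.path.insert(0, weights_dir)

# after this, `from yolo_utils import ...` can resolve weights/yolo_utils.py
```

This avoids duplicating the file, at the cost of tying the import to the working directory.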
Copy the source code of [yolo\_utils](https://github.com/iArunava/YOLOv3-Object-Detection-with-OpenCV/blob/master/yolo_utils.py). Paste it into your source code before importing yolo\_utils. It worked for me. Hope this helps.
10,135,656
I had an existing Django project that I've just added South to. * I ran syncdb locally. * I ran `manage.py schemamigration app_name` locally * I ran `manage.py migrate app_name --fake` locally * I commit and pushed to heroku master * I ran syncdb on heroku * I ran `manage.py schemamigration app_name` on heroku * I ran `manage.py migrate app_name` on heroku I then receive this: ``` $ heroku run python notecard/manage.py migrate notecards Running python notecard/manage.py migrate notecards attached to terminal... up, run.1 Running migrations for notecards: - Migrating forwards to 0005_initial. > notecards:0003_initial Traceback (most recent call last): File "notecard/manage.py", line 14, in <module> execute_manager(settings) File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle ignore_ghosts = ignore_ghosts, File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many result = self.migrate(migration, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate result = self.run(migration) 
File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run return self.run_migration(migration) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration migration_function() File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda> return (lambda: direction(orm)) File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])), File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table ', '.join([col for col in columns if col]), File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute cursor.execute(sql, params) File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute return self.cursor.execute(query, args) django.db.utils.DatabaseError: relation "notecards_semester" already exists ``` I have 3 models. Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.
2012/04/13
[ "https://Stackoverflow.com/questions/10135656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/722427/" ]
You must fake the migrations that create the tables, then run the other migrations as usual. ``` manage.py migrate app_name 000X --fake manage.py migrate app_name ``` With 000X being the number of the migration in which you create the table.
First of all, from the looks of 0003\_initial and 0005\_initial, you've done multiple `schemamigration myapp --initial` commands which add create\_table statements. Having two sets of these will definitely cause problems as one will create tables, then the next one will attempt creating existing tables. Your `migrations` folder is probably completely polluted with odd migrations. Anyways, while I understand the theory of running `schemamigration` on the local machine AND the remote machine, this is probably the root of your problem. Schemamigration generates a new migration - if you have to run it on your development server, commit it, push it, then generate yet another one on your production machine, you'll probably end up with overlapping migrations. Another thing: if you are running syncdb on your remote machine and it's generating tables, that means your database is 100% current -- no migrations needed. You'd do a full `migrate --fake` to match your migrations to your database. ``` I ran syncdb locally. I ran manage.py schemamigration app_name locally I ran manage.py migrate app_name --fake locally I commit and pushed to heroku master I ran syncdb on heroku I ran manage.py schemamigration app_name on heroku # if you ran syncdb, your DB would be in the final state. I ran manage.py migrate app_name on heroku # if you ran syncdb, your DB would be in the final state. Nothing to migrate. ```
64,160,347
I am trying to replicate a Case Statement within my python script (involving pandas) that is applied to a dataframe and fills a new column based on how each row is processed, but it seems like every row is falling into the else condition due to every value in the new column being `Other`. My first thought is that it is due to the `any()` condition that I have used, but I feel like I could be using the wrong approach completely. Any advice on the direction I should take? **Example rows:** ``` index | source_name 1 | CLICK TO CALL - New Mexico 2 | Las Vegas Community Partner 3 | Facebook - Test Camp - Los Angeles 4 | Google - Test Camp - Los Angeles index | landing_page_url 1 | NaN 2 | https://lp.example.com/fb/la/test/ 3 | https://lp.example.com/fb/la/test/?utm_source=facebook 4 | https://lp.example.com/google/la/test/?utm_source=google ``` **Code Criteria:** ``` # Criteria fb_landing_page_crit = [ 'utm_source=facebook', 'fbclid', 'test.com/fb/' ] fb_source_crit = [ 'fb', 'facebook' ] google_landing_page_crit = [ 'gclid' ] google_source_crit = [ 'click to call', 'discovery', 'call', 'website', 'landing page', 'display - lp' ] local_listings_source_crit = [ 'gmb' ] partner_source_crit = [ 'vegas community', 'new orleans community', 'dc community', ] ``` Conditional: ``` def network_parse(df): if isinstance(df, str): if any(x in df['landing_page_url'] for x in fb_landing_page_crit): return 'Facebook' elif any(x in df['landing_page_url'] for x in google_landing_page_crit): return 'Google' elif any(x in df['source_name'] for x in fb_source_crit): return 'Facebook' elif any(x in df['source_name'] for x in google_source_crit): return 'Google' elif any(x in df['source_name'] for x in local_listings_source_crit): return 'Local Listings' elif any(x in df['source_name'] for x in partner_source_crit): return 'Partner - Community Partnership' else: return 'Other' else: return 'Other' ``` **Function Call:** ``` df['network'] = df.apply(network_parse, axis=1) # Every row returns "Other" ```
2020/10/01
[ "https://Stackoverflow.com/questions/64160347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061892/" ]
I figured out a better approach to the problem. Rather than using contains methods, I decided to run a regex search to see if the combined list values are found within the column row and if they are present, then apply that value. Found below are my updates: **Lists:** ``` fb_landing_page_crit = [ 'utm_source=facebook', 'fbclid', 'test.com\/fb\/' ] fb_landing_page_regex = "|".join(fb_landing_page_crit) google_landing_page_crit = [ 'gclid' ] google_landing_page_regex = "|".join(google_landing_page_crit) fb_source_crit = [ 'fb', 'facebook' ] fb_source_regex = "|".join(fb_source_crit) google_source_crit = [ 'click to call', 'discovery', 'call', 'website', 'landing page', 'display \- lp' ] google_source_regex = "|".join(google_source_crit) local_listings_source_crit = [ 'gmb' ] local_listings_source_regex = "|".join(local_listings_source_crit) partner_source_crit = [ 'vegas community', 'new orleans community', 'dc community', ] partner_source_regex = "|".join(partner_source_crit) ``` Function: ``` def network_parse(df): if isinstance(df['landing_page_url'], str): if bool(re.search(fb_landing_page_regex,df['landing_page_url'].lower())) or bool(re.search(fb_source_regex,df['source_name'].lower())): return 'Facebook' if bool(re.search(google_landing_page_regex,df['landing_page_url'].lower())) or bool(re.search(google_source_regex,df['source_name'].lower())): return 'Google' if bool(re.search(local_listings_source_regex,df['source_name'].lower())): return 'Local Listings' if bool(re.search(partner_source_regex,df['source_name'].lower())): return 'Partner - Community Partnership' else: return 'Other' else: return 'Other' ``` Function call: ``` df['network'] = df.apply(network_parse, axis=1) ```
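For the same kind of criteria, a vectorized alternative to row-wise `apply` can be sketched with pandas string matching and `np.select`; the tiny frame and the shortened regex lists here are illustrative stand-ins for the question's full criteria, not the original code:

```python
import numpy as np
import pandas as pd

# illustrative rows modeled on the question's examples
df = pd.DataFrame({
    "source_name": ["CLICK TO CALL - New Mexico",
                    "Las Vegas Community Partner",
                    "Facebook - Test Camp - Los Angeles"],
    "landing_page_url": [None,
                         "https://lp.example.com/fb/la/test/?fbclid=1",
                         "https://lp.example.com/fb/la/test/"],
})

url = df["landing_page_url"].str.lower().fillna("")
src = df["source_name"].str.lower()

# shortened versions of the answer's joined regexes, in precedence order
conditions = [
    url.str.contains("utm_source=facebook|fbclid") | src.str.contains("fb|facebook"),
    url.str.contains("gclid") | src.str.contains("click to call|call|website"),
    src.str.contains("vegas community|new orleans community|dc community"),
]
choices = ["Facebook", "Google", "Partner - Community Partnership"]
df["network"] = np.select(conditions, choices, default="Other")
```

The order of `conditions` preserves the question's precedence (Facebook checked before Google), and each column is scanned once instead of once per row.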
Right now, the problem is not the `any` but the `x in df['source_name']` part (I took `source_name` as it is simpler to explain there). You check if any row of the dataframe is *equal* to (e.g.) `'Google'`, not if it contains the word. To achieve the latter, you could nest the `for` statements: ``` ... if any(x in y for y in df['landing_page_url'] for x in fb_landing_page_crit): return 'Facebook' ``` However, I am pretty sure this is not the most elegant and efficient way, as it loops multiple times over the same column, but for smallish dataframes it might be ok. Otherwise it might help you find a more efficient solution. Edit: To investigate your problem, you could run the following two snippets, where the first one gives `False` and the second one gives `True`: ``` test = ['This', 'thought'] a = ['This is', 'a', 'longer Text than', 'I thought'] print(any(x in a for x in test)) # This is principally what you coded test2 = ['This', 'I thought'] print(any(x in a for x in test2)) ```
58,971,323
I have an assignment in my class to implement something in Java and Python. I need to implement an IntegerStack with both languages. All the values are supposed to be held in an array and there are some meta data values like head() index. When I implement this in Java I just create an Array with max size (that I choose): ``` public class IntegerStack { public static int MAX_NUMBER = 50; private int[] _stack; private int _head; public IntegerStack() { _stack = new int[MAX_NUMBER]; _head = -1; } public boolean emptyStack() { return _head < 0; } public int head() { if (_head < 0) throw new StackOverflowError("The stack is empty."); // underflow return _stack[_head]; } // [...] } ``` I'm really not sure how to do this in Python. I checked out a couple tutorials, where they all say that an array in python has the syntax `my_array = [1,2,3]`. But it's different, because I can use it as a List and append items as I want. So I could make a for loop and initialize 50 zero elements in a Python array, but would it be the same thing as in Java? It is not clear to me how a Python List is different from an Array.
2019/11/21
[ "https://Stackoverflow.com/questions/58971323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10137268/" ]
In python, if you declare an array like: ``` myarray = [] ``` You are declaring an empty array with head -1, and you can append values to it with the .append() function and access them the same way you would in java. For all intents and purposes, they are the same thing
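The append/pop behaviour described above, as a quick runnable sketch:

```python
stack = []            # empty list, nothing pushed yet
stack.append(10)      # push
stack.append(20)      # push
top = stack[-1]       # peek at the last element, like head()
popped = stack.pop()  # pop removes and returns the last element
```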
It's easier to use collections.deque for stacks in python. ``` from collections import deque stack = deque() stack.append(1) # push stack.append(2) # push stack.append(3) # push stack.append(4) # push t = stack[-1] # your 'head()' tt = stack.pop() # pop if not len(stack): # empty() print("It's empty") ```
58,971,323
I have an assignment in my class to implement something in Java and Python. I need to implement an IntegerStack with both languages. All the values are supposed to be held in an array and there are some meta data values like head() index. When I implement this in Java I just create an Array with max size (that I choose): ``` public class IntegerStack { public static int MAX_NUMBER = 50; private int[] _stack; private int _head; public IntegerStack() { _stack = new int[MAX_NUMBER]; _head = -1; } public boolean emptyStack() { return _head < 0; } public int head() { if (_head < 0) throw new StackOverflowError("The stack is empty."); // underflow return _stack[_head]; } // [...] } ``` I'm really not sure how to do this in Python. I checked out a couple tutorials, where they all say that an array in python has the syntax `my_array = [1,2,3]`. But it's different, because I can use it as a List and append items as I want. So I could make a for loop and initialize 50 zero elements in a Python array, but would it be the same thing as in Java? It is not clear to me how a Python List is different from an Array.
2019/11/21
[ "https://Stackoverflow.com/questions/58971323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10137268/" ]
First of all, you need to distinguish between arrays and lists in Python. What you are talking about here is the `list` class, but there are [actual arrays](https://docs.python.org/3/library/array.html) and they are more or less the same as arrays in Java. Python's `list` is similar to Java's `ArrayList` and to C++'s `std::vector`. In other words, you have three possible solutions here: 1. Use a simple `list` and just append elements to it. 2. Use python's `array`s that are the closest thing to Java's arrays. 3. Use python's [deque](https://docs.python.org/3/library/collections.html#deque-objects). Regarding the use of `list`, if your goal is to initialize it with N empty elements, what you can do is: ```py N = 10 # or any other number my_list = [0] * N # 0 is the element here for the list to be filled with ``` or a little more fancy approach ```py from itertools import repeat my_list = list(repeat(0, N)) ```
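Option 2 above, as a minimal runnable sketch; the manual `head` bookkeeping mirrors the Java class and is an illustration, not part of the answer's original code:

```python
from array import array

MAX_NUMBER = 50

# 'i' is the typecode for a signed C int, the closest match to Java's int[]
stack = array('i', [0] * MAX_NUMBER)
head = -1

# push a value by hand, following the Java _head convention
head += 1
stack[head] = 7
```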
In python, if you declare an array like: ``` myarray = [] ``` You are declaring an empty array with head -1, and you can append values to it with the .append() function and access them the same way you would in java. For all intents and purposes, they are the same thing
58,971,323
I have an assignment in my class to implement something in Java and Python. I need to implement an IntegerStack with both languages. All the values are supposed to be held in an array and there are some meta data values like head() index. When I implement this in Java I just create an Array with max size (that I choose): ``` public class IntegerStack { public static int MAX_NUMBER = 50; private int[] _stack; private int _head; public IntegerStack() { _stack = new int[MAX_NUMBER]; _head = -1; } public boolean emptyStack() { return _head < 0; } public int head() { if (_head < 0) throw new StackOverflowError("The stack is empty."); // underflow return _stack[_head]; } // [...] } ``` I'm really not sure how to do this in Python. I checked out a couple tutorials, where they all say that an array in python has the syntax `my_array = [1,2,3]`. But it's different, because I can use it as a List and append items as I want. So I could make a for loop and initialize 50 zero elements in a Python array, but would it be the same thing as in Java? It is not clear to me how a Python List is different from an Array.
2019/11/21
[ "https://Stackoverflow.com/questions/58971323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10137268/" ]
In python, if you declare an array like: ``` myarray = [] ``` You are declaring an empty array with head -1, and you can append values to it with the .append() function and access them the same way you would in java. For all intents and purposes, they are the same thing
This is the same code translated from Java to Python: ``` class IntegerStack: def __init__(self): self._stack = [] self._head = -1 def emptyStack(self): return self._head < 0 def head(self): if self._head < 0: raise Exception("The stack is empty.") return self._stack[self._head] ```
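To exercise that translation, here is the class again as a self-contained snippet with a hypothetical `push` method added; the Java original hides its mutators behind `// [...]`, so `push` is an assumption:

```python
class IntegerStack:
    def __init__(self):
        self._stack = []
        self._head = -1

    def emptyStack(self):
        return self._head < 0

    def push(self, value):  # hypothetical: not shown in the original answer
        self._stack.append(value)
        self._head += 1

    def head(self):
        if self._head < 0:
            raise Exception("The stack is empty.")
        return self._stack[self._head]

s = IntegerStack()
s.push(42)
top = s.head()          # 42
empty = s.emptyStack()  # False
```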
58,971,323
I have an assignment in my class to implement something in Java and Python. I need to implement an IntegerStack with both languages. All the values are supposed to be held in an array and there are some meta data values like head() index. When I implement this in Java I just create an Array with max size (that I choose): ``` public class IntegerStack { public static int MAX_NUMBER = 50; private int[] _stack; private int _head; public IntegerStack() { _stack = new int[MAX_NUMBER]; _head = -1; } public boolean emptyStack() { return _head < 0; } public int head() { if (_head < 0) throw new StackOverflowError("The stack is empty."); // underflow return _stack[_head]; } // [...] } ``` I'm really not sure how to do this in Python. I checked out a couple tutorials, where they all say that an array in python has the syntax `my_array = [1,2,3]`. But it's different, because I can use it as a List and append items as I want. So I could make a for loop and initialize 50 zero elements in a Python array, but would it be the same thing as in Java? It is not clear to me how a Python List is different from an Array.
2019/11/21
[ "https://Stackoverflow.com/questions/58971323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10137268/" ]
If you want to implement something close to your java version, you could use a numpy array by importing numpy. Numpy arrays are similar because, like Java arrays, they have a fixed size and a fixed element type (their elements are still mutable, though). Then you could write in your constructor: ```py import numpy as np _stack = np.zeros(MAX_NUMBER, dtype=int)  # dtype=int, since np.zeros defaults to float ``` Otherwise you could use the mutable list object from python itself; in this case the list essentially already is a stack, as you can see in the python docs for [data structures](https://docs.python.org/2/tutorial/datastructures.html): > > The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved (“last-in, first-out”). To add an item to the top of the stack, use append(). To retrieve an item from the top of the stack, use pop() without an explicit index. For example: > > > ```py >>> stack = [3, 4, 5] >>> stack.append(6) >>> stack.append(7) >>> stack [3, 4, 5, 6, 7] >>> stack.pop() 7 >>> stack [3, 4, 5, 6] >>> stack.pop() 6 >>> stack.pop() 5 >>> stack [3, 4] ``` The first version can be more performant, however, because a growing list occasionally has to reallocate and copy its underlying buffer, whereas the preallocated numpy array is sized once up front.
First of all, you need to distinguish between arrays and lists in Python. What you are talking about here is the `list` class, but there are [actual arrays](https://docs.python.org/3/library/array.html) and they are more or less the same as arrays in Java. Python's `list` is similar to Java's `ArrayList` and to C++'s `std::vector`. In other words, you have three possible solutions here: 1. Use a simple `list` and just append elements to it. 2. Use python's `array`s, which are the closest thing to Java's arrays. 3. Use python's [deque](https://docs.python.org/3/library/collections.html#deque-objects). Regarding the use of `list`, if your goal is to initialize it with N empty elements, what you can do is: ```py N = 10 # or any other number my_list = [0] * N # 0 is the element here for the list to be filled with ``` or a slightly fancier approach ```py from itertools import repeat my_list = list(repeat(0, N)) ```
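For option 2, the stdlib `array` module enforces the element type much like Java's `int[]`; a small illustrative sketch:

```python
from array import array

# 'i' = signed int; appending a non-integer raises TypeError
stack = array('i', [0] * 5)  # preallocated with five zeros
print(len(stack))    # 5
print(stack[0])      # 0

growable = array('i')  # arrays can also grow like lists
growable.append(42)
print(growable[-1])  # 42
```

Unlike a Java array, an `array.array` can still grow via `append`; what it shares with Java is the homogeneous, type-checked element storage.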
It's easier to use collections.deque for stacks in python. ``` from collections import deque stack = deque() stack.append(1) # push stack.append(2) # push stack.append(3) # push stack.append(4) # push t = stack[-1] # your 'head()' tt = stack.pop() # pop if not len(stack): # empty() print("It's empty") ```
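The deque calls above can also be wrapped in a class that mirrors the Java interface from the question; the class and method names below are illustrative, not from the original answer:

```python
from collections import deque


class IntegerStack:
    """Thin wrapper giving a deque a Java-like stack interface (sketch)."""

    def __init__(self):
        self._stack = deque()

    def emptyStack(self):
        return len(self._stack) == 0

    def push(self, value):
        self._stack.append(value)

    def pop(self):
        return self._stack.pop()  # raises IndexError when empty

    def head(self):
        if self.emptyStack():
            raise IndexError("The stack is empty.")
        return self._stack[-1]


s = IntegerStack()
s.push(1)
s.push(2)
print(s.head())  # 2
```

No explicit head index is needed: the deque's own length tracks the top of the stack.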
5,325,858
I need to perform http PUT operations from python Which libraries have been proven to support this? More specifically I need to perform PUT on keypairs, not file upload. I have been trying to work with the restful\_lib.py, but I get invalid results from the API that I am testing. (I know the results are invalid because I can fire off the same request with curl from the command line and it works.) After attending Pycon 2011 I came away with the impression that pycurl might be my solution, so I have been trying to implement that. I have two issues here. First, pycurl renames "PUT" as "UPLOAD" which seems to imply that it is focused on file uploads rather than key pairs. Second when I try to use it I never seem to get a return from the .perform() step. Here is my current code: ``` import pycurl import urllib url='https://xxxxxx.com/xxx-rest' UAM=pycurl.Curl() def on_receive(data): print data arglist= [\ ('username', 'testEmailAdd@test.com'),\ ('email', 'testEmailAdd@test.com'),\ ('username','testUserName'),\ ('givenName','testFirstName'),\ ('surname','testLastName')] encodedarg=urllib.urlencode(arglist) path2= url+"/user/"+"99b47002-56e5-4fe2-9802-9a760c9fb966" UAM.setopt(pycurl.URL, path2) UAM.setopt(pycurl.POSTFIELDS, encodedarg) UAM.setopt(pycurl.SSL_VERIFYPEER, 0) UAM.setopt(pycurl.UPLOAD, 1) #Set to "PUT" UAM.setopt(pycurl.CONNECTTIMEOUT, 1) UAM.setopt(pycurl.TIMEOUT, 2) UAM.setopt(pycurl.WRITEFUNCTION, on_receive) print "about to perform" print UAM.perform() ```
2011/03/16
[ "https://Stackoverflow.com/questions/5325858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/662525/" ]
httplib should manage. <http://docs.python.org/library/httplib.html> There's an example on this page <http://effbot.org/librarybook/httplib.htm>
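As a hedged sketch of this approach on Python 3, where `httplib` was renamed `http.client` (the host, path, and field names are placeholders, and no request is actually sent here):

```python
import urllib.parse
# httplib became http.client in Python 3
import http.client

# Key/value pairs to PUT, urlencoded into the request body
fields = [("username", "testUserName"), ("givenName", "testFirstName")]
body = urllib.parse.urlencode(fields)
headers = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": str(len(body)),
}

print(body)  # username=testUserName&givenName=testFirstName

# Sending is left commented out since the host is a placeholder:
# conn = http.client.HTTPSConnection("example.com")
# conn.request("PUT", "/user/some-id", body=body, headers=headers)
# response = conn.getresponse()
```

The point is that a PUT of key pairs is just a normal request with an urlencoded body; nothing file-upload-specific is required.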
[urllib](http://docs.python.org/library/urllib.html) and [urllib2](http://docs.python.org/library/urllib2.html) are also suggested.
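On Python 3, where `urllib` and `urllib2` merged into `urllib.request`, a PUT can be expressed directly via the `method` argument (the URL and field values below are placeholders, and nothing is sent):

```python
import urllib.parse
import urllib.request

data = urllib.parse.urlencode([("username", "testUserName")]).encode("ascii")
req = urllib.request.Request(
    "https://example.com/user/some-id",
    data=data,
    method="PUT",  # `method` is available since Python 3.3
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # PUT
# urllib.request.urlopen(req) would actually send it
```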
Thank you all for your assistance. I think I have found an answer. My code now looks like this: ``` import urllib import httplib import lxml from lxml import etree url='xxxx.com' UAM=httplib.HTTPSConnection(url) arglist= [\ ('username', 'testEmailAdd@test.com'),\ ('email', 'testEmailAdd@test.com'),\ ('username','testUserName'),\ ('givenName','testFirstName'),\ ('surname','testLastName')\ ] encodedarg=urllib.urlencode(arglist) uuid="99b47002-56e5-4fe2-9802-9a760c9fb966" path= "/uam-rest/user/"+uuid UAM.putrequest("PUT", path) UAM.putheader('content-type','application/x-www-form-urlencoded') UAM.putheader('accepts','application/com.internap.ca.uam.ama-v1+xml') UAM.putheader("Content-Length", str(len(encodedarg))) UAM.endheaders() UAM.send(encodedarg) response = UAM.getresponse() html = etree.HTML(response.read()) result = etree.tostring(html, pretty_print=True, method="html") print result ``` Updated: Now I am getting valid responses. This seems to be my solution. (The pretty print at the end isn't working, but I don't really care, that is just there while I am building the function.)
26,699,356
I am using Spyder 2.3.1 under Windows 7 and have a running iPython 2.3 Kernel on a Raspberry Pi RASPBIAN Linux OS. I can connect to an external kernel, using a .json file and this tutorial: [Remote ipython console](https://pythonhosted.org/spyder/ipythonconsole.html) But what now? If I "run" a script (F5), then the kernel tries to execute the script like: ``` %run "C:\test.py" ``` ERROR: File `u'C:\\test.py'` not found. This comes back with an error, ofc, because the script lies on my machine under c: and not on the remote machine/raspberry pi. How do I tell Spyder to first copy the script to the remote machine and execute it there? If I check the "this is a remote kernel" checkbox, I cannot connect to the existing kernel anymore. What does that box mean? Will it copy the script via SSH to the remote machine before execution? If I enter the SSH login information, I get an "It seems the kernel died unexpectedly" error.
2014/11/02
[ "https://Stackoverflow.com/questions/26699356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153871/" ]
The tutorial that you mention is a little bit out of date, as Spyder now has the ability to connect to remote kernels. The "This is a remote kernel" checkbox, when checked, enables the portion of the dialog where you can enter your ssh connection credentials. (You should not need this unless you have manually opened the required ssh tunnels to forward the process ports of your remote kernel...) Besides, the ipython connection info (the json file) must correspond to the remote kernel running on your raspberry pi. Finally, there is no means at this time to copy the script lying on your local pc when you hit run. The preferred method would actually be the reverse: mount your raspberry pi's filesystem using a tool like sshfs and edit the files in place. The plan is to implement an sftp client in Spyder so that this will not be required and you will be able to explore the remote filesystem from Spyder's file explorer. To summarize: 1) assuming that you are logged in to your raspberry pi, launch a local IPython kernel with `ipython kernel`. It should give you the name of the json file to use, which you should copy to your local pc. 2) in spyder on your local pc, connect to a remote kernel with that json file and your ssh credentials. I know that it is cumbersome, but it is a first step.
Another option is to use Spyder cells to send the whole contents of your file to the IPython console. I think this is easier than mounting your remote filesystem with Samba or sshfs (in case that's not possible or hard to do). Cells are defined by adding lines of the form `# %%` to your file. For example, let's say your file is: ``` # -*- coding: utf-8 -*- def f(x): print(x + x) f(5) ``` Then you can just add a cell at the bottom like this ``` # -*- coding: utf-8 -*- def f(x): print(x + x) f(5) # %% ``` and by pressing `Ctrl` + `Enter` above the cell line, the full contents of your file will be sent to the console and evaluated at once.
After a search in the `site-packages\spyderlib` directory for the keyword `%run`, I found the method(in `site-packages\spyderlib\plugins\ipythonconsole.py`) which constructs the `%run` command: ``` def run_script_in_current_client(self, filename, wdir, args, debug): """Run script in current client, if any""" norm = lambda text: remove_backslashes(to_text_string(text)) client = self.get_current_client() if client is not None: # Internal kernels, use runfile if client.kernel_widget_id is not None: line = "%s('%s'" % ('debugfile' if debug else 'runfile', norm(filename)) if args: line += ", args='%s'" % norm(args) if wdir: line += ", wdir='%s'" % norm(wdir) line += ")" else: # External kernels, use %run line = "%run " if debug: line += "-d " line += "\"%s\"" % to_text_string(filename) if args: line += " %s" % norm(args) self.execute_python_code(line) self.visibility_changed(True) self.raise_() else: #XXX: not sure it can really happen QMessageBox.warning(self, _('Warning'), _("No IPython console is currently available to run <b>%s</b>." "<br><br>Please open a new one and try again." ) % osp.basename(filename), QMessageBox.Ok) ``` I added the following code to convert paths after `else: # External kernels, use %run` ``` # ----added to remap local dir to remote dir------- localpath = "Z:\wk" remotepath = "/mnt/sdb1/wk" if localpath in filename: # convert path to linux path filename = filename.replace(localpath, remotepath) filename = filename.replace("\\", "/") # ----- END mod ``` now it runs the file on the remote machine when I hit F5. I am on `Spyder 2.3.9` with samba share mapped to z: drive.
Just thought I'd make my first post to update Roy Cai's answer for Spyder 4 in case anyone is looking for this. Roy's answer worked flawlessly for me. Spyder 4 has moved the relevant code from where it was when he wrote the answer. The method is now in \Lib\site-packages\spyder\plugins\ipythonconsole and the python file is plugin.py. Everything otherwise works the same as it used to - The place to insert the modified code is the same, and the same update fixes it. (incidentally - yay for the ability to save login info for logging into remote kernels in Spyder 4!)
74,180,540
I'm trying to execute a Python project in the terminal, but this error appears: ``` (base) hopu@docker-manager1:~/bentoml-airquality$ python src/main.py Traceback (most recent call last): File "src/main.py", line 7, in <module> from src import VERSION, SERVICE, DOCKER_IMAGE_NAME ModuleNotFoundError: No module named 'src' ``` The project hierarchy is as follows: [Project hierarchy](https://i.stack.imgur.com/octVb.png) If I execute the project from an IDE, it works well.
2022/10/24
[ "https://Stackoverflow.com/questions/74180540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15767572/" ]
Your PYTHONPATH is determined by the directory where your python executable is located, not by where you're executing it from. For this reason, you should be able to import the names directly, and not from `src`. You're trying to import from `/src`, but your path is already in there. Maybe something like this might work: ```py from . import VERSION, SERVICE, DOCKER_IMAGE_NAME ```
The interpreter is right. For `from src import VERSION, SERVICE, DOCKER_IMAGE_NAME` to be valid, `src` has to be a module or package accessible from the Python path. The problem is that the `python` program looks in the current directory to search for the modules or packages to run, but the current directory is not added to the Python path. So it does find the `src/main.py` module, but *inside the interpreter* it cannot find the `src` package. What can be done? 1. add the directory containing src to the Python path On a Unix-like system, it can be done simply with: ``` PYTHONPATH=".:$PYTHONPATH" python src/main.py ``` 2. start the module as a package element: ``` python -m src.main ``` That second way has an additional gift: you can then use the Pythonic `from . import ...`.
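A minimal sketch of the programmatic equivalent of option 1, prepending the project root to `sys.path` at runtime (the helper name `add_project_root` is mine, not part of the question's project):

```python
import os
import sys

def add_project_root(path):
    """Prepend `path` to sys.path so that `import src` can be resolved.

    This mimics PYTHONPATH=".:$PYTHONPATH" from inside the script itself.
    """
    path = os.path.abspath(path)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path

# From src/main.py one would typically add the parent of the src/ directory:
# add_project_root(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
```

Editing `sys.path` by hand is generally considered a workaround; `python -m src.main` is the cleaner fix.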
68,555,515
I am pretty new to python and webscraping, but I have managed to get a well working table to print, I am just curious how I would get this table into a CSV file in the exact same format as the print statement. Any logic explanations would be greatly appreciated and very helpful! My code is below... ``` from bs4 import BeautifulSoup import requests import time htmlText = requests.get('https://www.fangraphs.com/teams/mariners/stats').text soup = BeautifulSoup(htmlText, 'lxml', ) playerTable = soup.find('div', class_='team-stats-table') def BattingStats(): headers = [th.text for th in playerTable.find_all("th")] fmt_string = " ".join(["{:<25}", *["{:<6}"] * (len(headers) - 1)]) print(fmt_string.format(*headers)) for tr in playerTable.find_all("tr")[1:55]: tds = [td.text for td in tr.select("td")] with open('MarinersBattingStats.csv', 'w') as f: f.write(fmt_string.format(*tds)) print(fmt_string.format(*tds)) if __name__ == 'main': while True: BattingStats() timeWait = 100 time.sleep(432 * timeWait) BattingStats() ```
2021/07/28
[ "https://Stackoverflow.com/questions/68555515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16523958/" ]
The charges API allows specifying a description. The description can be anything you want. It's your own tidbit of info you can have added to each transaction. When you export transactions on the Stripe site to a CSV, the description can be exported too. I assume it can be extracted with their APIs as well. Would that help / suffice? ``` const stripe = require('stripe') await stripe(stripe_secret_key).charges.create({ amount: .., currency: .., source: .., application_fee: .., description: "this here can be whatever" }, { stripe_account: accountId }); ```
There isn't really a way to do this on the Stripe dashboard, but you can certainly build something like this yourself. You'd start by [retrieving](https://stripe.com/docs/api/checkout/sessions/list) all the Checkout Sessions, then loop over the list and add up the [totals](https://stripe.com/docs/api/checkout/sessions/object#checkout_session_object-amount_total) based on the `reference_id` in metadata (or lack thereof). Rather than redoing the above logic every time you want to check the totals (which will get progressively slower as the number of completed Checkout Sessions increases) you could instead rely on [webhooks](https://stripe.com/docs/webhooks) to increment your totals as they come in via the `checkout.session.completed` event.
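A rough sketch of the aggregation step described above, operating on plain dicts shaped like Checkout Session objects; in a real integration you would page through `stripe.checkout.Session.list()` or consume `checkout.session.completed` events rather than a hard-coded list:

```python
def totals_by_reference(sessions):
    """Sum amount_total per metadata reference_id.

    Sessions without a reference_id are grouped under None, matching the
    "(or lack thereof)" case above. Amounts are in the currency's smallest
    unit (e.g. cents), as Stripe reports them.
    """
    totals = {}
    for s in sessions:
        key = (s.get("metadata") or {}).get("reference_id")
        totals[key] = totals.get(key, 0) + (s.get("amount_total") or 0)
    return totals

# Sample data shaped like Checkout Sessions (values invented):
sample = [
    {"amount_total": 1500, "metadata": {"reference_id": "order-1"}},
    {"amount_total": 2500, "metadata": {"reference_id": "order-1"}},
    {"amount_total": 900, "metadata": {}},
]
print(totals_by_reference(sample))  # {'order-1': 4000, None: 900}
```

With the webhook approach you would call this incrementally, adding each completed session's total to a stored running tally instead of re-scanning the full history.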
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
It seems that `ax.set_yscale("log", basey=10)` does not recognise the keyword argument `basey`. This keyword was replaced in recent matplotlib releases; if you install an older version it should work: `pip install matplotlib==3.3.4` So why is this happening in the first place? The package you are using does not pin down specific dependency versions, so it installs the most recent versions of its dependencies. If there are any API changes in more recent versions of those packages, the code breaks - it's good practice to pin down the dependency versions of a project.
I looked for posts with a similar issue ("wrong" keyword calls on `__init__`) on GitHub and SO and it seems like you might need to update your matplotlib: ``` sudo pip install --upgrade matplotlib # for Linux sudo pip install matplotlib --upgrade # for Windows ```
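To show the API change without leaving pure Python, here is a small helper of my own (not part of besca or matplotlib) that picks the right keyword for a given matplotlib version string. As far as I know, `basey` was deprecated in matplotlib 3.3 and removed in 3.5, where `base` replaces it:

```python
def log_scale_kwargs(mpl_version, base=10):
    """Return the kwargs for ax.set_yscale("log", ...) matching a
    matplotlib version string: "basey" before 3.5, "base" from 3.5 on.

    The 3.5 cutoff is an assumption based on the deprecation notes; pin
    your matplotlib version rather than rely on this in production.
    """
    major, minor = (int(part) for part in mpl_version.split(".")[:2])
    if (major, minor) >= (3, 5):
        return {"base": base}
    return {"basey": base}

print(log_scale_kwargs("3.3.4"))  # {'basey': 10}
print(log_scale_kwargs("3.6.0"))  # {'base': 10}
```

One could then call `ax.set_yscale("log", **log_scale_kwargs(matplotlib.__version__))`, though pinning the dependency as suggested above is the simpler fix.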
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
It seems that `ax.set_yscale("log", basey=10)` does not recognise the keyword argument `basey`. This keyword was replaced in recent matplotlib releases; if you install an older version it should work: `pip install matplotlib==3.3.4` So why is this happening in the first place? The package you are using does not pin down specific dependency versions, so it installs the most recent versions of its dependencies. If there are any API changes in more recent versions of those packages, the code breaks - it's good practice to pin down the dependency versions of a project.
I think it is because of version issues. In version 3.6.0 of matplotlib, keywords like "basey" or "subsy" have been changed. More details can be found in [matplotlib.scale.LogScale](https://matplotlib.org/stable/api/scale_api.html#matplotlib.scale.LogScale) and [yscale](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.yscale.html) ``` class matplotlib.scale.LogScale(axis, *, base=10, subs=None, nonpositive='clip') Bases: ScaleBase A standard logarithmic scale. Care is taken to only plot positive values. Parameters: axis : Axis The axis for the scale. base : float, default: 10 The base of the logarithm. nonpositive : {'clip', 'mask'}, default: 'clip' Determines the behavior for non-positive values. They can either be masked as invalid, or clipped to a very small positive number. subs : sequence of int, default: None Where to place the subticks between each major tick. For example, in a log10 scale, [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick. ```
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
It seems that `ax.set_yscale("log", basey=10)` does not recognise the keyword argument `basey`. This keyword was replaced in recent matplotlib releases; if you install an older version it should work: `pip install matplotlib==3.3.4` So why is this happening in the first place? The package you are using does not pin down specific dependency versions, so it installs the most recent versions of its dependencies. If there are any API changes in more recent versions of those packages, the code breaks - it's good practice to pin down the dependency versions of a project.
I had a similar problem when I tried to scale the y-axis of my plot logarithmically, with 2 as the base. I had success when I used base=2 instead of basey=2. ``` plt.yscale("log", base=2) ``` This should also work with the latest version of matplotlib.
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
I looked for posts with a similar issue ("wrong" keyword calls on `__init__`) on GitHub and SO and it seems like you might need to update your matplotlib: ``` sudo pip install --upgrade matplotlib # for Linux sudo pip install matplotlib --upgrade # for Windows ```
I think it is because of version issues. In version 3.6.0 of matplotlib, keywords like "basey" or "subsy" have been changed. More details can be found in [matplotlib.scale.LogScale](https://matplotlib.org/stable/api/scale_api.html#matplotlib.scale.LogScale) and [yscale](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.yscale.html) ``` class matplotlib.scale.LogScale(axis, *, base=10, subs=None, nonpositive='clip') Bases: ScaleBase A standard logarithmic scale. Care is taken to only plot positive values. Parameters: axis : Axis The axis for the scale. base : float, default: 10 The base of the logarithm. nonpositive : {'clip', 'mask'}, default: 'clip' Determines the behavior for non-positive values. They can either be masked as invalid, or clipped to a very small positive number. subs : sequence of int, default: None Where to place the subticks between each major tick. For example, in a log10 scale, [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick. ```
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
I had a similar problem when I tried to scale the y-axis of my plot logarithmically, with 2 as the base. I had success when I used base=2 instead of basey=2. ``` plt.yscale("log", base=2) ``` This should also work with the latest version of matplotlib.
I looked for posts with a similar issue ("wrong" keyword calls on `__init__`) on GitHub and SO and it seems like you might need to update your matplotlib: ``` sudo pip install --upgrade matplotlib # for Linux sudo pip install matplotlib --upgrade # for Windows ```
71,213,873
I've been trying to run through this tutorial (<https://bedapub.github.io/besca/tutorials/scRNAseq_tutorial.html>) for the past day and constantly get an error after running this portion: `bc.pl.kp_genes(adata, min_genes=min_genes, ax = ax1)` The error is the following: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/miniconda3/lib/python3.9/site-packages/besca/pl/_filter_threshold_plots.py", line 57, in kp_genes ax.set_yscale("log", basey=10) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axes/_base.py", line 4108, in set_yscale ax.yaxis._set_scale(value, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/axis.py", line 761, in _set_scale self._scale = mscale.scale_factory(value, self, **kwargs) File "/opt/miniconda3/lib/python3.9/site-packages/matplotlib/scale.py", line 597, in scale_factory return scale_cls(axis, **kwargs) TypeError: __init__() got an unexpected keyword argument 'basey' ``` Anyone have any thoughts? I've uninstalled and installed matplotlib to make sure its updated but that doesn't seem to have done anything either. Would appreciate any help! And thank you in advance I'm a beginner!
2022/02/21
[ "https://Stackoverflow.com/questions/71213873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18272784/" ]
I had a similar problem when I tried to scale the y-axis of my plot logarithmically, with 2 as the base. I had success when I used base=2 instead of basey=2. ``` plt.yscale("log", base=2) ``` This should also work with the latest version of matplotlib.
I think it is because of version issues. In version 3.6.0 of matplotlib, keywords like "basey" or "subsy" have been changed. More details can be found in [matplotlib.scale.LogScale](https://matplotlib.org/stable/api/scale_api.html#matplotlib.scale.LogScale) and [yscale](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.yscale.html) ``` class matplotlib.scale.LogScale(axis, *, base=10, subs=None, nonpositive='clip') Bases: ScaleBase A standard logarithmic scale. Care is taken to only plot positive values. Parameters: axis : Axis The axis for the scale. base : float, default: 10 The base of the logarithm. nonpositive : {'clip', 'mask'}, default: 'clip' Determines the behavior for non-positive values. They can either be masked as invalid, or clipped to a very small positive number. subs : sequence of int, default: None Where to place the subticks between each major tick. For example, in a log10 scale, [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick. ```
65,383,598
I have read some posts but I have not been able to get what I want. I have a dataframe with ~4k rows and a few columns which I exported from Infoblox (a DNS server). One of the columns holds dhcp attributes and I would like to expand it into separate values. This is my df (I attach a screenshot from Excel): [excel screenshot](https://i.stack.imgur.com/LT4BB.png) One of the columns is a list of dictionaries with all the options; this is an example (sanitized): ``` [ {"name": "tftp-server-name", "num": 66, "value": "10.70.0.27", "vendor_class": "DHCP"}, {"name": "bootfile-name", "num": 67, "value": "pxelinux.0", "vendor_class": "DHCP"}, {"name": "dhcp-lease-time", "num": 51, "use_option": False, "value": "21600", "vendor_class": "DHCP"}, {"name": "domain-name-servers", "num": 6, "use_option": False, "value": "10.71.73.143,10.71.74.163", "vendor_class": "DHCP"}, {"name": "domain-name", "num": 15, "use_option": False, "value": "example.com", "vendor_class": "DHCP"}, {"name": "routers", "num": 3, "use_option": True, "value": "10.70.1.200", "vendor_class": "DHCP"}, ] ``` I would like to expand this column into several columns (on the same row), using "name" as the df column and "value" as the row value. This would be the goal: ``` tftp-server-name voip-tftp-server dhcp-lease-time domain-name-server domain-name routers 0 10.71.69.58 10.71.69.58,10.71.69.59 86400 10.71.73.143,10.71.74.163 example.com 10.70.12.254 ``` In order to have a global df with all the information, I guess I should create a new df keeping the index to merge with the primary one, but I wasn't able to do it. I have tried with expand, append, explode... Please, could you help me? Thank you so much for your solutions (to both); I could get it to work, and this is my final file:
I'll add the complete solution, just in case someone needs it (maybe there is a more Pythonic way, but it works). Note the rename must be applied to the merged frame, not the original `df`, or the later column selection will fail: ``` def formato(df): opciones = df['options'] df_int = pd.DataFrame() for i in opciones: df_int = df_int.append(pd.DataFrame(i).set_index("name")[["value"]].T.reset_index(drop=True)) df_int.index = range(len(df_int.index)) df_global = pd.merge(df, df_int, left_index=True, right_index=True, how="inner") df_global = df_global.rename(columns={"comment": "Comentario", "end_addr": "IP Fin", "network": "Red", "start_addr": "IP Inicio", "disable": "Deshabilitado"}) df_global = df_global[["Red", "Comentario", "IP Inicio", "IP Fin", "dhcp-lease-time", "domain-name-servers", "domain-name", "routers", "tftp-server-name", "bootfile-name", "voip-tftp-server", "wdm-server-ip-address", "ftp-file-server", "vendor-encapsulated-options"]] return df_global ```
2020/12/20
[ "https://Stackoverflow.com/questions/65383598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13546976/" ]
You seem to be a little confused about [variable scope](https://www.php.net/manual/en/language.variables.scope.php). There's also an errant `$` in `$this->$cards`. While this [is valid syntax](https://www.php.net/manual/en/language.variables.variable.php), it's not doing what you expect. Consider the following to get an idea of what's going wrong with your code. See comments and output at the end for explanation. ``` <?php $cards = [4, 5, 6]; // Global scope class GamesManager { public $cards = []; // Class scope public function __construct() { $this->cards = [1, 2, 3]; // This will set the class variable $cards to [1, 2, 3]; var_dump($this->cards); // This will print the variable we've just set. } public function pullCard() { global $cards; // This refers to $cards defined at the top ([4, 5, 6]); var_dump($this->cards); // This refers to the class variable named $cards /* array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } */ var_dump($cards); // This refers to the $cards 'imported' by the global statement at the top of this method. /* array(3) { [0]=> int(4) [1]=> int(5) [2]=> int(6) } */ } } $gm = new GamesManager; $gm->pullCard(); /* array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } array(3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) } array(3) { [0]=> int(4) [1]=> int(5) [2]=> int(6) } */ ```
In this case you don't need a 'global' keyword. Just access your class attributes using $this keyword. ``` class GamesManager extends Main { protected $DB; public $cards = array(); public function __construct() { $this->cards = array('2' => 2, '3' => 3, '4' => 4, '5' => 5, '6' => 6, '7' => 7, '8' => 8, '9' => 9, 'T' => 10, 'J' => 10, 'Q' => 10, 'K' => 10, 'A' => 11); var_dump($this->cards); } public function pullCard() { var_dump($this->cards); } } ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
You could try to match 2 lines (with a lookahead inside to avoid capturing the linefeed) or only one (to process the last, odd line). I expanded your example to show that it works for more than 3 lines, with a little "cheat": adding a newline at the end to handle all cases: ``` import re s = "Line 1\nLine 2\nLine 3\nline4\nline5" result = re.findall(r'(.+?\n.+?(?=\n)|.+)', s+"\n") print(result) ``` result: ``` ['Line 1\nLine 2', 'Line 3\nline4', 'line5'] ``` the "add newline" cheat allows it to process this properly: ``` s = "Line 1\nLine 2\nLine 3\nline4\nline5\nline6" ``` result: ``` ['Line 1\nLine 2', 'Line 3\nline4', 'line5\nline6'] ```
I wanted to post the grouper recipe from the itertools docs as well, but [PyToolz' `partition_all`](https://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.partition_all) is actually a bit nicer. ``` from toolz import partition_all s = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5" result = ['\n'.join(tup) for tup in partition_all(2, s.splitlines())] # ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5'] ``` --- Here's the `grouper` solution for the sake of completeness: ``` from itertools import zip_longest # Recipe from the itertools docs. def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) result = ['\n'.join((a, b)) if b else a for a, b in grouper(s, 2)] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
I'm hoping I got your logic right - if you want a list of strings, each with *at most one* newline delimiter, then the following code snippet will work: ``` # Newline-delimited string a = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5\nLine 6\nLine 7" # Resulting list b = [] # First split the string into "1-line-long" pieces a = a.split("\n") for i in range(1, len(a), 2): # Then join the pieces by 2's and append to the resulting list b.append(a[i - 1] + "\n" + a[i]) # Account for the possibility of an odd-sized list if i == len(a) - 2: b.append(a[i + 1]) print(b) >>> ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5\nLine 6', 'Line 7'] ``` Although this solution is neither the fastest nor the best, it's easy to understand and it does not involve extra libraries.
Use [str.splitlines()](https://www.tutorialspoint.com/python/string_splitlines.htm) to split the full input into lines: ``` >>> inputString = "Line 1\nLine 2\nLine 3" >>> outputStringList = inputString.splitlines() >>> print(outputStringList) ['Line 1', 'Line 2', 'Line 3'] ``` Then, join the first lines to obtain the desired result: ``` >>> result = ['\n'.join(outputStringList[:-1])] + outputStringList[-1:] >>> print(result) ['Line 1\nLine 2', 'Line 3'] ``` Bonus: write a function that does the same, for any number of desired lines: ``` def split_to_max_lines(inputStr, n): lines = inputStr.splitlines() # This defines which element in the list becomes the 2nd in the # final result. For n = 2, index = -1, for n = 4, index = -3, etc. split_index = -(n - 1) result = ['\n'.join(lines[:split_index])] result += lines[split_index:] return result print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 2)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 4)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 5)) ``` Returns: ``` ['Line 1\nLine 2\nLine 3\nline 4\nLine 5', 'Line 6'] ['Line 1\nLine 2\nLine 3', 'line 4', 'Line 5', 'Line 6'] ['Line 1\nLine 2', 'Line 3', 'line 4', 'Line 5', 'Line 6'] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
Here is an alternative using the [`grouper` itertools recipe](https://more-itertools.readthedocs.io/en/latest/api.html#more_itertools.grouper) to group any number of lines together. *Note: you can implement this recipe by hand, or you can optionally install a third-party library that implements this recipe for you, i.e. `pip install more_itertools`.* **Code** ``` from more_itertools import grouper def group_lines(iterable, n=2): return ["\n".join((line for line in lines if line)) for lines in grouper(n, iterable.split("\n"), fillvalue="")] ``` **Demo** ``` s1 = "Line 1\nLine 2\nLine 3" s2 = "Line 1\nLine 2\nLine 3\nLine4\nLine5" group_lines(s1) # ['Line 1\nLine 2', 'Line 3'] group_lines(s2) # ['Line 1\nLine 2', 'Line 3\nLine4', 'Line5'] group_lines(s2, n=3) # ['Line 1\nLine 2\nLine 3', 'Line4\nLine5'] ``` --- **Details** `group_lines()` splits the string into lines and then groups the lines by `n` via `grouper`. ``` list(grouper(2, s1.split("\n"), fillvalue="")) [('Line 1', 'Line 2'), ('Line 3', '')] ``` Finally, for each group of lines, only non-empty strings are rejoined with a newline character. See [`more_itertools` docs](https://more-itertools.readthedocs.io/en/latest/api.html) for more details on `grouper`.
``` b = "a\nb\nc\nd".split("\n", 3) c = ["\n".join(b[:-1]), b[-1]] print c ``` gives ``` ['a\nb\nc', 'd'] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
Here is an alternative using the [`grouper` itertools recipe](https://more-itertools.readthedocs.io/en/latest/api.html#more_itertools.grouper) to group any number of lines together. *Note: you can implement this recipe by hand, or you can optionally install a third-party library that implements this recipe for you, i.e. `pip install more_itertools`.* **Code** ``` from more_itertools import grouper def group_lines(iterable, n=2): return ["\n".join((line for line in lines if line)) for lines in grouper(n, iterable.split("\n"), fillvalue="")] ``` **Demo** ``` s1 = "Line 1\nLine 2\nLine 3" s2 = "Line 1\nLine 2\nLine 3\nLine4\nLine5" group_lines(s1) # ['Line 1\nLine 2', 'Line 3'] group_lines(s2) # ['Line 1\nLine 2', 'Line 3\nLine4', 'Line5'] group_lines(s2, n=3) # ['Line 1\nLine 2\nLine 3', 'Line4\nLine5'] ``` --- **Details** `group_lines()` splits the string into lines and then groups the lines by `n` via `grouper`. ``` list(grouper(2, s1.split("\n"), fillvalue="")) [('Line 1', 'Line 2'), ('Line 3', '')] ``` Finally, for each group of lines, only non-empty strings are rejoined with a newline character. See [`more_itertools` docs](https://more-itertools.readthedocs.io/en/latest/api.html) for more details on `grouper`.
I'm hoping I got your logic right - if you want a list of strings, each with *at most one* newline delimiter, then the following code snippet will work: ``` # Newline-delimited string a = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5\nLine 6\nLine 7" # Resulting list b = [] # First split the string into "1-line-long" pieces a = a.split("\n") for i in range(1, len(a), 2): # Then join the pieces by 2's and append to the resulting list b.append(a[i - 1] + "\n" + a[i]) # Account for the possibility of an odd-sized list if i == len(a) - 2: b.append(a[i + 1]) print(b) >>> ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5\nLine 6', 'Line 7'] ``` Although this solution is neither the fastest nor the best, it's easy to understand and it does not involve extra libraries.
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
I wanted to post the grouper recipe from the itertools docs as well, but [PyToolz' `partition_all`](https://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.partition_all) is actually a bit nicer. ``` from toolz import partition_all s = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5" result = ['\n'.join(tup) for tup in partition_all(2, s.splitlines())] # ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5'] ``` --- Here's the `grouper` solution for the sake of completeness: ``` from itertools import zip_longest # Recipe from the itertools docs. def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) result = ['\n'.join((a, b)) if b else a for a, b in grouper(s, 2)] ```
I'm not sure what you mean by "a maximum of 2 lines" and how you'd hope to achieve that. However, splitting on newlines is fairly simple. ``` 'Line 1\nLine 2\nLine 3'.split('\n') ``` This will result in: ``` ['Line 1', 'Line 2', 'Line 3'] ``` To get the "at most 2 lines per element" grouping, you'll have to write your own logic on top of that.
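That "own logic" can be a short stdlib-only sketch (the helper name `pair_lines` and the default chunk size are mine, just for illustration): split the string into lines, then rejoin them in slices of at most two.

```python
def pair_lines(text, n=2):
    # Split into individual lines, then rejoin them in chunks of at most n.
    lines = text.splitlines()
    return ["\n".join(lines[i:i + n]) for i in range(0, len(lines), n)]

print(pair_lines("Line 1\nLine 2\nLine 3"))
# ['Line 1\nLine 2', 'Line 3']
```

Passing a different `n` generalizes the grouping without any extra dependency.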
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
You could try to match two lines at a time (with a lookahead inside the pattern to avoid capturing the linefeed) or only one (to process the last, odd line). I expanded your example to show that it works for more than 3 lines (with a little "cheat": adding a newline at the end to handle all cases): ``` import re s = "Line 1\nLine 2\nLine 3\nline4\nline5" result = re.findall(r'(.+?\n.+?(?=\n)|.+)', s+"\n") print(result) ``` result: ``` ['Line 1\nLine 2', 'Line 3\nline4', 'line5'] ``` the "add newline cheat" allows an even number of lines to be processed properly as well: ``` s = "Line 1\nLine 2\nLine 3\nline4\nline5\nline6" ``` result: ``` ['Line 1\nLine 2', 'Line 3\nline4', 'line5\nline6'] ```
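As a side note (my own variation, not from the answer above): making the second line optional in the pattern avoids the trailing-newline cheat entirely. Because `.` does not match newlines by default, the stray `\n` between pair matches is simply skipped by `findall`.

```python
import re

s = "Line 1\nLine 2\nLine 3\nline4\nline5"
# One line, plus an optional second line: '.' never crosses a newline,
# so each match is at most two lines joined by a single '\n'.
result = re.findall(r'.+(?:\n.+)?', s)
print(result)
# ['Line 1\nLine 2', 'Line 3\nline4', 'line5']
```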
I'm hoping I get your logic right - If you want a list of strings, each with *at most one* newline delimiter, then the following code snippet will work: ``` # Newline-delimited string a = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5\nLine 6\nLine 7" # Resulting list b = [] # First split the string into "1-line-long" pieces a = a.split("\n") for i in range(1, len(a), 2): # Then join the pieces by 2's and append to the resulting list b.append(a[i - 1] + "\n" + a[i]) # Account for the possibility of an odd-sized list if i == len(a) - 2: b.append(a[i + 1]) print(b) >>> ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5\nLine 6', 'Line 7'] ``` Although this solution is neither the fastest nor the best, it's easy to understand and it does not involve extra libraries.
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
I wanted to post the grouper recipe from the itertools docs as well, but [PyToolz' `partition_all`](https://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.partition_all) is actually a bit nicer. ``` from toolz import partition_all s = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5" result = ['\n'.join(tup) for tup in partition_all(2, s.splitlines())] # ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5'] ``` --- Here's the `grouper` solution for the sake of completeness: ``` from itertools import zip_longest # Recipe from the itertools docs. def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) result = ['\n'.join((a, b)) if b else a for a, b in grouper(s.splitlines(), 2)] ```
Use [str.splitlines()](https://www.tutorialspoint.com/python/string_splitlines.htm) to split the full input into lines: ``` >>> inputString = "Line 1\nLine 2\nLine 3" >>> outputStringList = inputString.splitlines() >>> print(outputStringList) ['Line 1', 'Line 2', 'Line 3'] ``` Then, join the first lines to obtain the desired result: ``` >>> result = ['\n'.join(outputStringList[:-1])] + outputStringList[-1:] >>> print(result) ['Line 1\nLine 2', 'Line 3'] ``` Bonus: write a function that does the same, for any number of desired lines: ``` def split_to_max_lines(inputStr, n): lines = inputStr.splitlines() # This defines which element in the list becomes the 2nd in the # final result. For n = 2, index = -1, for n = 4, index = -3, etc. split_index = -(n - 1) result = ['\n'.join(lines[:split_index])] result += lines[split_index:] return result print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 2)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 4)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 5)) ``` Returns: ``` ['Line 1\nLine 2\nLine 3\nline 4\nLine 5', 'Line 6'] ['Line 1\nLine 2\nLine 3', 'line 4', 'Line 5', 'Line 6'] ['Line 1\nLine 2', 'Line 3', 'line 4', 'Line 5', 'Line 6'] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
Here is an alternative using the [`grouper` itertools recipe](https://more-itertools.readthedocs.io/en/latest/api.html#more_itertools.grouper) to group any number of lines together. *Note: you can implement this recipe by hand, or you can optionally install a third-party library that implements this recipe for you, i.e. `pip install more_itertools`.* **Code** ``` from more_itertools import grouper def group_lines(iterable, n=2): return ["\n".join((line for line in lines if line)) for lines in grouper(n, iterable.split("\n"), fillvalue="")] ``` **Demo** ``` s1 = "Line 1\nLine 2\nLine 3" s2 = "Line 1\nLine 2\nLine 3\nLine4\nLine5" group_lines(s1) # ['Line 1\nLine 2', 'Line 3'] group_lines(s2) # ['Line 1\nLine 2', 'Line 3\nLine4', 'Line5'] group_lines(s2, n=3) # ['Line 1\nLine 2\nLine 3', 'Line4\nLine5'] ``` --- **Details** `group_lines()` splits the string into lines and then groups the lines by `n` via `grouper`. ``` list(grouper(2, s1.split("\n"), fillvalue="")) [('Line 1', 'Line 2'), ('Line 3', '')] ``` Finally, for each group of lines, only non-empty strings are rejoined with a newline character. See [`more_itertools` docs](https://more-itertools.readthedocs.io/en/latest/api.html) for more details on `grouper`.
I wanted to post the grouper recipe from the itertools docs as well, but [PyToolz' `partition_all`](https://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.partition_all) is actually a bit nicer. ``` from toolz import partition_all s = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5" result = ['\n'.join(tup) for tup in partition_all(2, s.splitlines())] # ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5'] ``` --- Here's the `grouper` solution for the sake of completeness: ``` from itertools import zip_longest # Recipe from the itertools docs. def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) result = ['\n'.join((a, b)) if b else a for a, b in grouper(s.splitlines(), 2)] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
I'm hoping I get your logic right - If you want a list of strings, each with *at most one* newline delimiter, then the following code snippet will work: ``` # Newline-delimited string a = "Line 1\nLine 2\nLine 3\nLine 4\nLine 5\nLine 6\nLine 7" # Resulting list b = [] # First split the string into "1-line-long" pieces a = a.split("\n") for i in range(1, len(a), 2): # Then join the pieces by 2's and append to the resulting list b.append(a[i - 1] + "\n" + a[i]) # Account for the possibility of an odd-sized list if i == len(a) - 2: b.append(a[i + 1]) print(b) >>> ['Line 1\nLine 2', 'Line 3\nLine 4', 'Line 5\nLine 6', 'Line 7'] ``` Although this solution is neither the fastest nor the best, it's easy to understand and it does not involve extra libraries.
``` b = "a\nb\nc\nd".split("\n", 3) c = ["\n".join(b[:-1]), b[-1]] print c ``` gives ``` ['a\nb\nc', 'd'] ```
46,234,207
I have a multi-line string: ``` inputString = "Line 1\nLine 2\nLine 3" ``` I want to have an array where each element has a maximum of 2 lines in it, as below: ``` outputStringList = ["Line 1\nLine2", "Line3"] ``` Can I convert inputString to outputStringList in Python? Any help will be appreciated.
2017/09/15
[ "https://Stackoverflow.com/questions/46234207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595590/" ]
Here is an alternative using the [`grouper` itertools recipe](https://more-itertools.readthedocs.io/en/latest/api.html#more_itertools.grouper) to group any number of lines together. *Note: you can implement this recipe by hand, or you can optionally install a third-party library that implements this recipe for you, i.e. `pip install more_itertools`.* **Code** ``` from more_itertools import grouper def group_lines(iterable, n=2): return ["\n".join((line for line in lines if line)) for lines in grouper(n, iterable.split("\n"), fillvalue="")] ``` **Demo** ``` s1 = "Line 1\nLine 2\nLine 3" s2 = "Line 1\nLine 2\nLine 3\nLine4\nLine5" group_lines(s1) # ['Line 1\nLine 2', 'Line 3'] group_lines(s2) # ['Line 1\nLine 2', 'Line 3\nLine4', 'Line5'] group_lines(s2, n=3) # ['Line 1\nLine 2\nLine 3', 'Line4\nLine5'] ``` --- **Details** `group_lines()` splits the string into lines and then groups the lines by `n` via `grouper`. ``` list(grouper(2, s1.split("\n"), fillvalue="")) [('Line 1', 'Line 2'), ('Line 3', '')] ``` Finally, for each group of lines, only non-empty strings are rejoined with a newline character. See [`more_itertools` docs](https://more-itertools.readthedocs.io/en/latest/api.html) for more details on `grouper`.
Use [str.splitlines()](https://www.tutorialspoint.com/python/string_splitlines.htm) to split the full input into lines: ``` >>> inputString = "Line 1\nLine 2\nLine 3" >>> outputStringList = inputString.splitlines() >>> print(outputStringList) ['Line 1', 'Line 2', 'Line 3'] ``` Then, join the first lines to obtain the desired result: ``` >>> result = ['\n'.join(outputStringList[:-1])] + outputStringList[-1:] >>> print(result) ['Line 1\nLine 2', 'Line 3'] ``` Bonus: write a function that does the same, for any number of desired lines: ``` def split_to_max_lines(inputStr, n): lines = inputStr.splitlines() # This defines which element in the list becomes the 2nd in the # final result. For n = 2, index = -1, for n = 4, index = -3, etc. split_index = -(n - 1) result = ['\n'.join(lines[:split_index])] result += lines[split_index:] return result print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 2)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 4)) print(split_to_max_lines("Line 1\nLine 2\nLine 3\nline 4\nLine 5\nLine 6", 5)) ``` Returns: ``` ['Line 1\nLine 2\nLine 3\nline 4\nLine 5', 'Line 6'] ['Line 1\nLine 2\nLine 3', 'line 4', 'Line 5', 'Line 6'] ['Line 1\nLine 2', 'Line 3', 'line 4', 'Line 5', 'Line 6'] ```
47,659,731
My code is running fine for first iteration but after that it outputs the following error: ``` ValueError: matrix must be 2-dimensional ``` To the best of my knowledge (which is not much in python), my code is correct. but I don't know, why it is not running correctly for all given iterations. Could anyone help me in this problem. ``` from __future__ import division import numpy as np import math import matplotlib.pylab as plt import sympy as sp from numpy.linalg import inv #initial guesses x = -2 y = -2.5 i1 = 0 while i1<5: F= np.matrix([[(x**2)+(x*y**3)-9],[(3*y*x**2)-(y**3)-4]]) theta = np.sum(F) J = np.matrix([[(2*x)+y**3, 3*x*y**2],[6*x*y, (3*x**2)-(3*y**2)]]) Jinv = inv(J) xn = np.array([[x],[y]]) xn_1 = xn - (Jinv*F) x = xn_1[0] y = xn_1[1] #~ print theta print xn i1 = i1+1 ```
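A hedged reading of that traceback (my own sketch, not part of the original thread): indexing an `np.matrix` row such as `xn_1[0]` returns a 1×1 matrix rather than a scalar, so from the second iteration on `x` and `y` are matrices, and building `np.matrix([[...]])` from them raises `ValueError: matrix must be 2-dimensional`. Switching to plain `np.array` and extracting scalars with two-index access sidesteps it:

```python
import numpy as np
from numpy.linalg import inv

# Same Newton iteration as the question, with np.array instead of np.matrix.
x, y = -2.0, -2.5
for _ in range(5):
    F = np.array([[x**2 + x*y**3 - 9],
                  [3*y*x**2 - y**3 - 4]])
    J = np.array([[2*x + y**3, 3*x*y**2],
                  [6*x*y, 3*x**2 - 3*y**2]])
    xn = np.array([[x], [y]])
    xn_1 = xn - inv(J) @ F
    # [0, 0] yields a scalar; xn_1[0] would yield a 1-element sub-array,
    # which is what breaks the next iteration in the original code.
    x, y = xn_1[0, 0], xn_1[1, 0]

print(x, y)
```

For these starting values the iteration settles near a root around `(-0.9, -2.1)`; the important part is that `x` and `y` stay plain floats on every pass.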
2017/12/05
[ "https://Stackoverflow.com/questions/47659731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5507715/" ]
In a comment, you said, > > Yes that is the structure however const items, references, and class items can't be initialized in the body of constructors or in a non-constructor method. > > > A [delegating constructor](http://www.stroustrup.com/C++11FAQ.html#delegating-ctor) can be used to initialize reference member variables. Expanding your example code a bit, I can see something like: ``` class Obj { static const AType defaultAType; const AType &aRef; static const BType defaultBType; const BType &bRef; public: // Delegate with default values for both references Obj() : Obj(defaultAType, defaultBType) {} // Delegate with default value for the B reference Obj(AType &aType) : Obj(aType, defaultBType) {} // Delegate with default value for the A reference Obj(BType &bType) : Obj(defaultAType, bType) {} // A constructor that has all the arguments. Obj(AType& aType, BType& bType) : aRef(aType), bRef(bType) {} }; ```
Since const members can only be initialized in the constructor's member initializer list (they cannot be assigned in the constructor body), one approach is a single constructor that takes a pointer for each possible value plus an integer index indicating which argument was actually supplied, assuming you are considering one variable at a time. Please note to just pass null for the pointers not in consideration; only the pointer selected by the index is dereferenced, and the remaining members fall back to value-initialized defaults. You can extend this to any number of variables. Hope this one helps. ```cpp class Obj { const typea var1; const typeb var2; const typec var3; const typed var4; public: Obj(const typea* p1, const typeb* p2, const typec* p3, const typed* p4, int index) : var1(index == 1 ? *p1 : typea{}), var2(index == 2 ? *p2 : typeb{}), var3(index == 3 ? *p3 : typec{}), var4(index == 4 ? *p4 : typed{}) {} }; ```