| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
18,118,226
|
I am new to Python. I have some predefined XML files, and a script which generates new XML files. I want to write an automated script which compares the XML files and stores the names of the differing XML files in an output file.
Thanks in advance
|
2013/08/08
|
[
"https://Stackoverflow.com/questions/18118226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2286286/"
] |
I think you're looking for the [`filecmp` module](http://docs.python.org/2/library/filecmp.html). You can use it like this:
```
import filecmp

# filecmp.cmp() returns True when the two files are equal
if not filecmp.cmp('f1.xml', 'f2.xml'):
    out_file.write('f1.xml')
```
Replace `f1.xml` and `f2.xml` with your XML files.
|
Building on @Xaranke's answer:
```
import filecmp
out_file = open("diff_xml_names.txt", "w")  # open for writing
# Not sure what format your filenames will come in, but here's one possibility.
filePairs = [('f1a.xml', 'f1b.xml'), ('f2a.xml', 'f2b.xml'), ('f3a.xml', 'f3b.xml')]
for f1, f2 in filePairs:
if not filecmp.cmp(f1, f2):
# Files are not equal
out_file.write(f1+'\n')
out_file.close()
```
| 6,715
|
44,513,308
|
Can I use macros with the PythonOperator? I tried the following, but I was unable to get the macros rendered:
```
dag = DAG(
'temp',
default_args=default_args,
description='temp dag',
schedule_interval=timedelta(days=1))
def temp_def(a, b, **kwargs):
print '{{ds}}'
print '{{execution_date}}'
print 'a=%s, b=%s, kwargs=%s' % (str(a), str(b), str(kwargs))
ds = '{{ ds }}'
mm = '{{ execution_date }}'
t1 = PythonOperator(
task_id='temp_task',
python_callable=temp_def,
op_args=[mm , ds],
provide_context=False,
dag=dag)
```
|
2017/06/13
|
[
"https://Stackoverflow.com/questions/44513308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4116268/"
] |
Macros only get processed for templated fields. To get Jinja to process this field, extend the `PythonOperator` with your own.
```py
class MyPythonOperator(PythonOperator):
template_fields = ('templates_dict','op_args')
```
I added `'templates_dict'` to the `template_fields` because the `PythonOperator` itself has this field templated:
[PythonOperator](https://airflow.incubator.apache.org/_modules/python_operator.html)
Now you should be able to use a macro within that field:
```py
ds = '{{ ds }}'
mm = '{{ execution_date }}'
t1 = MyPythonOperator(
task_id='temp_task',
python_callable=temp_def,
op_args=[mm , ds],
provide_context=False,
dag=dag)
```
|
In my opinion, a more native Airflow way of approaching this would be to use the included `PythonOperator` with the `provide_context=True` parameter, like so.
```py
t1 = PythonOperator(
task_id='temp_task',
python_callable=temp_def,
provide_context=True,
dag=dag)
```
Now you have access to all of the macros, airflow metadata and task parameters in the `kwargs` of your callable
```py
def temp_def(**kwargs):
    print 'ds={}, execution_date={}'.format(str(kwargs['ds']), str(kwargs['execution_date']))
```
If you had some custom defined `params` associated with the task you could access those as well via `kwargs['params']`
| 6,719
|
3,853,136
|
The following code runs fine in my IDE (PyScripter); however, it won't run outside of it. When I go into Computer, then Python26, and double-click the file (a .pyw in this case), it fails to run. I have no idea why it's doing this; can anyone please shed some light?
This is on Windows 7, BTW.
My code:
```
#!/usr/bin/env python
import matplotlib
from mpl_toolkits.mplot3d import axes3d,Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from numpy import arange, sin, pi
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
from matplotlib.ticker import LinearLocator, FixedLocator, FormatStrFormatter
import Tkinter
import sys
class E(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self,parent)
self.parent = parent
self.protocol("WM_DELETE_WINDOW", self.dest)
self.main()
def main(self):
self.fig = plt.figure()
self.fig = plt.figure(figsize=(4,4))
ax = Axes3D(self.fig)
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x = 10 * np.outer(np.cos(u), np.sin(v))
y = 10 * np.outer(np.sin(u), np.sin(v))
z = 10 * np.outer(np.ones(np.size(u)), np.cos(v))
t = ax.plot_surface(x, y, z, rstride=4, cstride=4,color='lightgreen',linewidth=1)
self.frame = Tkinter.Frame(self)
self.frame.pack(padx=15,pady=15)
self.canvas = FigureCanvasTkAgg(self.fig, master=self.frame)
self.canvas.get_tk_widget().pack(side='top', fill='both')
self.canvas._tkcanvas.pack(side='top', fill='both', expand=1)
self.btn = Tkinter.Button(self,text='button',command=self.alt)
self.btn.pack()
def alt (self):
print 9
def dest(self):
self.destroy()
sys.exit()
if __name__ == "__main__":
app = E(None)
app.title('Embedding in TK')
app.mainloop()
```
**EDIT:**
I tried to import the module at the command line and got the following error.
```
Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\site-packages\matplotlib\__init__.py", line 129, in <module>
from rcsetup import defaultParams, validate_backend, validate_toolbar
File "C:\Python26\lib\site-packages\matplotlib\rcsetup.py", line 19, in <module>
from matplotlib.colors import is_color_like
File "C:\Python26\lib\site-packages\matplotlib\colors.py", line 54, in <module>
import matplotlib.cbook as cbook
File "C:\Python26\lib\site-packages\matplotlib\cbook.py", line 168, in <module>
class Scheduler(threading.Thread):
AttributeError: 'module' object has no attribute 'Thread'
>>>
```
**EDIT(2)**
I tried what McSmooth said and got the following output.
```
Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import threading
>>> print threading.__file__
threading.pyc
>>> threading.Thread
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'Thread'
>>>
```
|
2010/10/04
|
[
"https://Stackoverflow.com/questions/3853136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433417/"
] |
Unless you've been messing around with your standard library, it seems that you have a file named `threading.py` somewhere on your Python path that is shadowing the standard one. Try:
```
>>>import threading
>>>print threading.__file__
```
and make sure that it's the one in your Python lib directory (it should be `C:\python26\lib`). If it's not the right file that's getting imported, then you'll have to rename the fake one to something else. If it is the right file, then try:
```
>>>threading.Thread
```
and see if that throws an exception in the REPL.
Update
------
That's weird. On my system, it gives the name of the source file. Either save the following code as a file or run it at the command line to find it.
```
import os.path as op
import sys
files = (op.join(path, 'threading.py') for path in sys.path)
print filter(op.exists, files)
```
|
You most likely need to adjust your [PYTHONPATH](http://docs.python.org/using/cmdline.html#envvar-PYTHONPATH); this is a list of directories Python uses to find modules. See also [How to add to the pythonpath in windows 7?](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7).
| 6,720
|
64,405,461
|
Trying to add Densenet121 functional block to the model.
I need Keras model to be written in this format, not using
```
model=Sequential()
model.add()
```
method
What's wrong the function, build\_img\_encod
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-62-69dd207148e0> in <module>()
----> 1 x = build_img_encod()
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
164 spec.min_ndim is not None or
165 spec.max_ndim is not None):
--> 166 if x.shape.ndims is None:
167 raise ValueError('Input ' + str(input_index) + ' of layer ' +
168 layer_name + ' is incompatible with the layer: '
AttributeError: 'Functional' object has no attribute 'shape'
```
```
def build_img_encod( ):
base_model = DenseNet121(input_shape=(150,150,3),
include_top=False,
weights='imagenet')
for layer in base_model.layers:
layer.trainable = False
flatten = Flatten(name="flatten")(base_model)
img_dense_encoder = Dense(1024, activation='relu',name="img_dense_encoder", kernel_regularizer=regularizers.l2(0.0001))(flatten)
model = keras.models.Model(inputs=base_model, outputs = img_dense_encoder)
return model
```
|
2020/10/17
|
[
"https://Stackoverflow.com/questions/64405461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14309081/"
] |
The reason why you get that error is that you need to provide the input *tensor* of the `base_model` (`base_model.input`), instead of the `base_model` itself.
Replace this line: `model = keras.models.Model(inputs=base_model, outputs = img_dense_encoder)`
with: `model = keras.models.Model(inputs=base_model.input, outputs = img_dense_encoder)`
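Putting it together, a minimal sketch of the corrected function under the question's imports (note that the `Flatten` call also needs the model's output tensor, `base_model.output`, for the same reason):
```
def build_img_encod():
    base_model = DenseNet121(input_shape=(150, 150, 3),
                             include_top=False,
                             weights='imagenet')
    for layer in base_model.layers:
        layer.trainable = False
    # connect the graph tensor-to-tensor, not model-to-layer
    flatten = Flatten(name="flatten")(base_model.output)
    img_dense_encoder = Dense(1024, activation='relu', name="img_dense_encoder",
                              kernel_regularizer=regularizers.l2(0.0001))(flatten)
    return keras.models.Model(inputs=base_model.input, outputs=img_dense_encoder)
```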
|
```
def build_img_encod( ):
dense = DenseNet121(input_shape=(150,150,3),
include_top=False,
weights='imagenet')
for layer in dense.layers:
layer.trainable = False
img_input = Input(shape=(150,150,3))
base_model = dense(img_input)
flatten = Flatten(name="flatten")(base_model)
img_dense_encoder = Dense(1024, activation='relu',name="img_dense_encoder", kernel_regularizer=regularizers.l2(0.0001))(flatten)
model = keras.models.Model(inputs=img_input, outputs = img_dense_encoder)
return model
```
This worked..
| 6,722
|
39,612,416
|
I have a linux machine to which i installed Anaconda.
I am following:
<https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html>
the pip installation part.
To be more specific:
```
which python
```
gives
```
/home/user/anaconda2/bin/python
```
After which i entered:
```
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0-cp27-none-linux_x86_64.whl
```
And after:
```
sudo pip install --upgrade $TF_BINARY_URL
```
However,
while trying:
```
python -c "import tensorflow"
```
I get an import error:
```
ImportError: No module named tensorflow
```
|
2016/09/21
|
[
"https://Stackoverflow.com/questions/39612416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6857504/"
] |
The problem is here:
```
xs foldLeft((0,0,0))(logic _)
```
Never go for the dotless notation unless it's for an operator. This way it works.
```
xs.foldLeft((0,0,0))(logic _)
```
Without a context I believe this is idiomatic enough.
|
I don't get the goal of your code but the problem you described can be solved this way:
```
val to = xs.foldLeft((0,0,0))(logic)
```
| 6,723
|
61,858,954
|
I'm trying to get two attributes at a time from my JSON data and add them as items to my Python list. However, when trying to add those two, `['emailTypeDesc']['createdDate']`, it throws an error. Could someone help with this? Thanks in advance!
json:
```
{
'readOnly': False,
 'senderDetails': {'firstName': 'John', 'lastName': 'Doe', 'emailAddress': 'johndoe@gmail.com', 'emailAddressId': 123456, 'personalId': 123, 'companyName': 'ACME'},
 'clientDetails': {'firstName': 'Jane', 'lastName': 'Doe', 'emailAddress': 'janedoe@gmail.com', 'emailAddressId': 654321, 'personalId': 456, 'companyName': 'Lorem Ipsum'},
'notesSection': {},
'emailList': [{'requestId': 12345667, 'emailId': 9876543211, 'emailType': 3, 'emailTypeDesc': 'Email-In', 'emailTitle': 'SampleTitle 1', 'createdDate': '15-May-2020 11:15:52', 'fromMailList': [{'firstName': 'Jane', 'lastName': 'Doe', 'emailAddress': 'janedoe@gmail.com',}]},
{'requestId': 12345667, 'emailId': 14567775, 'emailType': 3, 'emailTypeDesc': 'Email-Out', 'emailTitle': 'SampleTitle 2', 'createdDate': '16-May-2020 16:15:52', 'fromMailList': [{'firstName': 'Jane', 'lastName': 'Doe', 'emailAddress': 'janedoe@gmail.com',}]},
{'requestId': 12345667, 'emailId': 12345, 'emailType': 3, 'emailTypeDesc': 'Email-In', 'emailTitle': 'SampleTitle 3', 'createdDate': '17-May-2020 20:15:52', 'fromMailList': [{'firstName': 'Jane', 'lastName': 'Doe', 'emailAddress': 'janedoe@gmail.com',}]
 }
 ]
}
```
python:
```
final_list = []
data = json.loads(r.text)
myId = [(data['emailList'][0]['requestId'])]
for each_req in myId:
final_list.append(each_req)
myEmailList = [mails['emailTypeDesc']['createdDate'] for mails in data['emailList']]
for each_requ in myEmailList:
final_list.append(each_requ)
return final_list
```
This error comes up when I run the above code:
```
TypeError: string indices must be integers
```
Desired output for `final_list`:
```
[12345667, 'Email-In', '15-May-2020 11:15:52', 'Email-Out', '16-May-2020 16:15:52', 'Email-In', '17-May-2020 20:15:52']
```
My problem is definitely in this line:
```
myEmailList = [mails['emailTypeDesc']['createdDate'] for mails in data['emailList']]
```
because when I run this without the second attribute `['createdDate']` it would work, but I need both attributes on my `final_list`:
```
myEmailList = [mails['emailTypeDesc'] for mails in data['emailList']]
```
|
2020/05/17
|
[
"https://Stackoverflow.com/questions/61858954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9488179/"
] |
I think you're misunderstanding the syntax. `mails['emailTypeDesc']['createdDate']` is looking for the key `'createdDate'` *inside* the object `mails['emailTypeDesc']`, but in fact they are two items at the same level.
Since `mails['emailTypeDesc']` is a string, not a dictionary, you get the error you have quoted. It seems that you want to add the two items `mails['emailTypeDesc']` and `mails['createdDate']` to your list. I'm not sure if you'd rather join these together into a single string or create a sub-list or something else. Here's a sublist option.
```
myEmailList = [[mails['emailTypeDesc'], mails['createdDate']] for mails in data['emailList']]
```
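If the goal is the flat list shown in the question, a small sketch along the same lines (assuming `data` is the parsed JSON):
```
final_list = [data['emailList'][0]['requestId']]
for mails in data['emailList']:
    final_list.append(mails['emailTypeDesc'])
    final_list.append(mails['createdDate'])
# -> [12345667, 'Email-In', '15-May-2020 11:15:52', 'Email-Out', ...]
```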
|
Strings in JSON must be in double quotes, not single.
Edit: As well as names.
| 6,726
|
74,558,107
|
Is it possible to calculate an expression using Python without entering the Python shell? What I want to achieve is to use Python in the following manner:
```
tail file.txt -n `python 123*456`
```
instead of having to calculate 123\*456 in a separate step.
|
2022/11/24
|
[
"https://Stackoverflow.com/questions/74558107",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4746861/"
] |
I don't understand your question: you say you would like to do something using Python, but in what you show, Python doesn't seem to be needed to achieve it.
Let me show you: what you want to achieve can be done as follows:
```
tail -f file.txt -n $((123*456))
```
The `$((...))` notation is capable of performing integer calculations, as you can imagine.
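For example, the result can also be captured in a shell variable first:
```
n=$((123*456))
tail file.txt -n "$n"
```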
Is this what you are looking for, or are you really forced to use Python, and if so, why do you think that?
|
You can try the `-c` option. For e.g, `tail test_log.txt -n `python -c "print(1 + 2)"``
| 6,727
|
58,165,158
|
I went through a tutorial to show me how to install a Python package that I developed to PyPI, so it could be installed by pip. Everything seemed to work great, but after installing with pip, I get an error trying to use the library. Here is a transcript:
```
C:\WINDOWS\system32> pip install pinyin_utils
Collecting pinyin_utils
Using cached https://files.pythonhosted.org/packages/eb/26/95b2d80eae03dfe7698e9e5a83b49f75e769895a4e0bb8048a42c18c7109/pinyin_utils-0.1.0-py3-none-any.whl
Installing collected packages: pinyin-utils
Successfully installed pinyin-utils-0.1.0
C:\WINDOWS\system32> python
Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 19:29:22) [MSC v.1916 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from pinyin_utils import convertPinyin
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pinyin_utils'
>>>
```
On Windows 10, Python 3.7.4
|
2019/09/30
|
[
"https://Stackoverflow.com/questions/58165158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738579/"
] |
You could take [`Math.max`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/max) and spread the items.
```js
let array = [[1,2], [10, 15], [30, 40], [-1, -50]],
max = array.map(a => Math.max(...a));
console.log(max);
```
|
Empty numeric values wouldn't help you either with your logic, because you are setting the wrong initial values for comparison. Here is the same as yours, with a minor fix:
```
let arr = [[1,2], [10, 15], [30, 40], [-1, -50]];
let newArr = [arr[0][0], arr[1][0], arr[2][0], arr[3][0]];
for (let x = 0; x < arr.length; x++) {
for (let y = 0; y < arr[x].length; y++) {
if (newArr[x] < arr[x][y]) {
newArr[x] = arr[x][y];
}
}
}
```
| 6,728
|
59,307,815
|
I am currently developing a Python project where I am concerned about performance, because my CPU is always using 90-98% of its computing capacity.
So I was thinking about what I could change in my code to make it faster, and noticed that I have a string variable which always receives one of two values:
```
state = "ok"
state = "notOk"
```
Since it only has 2 values, I thought about changing it to a boolean like:
```
isStateOk = True
isStateOk = False
```
Does it make any sense to do that? Is there a big difference, or any difference at all, in the speed of assigning a string to a variable versus assigning a boolean to a variable?
I should also mention that I am using this variable in about 30 **if comparisons** in my code, so maybe the **speed of comparison** would be a benefit?
```
if state == "ok":  # Original comparison
if isStateOk:      # New comparison
```
|
2019/12/12
|
[
"https://Stackoverflow.com/questions/59307815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5619301/"
] |
To parse HTML and XML documents (page source) and get elements with locators, you can use [beautifulsoup](/questions/tagged/beautifulsoup "show questions tagged 'beautifulsoup'"), [how to use it](https://pypi.org/project/beautifulsoup4/).
Regular expressions do not parse HTML documents. You get **NULL** because `Checkpath = re.search(d,src)` is `None`.
Here is an example of how you can get the status without a loop and without parsing the page source.
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
p = wait.until(EC.visibility_of_all_elements_located((By.XPATH, '//*[@id="General"]/fieldset/dl/dd')))
status = "NULL"
r = range(0, len(p))
if 15 in r:
status = p[8].text
elif 14 in r:
status = p[7].text
elif 13 in r:
status = p[6].text
elif 12 in r:
status = p[5].text
```
|
From the code you have posted, the strings you have hardcoded are all the same other than the index of the DD element. Since the Status index is always 7 less than the index of Checkpath, you can just loop through 15 down to 12, do your search, and then the Status index is just that value minus 7. The code is below.
```
src = driver.page_source
loop = [15,14,13,12]
for d in loop:
Checkpath = re.search('//*[@id="General"]/fieldset/dl/dd[' + str(d) + ']',src)
Status = driver.find_element_by_xpath('//*[@id="General"]/fieldset/dl/dd[' + str(d - 7) + ']').text
print(Status)
```
| 6,733
|
8,275,650
|
I am stuck trying to perform this task and while trying I can't help thinking there will be a nicer way to code it than the way I have been trying.
I have a line of text and a keyword. I want to make a new list going down each character in each list. The keyword will just repeat itself until the end of the list. If there are any non-alpha characters the keyword letter will not be used.
For example:
```
Keyword="lemon"
Text="hi there!"
```
would result in
```
('lh', 'ei', ' ', 'mt' , 'oh', 'ne', 'lr', 'ee', '!')
```
Is there a way of telling python to keep repeating over a string in a loop, ie keep repeating over the letters of lemon?
I am new to coding so sorry if this isn't explained well or seems strange!
|
2011/11/26
|
[
"https://Stackoverflow.com/questions/8275650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066430/"
] |
You've got two questions mashed into one. The first is: how do you strip unwanted characters from a string? You can do it a few ways, but regular expression substitution is a nice one.
```
import re
def removeWhitespace( s ):
    return re.sub( r'\s', '', s )
```
The second part of the question is about how to keep looping through the keyword, until the text line is consumed. You can write this as:
```
def characterZip( keyword, textline ):
res = []
textline = removeWhitespace(textline)
textlen = len(textline)
    for i in xrange(textlen):
res.append( '%s%s' % (keyword[i%len(keyword)], textline[i]) )
return res
```
Most pythonistas will look at this and see opportunity for refactoring. The pattern that this code is trying to achieve is, in functional programming terms, a `zip`. The quirk is that in this case you're doing something slightly non-normative with the repeating characters of the keyword; this too has an equivalent, the [cycle](http://docs.python.org/library/itertools.html#itertools.cycle) function in the itertools module.
```
from itertools import cycle, islice, izip
def characterZip( keyword, textline ):
textline = removeWhitespace(textline)
textlen = len(textline)
it = islice( izip(cycle(keyword), textline), textlen )
return [ '%s%s' % val for val in it ]
```
|
I think you could use `enumerate` in that situation:
```
# remove unwanted stuff
l = [ c for c in Text if c.isalpha() ]
for n, k in enumerate(l):
    print n, (Keyword[n % len(Keyword)], k)
```
that gives you:
```
0 ('l', 'h')
1 ('e', 'i')
2 ('m', 't')
3 ('o', 'h')
4 ('n', 'e')
5 ('l', 'r')
6 ('e', 'e')
```
You could use that as the basis for your manipulation.
| 6,735
|
8,912,338
|
I have a struct in GDB and want to run a script which examines this struct. In Python GDB you can easily access the struct via
```
(gdb) python mystruct = gdb.parse_and_eval("mystruct")
```
Now I have this variable called `mystruct`, which is a GDB.Value object, and I can access all the members of the struct by simply using this object as a dictionary (like `mystruct['member']`).
The problem is that my script doesn't know which members a certain struct has. So I wanted to get the keys (or even the values) from this GDB.Value object. But neither `mystruct.values()` nor `mystruct.keys()` works here.
Is there no way to access this information? I think it's highly unlikely that you can't, but I haven't found it anywhere. A `dir(mystruct)` showed me that there is no keys or values function either. I can see all the members by printing `mystruct`, but isn't there a way to get the members in Python?
|
2012/01/18
|
[
"https://Stackoverflow.com/questions/8912338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1154311/"
] |
From GDB [documentation](http://sourceware.org/gdb/onlinedocs/gdb/Values-From-Inferior.html#Values-From-Inferior):
You can get the type of `mystruct` like so:
```
tp = mystruct.type
```
and iterate over the [fields](http://sourceware.org/gdb/onlinedocs/gdb/Types-In-Python.html#Types-In-Python) via `tp.fields()`
No evil workarounds required ;-)
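For example, a short sketch of listing the member names from the GDB prompt (field objects expose a `name` attribute):
```
(gdb) python tp = gdb.parse_and_eval("mystruct").type
(gdb) python print [f.name for f in tp.fields()]
```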
**Update:**
GDB 7.4 has just been released. From the [announcement](http://sourceware.org/ml/gdb-announce/2012/msg00001.html):
>
> Type objects for struct and union types now allow access to
> the fields using standard Python dictionary (mapping) methods.
>
>
>
|
Evil workaround:
```
python print eval("dict(" + str(mystruct)[1:-2] + ")")
```
I don't know if this is generalisable. As a demo, I wrote a minimal example `test.cpp`
```
#include <iostream>
struct mystruct
{
int i;
double x;
} mystruct_1;
int main ()
{
mystruct_1.i = 2;
mystruct_1.x = 1.242;
std::cout << "Blarz";
std::cout << std::endl;
}
```
Now I run `g++ -g test.cpp -o test` as usual and fire up `gdb test`. Here is a example session transcript:
```
(gdb) break main
Breakpoint 1 at 0x400898: file test.cpp, line 11.
(gdb) run
Starting program: ...
Breakpoint 1, main () at test.cpp:11
11 mystruct_1.i = 2;
(gdb) step
12 mystruct_1.x = 1.242;
(gdb) step
13 std::cout << "Blarz";
(gdb) python mystruct = gdb.parse_and_eval("mystruct_1")
(gdb) python print mystruct
{i = 2, x = 1.242}
(gdb) python print eval("dict(" + str(mystruct)[1:-2] + ")")
{'i': 2, 'x': 1.24}
(gdb) python print eval("dict(" + str(mystruct)[1:-2] + ")").keys()
['i', 'x']
```
| 6,738
|
66,079,303
|
I have a `docker-compose.yml` file for a project server. It contains a `mongo` container, an `nginx` service and a panel.
I have to containerize my service and the panel for both test and main environments. First I publish to test, afterwards I publish to main.
After publishing to main, `nginx` forwards requests to test, and that is the issue.
Restarting `nginx` fixes the problem, so how can I fix the issue permanently, without restarting `nginx` after each publish to the main environment?
`docker-compose.yml` :
```yaml
version: '3.7'
services:
  main-mongo:
image: mongo:4.2.3
container_name: main-mongo
volumes:
- mongodb:/data/db
- mongoconfig:/data/configdb
main_service:
image: project
container_name: main_service
restart: on-failure
volumes:
- ./views:/project/views
- ./public:/project/public
depends_on:
- main-mongo
test_service:
image: test_project
container_name: test_service
restart: on-failure
volumes:
- ./test_view/views:/project/views
- ./test_view/public:/project/public
environment:
- ENV=test
depends_on:
- main-mongo
test_panel:
image: test_panel
container_name: test_panel
restart: on-failure
environment:
- ENV=test
depends_on:
- main-mongo
main_panel:
image: panel
container_name: main_panel
restart: on-failure
depends_on:
- main-mongo
python_backend_test:
image: python_backend_test
container_name: python_backend_test
restart: on-failure
volumes:
- /tmp:/tmp
nginx:
image: nginx:1.17
container_name: nginx
restart: on-failure
ports:
- 80:80
volumes:
- ./nginx:/etc/nginx/conf.d/
- ./panel_front:/usr/share/panel
depends_on:
- main_panel
- test_panel
- test_service
- main_service
volumes:
  mongodb:
  mongoconfig:
```
`nginx.conf`:
```
server {
listen 80;
listen [::]:80;
server_name domain.com;
location /api {
proxy_pass http://main_panel:3000;
}
location /account {
root /usr/share/panel/live;
try_files $uri $uri/ /index.html;
}
location / {
proxy_pass http://main_service:3000;
}
}
server {
listen 80;
listen [::]:80;
server_name test.domain.com;
location /netflix {
proxy_pass http://python_backend_test:5000;
}
location /api {
proxy_pass http://test_panel:3000;
}
location /account {
alias /usr/share/panel/test/;
try_files $uri $uri/ /index.html;
}
location / {
proxy_pass http://test_service:3000;
}
}
```
|
2021/02/06
|
[
"https://Stackoverflow.com/questions/66079303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12535379/"
] |
There are four solutions. In all of them, x = 0x4121, y = 0x48ab. There are four options for z (two of its bits are free to be 0 or 1), namely 0x1307, 0x1387, 0x1707, 0x1787.
This can be calculated by treating the variables are arrays of 16 bits and implementing the bitwise operations on them in terms of operations on booleans. That could probably be done in Prolog, or it could be done with a SAT solver or with binary decisions diagrams, I used [this website](http://haroldbot.nl/?q=%28%28y+%7C+x%29+%3D%3D+0x49ab%29+%26%26+%28%28%28y+%3E%3E+2%29+%5E+x%29+%3D%3D+0x530b%29+%26%26+%28%28%28z+%3E%3E+1%29+%26+y%29+%3D%3D+0x0883%29+%26%26+%28%28%28%28x+%3C%3C+2%29+%26+0xFFFF%29+%7C+z%29+%3D%3D+0x1787%29) which uses BDDs internally.
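As a hedged sketch of the SAT-solver route, the same system could be posed with the `z3-solver` Python bindings (an assumption, not used by the answer above; `LShR` is z3's logical right shift, and `<<` on a 16-bit bit-vector already wraps like the `& 0xFFFF` mask):
```
from z3 import BitVec, LShR, Solver, Or, sat

x, y, z = BitVec('x', 16), BitVec('y', 16), BitVec('z', 16)
s = Solver()
s.add((y | x) == 0x49ab,
      (LShR(y, 2) ^ x) == 0x530b,
      (LShR(z, 1) & y) == 0x0883,
      ((x << 2) | z) == 0x1787)
while s.check() == sat:
    m = s.model()
    print([hex(m[v].as_long()) for v in (x, y, z)])
    s.add(Or(x != m[x], y != m[y], z != m[z]))  # block this model, find the next
```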
|
Here is one using SWI-Prolog's [`library(clpb)`](https://eu.swi-prolog.org/pldoc/man?section=clpb) to solve constraints over boolean variables (thanks Markus Triska!).
Very simple translation (I have never used this library but it's rather straightforward):
```none
:- use_module(library(clpb)).
% sat(Expr) sets up a constraint over variables
% labeling(ListOfVariables) fixes 0,1 values for variables (several solutions possible)
% atomic_list_concat/3 builds the bitstrings
find(X,Y,Z) :-
sat(
*([~(X15 + Y15), % Y | X = 0X49ab (0100100110101011)
(X14 + Y14),
~(X13 + Y13),
~(X12 + Y12),
(X11 + Y11),
~(X10 + Y10),
~(X09 + Y09),
(X08 + Y08),
(X07 + Y07),
~(X06 + Y06),
(X05 + Y05),
~(X04 + Y04),
(X03 + Y03),
~(X02 + Y02),
(X01 + Y01),
(X00 + Y00),
~(0 # X15), % (Y >> 2) ^ X = 0X530b (0101001100001011)
(0 # X14),
~(Y15 # X13),
(Y14 # X12),
~(Y13 # X11),
~(Y12 # X10),
(Y11 # X09),
(Y10 # X08),
~(Y09 # X07),
~(Y08 # X06),
~(Y07 # X05),
~(Y06 # X04),
(Y05 # X03),
~(Y04 # X02),
(Y03 # X01),
(Y02 # X00),
~(0 * Y15), % (Z >> 1) & Y = 0X0883 (0000100010000011)
~(Z15 * Y14),
~(Z14 * Y13),
~(Z13 * Y12),
(Z12 * Y11),
~(Z11 * Y10),
~(Z10 * Y09),
~(Z09 * Y08),
(Z08 * Y07),
~(Z07 * Y06),
~(Z06 * Y05),
~(Z05 * Y04),
~(Z04 * Y03),
~(Z03 * Y02),
(Z02 * Y01),
(Z01 * Y00),
~(X13 + Z15), % (X << 2) | Z = 0X1787 (0001011110000111)
~(X12 + Z14),
~(X11 + Z13),
(X10 + Z12),
~(X09 + Z11),
(X08 + Z10),
(X07 + Z09),
(X06 + Z08),
(X05 + Z07),
~(X04 + Z06),
~(X03 + Z05),
~(X02 + Z04),
~(X01 + Z03),
(X00 + Z02),
( 0 + Z01),
( 0 + Z00) ])),
labeling([X15,X14,X13,X12,X11,X10,X09,X08,X07,X06,X05,X04,X03,X02,X01,X00,
Y15,Y14,Y13,Y12,Y11,Y10,Y09,Y08,Y07,Y06,Y05,Y04,Y03,Y02,Y01,Y00,
Z15,Z14,Z13,Z12,Z11,Z10,Z09,Z08,Z07,Z06,Z05,Z04,Z03,Z02,Z01,Z00]),
atomic_list_concat([X15,X14,X13,X12,X11,X10,X09,X08,X07,X06,X05,X04,X03,X02,X01,X00],X),
atomic_list_concat([Y15,Y14,Y13,Y12,Y11,Y10,Y09,Y08,Y07,Y06,Y05,Y04,Y03,Y02,Y01,Y00],Y),
atomic_list_concat([Z15,Z14,Z13,Z12,Z11,Z10,Z09,Z08,Z07,Z06,Z05,Z04,Z03,Z02,Z01,Z00],Z).
```
We find several solutions in 0.007 seconds, with the translations to hexadecimal (manual work) added:
```none
?- find(X,Y,Z).
X = '0100000100100001', % 4121
Y = '0100100010101011', % 48AB
Z = '0001001100000111' ; % 1307
X = '0100000100100001', % 4121
Y = '0100100010101011', % 48AB
Z = '0001001110000111' ; % 1387
X = '0100000100100001', % 4121
Y = '0100100010101011', % 48AB
Z = '0001011100000111' ; % 1707
X = '0100000100100001', % 4121
Y = '0100100010101011', % 48AB
Z = '0001011110000111'. % 1787
```
| 6,741
|
41,129,921
|
I want to write a function that takes a string and returns `True` if it is a valid ISO-8601 datetime--precise to microseconds, including a timezone offset--`False` otherwise.
I have found [other](https://stackoverflow.com/questions/969285/how-do-i-translate-a-iso-8601-datetime-string-into-a-python-datetime-object?rq=1) [questions](https://stackoverflow.com/questions/127803/how-to-parse-an-iso-8601-formatted-date-in-python/30696682#30696682) that provide different ways of *parsing* datetime strings, but I want to return `True` in the case of ISO-8601 format only. Parsing doesn't help me unless I can get it to throw an error for formats that don't match ISO-8601.
(I am using the nice [arrow](https://github.com/crsmithdev/arrow) library elsewhere in my code. A solution that uses `arrow` would be welcome.)
---
**EDIT:** It appears that a general solution to "is this string a valid ISO 8601 datetime" does not exist among the common Python datetime packages.
So, to make this question narrower, more concrete and answerable, I will settle for a format string that will validate a datetime string in this form:
```
'2016-12-13T21:20:37.593194+00:00'
```
Currently I am using:
```
format_string = '%Y-%m-%dT%H:%M:%S.%f%z'
datetime.datetime.strptime(my_timestamp, format_string)
```
This gives:
```
ValueError: time data '2016-12-13T21:20:37.593194+00:00' does not match format '%Y-%m-%dT%H:%M:%S.%f%z'
```
The problem seems to lie with the colon in the UTC offset (`+00:00`). If I use an offset without a colon (e.g. `'2016-12-13T21:20:37.593194+0000'`), this parses properly as expected. This is apparently because `datetime`'s `%z` token [does not respect the UTC offset form that has a colon](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior), only the form without, even though [both are valid per the spec](https://en.wikipedia.org/wiki/ISO_8601#Time_offsets_from_UTC).
|
2016/12/13
|
[
"https://Stackoverflow.com/questions/41129921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/332936/"
] |
Here is a crude but functional solution (for the narrower question) using `datetime.strptime()`:
```
import datetime
def is_expected_datetime_format(timestamp):
format_string = '%Y-%m-%dT%H:%M:%S.%f%z'
try:
colon = timestamp[-3]
if not colon == ':':
raise ValueError()
colonless_timestamp = timestamp[:-3] + timestamp[-2:]
datetime.datetime.strptime(colonless_timestamp, format_string)
return True
except ValueError:
return False
```
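As a side note for newer interpreters than the one targeted by the question (an assumption about the environment): on Python 3.7+, `%z` accepts an offset containing a colon, and `datetime.datetime.fromisoformat` parses this exact string directly (along with other ISO-8601 variants, so it is laxer than the single format above), making the colon-stripping workaround unnecessary:
```
import datetime

def is_expected_datetime_format(timestamp):
    try:
        datetime.datetime.fromisoformat(timestamp)  # Python 3.7+
        return True
    except ValueError:
        return False
```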
|
Given the constraints you've put on the problem, you could easily solve it with a regular expression.
```
>>> import re
>>> re.match(r'^\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d{6}[+-]\d\d:\d\d$', '2016-12-13T21:20:37.593194+00:00')
<_sre.SRE_Match object; span=(0, 32), match='2016-12-13T21:20:37.593194+00:00'>
```
If you need to pass *all* variations of ISO 8601 it will be a much more complicated regular expression, but it could still be done. If you also need to validate the numeric ranges, for example verifying that the hour is between 0 and 23, you can put parentheses into the regular expression to create match groups then validate each group.
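A short sketch of that group-validation idea, checking only the hour range as an example:
```
import re

PATTERN = re.compile(r'^\d{4}-\d\d-\d\dT(\d\d):\d\d:\d\d\.\d{6}[+-]\d\d:\d\d$')

def looks_valid(timestamp):
    m = PATTERN.match(timestamp)
    return bool(m) and 0 <= int(m.group(1)) <= 23
```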
| 6,750
|
19,593,456
|
Can the "standard" subprocess pipeline technique (e.g. <http://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline>) be "upgraded" to two pipelines?
```
# How about
p1 = Popen(["cmd1"], stdout=PIPE, stderr=PIPE)
p2 = Popen(["cmd2"], stdin=p1.stdout)
p3 = Popen(["cmd3"], stdin=p1.stderr)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
p1.stderr.close()
#p2.communicate() # or p3.communicate()?
```
OK, it's actually a different use case, but the closest starting point seems to be the pipeline example. By the way, how does p2.communicate() in a "normal" pipeline drive p1? Here's the normal pipeline for reference:
```
# From Python docs
output=`dmesg | grep hda`
# becomes
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
```
I guess I'm ultimately interested in what kinds of "process graphs" (or maybe just trees?) `communicate()` can support, but we'll leave the general case for another day.
**Update**: Here's the baseline functionality. Without communicate(), create 2 threads reading from p1.stdout and p2.stdout. In the main process, inject input via p1.stdin.write(). The question is whether we can drive a 1-source, 2-sink graph using just communicate().
|
2013/10/25
|
[
"https://Stackoverflow.com/questions/19593456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/690620/"
] |
You could use bash's [process substitution](http://en.wikipedia.org/wiki/Process_substitution):
```
from subprocess import check_call
check_call("cmd1 > >(cmd2) 2> >(cmd3)", shell=True, executable="/bin/bash")
```
It redirects `cmd1`'s stdout to `cmd2` and `cmd1`'s stderr to `cmd3`.
If you don't want to use `bash` then the code in your question should work as is e.g.:
```
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE
from textwrap import dedent
# generate some output on stdout/stderr
source = Popen([sys.executable, "-c", dedent("""
from __future__ import print_function
import sys
from itertools import cycle
from string import ascii_lowercase
for i, c in enumerate(cycle(ascii_lowercase)):
print(c)
print(i, file=sys.stderr)
""")], stdout=PIPE, stderr=PIPE)
# convert input to upper case
sink = Popen([sys.executable, "-c", dedent("""
import sys
for line in sys.stdin:
sys.stdout.write(line.upper())
""")], stdin=source.stdout)
source.stdout.close() # allow source to receive SIGPIPE if sink exits
# square input
sink_stderr = Popen([sys.executable, "-c", dedent("""
import sys
for line in sys.stdin:
print(int(line)**2)
""")], stdin=source.stderr)
source.stderr.close() # allow source to receive SIGPIPE if sink_stderr exits
sink.communicate()
sink_stderr.communicate()
source.wait()
```
|
The solution here is to create a couple of background threads which read the output from one process and then write it into the inputs of several processes:
```
targets = [...] # list of processes as returned by Popen()
while True:
    line = p1.stdout.readline()
    if not line:  # readline() returns an empty string at EOF, not None
        break
    for p in targets:
        p.stdin.write(line)
```
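For completeness, a minimal sketch of the threaded fan-out described above (assuming `p1`, `p2` and `p3` are the `Popen` objects from the question, with `stdin=PIPE` set on the sinks):
```
import threading

def pump(src, sinks):
    # copy lines from src until EOF, fanning them out to every sink
    for line in iter(src.readline, b''):
        for p in sinks:
            p.stdin.write(line)
    for p in sinks:
        p.stdin.close()

t_out = threading.Thread(target=pump, args=(p1.stdout, [p2]))
t_err = threading.Thread(target=pump, args=(p1.stderr, [p3]))
t_out.start(); t_err.start()
t_out.join(); t_err.join()
```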
| 6,758
|
56,293,981
|
I'm writing a Python script which has to internally create an output path from the input path. However, I am facing issues creating the path in a way that works irrespective of OS.
I have tried to use os.path.join, and it has its own limitations.
Apart from that, I think simple string concatenation is not the way to go.
pathlib could be an option, but I am not allowed to use it.
```py
import os
inputpath = r"C:\projects\django\hereisinput"
lastSlash = inputpath.rfind("\\")
# This won't work, as os.path.join resets at an absolute (leading-slash) component
outputDir = os.path.join(inputpath[:lastSlash], r"\internal\morelevel\outputpath")
# OR
outDir = inputpath[:lastSlash] + r"\internal\morelevel\outputpath"
```
Expected output path :
C:\projects\django\internal\morelevel\outputpath
Also, the above code doesn't do it OS Specific where in Linux, the slash will be different.
Is `os.sep` an option?
|
2019/05/24
|
[
"https://Stackoverflow.com/questions/56293981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5729921/"
] |
From the documentation `os.path.join` can join "**one or more** path components...". So you could split `"\internal\morelevel\outputpath"` up into each of its components and pass all of them to your `os.path.join` function instead. That way you don't need to "hard-code" the separator between the path components. For example:
```
paths = ("internal", "morelevel", "outputpath")
outputDir = os.path.join(left[:lastSlash], *paths)
```
Remember that the backslash (`\`) is a special character in Python, so your strings containing single backslashes wouldn't work as you expect them to! You need to escape them with another `\` in front.
This part of your code `lastSlash = left.rfind("\\")` might also not work on any operating system. You could rather use something like [`os.path.split`](https://docs.python.org/3/library/os.path.html#os.path.split) to get the last part of the path that you need. For example, `_, lastSlash = os.path.split(left)`.
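Putting both ideas together, a small sketch that builds the expected output path (assuming the question's `inputpath`):
```
import os

inputpath = r"C:\projects\django\hereisinput"
parent = os.path.dirname(inputpath)  # C:\projects\django
outputDir = os.path.join(parent, "internal", "morelevel", "outputpath")
# On Windows this gives C:\projects\django\internal\morelevel\outputpath
```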
|
Assuming your original path is "C:\projects\django\hereisinput" and the other part of the path is "internal\morelevel\outputpath" (notice this is a relative path, not absolute), you could always move your primary path back one folder (or more) and then append the second part. Do note that your first path needs to contain only folders and can be absolute or relative, while your second path must always be relative for this hack to work:
```
path_1 = r"C:\projects\django\hereisinput"
path_2 = r"internal\morelevel\outputpath"
path_1_one_folder_down = os.path.join(path_1, os.path.pardir)
final_path = os.path.join(path_1_one_folder_down, path_2)
'C:\\projects\\django\\hereisinput\\..\\internal\\morelevel\\outputpath'
```
| 6,759
|
41,247,105
|
I am trying to run this script in parallel, with at most 4 values of `i` running at once. `runspr.py` is itself parallel, and that's fine. What I am trying to do is run only 4 iterations of the `i` loop at any one time.
In my present code, it will run everything at once.
```
#!/bin/bash
for i in *
do
if [[ -d $i ]]; then
echo "$i is dir"
cd $i
python3 ~/bin/runspr.py SCF &
cd ..
else
echo "$i not dir"
fi
done
```
I have followed <https://www.biostars.org/p/63816/> and <https://unix.stackexchange.com/questions/35416/four-tasks-in-parallel-how-do-i-do-that>
but have been unable to implement the code in parallel.
|
2016/12/20
|
[
"https://Stackoverflow.com/questions/41247105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005559/"
] |
You don't need to use `for` loop. You can use [`gnu parallel`](https://www.gnu.org/software/parallel/parallel_tutorial.html) like this with `find`:
```
find . -mindepth 1 -maxdepth 1 -type d -print0 |
parallel -0 --jobs 4 'cd {}; python3 ~/bin/runspr.py SCF'
```
|
Another possible solution is:
```
find . -mindepth 1 -maxdepth 1 -type d -print0 |
xargs -I {} -P 4 sh -c 'cd {}; python3 ~/bin/runspr.py SCF'
```
| 6,760
|
58,478,181
|
I'm looking at breaking the Enigma cipher with Python, and need to generate plugboard combinations. What I need is a function which takes a `length` parameter and returns a generator of all possible combinations, each as a list.
Example code:
```py
for comb in func(2):
print(comb)
```
```py
# Example output
['AB', 'CD']
['CD', 'EF']
...
```
Does anyone know of a library that provides such a generator, or how to go about making one?
EDIT: more detail about enigma
Please see [here](https://en.wikipedia.org/wiki/Enigma_machine#Plugboard) for detail about the design of the plugboard
Also, the output of the generator must be concatenable to this format *without* running the whole generator:
`'AB FO ZP GI HY KS JW MQ XE ...'`; the number of pairs would be the `length` parameter of the function.
|
2019/10/20
|
[
"https://Stackoverflow.com/questions/58478181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
I think what you're looking for is [itertools.combinations](https://docs.python.org/3/library/itertools.html#itertools.combinations)
```
>>> from itertools import combinations
>>> list(combinations('abcd', 2))
[('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
>>> [''.join(comb) for comb in combinations('abcd', 2)]
['ab', 'ac', 'ad', 'bc', 'bd', 'cd']
```
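Note that `combinations` alone yields letter pairs, not plugboard settings: on a real plugboard the pairs must be disjoint (no letter used twice). A hedged recursive sketch building on the same idea, enumerating every choice of `n` disjoint pairs (hypothetical helper name):
```
def plug_settings(letters, n):
    # yield every list of n disjoint pairs drawn from letters
    if n == 0:
        yield []
        return
    if len(letters) < 2 * n:
        return
    first, rest = letters[0], letters[1:]
    # either leave `first` unplugged...
    for tail in plug_settings(rest, n):
        yield tail
    # ...or pair it with some later letter
    for i, other in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in plug_settings(remaining, n - 1):
            yield [first + other] + tail
```
`' '.join(next(plug_settings('ABCDEFGHIJKLMNOPQRSTUVWXYZ', 10)))` then gives a string in the `'AB FO ZP ...'` format without exhausting the generator.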
|
Is this something close to what you're after? Let me know and I can revise.
The `plugboard()` function returns (`yield`) a `generator` object which can be accessed either by an iterable loop, or by calling the `next()` function.
```
from itertools import product
def plugboard(chars, length):
for comb in product(chars, repeat=length):
yield ''.join(comb)
```
### A call such as this will create `combs` as a `generator` object:
```
combs = plugboard('abcde', 2)
type(combs)
>>> generator
```
### To access the values you can use either:
```
next(combs)
>>> 'aa'
```
### Or:
```
for i in combs: print(i)
ab
ac
ad
ae
ba
bb
bc
...
```
| 6,761
|
63,810,966
|
I am trying to remove outliers from a Python list, but it removes only the first one (190000) and not the second (200000). What is the problem?
```
import statistics
dataset = [25000, 30000, 52000, 28000, 150000, 190000, 200000]
def detect_outlier(data_1):
threshold = 1
mean_1 = statistics.mean(data_1)
std_1 = statistics.stdev(data_1)
#print(std_1)
for y in data_1:
z_score = (y - mean_1)/std_1
print(z_score)
if abs(z_score) > threshold:
dataset.remove(y)
return dataset
dataset = detect_outlier(dataset)
print(dataset)
```
Output:
```
[25000, 30000, 52000, 28000, 150000, 200000]
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63810966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9669017/"
] |
It is because you are mutating the list you are iterating over. `dataset` and `data_1` refer to the same list object, and when you remove an element during iteration, Python's for loop skips the element that slides into the removed slot. You must not modify a list while iterating over it.
In short, call the method like this (this passes a copy of dataset's elements as a new list, rather than the dataset itself):
```
dataset = detect_outlier(dataset[:])
```
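A tiny demonstration of the skipping effect with made-up values:
```
>>> data = [1, 2, 3, 4]
>>> for y in data:
...     data.remove(y)  # removal shifts the list, so iteration skips items
...
>>> data
[2, 4]
```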
|
```
import statistics
def detect_outlier(data_1):
threshold = 1
mean_1 = statistics.mean(data_1)
std_1 = statistics.stdev(data_1)
result_dataset = [y for y in data_1 if abs((y - mean_1)/std_1)<=threshold ]
return result_dataset
if __name__=="__main__":
dataset = [25000, 30000, 52000, 28000, 150000, 190000, 200000]
result_dataset = detect_outlier(dataset)
print(result_dataset)
```
| 6,762
|
61,660,067
|
This is the code I used:
```
df = None
from pyspark.sql.functions import lit
for category in file_list_filtered:
data_files = os.listdir('HMP_Dataset/'+category)
for data_file in data_files:
print(data_file)
temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/'+category+'/'+data_file, schema = schema)
temp_df = temp_df.withColumn('class', lit(category))
temp_df = temp_df.withColumn('source', lit(data_file))
if df is None:
df = temp_df
else:
df = df.union(temp_df)
```
and i got this error:
```
NameError Traceback (most recent call last)
<ipython-input-4-4296b4e97942> in <module>
9 for data_file in data_files:
10 print(data_file)
---> 11 temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/'+category+'/'+data_file, schema = schema)
12 temp_df = temp_df.withColumn('class', lit(category))
13 temp_df = temp_df.withColumn('source', lit(data_file))
NameError: name 'spark' is not defined
```
How can I solve it?
|
2020/05/07
|
[
"https://Stackoverflow.com/questions/61660067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13491209/"
] |
You can try *regular expressions* in order to parse. First, let's extract a model: since commands are in format
```
Start (zero or more Modifiers)
```
E.g.
```
%today-5day+3hour%
```
where `today` is `Start` and `-5day` and `+3hour` are `Modifiers`, we want two *models*:
```
using System.Linq;
using System.Text.RegularExpressions;
...
//TODO: add more starting points here
private static Dictionary<string, Func<DateTime>> s_Starts =
new Dictionary<string, Func<DateTime>>(StringComparer.OrdinalIgnoreCase) {
{ "now", () => DateTime.Now},
{ "today", () => DateTime.Today},
{ "yearend", () => new DateTime(DateTime.Today.Year, 12, 31) },
};
//TODO: add more modifiers here
private static Dictionary<string, Func<DateTime, int, DateTime>> s_Modifiers =
new Dictionary<string, Func<DateTime, int, DateTime>>(StringComparer.OrdinalIgnoreCase) {
{ "month", (source, x) => source.AddMonths(x)},
{ "week", (source, x) => source.AddDays(x * 7)},
{ "day", (source, x) => source.AddDays(x)},
{ "hour", (source, x) => source.AddHours(x)},
{ "min", (source, x) => source.AddMinutes(x)},
{ "sec", (source, x) => source.AddSeconds(x)},
};
```
Having a model (two dictionaries above) we can implement `MyParse` routine:
```
private static DateTime MyParse(string value) {
if (null == value)
throw new ArgumentNullException(nameof(value));
var match = Regex.Match(value,
@"^%(?<start>[a-zA-Z]+)(?<modify>\s*[+-]\s*[0-9]+\s*[a-zA-Z]+)*%$");
if (!match.Success)
throw new FormatException("Invalid format");
string start = match.Groups["start"].Value;
DateTime result = s_Starts.TryGetValue(start, out var startFunc)
? startFunc()
: throw new FormatException($"Start Date(Time) {start} is unknown.");
var adds = Regex
.Matches(match.Groups["modify"].Value, @"([+\-])\s*([0-9]+)\s*([a-zA-Z]+)")
.Cast<Match>()
.Select(m => (kind : m.Groups[3].Value,
amount : int.Parse(m.Groups[1].Value + m.Groups[2].Value)));
foreach (var (kind, amount) in adds)
if (s_Modifiers.TryGetValue(kind, out var func))
result = func(result, amount);
else
throw new FormatException($"Modification {kind} is unknown.");
return result;
}
```
**Demo:**
```
string[] tests = new string[] {
"%today%",
"%today-5day%",
"%today-1sec%",
"%today+1month%",
"%now+15min%",
"%now-30sec%",
"%now+1week%",
};
string report = string.Join(Environment.NewLine, tests
.Select(test => $"{test,-15} :: {MyParse(test):dd MM yyyy HH:mm:ss}")
);
Console.Write(report);
```
**Outcome:**
```
%today% :: 07 05 2020 00:00:00
%today-5day% :: 02 05 2020 00:00:00
%today-1sec% :: 06 05 2020 23:59:59
%today+1month% :: 07 06 2020 00:00:00
%now+15min% :: 07 05 2020 18:34:55
%now-30sec% :: 07 05 2020 18:19:25
%now+1week% :: 14 05 2020 18:19:55
```
|
This is a small example of parsing a string that has today and day in it (using the example of %today-5day%).
Note: this will only work if the value passed in for the added days is between 0-9; to handle larger numbers you will have to loop over the characters of the string.
You can follow this code block to parse other results as well.
```
switch (date)
{
case var a when a.Contains("day") && a.Contains("today"):
return DateTime.Today.AddDays(char.GetNumericValue(date[date.Length - 5]));
default:
return DateTime.Today;
}
```
| 6,763
|
46,574,860
|
I am trying to get percentage frequencies in PySpark. I did this in Python as follows:
```
Companies = df['Company'].value_counts(normalize = True)
```
Getting the frequencies is fairly straightforward:
```
# Dates in descending order of complaint frequency
df.createOrReplaceTempView('Comp')
CompDF = spark.sql("SELECT Company, count(*) as cnt \
FROM Comp \
GROUP BY Company \
ORDER BY cnt DESC")
CompDF.show()
```
```
+--------------------+----+
| Company| cnt|
+--------------------+----+
|BANK OF AMERICA, ...|1387|
| EQUIFAX, INC.|1285|
|WELLS FARGO & COM...|1119|
|Experian Informat...|1115|
|TRANSUNION INTERM...|1001|
|JPMORGAN CHASE & CO.| 905|
| CITIBANK, N.A.| 772|
|OCWEN LOAN SERVIC...| 481|
```
How do I get to percent frequencies from here? I tried a bunch of things with not much luck.
Any help would be appreciated.
|
2017/10/04
|
[
"https://Stackoverflow.com/questions/46574860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8722941/"
] |
As Suresh implies in the comments, assuming that `total_count` is the number of rows in dataframe `Companies`, you can use `withColumn` to add a new column named `percentages` in `CompDF`:
```
total_count = Companies.count()
df = CompDF.withColumn('percentage', CompDF.cnt/float(total_count))
```
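An alternative sketch that avoids the separate count by using a window over the whole frame (an assumption about the setup, with `CompDF` as in the question):
```
from pyspark.sql import functions as F
from pyspark.sql.window import Window

CompDF = CompDF.withColumn(
    'percentage', F.col('cnt') / F.sum('cnt').over(Window.partitionBy()))
```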
|
Maybe modifying the SQL query will get you the result you want.
```
"SELECT Company,cnt/(SELECT SUM(cnt) from (SELECT Company, count(*) as cnt
FROM Comp GROUP BY Company ORDER BY cnt DESC) temp_tab) sum_freq from
(SELECT Company, count(*) as cnt FROM Comp GROUP BY Company ORDER BY cnt
DESC)"
```
| 6,764
|
67,205,402
|
I have a file named (Pro\_data.sh) which contains these commands:
```
python preprocess.py dex
python preprocess.py lex
python process.py
```
I do not know how I can run the file in the Python terminal (PyCharm, for example).
I can run each command alone, but I want to know the right way to run the .sh file to save execution time.
|
2021/04/22
|
[
"https://Stackoverflow.com/questions/67205402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10585944/"
] |
Is this what you're looking for?
main.py
```
import os
os.system('sh x.sh')
```
x.sh
```
python test2.py
```
test2.py
```
print('up')
```
output from main.py
```
up
```
|
.sh is a macOS/Linux shell script. For Windows: create one with the name Pro\_data.bat, put those three commands in it, and run Pro\_data.bat.
**Pro\_data.bat**
```
python preprocess.py dex
python preprocess.py lex
python process.py
```
| 6,765
|
4,743,016
|
I am looking for a Python GUI builder IDE for wxPython, similar to Boa Constructor.
Any suggestions?
|
2011/01/20
|
[
"https://Stackoverflow.com/questions/4743016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348081/"
] |
Well, there is [wxGlade](http://wxglade.sourceforge.net/), [DialogBlocks](http://www.dialogblocks.com/) and [wxDesigner](http://www.wxdesigner-software.de/), with both DialogBlocks and wxDesigner being commercial tools. There also used to be a open-source editor called wxFormBuilder, but the site hosting it seems down right now.
|
Boa Constructor is working here... it works with Windows 10:
<https://bitbucket.org/cwt/boa-constructor/overview>
| 6,766
|
68,640,517
|
I am trying to run a simple python script within a docker run command scheduled with Airflow.
I have followed the instructions here [Airflow init](https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html).
My `.env` file:
```
AIRFLOW_UID=1000
AIRFLOW_GID=0
```
And the `docker-compose.yaml` is based on the default one [docker-compose.yaml](https://airflow.apache.org/docs/apache-airflow/2.1.2/docker-compose.yaml). I had to add `- /var/run/docker.sock:/var/run/docker.sock` as an additional volume to run docker inside of docker.
My dag is configured as followed:
```py
""" this is an example dag """
from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago
from docker.types import Mount
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['info@foo.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 10,
'retry_delay': timedelta(minutes=5),
}
with DAG(
'msg_europe_etl',
default_args=default_args,
description='Process MSG_EUROPE ETL',
schedule_interval=timedelta(minutes=15),
start_date=days_ago(0),
tags=['satellite_data'],
) as dag:
download_and_store = DockerOperator(
task_id='download_and_store',
image='satellite_image:latest',
auto_remove=True,
api_version='1.41',
mounts=[Mount(source='/home/archive_1/archive/satellite_data',
target='/app/data'),
Mount(source='/home/dlassahn/projects/forecast-system/meteoIntelligence-satellite',
target='/app')],
command="python3 src/scripts.py download_satellite_images "
"{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
"'msg_europe' ",
)
download_and_store
```
The Airflow log:
```py
[2021-08-03 17:23:58,691] {docker.py:231} INFO - Starting docker container from image satellite_image:latest
[2021-08-03 17:23:58,702] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 258, in _run_image
tty=self.tty,
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp037k87u6")
```
Trying to set `mount_tmp_dir=False` yields a DAG ImportError because of the unknown keyword argument `mount_tmp_dir`. (This might be an issue with the documentation.)
Nevertheless, I do not know how to configure the tmp directory correctly.
My Airflow Version: 2.1.2
|
2021/08/03
|
[
"https://Stackoverflow.com/questions/68640517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4390189/"
] |
There was a bug in Docker Provider 2.0.0 which prevented the Docker Operator from running with a Docker-in-Docker solution.
You need to upgrade to the latest Docker Provider, 2.1.0:
<https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html#id1>
You can do it by extending the image as described in <https://airflow.apache.org/docs/docker-stack/build.html#extending-the-image> with - for example - this docker file:
```
FROM apache/airflow
RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
```
The operator will work out-of-the-box in this case in "fallback" mode (with a warning message), but you can also disable the mount that causes the problem. More explanation is in the docs: <https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html>
>
> By default, a temporary directory is created on the host and mounted
> into a container to allow storing files that together exceed the
> default disk size of 10GB in a container. In this case The path to the
> mounted directory can be accessed via the environment variable
> AIRFLOW\_TMP\_DIR.
>
>
> If the volume cannot be mounted, warning is printed and an attempt is
> made to execute the docker command without the temporary folder
> mounted. This is to make it works by default with remote docker engine
> or when you run docker-in-docker solution and temporary directory is
> not shared with the docker engine. Warning is printed in logs in this
> case.
>
>
> If you know you run DockerOperator with remote engine or via
> docker-in-docker you should set mount\_tmp\_dir parameter to False. In
> this case, you can still use mounts parameter to mount already
> existing named volumes in your Docker Engine to achieve similar
> capability where you can store files exceeding default disk size of
> the container,
>
>
>
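For illustration, a minimal sketch of how the operator from the question might look after the upgrade, with the temporary-directory mount disabled (the `task_id` is assumed here, since the question fragment omits it):

```
from airflow.providers.docker.operators.docker import DockerOperator

download_and_store = DockerOperator(
    task_id='download_satellite_images',  # assumed name, not shown in the question
    image='satellite_image:latest',
    command="python3 src/scripts.py download_satellite_images "
            "{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
            "'msg_europe' ",
    mount_tmp_dir=False,  # available from provider 2.1.0; skips the failing /tmp bind mount
)
```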
|
I had the same issue, and all "recommended" ways of solving it here, including setting up the mount\_tmp\_dir parameter as described [here](https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html), just led to other errors. The one solution that helped me was wrapping the code invoked by Docker with a VPN (actually this hack was taken from another Docker-powered DAG that used a VPN and worked well).
So the final solution looks like:
```
#!/bin/bash
connect_to_vpn.sh &
sleep 10
python3 my_func.py
sleep 10
stop_vpn.sh
wait -n
exit $?
```
To connect to the VPN I used openconnect. The tool can be installed with apt install and supports the AnyConnect protocol (that was my crucial requirement).
| 6,767
|
5,532,902
|
I want to use recursion to reverse a string in Python, so it displays the characters backwards (i.e. "Hello" will become "olleh"/"o l l e h").
I wrote one that does it iteratively:
```
def Reverse( s ):
result = ""
n = 0
start = 0
while ( s[n:] != "" ):
while ( s[n:] != "" and s[n] != ' ' ):
n = n + 1
result = s[ start: n ] + " " + result
start = n
return result
```
But how exactly do I do this recursively? I am confused on this part, especially because I don't work with python and recursion much.
Any help would be appreciated.
|
2011/04/03
|
[
"https://Stackoverflow.com/questions/5532902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/615585/"
] |
If this isn't just a homework question and you're actually trying to reverse a string for some greater goal, just do `s[::-1]`.
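If the point is to practice recursion, here is a minimal character-by-character sketch of the recursive idea:

```
def reverse(s):
    # Base case: the empty string (or a single char) is its own reverse
    if len(s) <= 1:
        return s
    # Recursive case: reverse the rest, then append the first character
    return reverse(s[1:]) + s[0]

print(reverse("Hello"))  # olleh
```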
|
I know it's too late to answer the original question, and there are multiple better ways which are answered here already. My answer is for documentation purposes, in case someone is trying to implement tail recursion for string reversal.
```
def tail_rev(in_string,rev_string):
if in_string=='':
return rev_string
else:
rev_string+=in_string[-1]
return tail_rev(in_string[:-1],rev_string)
in_string=input("Enter String: ")
rev_string=tail_rev(in_string,'')
print(f"Reverse of {in_string} is {rev_string}")
```
| 6,768
|
69,422,541
|
I am using the Python code below to attempt to create a graph that fills the entire figure. The problem is I need the graph portion to stretch to fill the entire figure: there should not be any of the green showing, and the graphed data should start at one side and go all the way to the other. I have found similar examples online but nothing that exactly solves my problem, and I am a matplotlib noob. Any help is greatly appreciated.
```
import numpy as np
import matplotlib.pyplot as plt
data = np.fromfile("rawdata.bin", dtype='<u2')
fig, ax = plt.subplots()
fig.patch.set_facecolor('xkcd:mint green')
ax.axes.xaxis.set_ticks([])
ax.axes.yaxis.set_ticks([])
ax.plot(data)
fig.tight_layout()
plt.show()
```
[](https://i.stack.imgur.com/dWy7D.png)
|
2021/10/03
|
[
"https://Stackoverflow.com/questions/69422541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2231017/"
] |
Just use `IsNullOrWhiteSpace`; it does what `IsNullOrEmpty` does:
>
> Return `true` if the value parameter is null or Empty, or if value consists exclusively of white-space characters.
> *-- documentation for IsNullOrWhiteSpace*
>
>
>
And it is mapped by EF Core to an SQL of
```
@value IS NULL OR LTRIM(RTRIM(@value)) = N''
```
Note that in EF Core 6 this changed to `@value IS NULL OR @value = N''` - same deal; SQL Server ignores trailing whitespace in string comparisons.
This means your `IsSomething` method is effectively `!IsNullOrWhiteSpace`, which EF will translate if you use it.
|
You cannot translate this method to SQL Server. However, you can use a method that returns Expression. For example:
```
public static Expression<Func<Student, bool>> IsSomething()
{
return (s) => string.IsNullOrEmpty(s.FirstName) || string.IsNullOrWhiteSpace(s.FirstName);
}
```
Otherwise, you would usually have to call AsEnumerable before using such methods:
```
dbset.AsEnumerable().Where(i => i.Title.IsSomething());
```
But that will load all entities from the database.
| 6,778
|
15,034,105
|
Let's say I have a list of lists
```
[ ['B','2'] , ['o','0'], ['y']]
```
and I want to combine the lists into something like this without using itertools:
```
["Boy","B0y","2oy","20y"]
```
I can't use itertools because I have to use Python 2.5.
|
2013/02/22
|
[
"https://Stackoverflow.com/questions/15034105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1968057/"
] |
[`itertools.product()`](http://docs.python.org/2/library/itertools.html#itertools.product) does what you want.
```
>>> [''.join(x) for x in itertools.product(*[['B', '2'], ['o', '0'], ['y']])]
['Boy', 'B0y', '2oy', '20y']
```
|
If you don't want to use itertools, this list comprehension produces your output:
```
>>> LoL=[['B','2'], ['o','0'], ['y']]
>>> [a+b+c for a in LoL[0] for b in LoL[1] for c in LoL[2]]
['Boy', 'B0y', '2oy', '20y']
```
This is a more compact version of this:
```
LoL=[['B','2'], ['o','0'], ['y']]
r=[]
for a in LoL[0]:
for b in LoL[1]:
for c in LoL[2]:
r.append(a+b+c)
print r
```
In either case, you are producing a [cartesian product](http://en.wikipedia.org/wiki/Cartesian_product) which is better and more flexibly done with [itertools.product()](http://docs.python.org/2/library/itertools.html#itertools.product) (unless you are just curious about how to do it...)
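If the number of sub-lists is not fixed at three, a sketch of a general, Python 2.5-compatible version of the same idea, building the combinations up one sub-list at a time:

```
def combine(lol):
    # Start from a single empty prefix and extend it with every option in turn
    results = ['']
    for options in lol:
        results = [prefix + item for prefix in results for item in options]
    return results

print(combine([['B', '2'], ['o', '0'], ['y']]))  # ['Boy', 'B0y', '2oy', '20y']
```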
| 6,779
|
51,115,744
|
How to set path for python 3.7.0?
I tried every possible way but it still shows the error!
>
> Could not install packages due to an EnvironmentError: [WinError 5]
>
>
> Access is denied: 'c:\program files (x86)\python37-32\lib\site-packages\pip-10.0.1.dist-info\entry\_points.txt'
>
>
> Consider using the `--user` option or check the permissions
>
>
>
|
2018/06/30
|
[
"https://Stackoverflow.com/questions/51115744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10014902/"
] |
You can add --user at the end of your command. This worked well in my case!
```
--user
```
My example:
```
python -m pip install --upgrade pip --user
```
|
Just try this in an Administrator cmd:
```
pip install --user numpy
```
| 6,780
|
10,396,640
|
My current plan is to determine which is the first entry in a number of [Tkinter](http://wiki.python.org/moin/TkInter) listboxes highlighted using `.curselection()` and combining all of the resulting tuples into a list, producing this:
```
tupleList = [(), (), ('24', '25', '26', '27'), (), (), (), ()]
```
I'm wondering how to determine the lowest integer. Using `min(tupleList)` returns only `()`, being the lowest entry in the list, but I'm looking for a method that would return 24.
What's the right way to get the lowest integer in any tuple in the list?
|
2012/05/01
|
[
"https://Stackoverflow.com/questions/10396640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1367581/"
] |
```
>>> nums = [(), (), ('24', '25', '26', '27'), (), (), (), ()]
>>> min(int(j) for i in nums for j in i)
24
```
|
```
>>> from itertools import chain
>>> nums = [(), (), ('24', '25', '26', '27'), (), (), (), ()]
>>> min(map(int,chain.from_iterable(nums)))
24
```
| 6,790
|
61,629,693
|
I have a 3D numpy array with integer values, something defined as:
```
import numpy as np
x = np.random.randint(0, 100, (10, 10, 10))
```
Now what I want to do is find the last slice (or alternatively the first slice) along a given axis (say 1) where a particular value occurs. At the moment, I do something like:
```
first=None
last=None
val = 20
for i in range(x.shape[1]):  # x.shape[1] is already an int, no len() needed
    slice = x[:, i, :]
    if len(slice[slice == val]) > 0:
        if first is None:  # "if not first" would misfire when first == 0
            first = i
        last = i
return first, last
```
This seems a bit unpythonic and I wonder if there is some `numpy` magic to do this?
|
2020/05/06
|
[
"https://Stackoverflow.com/questions/61629693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2713740/"
] |
You can probably optimize this to be faster, but here is a vectorized version of what you are searching for:
```
axis = 1
mask = np.where(x==val)[axis]
first, last = np.amin(mask), np.amax(mask)
```
It first finds the element `val` in your array using `np.where` and returns the `min` and `max` of the indices along the desired axis.
|
Per your question, you want to check whether there's any such valid slice and, if so, get the start/first and stop/last indices. In the absence of any valid slice, we must return Nones, which needs an extra check. Also, we can use `masking` to get those indices in an efficient manner, like so -
```
def slice_info(x, val):
n = (x==val).any((0,2))
if n.any():
return n.argmax(), len(n)-n[::-1].argmax()-1
else:
return None,None
```
### Benchmarking
Other proposed solution(s) :
```
# https://stackoverflow.com/a/61629916/ @Ehsan
def where_amin_amax(x, val):
axis = 1
mask = np.where(x==val)[axis]
first, last = np.amin(mask), np.amax(mask)
return first, last
```
Timings -
```
# Same setup as in given sample
In [157]: np.random.seed(0)
...: x = np.random.randint(0, 100, (10, 10, 10))
In [158]: %timeit where_amin_amax(x, val=20)
...: %timeit slice_info(x, val=20)
15.1 µs ± 287 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
9.63 µs ± 43.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# Bigger
In [159]: np.random.seed(0)
...: x = np.random.randint(0, 100, (100, 100, 100))
In [160]: %timeit where_amin_amax(x, val=20)
...: %timeit slice_info(x, val=20)
3.34 ms ± 31.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
691 µs ± 3.69 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
| 6,793
|
44,987,098
|
Is it possible to install [nodejs](/questions/tagged/nodejs "show questions tagged 'nodejs'") packages (/modules) from files like in [ruby](/questions/tagged/ruby "show questions tagged 'ruby'")'s Gemfile as done with `bundle install` or [python](/questions/tagged/python "show questions tagged 'python'")'s requirements file (eg: `requirements.pip`) as done using `pip install -r` commands ?
Say I have a file where three or four package names are listed.
How would I instruct `npm` to install packages whose names are found in that file (and resolve dependencies which it also does)?
|
2017/07/08
|
[
"https://Stackoverflow.com/questions/44987098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Just create your package.json. You can use yarn to manage your packages instead of npm; it's faster.
Inside your package.json you can create a `scripts` section whose entries are run with `npm run`:
```
"scripts": {
  "customBuild": "your sh code/ruby/script, whatever"
}
```
Afterwards you can run `npm run customBuild` in the terminal, for example.
|
You can use `npm init` to create a package.json file, which will store the node packages your application depends on, then use `npm install`, which will install the node packages listed in the package.json file.
| 6,794
|
55,317,792
|
My Sonar branch coverage results are not being imported into SonarQube.
coverage.xml is generated in the Jenkins workspace.
Below are the Jenkins and error details:
WARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage-reports/coverage.xml
I have tried many ways but nothing worked.
```
withSonarQubeEnv('Sonarqube') {
sh "${scannerHome}/bin/sonar-scanner -Dsonar.login=$USERNAME -Dsonar.password=$PASSWORD -Dsonar.projectKey=${params.PROJECT_KEY} -Dsonar.projectName=${params.PROJECT_NAME} -Dsonar.branch=${params.GIT_BRANCH} -Dsonar.projectVersion=${params.PROJECT_VERSION} -Dsonar.sources=. -Dsonar.language=${params.LANGUAGE} -Dsonar.python.pylint=${params.PYLINT_PATH} -Dsonar.python.pylint_config=${params.PYLINT} -Dsonar.python.pylint.reportPath=${params.PYLINT_REPORT} -Dsonar.sourceEncoding=${params.ENCODING} -Dsonar.python.xunit.reportPath=${params.NOSE} -Dsonar.python.coverage.reportPaths=${params.COVERAGE}"
}
```
I expect my coverage results to show up in Sonar.
|
2019/03/23
|
[
"https://Stackoverflow.com/questions/55317792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11248384/"
] |
You are having that error because you are specifying the coverage report path option wrong, and therefore sonar is using the default location `coverage-reports/coverage.xml`.
The correct option is `-Dsonar.python.coverage.reportPath` (in singular).
|
I still have this problem on Azure Pipelines. Tried many ways without success.
WARN: No report was found for sonar.python.coverage.reportPaths using pattern coverage.xml
| 6,795
|
35,857,389
|
Encountered the following problem when trying to use the module scipy.optimize.slsqp.
```
>>> import scipy.optimize.slsqp
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/site-packages/scipy/optimize/__init__.py",
line 233, in <module>
from ._minimize import *
File "/usr/local/lib/python3.5/site-
packages/scipy/optimize/_minimize.py", line 26, in <module>
from ._trustregion_dogleg import _minimize_dogleg
File "/usr/local/lib/python3.5/site-
packages/scipy/optimize/_trustregion_dogleg.py", line 5, in <module>
import scipy.linalg
File "/usr/local/lib/python3.5/site-packages/scipy/linalg/__init__.py",
line 190, in <module>
from ._decomp_update import *
File "scipy/linalg/_decomp_update.pyx", line 1, in init
scipy.linalg._decomp_update (scipy/linalg/_decomp_update.c:39096)
ImportError: /usr/local/lib/python3.5/site-
packages/scipy/linalg/cython_lapack.cpython-35m-x86_64-linux-gnu.so:
undefined symbol: zlacn2
```
I'm using Python 3.5, Scipy 0.17.0, NumPy 1.10.1; the OS is CentOS 5.11. Could anyone shed some light on this? Thank you.
|
2016/03/08
|
[
"https://Stackoverflow.com/questions/35857389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6020400/"
] |
Depending on the scope you use, it would be loaded in an application context, and therefore one time (say, in a singleton class loaded at application startup).
But I wouldn't recommend this approach; I would recommend a properly designed database where you can put your CSV data. This way you would have the database engine to help you organize your data, which gives you scalability and maintainability (although a proper design of your classes, say with a DAO pattern, could give you the same).
Organized data in a database would give you more flexibility to search through your data using ready-made SQL functions.
In order to make my case here are some advantages of a Database system over a file system:
* No redundant data – Redundancy removed by data normalization
* Data Consistency and Integrity – data normalization takes care of it too
* Secure – Each user has a different set of access
* Privacy – Limited access
* Easy access to data
* Easy recovery
* Flexible
* Concurrency - The database engine will allow you to concurrent read the data or even write to it.
I'm not listing the disadvantages since I'm making my case :)
|
You can read from a CSV file to build your arrays, then add the arrays to session scope. The CSV file will only be read by the servlet that processes it; future usage will retrieve it from the session.
| 6,796
|
51,529,408
|
I must be missing something, but I looked around and couldn't find any reference to this issue.
I have very basic code, as seen in the flask-mongoengine documentation.
test.py:
```
from flask import Flask
from flask_mongoengine import MongoEngine
```
When I run `python test.py`, I get:
```
from flask_mongoengine import MongoEngine
ImportError: cannot import name 'MongoEngine'
```
Modules in the virtual environment (requirements.txt):
```
click==6.7
Flask==1.0.2
flask-mongoengine==0.9.5
Flask-WTF==0.14.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
mongoengine==0.15.3
pymongo==3.7.1
six==1.11.0
Werkzeug==0.14.1
WTForms==2.2.1
```
My interpreter is Python 3.6.5
Any help would be appreciated. Thanks.
|
2018/07/26
|
[
"https://Stackoverflow.com/questions/51529408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5112112/"
] |
Since you're using a virtual environment, did you try opening your editor from it?
For example, the command to open the VS Code editor from the command line is "code". Go to your virtual environment via the terminal, activate it, then type "code" at your prompt.
```
terminal:~path/to/virtual-enviroment$ source bin/activate
(virtual-enviroment)terminal:~path/to/virtual-enviroment$ code
```
If that doesn't work: I myself haven't used flask-mongoengine. I was nervous of issues that could come from its abstraction and instead just used MongoEngine directly with Flask.
I'm assuming you're only using this library for connection management, so if you can't solve your issue with flask-mongoengine but are still interested in using MongoEngine, this was my approach.
I would put this in a config file somewhere and import it where appropriate:
```
from flask import Flask
MONGODB_DB = 'DB_NAME'
MONGODB_HOST = '127.0.0.1' # or whatever your db address
MONGODB_PORT = 27017 # or whatever your port
app = Flask(__name__) # you can import app from config and it will keep its configurations
```
then I would connect and disconnect from the database within each HTTP request function like this-
```
from mongoengine import connect
from config import MONGODB_DB, MONGODB_HOST, MONGODB_PORT

# to connect
db = connect(MONGODB_DB, host=MONGODB_HOST, port=MONGODB_PORT)

# to close connection before any returns
db.close()
```
Hope this helps.
|
I had this issue and managed to fix it by deactivating, reinstalling flask-mongoengine and reactivating the venv (all in the Terminal):
```
deactivate
pip install flask-mongoengine
# Not required but good to check it was properly installed
pip freeze
venv\Scripts\activate
flask run
```
| 6,797
|
55,659,835
|
I am new to Python and I am trying to separate polygon data into x and y coordinates. I keep getting the error: "AttributeError: ("'MultiPolygon' object has no attribute 'exterior'", 'occurred at index 1')"
From what I understand, the MultiPolygon object does not have an exterior attribute. But how do I remedy this to make the function work?
```
def getPolyCoords(row, geom, coord_type):
"""Returns the coordinates ('x' or 'y') of edges of a Polygon exterior"""
# Parse the exterior of the coordinate
geometry = row[geom]
if coord_type == 'x':
# Get the x coordinates of the exterior
return list( geometry.exterior.coords.xy[0] )
elif coord_type == 'y':
# Get the y coordinates of the exterior
return list( geometry.exterior.coords.xy[1] )
# Get the Polygon x and y coordinates
grid['x'] = grid.apply(getPolyCoords, geom='geometry', coord_type='x', axis=1)
grid['y'] = grid.apply(getPolyCoords, geom='geometry', coord_type='y', axis=1)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-73511dbae283> in <module>
1 # Get the Polygon x and y coordinates
----> 2 grid['x'] = grid.apply(getPolyCoords, geom='geometry', coord_type='x', axis=1)
3 grid['y'] = grid.apply(getPolyCoords, geom='geometry', coord_type='y', axis=1)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds)
6012 args=args,
6013 kwds=kwds)
-> 6014 return op.get_result()
6015
6016 def applymap(self, func):
~\Anaconda3\lib\site-packages\pandas\core\apply.py in get_result(self)
140 return self.apply_raw()
141
--> 142 return self.apply_standard()
143
144 def apply_empty_result(self):
~\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self)
246
247 # compute the result using the series generator
--> 248 self.apply_series_generator()
249
250 # wrap results
~\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_series_generator(self)
275 try:
276 for i, v in enumerate(series_gen):
--> 277 results[i] = self.f(v)
278 keys.append(v.name)
279 except Exception as e:
~\Anaconda3\lib\site-packages\pandas\core\apply.py in f(x)
72 if kwds or args and not isinstance(func, np.ufunc):
73 def f(x):
---> 74 return func(x, *args, **kwds)
75 else:
76 f = func
<ipython-input-4-8c3864d38986> in getPolyCoords(row, geom, coord_type)
7 if coord_type == 'x':
8 # Get the x coordinates of the exterior
----> 9 return list( geometry.exterior.coords.xy[0] )
10 elif coord_type == 'y':
11 # Get the y coordinates of the exterior
AttributeError: ("'MultiPolygon' object has no attribute 'exterior'", 'occurred at index 1')
```
|
2019/04/12
|
[
"https://Stackoverflow.com/questions/55659835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11053854/"
] |
I have updated your function `getPolyCoords()` to enable the handling of other geometry types, namely, `MultiPolygon`, `Point`, and `LineString`. Hope it works for your project.
```
def getPolyCoords(row, geom, coord_type):
"""Returns the coordinates ('x|y') of edges/vertices of a Polygon/others"""
# Parse the geometries and grab the coordinate
geometry = row[geom]
#print(geometry.type)
if geometry.type=='Polygon':
if coord_type == 'x':
# Get the x coordinates of the exterior
# Interior is more complex: xxx.interiors[0].coords.xy[0]
return list( geometry.exterior.coords.xy[0] )
elif coord_type == 'y':
# Get the y coordinates of the exterior
return list( geometry.exterior.coords.xy[1] )
if geometry.type in ['Point', 'LineString']:
if coord_type == 'x':
return list( geometry.xy[0] )
elif coord_type == 'y':
return list( geometry.xy[1] )
if geometry.type=='MultiLineString':
all_xy = []
for ea in geometry:
if coord_type == 'x':
all_xy.append(list( ea.xy[0] ))
elif coord_type == 'y':
all_xy.append(list( ea.xy[1] ))
return all_xy
if geometry.type=='MultiPolygon':
all_xy = []
for ea in geometry:
if coord_type == 'x':
all_xy.append(list( ea.exterior.coords.xy[0] ))
elif coord_type == 'y':
all_xy.append(list( ea.exterior.coords.xy[1] ))
return all_xy
else:
# Finally, return empty list for unknown geometries
return []
```
The portion of the code that handles `MultiPolygon` geometries has a loop that iterates for all the member `Polygon`s, and processes each of them. The code for `Polygon`
handling is reused there.
|
See the shapely docs about [multipolygons](https://shapely.readthedocs.io/en/stable/manual.html#MultiPolygons)
A multipolygon is a sequence of polygons, and it is the polygon object that has the exterior attribute. You need to iterate through the polygons of the multipolygon, and get `exterior.coords` of each polygon.
In practice, you probably want the geometries in your GeoDataFrame to be polygons rather than multipolygons, but they aren't. You may want to split the rows which have multipolygons into multiple rows, each with one polygon (or not, depending on your use case).
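A minimal sketch of that iteration (`geoms` is shapely's accessor for the member polygons):

```
def get_exterior_xy(geometry, coord_type):
    # Polygons expose .exterior directly; MultiPolygons are sequences of Polygons
    idx = 0 if coord_type == 'x' else 1
    if geometry.geom_type == 'Polygon':
        return list(geometry.exterior.coords.xy[idx])
    return [list(poly.exterior.coords.xy[idx]) for poly in geometry.geoms]
```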
| 6,798
|
65,146,279
|
I need to run K-means clustering algorithm to cluster textual data but by using cosine distance measure instead of Euclidean distance. Any reliable implementation of this in python?
***Edit:***
I have tried to use NLTK as following:
```
NUM_CLUSTERS=3
kclusterer = KMeansClusterer(NUM_CLUSTERS, distance=
nltk.cluster.util.cosine_distance, repeats=25)
clstr = kclusterer.cluster(X, clusters=False, trace=False)
print (clstr)
```
But it gives me an error:
```
TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]
```
X here is a TF-IDF matrix of shape (15, 155).
|
2020/12/04
|
[
"https://Stackoverflow.com/questions/65146279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10153492/"
] |
If you want to do it yourself: [https://stanford.edu/~cpiech/cs221/handouts/kmeans.html](https://stanford.edu/%7Ecpiech/cs221/handouts/kmeans.html)
Just change the distance-measuring entry. The distance measuring is in the for loop over `i` of the pseudocode.
|
You can use NLTK for this. The K-means from NLTK allows you to specify which measure of distance you want to use.
[nltk](https://www.nltk.org/)
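A sketch of that, assuming the TF-IDF matrix `X` from the question is densified first - the `sparse matrix length is ambiguous` error typically comes from passing a scipy sparse matrix where NLTK expects a sequence of dense vectors:

```
import nltk
from nltk.cluster import KMeansClusterer

NUM_CLUSTERS = 3
# NLTK needs an iterable of dense row vectors, not a scipy sparse matrix
vectors = [row for row in X.toarray()]

kclusterer = KMeansClusterer(NUM_CLUSTERS,
                             distance=nltk.cluster.util.cosine_distance,
                             repeats=25)
assigned_clusters = kclusterer.cluster(vectors, assign_clusters=True)
print(assigned_clusters)
```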
| 6,799
|
53,327,240
|
I have been trying to find a way to get Python to read my `csv`, take the values that are in the date columns (`3months | 6months | 12months`), and plot them onto a graph. However, I have been struggling to find resources and have no previous experience with Python.
If anyone could point me in the right direction it would be much appreciated.
I have already been able to get Python to read a simple csv with few values in it, however **I cannot figure out a way to show a line for each of these shares over the look-back dates**. Here is my csv format.
```
ticker, 3months, 6months, 12months
appl, -12, 16, 24
tsla, 9, 10, 7
amzn, -7, 14, 36
```
I am trying to read the csv and output it into a graph like this.

|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53327240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10471828/"
] |
The pattern `(?!\S)` uses a negative lookahead to check what follows is not a non whitespace character.
What you could do is replace the `(?!\S)` with a word boundary `\b` to let it not be part of a larger match:
`(?i)(?<!\S)lending\s?qb\b`
[Regex demo](https://regex101.com/r/nyHoT5/1)
Another way could be to use a positive lookahead to check for a whitespace character or `.,` or the end of the string using [`(?=[\s,.]|$)`](https://regex101.com/r/nyHoT5/1/)
For example:
```
str5 ="The best product is Lending qb."
print(re.findall(r'(?<!\S)lending\s?qb(?=[\s,.]|$)', str5, re.IGNORECASE)) # ['Lending qb']
```
|
This `(?!\S)` is a forward whitespace boundary.
It is really `(?![^\s])`, a negative of a negative,
with the added benefit of matching at the EOS (end of string).
What that means is you can use the negated-class form to add characters
that qualify as a boundary.
So, just put the period and comma in with the whitespace.
`(?i)(?<![^\s,.])lending\s?qb(?![^\s,.])`
<https://regex101.com/r/BrOj2J/1>
As a tutorial point, this concept encapsulates multiple assertions
and is basic engine Boolean class logic, which speeds up the engine
by a tenfold factor by comparison.
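A quick check of that pattern in Python (same test string as the first answer):

```
import re

pattern = r'(?i)(?<![^\s,.])lending\s?qb(?![^\s,.])'
print(re.findall(pattern, "The best product is Lending qb."))  # ['Lending qb']
```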
| 6,800
|
42,358,433
|
I have simple Python test code, as below:
**tmp.py**
```
import time
while True:
print "New val"
time.sleep(1)
```
If I run it as below, I see the logs on terminal normally:
```
python tmp.py
```
But if I redirect the logs to a log file, it takes quite a while before the logs appear in the file:
```
python tmp.py >/tmp/logs.log 2>&1
```
If I do a `cat /tmp/logs.log`, the output appears in that file quite late, or only when I quit the python application by pressing `Ctrl+C`
Why do I not see the logs instantly in the redirected file?
**Is it solvable by simple i/o redirection as I tried? (without doing code changes inside my Python code, by using modules like logging)**
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42358433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2091948/"
] |
The best option for your case is to set the environment variable `PYTHONUNBUFFERED`.
This is a little more robust than a `#!/usr/bin/python -u` shebang, as that may not work in some virtualenvs.
In your terminal:
```
export PYTHONUNBUFFERED=1
python tmp.py >/tmp/logs.log 2>&1 #or however else you want to call your script.
```
|
The issue is that the output is buffered: Python saves the output in a buffer and flushes it every now and then, but not necessarily after each print.
You can fix this by forcing a flush after each print, explicitly calling `sys.stdout.flush()`.
```
import time
import sys
while True:
print "New val"
sys.stdout.flush()
time.sleep(1)
```
| 6,805
|
65,700,886
|
My experience in python is close to 0, bear with me.
I want to install <https://pypi.org/project/locuplot/> on an EC2 machine to create some plots after running Locust in headless mode.
However, I can't manage to install it:
```
yum update -y
yum install python3 -y
yum install python3-devel -y
yum install python3-pip -y
yum install python-pip -y
yum -y groupinstall 'Development Tools'
pip3 install locust
pip install locuplot
```
But the last command results in:
```
Collecting locuplot
Could not find a version that satisfies the requirement locuplot (from versions: )
No matching distribution found for locuplot
```
What am I missing?
|
2021/01/13
|
[
"https://Stackoverflow.com/questions/65700886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11868615/"
] |
You have
```
pip install locuplot
```
in your last line, but `locuplot` only works with Python 3 and, depending on your setup, `pip` might default to the Python 2 installation, so you should do
```
pip3 install locuplot
```
instead
|
Thanks to FlyingTeller, I found the issue.
The reason is that locuplot requires python>=3.8:
```
amazon-linux-extras install python3
python3.8 -m pip install locuplot
pip install locuplot
```
| 6,806
|
29,166,538
|
From what I read about variable scopes and importing resource files in the [Robot Framework documentation](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#id487), I would expect this to work (Python 2.7, RF 2.8.7):
Test file:
```
*** Settings ***
Resource VarRes.txt
Suite Setup Preconditions
*** Variables ***
*** Test Cases ***
VarDemo
Log To Console imported [${TODAY}]
*** Keywords ***
```
Resource file:
```
*** Settings ***
Library DateTime
*** Variables ***
${TODAY} ${EMPTY} # Initialised during setup, see keyword Preconditions
*** Keywords ***
Format Local Date
[Arguments] ${inc} ${format}
${date} = Get Current Date time_zone=local increment=${inc} day result_format=${format}
[Return] ${date} # formatted date
Preconditions
${TODAY} = Format Local Date 0 %Y-%m-%d
Log To Console inited [${TODAY}]
```
However the output is:
```
inited [2015-03-20]
imported []
```
RF documentation states:
>
> Variables with the test suite scope are available anywhere in the test
> suite where they are defined **or imported**. They can be created in
> Variable tables, **imported from resource** and ....
>
>
>
which I think is done here.
|
2015/03/20
|
[
"https://Stackoverflow.com/questions/29166538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4629534/"
] |
The following works for me:
JavaScript:
```
// Traian Băsescu encodes to VHJhaWFuIELEg3Nlc2N1
var base64 = btoa(unescape(encodeURIComponent( $("#Contact_description").val() )));
```
PHP:
```
$utf8 = base64_decode($base64);
```
|
The problem is that Javascript strings are encoded in UTF-16, and browsers do not offer very many good tools to deal with encodings. A great resource specifically for dealing with Base64 encodings and Unicode can be found at MDN: <https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding>
Their suggested solution for encoding strings **without using the often cited solution involving the deprecated `unescape` function** is:
```
function b64EncodeUnicode(str) {
return btoa(encodeURIComponent(str).replace(/%([0-9A-F]{2})/g, function(match, p1) {
return String.fromCharCode('0x' + p1);
}));
}
```
For further solutions and details I highly recommend you read the entire page.
| 6,807
|
26,013,487
|
I need to change the path of Python's core dump file, or completely disable it. I'm aware that it's possible to change the pattern and location of core dumps in Linux using:
```
/proc/sys/kernel/core_pattern
```
But this is not a suitable solution on a shared server and/or on a grid engine.
So, how can I change the path of Python's core dump, or how can I disable its flush to disk?
Is it possible to change that pattern to `/dev/null` only for my user?
|
2014/09/24
|
[
"https://Stackoverflow.com/questions/26013487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2536294/"
] |
You can use shell command `ulimit` to control it:
```
ulimit -c 0 # Disable core file creation
```
Without a value, it will print the current limit (the maximum size of core file that will be created):
```
ulimit -c
```
|
I think this page gives you what you are looking for:
<http://sigquit.wordpress.com/2009/03/13/the-core-pattern/>
Quoting from the page:
>
> "...the kernel configuration includes a file named “core\_pattern”:
>
>
>
>
```
/proc/sys/kernel/core_pattern
```
>
> In my system, that file contains just this single word:
> core
>
>
> As expected, this pattern shows how the core file will be generated. Two things can be understood from the previous line: The filename of the core dump file generated will be “core”; and second, the current directory will be used to store it (as the path specified is completely relative to the current directory).
>
>
> Now, if we change the contents of that file… (as root, of course)
>
>
>
```
$> mkdir -p /tmp/cores
$> chmod a+rwx /tmp/cores
$> echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
```
>
> You can use the following pattern elements in the core\_pattern file:
>
>
>
```
%p: pid
%<NUL>: '%' is dropped
%%: output one '%'
%u: uid
%g: gid
%s: signal number
%t: UNIX time of dump
%h: hostname
%e: executable filename
%<OTHER>: both are dropped
```
Further on, this part touches the specific need of handling things when working on a cluster of nodes:
>
> Isn’t is great?! Imagine that you have a cluster of machines and you want to use a NFS directory to store all core files from all the nodes. You will be able to detect which node generated the core file (with the hostname), which program generated it (with the program name), and also when did it happen (with the unix time).
>
>
>
And configuring it for good,
>
> The changes done before are only applicable until the next reboot. In order to make the change in all future reboots, you will need to add the following in “/etc/sysctl.conf“:
>
>
>
```
# Own core file pattern...
kernel.core_pattern=/tmp/cores/core.%e.%p.%h.%t
```
>
> sysctl.conf is the file controlling every configuration under /proc/sys
>
>
>
| 6,808
|
56,863,556
|
I want to convert a binary tree into an array using Python, and I don't know how to give an index to a tree node.
I have done this in Java using the formulas
left\_son=(2\*p)+1
and right\_son=(2\*p)+2, but I'm stuck in Python. Is there any function to give an index to a tree node in Python?
|
2019/07/03
|
[
"https://Stackoverflow.com/questions/56863556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8627333/"
] |
You can represent a binary tree in python as a one-dimensional list the exact same way.
For example if you have at tree represented as:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15]
```
`0` is the root
`1` & `2` are the next level
the children of `1` & `2` are `[3, 4]` and `[5, 6]`
This corresponds to your formula. For a given node at index `i`, the children of that node are at `(2*i)+1` and `(2*i)+2`. This is a common way to model a heap. You can read more about how Python uses this in its [heapq library](https://docs.python.org/3.0/library/heapq.html).
You can use this to traverse the tree depth-first with something like:
```
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15]
def childNodes(i):
return (2*i)+1, (2*i)+2
def traversed(a, i=0, d = 0):
if i >= len(a):
return
l, r = childNodes(i)
traversed(a, r, d = d+1)
print(" "*d + str(a[i]))
traversed(a, l, d = d+1)
traversed(a)
```
**This Prints**
```
15
6
14
2
13
5
11
0
10
4
9
1
8
3
7
```
|
In the case of a binary tree, you'll have to perform a level-order traversal. As you keep traversing, you'll have to store the values in the array in the order they appear during the traversal. This will help you restore the properties of a binary tree and will also maintain the order of elements. Here is the code to perform a level-order traversal.
```
class Node:
# A utility function to create a new node
def __init__(self, key):
self.data = key
self.left = None
self.right = None
# Function to print level order traversal of tree
def printLevelOrder(root):
h = height(root)
for i in range(1, h+1):
printGivenLevel(root, i)
# Print nodes at a given level
def printGivenLevel(root , level):
if root is None:
return
if level == 1:
print "%d" %(root.data),
elif level > 1 :
printGivenLevel(root.left , level-1)
printGivenLevel(root.right , level-1)
""" Compute the height of a tree--the number of nodes
along the longest path from the root node down to
the farthest leaf node
"""
def height(node):
if node is None:
return 0
else :
# Compute the height of each subtree
lheight = height(node.left)
rheight = height(node.right)
#Use the larger one
if lheight > rheight :
return lheight+1
else:
return rheight+1
# Driver program to test above function
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
print "Level order traversal of binary tree is -"
printLevelOrder(root)
```
| 6,809
|
45,804,534
|
I'm trying to load Parquet data into `PySpark`, where a column has a space in the name:
```
df = spark.read.parquet('my_parquet_dump')
df.select(df['Foo Bar'].alias('foobar'))
```
Even though I have aliased the column, I'm still getting this error, propagated from the `JVM` side of `PySpark`. I've attached the stack trace below.
Is there a way I can load this parquet file into `PySpark`, without pre-processing the data in Scala, and without modifying the source parquet file?
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
/usr/local/python/pyspark/sql/utils.py in deco(*a, **kw)
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
/usr/local/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
318 "An error occurred while calling {0}{1}{2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
Py4JJavaError: An error occurred while calling o864.collectToPython.
: org.apache.spark.sql.AnalysisException: Attribute name "Foo Bar" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkConversionRequirement(ParquetSchemaConverter.scala:581)
at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkFieldName(ParquetSchemaConverter.scala:567)
at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$$anonfun$checkFieldNames$1.apply(ParquetSchemaConverter.scala:575)
at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$$anonfun$checkFieldNames$1.apply(ParquetSchemaConverter.scala:575)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$.checkFieldNames(ParquetSchemaConverter.scala:575)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.buildReaderWithPartitionValues(ParquetFileFormat.scala:293)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:285)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:283)
at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:303)
at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:42)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:386)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:228)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:2803)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2823)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:2800)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
During handling of the above exception, another exception occurred:
AnalysisException Traceback (most recent call last)
<ipython-input-37-9d7c55a5465c> in <module>()
----> 1 spark.sql("SELECT `Foo Bar` as hey FROM df limit 10").take(1)
/usr/local/python/pyspark/sql/dataframe.py in take(self, num)
474 [Row(age=2, name=u'Alice'), Row(age=5, name=u'Bob')]
475 """
--> 476 return self.limit(num).collect()
477
478 @since(1.3)
/usr/local/python/pyspark/sql/dataframe.py in collect(self)
436 """
437 with SCCallSiteSync(self._sc) as css:
--> 438 port = self._jdf.collectToPython()
439 return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))
440
/usr/local/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
/usr/local/python/pyspark/sql/utils.py in deco(*a, **kw)
67 e.java_exception.getStackTrace()))
68 if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
70 if s.startswith('org.apache.spark.sql.catalyst.analysis'):
71 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
AnalysisException: 'Attribute name "Foo Bar" contains invalid character(s) among " ,;{}()\\n\\t=". Please use alias to rename it.;'
```
|
2017/08/21
|
[
"https://Stackoverflow.com/questions/45804534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1922392/"
] |
Have you tried,
```py
df = df.withColumnRenamed("Foo Bar", "foobar")
```
When you select the column with an alias you're still passing the wrong column name through a select clause.
|
I tried @ktang's method and it worked for me as well. I'm working with SQL and Python, so it may be different for you, but it worked nonetheless.
Ensure there are no spaces in your column names/headers. Space ( ) is actually among the invalid characters listed in the error message, and PySpark won't accept it in Parquet column names.
I was initially using:
```
master_df = spark.sql(f'''
SELECT occ.num, 'Occ Event' AS `Event Type`,
...
```
Changing the space ( ) in Event Type to an underscore (\_) worked:
```
master_df = spark.sql(f'''
SELECT occ.num, 'Occ Event' AS `Event_Type`,
...
```
| 6,810
|
20,862,510
|
How do you determine if an incoming url Request to an Openshift app is 'http' or 'https'? I wrote an Openshift python 3.3 app back in June '13. I was able to tell if the incoming url to my Openshift app was 'http' or 'https' by the following code:
```
if request['HTTP_X_FORWARDED_PROTO'] == 'https': #do something
```
This code no longer works when I create new Openshift apps. I have posted on the Openshift forums without finding the solution. They suggested that I post here.
Anybody know how to do this?
|
2013/12/31
|
[
"https://Stackoverflow.com/questions/20862510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1924325/"
] |
This is a bug and will be fixed.
EDIT: Opened <https://bugzilla.redhat.com/show_bug.cgi?id=1048331> to track
|
In the meantime I found that this works:
`if request.environ['HTTP_X_FORWARDED_PROTO'] == 'http':  # or 'https'`
| 6,811
|
32,111,279
|
I want to read a PDF file in Python. I tried some of the ways - PdfReader and pdfquery - but I'm not getting the result in string format. I want to get some of the content from that PDF file. Is there any way to do that?
|
2015/08/20
|
[
"https://Stackoverflow.com/questions/32111279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075219/"
] |
[PDFminer](http://www.unixuser.org/%7Eeuske/python/pdfminer/index.html) is a tool for extracting information from PDF documents.
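For example, with the pdfminer.six fork (the maintained successor; the original package needs noticeably more boilerplate for extraction), a minimal sketch:

```
# pip install pdfminer.six
from pdfminer.high_level import extract_text

text = extract_text('my_file.pdf')  # the whole document as one string
print(text[:200])
```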
|
Does it matter in your case whether the file is a PDF or not? If you just want to read your file as a string, just open it as you would a normal file.
E.g.:
```
with open('my_file.pdf') as file:
content = file.read()
```
| 6,812
|
56,914,592
|
I have a lot of PNG images that I want to classify, using a trained CNN model.
To speed up the process, I would like to use multiprocessing across CPUs (I have 72 available; here I'm just using 4). I don't have a GPU available at the moment, but if necessary, I could get one.
**My workflow:**
1. read a figure with `openCV`
2. adapt shape and format
3. use `mymodel.predict(img)` to get the probability for each class
When it comes to the prediction step, it never finishes the `mymodel.predict(img)` step. When I use the code without the multiprocessing module, it works fine. For the model, I'm using keras with a tensorflow backend.
```
# load model
mymodel = load_model('190704_1_fcs_plotclassifier.h5')
# use python library multiprocessing to use different CPUs
import multiprocessing as mp
pool = mp.Pool(4)
# Define callback function to collect the output in 'outcomes'
outcomes = []
def collect_result(result):
global outcomes
outcomes.append(result)
# Define prediction function
def prediction(img):
img = cv2.resize(img,(49,49))
img = img.astype('float32') / 255
img = np.reshape(img,[1,49,49,3])
status = mymodel.predict(img)
status = status[0][1]
return(status)
# Define evaluate function
def evaluate(i,figure):
# predict the propability of the picture to be in class 0 or 1
img = cv2.imread(figure)
status = prediction(img)
outcome = [figure, status]
return(i,outcome)
# execute multiprocessing
for i, item in enumerate(listoffigurepaths):
pool.apply_async(evaluate, args=(i, item), callback=collect_result)
pool.close()
pool.join()
# get outcome
print(outcomes)
```
It would be great if somebody knows how to predict several images at a time!
I simplified my code here, but if somebody has an example of how it could be done, I would highly appreciate it.
|
2019/07/06
|
[
"https://Stackoverflow.com/questions/56914592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11124934/"
] |
Does a processing-speed or a size-of-RAM or a number-of-CPU-cores or an introduced add-on processing latency matter most? [ALL OF THESE DO:](https://stackoverflow.com/revisions/18374629/3)
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The python **`multiprocessing`** module is known ( and the **`joblib`** does the same ) to:
>
> The `multiprocessing` package offers both local and remote concurrency, **effectively side-stepping the Global Interpreter Lock** by using **subprocesses** instead of threads.
>
>
>
Yet, as everything in our Universe, this comes at cost:
The wish, expressed by O/P as:
>
> *To speed up the process, I would like to **use multiple-processing with CPUs** (I have **72 available***
>
>
>
will, for this kind of application of a pre-trained **`mymodel.predict()`** predictor, if sent into a `Pool( 72 )`-execution, almost for sure suffocate almost any hardware RAM by swapping.
Here is an example, where a "just"-Do-Nothing worker was spawned by the **`n_jobs = 100`** directive - to see what happens ( time-wise ~ 532+ [ms] lost + memory-allocation-wise, where XYZ [GB] of RAM have immediately been allocated by the O/S ):
[](https://i.stack.imgur.com/Oshjh.png)
This comes from the fact that each **`multiprocessing`** spawned sub-process ( not threads, as O/P has already experienced on her own ) is first instantiated ( after an adequate add-on latency due to O/S process/RAM-allocations-management ) as a ---**FULL-COPY**--- of the ecosystem present inside the original python process ( the complete **`python`** interpreter + all its **`import`**-ed modules + all its internal state and data-structures - used or not - ), so indeed huge amounts of RAM-allocations take place ( have you noticed the platform started to SWAP? notice how many sub-processes were spawned until that time, and you have a ceiling of how many such can fit in-RAM - it has devastating performance effects if trying ( or letting, by using the **`joblib`**-s `n_jobs = -1` auto-scaling directive ) to populate more sub-processes than this SWAP-introducing number...
So far so good, we have paid some time ( often, for carefully designed code, a reasonably negligible amount, if compared to fully training the whole predictor again ) to spawn some number of parallel processes.
If the distributed workload next goes back to one common and performance-wise singular resource ( a disk directory-tree with files ), the performance of the parallel processes gets wrecked - each has to wait for such a resource(!) to become free again.
Finally, even with the "right" amount of **`Pool()`**-spawned sub-processes ( one that prevents the O/S from starting to SWAP RAM to disk and back ), the inter-process communication is extremely expensive - here, serialising ( Pickling/unPickling ) + enQueueing + deQueueing all the DATA-objects one has to pass there and back ( yes, even for the **`callback`** fun ), so the less one sends, the way faster the `Pool`-processing will become.
Here, all `Pool`-associated processes might benefit from independent logging of the results, which may reduce both the scale and latency of the inter-process communications, yet it will also have to consolidate the results, reported by any number of workers, into the common log.
---
How to ... ? First benchmark the costs of each step:
----------------------------------------------------
Without hard facts ( measured durations in `[us]` ), one remains with just an opinion.
```
def prediction( img ):
img = cv2.resize( img, ( 49, 49 ) )
img = img.astype( 'float32' ) / 255
img = np.reshape( img, [1, 49, 49, 3] )
status = mymodel.predict( img )
status = status[0][1]
return( status )
def evaluate( i, figure ): # predict the propability of the picture to be in class 0 or 1
img = cv2.imread( figure )
status = prediction( img )
outcome = [figure, status]
return( i, outcome )
#--------------------------------------------------
from zmq import Stopwatch
aClk = Stopwatch()
#------------------------------------NOW THE COSTS OF ORIGINAL VERSION:
aListOfRESULTs = []
for iii in range( 100 ):
#-------------------------------------------------aClk-ed---------- SECTION
aClk.start(); _ = evaluate( 1, aFigureNAME ); A = aClk.stop()
#-------------------------------------------------aClk-ed---------- SECTION
print( "as-is took {0:}[us]".format( A ) );aListOfRESULTs.append( A )
#----------------------------------------------------------------------
print( [ aFun( aListOfRESULTs ) for aFun in ( np.min, np.mean, np.max ) ] )
#----------------------------------------------------------------------
```
---
Let's try something a bit different:
```
def eval_w_RAM_allocs_avoided( indexI, aFigureNAME ):
return [ indexI,
[ aFigureNAME,
mymodel.predict( ( cv2.resize( cv2.imread( aFigureNAME ),
( 49, 49 )
).astype( 'float32' ) / 255
).reshape( [1, 49, 49, 3]
)
)[0][1],
],
]
#------------------------------------NOW THE COSTS OF MOD-ed VERSION:
aListOfRESULTs = []
for iii in range( 100 ):
#-------------------------------------------------aClk-ed---------- SECTION
aClk.start()
_ = eval_w_RAM_allocs_avoided( 1, aFigureNAME )
B = aClk.stop()
#-------------------------------------------------aClk-ed---------- SECTION
print( "MOD-ed took {0:}[us] ~ {1:} x".format( B, float( B ) / A ) )
aListOfRESULTs.append( B )
#----------------------------------------------------------------------
print( [ aFun( aListOfRESULTs ) for aFun in ( np.min, np.mean, np.max ) ] )
#----------------------------------------------------------------------
```
---
And the actual `img` pre-processing pipeline overhead costs:
```
#------------------------------------NOW THE COSTS OF THE IMG-PREPROCESSING
aListOfRESULTs = []
for iii in range( 100 ):
#-------------------------------------------------aClk-ed---------- SECTION
aClk.start()
aPredictorSpecificFormatIMAGE = ( cv2.resize( cv2.imread( aFigureNAME ),
( 49, 49 )
).astype( 'float32' ) / 255
).reshape( [1, 49, 49, 3]
)
C = aClk.stop()
#-------------------------------------------------aClk-ed---------- SECTION
print( "IMG setup took {0:}[us] ~ {1:} of A".format( C, float( C ) / A ) )
aListOfRESULTs.append( C )
#----------------------------------------------------------------------
print( [ aFun( aListOfRESULTs ) for aFun in ( np.min, np.mean, np.max ) ] )
#----------------------------------------------------------------------
```
---
Actual fileI/O ops:
```
#------------------------------------NOW THE COSTS OF THE IMG-FILE-I/O-READ
aListOfRESULTs = []
for iii in range( 100 ):
#-------------------------------------------------aClk-ed---------- SECTION
aFileNAME = listoffigurepaths[158 + iii * 172]
aClk.start()
_ = cv2.imread( aFileNAME )
F = aClk.stop()
#-------------------------------------------------aClk-ed---------- SECTION
print( "aFileIO took {0:}[us] ~ {1:} of A".format( F, float( F ) / A ) )
aListOfRESULTs.append( F )
#----------------------------------------------------------------------
print( [ aFun( aListOfRESULTs ) for aFun in ( np.min, np.mean, np.max ) ] )
#----------------------------------------------------------------------
```
Without these hard facts collected ( as a form of quantitative records-of-evidence ), one could hardly decide what the best performance-boosting step here would be for any massive-scale prediction-pipeline image processing.
Having these items tested, post the results, and further steps ( be it going via **`multiprocessing.Pool`** or using another strategy for larger performance scaling, towards whatever higher performance ) may then get reasonably evaluated, as the hard facts were first collected to do so.
|
One python package I know that may help you is `joblib`. Hope it can solve your problem.
```
from joblib import Parallel, delayed
```
```
# load model
mymodel = load_model('190704_1_fcs_plotclassifier.h5')
# Define callback function to collect the output in 'outcomes'
outcomes = []
def collect_result(result):
global outcomes
outcomes.append(result)
# Define prediction function
def prediction(img):
img = cv2.resize(img,(49,49))
img = img.astype('float32') / 255
img = np.reshape(img,[1,49,49,3])
status = mymodel.predict(img)
status = status[0][1]
return(status)
# Define evaluate function
def evaluate(i,figure):
# predict the propability of the picture to be in class 0 or 1
img = cv2.imread(figure)
status = prediction(img)
outcome = [figure, status]
return(i,outcome)
outcomes = Parallel(n_jobs=72)(delayed(evaluate)(i, figure) for i, figure in enumerate(listoffigurepaths))
```
| 6,813
|
12,167,192
|
I want to create a dict which can be accessed as:
```
d[id_1][id_2][id_3] = amount
```
As of now I have a huge ugly function:
```
def parse_dict(id1,id2,id3,principal, data_dict):
if data_dict.has_key(id1):
values = data_dict[id1]
        if values.has_key(id2):
..
else:
inner_inner_dict = {}
# and so on
```
What is the Pythonic way to do this?
Note that I pass in the principal, but what I want stored is the amount:
so if all three keys are already there, add the principal to the previous amount!
Thanks
|
2012/08/28
|
[
"https://Stackoverflow.com/questions/12167192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] |
You may want to consider using [`defaultdict`](http://docs.python.org/library/collections.html):
For example:
```
json_dict = defaultdict(lambda: defaultdict(dict))
```
will create a `defaultdict` of `defaultdict`s of `dict`s (yes, really); to access it, you can simply do:
```
json_dict['context']['name']['id'] = '42'
```
without having to resort to using `if...else` to initialize.
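To also handle the accumulation the question mentions ("add principal to the previous amount"), here is a minimal sketch of the same idea; the names `amounts` and `add_principal` are made up for illustration:
```
from collections import defaultdict

# Innermost level defaults to 0.0, so += works even on first access.
amounts = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))

def add_principal(id1, id2, id3, principal):
    amounts[id1][id2][id3] += principal

add_principal('a', 'b', 'c', 100.0)
add_principal('a', 'b', 'c', 50.0)
print(amounts['a']['b']['c'])  # 150.0
```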
|
Maybe you need to have a look at multi-dimensional arrays - for example in numpy:
<http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html>
| 6,815
|
50,552,153
|
I have used the following code to find the last digit of the sum of Fibonacci numbers:
```
#using python3
def fibonacci_sum(n):
if n < 2: print(n)
else:
a, b = 0, 1
sum=1
for i in range(1,n):
a, b = b, (a+b)
sum=sum+b
lastdigit=(sum)%10
print(lastdigit)
n = int(input(""));
fibonacci_sum(n);
```
My code is working fine, but for a larger input like 613455 it takes much more time. I need more efficient code. Can anyone help me with this?
|
2018/05/27
|
[
"https://Stackoverflow.com/questions/50552153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5405806/"
] |
Keeping track of the last digit only
====================================
Note that whenever you add integers, the last digit of the sum depends only on the last digits of the addends. This means we only have to keep the last digit on every iteration. The same applies to the sum: at all times we only need to keep its last digit.
```
def fibonacci_last_digit_sum(n):
sum_, a, b = 0, 0, 1
for _ in range(n):
a, b = b, (a + b) % 10
sum_ = (sum_ + b) % 10
return sum_
print(fibonacci_last_digit_sum(613455)) # 2
```
The above yields a cyclic result
================================
Any triple `(a, b, sum_)` entirely defines the next elements in the sequence. But since we are keeping track solely of the last digit, then there are finitely many such triples. This means that the sequence of triples `(a, b, sum_)` must be cyclic, it comes back to the same value at some point. In particular, a gross upper bound for the length of this cycle is `1000`.
This means we can easily check empirically what the length of that cycle is with this updated version of our code.
```
def fibonacci_last_digit_sum(n):
sum_, a, b = 0, 0, 1
for _ in range(n):
a, b = b, (a + b) % 10
sum_ = (sum_ + b) % 10
return a, b, sum_
seen = set()
for x in range(1000):
triple = fibonacci_last_digit_sum(x)
if triple in seen:
print('Back to start:', x)
break
seen.add(triple)
```
Output:
```
Back to start: 60
```
In other words, the cycle is of length `60`.
This means that for any `n`, the answer will be the same as for `n % 60`. This allows us to write a solution whose run-time depends only on the modulo operation `n % 60`, [which is *O(log(n))*](https://math.stackexchange.com/questions/320420/time-complexity-of-a-modulo-operation).
```
def fibonacci_last_digit_sum(n):
sum_, a, b = 0, 0, 1
for _ in range(n % 60): # We range over n % 60 instead of n
a, b = b, (a + b) % 10
sum_ = (sum_ + b) % 10
return sum_
print(fibonacci_last_digit_sum(613455)) # 2
```
One last improvement would be to hardcode the cycle in a list, as it is relatively short, and return the value at index `n % 60`.
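A sketch of that hardcoded version, reusing the function above to build the table once:
```
# Build the 60-entry cycle once with the O(n % 60) version above.
CYCLE = [fibonacci_last_digit_sum(k) for k in range(60)]

def fibonacci_last_digit_sum_fast(n):
    return CYCLE[n % 60]

print(fibonacci_last_digit_sum_fast(613455))  # 2
```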
|
You're asking for code that returns the last digit of the sum of the first n Fibonacci numbers.
First thing to note is that fib(1) + fib(2) + ... + fib(n) = fib(n+2)-1.
That's easily proved: Let S(n) be the sum of the first n Fibonacci numbers. Then S(1) = 1, S(2) = 2, and S(n) - S(n-1) = fib(n). The result follows by induction.
Second, modulo 2, the Fibonacci numbers repeat on a cycle of length 3, and modulo 5, the Fibonacci numbers repeat on a cycle of length 20. [See the wikipedia page on the Pisano period](https://en.wikipedia.org/wiki/Pisano_period).
This gives us an O(1) solution using the above and the [Chinese Remainder Theorem](https://en.wikipedia.org/wiki/Chinese_remainder_theorem)
```
f2 = [0,1,1]
f5 = [0,1,1,2,3,0,3,3,1,4,0,4,4,3,2,0,2,2,4,1]
def f1(n):
y = f5[(n+2) % 20]
return (y-1)%10 if y%2==f2[(n+2)%3] else y+4
cases = [1, 2, 100, 12334, 1234567]
for i in cases:
print(i, f1(i))
```
| 6,822
|
63,559,190
|
I am trying to loop through the files in a folder with Python. I have found different ways to do that, such as using the os package or glob. But for some reason, they don't maintain the order the files appear in the folder. For example, my folder has `img_10`, `img_20`, `img_30`... But when I loop through them, my code reads the files like: `img_30`, `img_10`, `img_50`... and so on.
I would like my program to read the files as they appear in the folder, maintaining the sequence they are in.
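For reference, a minimal sketch of the usual fix, sorting on the number embedded in each name so `img_10` comes before `img_20` and `img_30` (the folder name `'images'` is a placeholder):
```
import os
import re

def numeric_key(name):
    m = re.search(r'\d+', name)          # first run of digits in the name
    return int(m.group()) if m else -1   # non-numbered names sort first

for name in sorted(os.listdir('images'), key=numeric_key):
    print(name)
```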
|
2020/08/24
|
[
"https://Stackoverflow.com/questions/63559190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11446390/"
] |
You need to pass the data in an `array` (via the `->with()` method), or you can use the `compact` method.
```php
return view('admin.faq.index', compact('faqs'));
```
Or
```php
return view('admin.faq.index')->with(array('faqs'=>$faqs));
```
|
try to use the `compact` method as below:
```
public function index(Request $request)
{
$faqs = Faq::all();
return view('admin.faq.index',compact('faqs'));
}
```
| 6,823
|
36,177,019
|
Okay, so I'm new to Python and I just really need some help with this. This is my code so far. I keep getting a syntax error and I have no idea what I'm doing wrong.
```
count = int(input("What number do you want the timer to start: "))
count == ">" -1:
print("count")
print("")
count = count - 1
time.sleep(1)
```
|
2016/03/23
|
[
"https://Stackoverflow.com/questions/36177019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5970976/"
] |
You need to ensure you import the time library before you can access the time.sleep method.
Also, it may be more effective to use a for loop to repeat code. The structure of your if statement is also incorrect: the second line is not a valid expression.
```
IF <Expression> is TRUE:
DO THIS.
```
Also consider using a range within your for loop, see below:
```
import time
count = int(input("What number do you want the timer to start: "))
for countdown in range(count,0,-1):
print (countdown)
time.sleep(1)
```
Explanation:
```
for countdown in range(count,0,-1):
```
range(start, stop, step). Starts at your given integer, stops before 0, and steps by -1 every iteration.
|
In the 2nd line, you can't subtract 1 from ">", which is a string.
What you need here is apparently a for loop. EDIT: You forgot the import too!
```
import time
count = int(input("What number do you want the timer to start: "))
for i in range(count):
print("count")
print(i)
count = count - 1
time.sleep(1)
```
| 6,826
|
52,008,168
|
I have this python code that should make a video:
```
import cv2
import numpy as np
out = cv2.VideoWriter("/tmp/test.mp4",
cv2.VideoWriter_fourcc(*'MP4V'),
25,
(500, 500),
True)
data = np.zeros((500,500,3))
for i in xrange(500):
out.write(data)
out.release()
```
I expect a black video but the code throws an assertion error:
```
$ python test.py
OpenCV(3.4.1) Error: Assertion failed (image->depth == 8) in writeFrame, file /io/opencv/modules/videoio/src/cap_ffmpeg.cpp, line 274
Traceback (most recent call last):
File "test.py", line 11, in <module>
out.write(data)
cv2.error: OpenCV(3.4.1) /io/opencv/modules/videoio/src/cap_ffmpeg.cpp:274: error: (-215) image->depth == 8 in function writeFrame
```
I tried various fourcc values but none seem to work.
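Worth noting: the assertion complains that `image->depth == 8` fails, i.e. the writer expects 8-bit frames, so presumably the array needs an explicit dtype:
```
data = np.zeros((500, 500, 3), dtype=np.uint8)  # 8-bit frames, as the assertion demands
```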
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/52008168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/325809/"
] |
So it looks like you ran `inspectdb` on an existing database to generate this model. I'm guessing this is failing because the Django ORM expects your table to have a primary key. "id" is the default name for a Django-generated primary key field. I suspect when you call `Characterweapons.objects.all()` it tries to look up the primary key field.
There may be a workaround, but unless you *really* know what you're doing with your database, I would highly urge you to set a primary key on your tables.
|
The best solution that I found, in this case, was to perform my own query, for example:
```
fact = MyModelWithoutPK.objects.raw("SELECT * FROM my_model_without_pk WHERE my_search=some_search")
```
This way you don't have to implement or add another middleware.
see more at [Django docs](https://docs.djangoproject.com/en/2.2/topics/db/sql/)
| 6,830
|
7,772,965
|
I'm trying to install the Python M2Crypto package into a virtualenv on an x86\_64 RHEL 6.1 machine. This process invokes swig, which fails with the following error:
```
$ virtualenv -q --no-site-packages venv
$ pip install -E venv M2Crypto==0.20.2
Downloading/unpacking M2Crypto==0.20.2
Downloading M2Crypto-0.20.2.tar.gz (412Kb): 412Kb downloaded
Running setup.py egg_info for package M2Crypto
Installing collected packages: M2Crypto
Running setup.py install for M2Crypto
building 'M2Crypto.__m2crypto' extension
swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c
swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i
/usr/include/openssl/opensslconf.h:31: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
error: command 'swig' failed with exit status 1
Complete output from command /home/lorin/venv/bin/python -c "import setuptools;__file__='/home/lorin/venv/build/M2Crypto/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-BFiNtU-record/install-record.txt --install-headers /home/lorin/venv/include/site/python2.6:
```
I've got OpenSSL 1.0.0 installed via RPM packages from RedHat.
The part of /usr/include/openssl/opensslconf.h that causes the error looks like this:
```
#if defined(__i386__)
#include "opensslconf-i386.h"
#elif defined(__ia64__)
#include "opensslconf-ia64.h"
#elif defined(__powerpc64__)
#include "opensslconf-ppc64.h"
#elif defined(__powerpc__)
#include "opensslconf-ppc.h"
#elif defined(__s390x__)
#include "opensslconf-s390x.h"
#elif defined(__s390__)
#include "opensslconf-s390.h"
#elif defined(__sparc__) && defined(__arch64__)
#include "opensslconf-sparc64.h"
#elif defined(__sparc__)
#include "opensslconf-sparc.h"
#elif defined(__x86_64__)
#include "opensslconf-x86_64.h"
#else
#error "This openssl-devel package does not work your architecture?"
#endif
```
gcc has the right variable defined:
```
$ echo | gcc -E -dM - | grep x86_64
#define __x86_64 1
#define __x86_64__ 1
```
But apparently swig doesn't, since this is the line that's failing:
```
swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o \
SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i
```
Is there a way to fix this by changing something in my system configuration? M2Crypto gets installed in a virtualenv as part of a larger script I don't control, so avoiding mucking around with the M2Crypto files would be a good thing.
|
2011/10/14
|
[
"https://Stackoverflow.com/questions/7772965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/742/"
] |
You just don't have `swig` installed.
**Try:**
```
sudo yum install swig
```
**And then:**
```
sudo easy_install M2crypto
```
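If `swig` is installed and the failure is the architecture `#error` from `opensslconf.h`, a workaround I believe works (swig reads default options from the `SWIG_FEATURES` environment variable; treat the exact flags as an assumption) is to define the architecture macro yourself:
```
$ SWIG_FEATURES="-cpperraswarn -includeall -D__x86_64__" pip install M2Crypto
```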
|
```
sudo yum install m2crypto
```
worked for me to get around this problem.
| 6,831
|
52,733,094
|
I am using Ubuntu 16.04 lts. My default python binary is python2.7. When I am trying to install ipykernel for hydrogen in atom editor, with the following command
```
python -m pip install ipykernel
```
It is giving the following errors
```
ERROR: ipykernel requires Python version 3.4 or above.
```
I am trying to install ipykernel for python2. I have already installed python3.7. Also, ipython and jupyter notebook are installed.
|
2018/10/10
|
[
"https://Stackoverflow.com/questions/52733094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8881054/"
] |
Starting with version 5.0 of the [kernel](https://ipykernel.readthedocs.io/en/latest/changelog.html#id3), and version 6.0 of [IPython](https://ipython.readthedocs.io/en/stable/whatsnew/version6.html#ipython-6-0), compatibility with Python 2 was dropped.
As far as I know, the only solution is to install an earlier release.
In order to have Python 2.7 available in the Jupyter Notebook I installed IPython 5.7, and ipykernel 4.10. If you want to install earlier releases of IPython or ipykernel you can do the following:
* Uninstall IPython
`pip uninstall ipython`
* Reinstall IPython
`python2 -m pip install ipython==5.7 --user`
* Install ipykernel
`python2 -m pip install ipykernel==4.10 --user`
|
Try using `Anaconda`
You can learn how to install Anaconda from [here](https://conda.io/docs/user-guide/install/linux.html)
After that, try creating a virtual environment via:
```
conda create -n yourenvname python=2.7 anaconda
```
And activate it via:
```
source activate yourenvname
```
After that, try installing:
```
pip install ipython
pip install ipykernel
```
| 6,841
|
56,789,173
|
I have a simple class in which I want to generate methods based on inherited class fields:
```
class Parent:
def __init__(self, *args, **kwargs):
self.fields = getattr(self, 'TOGGLEABLE')
self.generate_methods()
def _toggle(self, instance):
print(self, instance) # Prints correctly
# Here I need to have the caller method, which is:
# toggle_following()
def generate_methods(self):
for field_name in self.fields:
setattr(self, f'toggle_{field_name}', self._toggle)
class Child(Parent):
following = ['a', 'b', 'c']
TOGGLEABLE = ('following',)
```
At this moment, there is a `toggle_following` function successfully generated in `Child`.
Then I invoke it with a single parameter:
```
>>> child = Child()
>>> child.toggle_following('b')
<__main__.Child object at 0x104d21b70> b
```
And it prints out the expected result in the `print` statement.
**But I need to receive the caller name `toggle_following` in my generic `_toggle` function.**
I've tried using the [`inspect`](https://docs.python.org/3/library/inspect.html) module, but it seems that it has a different purpose regarding function inspection.
|
2019/06/27
|
[
"https://Stackoverflow.com/questions/56789173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5729960/"
] |
maybe this is too hackish, and maybe there's a more elegant (i.e. dedicated) way to achieve this, but:
you can create a wrapper function that passes the func\_name to the inner `_toggle` func:
```py
class Parent:
def __init__(self, *args, **kwargs):
self.fields = getattr(self, 'TOGGLEABLE')
self.generate_methods()
def _toggle(self, instance, func_name):
print(self, instance) # Prints correctly
print(func_name)
def generate_methods(self):
for field_name in self.fields:
func_name = 'toggle_{}'.format(field_name) # I don't have python 3.7 on this computer :P
            setattr(self, func_name, lambda instance, fn=func_name: self._toggle(instance, fn))  # bind fn as a default arg; a plain closure would see only the last func_name
class Child(Parent):
following = ['a', 'b', 'c']
TOGGLEABLE = ('following',)
child = Child()
child.toggle_following('b')
```
Output:
```
<__main__.Child object at 0x7fbe566c0748> b
toggle_following
```
|
Another approach to solve the same problem would be to use `__getattr__` in the following way:
```
class Parent:
def _toggle(self, instance, func_name):
print(self, instance) # Prints correctly
print(func_name)
def __getattr__(self, attr):
if not attr.startswith("toggle_"):
raise AttributeError("Attribute {} not found".format(attr))
tmp = attr.replace("toggle_", "")
if tmp not in self.TOGGLEABLE:
raise AttributeError(
"You cannot toggle the untoggleable {}".format(attr)
)
return lambda x: self._toggle(x, attr)
class Child(Parent):
following = ['a', 'b', 'c']
TOGGLEABLE = ('following',)
child = Child()
# This can toggle
child.toggle_following('b')
# This cannot toggle
child.toggle_something('b')
```
which yields:
```
(<__main__.Child instance at 0x107a0e290>, 'b')
toggle_following
Traceback (most recent call last):
File "/Users/urban/tmp/test.py", line 26, in <module>
child.toggle_something('b')
File "/Users/urban/tmp/test.py", line 13, in __getattr__
raise AttributeError("You cannot toggle the untoggleable {}".format(attr))
AttributeError: You cannot toggle the untoggleable toggle_something
```
| 6,842
|
39,400,319
|
My opinion of the Azure-Python SDK is not high for Azure RM. What takes 1 line in PowerShell takes 10 in Python. That is the opposite of what python is supposed to do.
So, my idea is to create python package which comes with a directory containing a few template .ps1 scripts. You would define a few variables like vmname, resourcegroup, location, etc... inject these into the .ps1 templates, then call commands from the REPL.
Right now I'm having trouble using the subprocess module to keep PS open until I tell it to close. As it stands now, I need to include
```
login-azurerm
```
and authenticate before running **any** command. This won't do. I'd like to fix this, but frankly right now I'm wondering whether or not the premise is a good idea to start with.
Any input is much appreciated!
|
2016/09/08
|
[
"https://Stackoverflow.com/questions/39400319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6293857/"
] |
@RobTruxal, the new CLI for Azure will be in Python and will be released as a preview soon. You can already try it from the github account:
<https://github.com/Azure/azure-cli>
The Azure SDK for Python is not supposed to mimic the Powershell cmdlets, but to be a language SDK (like C#, Java, Ruby, etc.).
If you have any suggestions/comments about the Python SDK itself, please do not hesitate to file an issue on the issue tracker: <https://github.com/Azure/azure-sdk-for-python/issues>
(FYI, I'm the owner of the Python SDK repo at MS)
|
@RobTruxal, it seems that a feasible way to call PowerShell from Python is to use the `subprocess` module, such as the code below for reference.
```
import subprocess
subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "your-script.ps1", "arguments"])
```
You need to write a PowerShell script which includes the `login-azurerm` command and handles the interaction with the cmd session running the Python script.
Or you can choose the cross-platform Azure CLI in node.js as the command-line tool for managing Azure resources.
Hope it helps.
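To keep a single PowerShell session open from Python (so `login-azurerm` runs only once), here is a minimal sketch using `subprocess.Popen` and PowerShell's `-Command -` mode, which reads commands from stdin; the cmdlet after login is a placeholder:
```
import subprocess

ps = subprocess.Popen(
    ["powershell.exe", "-NoLogo", "-Command", "-"],  # read commands from stdin
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
ps.stdin.write("Login-AzureRm\n")             # authenticate once
ps.stdin.write("Get-AzureRmResourceGroup\n")  # placeholder for further cmdlets
ps.stdin.close()                              # closing stdin ends the session
print(ps.stdout.read())
```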
| 6,843
|
62,295,329
|
I'm trying to play an mp3 file using python VLC but it seems like nothing is happening and there is no error message. Below is the code:
```
import vlc
p = vlc.MediaPlayer(r"C:\Users\user\Desktop\python\projects\etc\lady_maria.mp3")
p.play()
```
I tried below code as I've read from another post:
```
import vlc
mp3 = "C:/Users/user/Desktop/python/projects/etc/lady_maria.mp3"
instance = vlc.get_default_instance()
media = instance.media_new(mp3)
media_list = instance.media_list_new([mp3])
player = instance.media_player_new()
player.set_media(media)
list_player = instance.media_list_player_new()
list_player.set_media_player(player)
list_player.set_media_list(media_list)
```
I also tried to use pygame mixer but same reason, no sounds, and no error message.
```
from pygame import mixer
mixer.init()
mixer.music.load(r'C:\Users\user\Desktop\python\projects\etc\lady_maria.mp3')
mixer.music.play()
```
None of these gives an error message, so I'm not sure what is going on. Any advice will be greatly appreciated!
|
2020/06/10
|
[
"https://Stackoverflow.com/questions/62295329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11639529/"
] |
If audio files are playing fine elsewhere on the system:
for the pygame library, adjust the volume using:
```
mixer.music.set_volume(1.0) # float value from 0.0 to 1.0 for volume setting
```
|
I can't work out what the actual issue is given the code in the OP. Please try this test code.
```
import pygame
WINDOW_WIDTH = 200
WINDOW_HEIGHT = 200
### initialisation
pygame.init()
pygame.font.init()
pygame.mixer.init()
window = pygame.display.set_mode( ( WINDOW_WIDTH, WINDOW_HEIGHT ) )
# Rain sound from: https://www.freesoundslibrary.com/sound-of-rain-falling-mp3/ (CC BY 4.0)
pygame.mixer.music.load( 'rain-falling.mp3' )
pygame.mixer.music.play( loops=-1 ) # loop forever
### Main Loop
clock = pygame.time.Clock()
font = pygame.font.SysFont(None, 40)
done = False
while not done:
# Handle user-input
for event in pygame.event.get():
if ( event.type == pygame.QUIT ):
done = True
# Draw some text
window.fill( ( 0, 0, 50 ) )
ms = font.render( str( pygame.time.get_ticks() ), True, ( 255,255,255 ) )
window.blit( ms, ( 100, 100 ) )
pygame.display.flip()
# Clamp FPS
clock.tick_busy_loop(30)
pygame.quit()
```
If you download the MP3 from the link in the source and running this script still produces no sound, the issue is with your local machine setup. In this case, perhaps verify you are using a recent version of Python and PyGame. I've tested this with Python 3.8.2 and PyGame 1.9.6.
I don't think installing [VLC Media Player](https://www.videolan.org/vlc/) would hurt anything. This is my daily "go-to" media player.
| 6,844
|
67,109,676
|
example1.py:
```
from tkinter import *
root = Tk()
filename = 'james'
Lbl = Label(root,text="ciao")
Lbl.pack()
root.mainloop()
```
example2.py:
```
from example1 import filename
print(filename)
```
Why does Python open the tkinter window if I run only example2.py?
It is necessary for me that filename lives in example1.py and is read from example2.py.
I referenced only the filename variable, not the tkinter window, in example2.
|
2021/04/15
|
[
"https://Stackoverflow.com/questions/67109676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9811405/"
] |
**This is because of the standard import rules of Python.**
Python executes the whole of the file you import, in your case the example1 file, including `root.mainloop()`.
**To prevent this, guard the mainloop:**
```
if __name__ == '__main__':
    root.mainloop()
```
in your file ([see what `if __name__ == '__main__'` does](https://www.geeksforgeeks.org/what-does-the-if-__name__-__main__-do/))
**Edit:**
Once the mainloop is guarded you can import `filename` directly; if you prefer, you can also expose it through a function:
**So your final code will be**
**example1.py**
```
from tkinter import *
root = Tk()
filename = 'james'
Lbl = Label(root,text="ciao")
Lbl.pack()
def filenamefunc():
return filename
if __name__ == '__main__':
root.mainloop()
```
**And your example2.py will be**
```
from example1 import filenamefunc
print(filenamefunc())
```
**Thank-you**
|
First, I don't know how to solve your problem, but I know what you want to know.
I understand you have two Python files ('example 1' and 'example 2'), you want to import 'filename' from 'example 1', but tkinter runs and you want to know why. Right?
The import statement is not a pick-up tool.
**That means you can't get 'filename' without the rest of the module being executed.**
When you import 'example1', Python runs the 'example1' code from beginning to end and then gets the filename. It's faster to try than to explain.
```
from tkinter import *
filename = 'james'
print(' before ')
root = Tk()
Lbl = Label(root,text="ciao")
Lbl.pack()
root.mainloop()
print(' after ')
```
Change your code like this and run 'example 2'. First you will see the 'before' text in your terminal. Then tkinter runs, and when you close the tkinter window you will see the 'after' text, and finally your own code `print(filename)` runs.
example1 [filename is set to 'james' >> print 'before' >> run the tkinter mainloop >> print 'after'] >> example2 [get filename from example1 >> filename is 'james' >> print(filename)]
Anyway, that's the answer to why tkinter runs.
| 6,846
|
5,249,353
|
I'm learning Python. I have a list of simple entries and I want to convert it into a dictionary where the first element of the list is the key for the second element, the third is the key for the fourth, and so on. How can I do it?
```
list = ['first_key', 'first_value', 'second_key', 'second_value']
```
Thanks in advance!
|
2011/03/09
|
[
"https://Stackoverflow.com/questions/5249353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/619378/"
] |
```
myDict = dict(zip(myList[::2], myList[1::2]))
```
Please do not use 'list' as a variable name, as it prevents you from accessing the list() function.
If there is much data involved, we can do it more efficiently using iterator functions:
```
from itertools import izip, islice
myList = ['first_key', 'first_value', 'second_key', 'second_value']
myDict = dict(izip(islice(myList,0,None,2), islice(myList,1,None,2)))
```
|
The most concise way is
```
some_list = ['first_key', 'first_value', 'second_key', 'second_value']
d = dict(zip(*[iter(some_list)] * 2))
```
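Why this works: both zip arguments are the same iterator, so zip consumes the list pairwise. The same thing spelled out:
```
some_list = ['first_key', 'first_value', 'second_key', 'second_value']
it = iter(some_list)
d = dict(zip(it, it))  # zip pulls key, value, key, value off the single iterator
print(d)  # {'first_key': 'first_value', 'second_key': 'second_value'} (order may vary)
```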
| 6,847
|
45,636,955
|
I've got the following Python method; it takes a string and returns an integer. I'm looking for the correct `input` that will print "Great Success!"
```
input = "XXX"
def enc(pwd):
inc = 0
for i in range(1, len(pwd) + 1):
_1337 = pwd[i - 1]
_move = ord(_1337) - 47
if i == 1:
inc += _move
else:
inc += _move * (42 ** (i - 1))
return inc
if hex(enc(input)) == 0xEA9D1ED352B8:
print "Great Success!"
```
|
2017/08/11
|
[
"https://Stackoverflow.com/questions/45636955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8451227/"
] |
It's encoding the input in base 42 (starting from `chr(47)` which is `'/'`), and easy to decode:
```
def dec(x):
while x:
yield chr(47 + x % 42)
x //= 42
print ''.join(dec(0xEA9D1ED352B8))
```
The output is: `?O95PIVII`
|
You can try to brute-force it.
Just make a loop that runs candidate inputs through the function until you get the same value.
But with no information about the length of the input and so on, it could take a while.
| 6,853
|
38,697,820
|
I'm trying to run a sqoop command inside a Python script. I had no problem doing that through a shell command, but when I try to execute the Python script:
```
#!/usr/bin/python
sqoopcom="sqoop import --direct --connect abcd --username abc --P --query "queryname" "
exec (sqoopcom)
```
I get an error, "invalid syntax". How do I solve it?
|
2016/08/01
|
[
"https://Stackoverflow.com/questions/38697820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5312431/"
] |
The build in `exec` statement that you're using is for interpreting python code inside a python program.
What you want is to execute an external (shell) command. For that you could use `call` from the **subprocess module**
```
import subprocess
subprocess.call(["echo", "Hello", "World"])
```
<https://docs.python.org/3/library/subprocess.html>
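Applying the same idea to the sqoop command from the question, here is a sketch (the connection string, username and query are the question's placeholders); passing each argument as its own list item sidesteps the quoting problem:
```
import subprocess

subprocess.call([
    "sqoop", "import", "--direct",
    "--connect", "abcd",      # placeholder connection string
    "--username", "abc", "--P",
    "--query", "queryname",   # no shell quoting needed for list items
])
```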
|
You need to escape the `"` quotes on the `--query` param:
```
sqoopcom="sqoop import --direct --connect abcd --username abc --P --query \"queryname\" --target-dir /pwd/dir --m 1 --fetch-size 1000 --verbose --fields-terminated-by , --escaped-by \\ --enclosed-by '\"'/dir/part-m-00000"
```
| 6,855
|
46,225,871
|
```
x = open("file.txt",'w')
s = chr(931) # 'Σ'
x.write(s)
```
Error
```
Traceback (most recent call last):
File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u03a3' in position 0: character maps to <undefined>
```
Even if I save the char 'Σ' in the Windows text editor as UTF-8 and then open it in Python, the result is 'Σ' (the UTF-8 bytes read back as cp1252) and not what I expect, Σ.
I don't understand why Python interprets the character wrongly; it's UTF-8, or is this a problem in Windows?
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46225871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4772772/"
] |
Your default encoding seems to be cp1252, not utf-8.
You need to specify the encoding, to be sure it's utf-8.
### this works fine:
```
with open('outfile.txt', 'w', encoding='utf-8') as f:
f.write('Σ')
```
### this raises your error:
```
with open('outfile.txt', 'w', encoding='cp1252') as f:
f.write('Σ')
```
|
I solved the problem by saving bytes instead of a string:
```
def save_byte():
x = open("file.txt",'wb')
s = chr(931) # 'Σ'
s = s.encode()
x.write(s)
x.close()
```
Output:
Σ
| 6,858
|
7,139,293
|
I'm currently building my android project from the project folder with ant like the following:
```
MyProject/
build.xml
```
The ant command that I use to build is:
```
$ MyProject/ant install
```
In my java code, I have some unused imports and variables, for instance:
```
import java.io.IOException;
String doNothing = "Do Nothing";
```
I'm not making use of the above in my code. Is there a way to detect those from the command line with `ant`? If not, do I have to use a third-party tool?
In Python, I use [pyflakes](http://pypi.python.org/pypi/pyflakes) to clean up my code. I'm looking for the equivalent in Java at the command line.
|
2011/08/21
|
[
"https://Stackoverflow.com/questions/7139293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/117642/"
] |
You can use [PMD](http://pmd.sourceforge.net/) to do this and much more. It includes checks for both unused local variables and unused imports. It can be [integrated with ant](https://pmd.github.io/pmd-6.17.0/pmd_userdocs_tools_ant.html) and you can configure it to fail the build if any errors are detected. If you are using the standard build structure that is created via the `android` command line tool you can tie the PMD target into the `-pre-compile` stage.
|
You do not have to do this if you have proguard enabled. <http://developer.android.com/guide/developing/tools/proguard.html>
| 6,859
|
3,032,519
|
When I was using the built-in simple server, everything is OK, the admin interface is beautiful:
`python manage.py runserver`
However, when I try to serve my application using a wsgi server with `django.core.handlers.wsgi.WSGIHandler`, Django seems to forget where the admin media files are, and the admin page is not styled at all:
`gunicorn_django`
How did this happen?
|
2010/06/13
|
[
"https://Stackoverflow.com/questions/3032519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/225262/"
] |
When I look into the source code of Django, I find out the reason.
Somewhere in the `django.core.management.commands.runserver` module, a `WSGIHandler` object is
wrapped inside an `AdminMediaHandler`.
According to the document, `AdminMediaHandler` is a
>
> WSGI middleware that intercepts calls
> to the admin media directory, as
> defined by the ADMIN\_MEDIA\_PREFIX setting, and serves those images.
> Use this ONLY LOCALLY, for development! This hasn't been tested
> for
> security and is not super efficient.
>
>
>
And that's why the admin media files can only be found automatically when I was using the test server.
Now I just go ahead and set up the admin media url mapping manually :)
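For completeness, a sketch of doing the same wrapping by hand for gunicorn, development only; the import location is the old-style one and should be treated as an assumption for your Django version:
```
# wsgi.py, development only: serve admin media from the WSGI app itself
from django.core.handlers.wsgi import WSGIHandler
from django.core.servers.basehttp import AdminMediaHandler

application = AdminMediaHandler(WSGIHandler())
```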
|
Django by default doesn't serve the media files, since it is usually better to serve these static files from another server (for performance etc.). So, when deploying your application you have to make sure you set up another server (or virtual server) which serves the media (including the admin media). You can find the admin media in `django/contrib/admin/media`. You should set up your MEDIA\_URL and ADMIN\_MEDIA\_URL so that they point to the media files. See also <http://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files>.
| 6,860
|
72,991,013
|
How can I read input like this in Python 3?
The first input is 2, and based on this first input I want to read 2 new numbers.
For example:
```
2 # how many inputs?
1 2 # 2 numbers inputs
```
or
```
3 # how many inputs?
3 5 8 # in one line getting 3 inputs
```
Here is another example:
```
4
6 8 7 9
```
How can I read that many inputs from one line, based on the first input?
I tried to use map, but with no luck:
```
n = int(input())
for i in range(n):
x = map(int,input().split())
```
But with n = 2 the loop prompts twice:
```
1 2
```
`i = 0` consumes the line `1 2`, and then `i = 1` asks for input again.
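For reference, the usual fix is to drop the loop and read the whole line once; a minimal sketch:
```
n = int(input())
x = list(map(int, input().split()))  # one line, n numbers
assert len(x) == n
print(x)
```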
|
2022/07/15
|
[
"https://Stackoverflow.com/questions/72991013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19312041/"
] |
This might be an approach:
```
<?php
$input = "1234-ABC 2345 rBC 9998DDD 9657 lJi";
preg_match_all('/(\d{4}[-\s]*[a-z]{3})/i', $input, $matches);
$output = array_shift($matches);
array_walk($output, function(&$value) {
$value = strtoupper(str_replace(" ", "", $value));
});
print_r($output);
```
The output obviously is:
```
Array
(
[0] => 1234ABC
[1] => 2345RBC
[2] => 9998DDD
[3] => 9657LJI
)
```
|
Your regular expression `(\d{4})( *)(-*)( *)([a-zA-Z]{3})` was correct, but you needed to use `preg_match_all` to return multiple matches.
This is a demo:
<https://onlinephp.io/c/e5acb>
```
<?php
function parsePlates($subject){
$plates = [];
preg_match_all('/(\d{4})( *)(-*)( *)([a-zA-Z]{3})/sim', $subject, $result, PREG_PATTERN_ORDER);
for ($i = 0; $i < count($result[0]); $i++) {
$plate = $result[0][$i];
$plates[] = $plate;
}
return $plates;
}
$plates = parsePlates('1234 ABC 4567BCX');
var_dump($plates);
```
| 6,863
|
38,442,107
|
I tried the following:
```
#!/usr/bin/env python
import keras
from keras.models import model_from_yaml
model_file_path = 'model-301.yaml'
weights_file_path = 'model-301.hdf5'
# Load network
with open(model_file_path) as f:
yaml_string = f.read()
model = model_from_yaml(yaml_string)
model.load_weights(weights_file_path)
model.compile(optimizer='adagrad', loss='binary_crossentropy')
# Visualize
from keras.utils.visualize_util import plot
```
However, this gives an error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/moose/.local/lib/python2.7/site-packages/keras/utils/visualize_util.py", line 7, in <module>
if not pydot.find_graphviz():
AttributeError: 'module' object has no attribute 'find_graphviz'
```
How can I fix this?
Note: The hdf5 and the YAML file can be found [on Github](https://github.com/TensorVision/MediSeg/tree/master/AP3/model-301).
|
2016/07/18
|
[
"https://Stackoverflow.com/questions/38442107",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/562769/"
] |
The problem is also referenced on the [issues page](https://github.com/fchollet/keras/issues/3210) of the keras project.
You need to install a version of `pydot` <= 1.1.0 because the function `find_graphviz` was [removed](https://github.com/erocarrera/pydot/commit/bc639e76b214b1795ebd6263680ee55d9d4fca9f#diff-44fda7721399b25e8cb06471016a3454) in version 1.2.0. Alternatively you could install [pydot-ng](https://github.com/pydot/pydot-ng) instead, which is [recommended](https://github.com/fchollet/keras/blob/master/keras/utils/visualize_util.py#L2) by the keras developers.
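For example, either of these (the exact pin is up to you):
```
pip install "pydot<=1.1.0"
pip install pydot-ng
```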
|
If you have not already installed the `pydot` Python package, try installing it. If you already have `pydot`, reinstalling it (at a compatible version) should help with your problem.
| 6,864
|
10,782,285
|
>
> **Possible Duplicate:**
>
> [How to generate all permutations of a list in Python](https://stackoverflow.com/questions/104420/how-to-generate-all-permutations-of-a-list-in-python)
>
>
>
I am given a list `[1,2,3]` and the task is to create all the possible permutations of this list.
Expected output:
```
[[1, 2, 3], [1, 3, 2], [2, 3, 1], [2, 1, 3], [3, 1, 2], [3, 2, 1]]
```
I can't even think where to begin. Can anyone help?
Thanks
|
2012/05/28
|
[
"https://Stackoverflow.com/questions/10782285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1407910/"
] |
[itertools.permutations](http://docs.python.org/library/itertools.html#itertools.permutations) does this for you.
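For example:
```
>>> import itertools
>>> [list(p) for p in itertools.permutations([1, 2, 3])]
[[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```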
Otherwise, a simple method consists in finding the permutations recursively: you successively select the first element of the output, then ask your function to find all the permutations of the remaining elements.
A slightly different, but similar solution can be found at <https://stackoverflow.com/a/104436/42973>. It finds all the permutations of the remaining (non-first) elements, and then inserts the first element successively at all the possible locations.
|
This is a rudimentary solution:
the idea is to use recursion to go through all candidate permutations and reject the invalid ones.
```
def perm(list_to_perm,perm_l,items,out):
if len(perm_l) == items:
out +=[perm_l]
else:
for i in list_to_perm:
if i not in perm_l:
perm(list_to_perm,perm_l +[i],items,out)
a = [1,2,3]
out = []
perm(a,[],len(a),out)
print out
```
output:
```
[[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```
| 6,865
|
35,042,340
|
I am using [Backbone.LocalStorage](https://github.com/jeromegn/Backbone.localStorage) plugin with backbone app. It is working fine in chrome and safari however, it is giving me below error in firefox.
>
> DOMException [SecurityError: "The operation is insecure."
> code: 18
> nsresult: 0x80530012
> location: <http://localhost:8000/js/libs/backbone.localStorage/backbone.localStorage.js?version=1453910702146:137]>
>
>
>
I am using Python's `SimpleHTTPServer`.
How can I resolve this error?
**UPDATE**
Here is my code.
```
paths: {
'jquery' : 'libs/jquery/dist/jquery',
'underscore' : 'libs/underscore/underscore',
'backbone' : 'libs/backbone/backbone',
'localStorage' : 'libs/backbone.localStorage/backbone.localStorage',
'text' : 'plugins/text'
}
```
Here is collection where localStorage is used.
```
var Items = Backbone.Collection.extend({
model: SomeModel,
localStorage: new Backbone.LocalStorage('items'),
});
```
**UPDATE 2**
I am using firefox 36.
**UPDATE 3**
It seems like it is a CORS issue, but my Firefox version is 36, which should be fine.
**UPDATE 4**
I am also getting this error in Firefox Nightly version 44. I also updated my Firefox to version 44. Still the same error.
|
2016/01/27
|
[
"https://Stackoverflow.com/questions/35042340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1136062/"
] |
Make sure Firefox has cookies enabled.
The setting can be found under Menu/Options/Privacy/History
In the dropdown, select 'Remember History', or if you prefer 'Use custom settings for history', make sure the option 'Accept cookies from sites' is selected.
Hope it helps.
|
Make sure your domains are the same; verify the [Same Origin Policy](http://en.wikipedia.org/wiki/Same_origin_policy), which requires the same domain, subdomain, protocol (http vs https) and port.
[What is Same Origin Policy?](http://en.wikipedia.org/wiki/Same_origin_policy)
[How does pushState protect against potential content forgeries?](https://stackoverflow.com/questions/6193983/how-does-pushstate-protect-against-potential-content-forgeries)
| 6,866
|
54,204,181
|
I am trying to set the return value of a `get` request in python in order to do a unit test, which tests if the `post` request is called with the correct arguments. Assume I have the following code to test
```
# main.py
import requests
from django.contrib.auth.models import User
def function_with_get():
client = requests.session()
some_data = str(client.get('https://cool_site.com').content)
return some_data
def function_to_test(data):
for user in User.objects.all():
if user.username in data:
post_data = dict(data=user.username)
else:
post_data = dict(data='Not found')
client.post('https://not_cool_site.com', data=post_data)
#test.py
from unittest import mock
from unittest import TestCase
from main import function_with_get, function_to_test
class Testing(TestCase):
@mock.patch('main.requests.session')
def test_which_fails_because_of_get(self, mock_sess):
mock_sess.get[0].return_value.content = 'User1'
data = function_with_get()
function_to_test(data)
        self.assertIn('Not found', mock_sess.return_value.post.call_args_list[1])
```
This, sadly, does not work. I have also tried to set it without `content`; however, I get an error **AttributeError: 'str' object has no attribute 'content'**.
What would be the correct way to set the `return_value` of the `get` request, so that I can test the arguments of the `post` request?
|
2019/01/15
|
[
"https://Stackoverflow.com/questions/54204181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7226268/"
] |
I think you almost have it, except you're missing the return value for `session()`, because calling `session()` is what creates the `client` instance. I think you can drop the `[0]` too.
Try:
```
mock_sess.return_value.get.return_value.content = 'User1'
```
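Putting it together, a sketch of how the test method (inside the TestCase) might look with that change; the assertion is simplified for illustration:
```
@mock.patch('main.requests.session')
def test_get(self, mock_sess):
    # session() returns the client, so chain through return_value
    mock_sess.return_value.get.return_value.content = 'User1'
    data = function_with_get()
    self.assertEqual(data, 'User1')
```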
|
Try with `.text`, because that returns the body as a string.
```
s = requests.Session()
s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('https://httpbin.org/cookies')
print(r.text)
```
<http://docs.python-requests.org/en/master/user/advanced/>
| 6,872
|
37,932,363
|
So I need to make this plot in Python. I wish to remove my legend's border. However, when I tried the different solutions other posters suggested, they did not work with my code. Please help.
**This doesn't work:**
```
plt.legend({'z$\sim$0.35', 'z$\sim$0.1','z$\sim$1.55'})
plt.legend(frameon=False)
```
---
```
plt.legend({'z$\sim$0.35', 'z$\sim$0.1','z$\sim$1.55'})
plt.legend.get_frame().set_linewidth(0.0)
```
---
```
plt.legend({'z$\sim$0.35', 'z$\sim$0.1','z$\sim$1.55'}, 'Box', 'off')
```
Additionally, when I plotted, I imported two different files and graphed them with a line and with circles respectively. How could I put a line or a circle within the legend key?
The plot:
[](https://i.stack.imgur.com/Axl8g.png)
|
2016/06/20
|
[
"https://Stackoverflow.com/questions/37932363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6491230/"
] |
It's very strange, because this command should work fine:
```
plt.legend(frameon=False)
```
You can also try this command, to compare:
```
plt.legend(frameon=None)
```
You can also read the documentation on this page about [plt.legend](http://matplotlib.org/api/legend_api.html)
I scripted an example for you:
```
import numpy as np
import matplotlib.pyplot as plt
x = np.array([0,4,8,13])
y = np.array([0,1,2,3])
fig1, ((ax1, ax2)) = plt.subplots(1, 2)
ax1.plot(x,y, label=u'test')
ax1.legend(loc='upper left', frameon=False)
ax2.plot(x,y, label=u'test2')
ax2.legend(loc='upper left', frameon=None)
plt.show()
```
[](https://i.stack.imgur.com/cxDcK.png)
|
Try this if you want to draw only one plot (without subplot)
```
plt.legend({'z$\sim$0.35', 'z$\sim$0.1','z$\sim$1.55'}, frameon=False)
```
One plt.legend call is enough; the second one overwrites the first.
| 6,873
|
69,364,940
|
The text is like "1-2years. 3years. 10years."
I want to get the result `[(1,2),(3),(10)]`.
I use Python.
I first tried `r"([0-9]?)[-]?([0-9])years"`. It works well except for the case of 10. I also tried `r"([0-9]?)[-]?([0-9]|10)years"`, but the result is still `[(1,2),(3),(1,0)]`.
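For reference, a sketch with `\d+` so multi-digit numbers match whole; the empty second group marks the single-number cases:
```
import re

text = "1-2years. 3years. 10years."
print(re.findall(r"(\d+)(?:-(\d+))?years", text))
# [('1', '2'), ('3', ''), ('10', '')]
```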
|
2021/09/28
|
[
"https://Stackoverflow.com/questions/69364940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10490005/"
] |
You need to listen for the click event on the newly added element, hence a click listener is added on the **newSpan** element after inserting it into the DOM.
You are listening for the event on the existing remove buttons only, but when you add a new element to the DOM, the new element doesn't have the click listener. Hence, we have to attach the listener explicitly.
I hope it answers your question. :)
```js
const toDoInput = document.querySelector("input");
const addButton = document.querySelector("button");
const listParent = document.querySelector("ul");
const removeLists = document.querySelectorAll(".remove-li");
const listItemMake = () => {
if (toDoInput.value !== "") {
const newDiv = document.createElement("div");
newDiv.classList.add("list-item");
const newListItem = document.createElement("li");
newListItem.append(toDoInput.value);
newDiv.append(newListItem);
toDoInput.value = "";
const newSpan = document.createElement("span");
newSpan.classList.add("remove-li");
newSpan.innerHTML = '(Remove Button)';
newDiv.append(newSpan);
listParent.append(newDiv);
newSpan.addEventListener("click", () => {
newSpan.parentElement.remove();
});
}
};
addButton.addEventListener("click", () => {
listItemMake();
});
toDoInput.addEventListener("keydown", (event) => {
if (event.code === "Enter") {
listItemMake();
}
});
for (const removeList of removeLists) {
removeList.addEventListener("click", () => {
removeList.parentElement.remove();
})
}
```
```html
<h1>ToDo Application</h1>
<div class="container">
<div class="form-container">
<input type="text" placeholder="Add New Tasks">
<button>Add</button>
</div>
<div class="list-container">
<ul>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
</ul>
</div>
</div>
```
|
Please change the code slightly:
call the remove-event function from inside the listItemMake() function.
```js
const toDoInput = document.querySelector("input");
const addButton = document.querySelector("button");
const listParent = document.querySelector("ul");
const listItemMake = () => {
if (toDoInput.value !== "") {
const newDiv = document.createElement("div");
newDiv.classList.add("list-item");
const newListItem = document.createElement("li");
newListItem.append(toDoInput.value);
newDiv.append(newListItem);
toDoInput.value = "";
const newSpan = document.createElement("span");
newSpan.classList.add("remove-li");
newSpan.innerHTML = '(Remove Button)';
newDiv.append(newSpan);
listParent.append(newDiv);
}
addRemoveEvent();
};
addButton.addEventListener("click", () => {
listItemMake();
});
toDoInput.addEventListener("keydown", (event) => {
if (event.code === "Enter") {
listItemMake();
}
});
function addRemoveEvent() {
const removeLists = document.querySelectorAll(".remove-li");
for (const removeList of removeLists) {
removeList.addEventListener("click", () => {
removeList.parentElement.remove();
})
}
}
addRemoveEvent();
```
```html
<h1>ToDo Application</h1>
<div class="container">
<div class="form-container">
<input type="text" placeholder="Add New Tasks">
<button>Add</button>
</div>
<div class="list-container">
<ul>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
<div class="list-item">
<li>Sample List Item</li>
<span class="remove-li">(Remove Button)</span>
</div>
</ul>
</div>
</div>
```
| 6,875
|
8,948,034
|
I want to retrieve and work with basic Vimeo data in python 3.2, given a video's URL. I'm a newcomer to JSON (and python), but it looked like the right fit for doing this.
1. Request Vimeo video data (via an API-formatted .json URL)
2. Convert returned JSON data into python dict
3. Display dict keys & data ("id", "title", "description", etc.)
Another SO page [Get json data via url and use in python](https://stackoverflow.com/questions/1640715/get-json-data-via-url-and-use-in-python-simplejson) did something similar in python 2.x, but syntax changes (like integrating urllib2) led me to try this.
```
>>> import urllib
>>> import json
>>> req = urllib.request.urlopen("http://vimeo.com/api/v2/video/31161781.json")
>>> opener = urllib.request.build_opener()
>>> f = opener.open(req)
Traceback (most recent call last):
File "<pyshell#28>", line 1, in <module>
f = opener.open(req)
File "C:\Python32\lib\urllib\request.py", line 358, in open
protocol = req.type
AttributeError: 'HTTPResponse' object has no attribute 'type'
```
This code will integrate into an existing project, so I'm tied to using python. I know enough about HTTP queries to guess the data's within that response object, but not enough about python to understand why the open failed and how to reference it correctly. What should I try instead of `opener.open(req)`?
|
2012/01/20
|
[
"https://Stackoverflow.com/questions/8948034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/73807/"
] |
This works for me:
```
import urllib.request, json
response = urllib.request.urlopen('http://vimeo.com/api/v2/video/31161781.json')
content = response.read()
data = json.loads(content.decode('utf8'))
```
Or with Requests:
```
import requests
data = requests.get('http://vimeo.com/api/v2/video/31161781.json').json()
```
|
Check out: <http://www.voidspace.org.uk/python/articles/urllib2.shtml>
```
>>> import urllib2
>>> import json
>>> req = urllib2.Request("http://vimeo.com/api/v2/video/31161781.json")
>>> response = urllib2.urlopen(req)
>>> content_string = response.read()
>>> content_string
'[{"id":31161781,"title":"Kevin Fanning talks about hiring for Boston startups","description":"CogoLabs.com talent developer and author Kevin Fanning talks about hiring for small teams in Boston, how job seekers can make themselves more attractive, and why recruiters should go the extra mile to attract talent.","url":"http:\\/\\/vimeo.com\\/31161781","upload_date":"2011-10-26 15:37:35","thumbnail_small":"http:\\/\\/b.vimeocdn.com\\/ts\\/209\\/777\\/209777866_100.jpg","thumbnail_medium":"http:\\/\\/b.vimeocdn.com\\/ts\\/209\\/777\\/209777866_200.jpg","thumbnail_large":"http:\\/\\/b.vimeocdn.com\\/ts\\/209\\/777\\/209777866_640.jpg","user_name":"Venture Cafe","user_url":"http:\\/\\/vimeo.com\\/venturecafe","user_portrait_small":"http:\\/\\/b.vimeocdn.com\\/ps\\/605\\/605070_30.jpg","user_portrait_medium":"http:\\/\\/b.vimeocdn.com\\/ps\\/605\\/605070_75.jpg","user_portrait_large":"http:\\/\\/b.vimeocdn.com\\/ps\\/605\\/605070_100.jpg","user_portrait_huge":"http:\\/\\/b.vimeocdn.com\\/ps\\/605\\/605070_300.jpg","stats_number_of_likes":0,"stats_number_of_plays":43,"stats_number_of_comments":0,"duration":531,"width":640,"height":360,"tags":"startup stories, entrepreneurship, interview, Venture Cafe, jobs","embed_privacy":"anywhere"}]'
>>> loaded_content = json.loads(content_string)
>>> type(content_string)
<type 'str'>
>>> type(loaded_content)
<type 'list'>
```
| 6,883
|
48,117,638
|
I would like to know how to have setup.py install C modules locally. Locally as in not in `/usr/local/python..` and not in `~/local/python...`, but in `[where_all_my_code_is]/bin`, so that I can import it from scripts within the `[where_all_my_code_is]` folder.
I have some c code. src/foo/foo.c
```
#include <Python.h>
static PyObject * foo(PyObject* self, PyObject* args) {
    PyObject* o;
    if (!PyArg_ParseTuple(args, "O", &o))      /* unpack the single argument */
        return NULL;
    PyObject* five = PyInt_FromLong(5);
    PyObject* result = PyNumber_Add(o, five);  /* takes PyObject*, not PyObject** */
    Py_DECREF(five);
    return result;
}
static PyMethodDef funcs[] = {
{"foo", (PyCFunction)foo, METH_VARARGS, "foo, foo and more foo"},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC initfoo(void){  /* must be init<modulename> for "import foo" to work */
Py_InitModule3("foo", funcs, "do the foo");
}
```
setup.py
```
from distutils.core import setup, Extension
setup(name='foo', version='1.0',
ext_modules=[Extension('foo', ['src/foo/foo.c'])])
```
Now, instead of installing this into the /local/python folders or wherever, I want the code to be in a bin folder right there next to the module.
e.g.
```
~/My_python_project
./src
./foo
./foo.c
./some_code_that_imports_foo.py
./bin
./foo
./my_importable_foo.so
```
some\_code\_that\_imports\_foo.py:
```
import foo
print(foo.foo(10))
# prints 15
```
What is the appropriate/nicest way to accomplish this thing?
|
2018/01/05
|
[
"https://Stackoverflow.com/questions/48117638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2886575/"
] |
Here is one option:
setup using
```
$ python3 setup.py install --root . --install-lib lib
```
add local `lib` path to the python path
```
$ export PYTHONPATH=$PYTHONPATH:./lib
```
Now python scripts in `.` can import the `c` modules we just compiled.
Something fancier would be needed for the exact layout I suggested in the question, but the general procedure should apply.
Limitations? Possible pitfalls? Brittleness?
|
You would have to create a custom package for the type of OS you are using: deb, rpm or ebuild. These system-level tools install files into /usr/bin and /usr/lib, unlike your local builds, which go to /usr/local, or user builds, which go to ~/local.
| 6,888
|
37,955,984
|
I don't have a clue what's causing this error. It appears to be a bug without a known fix. Could anyone give me a hint as to how I might get around this? It's frustrating me to no end. Thanks.
```
Operations to perform:
Apply all migrations: admin, contenttypes, optilab, auth, sessions
Running migrations:
Rendering model states... DONE
Applying optilab.0006_auto_20160621_1640...Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Python27\lib\site-packages\django\core\management\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Python27\lib\site-packages\django\db\migrations\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Python27\lib\site-packages\django\db\migrations\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Python27\lib\site-packages\django\db\migrations\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Python27\lib\site-packages\django\db\migrations\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Python27\lib\site-packages\django\db\migrations\operations\fields.py", line 121, in database_forwards
schema_editor.remove_field(from_model, from_model._meta.get_field(self.name))
File "C:\Python27\lib\site-packages\django\db\backends\sqlite3\schema.py", line 247, in remove_field
self._remake_table(model, delete_fields=[field])
File "C:\Python27\lib\site-packages\django\db\backends\sqlite3\schema.py", line 197, in _remake_table
self.quote_name(model._meta.db_table),
File "C:\Python27\lib\site-packages\django\db\backends\base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Python27\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Python27\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Python27\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Python27\lib\site-packages\django\db\backends\sqlite3\base.py", line 323, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: near ")": syntax error
```
Here's the contents of 0006\_auto\_20160621\_1640.py
```
# -*- coding: utf-8 -*-
# Generated by Django 1.9.6 on 2016-06-21 22:40
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('optilab', '0005_test'),
]
operations = [
migrations.RemoveField(
model_name='lasersubstrate',
name='substrate_ptr',
),
migrations.RemoveField(
model_name='waveguidesubstrate',
name='substrate_ptr',
),
migrations.DeleteModel(
name='LaserSubstrate',
),
migrations.DeleteModel(
name='WaveguideSubstrate',
),
]
```
Here's the SQL produced from running 'python manage.py sqlmigrate optilab 0006'
```
BEGIN;
--
-- Remove field substrate_ptr from lasersubstrate
--
ALTER TABLE "optilab_lasersubstrate" RENAME TO "optilab_lasersubstrate__old";
CREATE TABLE "optilab_lasersubstrate" ("substrate_ptr_id" integer NOT NULL PRIMARY KEY REFERENCES "optilab_substrate" ("id"));
INSERT INTO "optilab_lasersubstrate" () SELECT FROM "optilab_lasersubstrate__old";
DROP TABLE "optilab_lasersubstrate__old";
--
-- Remove field substrate_ptr from waveguidesubstrate
--
ALTER TABLE "optilab_waveguidesubstrate" RENAME TO "optilab_waveguidesubstrate__old";
CREATE TABLE "optilab_waveguidesubstrate" ("substrate_ptr_id" integer NOT NULL PRIMARY KEY REFERENCES "optilab_substrate" ("id"));
INSERT INTO "optilab_waveguidesubstrate" () SELECT FROM "optilab_waveguidesubstrate__old";
DROP TABLE "optilab_waveguidesubstrate__old";
--
-- Delete model LaserSubstrate
--
DROP TABLE "optilab_lasersubstrate";
--
-- Delete model WaveguideSubstrate
--
DROP TABLE "optilab_waveguidesubstrate";
COMMIT;
```
|
2016/06/21
|
[
"https://Stackoverflow.com/questions/37955984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2075859/"
] |
This appears to be the line that's causing the error:
```
INSERT INTO "optilab_lasersubstrate" () SELECT FROM "optilab_lasersubstrate__old";
```
You are usually expected to have a list of columns in those parentheses, e.g. `INSERT INTO "optilab_lasersubstrate" (col1,col2,etc)`; however, the migration has produced a blank set! Similarly, the `SELECT FROM` portion should read `SELECT col1,col2 FROM`. By some strange set of events you appear to have managed to create a table with no columns!!
I see from your migration file that you are dropping this table anyway, so there isn't any reason to struggle with the `RemoveField` portion; it's the code associated with `RemoveField` that's causing the error. Change your migration as follows:
```
class Migration(migrations.Migration):
dependencies = [
('optilab', '0005_test'),
]
operations = [
migrations.DeleteModel(
name='LaserSubstrate',
),
migrations.DeleteModel(
name='WaveguideSubstrate',
),
]
```
|
Edit base.py in the lines that breaks and update it to:
```
def execute(self, query, params=None):
    # skip any statement that contains an empty column list '()'
    if params is None:
        if '()' not in str(query):
            return Database.Cursor.execute(self, query)
    query = self.convert_query(query)
    if '()' not in str(query):
        return Database.Cursor.execute(self, query, params)
```
| 6,889
|
33,281,217
|
I'm crawling through a simple, but long HTML chunk, which is similar to this:
```html
<table>
<tbody>
<tr>
<td> Some text </td>
<td> Some text </td>
</tr>
<tr>
<td> Some text
<br/>
Some more text
</td>
</tr>
</tbody>
</table>
```
I'm collecting the data with following little python code (using lxml):
```
for element in root.iter():
if element == 'td':
print element.text
```
Some of the texts are divided into two rows, but mostly they fit on a single row. The problem is with the divided rows.
The root element is the 'table' tag. That little snippet can print all the other texts, but not what comes after the 'br' tags. If I don't exclude non-td tags, the code tries to print whatever text might be inside the 'br' tags, but of course there's nothing there, so it just prints an empty line.
After such a 'br', the iteration moves on to the next tag, but ignores the data that's still inside the previous 'td' tag.
How can I get also the data after those tags?
Edit: It seems that some of the 'br' tags are self-closing, but some are left open:
```html
<td>
Some text
<br>
Some more text
</td>
```
The element.tail method, suggested in the first answer, does not seem to be able to get the data after that open tag.
Edit2: Actually it works; the mistake was mine. I forgot to mention that the "print element.text" part was wrapped in a try-except, which caught an AttributeError for the br tag (there is no text inside the br tags). The except clause was set to just pass and print nothing. Printing the tail happened inside the same try-except, so it was never reached because of the earlier exception.
|
2015/10/22
|
[
"https://Stackoverflow.com/questions/33281217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/256965/"
] |
Because `<br/>` is a self-closing tag, it does not have any `text` content. Instead, you need to access its `tail` content. The `tail` content is the text after the element's closing tag, but before the next opening tag. To access this content in your for loop you will need to use the following:
```
for element in root.iter():
element_text = element.text
element_tail = element.tail
```
Even if the `br` tag is an opening tag, this method will still work:
```
from lxml import etree
content = '''
<table>
<tbody>
<tr>
<td> Some text </td>
<td> Some text </td>
</tr>
<tr>
<td> Some text
<br>
Some more text
</td>
</tr>
</tbody>
</table>
'''
root = etree.HTML(content)
for element in root.iter():
print(element.tail)
```
**Output**
```
Some more text
```
|
To me below is working to extract all the text after `br`-
```
normalize-space(//table//br/following::text()[1])
```
**Working example is** [**at**](http://www.xpathtester.com/xpath/283dca20a024adbb5ece93de6c914531).
| 6,890
|
30,975,911
|
I have a Python list of lists as follows. I want to flatten it to a single list.
```
l = [u'[190215]']
```
I am trying:
```
l = [item for value in l for item in value]
```
It turns the list to `[u'[', u'1', u'9', u'0', u'2', u'1', u'5', u']']`
How do I remove the `u` from the list?
|
2015/06/22
|
[
"https://Stackoverflow.com/questions/30975911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/567797/"
] |
The `u` means a [`unicode`](https://docs.python.org/2/howto/unicode.html) string which should be perfectly fine to use.
But if you want to convert `unicode` to `str` (which just represents plain bytes in Python 2) then you may `encode` it using a character encoding such as `utf-8`.
```
>>> items = [u'[190215]']
>>> [item.encode('utf-8') for item in items]
['[190215]']
```
|
You can convert your unicode to a normal string with `str`:
```
>>> list(str(l[0]))
['[', '1', '9', '0', '2', '1', '5', ']']
```
| 6,891
|
43,201,211
|
I hope you can help.
I am currently writing code to extract historical data for various games on SteamSpy.com. After backing the project on Patreon, you get to view the long history of various metrics for each game. I would like to make comparisons between multiple games, and as such I want to extract the data.
I know from earlier that BeautifulSoup can be very helpful for this task; unfortunately, I cannot use it the way I have done before. I will describe it in detail below, but the main problem is that all the relevant data is included inside ONE single tag.
* Example: Dota2
* URL: <http://steamspy.com/app/570>
* Source code: [*view-source:http://steamspy.com/app/570*](http://view-source:http://steamspy.com/app/570)
**Source code**
Here is the part of the source code, that I am interested in (it is obviously far longer when logged in, and you have access to the historical data).
```
<div class="tab-content no-padding bg-transparent">
<div class="tab-pane active relative" id="tab-sales">
<h2>Owners data:</h2>
<div id="nvd3-sales" class="line-chart" data-area-color="master" data-points="false" data-stroke-width="4">
<svg></svg>
</div>
<script>
var data2sales= [
{
"key": "Owners",
"bar": true,
"values": [
[1489363200000, 97073321, ""],
[1489449600000, 97138657, ""],
[1489536000000, 97126694, ""],
[1489622400000, 98535521, ""],
[1489708800000, 98482905, ""],
[1489795200000, 98496091, ""],
[1489881600000, 98627987, "#2B6A94"],
[1489968000000, 98798351, ""],
[1490054400000, 98936652, ""],
[1490140800000, 99025494, ""],
[1490227200000, 99208644, ""],
[1490313600000, 99163634, ""],
[1490400000000, 99097059, ""],
[1490486400000, 98986347, "#2B6A94"],
[1490572800000, 99005343, ""],
[1490659200000, 99023673, ""],
[1490745600000, 99084059, ""],
[1490832000000, 98988641, ""],
[1490918400000, 99120523, ""],
[1491004800000, 99058884, ""],
[1491091200000, 99206546, "#2B6A94"],
[1491177600000, 99155567, ""] ]},{
"key" : "Price",
"values" : [
[1489363200000,, "#ffffff" ],
[1489449600000,, "#ffffff" ],
[1489536000000,, "#ffffff" ],
[1489622400000,, "#ffffff" ],
[1489708800000,, "#ffffff" ],
[1489795200000,, "#ffffff" ],
[1489881600000,, "#ffffff" ],
[1489968000000,, "#ffffff" ],
[1490054400000,, "#ffffff" ],
[1490140800000,, "#ffffff" ],
[1490227200000,, "#ffffff" ],
[1490313600000,, "#ffffff" ],
[1490400000000,, "#ffffff" ],
[1490486400000,, "#ffffff" ],
[1490572800000,, "#ffffff" ],
[1490659200000,, "#ffffff" ],
[1490745600000,, "#ffffff" ],
[1490832000000,, "#ffffff" ],
[1490918400000,, "#ffffff" ],
[1491004800000,, "#ffffff" ],
[1491091200000,, "#ffffff" ],
[1491177600000,, "#ffffff" ]] } ];
</script>
</div>
```
My ultimate goal is to extract the three values for each row in the code below, inside the `<script>` tag, and only the values that begin in the row after `"values": [`.
Below is the part of my code that collects the data. I have tried multiple solutions, but everything I have found suggests iterating over the tags in the "soup" and collecting the data inside them; here, however, all my data is placed inside one SINGLE tag. I hope you can help out.
Please also tell me, if I can provide more information, that could be useful.
Cheers!
**Code**
My code (I run it inside a session, since I have to log in before collecting the data; I have removed the log-in part):
```
### START ###
import requests
from requests import session
import bs4
from bs4 import BeautifulSoup
baseURL = "http://steamspy.com/app/"
appNum = "387990"
fullURL = baseURL + appNum
# log-in information to access the full historical data.
payload = {
'username': 'XXXX',
'password': 'XXXX',
'doLogin': 'doLogin'
}
with requests.Session() as s:
#log in on steamSpy
# q = s.post('https://steamspy.com/login.php', data=payload)
# print(q.status_code)
# print(q.history)
# print(q.url)
#navigate to the desired webpage
r = s.get(fullURL)
soup = bs4.BeautifulSoup(r.text, "lxml")
ownr = soup.find("div", {"id": "tab-sales"}).find("script").get_text()
print(ownr)
```
**Output**
And here is my output, which obviously is similar to what is inside the tag in the source code:
```
C:\Python35\python.exe "C:/Users/nohgjk/Dropbox/Gaming/Project steamSpy/Python/steamSpy - Test/steamSpySoup2.py"
var data2sales= [
{
"key": "Owners",
"bar": true,
"values": [
[1489363200000, 549045, ""],
[1489449600000, 550812, ""],
[1489536000000, 550773, ""],
[1489622400000, 544180, ""],
[1489708800000, 532284, ""],
[1489795200000, 546592, ""],
[1489881600000, 545925, "#2B6A94"],
[1489968000000, 550721, ""],
[1490054400000, 539253, ""],
[1490140800000, 536258, ""],
[1490227200000, 544210, ""],
[1490313600000, 560977, ""],
[1490400000000, 562907, ""],
[1490486400000, 554817, "#2B6A94"],
[1490572800000, 552973, ""],
[1490659200000, 551875, ""],
[1490745600000, 554853, ""],
[1490832000000, 553309, ""],
[1490918400000, 551987, ""],
[1491004800000, 551671, ""],
[1491091200000, 541915, "#2B6A94"],
[1491177600000, 541280, ""] ]},{
"key" : "Price",
"values" : [
[1489363200000, 19.99, ""],
[1489449600000, 19.99, ""],
[1489536000000, 19.99, ""],
[1489622400000, 19.99, ""],
[1489708800000, 19.99, ""],
[1489795200000, 19.99, ""],
[1489881600000, 19.99, "#2B6A94"],
[1489968000000, 19.99, ""],
[1490054400000, 19.99, ""],
[1490140800000, 19.99, ""],
[1490227200000, 19.99, ""],
[1490313600000, 19.99, ""],
[1490400000000, 19.99, ""],
[1490486400000, 19.99, "#2B6A94"],
[1490572800000, 19.99, ""],
[1490659200000, 19.99, ""],
[1490745600000, 19.99, ""],
[1490832000000, 19.99, ""],
[1490918400000, 19.99, ""],
[1491004800000, 19.99, ""],
[1491091200000, 19.99, "#2B6A94"],
[1491177600000, 19.99, ""]] } ];
Process finished with exit code 0
```
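For what it's worth, a minimal sketch of pulling the `[timestamp, value, color]` triples out of that string with a regex, applied to the `ownr` string produced by the code above; the pattern is an assumption written to tolerate the empty value slots:
```
import re

# each row looks like [1489363200000, 97073321, ""] or [1489363200000,, "#ffffff"]
triples = re.findall(r'\[\s*(\d+)\s*,\s*([\d.]*)\s*,\s*"([^"]*)"\s*\]', ownr)
for ts, value, color in triples:
    print(ts, value or None, color or None)
```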
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43201211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3494191/"
] |
You could try
```
SELECT
REPLACE(CONVERT(VARCHAR(11), Date, 6), ' ', '/') + ' 12:00:00 AM' AS Date
FROM
TableFiles
```
|
just an idea: `select convert(date, yourvalue) + 0.5` .... should use only the date part and add half a day?
| 6,900
|
65,160,481
|
I'm trying to install neovim on Windows and import my previous init.vim file.
I've previously defined my snippets in ultisnips.
I'm using Windows and have tested this in another version of Windows where it works; however, at the moment when I run
```
:checkhealth
```
I get the following error:
```
## Python 3 provider (optional)
- WARNING: No Python executable found that can `import neovim`. Using the first available executable for diagnostics.
- ERROR: Python provider error:
- ADVICE:
  - provider/pythonx: Could not load Python 3:
      python3 not found in search path or not executable.
      python3.7 not found in search path or not executable.
      python3.6 not found in search path or not executable.
      python3.5 not found in search path or not executable.
      python3.4 not found in search path or not executable.
      python3.3 not found in search path or not executable.
      python not found in search path or not executable.
- INFO: Executable: Not found
```
But I do have Python 3.7 installed. I can run python in cmd/powershell, and I can import the neovim Python module without any issues. Does anyone know how to get neovim to pick up Python?
|
2020/12/05
|
[
"https://Stackoverflow.com/questions/65160481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2427492/"
] |
**Make sure you have Python3 installed, the following answer also works with Python2.**
Check for help:
```
:help provider-python
```
Go to the `PYTHON QUICKSTART` and you will see one of this two options:
* For Python 2 plugins:
* For Python 3 plugins:
*Because your problem is with Python 3, follow the steps mentioned in* `For Python 3 plugins:`
```
1. Make sure Python 3.4+ is available in your $PATH.
2. Install the module (try "python" if "python3" is missing): >
python3 -m pip install --user --upgrade pynvim
```
This should fix the problem. If it persists, then make sure the right path is being used; go to `~/.vimrc` and add the following:
For python3:
```
let g:python3_host_prog = '/path/to/python3'
```
For Python2:
```
let g:python_host_prog = '/path/to/python'
```
Note: I found this in `PYTHON QUICKSTART` if you continue reading you will find it.
To know the location of python use:
-----------------------------------
* For Linux and Mac
```sh
which python3
```
* For [windows](https://datatofish.com/locate-python-windows/)
Use `:checkhealth` again and it should output the following:
```
## Python 3 provider (optional)
- INFO: Using: g:python3_host_prog = "/usr/local/bin/python3"
- INFO: Executable: /usr/local/bin/python3
- INFO: Python version: 3.9.1
- INFO: pynvim version: 0.4.2
- OK: Latest pynvim is installed.
```
|
It seems that neovim is somehow confusing Python 2 and Python 3. For me it worked to simply rename the Python 2 executable in PATH, that is python.exe -> python2.exe, and now it seems to work fine. However, it is maybe not the perfect solution.
| 6,901
|
69,700,550
|
Shown below is the code from my launch.json file for my Flask application. I have various environment variables defined in the `"env": {}` portion. However, when I run my Flask application from the run.py script, it doesn't seem to pick up the variables. Although `"FLASK_DEBUG"` is set to `"1"`, the application still runs with `**Debug mode: off**`.
Does anyone know why?
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Flask",
"type": "python",
"request": "launch",
"module": "flask",
"env": {
"FLASK_APP": "run.py",
"FLASK_ENV": "development",
"FLASK_DEBUG": "1",
"EMAIL_USER": "farmgo.project.21@gmail.com",
"EMAIL_PASS": "password",
},
"args": [
"run",
//"--no-debugger"
],
"jinja": true,
}
],
}
```
If I set:
```
if __name__ == '__main__':
app.run(debug=True)
```
then the app does run in Debug mode. However, I still cannot get the other environment variables:
```
>>> import os
>>> print(os.environ.get('FLASK_DEBUG'))
None
```
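One thing worth noting: the `"env"` block in launch.json only applies when the app is started through the VS Code debugger, not when run.py is executed directly. A minimal sketch of an alternative that works either way, using a `.env` file with python-dotenv (the file name and variable are just the ones from the question):
```
# requires: pip install python-dotenv; put FLASK_DEBUG=1 etc. in a .env file
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the working directory into os.environ
print(os.environ.get('FLASK_DEBUG'))
```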
|
2021/10/24
|
[
"https://Stackoverflow.com/questions/69700550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16425143/"
] |
This is very inefficient\*, but it meets your requirements:
```ml
let transpose (matrix : List<List<_>>) =
[
if matrix.Length > 0 then
assert(matrix |> List.distinctBy List.length |> List.length = 1)
for i = 0 to matrix.[0].Length - 1 do
yield [
for list in matrix do
yield list.[i]
]
]
```
Example:
```ml
let matrix = [[1; 2; 3]; [4; 5; 6]; [7; 8; 9]; [10; 11; 12]]
transpose matrix |> printfn "%A" // [[1; 4; 7; 10]; [2; 5; 8; 11]; [3; 6; 9; 12]]
```
\* O(n3) where n is the length of a square matrix.
|
For better performance, you can convert the list of list to an Array2D, and then create your transposed list from the Array2D.
```
let transpose' xss =
let xss = array2D xss
let dimx = Array2D.length1 xss - 1
let dimy = Array2D.length2 xss - 1
[ for y=0 to dimy do [
for x=0 to dimx do
yield xss.[x,y]
]
]
```
For non-learning purpose, you should use the built-int `List.transpose`
```
List.transpose lst
```
| 6,906
|
64,768,621
|
Hello, I am new to Python and practicing web scraping with some demo sites.
I am trying to scrape this website <http://books.toscrape.com/> and want to extract
1. href
2. name/title
3. start rating/star-rating
4. price/price\_color
5. in-stock availbility/instock availability
I have written some basic code which gets to each book element,
but after that I am clueless as to how I can extract that information.
```
import requests
from csv import reader,writer
from bs4 import BeautifulSoup
base_url= "http://books.toscrape.com/"
r = requests.get(base_url)
htmlContent = r.content
soup = BeautifulSoup(htmlContent,'html.parser')
for article in soup.find_all('article'):
```
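A minimal sketch of how the loop body might look; the class names (`product_pod`, `star-rating`, `price_color`, `instock`) are assumptions about the page's markup and should be checked against the actual HTML:
```
for article in soup.find_all('article', class_='product_pod'):
    link = article.h3.a
    href = link['href']                    # 1. href
    title = link['title']                  # 2. name/title
    rating = article.find('p', class_='star-rating')['class'][1]  # 3. e.g. 'Three'
    price = article.find('p', class_='price_color').text          # 4. price
    stock = article.find('p', class_='instock').text.strip()      # 5. availability
    print(href, title, rating, price, stock)
```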
|
2020/11/10
|
[
"https://Stackoverflow.com/questions/64768621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13597511/"
] |
It's for folders that are listed in your .proj file but don't exist on the hard drive.
My guess is that you are using some kind of version control (git or svn or similar) and someone made a change to the project structure (added a folder) but forgot to add that folder to the version control system.
|
For me the red cross appeared on all projects because, while cloning a component from git/stash, most of the file and folder paths were too long and not acceptable.
I tried cloning at a different, shorter path and the red crosses disappeared.
| 6,907
|
17,055,642
|
I was wondering if there is already a library available which does something similar to scrapely:
<https://github.com/scrapy/scrapely>
What it does is that you give an example URL and then you give the data you want to extract from that HTML...
```
url1 = 'http://pypi.python.org/pypi/w3lib/1.1'
data = {'name': 'w3lib 1.1', 'author': 'Scrapy project', 'description': 'Library of web-related functions'}
```
and then you initiate this rule by simply:
```
s.train(url1, data)
```
Now I can extract the same data from different URLs...
But is there any library which does the same but for raw text...
For example:
```
raw_text = "|foo|bar,name = how cool"
```
And then I want to extract "bar" from this.
I know I can write simple regex rules and be done with this, but is there any library available which solves this as an instance-based learning problem?
I.e., rather than specifying a regex rule and then running data through it,
I specify an instance and what I would like to extract, and it automatically builds the rule?
I hope that I am making some sense.
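A minimal sketch of the instance-based idea for raw text: derive a regex from one (sample, wanted-value) pair by escaping the literal context around the value. This assumes the surrounding text is constant across inputs, which is exactly the simplification a real library would have to relax:
```
import re

def train(sample, wanted):
    # turn the text around the wanted value into a literal pattern
    i = sample.index(wanted)
    prefix, suffix = sample[:i], sample[i + len(wanted):]
    return re.compile(re.escape(prefix) + r'(.*?)' + re.escape(suffix))

rule = train("|foo|bar,name = how cool", "bar")
print(rule.match("|foo|quux,name = how cool").group(1))  # quux
```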
|
2013/06/11
|
[
"https://Stackoverflow.com/questions/17055642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] |
The likeliest problem is that you're using backslashes in your file name, so they get interpreted as control characters. The IO error is because the filename is mangled.
try
```
stockPath = "C:\\Users\\Dryan\\Desktop\\table.csv" # double slashes to get single slashes in the string
```
or
```
stockPath = "C:/Users/Dryan/Desktop/table.csv" # it's more python-y to always use right slashes.
```
|
As joojaa said, try to avoid using backslashes when you can. I try to always convert any incoming path to a forward slashes version, and just before outputting it I normalize it using os.path.normpath.
```
clean_path = any_path_i_have_to_deal_with.replace("\\", "/")
# do stuff with it
# (concat, XML save, assign to a node attribute...)
print os.path.normpath(clean_path) # back to the OS version
```
| 6,908
|
68,728,817
|
I need some help with respect to changing a dataframe in python.
```
NODE SIGNAL-STRENGTH LOCATION
0 A 76 1
1 A 78 1
2 A 78 1
3 A 79 1
4 A 79 2
5 A 70 2
.
.
.
95 E 111 4
96 E 123 5
97 E 154 5
98 E 113 5
99 E 234 5
```
The above dataframe has 5 nodes, each with 20 data points: 5 locations that each send 4 signal-strength values to every node. So for each node I have, per location, 4 signal-strength values plus the location itself. The location is the label, and the signal strength varies from node to node.
I wish to have only 20 rows, with 6 columns. One of the columns will be the location; the other five must be the nodes and hold each node's signal strength for that location.
The location order does not change after 20 rows; that is, the 20 locations appear in the same order for each node.
I tried groupby, but I don't know how to turn the result into rows.
Final dataframe must be as
```
LOCATION NODEA_SIGNAL NODE_B_SIGNAL NODE_C_SIGNAL NODE_D_SIGNAL NODE_E_SIGNAL
0 1 76 34 55 44 64
1 1 77 33 55 45 65
2 1 77 33 54 43 66
3 1 78 31 53 45 67
4 2 34 42 94 85 12
5 2 37 44 98 82 13
6 2 36 45 97 83 14
7 2 35 44 96 86 16
8 3 23 16 47 65 85
.
.
.
15 4 16 24 64 95 75
16 5 46 74 15 54 34
17 5 47 73 15 55 35
18 5 47 73 14 53 36
19 5 48 71 13 55 37
```
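A minimal sketch of one way to get there, relying on the stated guarantee that each node's 20 rows appear in the same location order (column names taken from the sample above):
```
import pandas as pd

df['obs'] = df.groupby('NODE').cumcount()          # 0..19 within each node
wide = df.pivot(index='obs', columns='NODE', values='SIGNAL-STRENGTH')
wide.columns = ['NODE_{}_SIGNAL'.format(c) for c in wide.columns]
wide.insert(0, 'LOCATION', df.loc[df['NODE'] == 'A', 'LOCATION'].values)
print(wide)
```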
|
2021/08/10
|
[
"https://Stackoverflow.com/questions/68728817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14548962/"
] |
You can do it with a `FULL` join of the tables:
```
SELECT COALESCE(a.name, b.name) AS name,
CASE WHEN a.name IS NULL THEN false ELSE true END AS is_in_A,
CASE WHEN b.name IS NULL THEN false ELSE true END AS is_in_B
FROM a FULL OUTER JOIN b
ON b.name = a.name
```
If your database does not support boolean values like `true` and `false`, change to `1` and `0` or string literals `'true'` and `'false'`.
See a simplified [demo](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=702094c97af48761ab23e9aecbffa2e4).
|
You can use `union all` and aggregation:
```
select name, sum(in_a), sum(in_b)
from ((select name, 1 as in_a, 0 as in_b
from a
) union all
(select name, 0, 1
from b
)
) ab
group by name;
```
| 6,909
|
63,156,890
|
I'm really new to coding. After learning the syntax for HTML and CSS, I figured I'd move on to Python, but this time actually learn how to use it and become a Python programmer instead of just learning the syntax.
So as one of my first small projects I figured I'd make a small piece of code that asks the user for the items they're buying and then adds the prices of those items together, essentially giving them the total price of their shopping list. However, I couldn't get it working (p.s. I'm really new, so my code will probably look dumb).
```
# items and their prices
items = {'banana': 3,
'apple': 2,
'milk': 12,
'bread': 15,
'orange': 3,
'cheese': 14,
'chicken': 42,
'peanuts':32 ,
'butter': 20,
'cooldrink': 18,
}
#Asking for the item
choice = input("Select your item(type 'nothing' to make it stop) \n : ")
#the list the purchased items would go into
price = []
while items != 'nothing':
input('Another item?: ')
if choice in items:
the_choice = price[choice]
else:
print("Uh oh, I don't know about that item")
```
|
2020/07/29
|
[
"https://Stackoverflow.com/questions/63156890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14016707/"
] |
I recommend reading up on Python basics and loops.
This will do it:
```
# items and their prices
items = {'banana': 3,
'apple': 2,
'milk': 12,
'bread': 15,
'orange': 3,
'cheese': 14,
'chicken': 42,
'peanuts':32 ,
'butter': 20,
'cooldrink': 18,
}
#Asking for the item
price = 0
while True:
choice = input("Select your item(type 'nothing' to make it stop) \n : ")
if choice == 'nothing':
break
if choice in items.keys():
price += items[choice]
else:
print("Uh oh, I don't know about that item")
print('your result', price )
```
|
Is this what you want?
```
# items and their prices
items = {'banana': 3,
'apple': 2,
'milk': 12,
'bread': 15,
'orange': 3,
'cheese': 14,
'chicken': 42,
'peanuts':32 ,
'butter': 20,
'cooldrink': 18,
}
price = 0
while True:
choice = input("Select your item(type 'nothing' to make it stop) \n : ")
if choice in items:
price += items[choice]
elif choice == 'nothing':
break
else:
print("Uh oh, I don't know about that item")
print('Your total price is: ',price)
```
| 6,910
|
22,567,083
|
I'm trying to run this simple example I found [here](https://groups.google.com/forum/#!msg/pyqtgraph/haiJsGhxTaQ/sTtMa195dHsJ), on MacOS X with Anaconda python.
```
import pyqtgraph as pg
import time
plt = pg.plot()
def update(data):
plt.plot(data, clear=True)
class Thread(pg.QtCore.QThread):
newData = pg.QtCore.Signal(object)
def run(self):
while True:
data = pg.np.random.normal(size=100)
# do NOT plot data from here!
self.newData.emit(data)
time.sleep(0.05)
thread = Thread()
thread.newData.connect(update)
thread.start()
```
However I keep getting:
```
QThread: Destroyed while thread is still running
```
|
2014/03/21
|
[
"https://Stackoverflow.com/questions/22567083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2585497/"
] |
The `pyqtgraph.plot` method seems buggy to me (anyway, I wasn't able to get it to produce any useful output, but maybe I'm doing something wrong).
However, if I create a `PlotWidget` and set up the application "manually", it all works as expected:
```
import pyqtgraph as pg
import numpy as np
import time
app = pg.QtGui.QApplication([])
window = pg.QtGui.QMainWindow()
plot = pg.PlotWidget()
window.setCentralWidget(plot)
window.show()
def update(data):
plot.plot(data, clear=True)
class Thread(pg.QtCore.QThread):
newData = pg.QtCore.Signal(object)
def run(self):
while True:
data = pg.np.random.normal(size=100)
# do NOT plot data from here!
self.newData.emit(data)
time.sleep(0.05)
thread = Thread()
thread.newData.connect(update)
thread.start()
app.exec_()
```
|
When you call `QThread.start()`, that function returns immediately. What happens is,
1. Thread #1 - creates new thread, #2
2. Thread #2 is created
3. Thread #1 regains control and runs.
4. Thread #1 dies or `thread` variable is cleaned up by GC (garbage collector) - I'm assuming the 2nd scenario should not happen
To solve this, don't let the main thread die. Before it dies, clean up all threads.
<http://pyqt.sourceforge.net/Docs/PyQt4/qthread.html#wait>
>
> bool QThread.wait (self, int msecs = ULONG\_MAX)
>
>
> Blocks the thread until either of these conditions is met:
>
>
> * The thread associated with this QThread object has finished execution (i.e. when it returns from run()). This function will return
> true if the thread has finished. It also returns true if the thread
> has not been started yet.
> * time milliseconds has elapsed. If time is ULONG\_MAX (the default), then the wait will never timeout (the thread must return from run()).
> This function will return false if the wait timed out.
>
>
> This provides similar functionality to the POSIX pthread\_join()
> function.
>
>
>
So, add `thread.wait()` to your code.
NOTE: You will need to make sure that your thread quits. As is, it will never quit.
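A minimal sketch of what that could look like; the `running` flag is illustrative (not from the original code) and exists only so `run()` can actually return:
```
import time
import pyqtgraph as pg

class Thread(pg.QtCore.QThread):
    newData = pg.QtCore.Signal(object)
    running = True  # illustrative stop flag

    def run(self):
        while self.running:
            self.newData.emit(pg.np.random.normal(size=100))
            time.sleep(0.05)

thread = Thread()
thread.start()
# ... before the main thread exits:
thread.running = False
thread.wait()  # blocks until run() returns
```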
| 6,918
|
11,466,909
|
```
node helloworld.js alex
```
I want it to console.log() alex. How do I pass "alex" as an argument into the code?
In Python, it is `sys.argv[1]`.
|
2012/07/13
|
[
"https://Stackoverflow.com/questions/11466909",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179736/"
] |
You can access command line arguments using `process.argv` in node.js.
The array also includes the node command and the application file, so the first custom command line argument will have an index = 2.
```
process.argv[2] === 'alex'; // true
```
<http://nodejs.org/api/process.html#process_process_argv>
|
If your requirements are more complex and you can take advantage of a command line argument parser, there are several choices. Two that seem popular are
* <https://www.npmjs.org/package/minimist> (simple and minimal)
* <https://www.npmjs.org/package/nomnom> (option parser with generated usage and commands)
More options available at [How do I pass command line arguments?](https://stackoverflow.com/questions/4351521/how-to-pass-command-line-arguments-to-node-js)
| 6,921
|
55,400,056
|
I have two lists of dicts: `list1` and `list2`.
```
print(list1)
[{'name': 'fooa', 'desc': 'bazv', 'city': 1, 'ID': 1},
{'name': 'bard', 'desc': 'besd', 'city': 2, 'ID': 1},
{'name': 'baer', 'desc': 'bees', 'city': 2, 'ID': 1},
{'name': 'aaaa', 'desc': 'bnbb', 'city': 1, 'ID': 2},
{'name': 'cgcc', 'desc': 'dgdd', 'city': 1, 'ID': 2}]
print(list2)
[{'name': 'foo', 'desc': 'baz', 'city': 1, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 1, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 2, 'ID': 1},
{'name': 'aaa', 'desc': 'bbb', 'city': 1, 'ID': 2},
{'name': 'ccc', 'desc': 'ddd', 'city': 1, 'ID': 2}]
```
I need a list of tuples that will hold two paired dicts (one dict from each list) with the same `city` and `ID`.
I did it with double loop:
```
list_of_tuples = []
for i in list1:
for j in list2:
if i['ID'] == j['ID'] and i['city'] == j['city']:
list_of_tuples.append((i, j))
print(list_of_tuples)
[({'name': 'fooa', 'desc': 'bazv', 'city': 1, 'ID': 1},
{'name': 'foo', 'desc': 'baz', 'city': 1, 'ID': 1}),
({'name': 'fooa', 'desc': 'bazv', 'city': 1, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 1, 'ID': 1}),
({'name': 'bard', 'desc': 'besd', 'city': 2, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 2, 'ID': 1}),
({'name': 'baer', 'desc': 'bees', 'city': 2, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 2, 'ID': 1}),
({'name': 'aaaa', 'desc': 'bnbb', 'city': 1, 'ID': 2},
{'name': 'aaa', 'desc': 'bbb', 'city': 1, 'ID': 2}),
({'name': 'aaaa', 'desc': 'bnbb', 'city': 1, 'ID': 2},
{'name': 'ccc', 'desc': 'ddd', 'city': 1, 'ID': 2}),
({'name': 'cgcc', 'desc': 'dgdd', 'city': 1, 'ID': 2},
{'name': 'aaa', 'desc': 'bbb', 'city': 1, 'ID': 2}),
({'name': 'cgcc', 'desc': 'dgdd', 'city': 1, 'ID': 2},
{'name': 'ccc', 'desc': 'ddd', 'city': 1, 'ID': 2})]
```
**Question:** How to do this in a more pythonic way (without loops)?
|
2019/03/28
|
[
"https://Stackoverflow.com/questions/55400056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10095977/"
] |
You can use [`itertools.product`](https://docs.python.org/3.7/library/itertools.html#itertools.product) and `filter`:
```
from itertools import product
list1 = [{'name': 'fooa', 'desc': 'bazv', 'city': 1, 'ID': 1},
{'name': 'bard', 'desc': 'besd', 'city': 2, 'ID': 1},
{'name': 'baer', 'desc': 'bees', 'city': 2, 'ID': 1},
{'name': 'aaaa', 'desc': 'bnbb', 'city': 1, 'ID': 2},
{'name': 'cgcc', 'desc': 'dgdd', 'city': 1, 'ID': 2}]
list2 = [{'name': 'foo', 'desc': 'baz', 'city': 1, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 1, 'ID': 1},
{'name': 'bar', 'desc': 'bes', 'city': 2, 'ID': 1},
{'name': 'aaa', 'desc': 'bbb', 'city': 1, 'ID': 2},
{'name': 'ccc', 'desc': 'ddd', 'city': 1, 'ID': 2}]
def condition(x):
return x[0]['ID'] == x[1]['ID'] and x[0]['city'] == x[1]['city']
list_of_tuples = list(filter(condition, product(list1, list2)))
```
|
Having nested loops is not in itself unpythonic. However, you can achieve the same result with a list comprehension. I don't think it's more readable though:
```
[(i, j) for j in list2 for i in list1 if i['ID'] == j['ID'] and i['city'] == j['city']]
```
| 6,922
|
3,824,373
|
I am quite new to python and regex so please bear with me.
I am trying to read in a file, match a particular name using a regex while ignoring the case, and store each time I find it. For example, if the file is composed of `Bill bill biLl biLL`, I need to store each variation in a dictionary or list.
Current code:
```
import re
import sys
import fileinput
if __name__ == '__main__':
print "flag"
pattern = re.compile("""([b][i][l][l])""")
for line in fileinput.input():
variation=set(pattern.search(line, re.I))
print variation.groupdict()
print "flag2"
```
When run, the code returns an error: 'NoneType' cannot be iterated (or something along those lines).
So how do I store each variation?
Thanks in advance!
|
2010/09/29
|
[
"https://Stackoverflow.com/questions/3824373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461063/"
] |
I'd use findall:
```
re.findall(r'bill', open(filename).read(), re.I)
```
Easy as pie:
```
>>> s = 'fooBiLL bill BILL bIlL foo bar'
>>> import re
>>> re.findall(r'bill', s, re.I)
['BiLL', 'bill', 'BILL', 'bIlL']
```
|
I think that you want [re.findall](https://docs.python.org/2/library/re.html#re.findall). This is of course available on the compiled regular expression as well. The particular error you are getting, though, would seem to indicate that you are [not matching your pattern](https://docs.python.org/2/library/re.html#re.search). Try
```
pattern = re.compile("bill", re.IGNORECASE)
```
| 6,925
|
65,890,917
|
I would like to obtain a nested dictionary from a string, which can be split by delimiter, `:`.
```
s='A:B:C:D'
v=['some','object']
desired={'A':{'B':{'C':{'D':v}}}}
```
Is there a "pythonic" way to generate this?
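For what it's worth, a minimal sketch of one such approach, folding the keys from the inside out with `functools.reduce`:
```
from functools import reduce

s = 'A:B:C:D'
v = ['some', 'object']

# wrap outward: v -> {'D': v} -> {'C': {'D': v}} -> ...
nested = reduce(lambda acc, key: {key: acc}, reversed(s.split(':')), v)
print(nested)  # {'A': {'B': {'C': {'D': ['some', 'object']}}}}
```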
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65890917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12844579/"
] |
Hmmm . . . You want the most recent value from the `tracker` table. Then compare that to the existing values and insert where there are differences:
```
INSERT INTO Tracker (customer_id, value, date_of_value)
SELECT v.customer_id, v.value, GETDATE()
FROM (SELECT v.*,
ROW_NUMBER() OVER (PARTITION BY v.customer_id ORDER BY v.value DESC) as seqnum
FROM values v
) v LEFT JOIN
Tracker t
ON v.customer_id = t.customer_id AND
v.value = t.value
WHERE v.seqnum = 1 AND t.customer_id IS NULL;
```
|
We can create a trigger on the main table to insert a new row into the tracker table:
```sql
CREATE TRIGGER trg_Tracking
ON MainTable
AFTER INSERT, UPDATE AS
IF (NOT UPDATE(value) OR NOT EXISTS
(SELECT customer_id, value FROM inserted
EXCEPT
SELECT customer_id, value FROM deleted))
RETURN; -- early bail-out if we can
INSERT Tracker (customerr_id, value, date_of_value)
SELECT customer_id, value, GETDATE()
FROM (
SELECT customer_id, value FROM inserted
EXCEPT --make sure to compare inserted and deleted values (AFTER INSERT has empty deleted values)
SELECT customer_id, value FROM deleted
) changed;
GO -- No BEGIN and END as the whole batch is the trigger
```
| 6,926
|
72,366,129
|
I want to reach a document, but I always get this error:
**FileNotFoundError: [Errno 44] No such file or directory: 'dbs/school/students.ini'**
I've tried putting "/" or "./" before dbs, but it doesn't work, so I want to know how I can reach that document; the dbs folder is in the same directory where I run the HTML.
The code I'm using works in plain Python, so I don't know why it doesn't work in PyScript.
[Here is a picture of the folder](https://i.stack.imgur.com/Jb5Gr.png)
|
2022/05/24
|
[
"https://Stackoverflow.com/questions/72366129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19059715/"
] |
Probably you need to add the file to your paths in your html code like:
```
<py-env>
- paths:
- dbs/school/students.ini
</py-env>
```
This way it should be added to your python environment.
|
I had the same issue. The solution I found was to put my file into a public GitHub branch and use the raw link to my GitHub file; the code is below:
```
from pyodide.http import open_url
url_content = open_url('https://raw.githubusercontent.com/')
```
To learn more about it, read
<https://pyodide.org/en/stable/usage/faq.html>
and see the videos from 1littlecoder about pyscript
<https://www.youtube.com/c/1littlecoder>
| 6,927
|
22,489,243
|
I'm writing a tutorial about "advanced" regular expressions for Python,
and I cannot understand
```
lastindex
```
attribute. Why is it always 1 in the given examples:
<http://docs.python.org/2/library/re.html#re.MatchObject.lastindex>
I mean this example:
```
re.match('((ab))', 'ab').lastindex
```
Why is it 1? The second group matches too.
|
2014/03/18
|
[
"https://Stackoverflow.com/questions/22489243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1377803/"
] |
`lastindex` is the index of the last group that matched. The examples in the documentation include one that uses 2 capturing groups:
```
(a)(b)
```
where `lastindex` is set to 2 as the last capturing group to match was `(b)`.
The attribute comes in handy when you have optional capturing groups; compare:
```
>>> re.match('(required)( optional)?', 'required').lastindex
1
>>> re.match('(required)( optional)?', 'required optional').lastindex
2
```
When you have *nested* groups, the outer group is the last to match. So with `((ab))` or `((a)(b))` the outer group is group 1 and the last to match.
|
In regex, groups are captured using `()`. In Python's `re`, `lastindex` holds the index of the last matched capturing group.
Lets see this small code example:
```
match = re.search("(\w+).+?(\d+).+?(\W+)", "input 123 ---")
if match:
print match.lastindex
```
In this example, the output will be 3, as I have used three `()` groups in my regex and it matched all of them.
For the above code, if you execute the following line in the `if` block, it will output `123`, as it is the second capture group.
```
print match.group(2)
```
| 6,928
|
12,059,852
|
I am new to Python and coding, so please be patient with my question.
So here's my busy XML:
```
<?xml version="1.0" encoding="utf-8"?>
<Total>
<ID>999</ID>
<Response>
<Detail>
<Nix>
<Check>pass</Check>
</Nix>
<MaxSegment>
<Status>V</Status>
<Input>
<Name>
<First>jack</First>
<Last>smiths</Last>
</Name>
<Address>
<StreetAddress1>100 rodeo dr</StreetAddress1>
<City>long beach</City>
<State>ca</State>
<ZipCode>90802</ZipCode>
</Address>
<DriverLicense>
<Number>123456789</Number>
<State>ca</State>
</DriverLicense>
<Contact>
<Email>x@me.com</Email>
<Phones>
<Home>0000000000</Home>
<Work>1111111111</Work>
</Phones>
</Contact>
</Input>
<Type>Regular</Type>
</MaxSegment>
</Detail>
</Response>
</Total>
```
What I am trying to do is extract these values into the nice and clean table below:

Here's my code so far, but I couldn't figure out how to get the subchild values:
```
import os
os.chdir('d:/py/xml/')
import xml.etree.ElementTree as ET
tree = ET.parse('xxml.xml')
root=tree.getroot()
x = root.tag
y = root.attrib
print(x,y)
#---PRINT ALL NODES---
for child in root:
print(child.tag, child.attrib)
```
Thank you in advance!
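A minimal sketch of reaching the nested values with ElementTree paths (the tag paths are read off the XML above):
```
import xml.etree.ElementTree as ET

tree = ET.parse('xxml.xml')
root = tree.getroot()

# findtext() walks a /-separated path of child tags
base = 'Response/Detail/MaxSegment/Input/'
print(root.findtext(base + 'Name/First'))            # jack
print(root.findtext(base + 'Name/Last'))             # smiths
print(root.findtext(base + 'Address/City'))          # long beach
print(root.findtext(base + 'DriverLicense/Number'))  # 123456789
```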
|
2012/08/21
|
[
"https://Stackoverflow.com/questions/12059852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/918460/"
] |
Figured it out, add the `UIPopoverController` delegate in addition to the `UISplitViewControllerDelegate`:
```c
//Sent when switching to portrait
- (void)splitViewController:(UISplitViewController*)svc willHideViewController:(UIViewController *)aViewController withBarButtonItem:(UIBarButtonItem*)barButtonItem forPopoverController:(UIPopoverController*)pc
{
...
self.popoverController = pc;
[self.popoverController setDelegate:self];
}
-(void)splitViewController:(UISplitViewController *)svc popoverController:(UIPopoverController *)pc willPresentViewController:(UIViewController *)aViewController
{
NSLog(@"SHOWING POPOVER");
}
- (void)popoverControllerDidDismissPopover:(UIPopoverController *)popoverController
{
NSLog(@"HIDING POPOVER");
}
```
|
When you get that first delegate notification, you're passed a reference to the UIPopoverController that will present the hidden view controller. Register as its delegate, then use the [`-popoverControllerDidDismissPopover:`](http://developer.apple.com/library/ios/documentation/uikit/reference/UIPopoverControllerDelegate_protocol/Reference/Reference.html#//apple_ref/occ/intfm/UIPopoverControllerDelegate/popoverControllerDidDismissPopover%3a) delegate method from the UIPopoverControllerDelegate protocol.
| 6,929
|
50,107,263
|
I have the following dataframe:
```
0 1 2 3 4
0 1.JPG NaN NaN NaN NaN
1 2883 2957.0 3412.0 3340.0 miscellaneous
2 3517 3007.0 4062.0 3371.0 miscellaneous
3 5678 3158.0 6299.0 3423.0 miscellaneous
4 1627 3287.0 2149.0 3694.0 miscellaneous
5 2894 3272.0 3421.0 3664.0 miscellaneous
6 3525 3271.0 4064.0 3672.0 miscellaneous
7 4759 3337.0 5321.0 3640.0 miscellaneous
8 6141 3289.0 6664.0 3654.0 miscellaneous
9 1017 3598.0 1539.0 3979.0 miscellaneous
10 1624 3586.0 2155.0 3993.0 miscellaneous
11 2252 3612.0 2777.0 3967.0 miscellaneous
12 3211 3548.0 3735.0 3944.0 miscellaneous
13 6052 3616.0 6572.0 3983.0 miscellaneous
14 691 3911.0 1204.0 4223.0 miscellaneous
15 2.JPG NaN NaN NaN NaN
16 3.JPG NaN NaN NaN NaN
17 5384 2841.0 5963.0 3095.0 miscellaneous
18 5985 2797.0 6611.0 3080.0 miscellaneous
19 3512 3012.0 4025.0 3366.0 miscellaneous
20 5085 2974.0 5587.0 3367.0 miscellaneous
21 2593 3224.0 3148.0 3469.0 miscellaneous
22 1044 3630.0 1511.0 3928.0 miscellaneous
23 4764 3619.0 5283.0 3971.0 miscellaneous
24 5103 3613.0 5635.0 3928.0 miscellaneous
```
I want to split this dataframe into multiple CSVs such that the first CSV is named 1.csv and holds all the data below 1.JPG, and so on.
For example, the exported CSVs should look like:
1.csv
```
2883 2957 3412 3340 miscellaneous
3517 3007 4062 3371 miscellaneous
5678 3158 6299 3423 miscellaneous
1627 3287 2149 3694 miscellaneous
2894 3272 3421 3664 miscellaneous
3525 3271 4064 3672 miscellaneous
4759 3337 5321 3640 miscellaneous
6141 3289 6664 3654 miscellaneous
1017 3598 1539 3979 miscellaneous
1624 3586 2155 3993 miscellaneous
2252 3612 2777 3967 miscellaneous
3211 3548 3735 3944 miscellaneous
6052 3616 6572 3983 miscellaneous
691 3911 1204 4223 miscellaneous
```
2.csv (this CSV should be blank)
3.csv
```
5384 2841 5963 3095 miscellaneous
5985 2797 6611 3080 miscellaneous
3512 3012 4025 3366 miscellaneous
5085 2974 5587 3367 miscellaneous
2593 3224 3148 3469 miscellaneous
1044 3630 1511 3928 miscellaneous
4764 3619 5283 3971 miscellaneous
5103 3613 5635 3928 miscellaneous
```
How do I do this using Python and pandas?
|
2018/04/30
|
[
"https://Stackoverflow.com/questions/50107263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4663377/"
] |
You can use:
```
for n,g in df.assign(grouper = df['0'].where(df['1'].isnull())
.ffill().astype('category'))\
.dropna().groupby('grouper'):
g.drop('grouper', axis=1).to_csv(n+'.csv', header=None, index=False)
```
***Note:*** used astype('category') to pick up the group with no records
Output `!dir *.JPG.csv`
```
04/30/2018 03:43 PM 657 1.JPG.csv
04/30/2018 03:43 PM 0 2.JPG.csv
04/30/2018 03:43 PM 376 3.JPG.csv
```
List contents of 1.jpg.csv
```
2883,2957.0,3412.0,3340.0,miscellaneous
3517,3007.0,4062.0,3371.0,miscellaneous
5678,3158.0,6299.0,3423.0,miscellaneous
1627,3287.0,2149.0,3694.0,miscellaneous
2894,3272.0,3421.0,3664.0,miscellaneous
3525,3271.0,4064.0,3672.0,miscellaneous
4759,3337.0,5321.0,3640.0,miscellaneous
6141,3289.0,6664.0,3654.0,miscellaneous
1017,3598.0,1539.0,3979.0,miscellaneous
1624,3586.0,2155.0,3993.0,miscellaneous
2252,3612.0,2777.0,3967.0,miscellaneous
3211,3548.0,3735.0,3944.0,miscellaneous
6052,3616.0,6572.0,3983.0,miscellaneous
691,3911.0,1204.0,4223.0,miscellaneous
```
|
```
# program splits one big csv file into individual per-image csv's
import pandas as pd
import numpy as np
df = pd.read_csv('results.csv', header=None)
#df1 = df.replace(np.nan, '1', regex=True)
print(df)
for n,g in df.assign(grouper = df[0].where(df[1].isnull())
.ffill().astype('category'))\
.dropna().groupby('grouper'):
g.drop('grouper', axis=1).to_csv(n+'.csv',float_format="%.0f", header=None, index=False )
```
This produces the required result
| 6,930
|
36,243,814
|
I noticed this while playing around with graph_tool. Some module attributes only seem to be available when run from ipython. The simplest example (example.py):
```
import graph_tool as gt
g = gt.Graph()
gt.draw.sfdp_layout(g)
```
This runs without error from ipython using `run example.py`, but from the command line, `python example.py` yields
```
AttributeError: 'module' object has no attribute 'draw'
```
The same holds true for `ipython example.py`. I'm lost as to what would cause this. I would like to access the draw module, but it seems like I can only do this via `from graph_tool.draw import *`. Any help or explanation would be appreciated.
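For context, Python packages do not automatically import their submodules; `gt.draw` only exists once something has executed `import graph_tool.draw` (an IPython session may have done so as a side effect of other imports). A minimal sketch of the explicit form:
```
import graph_tool as gt
import graph_tool.draw  # binds the submodule as gt.draw

g = gt.Graph()
gt.draw.sfdp_layout(g)
```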
|
2016/03/27
|
[
"https://Stackoverflow.com/questions/36243814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308706/"
] |
based on comments:
if you are attempting to create a database programmatically, you can't use a connection to a database that doesn't exist yet, so in this situation you must open a connection to the `master` database, something like this:
```
"Server = localhost; Database = master; User Id = 123; Password = abc;"
```
|
```
String str;
SqlConnection myConn = new SqlConnection("Server=localhost;User Id = 123; Password = abc;database=master");
str = "CREATE DATABASE database1";
SqlCommand myCommand = new SqlCommand(str, myConn);
try
{
myConn.Open();
myCommand.ExecuteNonQuery();
MessageBox.Show("DataBase is Created Successfully", "Creation", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
catch (System.Exception ex)
{
MessageBox.Show(ex.ToString(), "MyProgram", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
finally
{
if (myConn.State == ConnectionState.Open)
{
myConn.Close();
}
}
```
Please try the code above. It's working for me. Don't specify the database name before you create the database; use the master database to initialize the database connection.
| 6,931
|
2,125,612
|
I'm building my first python app on app-engine and wondering if I should use Django or not.
What are the strong points of each? If you have references that support your answer, please post them. Maybe we can make a wiki out of this question.
|
2010/01/24
|
[
"https://Stackoverflow.com/questions/2125612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16628/"
] |
Aral Balkan wrote [a really nice piece](http://aralbalkan.com/1313) addressing this very question. It's a year or so old, so take it with a grain of salt - I think that a lot more emphasis should be put on the awesomeness of Django's Object-Relational Model. Basically, IMHO, it all comes down to whether or not you have a preference for using Django's object model (which I happen to).
|
If it's not a small project, I try to use Django. You can use the App Engine Patch (<http://code.google.com/p/app-engine-patch/>). However, you cannot use Django's ORM, meaning your models.py will still be using GAE's Datastore.
One of the advantages of using Django on GAE is session management. GAE does not have built-in session handling.
You won't be able to use most Django 3rd-party apps though, especially those with model changes. I had to build my own tagging app for GAE.
| 6,932
|
72,297,725
|
On my M1 Mac (macOS) I have the following keyring setup, which works with Google Artifact Registry to resolve dependencies when using Python from the command-line shell.
Refer to <https://cloud.google.com/artifact-registry/docs/python/store-python>
```
$ keyring --list-backends
keyring.backends.chainer.ChainerBackend (priority: 10)
keyring.backends.fail.Keyring (priority: 0)
keyring.backends.macOS.Keyring (priority: 5)
keyrings.gauth.GooglePythonAuth (priority: 9)
```
If I try to install the dependencies from within PyCharm, it does not work automatically; it prompts for a user, as can be seen below. I expected it to resolve the dependencies automatically via my already-authenticated account. How do I get PyCharm to work with Google Artifact Registry?
[](https://i.stack.imgur.com/B1Gwc.png)
|
2022/05/19
|
[
"https://Stackoverflow.com/questions/72297725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78000/"
] |
Based on your description, I suppose you have successfully set up the Python virtual environment (venv) and installed all the dependencies via the terminal (steps 1-4 in <https://cloud.google.com/artifact-registry/docs/python/store-python>), and you want PyCharm to also run your Python code using that virtual environment.
Usually, PyCharm creates a new Python virtual environment or uses the default Python interpreter on your machine when you first open a project. That is why you would have to install the dependencies again.
In order to make PyCharm run your project using the venv you created earlier (through the command line), go to Settings -> Project: your-project-name -> Python Interpreter -> Add -> choose Existing environment -> hit ... -> navigate to the venv folder you created -> select venv/bin/python (or python3).
After that PyCharm will run your project using the existing virtual environment, and you won't have to install everything again.
|
It's basically the same as what you found already on your link but you missed the following site :)
1. Setup service account with `roles/artifactregistry.writer` rights
2. Create key file (json file with your credentials) using `gcloud iam service-accounts keys create FILE_NAME.json --iam-account=SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com`
3. After this you can follow the following website. [Setting up authentication to Python package repositories](https://cloud.google.com/artifact-registry/docs/python/authentication)
| 6,933
|
12,790,065
|
When I use the python Sybase package, and do a cursor fetch(), one of the columns is a datetime - in the sequence returned, it has a DateTimeType object for this value.
I can find it is in the sybasect module but can't find any documentation for this datatype.
I would like to be able to convert it to a string output - similar to using strftime() with datetime objects.
Any info on this?
|
2012/10/08
|
[
"https://Stackoverflow.com/questions/12790065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1133237/"
] |
I think this is what you are looking for:
```
$db = Zend_Db::factory( ...options... );
$select = $db->select()
->from(array('org' => 'orgTable'),
array(
'orgid' => 'org.orgid',
'role' =>'org.role',
'userid' =>'user.userid',
'firstname' =>'user.firstname'
))
->join(array('user' => 'userTable'),
'org.userid = user.userid',array())
->where('org.orgid = ?',$generated_id);
```
|
Here is a `Zend_Db_Select` that returns the result you are looking for.
```
$select = $db->select()
->from(array('org' => 'orgTable'), array('orgid', 'role'))
->join(array('user' => 'userTable'), 'org.userid = user.userid', array('userid', 'firstname'))
->where('org.orgid = ?', 'generated-id');
```
You can use the array notation for table names to get the aliased names in the query.
Hope that helps.
| 6,934
|
36,331,946
|
I installed Linux Mint as a virtual machine.
When I do:
```
python --version
```
I get:
```
Python 2.7.6
```
I installed a separate Python folder according to [this](http://www.experts-exchange.com/articles/18641/Upgrade-Python-2-7-6-to-Python-2-7-10-on-Linux-Mint-OS.html),
and when I do:
```
python2.7 --version
```
I get:
```
Python 2.7.11
```
Now, I want to work only with Python 2.7.11.
I installed pip
and installed a package using pip with `pip install paypalrestsdk`.
It was successful:
[](https://i.stack.imgur.com/ZPOgo.png)
However when I run the script using this package I get:
[](https://i.stack.imgur.com/WMLIW.png)
I suspect that pip and the install ran against Python 2.7.6 rather than Python 2.7.11.
What can I do?
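For what it's worth, running pip as a module of a specific interpreter, e.g. `python2.7 -m pip install paypalrestsdk`, installs into that interpreter's site-packages (assuming pip is available for that interpreter), which sidesteps the ambiguity about which Python the bare `pip` command belongs to.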
|
2016/03/31
|
[
"https://Stackoverflow.com/questions/36331946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1712099/"
] |
mPDF uses autosizing tables and this influences, among other things, the font-size as well. When outputting tables with mPDF you need to set:
```
<table style="overflow: wrap">
```
on every table according to this [answer](https://stackoverflow.com/questions/23760018/mpdf-font-size-not-working).
You can set the font size with `$mpdf->SetFontSize(6);` where the font size is measured in points (pt). Or you can set the default font size using the mpdf function `SetDefaultFontSize()`. According to the mPDF documentation, `SetDefaultFontSize()` is deprecated but still available. The other alternative is to edit the mPDF default stylesheet (mpdf.css).
Hope this may help
|
kindly use
```
$mpdf->keep_table_proportions = false;
```
| 6,936
|
62,264,821
|
Here is my [python](/questions/tagged/python "show questions tagged 'python'") code:
```py
while exit:
serialnumber = int(input("serial number of product :"))
try:
if len(str(serialnumber)) == 6:
break
else:
print("serial number cant be used")
serialnumber = int(serialnumber)
break
except ValueError:
print("Invalid input")
print()
```
I have been trying to make a length check on the input so that it does not go over 6 characters.
I would also like my program to keep asking for input until it passes the check.
However, in my program, if the input fails the check then it will just display the serial number and it won't prompt the user again.
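A minimal sketch of a loop that keeps prompting until the check passes (the messages are the ones from the question; the length is judged from the raw string so the numeric conversion only validates the input):
```
while True:
    raw = input("serial number of product: ")
    try:
        serialnumber = int(raw)
    except ValueError:
        print("Invalid input")
        continue
    if len(raw) == 6:
        break
    print("serial number cant be used")
```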
|
2020/06/08
|
[
"https://Stackoverflow.com/questions/62264821",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13653559/"
] |
You don't need to do any math manually. Simply declare a `DIBSECTION` variable, and then pass a pointer to it to `GetObject()` along with `sizeof()` as the size of that variable, eg:
```
DIBSECTION dib;
GetObject(hBitmap, sizeof(dib), &dib);
```
|
I took out a piece of paper and added up everything myself.
A `DIBSECTION` contains 5 parts.
```
typedef struct tagDIBSECTION {
BITMAP dsBm;
BITMAPINFOHEADER dsBmih;
DWORD dsBitfields[3];
HANDLE dshSection;
DWORD dsOffset;
} DIBSECTION, *LPDIBSECTION, *PDIBSECTION;
```
So let's start with `BITMAP`.
```
typedef struct tagBITMAP {
LONG bmType;
LONG bmWidth;
LONG bmHeight;
LONG bmWidthBytes;
WORD bmPlanes;
WORD bmBitsPixel;
LPVOID bmBits;
} BITMAP, *PBITMAP, *NPBITMAP, *LPBITMAP;
```
A `LONG` is just an `int`, which is 4 bytes. A `WORD` is an `unsigned short`, which is 2 bytes. And `LPVOID` is a pointer.
`4+4+4+4+2+2` = `20`. But wait, a struct has to be aligned properly. So we need to test divisibility by `8` on 64-bit systems. 20 is not divisible by 8, so we add 4 bytes of padding to get `24`. Adding the `ptr` gives us `32`.
The size of the `BITMAPINFOHEADER` is 40 bytes. It's divisible by 8, so nothing fancy needed. We're at `72` now.
Back to the `DIBSECTION`. There's an array of `DWORD`s. And each `DWORD` is an `unsigned int`. Adding 12 to 72 gives us `84`.
Now there's a `handle`. A handle is basically a pointer, whose size is 4 or 8 bytes depending on 32- or 64-bit. Time to check if `84` is divisible by 8. It's not, so we add 4 bytes of padding to get `88`. Then add the pointer to get `96`.
Finally there's the last `DWORD` and the total reaches `100` on a 64-bit system.
But what about `sizeof()`?????? Can't you just do `sizeof(DIBSECTION)`? After all magic numbers = bad. Ken White said in the comments that I didn't need to do any math. I disagree with this. First, as a programmer, it's essential to understand what is happening and why. Nothing could be more elementary than memory on a computer. Second, I only tagged the post as `winapi`. For the people reading this, if you scroll down on the `GetObject` page, the function is exported on `Gdi32.dll`. Any windows program has access to `Gdi32.dll`. Not every windows program has access to `sizeof()`. Third, it may be important for people who need to know the math to have the steps shown. Not everyone programs in a high level language. It might even be a question on an exam.
Perhaps the real question is if a struct of size 100 gets padded to 104 when memory is being granted on a 64-bit system.
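As a cross-check of the arithmetic without a C compiler, ctypes can replay the same alignment rules; a minimal sketch for the BITMAP step (fixed-width types stand in for Windows' LONG/WORD so the layout is modeled on any platform):
```
import ctypes

class BITMAP(ctypes.Structure):
    # Windows LONG is always 32-bit, WORD is 16-bit
    _fields_ = [('bmType', ctypes.c_int32), ('bmWidth', ctypes.c_int32),
                ('bmHeight', ctypes.c_int32), ('bmWidthBytes', ctypes.c_int32),
                ('bmPlanes', ctypes.c_uint16), ('bmBitsPixel', ctypes.c_uint16),
                ('bmBits', ctypes.c_void_p)]

print(ctypes.sizeof(BITMAP))  # 32 on a 64-bit build, matching the math above
```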
| 6,937
|
62,563,923
|
This seems really basic, but I can't seem to figure it out. I am working in a PySpark notebook on EMR and have taken a PySpark dataframe and converted it to a pandas dataframe using `toPandas()`.
Now, I would like to save this dataframe to the local environment using the following code:
```
movie_franchise_counts.to_csv('test.csv')
```
But I keep getting a permission error:
```
[Errno 13] Permission denied: 'test.csv'
Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/pandas/core/generic.py", line 3204, in to_csv
formatter.save()
File "/usr/local/lib64/python3.6/site-packages/pandas/io/formats/csvs.py", line 188, in save
compression=dict(self.compression_args, method=self.compression),
File "/usr/local/lib64/python3.6/site-packages/pandas/io/common.py", line 428, in get_handle
f = open(path_or_buf, mode, encoding=encoding, newline="")
PermissionError: [Errno 13] Permission denied: 'test.csv'
```
Any help would be much appreciated.
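A minimal sketch of two workarounds, on the assumption that the notebook's working directory simply isn't writable on the EMR host (the bucket name is a placeholder):
```
# write somewhere the kernel can write to
movie_franchise_counts.to_csv('/tmp/test.csv')

# or go straight to S3 (requires the s3fs package)
movie_franchise_counts.to_csv('s3://your-bucket/test.csv')
```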
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62563923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13136602/"
] |
You're not passing anything to updateSettings: `google.script.run.updateSettings();`
I would do it like this:
```
<input type = "button" value="Submit" onClick="google.script.run.updateSettings(this.parentNode);" />
```
I'm running this as a dialog and it runs okay now. I added values to the radio buttons, and now the weekly one returns 'weekly', the daily one returns 'daily', and IDdrive returns a string.
gs:
```
function openFolderForm() {
  SpreadsheetApp.getUi().showModelessDialog(
      HtmlService.createHtmlOutputFromFile('ah1').setHeight(525).setWidth(800),
      'Export Settings');
}

function updateSettings(form) {
  console.log(form);
  var formQ1 = form.IDdrive;                     // the text input's value
  var formQ2 = (form.radio == 'daily') ? 1 : 7;  // both radios share name="radio"
}

function exportCSV() {
  var changelogSheetName = "data";
  var ss = SpreadsheetApp.getActive();
  var sheets = ss.getSheets();
  var tab = ss.getSheetByName('data');
  // note: formQ1 is local to updateSettings, so it must be handed over here
  // (for example stored via PropertiesService and read back)
  var folder = DriveApp.getFolderById(formQ1);
}
```
html:
```
<!DOCTYPE html>
<html>
<head>
<base target="_top">
<link rel="stylesheet" href="https://ssl.gstatic.com/docs/script/css/add-ons1.css">
<title>Output Folder</title>
</head>
<body>
<p>Enter the ID of your output folder. A Drive ID is made up of the characters after the last /folders/ segment in the URL.</p>
<form>
<input type='text' name='IDdrive' id="IDdrive" style="width: 300px;"/><br />
<input type="radio" name="radio" id="radioDaily" value="daily"> <label for="radioDaily">Daily</label><br />
<input type="radio" name="radio" id="radioWeekly" value="weekly"> <label for="radioWeekly">Weekly</label><br />
<input type="button" value="Submit" class="action" onClick="google.script.run.updateSettings(this.parentNode);" />
<input type="button" value="Cancel" class="cancel" onClick="google.script.host.close();" />
</form>
</body>
</html>
```
|
Got it to work. The secret was using JSON to properly store the form input criteria.
**CODE**
```
function updateSettings(formObject) {
  var uiForm = SpreadsheetApp.getUi();
  console.log(JSON.stringify(formObject));         // log the submitted form as JSON
  var formText = formObject.formQ1;
  var formRadio = formObject.formQ2;
  var frequency = (formRadio == "Daily") ? 1 : 7;  // daily = 1 day, weekly = 7 days
etc etc
```
**HTML**
```
<form id="myForm" onsubmit="event.preventDefault(); google.script.run.updateSettings(this); google.script.host.close();">
  <div>
    <input type='text' name='formQ1' id="formQ1" style="width: 300px;"/>
  </div>
  <div class="inline form-group">
    <!-- the radios share the name formQ2 but need distinct ids for their labels -->
    <input type="radio" name="formQ2" id="radioDaily" value="Daily" /> <label for="radioDaily">Daily</label>
  </div>
  <div>
    <input type="radio" name="formQ2" id="radioWeekly" value="Weekly" /> <label for="radioWeekly">Weekly</label>
  </div>
  <br><br>
  <div class="inline form-group">
    <input type="submit" value="Submit" style="color:#4285F4"/>
    <input type="button" value="Cancel" class="cancel" onClick="google.script.host.close();" />
  </div>
</form>
```
| 6,938
|
72,688,811
|
I tried to implement the binary tree program in Python. I want to add one more function that gets the level of a specific node.
eg:-
```
10 # level 0
/ \
5 15 # level 1
/ \
3 7 # level 2
```
if we search for the level of node 3, it should return `2`:
```
class BinaryTree:
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

    def insert(self, data):
        if self.data == data:
            return
        if data < self.data:
            # smaller values go to the left, matching the diagram above
            if self.left:
                self.left.insert(data)
            else:
                self.left = BinaryTree(data)
        else:
            if self.right:
                self.right.insert(data)
            else:
                self.right = BinaryTree(data)

    def print_tree(self):
        print(self.data)
        if self.left:
            self.left.print_tree()
        if self.right:  # was elif, which skipped the right subtree
            self.right.print_tree()

    def get_level(self, data, level=0):
        # return the level of `data`, or None if it is not in the tree
        if self.data == data:
            return level
        if self.left:
            found = self.left.get_level(data, level + 1)
            if found is not None:
                return found
        if self.right:
            return self.right.get_level(data, level + 1)
        return None

    def in_order(self):
        # left subtree, then root, then right subtree
        if self.left:
            self.left.in_order()
        print(self.data, '->')
        if self.right:
            self.right.in_order()
```
This is my code for the binary tree. I need to add another function to this class that tells the **level of a node** given its value.
For example:
```
def get_level(node):
#some code
return level
get_level(10) # returns the level of the node 10
```
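For illustration, a quick usage sketch assuming the corrected `get_level` method above (root counted as level 0):

```python
tree = BinaryTree(10)
for value in (5, 15, 3, 7):
    tree.insert(value)

print(tree.get_level(10))  # 0
print(tree.get_level(3))   # 2
print(tree.get_level(99))  # None -> value not in the tree
```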
|
2022/06/20
|
[
"https://Stackoverflow.com/questions/72688811",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14931124/"
] |
If by "simplify", you mean "shorten", then this is as good as it gets:
```
public get isUsersLimitReached(): boolean {
return !this.isAdmin && usersCount >= this.max_users;
}
```
Other than that, there's really not much to work with here.
|
The user limit is reached only if the user is not an admin and the count meets or exceeds the maximum.
```
public get isUsersLimitReached(): boolean {
return !this.isAdmin && usersCount >= this.max_users;
}
```
| 6,939
|
67,775,753
|
This is extended question to [Can we send data from Google cloud storage to SFTP server using GCP Cloud function?](https://stackoverflow.com/questions/67772938/can-we-send-data-from-cloud-storage-to-sftp-server-using-gcp-cloud-function)
```py
with pysftp.Connection(host=myHostName, username=myUsername,
password=myPassword, cnopts=cnopts) as sftp:
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(filename) #source_blob_name
with sftp.open(remotepath, 'w+', 32768) as f:
blob.download_to_file(f)
```
The `w+`/`w` mode is not truncating the previous file and throws an error when a file with the same name already exists on the SFTP server. What would be the optimal solution to this?
The complete error log is:
```none
gcs-to-sftp
4wz44jq3oe2g
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/client.py", line 728, in download_blob_to_file
checksum=checksum,
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/blob.py", line 986, in _do_download
response = download.consume(transport, timeout=timeout)
File "/env/local/lib/python3.7/site-packages/google/resumable_media/requests/download.py", line 168, in consume
self._process_response(result)
File "/env/local/lib/python3.7/site-packages/google/resumable_media/_download.py", line 186, in _process_response
response, _ACCEPTABLE_STATUS_CODES, self._get_status_code
File "/env/local/lib/python3.7/site-packages/google/resumable_media/_helpers.py", line 104, in require_status_code
*status_codes
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 449, in run_background_function
_function_handler.invoke_user_function(event_object)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 268, in invoke_user_function
return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 265, in call_user_function
event_context.Context(**request_or_event.context))
File "/user_code/main.py", line 20, in hello_sftp
copy_file_on_ftp(myHostName, myUsername, myPassword, bucket_name, filename)
File "/user_code/main.py", line 42, in copy_file_on_ftp
file_to_export.download_to_file(f)
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/blob.py", line 1128, in download_to_file
checksum=checksum,
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/client.py", line 731, in download_blob_to_file
_raise_from_invalid_response(exc)
File "/env/local/lib/python3.7/site-packages/google/cloud/storage/blob.py", line 4100, in _raise_from_invalid_response
raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.NotFound: 404 GET https://storage.googleapis.com/download/storage/v1/b/sftp__test/o/test_file.csv?alt=media: No such object: sftp__test/test_file.csv: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
```
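(Worth noting: the traceback ends in a 404 from Cloud Storage — `No such object: sftp__test/test_file.csv` — so the blob itself is missing, rather than the SFTP write mode failing. A minimal sketch, with the bucket and object names taken from the error message as placeholders, that guards against that case:)

```python
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket("sftp__test")  # placeholder, from the error message
blob = bucket.blob("test_file.csv")           # placeholder, from the error message

# blob.exists() avoids the 404 that download_to_file raises for a missing object
if not blob.exists():
    raise FileNotFoundError("object not found in the bucket; check the trigger event")
```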
|
2021/05/31
|
[
"https://Stackoverflow.com/questions/67775753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15452168/"
] |
For Google, there is the GOOGLEFINANCE formula. For Yahoo, all the data is embedded in the page source within a big JSON that you can fetch this way:
```
var source = UrlFetchApp.fetch(url).getContentText()
var jsonString = source.match(/(?<=root.App.main = ).*(?=}}}})/g) + '}}}}'
var data = JSON.parse(jsonString)
```
You can then navigate through `data` to retrieve the information you need.
|
See the GOOGLEFINANCE formula in Google Sheets. It's not totally live (about a 20-minute delay), but for many purposes it's enough. I use it mostly for currency exchange rates, but there are far more options:
<https://support.google.com/docs/answer/3093281?hl=en>
Yahoo Finance and most live sites are protected against web scraping or use JavaScript (which makes the parts generated by JavaScript impossible to import with formulas like IMPORTHTML, IMPORTXML, IMPORTDATA).
| 6,940
|
46,282,656
|
I have a test project that looks like this:
```
_ test_project
├- __init__.py
├- main.py
└- output.py
```
`__init__.py` is empty, and the other two files look like this:
```
# main.py
from . import output
```
and
```
# output.py
print("hello world")
```
I would like to import `output.py` just for the side effect, but I am getting this message instead:
```
(venv) $ python test_project/main.py
Traceback (most recent call last):
File "test_project/main.py", line 2, in <module>
from . import output
ImportError: cannot import name 'output'
```
What does the import statement in `main.py` have to be to just print "hello world"?
|
2017/09/18
|
[
"https://Stackoverflow.com/questions/46282656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5203563/"
] |
Relative imports can only be performed in a package. So, run the code as a package.
```
$ cd /pathabovetest_project
$ python -m test_project.main
```
|
Just do `import output`, that worked for me.
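For context, a short sketch of why that works when the script is run directly (this assumes `python test_project/main.py`, as in the question):

```python
# main.py -- running it directly puts test_project/ itself on sys.path,
# so output.py is importable by its bare module name
import output  # prints "hello world" as an import side effect
```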
| 6,943