| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29–22k chars) | response_k (string, 26–13.4k chars) | __index_level_0__ (int64, 0–17.8k) |
|---|---|---|---|---|---|---|
63,354,202
|
I am a beginner in Python programming. I am creating a simple employee salary calculation using Python.
**tax = salary \* 10 / 100** — this line is flagged as wrong; the error displayed is "Unindent does not match outer indentation level".
This is the full code:
```
salary = 60000
if(salary > 50000):
    tax = float(salary * 10 / 100)
    elif(salary > 35000):
        tax = float(salary * 5 / 100)
    else:
        tax = 0
netsal = salary - tax
print(tax)
print(netsal)
```
|
2020/08/11
|
[
"https://Stackoverflow.com/questions/63354202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12932093/"
] |
The error message is self-explanatory: you can't indent your `elif` and `else`; they must be at the same level as the `if` condition.
```
salary = 60000
if(salary > 50000):
    tax = salary * 10 / 100
elif(salary > 35000):
    tax = salary * 5 / 100
else:
    tax = 0
netsal = salary - tax
print(tax)
print(netsal)
```
|
You just need to fix your indentation; I would suggest using an IDE.
```py
salary = 60000
if(salary > 50000):
    tax = salary * 10 / 100
elif(salary > 35000):
    tax = salary * 5 / 100
else:
    tax = 0
print(tax)
>>> 6000.0
```
| 10,982
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Currently, there are wheels compatible with the official distributions of **Python 2.7, 3.4, 3.5, and 3.6** only; there is no PyPI build of that library for Python 3.7, so you can either downgrade your Python version or install an unofficial wheel.
Download the wheel on this site: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio>.
Choose:
* `PyAudio-0.2.11-cp37-cp37m-win32.whl` if you use 32-bit Python
* `PyAudio-0.2.11-cp37-cp37m-win_amd64.whl` for 64-bit
Then go to your download folder:
```
cd <your_download_path>
pip install PyAudio-0.2.11-cp37-cp37m-win_amd64.whl
```
|
You need to install Microsoft Visual C++ 14.0.
This should work: <https://visualstudio.microsoft.com/visual-cpp-build-tools/>
| 10,983
|
18,041,050
|
I've got a py2.7 project which I want to test under py3.2. For this purpose, I want to use virtualenv. I wanted to create an environment that would run 3.2 version internally:
```
virtualenv 3.2 -p /usr/bin/python3.2
```
but it failed. My default Python version is `2.7` (Ubuntu default settings), and my virtualenv version is 1.10 (from `virtualenv --version`). The error output is:
```
Running virtualenv with interpreter /usr/bin/python3.2
New python executable in 3.2/bin/python3.2
Also creating executable in 3.2/bin/python
Installing Setuptools...................................................................................................................................................................................................................................done.
Installing Pip..............
Complete output from command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record:
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import setup, find_packages
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/usr/lib/python2.7/dist-packages/setuptools/extension.py", line 2, in <module>
from setuptools.dist import _get_unpatched
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 103
except ValueError, e:
^
SyntaxError: invalid syntax
----------------------------------------
...Installing Pip...done.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2308, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 821, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 963, in create_environment
install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 932, in install_sdist
filter_stdout=filter_install_output)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 899, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record failed with error code 1
```
I don't know where this syntax error comes from. I know the try/except syntax changed between 2.x and 3.x, but should virtualenv be throwing syntax errors?
I'd be grateful if someone could point out whether I'm doing something wrong or whether there is an installation problem on my machine.
|
2013/08/04
|
[
"https://Stackoverflow.com/questions/18041050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769384/"
] |
To create a Python 3.2 virtual environment you should use the virtualenv you installed for Python 3.2. In your case that would be:
```
/usr/bin/virtualenv-3.2
```
|
You'll have to use a Python 3 version of `virtualenv`; the version you are using is installing Python 2 tools into a Python 3 virtual environment and these are not compatible.
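On Pythons 3.3 and later, the stdlib `venv` module covers the same need without having to match `virtualenv` versions to interpreters; a minimal sketch, assuming a `python3` interpreter is on `PATH` (the environment name `py3env` is just an example):

```shell
# Create an environment using the interpreter that should live inside it,
# then check which python the environment provides.
python3 -m venv py3env
py3env/bin/python --version
```

The environment's `bin/python` is the interpreter you asked for, so the Python 2 vs Python 3 tool mismatch above cannot occur.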
| 10,990
|
68,762,785
|
I have the following dataframes.
```
Name | Data
A      foo
A      bar
B      foo
B      bar
C      foo
C      bar
C      cat

Name | foo | bar | cat
A      1     2     3
B      4     5     6
C      7     8     9
```
I need to look up the values present in the second dataframe and create a dataframe like this:
```
Name | Data | Value
A foo 1
A bar 2
B foo 4
B bar 5
C foo 7
C bar 8
C cat 9
```
I tried looping over df1 and indexing df2 like `df2[df2['Name']=='A']['foo']`. This works, but it takes forever to complete. I am new to Python, and any help reducing the runtime would be appreciated.
|
2021/08/12
|
[
"https://Stackoverflow.com/questions/68762785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16652846/"
] |
You can use `.melt` + `.merge`:
```py
x = df1.merge(df2.melt("Name", var_name="Data"), on=["Name", "Data"])
print(x)
```
Prints:
```none
Name Data value
0 A foo 1
1 A bar 2
2 B foo 4
3 B bar 5
4 C foo 7
5 C bar 8
6 C cat 9
```
|
You can melt your second dataframe and then merge it with your first:
```
import pandas as pd

df1 = pd.DataFrame({
    'Name': ['A', 'A', 'B', 'B', 'C', 'C', 'C'],
    'Data': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'cat'],
})
df2 = pd.DataFrame({
    'Name': ['A', 'B', 'C'],
    'foo': [1, 4, 7],
    'bar': [2, 5, 8],
    'cat': [3, 6, 9],
})
df1.merge(df2.melt('Name', var_name='Data'), on=['Name', 'Data'])
```
| 10,993
|
56,878,362
|
I'm trying to create a role wrapper that will allow me to restrict certain pages and content to different users. I already have methods implemented for checking this, but the wrapper/decorator implementing it sometimes fails and sometimes doesn't, and I have no idea what the cause could be.
I've searched around looking for a conclusive reason as to what is causing this problem, but unfortunately Flask's tracebacks do not point to a conclusive reason or solution the way most of my other searches do.
I'm using Flask-Login, Flask-Migrate, and Flask-SQLAlchemy to manage my web application, I've looked into different methods of applying RBAC, but they all required seemingly complicated changes to my database models, and I felt that my method would have a higher chance of working in the long run.
Here is my simplified code (I can provide the full application if requested). Below that is the full traceback from the debugger.
Thank you.
`routes.py`
```py
def require_role(roles=["User"]):
    def wrap(func):
        def run(*args, **kwargs):
            if current_user.is_authenticated:
                if current_user.has_roles(roles):
                    return func(*args, **kwargs)
            return abort(401)
        return run
    return wrap

@app.route('/hidden<id>/history')
@login_required
@require_role(roles=['Admin'])
def hidden_history(id):
    if not validate_id(id):
        return '<span style="color: red;">error:</span> bad id'
    return render_template('hidden_history.html')

@app.route('/hidden<id>/help')
@login_required
def hidden_help(id):
    if not validate_id(id):
        return '<span style="color: red;">error:</span> bad id'
    return render_template('hidden_help.html')

@app.route('/hidden<id>/')
@login_required
@require_role(roles=['Hidden'])
def hidden(id):
    if not validate_id(id):
        return '<span style="color: red;">error:</span> bad id'
    # ...
    return render_template('hidden.html')
```
`Traceback (most recent call last)`
```py
Traceback (most recent call last):
  File "A:\Programming\Python\Flask\xevion.dev\wsgi.py", line 1, in <module>
    from app import app, db
  File "A:\Programming\Python\Flask\xevion.dev\app\__init__.py", line 18, in <module>
    from app import routes, models
  File "A:\Programming\Python\Flask\xevion.dev\app\routes.py", line 143, in <module>
    @require_role(roles=['Hidden'])
  File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 1251, in decorator
    self.add_url_rule(rule, endpoint, f, **options)
  File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 67, in wrapper_func
    return f(self, *args, **kwargs)
  File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 1222, in add_url_rule
    'existing endpoint function: %s' % endpoint)
AssertionError: View function mapping is overwriting an existing endpoint function: run
```
**Edit:** I realize now that it doesn't work when there is more than one call to the wrapper function. How come?
|
2019/07/03
|
[
"https://Stackoverflow.com/questions/56878362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6912830/"
] |
To solve the problem that has been plaguing me for the last couple of hours, I looked into how the `flask_login` module actually works, and after a bit of investigating I found out that it uses `wraps`, an import from `functools`.
I imported that, essentially copied how `flask_login` implements it, and my app is now working.
```py
def require_role(roles=["User"]):
    def wrap(func):
        @wraps(func)
        def decorated_view(*args, **kwargs):
            if current_user.is_authenticated:
                if current_user.has_roles(roles):
                    return func(*args, **kwargs)
            return abort(401)
        return decorated_view
    return wrap
```
---
[`flask_login/utils.py#L264-L273`](https://github.com/maxcountryman/flask-login/blob/9dafc656f8771d3943f0b1fd96aff52f24223d17/flask_login/utils.py#L264-L273)
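A minimal sketch of why `functools.wraps` fixes the `AssertionError` (with hypothetical view names, not the app code above): Flask registers endpoints under the view function's `__name__`, and without `@wraps` every wrapped view reports the name `run`, so the second registration collides.

```python
from functools import wraps

def without_wraps(func):
    def run(*args, **kwargs):
        return func(*args, **kwargs)
    return run

def with_wraps(func):
    @wraps(func)
    def run(*args, **kwargs):
        return func(*args, **kwargs)
    return run

def hidden(): pass
def hidden_history(): pass

# Without @wraps, both wrappers report the same name, "run",
# which is exactly what Flask's endpoint map complains about.
print(without_wraps(hidden).__name__)          # run
print(without_wraps(hidden_history).__name__)  # run

# With @wraps, each wrapper keeps the original view's name.
print(with_wraps(hidden).__name__)             # hidden
print(with_wraps(hidden_history).__name__)     # hidden_history
```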
|
At first glance it looks like a conflict with your `run` function in the `require_role` decorator ([docs](http://flask.pocoo.org/docs/1.0/patterns/viewdecorators/)):
```
def require_role(roles=["User"]):
    def wrap(func):
        def wrapped_func(*args, **kwargs):
            ...
```
| 10,994
|
38,882,845
|
Anaconda for Python 3.5 and Python 2.7 seems to install just as a drop-in folder inside my home folder on Ubuntu. Is there a packaged version of Anaconda for Ubuntu 16? And do I still need the Python 3.5 that comes by default if I am also using Anaconda 3.5?
It seems like the best solution is Docker these days. I mean, I understand virtualenv and virtualenvwrapper. However, sometimes I indicate in my .bashrc that I want to use Python 3.5, and yet when I use the command mkvirtualenv it starts installing the Python 2.7 versions of packages.
Should I choose either Anaconda or the version of Python installed with my OS or from python.org, or is there an easy way to manage many different versions of Python?
Thanks,
Bruce
|
2016/08/10
|
[
"https://Stackoverflow.com/questions/38882845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/784304/"
] |
My solution for Python 3.5 and Anaconda on Ubuntu 16.04 LTS (with the bonus of OpenCV 3) was to install Anaconda, then downgrade to 3.5. You have to be sure to update Anaconda afterwards - that's the bit that got me at first. The commands I ran were:
```
bash Anaconda3-4.3.1-Linux-x86_64.sh
conda install python=3.5
conda update anaconda
conda install -c menpo opencv3
```
This works for me; I tried it on a few other Ubuntu 16.04 machines as well, and it worked there too.
|
Use the Anaconda installer `Anaconda3-4.2.0-Linux-x86_64.sh` from the Anaconda installer archive. This comes with `python 3.5`. This worked for me.
| 10,995
|
35,528,078
|
I have a Python code like this,
```
pyg = 'ay'
original = raw_input('Enter a word:')
if len(original) > 0 and original.isalpha():
    word = original.lower()
    first = word[0]
    new_word = word+first+pyg
    new_word[1:]
    print original
else:
    print 'empty'
```
The output of variable "new\_word" should be "ythonpay", but it currently contains "pythonpay". Could anyone let me know what mistake I am making here?
|
2016/02/20
|
[
"https://Stackoverflow.com/questions/35528078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802617/"
] |
You will need your own implementation of `ToString` in your `Employee` class: just override it and move the code from `PrintEmployee` into the new method.
To make clear what I mean, here is a sample of what the override should look like:
```
public override string ToString()
{
    return string.Format("ID:{0}{5}Full Name: {1} {2}{5}Social Security Number: {3}{5}Wage: {4}{5}", ID, FirstName, LastName, SocialNumber, HourWage, Environment.NewLine);
}
```
To give you a little background information: every class or struct in C# inherits from the `Object` class. By default, the `Object` class already has an implementation of `ToString`; this basic implementation only prints the namespace plus the class name of your Employee class. But with the above override you could simply write something like the following and get your desired output:
```
var employee = new Employee { Firstname = "Happy", Lastname = "Coder", ... }
Console.WriteLine(employee);
```
|
Here's a simple solution
```
private void PrintRegistry()
{
    foreach(Employee employee in Accounts)
    {
        Console.WriteLine("\nID:{0}\nFull Name: {1} {2}\nSocial Security Number: {3}\nWage: {4}\n", employee.ID, employee.FirstName, employee.LastName, employee.SocialNumber, employee.HourWage);
    }
}
```
Or, if the "PrintEmployee" procedure (not a function, as it doesn't return a value) is in the "Employee" class:
```
private void PrintRegistry()
{
    foreach(Employee employee in Accounts)
    {
        employee.PrintEmployee();
    }
}
```
| 10,996
|
16,946,684
|
Minimal working example that shows this error:
```
from os import listdir, getcwd
from os.path import isfile, join, realpath, dirname
import csv

def gd(mypath, myfile):
    # Obtain the number of columns in the data file
    with open(myfile) as f:
        reader = csv.reader(f, delimiter=' ', skipinitialspace=True)
        for i in range(20):
            row_20 = next(reader)
    # Save number of columns in 'num_cols'.
    num_cols = len(row_20)
    return num_cols

mypath = realpath(join(getcwd(), dirname(__file__)))

# Iterate through all files. Stores name of file in 'myfile'.
for myfile in listdir(mypath):
    if isfile(join(mypath, myfile)) and (myfile.endswith('.dat')):
        num_cols = gd(mypath, myfile)
        print(num_cols)
```
I have a single file called 'data.dat' in that folder, and `python` returns the error:
```
----> 9 with open(myfile) as f:
....
IOError: [Errno 2] No existe el archivo o el directorio: u'data.dat'
```
Which translates to *No file or directory: u'data.dat'*.
Why is that *u* being added at the beginning of the file name and how do I get the code to correctly parse the file name?
|
2013/06/05
|
[
"https://Stackoverflow.com/questions/16946684",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391441/"
] |
The `u` just indicates that it is a unicode string and is not relevant to the problem.
The file isn't found because you aren't adding the `mypath` in front of the filename - try `with open(join(mypath, myfile)) as f:`
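A self-contained sketch of that fix, using a hypothetical temp folder and sample file instead of the asker's data directory:

```python
import os
from os.path import join

mypath = "/tmp/join_demo"  # hypothetical data folder for this sketch
os.makedirs(mypath, exist_ok=True)

# Create a sample .dat file so the loop below finds something.
with open(join(mypath, "data.dat"), "w") as f:
    f.write("1 2 3\n")

for myfile in os.listdir(mypath):
    if myfile.endswith(".dat"):
        # Join the directory onto the bare filename before opening;
        # open(myfile) alone only works if the CWD is the data folder.
        with open(join(mypath, myfile)) as f:
            print(myfile, "->", f.readline().strip())  # data.dat -> 1 2 3
```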
|
Your problem is that `myfile` is just a filename, not the result of `join(mypath,myfile)`.
| 10,997
|
57,523,861
|
I'm attempting to install pymc on MacOS 10.14.5 Mojave. However, there seems to be a problem with the gfortran module. The error message is minimally helpful.
I have attempted all the possible ways to install pymc as suggested here: <https://pymc-devs.github.io/pymc/INSTALL.html>
I first came across a problem with not recognising f951 in my gfortran compiler, but I solved that by adding the path to f951 explicitly to my `PATH`.
Now I come across the following after a bunch of warning messages in `pymc.flib.f`:
```
ld: unknown option: -idsym
error: Command "/usr/local/bin/gfortran -Wall -g -m64 -Wall -g -undefined dynamic_lookup -bundle build/temp.macosx-10.7-x86_64-3.7/cephes/i0.o build/temp.macosx-10.7-x86_64-3.7/cephes/c2f.o build/temp.macosx-10.7-x86_64-3.7/cephes/chbevl.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/flibmodule.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/fortranobject.o build/temp.macosx-10.7-x86_64-3.7/pymc/flib.o build/temp.macosx-10.7-x86_64-3.7/pymc/histogram.o build/temp.macosx-10.7-x86_64-3.7/pymc/flib_blas.o build/temp.macosx-10.7-x86_64-3.7/pymc/blas_wrap.o build/temp.macosx-10.7-x86_64-3.7/pymc/math.o build/temp.macosx-10.7-x86_64-3.7/pymc/gibbsit.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/flib-f2pywrappers.o -L/Users/cameron/anaconda3/lib -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0/../../.. -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0/../../.. -lmkl_rt -lpthread -lgfortran -o build/lib.macosx-10.7-x86_64-3.7/pymc/flib.cpython-37m-darwin.so" failed with exit status 1
```
No online searches reveal what might cause the exit status 1 with gfortran.
|
2019/08/16
|
[
"https://Stackoverflow.com/questions/57523861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11935431/"
] |
I'm not familiar with mongoose, so I will take for granted that `"user_count": user_count++` works.
For the rest, there are two things that won't work:
* the `$` operator in `"users.$.id": req.user.id,` is known as the positional operator, and that's not what you want, it's used to update a specific element in an array. Further reading here: <https://docs.mongodb.com/manual/reference/operator/update/positional/>
* the `upsert` is about inserting a full document if the `update` does not match anything in the collection. In your case you just want to push an element in the array right?
In this case I guess something like this might work:
```
const result = await Example.updateOne(
  {
    _id: id,
  },
  {
    $set: {
      "user_count": user_count++
    },
    $addToSet: {
      "users": {
        "id": req.user.id,
        "action": true
      }
    }
  }
);
```
Please note that `$push` might also do the trick instead of `$addToSet`. But `$addToSet` takes care of keeping stuff unique in your array.
|
```
db.collection.findOneAndUpdate({_id: id}, {$set: {"user_count": user_count++}, $addToSet: {"users": {"id": req.user.id, "action": true}}}, {returnOriginal: false}, (err, doc) => {
  if (err) {
    console.log("Something wrong when updating data!");
  }
  console.log(doc);
});
```
| 10,998
|
25,113,767
|
I am working on a Python project that involves implementing a shell in Python on Linux. I am trying to run standard Unix commands using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online, but there isn't much available for Python. Any help would be appreciated. Thanks.
This is the code I have written so far:
```
import os
import shlex

def word_list(line):
    """Break the line into shell words."""
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = False
    lexer.wordchars += '#$+-,./?@^='
    args = list(lexer)
    return args

def main():
    while(True):
        line = input('psh>')
        split_line = word_list(line)
        if len(split_line) == 1:
            print(os.execvp(split_line[0], [" "]))
        else:
            print(os.execvp(split_line[0], split_line))

if __name__ == "__main__":
    main()
```
So when I run this and input "ls", I get the output "HelloWorld.py" (which is correct) and "Process finished with exit code 0". However, I don't get the "psh>" prompt waiting for the next command. No exceptions are thrown when I run this code.
|
2014/08/04
|
[
"https://Stackoverflow.com/questions/25113767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3903472/"
] |
Your code does not work because it uses [`os.execvp`](https://docs.python.org/3/library/os.html#os.execvp). `os.execvp` **replaces the current process image completely with the executing program**, your running process **becomes** the `ls`.
To execute a **subprocess** use the aptly named [`subprocess`](https://docs.python.org/3/library/subprocess.html) module.
---
If this is for a somewhat **ill-advised** programming exercise, then you need to:
```
# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(cmdline[0], cmdline)  # in child: replace this process with the command
else:
    os.waitpid(pid, 0)              # in parent: wait for the child to finish
```
`os.fork` returns twice: it gives the pid of the child in the parent process, and zero in the child process.
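For contrast, the `subprocess` route keeps the shell process alive between commands; a sketch using `sys.executable` as a portable stand-in for a command like `ls`:

```python
import subprocess
import sys

# Run the command as a child process; the parent (our "shell") survives
# and can loop back to prompt for the next command.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())   # hello from child
print(result.returncode)       # 0
```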
|
If you want it to run like a shell, you are looking for `os.fork()`. Call it before `os.execvp()` and it will create a child process. `os.fork()` returns the process id: if it is 0 you are in the child process and can call `os.execvp()`; otherwise continue with the code. This keeps the while loop running. You can have the original process either wait for the child to complete with `os.wait()`, or continue to the start of the while loop without waiting. The pseudocode on page 2 of this link should help: <https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf>
| 10,999
|
14,303,300
|
> **Possible Duplicate:**
> [Python \_\_str\_\_ and lists](https://stackoverflow.com/questions/727761/python-str-and-lists)
What causes Python to print the address of an object instead of the object itself?
For example, the output of a print instruction is this:
```
[< ro.domain.entities.Person object at 0x01E6BA10>, < ro.domain.entities.Person object at 0x01E6B9F0>, < ro.domain.entities.Person object at 0x01E6B7B0>]
```
My code looks like this:
```
class PersonRepository:
    """
    Stores and manages person information.
    """
    def __init__(self):
        """
        Initializes the list of persons.
        """
        self.__list = []

    def __str__(self):
        """
        Returns the string format of the persons list.
        """
        s = ""
        for i in range(0, len(self.__list)):
            s = s + str(self.__list[i]) + "\n"
        return s

    def add(self, p):
        """
        data: p - person.
        Adds a new person, raises ValueError if there is already a person with the given id.
        pos: list contains new person.
        """
        for q in self.__list:
            if q.get_personID() == p.get_personID():
                raise ValueError("Person already exists.")
        self.__list.append(p)

    def get_all(self):
        """
        Returns the list containing all persons.
        """
        l = str(self.__list)
        return l
```
I have a Person class as well, with a `get_personID()` function. After I add some elements and try to print them using `get_all()`, it returns the line above instead of the persons I added.
|
2013/01/13
|
[
"https://Stackoverflow.com/questions/14303300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1796659/"
] |
You are looking at the `repr()` representation of a custom class, which by default includes the `id()` (== memory address in CPython).
This is the default used when printing a list, any contents are included using the representation:
```
>>> class CustomObject(object):
... def __str__(self):
... return "I am a custom object"
...
>>> custom = CustomObject()
>>> print(custom)
I am a custom object
>>> print(repr(custom))
<__main__.CustomObject object at 0x10552ff10>
>>> print([custom])
[<__main__.CustomObject object at 0x10552ff10>]
```
|
Python calls `__repr__()` on every list item when printing the list.
See [Python \_\_str\_\_ and lists](https://stackoverflow.com/questions/727761/python-str-and-lists)
| 11,000
|
3,306,518
|
I needed to have a directly executable Python script, so I started the file with `#!/usr/bin/env python`. However, I also need unbuffered output, so I tried `#!/usr/bin/env python -u`, but that fails with `python -u: no such file or directory`.
I found out that `#!/usr/bin/python -u` works, but I need it to pick up the `python` from `PATH` to support virtualenv environments.
What are my options?
|
2010/07/22
|
[
"https://Stackoverflow.com/questions/3306518",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/381048/"
] |
Here is a script alternative to `/usr/bin/env` that permits passing arguments on the hash-bang line, based on `/bin/bash` and with the restriction that spaces are disallowed in the executable path. I call it "envns" (env No Spaces):
```
#!/bin/bash
ARGS=( $1 ) # separate $1 into multiple space-delimited arguments.
shift # consume $1
PROG=`which ${ARGS[0]}`
unset ARGS[0] # discard executable name
ARGS+=( "$@" ) # remainder of arguments preserved "as-is".
exec $PROG "${ARGS[@]}"
```
Assuming this script is located at /usr/local/bin/envns, here's your shebang line:
```
#!/usr/local/bin/envns python -u
```
Tested on Ubuntu 13.10 and cygwin x64.
|
Building off of Larry Cai's answer, `env` allows you to set a variable directly in the command line. That means that `-u` can be replaced by the equivalent `PYTHONUNBUFFERED` setting before `python`:
```
#!/usr/bin/env PYTHONUNBUFFERED="YESSSSS" python
```
Works on RHEL 6.5. I am pretty sure that feature of `env` is just about universal.
| 11,001
|
41,020,233
|
How can I get user-defined class attributes from a class instance? I tried this:
```
class A:
    FOO = 'foo'
    BAR = 'bar'

a = A()
print(a.__dict__)  # {}
print(vars(a))     # {}
```
I use Python 3.5.
Is there a way to get them?
I know that `dir(a)` returns a list of attribute names, but I need only the user-defined ones, and as a dict mapping name to value.
|
2016/12/07
|
[
"https://Stackoverflow.com/questions/41020233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7262895/"
] |
You've defined those variables within the class namespace, so they haven't propagated to instances. You can use the `__class__` attribute of the instance to access the class object, and then use its `__dict__` attribute to get the namespace's contents:
```
>>> {k: v for k, v in a.__class__.__dict__.items() if not k.startswith('__')}
{'BAR': 'bar', 'FOO': 'foo'}
```
|
Try this:
```
In [5]: [x for x in dir(a) if not x.startswith('__')]
Out[5]: ['BAR', 'FOO']
```
| 11,011
|
58,499,136
|
I have a python script which I want to execute when someone clicks on a button in an HTML/PHP web page in the browser. How can this be achieved and what is the best way of doing it?
|
2019/10/22
|
[
"https://Stackoverflow.com/questions/58499136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8110933/"
] |
You need a server such as Flask for this requirement, as the browser cannot execute a local Python file directly.
Using Flask, you write an Ajax call in your `.js`.
Sample Ajax call:
```
$(document).ready(function () {
    $('#<ButtonID>').click(function (event) {
        $.ajax({
            url: '/<Flask URL>',
            type: 'POST',
            success: function (data) {
                <DATA OBJECT>
            },
        });
        event.preventDefault();
    });
});
```
Sample Flask function
```
@app.route('/<Flask URL>', methods=['POST'])
def result():
    try:
        <DO>
    except:
        logger.error()
        raise
    return jsonify({'data': <Python variable>})
```
You might need to import necessary modules.
|
With `exec`,
```
$command = "python script_path";
exec($command, $output, $return_var);
if ($return_var) {
    $error = error_get_last();
    var_dump($error);
}
```
| 11,012
|
62,857,693
|
How do I install spacy on ARM processor? I get an error
```
ERROR: Command errored out with exit status 1:
command: /root/miniforge3/bin/python3.7 /root/miniforge3/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-ahxo0t0p/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.25' 'cymem>=2.0.2,<2.1.0' 'preshed>=3.0.2,<3.1.0' 'murmurhash>=0.28.0,<1.1.0' thinc==7.4.1
```
Full error log:
<https://gist.github.com/shantanuo/1fb62b83a54713e517307c9cad9624f5>
|
2020/07/12
|
[
"https://Stackoverflow.com/questions/62857693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139150/"
] |
> You cannot use "hot reload" features with flutter app because the process of deployment/running never finishes
Actually, you can! Just connect to the device with `adb connect <IP>` instead of going through the socket. See my full blog post here:
<https://dnmc.in/2021/01/25/setting-up-flutter-natively-with-wsl2-vs-code-hot-reload/>
|
If you are like me, you had a lot of issues getting adb to work. You need to make sure that the Windows host and the Linux image both have the same version of adb. Following this guide, <https://www.androidexplained.com/install-adb-fastboot/#update>, helped me update adb on Windows.
| 11,013
|
66,524,661
|
Here I check the installed version of pip
`py -m pip --version`
```
pip 21.0.1 from C:\Users\hp\AppData\Local\Programs\Python\Python39\lib\site-packages\pip (python 3.9)
```
Now I try to run a pip command
`pip install pip --target $HOME\\.pyenv`
```
pip: The term 'pip' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
```
|
2021/03/08
|
[
"https://Stackoverflow.com/questions/66524661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/865220/"
] |
Add the scripts folder to PATH
`C:\Users\hp\AppData\Local\Programs\Python\Python39\Scripts`
or
`C:\Python39\Scripts`
(depending on how you installed Python, locate and add its `Scripts` folder)
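For example, a PowerShell sketch, assuming the install path shown by your `py -m pip --version` output (the Environment Variables GUI dialog works just as well for the permanent change):

```shell
# Add the Scripts folder to PATH for the current PowerShell session
$env:Path += ";C:\Users\hp\AppData\Local\Programs\Python\Python39\Scripts"

# Persist it for future sessions (user-level PATH).
# Note: setx can truncate very long PATH values, so the GUI is safer there.
setx PATH "$env:Path"
```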
|
Just use this in the terminal:
```
py -m pip install
```
followed by the library you want to install. It tends to work.
| 11,014
|
19,494,511
|
```
Error: Error: if n == 0 or n>4:
UnboundLocalError: local variable 'n' referenced before assignment.
```
I tried the `isdigit` method, but it does not seem to work. What is the issue?
```
#!usr/bin/python
import sys
class Person:
def __init__(self, firstname=None, lastname=None, age=None, gender=None):
self.fname = firstname
self.lname = lastname
self.age = age
self.gender = gender
def display(self):
found = False
n1 = raw_input("Enter for Search Criteria\n1.FirstName == 2.LastName == 3.Age == 4.Gender : " )
print "Not a valid input"
if n1.isdigit():
n = int(n1)
else:
print "Enter Integer only"
if n == 0 or n>4:
print "Enter valid search "
if n == 1:
StringSearch = raw_input("Enter FirstName :")
for records in list_of_records:
if StringSearch in records.fname:
found = True
print records.fname, records.lname, records.age, records.gender
if not found:
print "No matched record"
if n == 2:
StringSearch = raw_input("Enter LastName :")
for records in list_of_records:
if StringSearch in records.lname:
found = True
print records.fname, records.lname, records.age, records.gender
if not found:
print "No matched record"
if n == 3:
StringSearch = raw_input("Enter Age :")
for records in list_of_records:
if StringSearch in records.age:
if not found:
print "No matched record"
if n == 4:
StringSearch = raw_input("Enter Gender(M/F) :")
for records in list_of_records:
if StringSearch in records.gender:
found = True
print records.fname, records.lname, records.age, records.gender
if not found:
print "No matched record"
f= open("abc","r")
list_of_records = [Person(*line.split()) for line in f]
#for record in list_of_records:
for per in list_of_records:
per.display()
```
Please help me out with how to handle this issue?
|
2013/10/21
|
[
"https://Stackoverflow.com/questions/19494511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2897545/"
] |
Okay, you are doing a few things wrong.
First of all, `raw_input` will always give you a string.
So you need to convert it into an integer anyway. But also, you are using the variable `n` in parts of your code where it might not exist yet.
You need to change this part:
```
print "Not a valid input"
if n1.isdigit():
n = int(n1)
else:
print "Enter Integer only"
```
To this:
```
try:
n = int(n1)
except:
print "Enter Integer only"
raise
```
Or, if you want to keep asking until valid input is received, make a function:
```
def get_user_int(prompt="Enter an integer: "):
while True:
try:
return int(raw_input(prompt))
except:
print 'Try again'
```
And call it like this:
```
n = get_user_int("Enter choice for Search Criteria\n - 1.FirstName\n - 2.LastName\n - 3.Age\n - 4.Gender\n> ")
```
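The validation (an integer, and in the range 1 to 4) can also be folded into one small helper. Here is a sketch in Python 3 syntax, with `parse_choice` a hypothetical name:

```python
def parse_choice(text):
    """Return the menu choice as an int in 1..4, or None if invalid."""
    try:
        n = int(text)
    except ValueError:
        # raw_input/input always returns a string; non-numeric text lands here
        return None
    return n if 1 <= n <= 4 else None

print(parse_choice("3"))    # 3
print(parse_choice("abc"))  # None
print(parse_choice("0"))    # None
```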
|
You're testing the `n == something` condition before setting the value of `n`. Simply initialize it to zero or whatever other default value.
```
def display(self):
found = False
n = 0
```
| 11,015
|
41,061,824
|
I have a problem with installing csv package in pycharm (running under python 3.5.2)
When I try to install it I get an error saying
Executed command:
`pip install --user csv`
Error occurred:
Non-zero exit code (1)
Could not find a version that satisfies the requirement csv (from versions: )
No matching distribution found for csv
I updated the pip package to version 9.0.1 but still nothing.
When I run the following code:
```
import csv
f = open('fl.csv')
csv_f = csv.reader(f)
f.close()
```
I get this error:
`csv_f = csv.reader(f)`
AttributeError: module 'csv' has no attribute 'reader'
I thought it was because I could not install the "csv package"
also I tried running:
```
import csv
print(dir(csv))
```
and the print result is:
`['__doc__', '__loader__', '__name__', '__package__', '__path__', '__spec__']`
A lot of methods including csv.reader are missing
This is pretty much all the useful information up until comment #29
|
2016/12/09
|
[
"https://Stackoverflow.com/questions/41061824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4005127/"
] |
You can't `pip install csv` because the csv module is included in the Python installation.
You can use it directly:
```
import csv
```
in your program
Thanks
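For instance, a minimal sketch using the built-in module (the data here is just an illustration; the same `csv.reader` call works on a real file object too):

```python
import csv
import io

# csv.reader accepts any file-like object; an in-memory one keeps this self-contained.
data = io.StringIO("name,age\nalice,30\nbob,25\n")
rows = list(csv.reader(data))
print(rows)  # [['name', 'age'], ['alice', '30'], ['bob', '25']]
```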
|
The problem is that you have another file in your directory called `csv.py`. And in this file you do not have a `reader` function.
Change its name to `my_csv.py`
| 11,018
|
47,900,257
|
I have the Python Extensions for Windows installed. Within the PythonWin IDE I can get autocomplete on Automation objects (specifically, objects created with `win32com.client.Dispatch`):
[](https://i.stack.imgur.com/uxxkh.png)
How can I get the same autocomplete in VS Code?
I am using the [Microsoft Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python).
The Python Windows Extensions has a tool called COM Makepy, which apparently generates Python representations of Automation objects, but I can't figure out how to use it.
**Update**
Apparently, the Microsoft Python extension uses [Jedi](https://github.com/davidhalter/jedi) for autocompletion.
I've filed an [issue](https://github.com/Microsoft/vscode-python/issues/487) on the extension project on Github.
Note that in general I have Intellisense in Python; it's only the Intellisense on Automation objects that I am missing.
|
2017/12/20
|
[
"https://Stackoverflow.com/questions/47900257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/111794/"
] |
I don't think that the example you show with `PythonWin` is easily reproducible in VS Code. The quick start guide of `win32com` itself (cited below) says it's only possible with a COM browser or the documentation of the product (Word in this case). The latter is unlikely, so `PythonWin` is probably using a COM browser to find the properties. And because `PythonWin` and `win32com` come in the same package, it's not that unlikely that `PythonWin` has a COM browser built in.
>
> **How do I know which methods and properties are available?**
>
>
> Good question. This is hard! You need to use the documentation with the
> products, or possibly a COM browser. Note however that COM browsers
> typically rely on these objects registering themselves in certain
> ways, and many objects to not do this. You are just expected to know.
>
>
>
If you wanted the same functionality from the [VS Code plugin](https://marketplace.visualstudio.com/items?itemName=ms-python.python) a COM browser would have to be implemented into [Jedi](https://github.com/davidhalter/jedi) (IntelliSense of the VS Code plugin).
**Edit:** I found [this](https://wingware.com/pipermail/wingide-users/2016-August/011203.html) suggestion on how an auto-complete that can find these hidden attributes could be done:
>
> These wrappers do various things that make static analysis difficult,
> unless we were to special case them. The general solution for such
> cases is to run to a breakpoint and work in the live runtime state.
> The auto-completer should then include the complete list of symbols
> because it inspects the runtime.
>
>
>
The conversation is from a mailing list of the Python IDE [Wing](https://wingware.com). [Here](https://wingware.com/doc/edit/auto-completion) you can see that they implemented the above-mentioned approach:
[](https://i.stack.imgur.com/dE6M7.png)
|
I think your problem is related to defining `Python interpreter`.
Choose a proper Python interpreter by executing the `Python: Select Interpreter` command from the VS Code command palette, opened by pressing **`F1`** or **`Ctrl+Shift+P`**.
| 11,025
|
12,756,885
|
I've looked at all the other questions like this and they all seem to be a slight variation of this one in which I can't extract an answer for my problem.
```
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
```
So I installed it with homebrew into the site-packages directory. At this point in time, importing numpy worked. But then when I had trouble downloading matplotlib, I downloaded the original Python 2.7 as opposed to the one that comes on Mac. Now I can't import the module unless I'm in the numpy directory, and when I try to build matplotlib it can't find numpy (which is a dependency). Any ideas on what could be wrong?
|
2012/10/06
|
[
"https://Stackoverflow.com/questions/12756885",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1217616/"
] |
Third-party add-ons ("distributions") to Python, like `numpy`, are installed to a particular instance of Python. On OS X 10.8 (Mountain Lion), the Apple-supplied Python 2.7 comes with a version of `numpy` pre-installed. You can access that python with:
```
/usr/bin/python2.7
```
I'm not sure what you mean by "downloaded the original python2.7", but if you installed another version of python, you would need to install another version of `numpy` using it.
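As a quick diagnostic (a minimal sketch, not specific to this setup), you can ask each interpreter where it lives and where it loads its library from; run it with each `python` on your PATH to see that add-ons like `numpy` are installed per-interpreter:

```python
import os
import sys

# Which interpreter is actually running this script
print(sys.executable)

# Where this interpreter's standard library lives (any stdlib module shows it)
print(os.path.dirname(os.__file__))
```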
|
If numpy was not installed during your Python 2.7 installation, you can download and install it easily from this [install link](http://www.iram.fr/IRAMFR/GILDAS/doc/html/gildas-python-html/node38.html).
| 11,027
|
29,786,474
|
I execute `launchctl start com.xxx.xxx.plist`
I can find `AutoMakeLog.err` and the content :
```
Traceback (most recent call last):
File "/Users/xxxx/Downloads/Kevin/auto.py", line 67, in <module>
output = open(file_name, 'w')
IOError: [Errno 13] Permission denied: '2015-04-22-09:15:40.log'
```
plist content :
```
<array>
<string>/Users/xxxx/Downloads/Kevin/auto.sh</string>
</array>
<key>StartCalendarInterval</key>
<dict>
<key>Minute</key>
<integer>30</integer>
<key>Hour</key>
<integer>08</integer>
</dict>
<key>StandardOutPath</key>
<string>/Users/xxxx/Downloads/Kevin/AutoMakeLog.log</string>
<key>StandardErrorPath</key>
<string>/Users/xxx/Downloads/Kevin/AutoMakeLog.err</string>
```
auto.sh
```
#!/bin/sh
/usr/bin/python /Users/xxxx/Downloads/Kevin/auto.py
```
auto.py
```
import time

file_name = time.strftime('%Y-%m-%d-%H:%M:%S', time.localtime(time.time()))
file_name += '.log'
output = open(file_name, 'w')
output.writelines(response.text)
output.close()
```
auto.sh and auto.py have chmod 777
PS: When I execute auto.sh directly, there is no error.
|
2015/04/22
|
[
"https://Stackoverflow.com/questions/29786474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1327056/"
] |
`int[]` is an `integer array` type.
`int` is an `integer` type.
You can't convert an array to a number.
|
You have multiple problems in the code: declare `i` as an int, initialise the `question` array, and convert the String to an int before assigning it to `question`.
```
int i = 0;
int [] question = new int [100];
question[i++] = Integer.parseInt(line[0]);
```
| 11,028
|
49,776,619
|
I'm learning the Flask web microframework. After initializing my database with `flask db init`, I run `flask db migrate` to migrate my model classes to the database, and I get an error. I work on Windows 10, the database is MySQL, and the installed extensions are `flask-migrate`, `flask-sqlalchemy`, and `flask-login`.
```
(env) λ flask db migrate
Traceback (most recent call last):
File "c:\python36\Lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python36\Lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\aka\Dev\dream-team\env\Scripts\flask.exe\__main__.py", line 9, in <module>
File "c:\users\aka\dev\dream-team\env\lib\site-packages\flask\cli.py", line 513, in main
cli.main(args=args, prog_name=name)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\flask\cli.py", line 380, in main
return AppGroup.main(self, *args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 697, in main
rv = self.invoke(ctx)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 535, in invoke
return callback(*args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\flask\cli.py", line 257, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\click\core.py", line 535, in invoke
return callback(*args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\flask_migrate\cli.py", line 90, in migrate
rev_id, x_arg)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\flask_migrate\__init__.py", line 197, in migrate
version_path=version_path, rev_id=rev_id)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\alembic\command.py", line 176, in revision
script_directory.run_env()
File "c:\users\aka\dev\dream-team\env\lib\site-packages\alembic\script\base.py", line 427, in run_env
util.load_python_file(self.dir, 'env.py')
File "c:\users\aka\dev\dream-team\env\lib\site-packages\alembic\util\pyfiles.py", line 81, in load_python_file
module = load_module_py(module_id, path)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\alembic\util\compat.py", line 83, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "migrations\env.py", line 87, in <module>
run_migrations_online()
File "migrations\env.py", line 70, in run_migrations_online
poolclass=pool.NullPool)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\sqlalchemy\engine\__init__.py", line 465, in engine_from_config
return create_engine(url, **options)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\sqlalchemy\engine\__init__.py", line 424, in create_engine
return strategy.create(*args, **kwargs)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\sqlalchemy\engine\strategies.py", line 50, in create
u = url.make_url(name_or_url)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\sqlalchemy\engine\url.py", line 211, in make_url
return _parse_rfc1738_args(name_or_url)
File "c:\users\aka\dev\dream-team\env\lib\site-packages\sqlalchemy\engine\url.py", line 270, in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string 'mysql/dt_admin:dt2016@localhost/dreamteam_db'
```
|
2018/04/11
|
[
"https://Stackoverflow.com/questions/49776619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8559808/"
] |
I had forgotten to enter the port number. This is the correct URL connection string:

```
SQLALCHEMY_DATABASE_URI = 'mysql://dt_admin:dt2016@localhost:3308/dreamteam_db'
```

It works now, thanks.
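To see why the original string failed to parse, a small stdlib sketch (using `urllib.parse` rather than SQLAlchemy itself) shows the pieces an rfc1738-style URL needs:

```python
from urllib.parse import urlsplit

# A well-formed rfc1738-style URL: scheme://user:password@host:port/database
url = urlsplit("mysql://dt_admin:dt2016@localhost:3308/dreamteam_db")
print(url.scheme, url.hostname, url.port, url.path)

# The failing string 'mysql/dt_admin:dt2016@localhost/dreamteam_db' has no
# '://' separator, so no scheme or host can be extracted from it.
```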
|
For me I got this error once I was trying to fix this issue
```
raise TypeError("option values must be strings")
TypeError: option values must be strings
```
so I tried to stringify the url like so
```py
config.set_main_option(
"sqlalchemy.url", f'{os.environ.get("DATABASE_URL")}')
```
After that I got new error because `os.environ.get()` does not return the Environment Value in windows without exporting the value, here is the error I got
```
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string 'None'
```
So I should either export the value than get it with `os.environ.get()` or use a config file and get it like so:
```py
config.set_main_option(
"sqlalchemy.url", settings.database_url.replace("postgres://", "postgresql+asyncpg://", 1))
```
>
> A detail about how to get env vars using config file you can find it on this answer [os.environ.get() does not return the Environment Value in windows?](https://stackoverflow.com/questions/42978739/os-environ-get-does-not-return-the-environment-value-in-windows/72916628#72916628)
>
>
>
| 11,030
|
50,126,064
|
I've been learning about about C++ in college and one thing that interests me is the ability to create a shared header file so that all the cpp files can access the objects within. I was wondering if there is some way to do the same thing in python with variables and constants? I only know how to import and use the functions or classes in other py files.
|
2018/05/02
|
[
"https://Stackoverflow.com/questions/50126064",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7944978/"
] |
First, if you've ever used `sys.argv` or `os.sep`, you've already used another module's variables and constants.
Because the way you share variables and constants is exactly the same way you share functions and classes.
In fact, functions, classes, variables, constants—they're all just module-global variables as far as Python is concerned. They may have values of different types, but they're the same kind of variable.
So, let's say you write this module:
```
# spam.py
cheese = ['Gouda', 'Edam']
def eggs():
print(cheese[-1])
```
If you `import spam`, you can use `cheese`, exactly the same way you use `eggs`:
```
import spam
# call a function
spam.eggs()
# access a variable
print(spam.cheese)
# mutate a variable's value
spam.cheese.append('Leyden')
spam.eggs() # now it prints Leyden instead of Edam
# even rebind a variable
spam.cheese = (1, 2, 3, 4)
spam.eggs() # now it prints 4
# even rebind a function
spam.eggs = lambda: print('monkeypatched')
spam.eggs()
```
---
C++ header files are really just a poor man's modules. Not every language is as flexible as Python, but almost every language from Ruby to Rust has some kind of real module system; only C++ (and C) requires you to fake it by having code that gets included into a bunch of different files at compile time.
|
If you are just looking to make function definitions, then this post may answer your question:
[Python: How to import other Python files](https://stackoverflow.com/questions/2349991/python-how-to-import-other-python-files)
Then you can define a function as per here:
<https://www.tutorialspoint.com/python/python_functions.htm>
Or if you are looking to make a class:
<https://docs.python.org/3/tutorial/classes.html>
You can look at example 3.9.5 in the previous link in order to understand how to create a shared variable among different object instances.
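As a quick illustration of the shared-variable behaviour that tutorial example covers (a minimal sketch with hypothetical names):

```python
class Dog:
    tricks = []  # class attribute: one list shared by ALL instances

    def __init__(self, name):
        self.name = name  # instance attribute: unique per instance

    def add_trick(self, trick):
        self.tricks.append(trick)  # mutates the shared class-level list

a = Dog("Fido")
b = Dog("Buddy")
a.add_trick("roll over")
print(b.tricks)  # ['roll over'] -- visible through the other instance too
```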
| 11,040
|
5,971,635
|
My Python script calls many Python functions and shell scripts. I want to set an environment variable in Python (the main calling function) and have all the daughter processes, including the shell scripts, see the environment variable set.
I need to set some environmental variables like this:
```
DEBUSSY 1
FSDB 1
```
`1` is a number, not a string. Additionally, how can I read the value stored in an environment variable? (Like `DEBUSSY`/`FSDB` in another python child script.)
|
2011/05/11
|
[
"https://Stackoverflow.com/questions/5971635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749632/"
] |
Try using the `os` module.
```
import os
os.environ['DEBUSSY'] = '1'
os.environ['FSDB'] = '1'
# Open child processes via os.system(), popen() or fork() and execv()
someVariable = int(os.environ['DEBUSSY'])
```
See the [Python docs](http://docs.python.org/library/os.html#os.environ) on `os.environ`. Also, for spawning child processes, see Python's [subprocess docs](http://docs.python.org/library/subprocess.html#module-subprocess).
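To confirm that a child process really does inherit the variable, here is a small sketch that spawns another Python via `subprocess` (instead of a shell script) and reads the value back:

```python
import os
import subprocess
import sys

os.environ['DEBUSSY'] = '1'  # environment values must be strings, not ints

# Any child spawned after this point inherits the variable.
out = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ['DEBUSSY'])"]
)
print(out.decode().strip())  # -> 1
```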
|
Use `os.environ['DEBUSSY']` for both reading and writing (note that the key is the string `'DEBUSSY'`, not a bare name); see <http://docs.python.org/library/os.html#os.environ>.
As for reading, you have to parse the number from the string yourself, of course.
| 11,041
|
3,148,352
|
I need help creating a GAE datastore Loader class for uploading data using appcfg.py.
Is there any other way to simplify this process?
Is there any detailed example better than the one [here](http://code.google.com/appengine/docs/python/tools/uploadingdata.html)?
When try using bulkloader.yaml:
```
Uploading data records.
[INFO ] Logging to bulkloader-log-20100701.041515
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20100701.041515.sql3
[INFO ] Connecting to livelihoodproducer.appspot.com/remote_api
[INFO ] Starting import; maximum 10 entities per post
[ERROR ] [Thread-1] WorkerThread:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 150, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 693, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 848, in _TransferItem
self.content = self.request_manager.EncodeContent(self.rows)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/bulkloader.py", line 1269, in EncodeContent
entity = loader.create_entity(values, key_name=key, parent=parent)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 385, in create_entity
return self.dict_to_entity(input_dict, self.bulkload_state)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 133, in dict_to_entity
self.__run_import_transforms(input_dict, instance, bulkload_state_copy)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 233, in __run_import_transforms
value = self.__dict_to_prop(transform, input_dict, bulkload_state)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 188, in __dict_to_prop
value = transform.import_transform(value)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/bulkloader_parser.py", line 93, in __call__
return self.method(*args, **kwargs)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/bulkload/transform.py", line 143, in generate_foreign_key_lambda
return datastore.Key.from_path(kind, value)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/datastore_types.py", line 387, in from_path
'received %r (a %s).' % (i + 2, id_or_name, typename(id_or_name)))
BadArgumentError: Expected an integer id or string name as argument 2; received None (a NoneType).
[INFO ] [Thread-3] Backing off due to errors: 1.0 seconds
[INFO ] Unexpected thread death: Thread-1
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in Thread-1: Expected an integer id or string name as argument 2; received None (a NoneType).
[INFO ] 30 entites total, 0 previously transferred
[INFO ] 0 entities (733 bytes) transferred in 2.8 seconds
[INFO ] Some entities not successfully transferred
```
In the process, I've downloaded CSV data that was manually inserted on appspot.com. When I try to upload my own CSV data, should the column order match exactly the CSV downloaded from appspot.com? What about blank values?
|
2010/06/30
|
[
"https://Stackoverflow.com/questions/3148352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/288541/"
] |
I've created config.yaml with the bulkloader config, and also written a simple helper function to process None references. I don't know why it's not done in the original helper.
The helper (file `helpers.py`) is very simple; just place it in the same directory where you placed `config.yaml`:
```
from google.appengine.api import datastore
def create_foreign_key(kind, key_is_id=False):
def generate_foreign_key_lambda(value):
if value is None:
return None
if key_is_id:
value = int(value)
return datastore.Key.from_path(kind, value)
return generate_foreign_key_lambda
```
And this is cut from my `config.yaml`:
```
python_preamble:
- import: helpers # this will import our helper
[other imports]
...
- kind: ArticleComment
connector: simplexml
connector_options:
xpath_to_nodes: "/blog/Comments/Comment"
style: element_centric
property_map:
- property: __key__
external_name: key
export_transform: transform.key_id_or_name_as_string
- property: parent_comment
external_name: parent-comment
export_transform: transform.key_id_or_name_as_string
import_transform: helpers.create_foreign_key('ArticleComment')
# ^^^^^^^ here it is
# use this instead of transform.create_foreign_key
```
|
It looks like you have reference properties with None values, such values are handled incorrectly by bulkloader's helpers.
| 11,047
|
68,490,787
|
Any time I make a change in a view, HTML, or CSS, I have to stop and re-run
```
python manage.py runserver
```
for my changes to be displayed. This is very annoying because it wastes a lot of my time trying to find the terminal and run it again. Is there a workaround for this?
|
2021/07/22
|
[
"https://Stackoverflow.com/questions/68490787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14296523/"
] |
`python manage.py runserver` normally performs hot reload of your Django application, unless you've updated the config in the settings.py file. Check that `DEBUG = True` in settings.py.
|
My advice is to use VS Code for Django development because it gives you an autosave feature, so you don't have to stop and rerun the server; the only thing you have to do is reload the web page. I hope this is helpful.
| 11,048
|
11,590,082
|
I am using python and sqlite3 to handle a website. I need all timezones to be in localtime, and I need daylight savings to be accounted for. The ideal method to do this would be to use sqlite to set a global datetime('now') to be +10 hours.
If I can work out how to change sqlite's 'now' with a command, then I was going to use a cronjob to adjust it (I would happily go with an easier method if anyone has one, but cronjob isn't too hard)
|
2012/07/21
|
[
"https://Stackoverflow.com/questions/11590082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1503619/"
] |
I'm not 100% on what you're asking, but to keep this simple I would say store your dates in UTC, and present them as local time if need be:
```
sqlite> select datetime('now', 'utc');
2012-07-21 09:58:21
sqlite> select datetime('now', 'localtime');
2012-07-21 13:58:33
```
See SQLite's [date and time functions documentation](http://www.sqlite.org/lang_datefunc.html).
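From Python, the same modifiers are available through the standard `sqlite3` module; a minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
utc_now, local_now = con.execute(
    "SELECT datetime('now'), datetime('now', 'localtime')"
).fetchone()
con.close()

# datetime('now') is already UTC; 'localtime' shifts it to the machine's zone.
print(utc_now)
print(local_now)
```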
|
You can try this code; I am in Taiwan, so I add 8 hours:
`datetime('now','+8 hours')`
| 11,049
|
9,597,122
|
I have established a basic hadoop master slave cluster setup and able to run mapreduce programs (including python) on the cluster.
Now I am trying to run a python code which accesses a C binary and so I am using the subprocess module. I am able to use the hadoop streaming for a normal python code but when I include the subprocess module to access a binary, the job is getting failed.
As you can see in the below logs, the hello executable is recognised to be used for the packaging, but still not able to run the code.
```
.
.
packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
JarBuilder.addNamedStream hello
.
.
12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
Streaming Job Failed!
```
Command I am trying is :
```
hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose
```
where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning.
My Python code is :
```
#!/usr/bin/env python
import subprocess
subprocess.call(["./hello"])
```
Any help with getting the executable to run with Python in Hadoop streaming, or with debugging this, would move me forward.
Thanks,
Ganesh
|
2012/03/07
|
[
"https://Stackoverflow.com/questions/9597122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1253987/"
] |
Used simple id and hash token like Amazon for now...
|
Your site can absolutely be an OAuth server (to clients) and an OAuth consumer (of other APIs) at the same time, the same way that a hairdresser can also be the customer of another hairdresser.
| 11,050
|
72,720,385
|
I created a model like this.
```
class BloodDiscard(models.Model):
timestamp = models.DateTimeField(auto_now_add=True, blank=True)
created_by = models.ForeignKey(Registration, on_delete=models.SET_NULL, null=True)
blood_group = models.ForeignKey(BloodGroupMaster, on_delete=models.SET_NULL, null=True)
blood_cells = models.ForeignKey(BloodCellsMaster, on_delete=models.SET_NULL, null=True)
quantity = models.FloatField()
```
But now I need to apply inheritance to my model, like this. [(models.Model) ---> (BaseModel)]
```
class BloodDiscard(BaseModel):
timestamp = models.DateTimeField(auto_now_add=True, blank=True)
created_by = models.ForeignKey(Registration, on_delete=models.SET_NULL, null=True)
blood_group = models.ForeignKey(BloodGroupMaster, on_delete=models.SET_NULL, null=True)
blood_cells = models.ForeignKey(BloodCellsMaster, on_delete=models.SET_NULL, null=True)
quantity = models.FloatField()
```
BaseModel is another model I created before but forgot to inherit it in my current model.
```
class BaseModel(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
status_master = models.ForeignKey(StatusMaster,on_delete=models.SET_NULL,default=3,null=True, blank=True)
```
I applied "python manage.py makemigrations" after changing (models.Model) ---> (BaseModel) and got this...
```
(venv) G:\office\medicover\medicover_bloodbank_django>python manage.py makemigrations
You are trying to add the field 'created_at' with 'auto_now_add=True' to blooddiscard without a default; the database needs something to populate existing rows.
1) Provide a one-off default now (will be set on all existing rows)
2) Quit, and let me add a default in models.py
Select an option: 1
Please enter the default value now, as valid Python
You can accept the default 'timezone.now' by pressing 'Enter' or you can provide another value.
The datetime and django.utils.timezone modules are available, so you can do e.g. timezone.now
Type 'exit' to exit this prompt
[default: timezone.now] >>>
Migrations for 'Form':
Form\migrations\0018_auto_20220622_2322.py
- Add field created_at to blooddiscard
- Add field status_master to blooddiscard
- Add field updated_at to blooddiscard
```
But after that, when I apply `python manage.py migrate`, I get this error.
```
(venv) G:\office\medicover\medicover_bloodbank_django>python manage.py migrate
Operations to perform:
Apply all migrations: Form, Master, User, admin, auth, contenttypes, sessions
Running migrations:
Applying Form.0018_auto_20220622_2322...Traceback (most recent call last):
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.ForeignKeyViolation: insert or update on table "Form_blooddiscard" violates foreign key constraint "Form_blooddiscard_status_master_id_ffe293fa_fk_Master_st"
DETAIL: Key (status_master_id)=(3) is not present in table "Master_statusmaster".
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
main()
File "manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\__init__.py", line 419, in execute_from_command_line
utility.execute()
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\base.py", line 398, in execute
output = self.handle(*args, **options)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\core\management\commands\migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\migrations\executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\migrations\executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\migrations\migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\migrations\operations\fields.py", line 104, in database_forwards
schema_editor.add_field(
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\base\schema.py", line 522, in add_field
self.execute(sql, params)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\base\schema.py", line 145, in execute
cursor.execute(sql, params)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 98, in execute
return super().execute(sql, params)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "G:\office\medicover\medicover_bloodbank_django\venv\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: insert or update on table "Form_blooddiscard" violates foreign key constraint "Form_blooddiscard_status_master_id_ffe293fa_fk_Master_st"
DETAIL: Key (status_master_id)=(3) is not present in table "Master_statusmaster".
```
And when running `python manage.py runserver` I get this...
```
(venv) G:\office\medicover\medicover_bloodbank_django>python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
You have 1 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): Form.
Run 'python manage.py migrate' to apply them.
June 22, 2022 - 23:23:51
Django version 3.2.5, using settings 'App.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
```
**NOTE:-**
In the StatusMaster table there are only two rows, with ids 1 and 2.
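The failure mechanism can be reproduced in miniature: the migration backfills `status_master_id=3` (the field's `default`) into existing rows, but no StatusMaster row with id 3 exists, so the foreign key constraint fires. A hedged illustration using the stdlib's sqlite3 (table names simplified; purely illustrative, not the project's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite needs FK enforcement enabled explicitly
con.execute("CREATE TABLE statusmaster (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO statusmaster (id) VALUES (?)", [(1,), (2,)])
con.execute(
    "CREATE TABLE blooddiscard ("
    " id INTEGER PRIMARY KEY,"
    " status_master_id INTEGER REFERENCES statusmaster (id))"
)
try:
    # mimics the migration backfilling default=3 onto existing rows
    con.execute("INSERT INTO blooddiscard (status_master_id) VALUES (3)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

Adding a StatusMaster row with id 3 first, or pointing `default` at an existing id, would make the same insert succeed.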
|
2022/06/22
|
[
"https://Stackoverflow.com/questions/72720385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15809668/"
] |
You can use the `matplotlib`'s `Dateformatter`. Updated code and plot below. I did notice that the `Date` column you posted had dates on 2nd and 17th. I changed those to show everything on the 2nd. Otherwise, there would be too many entries. Hope this helps...
```
df = pd.DataFrame({"Date":["2015-02-02 10:19:00","2015-02-02 12:22:00","2015-02-02 14:57:00","2015-02-02 16:58:59"],"Occurrence":[1,0,1,1]})
df["Date"] = pd.to_datetime(df["Date"])
import seaborn as sns
sns.set_theme(style="darkgrid")
ax = sns.lineplot(x="Date", y="Occurrence", data=df)
import matplotlib.dates as mdates
ax.xaxis.set_major_locator(mdates.HourLocator(interval=2))
# set formatter
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
```
**Output Plot**
[](https://i.stack.imgur.com/FAQm4.png)
|
You would want to convert your `['Date']` column to only include time information. I'm not sure whether you want the data ordered by date or not, but that should show just the time information on the X-axis:
```
df['Date'].dt.time
```
| 11,051
|
5,043,188
|
I have trouble running a Django project on a production server with Apache and mod\_wsgi. This error happens when I start Apache and go to the site for the first time, or navigate to it from another page:
>
> ImportError at /
>
> Exception Value: cannot import name MyName
>
> Exception Location /var/www/projectname/appname/somemodule.py
>
>
>
When I reload the page, the error disappears and the site works fine. Another point is that this error happens selectively and sometimes does not appear.
In the project I use imports without the project name prefix (I mean `from accounts.models import Account` instead of `from projectname.accounts.models import Account`).
On the development server (`manage.py runserver`) everything works fine without any trouble.
I have tried many variations of my Apache and WSGI script configurations, but the problem is not solved.
Here my current projectname.wsgi:
```
#!/usr/bin/env python
import os, sys, re
sys.path.append('/var/www/projectname')
sys.path.append('/var/www')
os.environ['PYTHON_EGG_CACHE'] = '/var/www/projectname/.python-egg'
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```
Here is some parts from apache config:
```
<VirtualHost ip:80>
ServerAdmin admin@server.com
DocumentRoot /var/www
ServerName www.projectname.com
WSGIScriptAlias / "/var/www/projectname/projectname.wsgi"
WSGIDaemonProcess projectname threads=5 maximum-requests=5000
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
....
```
Also i'm use a separate Virtual Host for SSL.
I hope somebody can help me.
Thanks!
|
2011/02/18
|
[
"https://Stackoverflow.com/questions/5043188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/337077/"
] |
```
php -i | more
```
should work on both Linux and Unix. On many Linux systems, `more` behaves much like the more capable `less` for this purpose.
|
On Linux you could do from the shell
```
php -r "phpinfo();" | less
```
| 11,052
|
33,124,269
|
I have been following a Caffe example [here](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb) to plot the convolution kernels from my ConvNet. I have attached an image of my kernels below; however, it looks nothing like the kernels in the example. I have followed the example exactly. Does anyone know what the issue may be?
My net is trained on a set of simulated images (with two classes) and the performance of the net is pretty good, around 80% test accuracy.
[](https://i.stack.imgur.com/iwWNU.png)
```
layer {
name: "input"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mean_file: "/tmp/stage5/mean/mean.binaryproto"
}
data_param {
source: "/tmp/stage5/train/train-lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "input"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mean_file: "/tmp/stage5/mean/mean.binaryproto"
}
data_param {
source: "/tmp/stage5/validation/validation-lmdb"
batch_size: 10
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
convolution_param {
num_output: 40
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool1"
top: "ip1"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1.0
}
param {
lr_mult: 2.0
}
inner_product_param {
num_output: 2
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
```
|
2015/10/14
|
[
"https://Stackoverflow.com/questions/33124269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5321075/"
] |
Twilio developer evangelist here.
You absolutely can do app to app calls using the iOS SDK. Let me explain.
Your Twilio Client capability token is created with a TwiML Application, which supplies the URL that Twilio will hit when a call is created to find out what to do with it. Normally, you would pass a phone number as a parameter to your `TCDevice`'s `connect` which would be handed to your app URL when the call connects. This would then be used to produce TwiML to direct the call onto that number, like this:
```
<Response>
<Dial>
<Number>{{ to_number }}</Number>
</Dial>
</Response>
```
To make this work for client to client calls, you can pass another client ID to the URL and on your server, instead of `<Dial>`ing to a `<Number>` you would `<Dial>` to a `<Client>`. Like so:
```
<Response>
<Dial>
<Client>{{ client_id }}</Client>
</Dial>
</Response>
```
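On the server side, that document can be produced with any templating or XML tool. A minimal stdlib sketch (not the official Twilio helper library, which would normally be used for this):

```python
import xml.etree.ElementTree as ET

def client_dial_twiml(client_id):
    # Builds <Response><Dial><Client>client_id</Client></Dial></Response>
    response = ET.Element("Response")
    dial = ET.SubElement(response, "Dial")
    client = ET.SubElement(dial, "Client")
    client.text = client_id
    return ET.tostring(response, encoding="unicode")
```

Your application URL would return this string (with content type `text/xml`) when the incoming request carries another client's ID.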
You can discover which clients are available by listening for [presence events](https://www.twilio.com/docs/client/ios/TCPresenceEvent) with your `TCDevice` object. You will also have to [handle incoming calls within applications](https://www.twilio.com/docs/quickstart/php/ios-client/incoming-connections).
I recommend following the [Twilio Client iOS Quickstart guide](https://www.twilio.com/docs/quickstart/php/ios-client) all the way through, which will guide you through most of these points, including passing parameters to your application URL and generating the right TwiML to accomplish this (though it doesn't cover presence events).
Let me know if this helps at all.
|
Not sure it is possible with Twilio. We have used Twilio for the same purpose you mentioned (calls to phone numbers) and it worked fine. I think that is the main purpose of Twilio. Anyway, I'm not sure about it.
Maybe [VoIP](https://en.wikipedia.org/wiki/Voice_over_IP) will suit your needs. **PortSIP** is a good SDK for voice and video communication between apps.
You can download the **iOS SDK** from here <https://www.portsip.com/downloads-center/>
Like Twilio, it is paid only if you want to use it for business.
For more refer [here](http://www.portsip.com/)
Thanks.
| 11,053
|
49,135,963
|
I am new to Python and currently working on data analysis.
I am trying to open multiple folders in a loop and read all the files in each folder.
E.g. the working directory contains 10 folders that need to be opened, and each folder contains 10 files.
My code for opening each folder's .txt files:
```
import glob
file_open = glob.glob("home/....../folder1/*.txt")
```
I want to open folder 1 and read all files, then go to folder 2 and read all files... until folder 10, reading all files.
Can anyone help me write a loop to open the folders, including which library needs to be used?
My background is in R; for example, in R I could write a loop to open folders and files using the code below.
```
folder_open <- dir("......./main/")
for (n in 1:length(folder_open)) {
  file_open <- dir(paste0("......./main/", folder_open[n]))
  for (k in 1:length(file_open)) {
    file_content <- readLines(paste0("......./main/", folder_open[n], "/", file_open[k]))
    # Finally I can read all folders and files.
  }
}
```
|
2018/03/06
|
[
"https://Stackoverflow.com/questions/49135963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9452236/"
] |
This recursive method will scan all directories within a given directory and then print the names of the `txt` files. I kindly invite you to take it forward.
```
import os
def scan_folder(parent):
# iterate over all the files in directory 'parent'
for file_name in os.listdir(parent):
if file_name.endswith(".txt"):
# if it's a txt file, print its name (or do whatever you want)
print(file_name)
else:
current_path = "".join((parent, "/", file_name))
if os.path.isdir(current_path):
# if we're checking a sub-directory, recursively call this method
scan_folder(current_path)
scan_folder("/example/path") # Insert parent direcotry's path
```
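For comparison, the standard library's `os.walk` does the recursion for you; an equivalent sketch:

```python
import os

def txt_files(parent):
    # os.walk yields a (directory, subdirs, files) triple for every
    # directory beneath parent, so no explicit recursion is needed
    found = []
    for dirpath, _dirnames, filenames in os.walk(parent):
        for name in filenames:
            if name.endswith(".txt"):
                found.append(os.path.join(dirpath, name))
    return found
```

`os.walk` also handles arbitrarily deep nesting, not just one level of sub-folders.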
|
This code will look for all directories inside a directory, printing out the names of all files found there:
```
#--------*---------*---------*---------*---------*---------*---------*---------*
# Desc: print filenames one level down from starting folder
#--------*---------*---------*---------*---------*---------*---------*---------*
import os, fnmatch, sys
def find_dirs(directory, pattern):
for item in os.listdir(directory):
if os.path.isdir(os.path.join(directory, item)):
if fnmatch.fnmatch(item, pattern):
filename = os.path.join(directory, item)
yield filename
def find_files(directory, pattern):
for item in os.listdir(directory):
if os.path.isfile(os.path.join(directory, item)):
if fnmatch.fnmatch(item, pattern):
filename = os.path.join(directory, item)
yield filename
#--------*---------*---------*---------*---------*---------*---------*---------#
while True:# M A I N L I N E #
#--------*---------*---------*---------*---------*---------*---------*---------#
# # Set directory
    os.chdir("C:\\Users\\Mike\\Desktop")
for filedir in find_dirs('.', '*'):
print ('Got directory:', filedir)
for filename in find_files(filedir, '*'):
print (filename)
sys.exit() # END PROGRAM
```
| 11,055
|
6,131,629
|
I'm trying to create a module that initializes a serial port connection using python:
```
import serial
class myserial:
    def __init__(self, port, baudrate):
        self = serial.Serial(port, baudrate)
```
When I run this in Python I get an AttributeError message stating that self does not have an attribute open. Does anyone have any idea what's wrong with this code above? Any help would be greatly appreciated.
thanks
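For what it's worth, a likely cause (an assumption, not from the original post): assigning to `self` inside `__init__` only rebinds the local name and does not change the instance being constructed. The usual pattern is to keep the connection as an attribute; sketched here with a stand-in class, since `pyserial` may not be installed:

```python
class FakeSerial:
    """Stand-in for serial.Serial, used only for illustration."""
    def __init__(self, port, baudrate):
        self.port = port
        self.baudrate = baudrate

class MySerial:
    def __init__(self, port, baudrate):
        # store the connection on the instance instead of rebinding self
        self.conn = FakeSerial(port, baudrate)

s = MySerial("/dev/ttyUSB0", 9600)
```

Subclassing `serial.Serial` and calling the parent's `__init__` would be another option.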
|
2011/05/25
|
[
"https://Stackoverflow.com/questions/6131629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Those are the basic pieces of information. Anything beyond that could be viewed as SpyWare-like and privacy advocates will [justifiably] frown upon it.
The best way to obtain more information from your users is to ask them, make the fields optional, and inform your user of exactly what you will be using the information for. Will you be mailing them a newsletter?
If you plan to eMail them, then you MUST use the "confirmed opt-in" approach -- get their consent (by having them respond to an eMail, keyed with a special-secret-unique number, confirming that they are granting permission for you to send them that newsletter or whatever notifications you plan to send to them) first.
As long as you're up-front about how you plan to use the information, and give the users options to decide how you can use it (these options should all be "you do NOT have permission" by default), you're likely to get more users who are willing to trust you and provide you with better quality information. For those who don't wish to reveal any personal information about themselves, don't waste your time trying to get it because many of them take steps to prevent that and hide anyway (and that is their right).
|
For what end?
Remember that the client IP is close to meaningless now. All users coming from the same proxy or the same NAT point will have the same client IP. Years ago, all AOL traffic came from just a few proxies, though now actual AOL users may be outnumbered by the proxies :).
If you want to uniquely identify a user, its easy to create a cookie in apache (mod\_usertrack) or whatever framework you use. If the person blocks cookies, please respect that and don't try tricks to track them anyway. Or take the lesson of Google, make it so useful, people will choose the utility over cookie worries.
Remember that Javascript runs on the client. Your document.write() will show the info on their webpage, not do anything for your server. You'd want to use Javascript to put this info in a cookie, or store with a form submission if you have any forms.
| 11,065
|
24,751,181
|
I have a text file named headerValue.txt which contains the following data:-
```
Name Age
sam 22
Bob 21
```
I am trying to write a Python script that adds a new header called 'Eligibility'.
It should read the lines from the text file and check whether the age is more than 18; if so, it prints 'True' underneath the 'Eligibility' header. It should also add one more header named 'Adult' and print 'yes' underneath it ('no' if the age is less than 18).
Since in the text file the age of both is more than 18, the output will be something like:
```
Name Age Eligibility Adult
sam 22 True yes
Bob 21 True yes
```
code snippet:-
```
fo = open("headerValue.txt", "r")
data = fo.readlines()
for line in data:
if "Age" in line:
if int(Age) > 18:
Eligibility = "True"
Adult = "yes"
```
I am kind of stuck here since I am new to Python. Any help on how to update the text file and add the headers with their values? Thanks.
|
2014/07/15
|
[
"https://Stackoverflow.com/questions/24751181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3559954/"
] |
```
fo = open("headerValue.txt", "r")
data = [l.split() for l in fo.readlines()]
headers = data[0]
headers.extend(('Eligibility', 'Adult'))
ageind = headers.index('Age')
for line in data[1:]:
if int(line[ageind]) > 18:
line.extend(('True', 'yes'))
else:
line.extend(('False', 'no'))
```
The `if` in your code would only be true for the first line (not the one you're interested in). I replaced it with checking for the index of age. Also, usually variable names are not capitalized.
Also, regarding `data = [l.split() for l in fo.readlines()]` - it looks like your file is space separated. If it's tab/comma separated you can split accordingly, but I suggest using [csv](https://docs.python.org/2/library/csv.html).
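To illustrate that suggestion, `csv.DictReader`/`DictWriter` handle the header row for you. A sketch using an in-memory comma-separated version of the data (the question's file is space separated, so the delimiter would need adjusting):

```python
import csv
import io

src = io.StringIO("Name,Age\nsam,22\nBob,21\n")
rows = []
for row in csv.DictReader(src):          # each row is a dict keyed by header
    eligible = int(row["Age"]) > 18
    row["Eligibility"] = str(eligible)
    row["Adult"] = "yes" if eligible else "no"
    rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Name", "Age", "Eligibility", "Adult"])
writer.writeheader()
writer.writerows(rows)
```

With a real file you would open it for reading, then write the result back (or to a new file) the same way.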
|
This module is suited to your needs: <https://docs.python.org/2/library/fileinput.html?highlight=fileinput#fileinput>
For in-place editing you can use:
```
#!/usr/bin/python
import fileinput
cont = 0
for line in fileinput.input("./path/to/file", inplace=True):
if not cont:
name = "Name"
age = "Age"
Eligibility= "Eligibility"
Adult = "Adult"
else:
        name, age = line.split()
if int(age) > 18:
Eligibility = "True"
Adult = "yes"
print "{0}\t{1}\t{2}\t{3}".format(name, age, Eligibility, Adult)
cont += 1
```
This , will **update** your **original** file.
| 11,075
|
67,665,789
|
Hi, I am a newbie to programming, so I spent 4 days trying to learn Python. I invented some new swear words too.
I was particularly interested in trying as an exercise some web-scraping to learn something new and get some exposure to see how it all works.
This is what I came up with. See the code at the end. It works (to a degree).
But what's missing?
1. This website has pagination on it, in this case 11 pages' worth. How would you go about adding to this script and getting Python to look at those other pages too and carry out the same scrape, i.e. scrape page 1, then pages 2, 3 ... 11, and post the results to a CSV?
<https://www.organicwine.com.au/vegan/?pgnum=1>
<https://www.organicwine.com.au/vegan/?pgnum=2>
<https://www.organicwine.com.au/vegan/?pgnum=3>
<https://www.organicwine.com.au/vegan/?pgnum=4>
<https://www.organicwine.com.au/vegan/?pgnum=5>
<https://www.organicwine.com.au/vegan/?pgnum=6>
<https://www.organicwine.com.au/vegan/?pgnum=7>
8, 9,10, and 11
2. On these pages the images are actually thumbnail images, something like 251px by 251px.
How would you go about adding to this script so that, while you are at it, it follows the links to the detailed product page and captures the image link from there, where the images are 1600px by 1600px, and posts those links to the CSV?
<https://www.organicwine.com.au/mercer-wines-preservative-free-shiraz-2020>
3. When we have identified those links, let's also download those larger images to a folder.
4. CSV writer. Also, I don't understand line 58, `for i in range(23)`: how would I know how many products there were without counting them (i.e. there are 24 products on page one)?
So this is what I want to learn how to do. Not asking for much (he says sarcastically). I could pay someone on Upwork to do it, but where's the fun in that? And that does not teach me how to 'fish'.
Where is a good place to learn Python? A master class on web scraping? It seems to be trial and error, blog posts, and wherever you can pick up bits of information to piece it all together.
Maybe I need a mentor.
I wish there had been someone I could have reached out to, to tell me what BeautifulSoup was all about. I worked it out by trial and error and mostly guessing. No real understanding of it, but it just works.
Anyway, any help in pulling this all together to produce a decent script would be greatly appreciated.
Hopefully there is someone out there who would not mind helping me.
Apologies to organicwine for using their website as a learning tool. I do not wish to cause any harm or be a nuisance to the site
Thank you in advance
John
code:
```
import requests
import csv
from bs4 import BeautifulSoup
URL = "https://www.organicwine.com.au/vegan/?pgnum=1"
response = requests.get(URL)
website_html = response.text
soup = BeautifulSoup(website_html, "html.parser")
product_title = soup.find_all('div', class_="caption")
# print(product_title)
winename = []
for wine in product_title:
winetext = wine.a.text
winename.append(winetext)
print(f'''Wine Name: {winetext}''')
# print(f'''\nWine Name: {winename}\n''')
product_price = soup.find_all('div', class_='wrap-thumb-mob')
# print(product_price.text)
price =[]
for wine in product_price:
wineprice = wine.span.text
price.append(wineprice)
print(f'''Wine Price: {wineprice}''')
# print(f'''\nWine Price: {price}\n''')
image =[]
product_image_link = (soup.find_all('div', class_='thumbnail-image'))
# print(product_image_link)
for imagelink in product_image_link:
wineimagelink = imagelink.a['href']
image.append(wineimagelink)
# image.append(imagelink)
print(f'''Wine Image Lin: {wineimagelink}''')
# print(f'''\nWine Image: {image}\n''')
#
#
# """ writing data to CSV """
# open OrganicWine2.csv file in "write" mode
# newline stops a blank line appearing in csv
with open('OrganicWine2.csv', 'w',newline='') as file:
# create a "writer" object
writer = csv.writer(file, delimiter=',')
# use "writer" obj to write
# you should give a "list"
writer.writerow(["Wine Name", "Wine Price", "Wine Image Link"])
for i in range(23):
writer.writerow([
winename[i],
price[i],
image[i],
])
```
|
2021/05/24
|
[
"https://Stackoverflow.com/questions/67665789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16011875/"
] |
In this case, to do pagination, instead of `for i in range(1, 100)` which is a hardcoded way of paging, it's better to use a `while` loop to dynamically paginate all possible pages.
"While" is an infinite loop and it will be executed until the transition to the next page is possible, in this case it will check for the presence of the button for the next page, for which the CSS selector ".fa-chevron-right" is responsible:
```py
if soup.select_one(".fa-chevron-right"):
params["pgnum"] += 1 # go to the next page
else:
break
```
To extract the full size image an additional request is required, CSS selector ".main-image a" is responsible for full-size images:
```py
full_image_html = requests.get(link, headers=headers, timeout=30)
image_soup = BeautifulSoup(full_image_html.text, "lxml")
try:
original_image = f'https://www.organicwine.com.au{image_soup.select_one(".main-image a")["href"]}'
except:
original_image = None
```
An additional step to avoid being blocked is to [rotate user-agents](https://serpapi.com/blog/how-to-reduce-chance-of-being-blocked-while-web/#rotate-user-agents). Ideally, it would be better to use residential proxies with random user-agent.
[`pandas`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html) can be used to extract data in CSV format:
```py
pd.DataFrame(data=data).to_csv("<csv_file_name>.csv", index=False)
```
For a quick and easy search for CSS selectors, you can use the [SelectorGadget](https://selectorgadget.com/) Chrome extension (*it does not always work perfectly if the website is rendered via JavaScript*).
Check code with pagination and saving information to CSV in [online IDE](https://replit.com/@denisskopa/scrape-organicwine-with-pagination-bs4#main.py).
```py
from bs4 import BeautifulSoup
import requests, json, lxml
import pandas as pd
# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36",
}
params = {
'pgnum': 1 # number page by default
}
data = []
while True:
page = requests.get(
"https://www.organicwine.com.au/vegan/?",
params=params,
headers=headers,
timeout=30,
)
soup = BeautifulSoup(page.text, "lxml")
print(f"Extracting page: {params['pgnum']}")
for products in soup.select(".price-btn-conts"):
try:
title = products.select_one(".new-h3").text
except:
title = None
try:
price = products.select_one(".price").text.strip()
except:
price = None
try:
snippet = products.select_one(".price-btn-conts p a").text
except:
snippet = None
try:
link = products.select_one(".new-h3 a")["href"]
except:
link = None
# additional request is needed to extract full size image
full_image_html = requests.get(link, headers=headers, timeout=30)
image_soup = BeautifulSoup(full_image_html.text, "lxml")
try:
original_image = f'https://www.organicwine.com.au{image_soup.select_one(".main-image a")["href"]}'
except:
original_image = None
data.append(
{
"title": title,
"price": price,
"snippet": snippet,
"link": link,
"original_image": original_image
}
)
if soup.select_one(".fa-chevron-right"):
params["pgnum"] += 1
else:
break
# save to CSV (install, import pandas as pd)
pd.DataFrame(data=data).to_csv("<csv_file_name>.csv", index=False)
print(json.dumps(data, indent=2, ensure_ascii=False))
```
Example output:
```json
[
{
"title": "Yangarra McLaren Vale GSM 2016",
"price": "$29.78 in a straight 12\nor $34.99 each",
"snippet": "The Yangarra GSM is a careful blending of Grenache, Shiraz and Mourvèdre in which the composition varies from year to year, conveying the traditional estate blends of the southern Rhône. The backbone of the wine comes fr...",
"link": "https://www.organicwine.com.au/yangarra-mclaren-vale-gsm-2016",
"original_image": "https://www.organicwine.com.au/assets/full/YG_GSM_16.png?20211110083637"
},
{
"title": "Yangarra Old Vine Grenache 2020",
"price": "$37.64 in a straight 12\nor $41.99 each",
"snippet": "Produced from the fruit of dry grown bush vines planted high up in the Estate's elevated vineyards in deep sandy soils. These venerated vines date from 1946 and produce a wine that is complex, perfumed and elegant with a...",
"link": "https://www.organicwine.com.au/yangarra-old-vine-grenache-2020",
"original_image": "https://www.organicwine.com.au/assets/full/YG_GRE_20.jpg?20210710165951"
},
#...
]
```
|
Create the URL by putting the page number in it, then put the rest of your code into a `for` loop, and you can use `len(winename)` to count how many results you have. You should do the writing outside the `for` loop. Here's your code with those changes:
```py
import requests
import csv
from bs4 import BeautifulSoup
num_pages = 11
result = []
for pgnum in range(num_pages):
url = f"https://www.organicwine.com.au/vegan/?pgnum={pgnum+1}"
response = requests.get(url)
website_html = response.text
soup = BeautifulSoup(website_html, "html.parser")
product_title = soup.find_all("div", class_="caption")
winename = []
for wine in product_title:
winetext = wine.a.text
winename.append(winetext)
product_price = soup.find_all("div", class_="wrap-thumb-mob")
price = []
for wine in product_price:
wineprice = wine.span.text
price.append(wineprice)
image = []
product_image_link = soup.find_all("div", class_="thumbnail-image")
for imagelink in product_image_link:
winelink = imagelink.a["href"]
response = requests.get(winelink)
wine_page_soup = BeautifulSoup(response.text, "html.parser")
main_image = wine_page_soup.find("a", class_="fancybox")
image.append(main_image['href'])
for i in range(len(winename)):
result.append([winename[i], price[i], image[i]])
with open("/tmp/OrganicWine2.csv", "w", newline="") as file:
writer = csv.writer(file, delimiter=",")
writer.writerow(["Wine Name", "Wine Price", "Wine Image Link"])
    writer.writerows(result)
```
And here's how I would rewrite your code to accomplish this task. It's more pythonic (you should basically never write `range(len(something))`, there's always a cleaner way) and it doesn't require knowing how many pages of results there are:
```py
import csv
import itertools
import time

import requests
from bs4 import BeautifulSoup

data = []
# Try opening 100 pages at most, in case the scraping code is broken
# which can happen because websites change.
for pgnum in range(1, 100):
    url = f"https://www.organicwine.com.au/vegan/?pgnum={pgnum}"
    response = requests.get(url)
    website_html = response.text
    soup = BeautifulSoup(website_html, "html.parser")

    search_results = soup.find_all("div", class_="thumbnail")
    for search_result in search_results:
        name = search_result.find("div", class_="caption").a.text
        price = search_result.find("p", class_="price").span.text
        # link to the product's page
        link = search_result.find("div", class_="thumbnail-image").a["href"]

        # get the full resolution product image
        response = requests.get(link)
        time.sleep(1)  # rate limit
        wine_page_soup = BeautifulSoup(response.text, "html.parser")
        main_image = wine_page_soup.find("a", class_="fancybox")
        image_url = main_image["href"]

        # or you can just "guess" it from the thumbnail's URL
        # thumbnail = search_result.find("div", class_="thumbnail-image").a.img['src']
        # image_url = thumbnail.replace('/thumbL/', '/full/')

        data.append([name, price, link, image_url])

    # if there's no "next page" button or no search results on the current page,
    # stop scraping
    if not soup.find("i", class_="fa-chevron-right") or not search_results:
        break

    # rate limit
    time.sleep(1)

with open("/tmp/OrganicWine3.csv", "w", newline="") as file:
    writer = csv.writer(file, delimiter=",")
    writer.writerow(["Wine Name", "Wine Price", "Wine Link", "Wine Image Link"])
    writer.writerows(data)
```
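As an aside on the `range(len(...))` point: with made-up sample data standing in for the scraped lists, `zip` pairs the parallel lists directly, which is how the row-building in the first version could be written without indices:

```python
# Illustrative sample data standing in for the scraped lists
winename = ["Wine A", "Wine B"]
price = ["$20", "$30"]
image = ["a.jpg", "b.jpg"]

# zip walks the three lists in lockstep, no index bookkeeping needed
result = [list(row) for row in zip(winename, price, image)]
print(result)  # [['Wine A', '$20', 'a.jpg'], ['Wine B', '$30', 'b.jpg']]
```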
| 11,078
|
70,883,363
|
I am working on a python project that depends on some other files. It all works fine while testing. However, I want the program to run on start up. The working directory for programs that run on start up seems to be `C:Windows\system32`. When installing a program, it usually asks where to install it and no matter where you put it, if it runs on start up, it knows where its files are located. How do they achieve that? Also, how to achieve the same thing in python?
|
2022/01/27
|
[
"https://Stackoverflow.com/questions/70883363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12209262/"
] |
>
> But now the project grade file seems different.
>
>
>
Yes, starting with the new [Bumblebee](https://android-developers.googleblog.com/2022/01/android-studio-bumblebee-202111-stable.html) update of Android Studio, the build.gradle (Project) file is changed. In order to be able to use Google Services, you have to add to your build.gradle (Project) file the following lines:
```
plugins {
    id 'com.android.application' version '7.1.0' apply false
    id 'com.android.library' version '7.1.0' apply false
    id 'com.google.gms.google-services' version '4.3.0' apply false
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
```
And inside your build.gradle (Module) file, the following plugin IDs:
```
plugins {
    id 'com.android.application'
    id 'com.google.gms.google-services'
}
```
In this way, your Firebase services will finally work.
P.S. Not 100% sure if the [Firebase Assistant](https://firebase.google.com/docs/android/setup) is updated with these new Android Studio changes.
**Edit 2022-07-14**:
Here is a repo that uses this new structure:
* <https://github.com/alexmamo/FirestoreCleanArchitectureApp>
**Edit:**
There is nothing changed regarding the way you add dependencies in the build.gradle (Module) file. If for example, you want to add Firestore and Glide, please use:
```
dependencies {
    // Regular dependencies
    implementation platform("com.google.firebase:firebase-bom:29.0.4")
    implementation "com.google.firebase:firebase-firestore"
    implementation "com.github.bumptech.glide:glide:4.12.0"
}
```
**Edit2:**
With the new updates, you don't need to worry about adding any lines under the repositories { }. That was a requirement in the previous versions.
|
I've had the same problem. Just add this in the build.gradle (Project):
```
id 'com.google.gms.google-services' version '4.3.0' apply false
```
and then add this in the build.gradle (Module):
```
dependencies {
    // Regular dependencies
    implementation platform("com.google.firebase:firebase-bom:30.4.0")
    implementation "com.google.firebase:firebase-firestore"
}
```
| 11,079
|
51,039,271
|
<https://github.com/ITCoders/Human-detection-and-Tracking/blob/master/main.py>
This is the code I obtained for human detection. I'm using Anaconda Navigator (Jupyter Notebook). How can I use the argument parser in this? How can I give the video path with *-v*? Can anyone please suggest a solution, given that the program is run by clicking the run button or pressing *Shift+Enter*? I need to do human detection. I'm a beginner to Python and OpenCV, so please do help.
|
2018/06/26
|
[
"https://Stackoverflow.com/questions/51039271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9993397/"
] |
What you are asking seems similar to: [Passing command line arguments to argv in jupyter/ipython notebook](https://stackoverflow.com/questions/37534440/passing-command-line-arguments-to-argv-in-jupyter-ipython-notebook)
There are two different methods mentioned in the post that were helpful. That said, I would suggest using command line tools and a Python IDE for writing scripts to run machine learning models. IPython may be helpful for visualization, fast debugging or running pre-trained models on commonly available datasets.
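For a quick illustration of the first method, `parse_args` accepts an explicit argument list, so inside a notebook you can hand it the values that would normally come from the command line (the `-v` option and file name below are placeholders, not the linked project's actual interface):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--video", help="path to the video file")

# Pass the arguments as a list instead of letting argparse read sys.argv,
# which in Jupyter contains the kernel's own options.
args = parser.parse_args(["-v", "video.mp4"])
print(args.video)  # video.mp4
```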
|
I tried out the answers listed on "[Passing command line arguments to argv in jupyter/ipython notebook](https://stackoverflow.com/questions/37534440/passing-command-line-arguments-to-argv-in-jupyter-ipython-notebook)", and came up with a different solution.
My original code was
```
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to input image")
ap.add_argument("-y", "--yolo", required=True, help="base path to YOLO directory")
ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3, help="threshold when applying non-maxima suppression")
args = vars(ap.parse_args())
```
The most common solution was to create a dummy class, which I wrote as:
```
class Args():
    image = 'photo.jpg'
    yolo = 'yolo-coco'
    confidence = 0.5
    threshold = 0.3

args = Args()
```
but further code snippets were producing an error.
So I printed `args` after `vars(ap.parse_args())` and found that it was a dictionary.
So just create a dictionary for the original args:
```
args={"image": 'photo.jpg', "yolo": 'yolo-coco', "confidence": 0.5,"threshold": 0.3}
```
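Since `vars(ap.parse_args())` is just a dict, another option is to keep the parser and feed it a fake argument list; the result matches the hand-written dict. A minimal sketch using only two of the options from the snippet above:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True)
ap.add_argument("-c", "--confidence", type=float, default=0.5)

# Supplying a list sidesteps the notebook's sys.argv entirely
args = vars(ap.parse_args(["-i", "photo.jpg"]))
print(args)  # {'image': 'photo.jpg', 'confidence': 0.5}
```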
| 11,081
|
72,879,863
|
YOLOv5 detects perfectly when I run detect.py, but unfortunately with DeepSort, track.py is not tracking and not even detecting. How do I set my tracker's parameters?
---
yolov5:
```
>> python detect.py --source video.mp4 --weights best.pt
```
yolov5+deepsort:
```
>> python track.py --yolo-weights best.pt --source video.mp4 --strong-sort-weights osnet_x0_25_msmt17.pt --show-vid --imgsz 640 --hide-labels
```
---
```
import argparse
from email.headerregistry import ContentDispositionHeader
import os
from pkg_resources import fixup_namespace_packages
# limit the number of cpus used by high performance libraries
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
import sys
import numpy as np
from pathlib import Path
import torch
import torch.backends.cudnn as cudnn
FILE = Path(__file__).resolve()
ROOT = FILE.parents[0] # yolov5 strongsort root directory
WEIGHTS = ROOT / 'weights'
if str(ROOT) not in sys.path:
sys.path.append(str(ROOT)) # add ROOT to PATH
if str(ROOT / 'yolov5') not in sys.path:
sys.path.append(str(ROOT / 'yolov5')) # add yolov5 ROOT to PATH
if str(ROOT / 'strong_sort') not in sys.path:
sys.path.append(str(ROOT / 'strong_sort')) # add strong_sort ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
import logging
from yolov5.models.common import DetectMultiBackend
from yolov5.utils.dataloaders import VID_FORMATS, LoadImages, LoadStreams
from yolov5.utils.general import (LOGGER, check_img_size, non_max_suppression, scale_coords, check_requirements, cv2,
check_imshow, xyxy2xywh, increment_path, strip_optimizer, colorstr, print_args, check_file)
from yolov5.utils.torch_utils import select_device, time_sync
from yolov5.utils.plots import Annotator, colors, save_one_box
from strong_sort.utils.parser import get_config
from strong_sort.strong_sort import StrongSORT
# remove duplicated stream handler to avoid duplicated logging
logging.getLogger().removeHandler(logging.getLogger().handlers[0])
list_ball_cord = list()
@torch.no_grad()
def run(
source='0',
yolo_weights=WEIGHTS / 'yolov5m.pt', # model.pt path(s),
strong_sort_weights=WEIGHTS / 'osnet_x0_25_msmt17.pt', # model.pt path,
config_strongsort=ROOT / 'strong_sort/configs/strong_sort.yaml',
imgsz=(640, 640), # inference size (height, width)
conf_thres=0.25, # confidence threshold
iou_thres=0.45, # NMS IOU threshold
max_det=1000, # maximum detections per image
device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
show_vid=False, # show results
save_txt=False, # save results to *.txt
save_conf=False, # save confidences in --save-txt labels
save_crop=False, # save cropped prediction boxes
save_vid=False, # save confidences in --save-txt labels
nosave=False, # do not save images/videos
classes=None, # filter by class: --class 0, or --class 0 2 3
agnostic_nms=False, # class-agnostic NMS
augment=False, # augmented inference
visualize=False, # visualize features
update=False, # update all models
project=ROOT / 'runs/track', # save results to project/name
name='exp', # save results to project/name
exist_ok=False, # existing project/name ok, do not increment
line_thickness=3, # bounding box thickness (pixels)
hide_labels=False, # hide labels
hide_conf=False, # hide confidences
hide_class=False, # hide IDs
half=False, # use FP16 half-precision inference
dnn=False, # use OpenCV DNN for ONNX inference
):
source = str(source)
save_img = not nosave and not source.endswith('.txt') # save inference images
is_file = Path(source).suffix[1:] in (VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
if is_url and is_file:
source = check_file(source) # download
# Directories
if not isinstance(yolo_weights, list): # single yolo model
exp_name = str(yolo_weights).rsplit('/', 1)[-1].split('.')[0]
elif type(yolo_weights) is list and len(yolo_weights) == 1: # single models after --yolo_weights
exp_name = yolo_weights[0].split(".")[0]
else: # multiple models after --yolo_weights
exp_name = 'ensemble'
exp_name = name if name is not None else exp_name + "_" + str(strong_sort_weights).split('/')[-1].split('.')[0]
save_dir = increment_path(Path(project) / exp_name, exist_ok=exist_ok) # increment run
(save_dir / 'tracks' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
# Load model
device = select_device(device)
model = DetectMultiBackend(yolo_weights, device=device, dnn=dnn, data=None, fp16=half)
stride, names, pt = model.stride, model.names, model.pt
imgsz = check_img_size(imgsz, s=stride) # check image size
# Dataloader
if webcam:
show_vid = check_imshow()
cudnn.benchmark = True # set True to speed up constant image size inference
dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
nr_sources = len(dataset)
else:
dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
nr_sources = 1
vid_path, vid_writer, txt_path = [None] * nr_sources, [None] * nr_sources, [None] * nr_sources
# initialize StrongSORT
cfg = get_config()
cfg.merge_from_file(opt.config_strongsort)
# Create as many strong sort instances as there are video sources
strongsort_list = []
for i in range(nr_sources):
strongsort_list.append(
StrongSORT(
strong_sort_weights,
device,
max_dist=cfg.STRONGSORT.MAX_DIST,
max_iou_distance=cfg.STRONGSORT.MAX_IOU_DISTANCE,
max_age=cfg.STRONGSORT.MAX_AGE,
n_init=cfg.STRONGSORT.N_INIT,
nn_budget=cfg.STRONGSORT.NN_BUDGET,
mc_lambda=cfg.STRONGSORT.MC_LAMBDA,
ema_alpha=cfg.STRONGSORT.EMA_ALPHA,
)
)
outputs = [None] * nr_sources
# Run tracking
model.warmup(imgsz=(1 if pt else nr_sources, 3, *imgsz)) # warmup
dt, seen = [0.0, 0.0, 0.0, 0.0], 0
curr_frames, prev_frames = [None] * nr_sources, [None] * nr_sources
for frame_idx, (path, im, im0s, vid_cap, s) in enumerate(dataset):
t1 = time_sync()
im = torch.from_numpy(im).to(device)
im = im.half() if half else im.float() # uint8 to fp16/32
im /= 255.0 # 0 - 255 to 0.0 - 1.0
if len(im.shape) == 3:
im = im[None] # expand for batch dim
t2 = time_sync()
dt[0] += t2 - t1
# Inference
visualize = increment_path(save_dir / Path(path[0]).stem, mkdir=True) if opt.visualize else False
pred = model(im, augment=opt.augment, visualize=visualize)
t3 = time_sync()
dt[1] += t3 - t2
# Apply NMS
pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, opt.classes, opt.agnostic_nms, max_det=opt.max_det)
dt[2] += time_sync() - t3
# Process detections
for i, det in enumerate(pred): # detections per image
seen += 1
if webcam: # nr_sources >= 1
p, im0, _ = path[i], im0s[i].copy(), dataset.count
p = Path(p) # to Path
s += f'{i}: '
txt_file_name = p.name
save_path = str(save_dir / p.name) # im.jpg, vid.mp4, ...
else:
p, im0, _ = path, im0s.copy(), getattr(dataset, 'frame', 0)
p = Path(p) # to Path
# video
### =============================================================================================
### ROI Rectangle ( I will use cv2.selectROI later )
# left_roi = [(381,331), (647,336), (647,497), (334,492)]
# right_roi = [(648,335), (914,338), (958,498), (646,495)]
# table_roi = [(381,331), (914,338), (958,498), (334,492)]
# table_roi = [(0,0), (1280,0), (1280,720), (0,720)]
table_roi = [(381,331), (1280,0), (1280,720), (0,720)]
cv2.polylines(im0, [np.array(table_roi, np.int32)],True, (0,0,255),2 )
# cv2.polylines(im0, [np.array(right_roi, np.int32)],True, (0,0,255),2 )
### =============================================================================================
if source.endswith(VID_FORMATS):
txt_file_name = p.stem
save_path = str(save_dir / p.name) # im.jpg, vid.mp4, ...
# folder with imgs
else:
txt_file_name = p.parent.name # get folder name containing current img
save_path = str(save_dir / p.parent.name) # im.jpg, vid.mp4, ...
curr_frames[i] = im0
txt_path = str(save_dir / 'tracks' / txt_file_name) # im.txt
s += '%gx%g ' % im.shape[2:] # print string
imc = im0.copy() if save_crop else im0 # for save_crop
annotator = Annotator(im0, line_width=2, pil=not ascii)
if cfg.STRONGSORT.ECC: # camera motion compensation
strongsort_list[i].tracker.camera_update(prev_frames[i], curr_frames[i])
if det is not None and len(det):
# Rescale boxes from img_size to im0 size
det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
# Print results
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
xywhs = xyxy2xywh(det[:, 0:4])
confs = det[:, 4]
clss = det[:, 5]
# pass detections to strongsort
t4 = time_sync()
outputs[i] = strongsort_list[i].update(xywhs.cpu(), confs.cpu(), clss.cpu(), im0)
t5 = time_sync()
dt[3] += t5 - t4
# draw boxes for visualization
if len(outputs[i]) > 0:
for j, (output, conf) in enumerate(zip(outputs[i], confs)):
### ========================================================================================================================================================
### Results ROI
### ========================================================================================================================================================
# if output[5] == 0.0:
# bboxes = output[0:4]
# id = output[4]
# cls = output[5]
# center = int((((output[0]) + (output[2]))/2) , (((output[1]) + (output[3]))/2))
# print("center",center)
"""
- create rectangle left/right
- display ball cordinates
- intersect ball & rectangle left/right
"""
## ball cord..
if output[5] == 0.0:
# print("bbox----------", output[0:4])
print("class----------", output[5])
# print("id -------------", output[4])
print("=============================================")
# display ball rectangle
## cv2.rectangle(im0,(int(output[0]),int(output[1])),(int(output[2]),int(output[3])),(0,255,0),2 )
ball_box = output[0:4]
list_ball_cord.append(ball_box)
bbox_left = output[0]
bbox_top = output[1]
bbox_w = output[2] - output[0]
bbox_h = output[3] - output[1]
# print("bbox_left--------", bbox_left)
# print("bbox_top--------", bbox_top)
# print("bbox_w--------", bbox_w)
# print("bbox_h--------", bbox_h)
## ball center point
ball_cx = int(bbox_left + bbox_w /2)
ball_cy = int(bbox_top + bbox_h /2)
# cv2.circle(im0, (ball_cx,ball_cy),5, (0,0,255),-1)
# # ball detect only on table >> return three output +1-inside the table -1-outside the table 0-on the boundry
ball_on_table_res = cv2.pointPolygonTest(np.array(table_roi,np.int32), (int(ball_cx),int(ball_cy)), False)
if ball_on_table_res >= 0:
cv2.circle(im0, (ball_cx,ball_cy),20, (0,0,0),-1)
### ========================================================================================================================================================
bboxes = output[0:4]
id = output[4]
cls = output[5]
# print("bboxes--------", bboxes)
# print("cls-----------", cls)
if save_txt:
# to MOT format
bbox_left = output[0]
bbox_top = output[1]
bbox_w = output[2] - output[0]
bbox_h = output[3] - output[1]
# Write MOT compliant results to file
with open(txt_path + '.txt', 'a') as f:
f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left, # MOT format
bbox_top, bbox_w, bbox_h, -1, -1, -1, i))
if save_vid or save_crop or show_vid: # Add bbox to image
c = int(cls) # integer class
id = int(id) # integer id
label = None if hide_labels else (f'{id} {names[c]}' if hide_conf else \
(f'{id} {conf:.2f}' if hide_class else f'{id} {names[c]} {conf:.2f}'))
annotator.box_label(bboxes, label, color=colors(c, True))
#####################print("label---------", label)
if save_crop:
txt_file_name = txt_file_name if (isinstance(path, list) and len(path) > 1) else ''
save_one_box(bboxes, imc, file=save_dir / 'crops' / txt_file_name / names[c] / f'{id}' / f'{p.stem}.jpg', BGR=True)
fps_StrongSORT = 1 / (t5-t4)
fps_yolo = 1/ (t3-t2)
LOGGER.info(f'{s}Done. YOLO:({t3 - t2:.3f}s), StrongSORT:({t5 - t4:.3f}s), ')
print("fps_StrongSORT-----", fps_StrongSORT)
print("fps_yolo-----", fps_yolo)
else:
strongsort_list[i].increment_ages()
LOGGER.info('No detections')
# Stream results
im0 = annotator.result()
if show_vid:
# im0 = cv2.resize(im0, (640,640))
cv2.imshow(str(p), im0)
cv2.waitKey(1) # 1 millisecond
# Save results (image with detections)
if save_vid:
if vid_path[i] != save_path: # new video
vid_path[i] = save_path
if isinstance(vid_writer[i], cv2.VideoWriter):
vid_writer[i].release() # release previous video writer
if vid_cap: # video
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer[i].write(im0)
prev_frames[i] = curr_frames[i]
print("fffffffffffffffffffffffffffffffff----------------------------------------",list_ball_cord)
# Print results
t = tuple(x / seen * 1E3 for x in dt) # speeds per image
LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS, %.1fms strong sort update per image at shape {(1, 3, *imgsz)}' % t)
if save_txt or save_vid:
s = f"\n{len(list(save_dir.glob('tracks/*.txt')))} tracks saved to {save_dir / 'tracks'}" if save_txt else ''
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
if update:
strip_optimizer(yolo_weights) # update model (to fix SourceChangeWarning)
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--yolo-weights', nargs='+', type=str, default='v5best_bp.pt', help='model.pt path(s)')
parser.add_argument('--strong-sort-weights', type=str, default=WEIGHTS / 'osnet_x0_25_msmt17.pt')
parser.add_argument('--config-strongsort', type=str, default='strong_sort/configs/strong_sort.yaml')
parser.add_argument('--source', type=str, default='0', help='file/dir/URL/glob, 0 for webcam')
parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
parser.add_argument('--conf-thres', type=float, default=0.5, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.5, help='NMS IoU threshold')
parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--show-vid', action='store_true', help='display tracking video results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
parser.add_argument('--save-vid', action='store_true', help='save video tracking results')
parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
# class 0 is person, 1 is bycicle, 2 is car... 79 is oven
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--visualize', action='store_true', help='visualize features')
parser.add_argument('--update', action='store_true', help='update all models')
parser.add_argument('--project', default=ROOT / 'runs/track', help='save results to project/name')
parser.add_argument('--name', default='exp', help='save results to project/name')
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
parser.add_argument('--hide-class', default=False, action='store_true', help='hide IDs')
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
opt = parser.parse_args()
opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
print_args(vars(opt))
return opt
def main(opt):
check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)
```
[enter image description here](https://i.stack.imgur.com/QuUHW.jpg)
|
2022/07/06
|
[
"https://Stackoverflow.com/questions/72879863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14141667/"
] |
DISCLAIMER: I am the creator of <https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet>
First of all:
Does Yolov5+StrongSORT+OSNet run correctly without your custom modifications?
Secondly:
Have you checked that you are loading the same weights for Yolov5 and Yolov5+StrongSORT+OSNet?
Moreover:
Why all the custom modifications? If you only want to track class 0, you can run the following command:
```
python track.py --source 0 --yolo-weights best.pt --classes 0
```
|
I am also using the same model and was facing the same issue.
Try annotating more images and increasing the image size to 1024. Also make sure to use the best weights of YOLOv5 in YOLOv5+DeepSort.
| 11,083
|
66,877,531
|
I have the following arguments which are to be parsed using argparse
* input\_dir (string: Mandatory)
* output\_dir (string: Mandatory)
* file\_path (string: Mandatory)
* supported\_file\_extensions (comma separated string - Optional)
* ignore\_tests (boolean - Optional)
If a comma separated string and a boolean are provided for `supported_file_extensions` and `ignore_tests` respectively, then I want those values to be set on the argument variables. My setup is as follows:
```
import argparse
import sys

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'input_dir', type=str,
        help='Fully qualified directory for the input directory')
    parser.add_argument(
        'file_path', type=str,
        help='Fully qualified path name for file')
    parser.add_argument(
        'output', type=str, help='Fully qualified output directory path')
    parser.add_argument(
        '-supported_file_extensions', nargs='?', type=str, default=None,
        help='optional comma separated file extensions'
    )  # Should default to False
    parser.add_argument(
        '-ignore_tests', nargs='?', type=bool, default=True
    )  # Should default to True if the argument is passed

    (arguments, _) = parser.parse_known_args(sys.argv[1:])
    print(arguments)
```
When I pass the following command
```
python cloc.py -input_dir /myproject -file_path /myproject/.github/CODEOWNERS -output output.json --supported_file_extensions java,kt --ignore_tests False
```
I want the value of `arguments.supported_file_extensions` to be equal to `java,kt` and `ignore_tests` to be equal to `False`, but I get the following values:
```
Namespace(file_path='/myproject/.github/CODEOWNERS', ignore_tests=True, input_dir='/myproject', output='output.json', supported_file_extensions=None)
```
If I don't pass `--ignore_tests` as a command line argument, then I want the value to default to `False`. Similarly, when I pass `--ignore_tests`, I want the argument to be set to `True`.
If I pass `--supported_file_extensions java,kt`, then I want the argument to be set to `java,kt` as a string, and not a list.
I have tried going through the following, but I have not had any success:
1. [Python argparse: default value or specified value](https://stackoverflow.com/questions/15301147/python-argparse-default-value-or-specified-value)
2. [python argparse default value for optional argument](https://stackoverflow.com/questions/29847450/python-argparse-default-value-for-optional-argument)
3. [Creating command lines with python (argparse)](https://stackoverflow.com/questions/61216374/creating-command-lines-with-python-argparse)
**UPDATE**
I updated the last two arguments as follows:
```
parser.add_argument(
    '-supported_file_extensions', type=str,
    help='optional comma separated file extensions'
)
parser.add_argument(
    '-ignore_tests', action='store_true',
    help='optional flag to ignore '
)

(arguments, _) = parser.parse_known_args(sys.argv[1:])
print(arguments)
```
When I run the command as
```
python cloc.py -input_dir /myproject -file_path /myproject/.github/CODEOWNERS -output output.json --supported_file_extensions java,kt
```
The output is as follows
```
Namespace(file_path='/myproject/.github/CODEOWNERS', ignore_tests=False, input_dir='/myproject', output='output.json', supported_file_extensions=None)
```
While the value of `ignore_tests` is correct (defaulting to `False`), the value of `supported_file_extensions` is `None`, which is not correct since I had passed `java,kt` on the command line.
When I pass
```
python cloc.py -input_dir /myproject -file_path /myproject/.github/CODEOWNERS -output output.json --supported_file_extensions java,kt --ignore_tests
```
I get the following output
```
Namespace(file_path='/myproject/.github/CODEOWNERS', ignore_tests=False, input_dir='/myproject', output='output.json', supported_file_extensions=None)
```
|
2021/03/30
|
[
"https://Stackoverflow.com/questions/66877531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7717431/"
] |
The column `type` is ambiguous because it appears in both `t1` and in your JSON\_TABLE result. You should qualify the column.
```
mysql> select distinct t.type from t1,
json_table(type, '$[*]' columns (type varchar(50) PATH '$')) as t
order by t.type;
+-------+
| type |
+-------+
| type1 |
| type2 |
| type4 |
+-------+
```
|
I am able to resolve it by writing a `CROSS JOIN`
```
SELECT distinct j.type FROM t1
CROSS JOIN
JSON_TABLE(t1.type, " $ [*]"
COLUMNS ( type VARCHAR(20) PATH '$' )
) j
```
| 11,084
|
8,456,516
|
How can I go about "selecting" on multiple [`queue.Queue`](http://docs.python.org/py3k/library/queue.html)'s simultaneously?
Golang has the [desired feature](http://golang.org/doc/go_spec.html#Select_statements) with its channels:
```
select {
case i1 = <-c1:
print("received ", i1, " from c1\n")
case c2 <- i2:
print("sent ", i2, " to c2\n")
case i3, ok := (<-c3): // same as: i3, ok := <-c3
if ok {
print("received ", i3, " from c3\n")
} else {
print("c3 is closed\n")
}
default:
print("no communication\n")
}
```
Wherein the first channel to unblock executes the corresponding block. How would I achieve this in Python?
Update0
-------
Per [the link](http://www.1024cores.net/home/lock-free-algorithms/queues) given in [tux21b's answer](https://stackoverflow.com/a/8539910/149482), the desired queue type has the following properties:
* Multi-producer/multi-consumer queues (MPMC)
* provides per-producer FIFO/LIFO
* When a queue is empty/full consumers/producers get blocked
Furthermore, channels can be blocking: producers will block until a consumer retrieves the item. I'm not sure that Python's Queue can do this.
|
2011/12/10
|
[
"https://Stackoverflow.com/questions/8456516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/149482/"
] |
There are many different implementations of producer-consumer queues, like [queue.Queue](http://docs.python.org/py3k/library/queue.html) available. They normally differ in a lot of properties like listed on this [excellent article](http://www.1024cores.net/home/lock-free-algorithms/queues) by Dmitry Vyukov. As you can see, there are more than 10k different combinations possible. The algorithms used for such queues also differ widely depending on the requirements. It's not possible to just extend an existing queue algorithm to guarantee additional properties, since that normally requires different internal data structures and different algorithms.
Go's channels offer a relatively high number of guaranteed properties, so those channels might be suitable for a lot of programs. One of the hardest requirements there is the support for reading / blocking on multiple channels at once (select statement) and to choose a channel fairly if more than one branch in a select statement is able to proceed, so that no messages will be left behind. Python's [queue.Queue](http://docs.python.org/py3k/library/queue.html) doesn't offer this features, so it's simply not possible to archive the same behavior with it.
So, if you want to continue using [queue.Queue](http://docs.python.org/py3k/library/queue.html) you need to find workarounds for that problem. The workarounds have however their own list of drawbacks and are harder to maintain. Looking for another producer-consumer queue which offers the features you need might be a better idea! Anyway, here are two possible workarounds:
**Polling**
```
while True:
    try:
        i1 = c1.get_nowait()
        print("received %s from c1" % i1)
    except queue.Empty:
        pass
    try:
        i2 = c2.get_nowait()
        print("received %s from c2" % i2)
    except queue.Empty:
        pass
    time.sleep(0.1)
```
This might use a lot of CPU cycles while polling the channels and might be slow when there are a lot of messages. Using time.sleep() with an exponential back-off time (instead of the constant 0.1 secs shown here) might improve this version drastically.
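A rough sketch of that back-off idea (the helper name and the bounds here are illustrative, not from any library):

```python
import queue
import time

def poll_queues(queues, max_wait=0.5):
    """Poll several queues, doubling the sleep while all are empty."""
    wait = 0.001
    while True:
        for name, q in queues.items():
            try:
                return name, q.get_nowait()
            except queue.Empty:
                pass
        time.sleep(wait)
        wait = min(wait * 2, max_wait)  # exponential back-off, capped

c1, c2 = queue.Queue(), queue.Queue()
c2.put("hello")
print(poll_queues({"c1": c1, "c2": c2}))  # ('c2', 'hello')
```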
**A single notify-queue**
```
queue_id = notify.get()
if queue_id == 1:
    i1 = c1.get()
    print("received %s from c1" % i1)
elif queue_id == 2:
    i2 = c2.get()
    print("received %s from c2" % i2)
```
With this setup, you must send something to the notify queue after sending to c1 or c2. This might work for you, as long as a single notify queue is enough (i.e. you do not have multiple "selects", each blocking on a different subset of your channels).
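A minimal runnable sketch of this wiring (the `send` helper and the channel numbering are illustrative assumptions):

```python
import queue

notify = queue.Queue()
c1, c2 = queue.Queue(), queue.Queue()

def send(channel_id, channel, item):
    """Send on a channel, then announce the send on the notify queue."""
    channel.put(item)
    notify.put(channel_id)   # must happen after the real send

send(1, c1, "hello")
send(2, c2, "world")

# Consumer side: block on the single notify queue, then read the
# channel it points at.
for _ in range(2):
    queue_id = notify.get()
    if queue_id == 1:
        print("received %s from c1" % c1.get())
    elif queue_id == 2:
        print("received %s from c2" % c2.get())
```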
Alternatively, you can also consider using Go. Go's goroutines and concurrency support are much more powerful than Python's limited threading capabilities anyway.
|
If you use `queue.PriorityQueue` you can get a similar behaviour using the channel objects as priorities:
```
import threading, logging
import random, string, time
from queue import PriorityQueue, Empty
from contextlib import contextmanager
logging.basicConfig(level=logging.NOTSET,
format="%(threadName)s - %(message)s")
class ChannelManager(object):
next_priority = 0
def __init__(self):
self.queue = PriorityQueue()
self.channels = []
def put(self, channel, item, *args, **kwargs):
self.queue.put((channel, item), *args, **kwargs)
def get(self, *args, **kwargs):
return self.queue.get(*args, **kwargs)
@contextmanager
def select(self, ordering=None, default=False):
if default:
try:
channel, item = self.get(block=False)
except Empty:
channel = 'default'
item = None
else:
channel, item = self.get()
yield channel, item
def new_channel(self, name):
channel = Channel(name, self.next_priority, self)
self.channels.append(channel)
self.next_priority += 1
return channel
class Channel(object):
def __init__(self, name, priority, manager):
self.name = name
self.priority = priority
self.manager = manager
def __str__(self):
return self.name
def __lt__(self, other):
return self.priority < other.priority
def put(self, item):
self.manager.put(self, item)
if __name__ == '__main__':
num_channels = 3
num_producers = 4
num_items_per_producer = 2
num_consumers = 3
num_items_per_consumer = 3
manager = ChannelManager()
channels = [manager.new_channel('Channel#{0}'.format(i))
for i in range(num_channels)]
def producer_target():
for i in range(num_items_per_producer):
time.sleep(random.random())
channel = random.choice(channels)
message = random.choice(string.ascii_letters)
logging.info('Putting {0} in {1}'.format(message, channel))
channel.put(message)
producers = [threading.Thread(target=producer_target,
name='Producer#{0}'.format(i))
for i in range(num_producers)]
for producer in producers:
producer.start()
for producer in producers:
producer.join()
logging.info('Producers finished')
def consumer_target():
for i in range(num_items_per_consumer):
time.sleep(random.random())
with manager.select(default=True) as (channel, item):
if channel:
logging.info('Received {0} from {1}'.format(item, channel))
else:
logging.info('No data received')
consumers = [threading.Thread(target=consumer_target,
name='Consumer#{0}'.format(i))
for i in range(num_consumers)]
for consumer in consumers:
consumer.start()
for consumer in consumers:
consumer.join()
logging.info('Consumers finished')
```
Example output:
```
Producer#0 - Putting x in Channel#2
Producer#2 - Putting l in Channel#0
Producer#2 - Putting A in Channel#2
Producer#3 - Putting c in Channel#0
Producer#3 - Putting z in Channel#1
Producer#1 - Putting I in Channel#1
Producer#1 - Putting L in Channel#1
Producer#0 - Putting g in Channel#1
MainThread - Producers finished
Consumer#1 - Received c from Channel#0
Consumer#2 - Received l from Channel#0
Consumer#0 - Received I from Channel#1
Consumer#0 - Received L from Channel#1
Consumer#2 - Received g from Channel#1
Consumer#1 - Received z from Channel#1
Consumer#0 - Received A from Channel#2
Consumer#1 - Received x from Channel#2
Consumer#2 - Received None from default
MainThread - Consumers finished
```
In this example, `ChannelManager` is just a wrapper around `queue.PriorityQueue` that implements the `select` method as a `contextmanager` to make it look similar to the `select` statement in Go.
A few things to note:
* Ordering
+ In the Go example, the order in which the channels are written inside the `select` statement determines which channel's code will be executed if there's data available for more than one channel.
+ In the python example the order is determined by the priority assigned to each channel. However, the priority can be dynamically assigned to each channel (as seen in the example), so changing the ordering would be possible with a more complex `select` method that takes care of assigning new priorities based on an argument to the method. Also, the old ordering could be re-established once the context manager is finished.
* Blocking
+ In the Go example, the `select` statement blocks unless a `default` case exists.
+ In the python example, a boolean argument has to be passed to the `select` method to make it clear when blocking/non-blocking is desired. In the non-blocking case, the channel returned by the context manager is just the string `'default'`, so it's easy to detect this in the code inside the `with` statement.
* Threading: Objects in the `queue` module are already prepared for multi-producer, multi-consumer scenarios, as seen in the example.
| 11,085
|
38,272,965
|
It seems win32api should be able to do this given the answer [here](http://VBComponents.Remove).
I would like to remove all modules from an excel workbook (.xls) using python
|
2016/07/08
|
[
"https://Stackoverflow.com/questions/38272965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/360826/"
] |
Consider using the [VBComponents collection](https://msdn.microsoft.com/en-us/library/aa443983(v=vs.60).aspx) available in the COM interface, accessible with Python's `win32com.client` module. However, inside the particular workbook you first need to grant [programmatic access to the VBA object library](https://stackoverflow.com/questions/25638344/programmatic-access-to-visual-basic-project-is-not-trusted-excel).
Also, you will need to conditionally check the type, as this procedure cannot delete document modules such as Sheets and Workbooks (`Type = 100`). Only standard modules (`Type = 1`), class modules (`Type = 2`), UserForms (`Type = 3`), and other inserted components can be removed. Of course, you can iterate through a defined list of modules to remove. A `try/except` statement is used to close out the Excel.Application process cleanly whether the script fails or not.
```
import win32com.client
# OPEN EXCEL APP AND WORKBOOK
xlApp = win32com.client.Dispatch("Excel.Application")
xlwb = xlApp.Workbooks.Open("C:\\Path\\To\\Workbook.xlsm")
# ITERATE THROUGH EACH VB COMPONENT (CLASS MODULE, STANDARD MODULE, USER FORMS)
try:
for i in xlwb.VBProject.VBComponents:
xlmodule = xlwb.VBProject.VBComponents(i.Name)
if xlmodule.Type in [1, 2, 3]:
xlwb.VBProject.VBComponents.Remove(xlmodule)
except Exception as e:
print(e)
finally:
# CLOSE AND SAVE AND UNINITIALIZE APP
xlwb.Close(True)
    xlApp.Quit()
xlApp = None
```
|
Actually simply reading in the file with [`xlrd`](https://pypi.python.org/pypi/xlrd), copying with [`xlutils`](https://pypi.python.org/pypi/xlutils), and spitting it out with [`xlwt`](https://pypi.python.org/pypi/xlwt) will remove the VBA (and other stuff):
```
import xlrd, xlwt
from xlutils.copy import copy as xl_copy
rb = xlrd.open_workbook('/path/to/fin.xls')
wb = xl_copy(rb)
wb.save('/path/to/fou.xls')
```
| 11,090
|
8,923,764
|
```
#!/bin/python
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
import time
import commands
import sys
import string
import random
device = MonkeyRunner.waitForConnection(10)
device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP)
time.sleep(1)
device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP)
package = 'com.pak.pak1'
activity = 'com.pak.pak1.Activity123'
runComponent = package + '/' + activity
device.startActivity(component=runComponent)
time.sleep(1)
device.touch( 20, 90, MonkeyDevice.DOWN_AND_UP )
time.sleep(2)
device.touch( 20, 90, MonkeyDevice.DOWN_AND_UP )
#time.sleep(10)
device.touch( 450, 95, MonkeyDevice.DOWN_AND_UP )
```
If I run the script like this, it works just fine.
But if I put in some delay (time.sleep(10)), then it gives this error:
```
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] Error getting the manager to quit
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice]java.net.SocketException: Broken pipe
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at java.net.SocketOutputStream.socketWrite0(Native Method)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at java.io.BufferedWriter.flush(BufferedWriter.java:236)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.MonkeyManager.sendMonkeyEventAndGetResponse(MonkeyManager.java:167)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.MonkeyManager.quit(MonkeyManager.java:288)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.adb.AdbMonkeyDevice.dispose(AdbMonkeyDevice.java:79)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.adb.AdbBackend.shutdown(AdbBackend.java:120)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.MonkeyRunnerStarter.run(MonkeyRunnerStarter.java:95)
120119 10:31:24.823:S [main] [com.android.monkeyrunner.adb.AdbMonkeyDevice] at com.android.monkeyrunner.MonkeyRunnerStarter.main(MonkeyRunnerStarter.java:203)
```
|
2012/01/19
|
[
"https://Stackoverflow.com/questions/8923764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706780/"
] |
Not all namespaces have corresponding DLL file names used in the Add Reference dialog. System.Windows is one of these.
For example, `System.Windows.Clipboard` is resident in the PresentationCore.dll, but `System.Windows.SizeConverter` is in WindowsBase.dll. It all depends on the actual types you need to access.
|
System.Windows is the WPF base classes - you can't add that assembly to an ASP.NET site. You will need to change your application type to use it.
| 11,091
|
55,930,785
|
I am trying to understand NumPy `np.fromfunction()`.
The following piece of code is extracted from this [post](https://stackoverflow.com/a/55929789/11074017).
```
dist = np.array([ 1, -1])
f = lambda x: np.linalg.norm(x, 1)
f(dist)
```
the output
```
2.0
```
is as expected.
When I put them together to use np.linalg.norm() inside np.fromfunction():
```
ds = np.array([[1, 2, 1],
[1, 1, 0],
[0, 1, 1]])
np.fromfunction(f, ds.shape)
```
an error shows up:
```
> TypeError Traceback (most recent call
last) <ipython-input-6-f5d65a6d95c4> in <module>()
2 [1, 1, 0],
3 [0, 1, 1]])
----> 4 np.fromfunction(f, ds.shape)
~/anaconda3/envs/tf11/lib/python3.6/site-packages/numpy/core/numeric.py
in fromfunction(function, shape, **kwargs)
2026 dtype = kwargs.pop('dtype', float)
2027 args = indices(shape, dtype=dtype)
-> 2028 return function(*args, **kwargs)
2029
2030
TypeError: <lambda>() takes 1 positional argument but 2 were given
```
is it possible to put a lambda function(may be another lambda function) inside np.fromfunction() to do this job(get a distance array)?
|
2019/05/01
|
[
"https://Stackoverflow.com/questions/55930785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Look at the error:
```
In [171]: np.fromfunction(f, ds.shape)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-171-1a3ed1ade41a> in <module>
----> 1 np.fromfunction(f, ds.shape)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in fromfunction(function, shape, **kwargs)
2026 dtype = kwargs.pop('dtype', float)
2027 args = indices(shape, dtype=dtype)
-> 2028 return function(*args, **kwargs)
2029
2030
TypeError: <lambda>() takes 1 positional argument but 2 were given
```
`fromfunction` is a small Python function; there's no compiled magic.
Based on the shape you give, it generates `indices`.
```
In [172]: np.indices(ds.shape)
Out[172]:
array([[[0, 0, 0],
[1, 1, 1],
[2, 2, 2]],
[[0, 1, 2],
[0, 1, 2],
[0, 1, 2]]])
```
That's a (2,3,3) array. The 2 from the 2 element shape, and the (3,3) from the shape itself. This is similar to what `np.meshgrid` and `np.mgrid` produce. Just indexing arrays.
It then passes that array to your function, with `*args` unpacking.
```
function(*args, **kwargs)
In [174]: f(Out[172][0], Out[172][1])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-174-40469f1ab449> in <module>
----> 1 f(Out[172][0], Out[172][1])
TypeError: <lambda>() takes 1 positional argument but 2 were given
```
That's all it does - generate an n-d grid, and pass it as n arguments to your function.
===
Note also that you passed `ds.shape` to `fromfunction`, but not `ds` itself. You could just as well have written `np.fromfunction(f, (3, 3))`.
What do you want your `lambda` to do with `ds`? Clearly `fromfunction` isn't doing it for you.
===
With this `f`, the only thing that `fromfunction` can do is give it an `arange`:
```
In [176]: np.fromfunction(f, (10,))
Out[176]: 45.0
In [177]: f(np.arange(10))
Out[177]: 45.0
```
===
In the linked SO the lambda takes 2 arguments, `lambda x,y`:
```
np.fromfunction(lambda x,y: np.abs(target[0]-x) + np.abs(target[1]-y), ds.shape)
```
In that SO, in both the question and the answer, the `ds` array is just the source of the shape. The target is (0, 1), the position of the largest element of `ds`.
Effectively, the `fromfunction` in the linked answer is just doing:
```
In [180]: f1 = lambda x,y: np.abs(0-x) + np.abs(1-y)
In [181]: f1(Out[172][0], Out[172][1])
Out[181]:
array([[1, 0, 1],
[2, 1, 2],
[3, 2, 3]])
In [182]: np.abs(0-Out[172][0]) + np.abs(1-Out[172][1])
Out[182]:
array([[1, 0, 1],
[2, 1, 2],
[3, 2, 3]])
In [183]: np.abs(np.array([0,1])[:,None,None]-Out[172]).sum(axis=0)
Out[183]:
array([[1, 0, 1],
[2, 1, 2],
[3, 2, 3]])
In [184]: np.abs(0-np.arange(3))[:,None] + np.abs(1-np.arange(3))
Out[184]:
array([[1, 0, 1],
[2, 1, 2],
[3, 2, 3]])
```
|
As you are using a 2 dimensional array, your function needs to take 2 inputs.
The docs of `np.fromfunction()` <https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html> say "Construct an array by executing a function over each coordinate."
So it will pass the coordinates of each element of the array ((0,0), then (0,1) ... etc) to construct an array. Is this really what you are trying to do?
You could change the lambda to something like this, but it really depends on what you are trying to do!
```
f = lambda x, y: np.linalg.norm([x,y],1)
```
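To see the coordinate-passing behaviour concretely, here's a quick sketch (plain NumPy, not specific to the question's data):

```python
import numpy as np

# fromfunction builds one index grid per axis of the given shape and
# passes them as separate arguments; the lambda receives two (3, 3) arrays.
grid = np.fromfunction(lambda i, j: i + j, (3, 3))
print(grid)
# [[0. 1. 2.]
#  [1. 2. 3.]
#  [2. 3. 4.]]
```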
| 11,092
|
31,965,153
|
I'm trying to program a sort of role game, and I'm currently stuck in the process of creating new character files. I'm trying to make it so that, if I call a character file, the system checks whether it exists and, if not, creates it. This is the code, just testing whether it will create an empty file called `(charactername).char`:
```
import os
def call_char(name):
file = "%s.char" %name
if os.path.exists(file):
print ("file loaded")
open_char(file)
else:
print ("creating new character")
new_char(file)
"""creates a new character file"""
def new_char(file):
print ("Character %s."%file)
file(file, "w")
call_char("Volgrand")
```
However, I get this error while executing `call_char("Volgrand")`
```
File "E:\python\Juego foral\test.py", line 51, in new_char
file(file, "w")
TypeError: 'str' object is not callable"
```
I thought that calling the variable `file` (a string) should work to create the new file.
|
2015/08/12
|
[
"https://Stackoverflow.com/questions/31965153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5219236/"
] |
This is a cordova bug - <https://issues.apache.org/jira/browse/CB-5398>.
You can change your path like this.
```
function uploadPhoto(imageURI) {
//alert(imageURI); return false;
//alert(imageURI);
if (imageURI.substring(0,21)=="content://com.android") {
photo_split=imageURI.split("%3A");
imageURI="content://media/external/images/media/"+photo_split[1];
}
var options = new FileUploadOptions();
options.fileKey="file";
//options.fileName=imageURI.substr(imageURI.lastIndexOf('/')+1)+'.png';
options.mimeType="image/jpeg";
var params = {};
params.value1 = "test";
params.value2 = "param";
options.fileName=imageURI.substr(imageURI.lastIndexOf('/')+1);
options.params = params;
options.chunkedMode = true; //this is important to send both data and files
//alert(options.fileName);
var ft = new FileTransfer();
ft.upload(imageURI, encodeURI("file_upload.php"), win, fail, options);
}
```
And send image request to file\_upload.php. Add some condition like this.
```
$file=$_FILES['file']['name'];
$new_file=explode('.', $file);
$img_file=end($new_file);
if($img_file=='png' || $img_file=='jpeg' || $img_file=='jpg')
{
$file_name=$new_file[0];
} else {
$file_name=$_FILES['file']['name'];
$_FILES['file']['name']=$_FILES['file']['name'].'.jpg';
}
move_uploaded_file($_FILES["file"]["tmp_name"], 'your_location'.$file_name);
```
|
You can specify `encodingType: Camera.EncodingType.JPEG` and render the image in your success callback like this: `$('#yourElement').attr('src', "data:image/jpeg;base64," + ImageData);`
```
navigator.camera.getPicture(onSuccess, onFail, {
quality: 100,
destinationType: Camera.DestinationType.DATA_URL,
sourceType: Camera.PictureSourceType.SAVEDPHOTOALBUM,
allowEdit: true,
encodingType: Camera.EncodingType.JPEG,
targetWidth: 500,
targetHeight: 500,
popoverOptions: CameraPopoverOptions,
saveToPhotoAlbum: false,
correctOrientation: true
});
function onSuccess(imageData) {
console.log("Image URI success");
ImageData = imageData;
    $('#yourElement').attr('src', "data:image/jpeg;base64," + ImageData);
}
```
| 11,093
|
69,334,001
|
When I am using `optimizer = keras.optimizers.Adam(learning_rate)` I am getting this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6 and the TensorFlow 1.13.2 backend to run the program. Please help to resolve!
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69334001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17007363/"
] |
Use `tf.keras.optimizers.Adam(learning_rate)` instead of `keras.optimizers.Adam(learning_rate)`
|
I think you are using Keras directly. Instead of importing as `from keras.distribute import ...`, import as `from tensorflow.keras.distribute import ...`.
Hope this helps; it is working for me.
| 11,094
|
60,631,553
|
I would like to parameterize the columns and my dataframe in a cursor.execute function. I'm using pymssql, because I like the fact that I can name the parameterized columns. Yet I still don't know how to properly tell Python that I'm referring to a specific dataframe and would like to use its columns. Here is the last part of my code (I already tested the connection to my database etc. and it works):
```
with conn:
cursor = conn.cursor()
cursor.execute("""insert into [dbo].[testdb] (day, revenue) values (%(day)s, %(revenue)s)""", dataframe)
result = cursor.fetchall()
for row in result:
print(list(row))
```
I'm getting this error:
```
ValueError Traceback (most recent call last)
<ipython-input-52-037e289ce76e> in <module>
10 with conn:
11 cursor = conn.cursor()
---> 12 cursor.execute("""insert into [dbo].[testdb] (day, revenue) values (%(day)s, %(revenue)s""", dataframe)
13 result= cursor.fetchall()
14
src\pymssql.pyx in pymssql.Cursor.execute()
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in __nonzero__(self)
1477 def __nonzero__(self):
1478 raise ValueError(
-> 1479 f"The truth value of a {type(self).__name__} is ambiguous. "
1480 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1481 )
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
|
2020/03/11
|
[
"https://Stackoverflow.com/questions/60631553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12790189/"
] |
I had the same issue. Try to add after your Proxy:
`RequestHeader set X-Forwarded-Proto https` to your `...ssl.conf` which is in sites-available folder.
|
I had the same issue; I was trying to set up an SSL-termination reverse proxy with Apache. I followed this [article](https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension).
Using `0.0.0.0` instead of localhost worked for me.
```
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerName exemple.com
SSLCertificateFile /path/fullchain.pem
SSLCertificateKeyFile /path/privkey.pem
ProxyPass / http://0.0.0.0:80/
ProxyPassReverse / http://0.0.0.0:80/
</VirtualHost>
</IfModule>
```
| 11,104
|
1,383,863
|
If you install multiple versions of python (I currently have the default 2.5, installed 3.0.1 and now installed 2.6.2), it automatically puts stuff in `/usr/local`, and it also adjusts the path to include `/Library/Frameworks/Python/Versions/theVersion/bin`, but what's the point of that when `/usr/local` is already on the PATH, and all installed versions (except the default 2.5, which is in `/usr/bin`) are in there? I removed the python framework paths from my PATH in `.bash_profile`, and I can still type `"python -V" => "Python 2.5.1"`, `"python2.6 -V" => "Python 2.6.2"`, `"python3 -V" => "Python 3.0.1"`. Just wondering why it puts it in `/usr/local`, and also changes the PATH. And is what I did fine? Thanks.
Also, the 2.6 installation made it the 'current' one, having `.../Python.framework/Versions/Current` point to 2.6., So plain 'python' things in `/usr/local/bin` point to 2.6, but it doesn't matter because `usr/bin` comes first and things with the same name in there point to 2.5 stuff.. Anyway, 2.5 comes with leopard, I installed 3.0.1 just to have the latest version (that has a dmg file), and now I installed 2.6.2 for use with pygame.
EDIT: OK, here's how I understand it. When you install, say, Python 2.6.2:
A bunch of symlinks are added to `/usr/local/bin`, so when there's a `#! /usr/local/bin/python` shebang in a python script, it will run; in `/Applications/Python 2.6`, the Python Launcher is made the default application to run .py files, which uses `/usr/local/bin/pythonw`; and `/Library/Frameworks/Python.framework/Versions/2.6/bin` is created and added to the front of the path, so `which python` will find the python in there, and `#! /usr/bin/env python` shebangs will also run correctly.
|
2009/09/05
|
[
"https://Stackoverflow.com/questions/1383863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/148195/"
] |
There's no a priori guarantee that /usr/local/bin will stay on the PATH (especially it will not necessarily stay "in front of" /usr/bin!-), so it's perfectly reasonable for an installer to ensure the specifically needed /Library/.../bin directory does get on the PATH. Plus, it may be the case that the /Library/.../bin has supplementary stuff that doesn't get symlinked into /usr/local/bin, although I believe that's not currently the case with recent Mac standard distributions of Python.
If you know that the way you'll arrange your path, and the exact set of executables you'll be using, are entirely satisfied from /usr/local/bin, then it's quite OK for you to remove the /Library/etc directories from your own path, of course.
|
I just noticed/encountered this issue on my Mac. I have Python 2.5.4, 2.6.2, and 3.1.1 on my machine, and was looking for a way to easily change between them at will. That is when I noticed all the symlinks for the executables, which I found in both '/usr/bin' and '/usr/local/bin'. I ripped all the non-version specific symlinks out, leaving python2.5, python2.6, etc, and wrote a bash shell script that I can run as root to change one symlink I use to direct the path to the version of my choice
'/Library/Frameworks/Python.framework/Versions/Current'
The only bad thing about ripping the symlinks out, is if some other application needed them for some reason. My opinion as to why these symlinks are created is similar to Alex's assessment, the installer is trying to cover all of the bases. All of my versions have been installed by an installer, though I've been trying to compile my own to enable full 64-bit support, and when compiling and installing your own you can choose to not have the symlinks created or the PATH modified during installation.
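The symlink-switching idea can be sketched like this (written in Python here for illustration rather than bash; the framework path is the standard one, but the helper itself is hypothetical):

```python
import os

FRAMEWORK = "/Library/Frameworks/Python.framework/Versions"

def switch_current_python(version, base=FRAMEWORK):
    """Repoint the 'Current' symlink at the requested version directory."""
    target = os.path.join(base, version)
    if not os.path.isdir(target):
        raise ValueError("no such Python version installed: %s" % version)
    current = os.path.join(base, "Current")
    if os.path.islink(current):
        os.remove(current)        # drop the old link
    os.symlink(target, current)   # e.g. Current -> .../Versions/2.6
```

Run it as root against the real framework path; pointing `base` at a throwaway directory is a safe way to try it out first.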
| 11,105
|
58,113,118
|
I have a Python script that connects to PostgreSQL.
Below is the script.
```
import psycopg2
conn = psycopg2.connect('connection string')
try:
curr = conn.cursor()
sql_strng = "SELECT * FROM tbl"
### Further operations###
except(Exception, psycopg2.Error) as error:
print("error",error)
finally:
if (conn):
conn.close()
```
The above code works well when I run it from Spyder. But when I try to run this from command prompt using a batch script it gives error as shown below.
My batch script:
```
C:\Users\Anaconda3\python.exe \path\to\python\file
```
The above batch script throws the following error:
```
if(conn):
NameError: name 'conn' is not defined
```
Where am I missing out?
|
2019/09/26
|
[
"https://Stackoverflow.com/questions/58113118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7290715/"
] |
Just use `this.types.filter(({id}) => !this.selected_types.includes(id))`:
```js
let types = [{
id: 1,
name: "Hello"
},
{
id: 2,
name: "World"
},
{
id: 3,
name: "Jon Doe"
}
]
let selected_types = [1, 2];
let resArr = types.filter(({id}) => !selected_types.includes(id));
console.log(resArr);
```
|
**You can use JavaScript's native `filter` method, which returns a new array:**
```
let types = [{
id: 1,
name: "Hello"
},
{
id: 2,
name: "World"
},
{
id: 3,
name: "Jon Doe"
}
]
let selected_types = [1, 2];
types = types.filter(obj => selected_types.indexOf(obj.id) === -1);
```
| 11,106
|
37,442,993
|
I have a csv file I wish to load into pandas, but the formatting is giving me some problems. The file is such:
>
> Version 1
>
>
> ,Date Time,Name,Value
>
>
> ,26/Jan/2016 07:35:52,Name1,340rqi
>
>
> ,26/Jan/2016 07:00:00,Name2,1.00E+005
>
>
> ,26/Jan/2016 07:00:00,Name3,pulled\_9
>
>
>
(It's a mess of a file, but the main point is that there is an empty 1st column and an empty 1st row with just 'Version 1' in position 0,0)
I am using the following code to get it into my DF:
```
filename_cv = '123456789.csv'
sheet_cv = filename_cv[:-4] #trimming off the .csv part
df_cv = pandas.read_csv(filename_cv, sheet_cv,engine='python')
```
But the output is not desirable. This is what I get:
>
> df\_cv
>
>
> Out[4]:
>
>
> Version 1
>
>
> 0 ,26/Jan/2016 07:35:52,Name1,340rqi
>
>
> 1 ,26/Jan/2016 07:00:00,Name2,1.00E+005
>
>
> 2 ,26/Jan/2016 07:00:00,Name3,pulled\_9
>
>
>
I think those leading commas are my problem, but is there a good way to get rid of them?
I know I can trim rows and change the index (skiprows), but those leading commas are the source of my issue I am sure.
I want the comma separate values to go into their own columns like normal.
What's wrong?
|
2016/05/25
|
[
"https://Stackoverflow.com/questions/37442993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6382244/"
] |
>
> When is DbConnection.StateChange called?
>
>
>
You can find out by looking at the Microsoft reference source code.
The [`StateChange`](http://referencesource.microsoft.com/#System.Data/System/Data/Common/DBConnection.cs,191b2f1d559f7e8d) event is raised by the [`DbConnection.OnStateChange`](http://referencesource.microsoft.com/#System.Data/System/Data/Common/DBConnection.cs,bc9689aff46dc5bc,references) function. Looking for references to this function yields only a few instances:
Firstly, in the `SqlConnection` class, `OnStateChange` is called only in the [`Close`](http://referencesource.microsoft.com/#System.Data/System/Data/SqlClient/SqlConnection.cs,9414944f8fe487ef,references) method.
Then in the [`DbConnectionHelper.cs`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionHelper.cs,9f986d2fc8fa438a) file, there's a partial class called `DBCONNECTIONOBJECT`. It looks like it's used for all `DbConnection`-derived classes using some build-time shenanigans. So you can consider it to be part of `SqlConnection`. In any case, it calls `OnStateChange` only from within the [`SetInnerConnectionEvent`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionHelper.cs,324) function.
As far as I can tell (the partial class nonsense makes it difficult), the `SqlConnection.SetInnerConnectionEvent` is only called from [`SqlConnectionFactory.SetInnerConnectionEvent`](http://referencesource.microsoft.com/#System.Data/System/Data/SqlClient/SqlConnectionFactory.cs,049addc994565208,references). And *that* is called from:
* [`DbConnectionClosed.TryOpenConnection`](http://referencesource.microsoft.com/System.Data/System/Data/ProviderBase/DbConnectionClosed.cs.html#131)
* [`DbConnectionInternal.TryOpenConnectionInternal`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionInternal.cs,d0c96b80d6d7cbb0,references)
* [`DbConnectionInternal.CloseConnection`](http://referencesource.microsoft.com/#System.Data/System/Data/ProviderBase/DbConnectionInternal.cs,478)
So, in summary - the event is only raised in response to client-side actions - there does not appear to be any polling of the connection-state built into `SQLConnection`.
>
> Is there a way this behavior can be changed?
>
>
>
Looking at the source code, I can't see one. As others have suggested, you could implement your own polling, of course.
|
The `StateChange` event is meant for the state of the connection, not the state of the database server instance. To get the state of the database server itself, you need to check it yourself (see below).
>
> The StateChange event occurs when the state of the event changes from
> closed to opened, or opened to closed.
>
>
>
From MSDN: <https://msdn.microsoft.com/en-us/library/system.data.common.dbconnection.statechange(v=vs.110).aspx>
If you're going to roll your own monitor for the database, then you may consider using a method that returns true/false depending on whether the connection is available, and ping that method on a schedule. You could even wrap such a method in an endless loop that repeats after a duration of time and raises its own event when this "state" really changes.
Here's a quick method from another SO answer that is a simple approach:
```
/// <summary>
/// Test that the server is connected
/// </summary>
/// <param name="connectionString">The connection string</param>
/// <returns>true if the connection is opened</returns>
private static bool IsServerConnected(string connectionString)
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
try
{
connection.Open();
return true;
}
catch (SqlException)
{
return false;
}
}
}
```
Source: <https://stackoverflow.com/a/9943871/4154421>
| 11,107
|
12,850,550
|
I'm reading conflicting reports about using PostgreSQL on Amazon's Elastic Beanstalk for python (Django).
Some sources say it isn't possible: (http://www.forbes.com/sites/netapp/2012/08/20/amazon-cloud-elastic-beanstalk-paas-python/). I've been through a dummy app setup, and it does seem that MySQL is the only option (amongst other ones that aren't Postgres).
However, I've found fragments around the place mentioning that it is possible - even if they're very light on detail.
I need to know the following:
1. Is it possible to run a PostgreSQL database with a Django app on Elastic Beanstalk?
2. If it's possible, is it worth the trouble?
3. If it's possible, how would you set it up?
|
2012/10/12
|
[
"https://Stackoverflow.com/questions/12850550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1133318/"
] |
PostgreSQL is now selectable from the AWS RDS configurations. Validated through Elastic Beanstalk application setup 2014-01-27.
|
>
> Is it possible to run a PostgreSQL database with a Django app on
> Elastic Beanstalk?
>
>
>
Yes. The dummy app setup you mention refers to the use of an Amazon Relational Database Service. At the moment PostgreSQL is not available as an Amazon RDS, but you can configure your beanstalk AMI to act as a local PostgreSQL server or set up your own PostgreSQL RDS.
>
> If it's possible, is it worth the trouble?
>
>
>
This is really a question about whether it is worth using an RDS or going it alone, which boils down to questions of cost, effort, usage, required efficiency etc. It is very simple to switch database engines serving django so if you change your mind it is easy to switch set up.
>
> If it's possible, how would you set it up?
>
>
>
Essentially you need to customise your beanstalk AMI by installing a PostgreSQL database server on an Amazon linux EB backed AMI instance.
Advice / instructions regarding customising beanstalk AMIs are here:
<http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/>
<https://forums.aws.amazon.com/thread.jspa?messageID=219630>
[Customizing an Elastic Beanstalk AMI](https://stackoverflow.com/questions/12002445/customizing-an-elastic-beanstalk-ami?rq=1)
| 11,108
|
65,238,577
|
I have a simple Users resource with a put method to update all user information except user password. According to Flask-Restx docs when a model has set the strict and validation params to true, a validation error will be thrown if an unspecified param is provided in the request. However, this doesn't seem to be working for me.
Model definition:
```py
from flask_restx import Namespace, Resource, fields, marshal
users_ns = Namespace("users")
user = users_ns.model(
"user",
{
"user_name": fields.String(example="some_user", required=True),
"email": fields.String(example="some.user@email", required=True),
"is_admin": fields.Boolean(example="False"),
"is_deactivated": fields.Boolean(example="False"),
"created_date": fields.DateTime(example="2020-12-01T01:59:39.297904"),
"last_modified_date": fields.DateTime(example="2020-12-01T01:59:39.297904"),
"uri": fields.Url("api.user"),
},
strict=True,
)
user_post = users_ns.inherit(
"user_post", user, {"password": fields.String(required=True)}
) # Used for when
```
Resource and method definition:
```py
from api.models import Users
class User(Resource):
@users_ns.marshal_with(user)
@users_ns.expect(user, validate=True)
def put(self, id):
"""
Update a specified user.
"""
user = Users.query.get_or_404(id)
body = request.get_json()
user.update(body)
return user
```
Failing Test:
```py
def test_update_user_invalid_password_param(self, client, db):
""" User endpoint should return 400 when user attempts to pass password param to update. """
data = {
"user_name": "some_user",
"email": "some.user@email.com",
"password": "newpassword",
}
response = client.put(url_for("api.user", id=1), json=data)
assert response.status_code == 400
```
The response.status\_code here is 200 because no validation error is thrown for the unspecified param passed in the body of the request.
Am I using the strict param improperly? Am I misunderstanding the behavior of strict?
UPDATED: I've added the test for strict model param from Flask-RestX repo (can be found [here](https://github.com/python-restx/flask-restx/blob/master/tests/test_namespace.py)) for more context on expected behavior:
```py
def test_api_payload_strict_verification(self, app, client):
api = restx.Api(app, validate=True)
ns = restx.Namespace("apples")
api.add_namespace(ns)
fields = ns.model(
"Person",
{
"name": restx.fields.String(required=True),
"age": restx.fields.Integer,
"birthdate": restx.fields.DateTime,
},
strict=True,
)
@ns.route("/validation/")
class Payload(restx.Resource):
payload = None
@ns.expect(fields)
def post(self):
Payload.payload = ns.payload
return {}
data = {
"name": "John Doe",
"agge": 15, # typo
}
resp = client.post_json("/apples/validation/", data, status=400)
assert re.match("Additional properties are not allowed \(u*'agge' was unexpected\)", resp["errors"][""])
```
|
2020/12/10
|
[
"https://Stackoverflow.com/questions/65238577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8184470/"
] |
I resolved my issue by pulling the latest version of Flask-RESTX from Github. The strict parameter for models was merged after Flask-RESTX version 0.2.0 was released on Pypi in March of 2020 (see the closed [issue](https://github.com/python-restx/flask-restx/issues/264) in Flask-RESTX repo for more context). My confusion arose because the documentation appears to represent the latest state of master and not the last Pypi release.
|
It's been a while since I touched on this but from what I can tell, I don't think you are using the strict param correctly. From the documentation [here](https://flask-restx.readthedocs.io/en/latest/_modules/flask_restx/model.html), the `:param bool strict` is defined as
>
> :param bool strict: validation should raise an error when there is param not provided in schema
>
>
>
But in your last snippet of code, you are trying to validate the dictionary `data` passed as the body of the request.
If I recall well again, for this sort of task you need to use (and as you mentioned) a [RequestParser](https://flask-restplus.readthedocs.io/en/stable/api.html#flask_restplus.reqparse.RequestParser). There's a good example of it here - [flask - something more strict than @api.expect for input data?](https://stackoverflow.com/a/41250621/6505847)
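What `strict` validation boils down to — whether via the model's `strict=True` or `RequestParser.parse_args(strict=True)` — is rejecting any key not declared in the schema. A minimal pure-Python sketch of that check, independent of Flask-RESTX (the function name and error wording are illustrative, not the library's):

```python
def validate_strict(payload, allowed_fields, required_fields=()):
    """Return a list of error messages; an empty list means the payload passed."""
    errors = []
    for field in required_fields:
        if field not in payload:
            errors.append(f"'{field}' is required")
    for key in payload:
        if key not in allowed_fields:
            errors.append(f"Additional properties are not allowed ('{key}' was unexpected)")
    return errors
```

Run against the question's failing test data, an undeclared `password` key would produce an error and hence a 400.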
| 11,109
|
10,816,816
|
I was doing some practice problems in [Coding Bat](http://codingbat.com/python), and came across this one..
```
Given 3 int values, a b c, return their sum. However, if one of the values is the same as another of the values, it does not count towards the sum.
lone_sum(1, 2, 3) → 6
lone_sum(3, 2, 3) → 2
lone_sum(3, 3, 3) → 0
```
My solution was the following.
```
def lone_sum(a, b, c):
sum = a+b+c
if a == b:
if a == c:
sum -= 3 * a
else:
sum -= 2 * a
elif b == c:
sum -= 2 * b
elif a == c:
sum -= 2 * a
return sum
```
Is there a more pythonic way of doing this?
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10816816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1088536/"
] |
Another possibility that works for an arbitrary number of arguments:
```
from collections import Counter
def lone_sum(*args):
return sum(x for x, c in Counter(args).items() if c == 1)
```
Note that in Python 2, you should use `iteritems` to avoid building a temporary list.
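A quick check against the question's examples (redefining the function so the snippet runs standalone):

```python
from collections import Counter

def lone_sum(*args):
    # Sum only the values that occur exactly once.
    return sum(x for x, c in Counter(args).items() if c == 1)

assert lone_sum(1, 2, 3) == 6
assert lone_sum(3, 2, 3) == 2
assert lone_sum(3, 3, 3) == 0
assert lone_sum(1, 1, 2, 2, 3) == 3  # works for any number of arguments
```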
|
```
def lone_sum(a, b, c):
z = (a,b,c)
x = []
for item in z:
if z.count(item)==1:
x.append(item)
return sum(x)
```
| 11,110
|
47,286,349
|
I need a simple Python program which makes a number menu that doesn't take up many lines.
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
```
I tried this but it did not work how I thought it would. I thought this code would work as I have tried it before and it worked, but I must have made a mistake in this one.
Anyone have any suggestions?
Thanks
this is my full code:
```
print ("Pick an option")
menu =0
Menu = input("""
1. Check Password
2. Generate Password
3. Quit
""")
if (menu) == 1:
Password = input("Please enter the password you want to check")
points =0
smybols = ['!','%','^','&','*','(',')','-','_','=','+',]
querty = ["qwertyuiop","QWERTYUIOP","asdfghjl","ASDFGHJKL","zxcvbnm","ZXCVBNM"]
if len(password) >24:
print ('password is too long It must be between 8 and 24 characters')
elif len(password) <8:
print ('password is too short It must be between 8 and 24 characters')
elif len(password) >=8 and len(password) <= 24:
print ('password ok\n')
```
|
2017/11/14
|
[
"https://Stackoverflow.com/questions/47286349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8938872/"
] |
Your problem seems to arise from the fact that you use `flatMap`, so if there is no data in the DB for a given `id` and you get an empty `Observable`, `flatMap` just produces no output for such `id`. So it looks like what you need is [defaultIfEmpty](http://reactivex.io/documentation/operators/defaultifempty.html) which is translated to Scala's [`orElse`](http://reactivex.io/rxscala/scaladoc/index.html#rx.lang.scala.Observable@orElse[U>:T](default:=>U):rx.lang.scala.Observable[U]). You can use `orElse` to return some default value inside `flatMap`. So to modify your example:
```
def fetchFromDb(id: Int): Observable[String] = {
if (id <= 3)
Observable.just(s"Document #$id")
else
Observable.empty
}
def gotValue(idsToFetch: Seq[Int]): Observable[(Int, Boolean)] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(_ => (id, true)).orElse((id, false)))
}
println(gotValue(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,true), (2,true), (3,true), (4,false), (5,false), (6,false))
>
>
>
Or you can use `Option` to return `Some(JsonDocument)` or `None` such as
```
def gotValueEx(idsToFetch: Seq[Int]): Observable[(Int, Option[String])] = {
Observable.from(idsToFetch).flatMap((id: Int) => fetchFromDb(id).map(doc => (id, Option(doc))).orElse((id, None)))
}
println(gotValueEx(Seq(1, 2, 3, 4, 5, 6)).toBlocking.toList)
```
which prints
>
> List((1,Some(Document #1)), (2,Some(Document #2)), (3,Some(Document #3)), (4,None), (5,None), (6,None))
>
>
>
|
One way of doing this is the following:
**(1)** convert sequence of ids to `Observable` and `map` it with
```
id => (id, false)
```
... so you'll get an observable of type `Observable[(Int, Boolean)]` (let's call this new observable `first`).
**(2)** fetch data from the database and `map` every fetched row to the form:
```
(some_id, true)
```
... inside `Observable[(Int, Boolean)]` (let's call this observable `last`)
**(3)** [concat](http://reactivex.io/documentation/operators/concat.html) `first` and `last`.
**(4)** [toMap](http://reactivex.io/rxscala/scaladoc/index.html#rx.lang.scala.Observable@toMap[K,V](keySelector:T=%3EK,valueSelector:T=%3EV):rx.lang.scala.Observable[Map[K,V]]) result of (3). Duplicate elements coming from `first` will be dropped in process. (this will be your `resultObsrvable`)
**(5)** (possibly) collect the first and only element of the observable (your map). You might not want to do this at all, but if you do, you should really understand implications of blocking to collect result at this point. In any case, this step really depends on your application specifics (how threading\scheduling\io is organized) but brute-force approach should look something like this (refer to [this demo](https://github.com/ReactiveX/RxScala/blob/0.x/examples/src/test/scala/examples/RxScalaDemo.scala) for more specifics):
```
Await.result(resultObsrvable.toBlocking.toFuture, 2 seconds)
```
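Steps (1)-(4) amount to building a map of "not found" defaults and letting the fetched rows overwrite them. The same idea in plain Python, outside of Rx (the `fetch_from_db` stub is a hypothetical stand-in for the database call):

```python
def fetch_from_db(ids):
    # Hypothetical stand-in: the database only knows about ids <= 3.
    return [i for i in ids if i <= 3]

def got_value(ids_to_fetch):
    # (1) every id starts out mapped to False ("not found") ...
    result = {i: False for i in ids_to_fetch}
    # (2)-(4) ... and each fetched row overwrites its default entry.
    for i in fetch_from_db(ids_to_fetch):
        result[i] = True
    return result

print(got_value([1, 2, 3, 4, 5, 6]))
# {1: True, 2: True, 3: True, 4: False, 5: False, 6: False}
```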
| 11,120
|
13,047,458
|
I'm trying to set speed limits on downloading/uploading files and found that twisted provides [twisted.protocols.policies.ThrottlingFactory](http://twistedmatrix.com/documents/current/api/twisted.protocols.policies.ThrottlingFactory.html) to handle this job, but I can't get it right. I set `readLimit` and `writeLimit`, but file is still downloading on a maximum speed. What am I doing wrong?
```py
from twisted.protocols.basic import FileSender
from twisted.protocols.policies import ThrottlingFactory
from twisted.web import server, resource
from twisted.internet import reactor
import os
class DownloadPage(resource.Resource):
isLeaf = True
def __init__(self, producer):
self.producer = producer
def render(self, request):
size = os.stat(somefile).st_size
request.setHeader('Content-Type', 'application/octet-stream')
request.setHeader('Content-Length', size)
request.setHeader('Content-Disposition', 'attachment; filename="' + somefile + '"')
request.setHeader('Accept-Ranges', 'bytes')
fp = open(somefile, 'rb')
d = self.producer.beginFileTransfer(fp, request)
def err(error):
print "error %s", error
def cbFinished(ignored):
fp.close()
request.finish()
d.addErrback(err).addCallback(cbFinished)
return server.NOT_DONE_YET
producer = FileSender()
root_resource = resource.Resource()
root_resource.putChild('download', DownloadPage(producer))
site = server.Site(root_resource)
tsite = ThrottlingFactory(site, readLimit=10000, writeLimit=10000)
tsite.protocol.producer = producer
reactor.listenTCP(8080, tsite)
reactor.run()
```
**UPDATE**
So sometime after I run it:
```
2012-10-25 09:17:03+0600 [-] Unhandled Error
Traceback (most recent call last):
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/application/app.py", line 402, in startReactor
self.config, oldstdout, oldstderr, self.profiler, reactor)
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/application/app.py", line 323, in runReactorWithLogging
reactor.run()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1169, in run
self.mainLoop()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1178, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/internet/base.py", line 800, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 334, in unthrottleWrites
p.unthrottleWrites()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 225, in unthrottleWrites
self.producer.resumeProducing()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/basic.py", line 919, in resumeProducing
self.consumer.unregisterProducer()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/web/http.py", line 811, in unregisterProducer
self.transport.unregisterProducer()
File "/home/chambylov/environments/transfer/local/lib/python2.7/site-packages/twisted/protocols/policies.py", line 209, in unregisterProducer
del self.producer
exceptions.AttributeError: ThrottlingProtocol instance has no attribute 'producer'
```
I see that I'm not supposed to assign the producer like I do now with `tsite.protocol.producer = producer`, but I'm new to Twisted and I don't know how to do it another way.
|
2012/10/24
|
[
"https://Stackoverflow.com/questions/13047458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1770691/"
] |
This does **not** look like **clustering** to me.
Instead, I figure you want a simple **decision tree classification**.
It should already be available in Rapidminer.
|
You could use the "Generate Attributes" operator.
This creates new attributes from existing ones.
It would be relatively tiresome to create all the rules but they would be something like
cluster : if (((A==0)&&(B==0)&&(C==0)),1,0)
| 11,122
|
2,537,929
|
I have a python logger set up, using python's logging module. I want to store the string I'm using with the logging Formatter object in a configuration file using the ConfigParser module.
The format string is stored in a dictionary of settings in a separate file that handles the reading and writing of the config file. The problem I have is that python still tries to format the file and falls over when it reads all the logging-module-specific formatting flags.
```
{
"log_level":logging.debug,
"log_name":"C:\\Temp\\logfile.log",
"format_string":
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
}
```
My question is simple: how can I disable the formatting functionality here while keeping it elsewhere? My initial reaction was copious use of the backslash to escape the various percent symbols, but that of course permanently breaks the formatting such that it won't work even when I need it to.
I should also mention, since it was brought up in the comments, that ConfigParser does some internal interpolation that causes the trip-up. Here is my traceback:
```
Traceback (most recent call last):
File "initialconfig.py", line 52, in <module>
"%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s"
File "initialconfig.py", line 31, in add_settings
self.set(section_name, setting_name, default_value)
File "C:\Python26\lib\ConfigParser.py", line 668, in set
"position %d" % (value, m.start()))
ValueError: invalid interpolation syntax in '%(asctime)s %(levelname)s: %(module
)s, line %(lineno)d - %(message)s' at position 10
```
Also, general pointers on good settings-file practices would be nice. This is the first time I've done anything significant with ConfigParser (or logging for that matter).
Thanks in advance,
Dominic
|
2010/03/29
|
[
"https://Stackoverflow.com/questions/2537929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56815/"
] |
Did you try to escape percents with `%%`?
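To illustrate: with the default interpolating parser, doubled percent signs are collapsed back to single ones on read, so the stored value comes out as a usable logging format string. (This sketch uses the Python 3 module name `configparser`; on Python 2, as in the question, it is `ConfigParser`.)

```python
import configparser

config = configparser.ConfigParser()
config.add_section("logging")
# Escape every % as %% so the interpolation engine leaves it alone.
config.set("logging", "format_string",
           "%%(asctime)s %%(levelname)s: %%(module)s, line %%(lineno)d - %%(message)s")

# Reading the value back collapses %% to %, giving the real format string.
fmt = config.get("logging", "format_string")
print(fmt)  # %(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s
```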
|
There's `RawConfigParser` which is like `ConfigParser` without the interpolation behaviour. If you don't use the interpolation feature in any other part of the configuration file, you can simply replace `ConfigParser` with `RawConfigParser` in your code.
See the documentation of [RawConfigParser](http://docs.python.org/library/configparser.html#ConfigParser.RawConfigParser) for more details.
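For example, with `RawConfigParser` the format string can be stored verbatim, no escaping needed (shown with the Python 3 module name `configparser`):

```python
import configparser  # on Python 2 this module is named ConfigParser

config = configparser.RawConfigParser()
config.add_section("logging")
# No interpolation: the % placeholders can be stored and read back as-is.
config.set("logging", "format_string",
           "%(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s")

fmt = config.get("logging", "format_string")
print(fmt)  # %(asctime)s %(levelname)s: %(module)s, line %(lineno)d - %(message)s
```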
| 11,123
|
15,863,657
|
Kind of a newbie in Python, starting to learn how Python works with strings and iteration over strings.
I had worked on a chunk of so-called code 'Palindrome'; would you take a look at which part exactly is going wrong?
```
def palindrome(s):
if len(s) < 1:
return True
else:
i = 0
j = len(s) - 1
r = s[::-1]
print "s is %s" % s,
print "r is %s" % r
while s[j] == r[i] and j != 0:
print "s[j] is %s" % s[j],
print "; r[i] is %s" % r[i]
i += 1
j -= 1
return True
return False
```
I've used all those print statements to make sure where the code is going. The program is supposed to compare a string with its reverse, determining whether it is a palindrome or not.
|
2013/04/07
|
[
"https://Stackoverflow.com/questions/15863657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540033/"
] |
Below you have a little less complicated solution:
```
def is_palindrome(s):
return s == s[::-1]
```
In your version you are always returning `True` for all strings with `len(s) >= 1`.
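For example (redefining the function so the snippet runs standalone):

```python
def is_palindrome(s):
    return s == s[::-1]

assert is_palindrome("racecar")
assert not is_palindrome("python")
assert is_palindrome("")  # the empty string reads the same both ways
```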
|
I may be missing something obvious, but shouldn't this be enough?
```
def ispalindrome(s):
return s == s[::-1]
```
| 11,133
|
23,554,644
|
I was able to get my flask app running as a service thanks to [Is it possible to run a Python script as a service in Windows? If possible, how?](https://stackoverflow.com/questions/32404/is-it-possible-to-run-a-python-script-as-a-service-in-windows-if-possible-how), but when it comes to stopping it i cannot. I have to terminate the process in task manager.
Here's my run.py which I turn into a service via run.py install:
```
from app import app
from multiprocessing import Process
import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
class AppServerSvc (win32serviceutil.ServiceFramework):
_svc_name_ = "CCApp"
_svc_display_name_ = "CC App"
def __init__(self,args):
win32serviceutil.ServiceFramework.__init__(self,args)
self.hWaitStop = win32event.CreateEvent(None,0,0,None)
socket.setdefaulttimeout(60)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
server.terminate()
server.join()
def SvcDoRun(self):
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_,''))
self.main()
def main(self):
server = Process(app.run(host = '192.168.1.6'))
server.start()
if __name__ == '__main__':
win32serviceutil.HandleCommandLine(AppServerSvc)
```
I got the process stuff from this post: <http://librelist.com/browser/flask/2011/1/10/start-stop-flask/#a235e60dcaebaa1e134271e029f801fe> but unfortunately it doesn't work either.
The log file in Event Viewer says that the global variable 'server' is not defined. However, I've made server a global variable and it still gives me the same error.
|
2014/05/09
|
[
"https://Stackoverflow.com/questions/23554644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2917993/"
] |
You can stop the Werkzeug web server gracefully before you stop the Win32 server. Example:
```
from flask import request
def shutdown_server():
func = request.environ.get('werkzeug.server.shutdown')
if func is None:
raise RuntimeError('Not running with the Werkzeug Server')
func()
@app.route('/shutdown', methods=['POST'])
def shutdown():
shutdown_server()
return 'Server shutting down...'
```
If you add this to your Flask server you can then request a graceful server shutdown by sending a `POST` request to `/shutdown`. You can use requests or urllib2 to do this. Depending on your situation you may need to protect this route against unauthorized access.
Once the server has stopped I think you will have to no problem stopping the Win32 service.
Note that the shutdown code above appears in this [Flask snippet](http://flask.pocoo.org/snippets/67/).
|
I recommend you use <http://supervisord.org/>. Actually not work in Windows, but with Cygwin you can run supervisor as in Linux, including run as service.
For install Supervisord: <https://stackoverflow.com/a/18032347/3380763>
After install you must configure the app, here an example: <http://flaviusim.com/blog/Deploying-Flask-with-nginx-uWSGI-and-Supervisor/> (Is not necessary that you use Nginx with the Supervisor's configuration is enough)
| 11,134
|
1,817,780
|
I have created a python script which pulls data out of OLE streams in Word documents, but am having trouble converting the OLE2-formatted timestamp to something more human-readable :(
The timestamp which is pulled out is 12760233021 but I cannot for the life of me convert this to a date like 12 Mar 2007 or similar.
Any help is greatly appreciated.
EDIT:
OK I have ran the script over one of my word documents, which was created on **31/10/2009, 10:05:00**. The Create Date in the OLE DocumentSummaryInformation stream is **12901417500**.
Another example is a word doc created on 27/10/2009, 15:33:00, gives the Create Date of 12901091580 in the OLE DocumentSummaryInformation stream.
The MSDN documentation on the properties of these OLE streams is <http://msdn.microsoft.com/en-us/library/aa380376%28VS.85%29.aspx>
The def which pulls these streams out is given below:
```
import OleFileIO_PL as ole
def enumerateStreams(item):
# item is an arbitrary file
if ole.isOleFile('%s' % item):
loader = ole.OleFileIO('%s' % item)
# enumerate all the OLE streams in the office file
streams = loader.listdir()
streamProps = []
for stream in streams:
if stream[0] == '\x05SummaryInformation':
# get all the properties fro the SummaryInformation OLE stream
streamProps.append(loader.getproperties(stream))
elif stream[0] == '\x05DocumentSummaryInformation':
# get all the properties from the DocumentSummaryInformation stream
streamProps.append(loader.getproperties(stream))
return streamProps
```
|
2009/11/30
|
[
"https://Stackoverflow.com/questions/1817780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Well, Python 3.0 and 3.1 are already released, so you can check this out for yourself. The end result was that map and filter were kept as built-ins, and lambda was also kept. The only change was that reduce was moved to the functools module; you just need to do
```
from functools import reduce
```
to use it.
Future 3.x releases can be expected to remain backwards-compatible with 3.0 and 3.1 in this respect.
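For example, on Python 3:

```python
from functools import reduce

# reduce folds the list left-to-right with the given two-argument function.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4])
print(total)  # 10
```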
|
In Python 3.x, Python continues to have a rich set of functional-ish tools built in: list comprehensions, generator expressions, iterators and generators, and functions like `any()` and `all()` that have short-circuit evaluation wherever possible.
Python's "Benevolent Dictator For Life" floated the idea of removing `map()` because you can trivially reproduce its effects with a list comprehension:
```
lst2 = map(foo, lst)
lst3 = [foo(x) for x in lst]
lst2 == lst3 # evaluates True
```
Python's `lambda` feature has not been removed or renamed, and likely never will be. However, it will likely never become more powerful, either. Python's `lambda` is restricted to a single expression; it cannot include statements and it cannot include multiple lines of Python code.
Python's plain-old-standard `def` defines a function object, which can be passed around just as easily as a `lambda` object. You can even unbind the name of the function after you define it, if you really want to do so.
Example:
```
# NOT LEGAL PYTHON
lst2 = map(lambda x: if foo(x): x**2; else: x, lst)
# perfectly legal Python
def lambda_function(x):
if foo(x):
return x**2
else:
return x
lst2 = map(lambda_function, lst)
del(lambda_function) # can unbind the name if you wish
```
Note that you could actually use the "ternary operator" in a `lambda` so the above example is a bit contrived.
```
lst2 = map(lambda x: x**2 if foo(x) else x, lst)
```
But some multiline functions are difficult to force into a `lambda` and are better handled as simple ordinary multiline functions.
Python 3.x has lost none of its functional power. There is some general feeling that list comprehensions and generator expressions are probably preferable to `map()`; in particular, generator expressions can sometimes be used to do the equivalent of a `map()` but without allocating a list and then freeing it again. For example:
```
total = sum(map(foo, lst))
total2 = sum(foo(x) for x in lst)
assert total == total2 # same result
```
In Python 2.x, the `map()` allocates a new list, which is summed and immediately freed. The generator expression gets the values one at a time and never ties up the memory of a whole list of values.
In Python 3.x, `map()` is "lazy" so both are about equally efficient. But as a result, in Python 3.x the ternary lambda example needs to be forced to expand into a list:
```
lst2 = list(map(lambda x: x**2 if foo(x) else x, lst))
```
Easier to just write the list comprehension!
```
lst2 = [x**2 if foo(x) else x for x in lst]
```
| 11,136
|
55,118,630
|
Is there a way to start a python script on a server from a webpage?
At work I've made a simple python script using selenium to do a routine job (open a webpage and click a few buttons).
I want to be able to start this remotely (still on company network) but due to security/permissions here at work I can't use telnet/shh/etc or install PHP (which I've read is one way to do it).
I've made simple python servers earlier, which can send/receive REST request, but I can't find a way to use javascript (which I'm somewhat comfortable with) to send from a webpage.
I've found several search results where people suggest AJAX, but I can't manage to make it work
It doesn't have to be anything more than a blank page with a single button on that sends a request to my server which in turn starts the script.
Thanks for any help.
|
2019/03/12
|
[
"https://Stackoverflow.com/questions/55118630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3379555/"
] |
Strings are immutable. They cannot change. Any action you perform on them will result in a "new" string that is returned by the method you called upon it.
Read more about it: [Immutability of Strings in Java](https://stackoverflow.com/questions/1552301/immutability-of-strings-in-java)
So in your example, if you wish to change your string you need to overwrite the variable that references your original string, so that it points to the new string value.
```
String tempString= "abc is very easy";
tempString = tempString.replace("very","not");
System.out.println("tempString is "+tempString);
```
Under the hood for your understanding:
>
> Assign String "abc is very easy" to memory address 0x03333
>
> Assign address 0x03333 to variable tempString
>
> Call method replace, create new modified string at address 0x05555
>
> Assign address 0x05555 to temp String variable
>
> Return temp String variable from method replace
>
> Assign address from temp String variable to variable tempString
>
> Address from tempString is now 0x05555.
>
If the string at 0x03333 was not a compile-time string literal held in the string pool, the string at address 0x03333 will be garbage collected.
>
>
>
|
You can reassign `tempString` with its new value:
```
String tempString= "abc is very easy";
tempString = tempString.replace("very","not");
System.out.println("tempString is "+tempString);
```
Result is :
```
tempString is abc is not easy
```
Best
| 11,137
|
29,999,482
|
I try to "click" Javascript alert for reboot confirmation in DSL modem with a Python script as follows:
```
#!/usr/bin/env python
import selenium
import time
from selenium import webdriver
cap = {u'acceptSslCerts': True,
u'applicationCacheEnabled': True,
u'browserConnectionEnabled': True,
u'browserName': u'phantomjs',
u'cssSelectorsEnabled': True,
u'databaseEnabled': False,
u'driverName': u'ghostdriver',
u'driverVersion': u'1.1.0',
u'handlesAlerts': True,
u'javascriptEnabled': True,
u'locationContextEnabled': False,
u'nativeEvents': True,
u'platform': u'linux-unknown-64bit',
u'proxy': {u'proxyType': u'direct'},
u'rotatable': False,
u'takesScreenshot': True,
u'version': u'1.9.8',
u'webStorageEnabled': False}
driver = webdriver.PhantomJS('/usr/lib/node_modules/phantomjs/bin/phantomjs', desired_capabilities=cap)
driver.get('http://username:passwd@192.168.1.254')
sbtn = driver.find_element_by_id('reboto_btn')
sbtn.click()
time.sleep(4)
al = driver.switch_to_alert()
print al.accept()
```
However, I get exception pasted below even though I do set `handlesAlerts` in `desired_capabilities`.
How can I fix that? What's the reason for the exception?
Exception:
```
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
/usr/local/bin/pjs/asus_reboot.py in <module>()
36 #ipdb.set_trace()
37
---> 38 print al.accept()
39
40 #print al.text
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/common/alert.pyc in accept(self)
76 Alert(driver).accept() # Confirm a alert dialog.
77 """
---> 78 self.driver.execute(Command.ACCEPT_ALERT)
79
80 def send_keys(self, keysToSend):
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.pyc in execute(self, driver_command, params)
173 response = self.command_executor.execute(driver_command, params)
174 if response:
--> 175 self.error_handler.check_response(response)
176 response['value'] = self._unwrap_value(
177 response.get('value', None))
/usr/local/venvs/asusreboot/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.pyc in check_response(self, response)
134 if exception_class == ErrorInResponseException:
135 raise exception_class(response, value)
--> 136 raise exception_class(value)
137 message = ''
138 if 'message' in value:
WebDriverException: Message: Invalid Command Method - {"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"53","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:36590","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"sessionId\": \"fc97c240-f098-11e4-ae53-e17f38effd6c\"}","url":"/accept_alert","urlParsed":{"anchor":"","query":"","file":"accept_alert","directory":"/","path":"/accept_alert","relative":"/accept_alert","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/accept_alert","queryKey":{},"chunks":["accept_alert"]},"urlOriginal":"/session/fc97cea0-f098-11e4-ae53-e17f38eaad6c/accept_alert"}
```
|
2015/05/02
|
[
"https://Stackoverflow.com/questions/29999482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2022518/"
] |
As PhantomJS has no support for alert boxes, you need to use a script executor for this:
```
driver.execute_script("window.confirm = function(msg) { return true; }");
```
|
In Java, `driver.switchTo().alert().accept();` will do the job.
I am not sure why you are using `print al.accept()`. If you are trying to print the alert text, `alert.getText()` will do it in Java. Sorry if I am wrong; I am not sure about Python.
Thank You,
Murali
<http://seleniumtrainer.com/>
| 11,138
|
69,694,596
|
This is my code. It should send a message to the channel when a user joins the server.
```py
@client.event
async def on_member_join(member):
print('+') #this works perfectly
ch = client.get_channel(84319995256905728)
await ch.send(f"{member.name} has joined")
```
But an error occurs. This is the output:
```py
Ignoring exception in on_member_join
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 351, in _run_event
await coro(*args, **kwargs)
File "main.py", line 30, in on_member_join
await ch.send(f"{member.name} has joined")
AttributeError: 'NoneType' object has no attribute 'send'
```
I had enabled server member intents in dev portal.
```py
intents = nextcord.Intents.default()
intents.members = True
client = commands.Bot(command_prefix='=',intents=intents)
```
May I know how to fix it?
|
2021/10/24
|
[
"https://Stackoverflow.com/questions/69694596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17067135/"
] |
You can approximate the requirement this way:
```
from collections.abc import Iterable
def f2(sequence_type, *elements):
if isinstance(elements[0],Iterable):
return sequence_type(elements[0])
else:
return sequence_type(elements[:1])
```
which is close, but fails for `f(list, 'ab')` which returns `['a', 'b']` not `['ab']`
It is hard to see why one would expect a *Python* function to treat strings of length 2 differently from strings of length 1. The language itself says that `list('ab') == ['a', 'b']`.
I suspect that is an expectation imported from languages like C that treat characters and strings as different datatypes, in other words I have reservations about that aspect of the question.
But saying you don't like the spec isn't a recipe for success, so that special treatment has to be coded as such:
```
def f(sequence_type, elements):
if isinstance(elements, str) and len(elements) > 1 and sequence_type != str:
return sequence_type([elements])
else:
return f2(sequence_type, elements)
```
The result is generic but the special-casing can't really be called clean.
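Putting the two pieces above together, a quick sanity check of the combined behaviour (the assertions only illustrate the cases already discussed):

```python
from collections.abc import Iterable

def f2(sequence_type, *elements):
    # Wrap a single element (or pass through an iterable) into sequence_type
    if isinstance(elements[0], Iterable):
        return sequence_type(elements[0])
    else:
        return sequence_type(elements[:1])

def f(sequence_type, elements):
    # Special-case multi-character strings so f(list, 'ab') == ['ab']
    if isinstance(elements, str) and len(elements) > 1 and sequence_type != str:
        return sequence_type([elements])
    else:
        return f2(sequence_type, elements)

print(f(list, 'ab'))  # ['ab']
print(f(list, 'a'))   # ['a']
print(f(tuple, 3))    # (3,)
print(f(str, 'ab'))   # 'ab'
```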
|
The examples for `str` don't really fit the description of the function, but here are some implementations that pass your test cases - you want to wrap the element into a collection first for tuple/list/set before converting:
```py
def f(sequence_type, element):
return sequence_type([element] if sequence_type != str else element)
```
```py
def f(sequence_type, element):
return sequence_type((element,) if sequence_type != str else element)
```
```py
def f(sequence_type, element):
return sequence_type({element} if sequence_type != str else element)
```
| 11,139
|
52,879,261
|
I have a Spring Boot application and I am trying to implement Spring Security to override the default username and password generated by Spring, but it's not working. Spring is still using the default user credentials.
```
@EnableWebSecurity
@Configuration
public class WebSecurityConfigurtion extends WebSecurityConfigurerAdapter {
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication().withUser("demo").password("demo").roles("USER").and().withUser("admin")
.password("admin").roles("USER", "ADMIN");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().anyRequest().authenticated().and().httpBasic();
}
}
```
And this is my main Spring Boot class:
```
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class,
DataSourceTransactionManagerAutoConfiguration.class ,SecurityAutoConfiguration.class})
@EnableAsync
@EntityScan({ "com.demo.entity" })
@EnableJpaRepositories(basePackages = {"com.demo.repositories"})
@ComponentScan({ "com.demo.controller", "com.demo.service" })
@Import({SwaggerConfig.class})
public class UdeManagerServiceApplication extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(UdeManagerServiceApplication.class);
}
public static void main(String[] args) {
SpringApplication.run(UdeManagerServiceApplication.class, args);
}
}
```
These are the logs, showing the generated password:
```
22:26:48.537 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Included patterns for restart : []
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.settings.DevToolsSettings - Excluded patterns for restart : [/spring-boot-actuator/target/classes/, /spring-boot-devtools/target/classes/, /spring-boot/target/classes/, /spring-boot-starter-[\w-]+/, /spring-boot-autoconfigure/target/classes/, /spring-boot-starter/target/classes/]
22:26:48.540 [main] DEBUG org.springframework.boot.devtools.restart.ChangeableUrls - Matching URLs for reloading : [file:/C:/Work/Spring/UdeManagerService/target/classes/]
2018-10-18 22:26:48.797 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'configurationProperties' with highest search precedence
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.4.RELEASE)
2018-10-18 22:26:49.218 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletConfigInitParams' with lowest search precedence
2018-10-18 22:26:49.219 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'servletContextInitParams' with lowest search precedence
2018-10-18 22:26:49.220 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemProperties' with lowest search precedence
2018-10-18 22:26:49.221 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'systemEnvironment' with lowest search precedence
2018-10-18 22:26:49.222 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Initialized StandardServletEnvironment with PropertySources [StubPropertySource {name='servletConfigInitParams'}, StubPropertySource {name='servletContextInitParams'}, MapPropertySource {name='systemProperties'}, SystemEnvironmentPropertySource {name='systemEnvironment'}]
2018-10-18 22:26:49.967 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Starting UdeManagerServiceApplication on TRV-LT-ANSAR with PID 7652 (C:\Work\Spring\UdeManagerService\target\classes started by Ansar.Samad in C:\Work\Spring\UdeManagerService)
2018-10-18 22:26:49.973 DEBUG 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Running with Spring Boot v2.0.4.RELEASE, Spring v5.0.8.RELEASE
2018-10-18 22:26:49.976 INFO 7652 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : No active profile set, falling back to default profiles: default
2018-10-18 22:26:50.614 INFO 7652 --- [ restartedMain] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@23daa694: startup date [Thu Oct 18 22:26:50 IST 2018]; root of context hierarchy
2018-10-18 22:26:53.258 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Removing PropertySource 'defaultProperties'
2018-10-18 22:26:53.428 INFO 7652 --- [ restartedMain] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$82d409fb] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-10-18 22:26:53.936 INFO 7652 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2018-10-18 22:26:53.959 INFO 7652 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-10-18 22:26:53.960 INFO 7652 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.32
2018-10-18 22:26:53.966 INFO 7652 --- [ost-startStop-1] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [C:\Program Files\Java\jdk1.8.0_162\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files (x86)/Java/jre1.8.0_181/bin/client;C:/Program Files (x86)/Java/jre1.8.0_181/bin;C:/Program Files (x86)/Java/jre1.8.0_181/lib/i386;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\phython3.6\Scripts\;C:\phython3.6\;C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Java\jre1.8.0_131\bin;C:\cuia\bat;C:\apache-maven-3.5.0-bin\apache-maven-3.5.0\bin;JAVA_HOME;SPRING_HOME;C:\Program Files (x86)\Symantec\VIP Access Client\;C:\phython3.6;C:\Program Files\Git\cmd;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;TESTNG_HOME;C:\TestNg\testng-6.8.jar;C:\python-3.6.5\Scripts;C:\python-3.6.5;C:\Program Files (x86)\Yarn\bin\;C:\Program Files\nodejs\;;C:\Program Files\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Local\Yarn\bin;C:\Users\Ansar.Samad\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Ansar.Samad\AppData\Roaming\npm;C:\spring-tool-suite-3.9.5.RELEASE-e4.8.0-win32\sts-bundle\sts-3.9.5.RELEASE;;.]
2018-10-18 22:26:54.081 INFO 7652 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-10-18 22:26:54.082 DEBUG 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Published root WebApplicationContext as ServletContext attribute with name [org.springframework.web.context.WebApplicationContext.ROOT]
2018-10-18 22:26:54.082 INFO 7652 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3468 ms
2018-10-18 22:26:54.192 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-10-18 22:26:54.193 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2018-10-18 22:26:54.194 INFO 7652 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Servlet dispatcherServlet mapped to [/]
2018-10-18 22:26:54.274 DEBUG 7652 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Replacing PropertySource 'servletContextInitParams' with 'servletContextInitParams'
2018-10-18 22:26:54.308 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : Driver class com.mysql.jdbc.Driver found in Thread context class loader org.springframework.boot.devtools.restart.classloader.RestartClassLoader@4e843ef9
2018-10-18 22:26:54.407 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : HikariPool-1 - configuration:
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : allowPoolSuspension.............false
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : autoCommit......................true
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : catalog.........................none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionInitSql...............none
2018-10-18 22:26:54.409 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTestQuery.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : connectionTimeout...............30000
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSource......................none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceClassName.............none
2018-10-18 22:26:54.410 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceJNDI..................none
2018-10-18 22:26:54.411 DEBUG 7652 --- [ restartedMain] com.zaxxer.hikari.HikariConfig : dataSourceProperties............{password=<masked>}
2018-10-18 22:22018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationAuthReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHiddenReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationHttpMethodReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParameterReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationImplicitParametersReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNicknameIntoUniqueIdReader': no URL paths identified
2018-10-18 22:35:45.240 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'operationNotesReader': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemProperties': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'systemEnvironment': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'org.springframework.context.annotation.ConfigurationClassPostProcessor.importRegistry': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'messageSource': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'applicationEventMulticaster': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'servletContext': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextParameters': no URL paths identified
2018-10-18 22:35:45.256 DEBUG 6720 --- [ restartedMain] o.s.w.s.h.BeanNameUrlHandlerMapping : Rejected bean name 'contextAttributes': no URL paths identified
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.256 INFO 6720 --- [ restartedMain] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-10-18 22:35:45.271 DEBUG 6720 --- [ restartedMain] .m.m.a.ExceptionHandlerExceptionResolver : Looking for exception mappings: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@275b060b: startup date [Thu Oct 18 22:35:35 IST 2018]; root of context hierarchy
2018-10-18 22:35:45.599 INFO 6720 --- [ restartedMain] .s.s.UserDetailsServiceAutoConfiguration :
Using generated security password: e67dca07-3bc3-49e2-bc33-84c9643ef09a
2018-10-18 22:35:45.740 INFO 6720 --- [ restartedMain] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: org.springframework.security.web.util.matcher.AnyRequestMatcher@1, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6abf32c0, org.springframework.security.web.context.SecurityContextPersistenceFilter@b2b202e, org.springframework.security.web.header.HeaderWriterFilter@23ff67f5, org.springframework.security.web.csrf.CsrfFilter@a493c40, org.springframework.security.web.authentication.logout.LogoutFilter@776a660, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@334ce4e9, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@1f4c6b47, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4a4c14a3, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7793ba1a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4edda6d8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3e492163, org.springframework.security.web.session.SessionManagementFilter@68661010, org.springframework.security.web.access.ExceptionTranslationFilter@6920b121, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@35279a68]
2018-10-18 22:35:45.836 INFO 6720 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-10-18 22:35:45.867 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Bean with name 'dataSource' has been autodetected for JMX exposure
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.j.e.a.AnnotationMBeanExporter : Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2018-10-18 22:35:45.883 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
2018-10-18 22:35:45.899 INFO 6720 --- [ restartedMain] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
2018-10-18 22:35:45.914 INFO 6720 --- [ restartedMain] s.d.s.w.s.ApiListingReferenceScanner : Scanning for api listing references
2018-10-18 22:35:46.086 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: testUsingPOST_1
2018-10-18 22:35:46.101 INFO 6720 --- [ restartedMain] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: getDataUsingGET_1
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Looking for resource handler mappings
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**/favicon.ico", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/], class path resource []], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3914bc92]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/webjars/**", locations=[class path resource [META-INF/resources/webjars/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@3dc6f1f0]
2018-10-18 22:35:46.117 DEBUG 6720 --- [ restartedMain] o.s.w.s.resource.ResourceUrlProvider : Found resource handler mapping: URL pattern="/**", locations=[class path resource [META-INF/resources/], class path resource [resources/], class path resource [static/], class path resource [public/], ServletContext resource [/]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@315362d4]
2018-10-18 22:35:47.196 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@edd709c
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-10-18 22:35:49.180 DEBUG 6720 --- [ restartedMain] o.s.w.c.s.StandardServletEnvironment : Adding PropertySource 'server.ports' with highest search precedence
2018-10-18 22:35:49.180 INFO 6720 --- [ restartedMain] c.c.U.UdeManagerServiceApplication : Started UdeManagerServiceApplication in 14.132 seconds (JVM running for 14.884)
2018-10-18 22:35:50.929 DEBUG 6720 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection@7a5e8413
```
|
2018/10/18
|
[
"https://Stackoverflow.com/questions/52879261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10525271/"
] |
This can be done using the following properties in the `application.properties` or `application.yaml` file:
```
spring.security.user.name
spring.security.user.password
spring.security.user.roles
```
Take a look at [this.](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html)
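For example, in `application.properties` (the values here are placeholders):

```
spring.security.user.name=demo
spring.security.user.password=demo
spring.security.user.roles=USER
```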
|
The issue was fixed by adding the config package, where the security classes are available, to the component scanning of the main class.
| 11,142
|
26,374,866
|
I'm using the [Unirest library](http://unirest.io/python.html) for making async web requests with Python. I've read the documentation, but I wasn't able to find if I can use proxy with it. Maybe I'm just blind and there's a way to use it with Unirest?
Or is there some other way to specify a proxy in Python? Proxies need to be changed from the script itself after making some requests, so the approach should allow that.
Thanks in advance.
|
2014/10/15
|
[
"https://Stackoverflow.com/questions/26374866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2298183/"
] |
I know nothing about Unirest, but in all the scripts I wrote that required proxy support I used the SocksiPy module (<http://socksipy.sourceforge.net>). It supports HTTP, SOCKS4 and SOCKS5, and it's really easy to use. :)
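If a full SOCKS proxy isn't required, a plain HTTP proxy can also be wired up with the standard library (`urllib.request` in Python 3, `urllib2` in Python 2); a minimal sketch, where the proxy address is a placeholder:

```python
import urllib.request

# Hypothetical proxy address - replace with your own
proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8080'})
opener = urllib.request.build_opener(proxy)

# Installing the opener makes it the default; building and installing a new
# one later is how you switch proxies between requests from the script itself
urllib.request.install_opener(opener)
# urllib.request.urlopen('http://example.com')  # would now go via the proxy
```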
|
Would something like this work for you?
[1] <https://github.com/obriencj/python-promises>
| 11,145
|
23,543,202
|
While reviewing the system library `socket.py` implementation I came across this code
```
try:
import errno
except ImportError:
errno = None
EBADF = getattr(errno, 'EBADF', 9)
EINTR = getattr(errno, 'EINTR', 4)
```
Is this code just a relic of a bygone age, or are there platforms/implementations out there for which there is no `errno` module?
To be more explicit, is it safe to `import errno` without an exception handler?
I'm interested in answers for both python 2.x and 3.x.
**Edit**
To clarify the question: I have to test error codes encoded in `IOError` exceptions raised inside the `socket` module. The above code, which is in the [cpython code base](http://hg.python.org/cpython/file/219502cf57eb/Lib/socket.py), made me suspicious about known situations in which `socket` is available, but `import errno` would fail. Maybe a minor question, but I would like to avoid unnecessary code.
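For illustration, on a platform where `errno` imports fine, the `getattr` fallback in the quoted snippet simply reproduces the module's own values; the hard-coded 9 and 4 are the conventional POSIX numbers:

```python
import errno

# The fallback pattern from socket.py: use errno's value when available,
# otherwise fall back to the conventional POSIX numbers
EBADF = getattr(errno, 'EBADF', 9)
EINTR = getattr(errno, 'EINTR', 4)
print(EBADF, EINTR)  # on Linux: 9 4
```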
|
2014/05/08
|
[
"https://Stackoverflow.com/questions/23543202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1499402/"
] |
The reason this might be a bit tricky is because the parent nodes have some set defaults.
Set width and height to 100% in initialization of `Reveal`:
```
Reveal.initialize({
width: "100%",
height:"100%"
});
```
Ensure that the slide (ie. `section`) uses the whole space:
```
.full {
height:100%;
width:100%;
}
```
and finally set the position of the caption:
```
.caption {
bottom:0px;
right:0px;
position:absolute;
}
```
[full code](http://plnkr.co/edit/ojxLHQr8WI7np8XkQkpA?p=preview)
|
Can you provide an example of the HTML that has the class `.reveal`? Add `position:absolute` to your selector rule. If you want the caption to sit flush at the bottom corner of each slide, it's best to set the position to `bottom:0px` instead of `top`.
For example:
```
<style type="text/css">
.reveal .reveal_section {
position: relative;
}
.reveal .caption {
position: absolute;
bottom: 0px;
left: 50px;
}
.reveal .caption a {
background: #FFF;
color: #666;
padding: 10px;
border: 1px dashed #999;
}
</style>
<div class="reveal">
<section class="reveal_section" data-background="latimes.png">
<small class="caption"><a href="http://a.link.com/whatever">Some Text</a></small>
<aside class="notes">some notes</aside>
</section>
</div>
```
| 11,146
|
34,624,964
|
I want to extract certain information from the output of a program, but my method does not work. I wrote a rather simple script:
```
#!/usr/bin/env python
print "first hello world."
print "second"
```
After making the script executable, I type `./test | grep "first|second"`. I expect it to show the two sentences. But it does not show anything. Why?
|
2016/01/06
|
[
"https://Stackoverflow.com/questions/34624964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5719744/"
] |
Having researched this myself just now, it looks as though the Meteor version 1.4 release will be updated to version 3.2 of MongoDB, in which "*32-bit binaries are deprecated*"
* [Github ticket for the updating of MongoDB](https://github.com/meteor/meteor/issues/6957)
* [MongoDB declaration that 3.2 has deprecated 32-bit binaries](https://docs.mongodb.com/manual/installation/#deprecation-of-32-bit-versions)
* [Meteor 1.4 announcement](https://forums.meteor.com/t/coming-soon-meteor-1-4/23260)
If upgrading your MongoDB instance is mandatory **now**, then unfortunately it looks like the only way is to manually upgrade the binaries yourself. If you do this, I would suggest you make a backup of them first, just in case it messes up.
To upgrade to version 3.2 you first need to [upgrade to version 3.0](https://docs.mongodb.com/manual/release-notes/3.0-upgrade/), then you can [upgrade to version 3.2](https://docs.mongodb.com/manual/release-notes/3.2-upgrade/)
|
I guess you didn't install the right version of Mongo if you have a 32-bit version.
Check out their installation guide:
<https://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/>
First download the right 64-bit version for Windows:
<https://www.mongodb.org/downloads#production>
and follow the instructions:
>
> Install MongoDB
>
>
> Interactive Installation 1 Install MongoDB for Windows. In Windows
> Explorer, locate the downloaded MongoDB .msi file, which typically is
> located in the default Downloads folder. Double-click the .msi file. A
> set of screens will appear to guide you through the installation
> process.
>
>
> You may specify an installation directory if you choose the “Custom”
> installation option.
>
>
> NOTE These instructions assume that you have installed MongoDB to
> C:\mongodb. MongoDB is self-contained and does not have any other
> system dependencies. You can run MongoDB from any folder you choose.
> You may install MongoDB in any folder (e.g. D:\test\mongodb).
>
>
>
| 11,147
|
8,070,186
|
Is there a way with the boto python API to specify tags when creating an instance? I'm trying to avoid having to create an instance, fetch it and then add tags. It would be much easier to have the instance either pre-configured to have certain tags or to specify tags when I execute the following command:
```
ec2server.create_instance(
ec2_conn, ami_name, security_group, instance_type_name, key_pair_name, user_data
)
```
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8070186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/301816/"
] |
This answer was accurate at the time it was written but is now out of date. The AWS APIs and libraries (such as boto3) can now take a `TagSpecifications` parameter that allows you to specify tags when running the `create_instances` call.
---
Tags cannot be made until the instance has been created. Even though the function is called create\_instance, what it's really doing is reserving an instance. Then that instance may or may not be launched. (Usually it is, but sometimes...)
So, you cannot add a tag until it's been launched. And there's no way to tell if it's been launched without polling for it. Like so:
```
import time

reservation = conn.run_instances( ... )
# NOTE: this isn't ideal, and assumes you're reserving one instance. Use a for loop, ideally.
instance = reservation.instances[0]
# Check up on its status every so often
status = instance.update()
while status == 'pending':
time.sleep(10)
status = instance.update()
if status == 'running':
instance.add_tag("Name","{{INSERT NAME}}")
else:
print('Instance status: ' + status)
return None
# Even when the status is 'running', the box may not be fully booted yet. The only way to tell if it's fully up is to try to SSH in.
if status == "running":
retry = True
while retry:
try:
# SSH into the box here. I personally use fabric
retry = False
except:
time.sleep(10)
# If we've reached this point, the instance is up and running, and we can SSH and do as we will with it. Or, there never was an instance to begin with.
```
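As noted at the top of this answer, with boto3 the tags can instead be passed at launch time via `TagSpecifications`, which removes the need for the polling loop. A sketch of the expected shape (tag key/value are placeholders; the commented call requires boto3 and AWS credentials):

```python
# Shape of the TagSpecifications parameter accepted by boto3's
# run_instances / create_instances; tags are applied at launch.
tag_spec = [{
    'ResourceType': 'instance',
    'Tags': [
        {'Key': 'Name', 'Value': 'my-server'},  # placeholder tag
    ],
}]

# ec2_resource.create_instances(ImageId='ami-...', MinCount=1, MaxCount=1,
#                               TagSpecifications=tag_spec)  # requires boto3
```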
|
This method has worked for me:
```
from time import sleep

rsvn = image.run(
... standard options ...
)
sleep(1)
for instance in rsvn.instances:
instance.add_tag('<tag name>', <tag value>)
```
| 11,148
|
41,231,632
|
I am taking a course that uses ipython notebook. When I try to download the notebook (through File -> Download as -> ipython notebook), I get a file that ends with ".ipynb.json". It doesn't open as an ipython notebook but as a .json file so something like this:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n",
"\n",
"---"
]
},
...
}
```
I've tried deleting the ".json" in the file name and it doesn't work. How can I convert this file back to something that can be opened and run as an ipython notebook? Thank you very much!
|
2016/12/19
|
[
"https://Stackoverflow.com/questions/41231632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6841599/"
] |
Are you trying to download this from Github? Especially on Google Chrome browsers, I've had issues downloading .ipynb files using right click > **Save link as...** I'm not sure if other browsers have this issue (Microsoft Edge, Mozilla Firefox, Safari, etc.).
This causes issues because the file usually doesn't download completely and becomes corrupted, so the IPython notebook you get may not run properly. One trick when trying to download .ipynb files on Github is to click it, click **Raw**, then copy everything (Ctrl + A) and paste it into a blank file (using a text editor such as Notepad, Notepad++, Vim, etc.) and save it as "whatever\_file\_name\_you\_choose.ipynb". Then you should be able to properly run this file, assuming a non-corrupted file was uploaded to Github.
A lot of people with very large, complicated IPython notebooks on Github will inevitably run into this issue when simply trying to download with **Save link as...**. Hopefully this helps!
|
I opened it with nbviewer, selected everything, and saved it as a "txt" file that I then opened in Notepad++. I then resaved it with the extension .ipynb and it opened fine in my Jupyter notebook.
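A quick way to check whether the downloaded file is an intact notebook, and to give it the right extension at the same time, is to load it as JSON and rewrite it. A minimal sketch (the filenames are placeholders):

```python
import json
import os

def fix_extension(src, dst):
    """Load the misnamed download as JSON (fails loudly if the file is
    corrupted), sanity-check it, and rewrite it with the .ipynb extension."""
    with open(src, "r", encoding="utf-8") as f:
        nb = json.load(f)
    # a valid notebook has a top-level "cells" list
    assert "cells" in nb, "not a notebook: no top-level 'cells' list"
    with open(dst, "w", encoding="utf-8") as f:
        json.dump(nb, f, indent=1)
    os.remove(src)  # drop the misnamed original

# usage (placeholder names):
# fix_extension("notebook.ipynb.json", "notebook.ipynb")
```

If `json.load` raises, the download was truncated or corrupted and re-downloading (e.g. via the **Raw** view as described above) is the fix, not renaming.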
| 11,154
|
49,737,148
|
I am trying to create a CNN model in Keras with multiple conv3d to work on cifar10 dataset. But facing the following issue:
>
> ValueError: ('The specified size contains a dimension with value <=
> 0', (-8000, 256))
>
>
>
Below is my code that I am trying to execute.
```python
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv3D, MaxPooling3D
from keras.optimizers import SGD
import os
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 20
learning_rate = 0.01
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
img_rows = x_train.shape[1]
img_cols = x_train.shape[2]
colors = x_train.shape[3]
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1,colors, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1,colors, img_rows, img_cols)
input_shape = (1, colors, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, colors, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, colors, 1)
input_shape = (img_rows, img_cols, colors, 1)
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(Conv3D(64, kernel_size=(3, 3, 3),activation='relu'))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
sgd=SGD(lr=learning_rate)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=sgd,
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
I have tried with *single* conv3d and it *worked* but the accuracy was very low. Code snippet as below:
```python
model = Sequential()
model.add(Conv3D(32, kernel_size=(3, 3, 3),activation='relu',input_shape=input_shape))
model.add(MaxPooling3D(pool_size=(2, 2, 1)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
```
|
2018/04/09
|
[
"https://Stackoverflow.com/questions/49737148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2511239/"
] |
What do you gain by putting this string in your scenario. IMO all you are doing is making the scenario harder to read!
What do you lose by putting this string in your scenario?
Well first of all you now have to have at least two things the determine the exact contents of the string, the thing in the application that creates it and the hardcoded string in your scenario. So you are repeating yourself.
In addition you've increased the cost of change. Lets say we want our strings to change from 'Breedx' to 'Breed: x'. Now you have to change every scenario that looks at the drop down. This will take much much longer than changing the code.
So what can you do instead?
Change your scenario step so that it becomes `Then I should see the patient breeds` and delegate the HOW of the presentation of the breeds and even the sort of control that the breeds are presented in to something that is outside of Cucumber e.g. a helper method called by a step definition, or perhaps even something in your codebase.
|
Try with a datatable approach. You will have to add a `DataTable` argument in the stepdefinition.
```
Then Drop-down patient_breed contains
'Breed1'
'Breed2'
...
'Breed20'
```
For a multiline approach try the below. In this you will have to add a `String` argument to the stepdefinition.
```
Then Drop-down patient_breed contains
"""
['Breed1', 'Breed2', ..., 'Breed20']
"""
```
| 11,164
|
2,460,407
|
I want to develop an anonymous chat website like <http://omgele.com>.
I know that this website is developed in Python using the `twisted matrix` framework, which makes it easy to build such a site.
But I am very comfortable in Java, have a year's experience with it, and don't know Python.
1. What should I do? Should I start learning Python to take advantage of the Twisted framework?
**OR**
2. Should I develop it in Java? If so, which framework would you suggest?
|
2010/03/17
|
[
"https://Stackoverflow.com/questions/2460407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/291241/"
] |
*I would politely ask the people at omgele.com for a copy of their code and study it to*
1. learn Python and twisted matrix
2. decide to use it or if I decide against it, to apply what I learned from them to write my own Java site
Unfortunately, the source code is not likely to be available.
Still I advise to learn from others, and if at all possible, join them to improve the code.
|
Learning Python can be an informative, interesting, and valuable process. When you really get going, you will probably find you can develop more rapidly than in Java. Twisted is a fairly well-executed framework which lets you avoid many of the pitfalls you can run into with asynchronous IO; it has top-notch implementations of quite a few protocols and a passionate, competent support community.
If you're interested in the knowledge and experience you'll gain doing so, go ahead and learn Python and use Twisted. If you feel pretty solid with your knowledge of Java you can probably read the [official tutorial](http://docs.python.org/tutorial/) a couple times then start hacking away. Twisted can take a while to click, but it's really not all that hard.
| 11,166
|
12,142,174
|
I want to call a Python script from C, passing some arguments that are needed in the script.
The script I want to use is mrsync, or [multicast remote sync](http://sourceforge.net/projects/mrsync/). I got this working from command line, by calling:
```
python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata
```
>
> -m is the list containing the target ip-addresses.
> -s is the directory that contains the files to be synced.
> -t is the directory on the target machines where the files will be put.
>
>
>
So far I managed to run a Python script without parameters, by using the following C program:
```
Py_Initialize();
FILE* file = fopen("/tmp/myfile.py", "r");
PyRun_SimpleFile(file, "/tmp/myfile.py");
Py_Finalize();
```
This works fine. However, I can't find how I can pass these arguments to the `PyRun_SimpleFile(..)` method.
|
2012/08/27
|
[
"https://Stackoverflow.com/questions/12142174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/960585/"
] |
Seems like you're looking for an answer using the python development APIs from Python.h. Here's an example for you that should work:
```
#My python script called mypy.py
import sys
if len(sys.argv) != 3:  # sys.argv[0] is the script name, so expect 3 entries
    sys.exit("Not enough args")
ca_one = str(sys.argv[1])
ca_two = str(sys.argv[2])
print "My command line args are " + ca_one + " and " + ca_two
```
And then the C code to pass these args:
```
//My code file
#include <stdio.h>
#include <python2.7/Python.h>
void main()
{
FILE* file;
int argc;
char * argv[3];
argc = 3;
argv[0] = "mypy.py";
argv[1] = "-m";
argv[2] = "/tmp/targets.list";
Py_SetProgramName(argv[0]);
Py_Initialize();
PySys_SetArgv(argc, argv);
file = fopen("mypy.py","r");
PyRun_SimpleFile(file, "mypy.py");
Py_Finalize();
return;
}
```
If you can pass the arguments into your C function this task becomes even easier:
```
void main(int argc, char *argv[])
{
FILE* file;
Py_SetProgramName(argv[0]);
Py_Initialize();
PySys_SetArgv(argc, argv);
file = fopen("mypy.py","r");
PyRun_SimpleFile(file, "mypy.py");
Py_Finalize();
return;
}
```
You can just pass those straight through. Now my solutions only used 2 command line args for the sake of time, but you can use the same concept for all 6 that you need to pass... and of course there's cleaner ways to capture the args on the python side too, but that's just the basic idea.
Hope it helps!
|
You have two options.
1. Call
```
system("python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata")
```
in your C code.
2. Actually use the API that `mrsync` (hopefully) defines. This is more flexible, but much more complicated. The first step would be to work out how you would perform the above operation as a Python function call. If `mrsync` has been written nicely, there will be a function `mrsync.sync` (say) that you call as
```
mrsync.sync("/tmp/targets.list", "/tmp/sourcedata", "/tmp/targetdata")
```
Once you've worked out how to do that, you can call the function directly from the C code using the Python API.
| 11,174
|
20,412,091
|
I am trying to make this 2 n body diagram to work in vpython, it seems that is working but something is wrong with my center or mass, or something, i don't really know. The 2 n body system is shifting and is not staying still.
```
from visual import*
mt= 1.99e30 #Kg
G=6.67e-11 #N*(m/kg)^2
#Binary System stars
StarA=sphere(pos= (3.73e11,0,0),radius=5.28e10,opacity=0.5,color=color.green,make_trail=True, interval=10)
StarB=sphere(pos=(-3.73e11,0,0),radius=4.86e10,opacity=0.5, color=color.blue,make_trail=True, interval=10)
#StarC=sphere(pos=(3.44e13,0,0),radius=2.92e11,opacity=0.5, color=color.yellow,make_trail=True, interval=10)
#mass of binary stars
StarA.mass= 1.45e30 #kg
StarB.mass= 1.37e30 #kg
#StarC.mass= 6.16e29 #kg
#initial velocities of binary system
StarA.velocity =vector(0,8181.2,0)
StarB.velocity =vector(0,-8181.2,0)
#StarC.velocity= vector(0,1289.4,0)
#Time step for each binary star
dt=1e5
StarA.pos= StarA.pos + StarA.velocity*dt
StarB.pos= StarB.pos + StarB.velocity*dt
#StarC.pos= StarC.pos + StarC.velocity*dt
#Lists
objects=[]
objects.append(StarA)
objects.append(StarB)
#objects.append(StarC)
#center of mass
Ycm=0
Xcm=0
Zcm=0
Vcmx=0
Vcmy=0
Vcmz=0
TotalMass=0
for each in objects:
TotalMass=TotalMass+each.mass
Xcm= Xcm + each.pos.x*each.mass
Ycm= Ycm + each.pos.y*each.mass
Zcm= Zcm + each.pos.z*each.mass
Vcmx= Vcmx + each.velocity.x*each.mass
Vcmy= Vcmy + each.velocity.y*each.mass
Vcmz= Vcmz + each.velocity.z*each.mass
for each in objects:
Xcm=Xcm/TotalMass
Ycm=Ycm/TotalMass
Zcm=Zcm/TotalMass
Vcmx=Vcmx/TotalMass
Vcmy=Vcmy/TotalMass
Vcmz=Vcmz/TotalMass
each.pos=each.pos-vector(Xcm,Ycm,Zcm)
each.velocity=each.velocity-vector(Vcmx,Vcmy,Vcmz)
#Code for Triple system
firstStep=0
while True:
rate(200)
for i in objects:
i.acceleration = vector(0,0,0)
for j in objects:
if i != j:
dist = j.pos - i.pos
i.acceleration = i.acceleration + G * j.mass * dist / mag(dist)**3
for i in objects:
if firstStep==0:
i.velocity = i.velocity + i.acceleration*dt
firstStep=1
else:
i.velocity = i.velocity + i.acceleration*dt
i.pos = i.pos + i.velocity*dt
```
|
2013/12/05
|
[
"https://Stackoverflow.com/questions/20412091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2839580/"
] |
Two points I'd like to make about your problem:
1. You're using Euler's method of integration. That is the "i.velocity = i.velocity + i.acceleration\*dt" part of your code. This method is not very accurate, especially with oscillatory problems like this one. That's part of the reason you're noticing the drift in your system. I'd recommend using the [RK4 method of integration.](http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) It's a bit more complicated but gives very good results, although [Verlet integration](http://en.wikipedia.org/wiki/Verlet_integration) may be more useful for you here.
2. Your system may have a net momentum in some direction. You have them starting with the same speed, but have different masses, so the net momentum is non-zero. To correct for this transform the system into a zero momentum reference frame. To do this add up the momentum (mass times velocity vector) of all the stars, then divide by the total mass of the system. This will give you a velocity vector to subtract off from all your initial velocity vectors of the stars, and so your center of mass will not move.
Hope this helps!
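Point 2 can be sketched in plain Python, using the star masses and initial velocities from the question (no vpython needed for the bookkeeping):

```python
# star data from the question: mass (kg) and initial velocity (m/s)
stars = [
    {"mass": 1.45e30, "vel": [0.0, 8181.2, 0.0]},
    {"mass": 1.37e30, "vel": [0.0, -8181.2, 0.0]},
]

total_mass = sum(s["mass"] for s in stars)

# velocity of the centre of mass = total momentum / total mass
v_com = [sum(s["mass"] * s["vel"][i] for s in stars) / total_mass
         for i in range(3)]

# subtract it ONCE from every star; the net momentum is now ~0,
# so the centre of mass no longer drifts
for s in stars:
    s["vel"] = [v - c for v, c in zip(s["vel"], v_com)]

net_p = [sum(s["mass"] * s["vel"][i] for s in stars) for i in range(3)]
```

With these numbers the unequal masses give the system a net y-momentum, so `v_com` is nonzero before the correction and the subtraction removes the drift.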
|
This is wrong:
```
for each in objects:
Xcm=Xcm/TotalMass
...
```
This division should only happen once. And then the averages should be removed from the objects, as in
```
Xcm=Xcm/TotalMass
...
for each in objects:
each.pos.x -= Xcm
...
```
| 11,175
|
28,550,511
|
I am trying to learn how to use callbacks between C and Python by way of Cython and have been looking at [this demo](https://github.com/cython/cython/tree/master/Demos/callback). I would like a Python function applied to one std::vector/numpy.array and store the results in another. I can compile and run without errors, but ultimately the vector y is
not being changed.
C++ header
```
// callback.hpp
#include<vector>
typedef double (*Callback)( void *apply, double &x );
void function( Callback callback, void *apply, vector<double> &x,
vector<double> &y );
```
C++ source
```
// callback.cpp
#include "callback.hpp"
#include <iostream>
using namespace std;
void function( Callback callback, void* apply,
vector<double> &x, vector<double> &y ) {
int n = x.size();
for(int i=0;i<n;++i) {
y[i] = callback(apply,x[i]);
std::cout << y[i] << std::endl;
  }
}
```
Cython header
```
# cy_callback.pxd
import cython
from libcpp.vector cimport vector
cdef extern from "callback.hpp":
ctypedef double (*Callback)( void *apply, double &x )
void function( Callback callback, void* apply, vector[double] &x,
vector[double] &y )
```
Cython source
```
# cy_callback.pyx
from cy_callback cimport function
from libcpp.vector cimport vector
def pyfun(f,x,y):
function( cb, <void*> f, <vector[double]&> x, <vector[double]&> y )
cdef double cb(void* f, double &x):
return (<object>f)(x)
```
I compile with fairly boilerplate setup: python setup.py build\_ext -i
```
# setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy
import os
os.environ["CC"] = "g++"
os.environ["CXX"] = "g++"
setup( name = 'callback',
ext_modules=[Extension("callback",
sources=["cy_callback.pyx","callback.cpp"],
language="c++",
include_dirs=[numpy.get_include()])],
cmdclass = {'build_ext': build_ext},
)
```
And finally test with the Python script
```
# test.py
import numpy as np
from callback import pyfun
x = np.arange(11)
y = np.zeros(11)
pyfun(lambda x:x**2,x,y)
print(y)
```
When the elements of y are set in callback.cpp, the correct values are being
printed to the screen, which means that pyfun is indeed being evaluated correctly, however, at the Python level, y remains all zeros.
Any idea what I'm doing incorrectly?
|
2015/02/16
|
[
"https://Stackoverflow.com/questions/28550511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2308288/"
] |
Thanks, Ian. Based on your suggestion, I changed the code to return a vector instead of trying to modify it in place. This works, although is admittedly not particularly efficient
callback.hpp
```
typedef double (*Callback)( void *apply, double x );
vector<double> function( Callback callback, void *apply,
const vector<double> &x );
```
callback.cpp
```
vector<double> function( Callback callback, void* apply,
const vector<double> &x ) {
int n = x.size();
vector<double> y(n);
for(int i=0;i<n;++i) {
y[i] = callback(apply,x[i]);
}
return y;
}
```
cy\_callback.pxd
```
cdef extern from "callback.hpp":
ctypedef double (*Callback)( void *apply, const double &x )
vector[double] function( Callback callback, void* apply,
vector[double] &x )
```
cy\_callback.pyx
```
from cy_callback cimport function
from libcpp.vector cimport vector
def pyfun(f,x):
return function( cb, <void*> f, <const vector[double]&> x )
cdef double cb(void* f, double x ):
return (<object>f)(x)
```
test.py
```
import numpy as np
from callback import pyfun
x = np.arange(11)
y = np.zeros(11)
def f(x):
return x*x
y = pyfun(f,x)
print(y)
```
|
There is an implicit copy of the array made when you cast it to be a vector.
There isn't currently any way to have a vector take ownership of memory that has already been allocated, so the only workaround will be to copy the values manually or by exposing `std::copy` to cython. See [How to cheaply assign C-style array to std::vector?](https://stackoverflow.com/questions/5836324/how-to-cheaply-assign-c-style-array-to-stdvector).
| 11,176
|
67,798,070
|
I am more or less following [this example](http://4/1AY0e-g4pMh6JPfkexh5nvWf9lvug3sHK98_jxAnwhsYlrB3F20Jkp350PKY) to integrate the ray tune hyperparameter library with the huggingface transformers library using my own dataset.
Here is my script:
```
import ray
from ray import tune
from ray.tune import CLIReporter
from ray.tune.examples.pbt_transformers.utils import download_data, \
build_compute_metrics_fn
from ray.tune.schedulers import PopulationBasedTraining
from transformers import glue_tasks_num_labels, AutoConfig, \
AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
def get_model():
# tokenizer = AutoTokenizer.from_pretrained(model_name, additional_special_tokens = ['[CHARACTER]'])
model = ElectraForSequenceClassification.from_pretrained('google/electra-small-discriminator', num_labels=2)
model.resize_token_embeddings(len(tokenizer))
return model
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
"electra_hp_tune",
report_to = "wandb",
learning_rate=2e-5, # config
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
load_best_model_at_end=True,
num_train_epochs=2, # config
per_device_train_batch_size=16, # config
per_device_eval_batch_size=16, # config
warmup_steps=0,
weight_decay=0.1, # config
logging_dir="./logs",
)
trainer = Trainer(
model_init=get_model,
args=training_args,
train_dataset=chunked_encoded_dataset['train'],
eval_dataset=chunked_encoded_dataset['validation'],
compute_metrics=compute_metrics
)
tune_config = {
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 32,
"num_train_epochs": tune.choice([2, 3, 4, 5])
}
scheduler = PopulationBasedTraining(
time_attr="training_iteration",
metric="eval_acc",
mode="max",
perturbation_interval=1,
hyperparam_mutations={
"weight_decay": tune.uniform(0.0, 0.3),
"learning_rate": tune.uniform(1e-5, 2.5e-5),
"per_device_train_batch_size": [16, 32, 64],
})
reporter = CLIReporter(
parameter_columns={
"weight_decay": "w_decay",
"learning_rate": "lr",
"per_device_train_batch_size": "train_bs/gpu",
"num_train_epochs": "num_epochs"
},
metric_columns=[
"eval_f1", "eval_loss", "epoch", "training_iteration"
])
from ray.tune.integration.wandb import WandbLogger
trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
backend="ray",
n_trials=10,
scheduler=scheduler,
keep_checkpoints_num=1,
checkpoint_score_attr="training_iteration",
progress_reporter=reporter,
name="tune_transformer_gr")
```
The last function call (to trainer.hyperparameter\_search) is when the error is raised. The error message is:
>
> AttributeError: module 'pickle' has no attribute 'PickleBuffer'
>
>
>
And here is the full stack trace:
>
>
>
> ---
>
>
> AttributeError Traceback (most recent call
> last)
>
>
> in ()
> 8 checkpoint\_score\_attr="training\_iteration",
> 9 progress\_reporter=reporter,
> ---> 10 name="tune\_transformer\_gr")
>
>
> 14 frames
>
>
> /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in
> hyperparameter\_search(self, hp\_space, compute\_objective, n\_trials,
> direction, backend, hp\_name, \*\*kwargs) 1666 1667
>
> run\_hp\_search = run\_hp\_search\_optuna if backend ==
> HPSearchBackend.OPTUNA else run\_hp\_search\_ray
> -> 1668 best\_run = run\_hp\_search(self, n\_trials, direction, \*\*kwargs) 1669 1670 self.hp\_search\_backend = None
>
>
> /usr/local/lib/python3.7/dist-packages/transformers/integrations.py in
> run\_hp\_search\_ray(trainer, n\_trials, direction, \*\*kwargs)
> 231
> 232 analysis = ray.tune.run(
> --> 233 ray.tune.with\_parameters(\_objective, local\_trainer=trainer),
> 234 config=trainer.hp\_space(None),
> 235 num\_samples=n\_trials,
>
>
> /usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py in
> with\_parameters(trainable, \*\*kwargs)
> 294 prefix = f"{str(trainable)}\_"
> 295 for k, v in kwargs.items():
> --> 296 parameter\_registry.put(prefix + k, v)
> 297
> 298 trainable\_name = getattr(trainable, "**name**", "tune\_with\_parameters")
>
>
> /usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in
> put(self, k, v)
> 160 self.to\_flush[k] = v
> 161 if ray.is\_initialized():
> --> 162 self.flush()
> 163
> 164 def get(self, k):
>
>
> /usr/local/lib/python3.7/dist-packages/ray/tune/registry.py in
> flush(self)
> 169 def flush(self):
> 170 for k, v in self.to\_flush.items():
> --> 171 self.references[k] = ray.put(v)
> 172 self.to\_flush.clear()
> 173
>
>
> /usr/local/lib/python3.7/dist-packages/ray/\_private/client\_mode\_hook.py
> in wrapper(\*args, \*\*kwargs)
> 45 if client\_mode\_should\_convert():
> 46 return getattr(ray, func.**name**)(\*args, \*\*kwargs)
> ---> 47 return func(\*args, \*\*kwargs)
> 48
> 49 return wrapper
>
>
> /usr/local/lib/python3.7/dist-packages/ray/worker.py in put(value)
>
> 1512 with profiling.profile("ray.put"): 1513 try:
> -> 1514 object\_ref = worker.put\_object(value) 1515 except ObjectStoreFullError: 1516 logger.info(
>
>
> /usr/local/lib/python3.7/dist-packages/ray/worker.py in
> put\_object(self, value, object\_ref)
> 259 "inserting with an ObjectRef")
> 260
> --> 261 serialized\_value = self.get\_serialization\_context().serialize(value)
> 262 # This *must* be the first place that we construct this python
> 263 # ObjectRef because an entry with 0 local references is created when
>
>
> /usr/local/lib/python3.7/dist-packages/ray/serialization.py in
> serialize(self, value)
> 322 return RawSerializedObject(value)
> 323 else:
> --> 324 return self.\_serialize\_to\_msgpack(value)
>
>
> /usr/local/lib/python3.7/dist-packages/ray/serialization.py in
> \_serialize\_to\_msgpack(self, value)
> 302 metadata = ray\_constants.OBJECT\_METADATA\_TYPE\_PYTHON
> 303 pickle5\_serialized\_object =
>
> --> 304 self.\_serialize\_to\_pickle5(metadata, python\_objects)
> 305 else:
> 306 pickle5\_serialized\_object = None
>
>
> /usr/local/lib/python3.7/dist-packages/ray/serialization.py in
> \_serialize\_to\_pickle5(self, metadata, value)
> 262 except Exception as e:
> 263 self.get\_and\_clear\_contained\_object\_refs()
> --> 264 raise e
> 265 finally:
> 266 self.set\_out\_of\_band\_serialization()
>
>
> /usr/local/lib/python3.7/dist-packages/ray/serialization.py in
> \_serialize\_to\_pickle5(self, metadata, value)
> 259 self.set\_in\_band\_serialization()
> 260 inband = pickle.dumps(
> --> 261 value, protocol=5, buffer\_callback=writer.buffer\_callback)
> 262 except Exception as e:
> 263 self.get\_and\_clear\_contained\_object\_refs()
>
>
> /usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle\_fast.py
> in dumps(obj, protocol, buffer\_callback)
> 71 file, protocol=protocol, buffer\_callback=buffer\_callback
> 72 )
> ---> 73 cp.dump(obj)
> 74 return file.getvalue()
> 75
>
>
> /usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle\_fast.py
> in dump(self, obj)
> 578 def dump(self, obj):
> 579 try:
> --> 580 return Pickler.dump(self, obj)
> 581 except RuntimeError as e:
> 582 if "recursion" in e.args[0]:
>
>
> /usr/local/lib/python3.7/dist-packages/pyarrow/io.pxi in
> pyarrow.lib.Buffer.**reduce\_ex**()
>
>
> AttributeError: module 'pickle' has no attribute 'PickleBuffer'
>
>
>
My environment set-up:
* Am using Google Colab
* Platform: Linux-5.4.109+-x86\_64-with-Ubuntu-18.04-bionic
* Python version: 3.7.10
* Transformers version: 4.6.1
* ray version: 1.3.0
What I have tried:
* Updating pickle
* Installed and imported pickle5 as pickle
* Made sure that I did not have a python file with the name of 'pickle' in my immediate directory
Where is this bug coming from and how can I resolve it?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67798070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7254514/"
] |
I had the same error when trying to use pickle.dump(), for me it worked to downgrade pickle5 from version 0.0.11 to 0.0.10
|
Not a "real" solution but at least a workaround. For me this issue was occurring on Python 3.7. Switching to Python 3.8 solved the issue.
| 11,177
|
66,559,058
|
I would like to convert a string `temp.filename.txt` to `temp\.filename\.txt` using python
Tried string replace method but the output is not as expected
```
filename = "temp.filename.txt"
filename.replace(".", "\.")
output: 'temp\\.filename\\.txt'
```
|
2021/03/10
|
[
"https://Stackoverflow.com/questions/66559058",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3572886/"
] |
`\` is a special character: in the string's *representation* it is shown as `\\`, but this doesn't mean your string actually contains two `\` characters.
(As @saipy suggested, if you *print* your string, only single `\` characters will show up...)
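A quick sketch that shows the representation/content difference (using a raw string for the replacement, which avoids escape-sequence warnings in newer Python versions):

```python
filename = "temp.filename.txt"
escaped = filename.replace(".", r"\.")

print(repr(escaped))   # representation: 'temp\\.filename\\.txt'
print(escaped)         # actual content: temp\.filename\.txt
print(len(escaped))    # 19: each of the two dots became exactly two characters
```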
|
```
filename = "temp.filename.txt"
result = filename.replace(".", r"\.")
print(result)
```
[I stored the result in a variable (`result`) and it works fine; see the screenshot](https://i.stack.imgur.com/klc8N.png)
| 11,179
|
57,379,888
|
The Source Code
---------------
I have a bit of code requiring that I call a property setter to test wether or not locking functionaliy of a class is working (some functions of the class are `async`, requiring that a padlock boolean be set during their execution). The setter has been written to raise a `RuntimeError` if the lock has been set for the instance.
Here is the code:
```
@filename.setter
def filename(self, value):
if not self.__padlock:
self.__filename = value
else:
self.__events.on_error("data_store is locked. you should be awaiting safe_unlock if you wish to "
"change the source.")
```
As you can see here, if `self.__padlock` is `True`, a `RuntimeError` is raised. The issue arises when attempting to assert the setter with python `unittest`.
The Problem
===========
It appears that `unittest` lacks functionality needed to assert wether or not a property setter raises an exception.
The obvious attempt to use `assertRaises` doesn't work:
```
# Non-working concept
self.assertRaises(RuntimeError, my_object.filename, "testfile.txt")
```
The Question
============
How does one assert that a class's property setter will raise a given exception in a python `unittest.TestCase` method?
|
2019/08/06
|
[
"https://Stackoverflow.com/questions/57379888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7423333/"
] |
You need to actually *invoke* the setter, via an assignment. This is simple to do, as long as you use `assertRaises` as a context manager.
```
with self.assertRaises(RuntimeError):
my_object.filename = "testfile.txt"
```
---
If you couldn't do that, you would have to fall back to an explicit `try` statement (which gets tricky, because you need to handle both "no exception" and "exception other than `RuntimeError`" separately).
```
try:
my_object.filename = "testfile.txt"
except RuntimeError:
pass
except Exception:
raise AssertionError("something other than RuntimeError")
else:
raise AssertionError("no RuntimeError")
```
or (more) explicitly invoke the setter:
```
self.assertRaises(RuntimeError, setattr, myobject, 'filename', 'testfile.txt')
```
or worse, *explicitly* invoke the setter:
```
self.assertRaises(RuntimeError, type(myobject).filename.fset, myobject, 'testfile.txt')
```
In other words, three cheers for context managers!
|
You can use the `setattr` method like so:
```
self.assertRaises(ValueError, setattr, p, "name", None)
```
In the above example, we will try to set `p.name` equal to `None` and check if there is a `ValueError` raised.
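Putting both answers together, here is a minimal self-contained sketch; `DataStore` is a stand-in for the class in the question, with the lock hard-wired on:

```python
import unittest

class DataStore:
    """Stand-in for the question's class: the padlock is always set."""
    def __init__(self):
        self._padlock = True
        self._filename = None

    @property
    def filename(self):
        return self._filename

    @filename.setter
    def filename(self, value):
        if self._padlock:
            raise RuntimeError("data_store is locked")
        self._filename = value

class TestDataStore(unittest.TestCase):
    def test_setter_raises_when_locked(self):
        store = DataStore()
        with self.assertRaises(RuntimeError):
            store.filename = "testfile.txt"   # assignment invokes the setter

    def test_setattr_form(self):
        store = DataStore()
        self.assertRaises(RuntimeError, setattr, store, "filename", "testfile.txt")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Both tests exercise the same setter; the context-manager form is usually the most readable.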
| 11,180
|
13,212,300
|
I don't know if this question has duplicates, but I haven't found one yet.
When using Python you can create a GUI quickly, but sometimes you cannot find a method to do what you want. For example, I have the following problem:
Let's suppose that there is a canvas called K with a rectangle with ID=1 (canvas item id, not memory id) in it.
If I want to redraw the item, I can delete it and then redraw it with new settings.
```
K.delete(1)
K.create_rectangle(x1,y1,x2,y2,options...)
```
Here is the problem: the object id changes. How can I redraw, move, or resize the rectangle (or simply change it) via a method, without changing its id? For example:
```
K.foo(1,options....)
```
If there isn't such a method, then I should create a list with the canvas object ids, but that is neither elegant nor fast. For example:
```
ItemIds=[None,None,etc...]
ItemIds[0]=K.create_rectangle(old options...)
K.delete(ItemIds[0])
ItemIds[0]=K.create_rectangle(new options...)
```
|
2012/11/03
|
[
"https://Stackoverflow.com/questions/13212300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1569222/"
] |
You can use [`Canvas.itemconfig`](https://web.archive.org/web/20201108093851id_/http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.itemconfig-method):
```
item = K.create_rectangle(x1,y1,x2,y2,options...)
K.itemconfig(item,options)
```
To move the item, you can use [`Canvas.move`](https://web.archive.org/web/20201108093851id_/http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.move-method)
---
```
import Tkinter as tk
root = tk.Tk()
canvas = tk.Canvas(root)
canvas.pack()
item = canvas.create_rectangle(50, 25, 150, 75, fill="blue")
def callback():
canvas.itemconfig(item,fill='red')
button = tk.Button(root,text='Push me!',command=callback)
button.pack()
root.mainloop()
```
|
I searched around and found the perfect Tkinter method for resizing: `canvas.coords()` does the trick. Just feed it your new coordinates and it's "good to go" (Python 3.4).
PS: don't forget the first param is the item id.
| 11,181
|
4,156,464
|
I need to parse a series of short strings that are comprised of 3 parts: a question and 2 possible answers. The string will follow a consistent format:
This is the question "answer\_option\_1 is in quotes" "answer\_option\_2 is in quotes"
I need to identify the question part and the two possible answer choices that are in single or double quotes.
Ex.:
What color is the sky today? "blue" or "grey"
Who will win the game 'Michigan' 'Ohio State'
How do I do this in python?
|
2010/11/11
|
[
"https://Stackoverflow.com/questions/4156464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504732/"
] |
```
>>> import re
>>> s = "Who will win the game 'Michigan' 'Ohio State'"
>>> re.match(r'(.+)\s+([\'"])(.+?)\2\s+([\'"])(.+?)\4', s).groups()
('Who will win the game', "'", 'Michigan', "'", 'Ohio State')
```
|
One possibility is to use a regex:

```
import re

robj = re.compile(r'^(.*) [\"\'](.*)[\"\'].*[\"\'](.*)[\"\']')

str1 = "Who will win the game 'Michigan' 'Ohio State'"
r1 = robj.match(str1)
print(r1.groups())

str2 = 'What color is the sky today? "blue" or "grey"'
r2 = robj.match(str2)
print(r2.groups())
```
Output:
```
('Who will win the game', 'Michigan', 'Ohio State')
('What color is the sky today?', 'blue', 'grey')
```
| 11,182
|
41,535,881
|
I'm new to Conda package management and I want to get the latest version of Python to use f-strings in my code. Currently my version is (`python -V`):
```
Python 3.5.2 :: Anaconda 4.2.0 (x86_64)
```
How would I upgrade to Python 3.6?
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41535881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4343241/"
] |
I found [this page](https://support.anaconda.com/customer/en/portal/articles/2797011-updating-anaconda-to-python-3-6) with detailed instructions for upgrading Anaconda to a newer major version of Python (from Anaconda 4.0+). First,
```
conda update conda
conda remove argcomplete conda-manager
```
I also had to `conda remove` some packages not on the official list:
* backports\_abc
* beautiful-soup
* blaze-core
Depending on the packages installed on your system, you may get additional `UnsatisfiableError` errors - simply add those packages to the remove list. Next, install the new version of Python:
```
conda install python==3.6
```
which takes a while; afterwards a message indicated to run `conda install anaconda-client`, so I did
```
conda install anaconda-client
```
which said it's already there. Finally, following the directions,
```
conda update anaconda
```
I did this in the Windows 10 command prompt, but things should be similar in Mac OS X.
|
The only solution that worked for me was to create a new conda env with the name you want (unfortunately, you will delete the old one to keep the name). Then create a new env with the new Python version and re-run your `install.sh` script with the conda/pip installs (or the YAML file or whatever you use to keep your requirements):
```
conda remove --name original_name --all
conda create --name original_name python=3.8
sh install.sh # or whatever you usually do to install dependencies
```
doing `conda install python=3.8` doesn't work for me. Also, why do you want 3.6? Move forward with the world ;)
---
Note: the approach below doesn't work
=====================================
If you want to update the conda version of your previous env what you can also do is the following (more complicated than it should be because [you cannot rename envs in conda](https://stackoverflow.com/questions/42231764/how-can-i-rename-a-conda-environment)):
1. create a temporary new location for your current env:
```
conda create --name temporary_env_name --clone original_env_name
```
2. delete the original env (so that the new env can have that name):
```
conda deactivate
conda remove --name original_env_name --all # or its alias: `conda env remove --name original_env_name`
```
3. then create the new empty env with the python version you want and clone the original env:
```
conda create --name original_env_name python=3.8 --clone temporary_env_name
```
| 11,186
|
42,549,482
|
Here is the pseudo code:
```
class Foo (list):
def methods...
foo=Foo()
foo.readin()
rule='....'
bar=[x for x in foo if x.match(rule)]
```
Here, `bar` is a list; however, I'd like it to be an instance of `Foo`. The only way I know is to create a for loop and append items one by one:
```
bar=Foo()
for item in foo:
if item.match(rule):
bar.append(item)
```
So I'd like to know if there is a more concise or more Pythonic way to do this?
|
2017/03/02
|
[
"https://Stackoverflow.com/questions/42549482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4250879/"
] |
You can pass in a [generator expression](https://docs.python.org/3/tutorial/classes.html#generator-expressions) to the `Foo()` call:
```
bar = Foo(x for x in foo if x.match(rule))
```
(When passing a generator expression to a call, where it is the only argument, you can drop the parentheses you normally would put around a generator expression).
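A minimal self-contained sketch (with a hypothetical numeric filter standing in for `x.match(rule)`) showing that the result really is a `Foo`, not a plain list:

```python
class Foo(list):
    """A list subclass; its own methods would go here."""

# Build a Foo from a generator expression, just like the bar = Foo(...) call above.
foo = Foo([1, 2, 3, 4])
bar = Foo(x for x in foo if x > 2)  # hypothetical filter in place of x.match(rule)

print(type(bar).__name__)  # → Foo
print(bar)                 # → [3, 4]
```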
|
Another answer here was deleted by its author; I post the approach for completeness (if the original answer is restored, I will delete this one).

Another way to do this is to use the built-in `filter` function:

```
bar = filter(lambda x: x.match(rule), foo)
```

I guess the original answer was deleted because using `filter` is generally discouraged. I had done some research before asking this question, and I had tried to use `filter` myself but never figured out how to use it correctly, so that answer taught me how to read the manual properly. If the original answerer sees this post: thank you, I appreciate your help.

Update: as Martijn said in the comments, this is not a valid answer (`filter` returns a plain list/iterator, not a `Foo`). I'll keep this answer because the discussion is useful, but it is not a valid way to solve my problem.
| 11,196
|
51,011,204
|
I have a 1D array X with both +/- elements. I'm isolating their signs as follows:
```
idxN, idxP = X<0, X>=0
```
Now I want to create an array whose value depends on the sign of X. I was trying to compute this but it gives the captioned syntax error.
```
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
```
Is the LHS assignment the culprit?
Thanks.
[Edit] The full code is as follows:
```
y = np.zeros(X.shape)
idxN, idxP = X<0, X>=0
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
return y
```
The traceback is:
```
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
File "<ipython-input-63-9a4488f04660>", line 1
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
^
SyntaxError: can't assign to function call
```
|
2018/06/24
|
[
"https://Stackoverflow.com/questions/51011204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9473446/"
] |
In some programming languages, like Matlab, indexing uses parentheses. In Python, indexing uses square brackets.
If I have a list, `mylist = [1,2,3,4]`, I reference elements like this:
```
> mylist[1]
2
```
When you write `y(idxN)`, Python thinks you are trying to pass `idxN` as an argument to a function named `y`.
|
I got it to work like this:
```
y = np.zeros(X.shape)
idxN, idxP = X<0, X>=0
yn,yp,xn,xp = y[idxN], y[idxP],X[idxN],X[idxP]
yn = [math.log(1+np.exp(x)) for x in xn]
yp = xp+[math.log(np.exp(-x)+1) for x in xp];
```
If there is a better way, please let me know. Thanks.
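Note that in the snippet above, assigning to `yn`/`yp` creates new lists and leaves `y` untouched; boolean-mask assignment (`y[idxN] = ...`) writes back into `y`. A fully vectorized sketch of the numerically stable piecewise computation:

```python
import numpy as np

def stable_softplus(X):
    # log(1 + exp(x)), split by sign so that exp() never overflows
    y = np.zeros(X.shape)
    idxN, idxP = X < 0, X >= 0
    y[idxN] = np.log1p(np.exp(X[idxN]))             # x < 0: exp(x) is small, safe
    y[idxP] = X[idxP] + np.log1p(np.exp(-X[idxP]))  # x >= 0: exp(-x) is small, safe
    return y

print(stable_softplus(np.array([-1.0, 0.0, 1.0])))
```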
| 11,197
|
13,283,628
|
I have created a Mezzanine project named mezzanine-heroku-test.
I created a Procfile with the following content:
**web: python manage.py run\_gunicorn -b "0.0.0.0:$PORT" -w 3**
Next, I access the website to test and I receive the error: Internal Server Error.
Could you please help me deploy Mezzanine on Heroku step by step, or offer some suggestions?
Thank you so much.
|
2012/11/08
|
[
"https://Stackoverflow.com/questions/13283628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/875781/"
] |
Two possibilities:
1. There is a table within another schema ("database" in mysql terminology) which has a FK reference
2. The innodb internal data dictionary is out of sync with the mysql one.
You can see which table it was (one of them, anyway) by doing a "SHOW ENGINE INNODB STATUS" after the drop fails.
If it turns out to be the latter case, I'd dump and restore the whole server if you can.
MySQL 5.1 and above will give you the name of the table with the FK in the error message.
Also try this:
Please refer to this question on [stackoverflow](https://stackoverflow.com/questions/3334619/cannot-delete-or-update-a-parent-row-a-foreign-key-constraint-fails)
Disable foreign key checking:

```
SET FOREIGN_KEY_CHECKS=0;
```

Do make sure to `SET FOREIGN_KEY_CHECKS=1;` afterwards.
|
Instead of using the default InnoDB storage engine, you can easily configure Django to use MyISAM. In the latter case, it wouldn't restrict you from performing various operations because of the relationships. Both have their positive and negative points.
| 11,198
|
39,961,414
|
I am new to learning regex in Python and I'm wondering how to use regex to store the integers (positive and negative) I want into a list.
For example
This is the data in a list.
```
data =
[u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=-5,B=5)',
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=5,Y=5)',
u'\x1b[0m[\x1b[1m\x1b[10m\xbb\x1b[0m\x1b[36m]\x1b[0m : ']
```
How do I extract the integer values of A and B (negative and positive) and store them in variables so that I can work with the numbers?
I tried something like this, but the list is empty:
```
for line in data[0]:
pattern = re.compile("([A-Z]=(-?\d+?),[A-Z]=(-?\d+?))")
store = pattern.findall(line)
print store
```
Thank you, I appreciate it.
|
2016/10/10
|
[
"https://Stackoverflow.com/questions/39961414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6949864/"
] |
When you use `$(".form-control")`, jQuery selects all `.form-control` elements. You need to select the target element using `this` inside the event handler, then use [`.prev()`](https://api.jquery.com/prev/) to select the previous element.
```js
$(".show").mousedown(function(){
$(this).prev().attr('type','text');
}).mouseup(function(){
$(this).prev().attr('type','password');
}).mouseout(function(){
$(this).prev().attr('type','password');
});
```
|
Just target the previous input instead of all inputs with the given class:
```js
$(".form-control").on("keyup", function() {
if ($(this).val())
$(this).next(".show").show();
else
$(this).next(".show").hide();
}).trigger('keyup');
$(".show").mousedown(function() {
$(this).prev(".form-control").prop('type', 'text');
}).mouseup(function() {
$(this).prev(".form-control").prop('type', 'password');
}).mouseout(function() {
$(this).prev(".form-control").prop('type', 'password');
});
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<form name="resetting_form" method="post" action="">
<div class="form-group has-feedback">
<input type="password" id="password_first" required="required" placeholder="New Password" class="form-control">
<span class="show">show</span>
</div>
<div class="form-group has-feedback">
<input type="password" id="password_second" required="required" placeholder="Repeat Password" class="form-control">
<span class="show">show</span>
</div>
<input type="submit" class="btn btn-primary" value="Submit">
</form>
```
| 11,199
|
69,516,584
|
I am running `npm install` on my project but it fails with the error:
>
> Build failed with error code: 1
>
>
>
Part of the log is posted below.
```
0 verbose cli [
0 verbose cli 'C:\\Program Files\\nodejs\\node.exe',
0 verbose cli 'C:\\Users\\vined\\AppData\\Roaming\\npm\\node_modules\\npm\\bin\\npm-cli.js',
0 verbose cli 'install'
0 verbose cli ]
1 info using npm@8.0.0
2 info using node@v16.9.1
3 timing npm:load:whichnode Completed in 1ms
4 timing config:load:defaults Completed in 5ms
5 timing config:load:file:C:\Users\vined\AppData\Roaming\npm\node_modules\npm\npmrc Completed in 7ms
6 timing config:load:builtin Completed in 7ms
7 timing config:load:cli Completed in 6ms
8 timing config:load:env Completed in 1ms
9 timing config:load:file:E:\plugin-applepay-master\.npmrc Completed in 1ms
10 timing config:load:project Completed in 9ms
11 timing config:load:file:C:\Users\vined\.npmrc Completed in 2ms
12 timing config:load:user Completed in 3ms
13 timing config:load:file:C:\Users\vined\AppData\Roaming\npm\etc\npmrc Completed in 0ms
14 timing config:load:global Completed in 1ms
15 timing config:load:validate Completed in 5ms
16 timing config:load:credentials Completed in 3ms
17 timing config:load:setEnvs Completed in 3ms
18 timing config:load Completed in 43ms
19 timing npm:load:configload Completed in 43ms
20 timing npm:load:setTitle Completed in 6ms
21 timing npm:load:setupLog Completed in 1ms
22 timing config:load:flatten Completed in 11ms
23 timing npm:load:cleanupLog Completed in 8ms
24 timing npm:load:configScope Completed in 0ms
25 timing npm:load:projectScope Completed in 2ms
26 timing npm:load Completed in 76ms
27 timing arborist:ctor Completed in 3ms
28 timing idealTree:init Completed in 3609ms
29 warn old lockfile The package-lock.json file was created with an old version of npm,
29 warn old lockfile so supplemental metadata must be fetched from the registry.
29 warn old lockfile
29 warn old lockfile This is a one-time fix-up, please be patient...
30 silly inflate node_modules/@tridnguyen/config
31 silly inflate node_modules/@webassemblyjs/ast
32 http fetch GET 200 https://registry.npmjs.org/@webassemblyjs%2Fast 891ms (cache revalidated)
33 silly inflate node_modules/@webassemblyjs/floating-point-hex-parser
34 http fetch GET 200 https://registry.npmjs.org/@webassemblyjs%2Ffloating-point-hex-parser 153ms (cache revalidated)
35 silly inflate node_modules/@webassemblyjs/helper-api-error
36 http fetch GET 200 https://registry.npmjs.org/@webassemblyjs%2Fhelper-api-error 155ms (cache revalidated)
37 silly inflate node_modules/@webassemblyjs/helper-buffer
38 http fetch GET 200 https://registry.npmjs.org/@webassemblyjs%2Fhelper-buffer 184ms (cache revalidated)
39 silly inflate node_modules/@webassemblyjs/helper-code-frame
40 http fetch GET 200 https://registry.npmjs.org/@webassemblyjs%2Fhelper-code-frame 115ms (cache revalidated)
3313 timing reifyNode:node_modules/events Completed in 56407ms
3314 timing reifyNode:node_modules/optimist Completed in 56748ms
3315 timing reifyNode:node_modules/es6-map Completed in 56420ms
3316 timing reifyNode:node_modules/argparse Completed in 56262ms
3317 timing reifyNode:node_modules/dwupload/node_modules/yargs Completed in 56467ms
3318 timing reifyNode:node_modules/iconv-lite Completed in 56651ms
3319 timing reifyNode:node_modules/stylelint/node_modules/table Completed in 57303ms
3320 timing reifyNode:node_modules/public-encrypt Completed in 57110ms
3321 timing reifyNode:node_modules/source-map Completed in 57327ms
3322 timing reifyNode:node_modules/cacache Completed in 56598ms
3323 timing reifyNode:node_modules/uglify-js Completed in 57809ms
3324 timing reifyNode:node_modules/terser/node_modules/source-map Completed in 57789ms
3325 timing reifyNode:node_modules/stylelint/node_modules/source-map Completed in 57764ms
3326 timing reifyNode:node_modules/terser-webpack-plugin/node_modules/source-map Completed in 57799ms
3327 timing reifyNode:node_modules/uglify-js/node_modules/source-map Completed in 57833ms
3328 timing reifyNode:node_modules/postcss-reporter/node_modules/source-map Completed in 57525ms
3329 timing reifyNode:node_modules/postcss-sass/node_modules/source-map Completed in 57540ms
3330 timing reifyNode:node_modules/webpack-sources/node_modules/source-map Completed in 58016ms
3331 timing reifyNode:node_modules/postcss-loader/node_modules/source-map Completed in 57507ms
3332 timing reifyNode:node_modules/dom-serializer/node_modules/entities Completed in 57096ms
3333 timing reifyNode:node_modules/postcss-modules-values/node_modules/source-map Completed in 57555ms
3334 timing reifyNode:node_modules/sugarss/node_modules/source-map Completed in 57841ms
3335 timing reifyNode:node_modules/postcss-modules-scope/node_modules/source-map Completed in 57565ms
3336 timing reifyNode:node_modules/postcss-modules-extract-imports/node_modules/source-map Completed in 57556ms
3337 timing reifyNode:node_modules/postcss-safe-parser/node_modules/source-map Completed in 57598ms
3338 timing reifyNode:node_modules/postcss-scss/node_modules/source-map Completed in 57615ms
3339 timing reifyNode:node_modules/postcss-modules-local-by-default/node_modules/source-map Completed in 57573ms
3340 timing reifyNode:node_modules/css/node_modules/source-map Completed in 57087ms
3341 timing reifyNode:node_modules/handlebars/node_modules/source-map Completed in 57240ms
3342 timing reifyNode:node_modules/icss-utils/node_modules/source-map Completed in 57289ms
3343 timing reifyNode:node_modules/source-map-support/node_modules/source-map Completed in 57832ms
3344 timing reifyNode:node_modules/stylelint-scss/node_modules/postcss-selector-parser Completed in 58076ms
3345 timing reifyNode:node_modules/stylelint/node_modules/postcss-selector-parser Completed in 58064ms
3346 timing reifyNode:node_modules/schema-utils/node_modules/ajv-keywords Completed in 58034ms
3347 timing reifyNode:node_modules/stylelint/node_modules/ajv-keywords Completed in 58129ms
3348 timing reifyNode:node_modules/webpack/node_modules/ajv-keywords Completed in 58363ms
3349 timing reifyNode:node_modules/terser-webpack-plugin/node_modules/ajv-keywords Completed in 58232ms
3350 timing reifyNode:node_modules/domhandler Completed in 57567ms
3351 timing reifyNode:node_modules/json-schema Completed in 58264ms
3352 timing reifyNode:node_modules/stream-http Completed in 59326ms
3353 timing reifyNode:node_modules/diff Completed in 59764ms
3354 timing reifyNode:node_modules/pako Completed in 60450ms
3355 timing reifyNode:node_modules/es6-set Completed in 60124ms
3356 timing reifyNode:node_modules/@webassemblyjs/ast Completed in 59888ms
3357 timing reifyNode:node_modules/sshpk Completed in 60786ms
3358 timing reifyNode:node_modules/js-yaml Completed in 60636ms
3359 timing reifyNode:node_modules/terser Completed in 61207ms
3360 timing reifyNode:node_modules/cosmiconfig/node_modules/js-yaml Completed in 60646ms
3361 timing reifyNode:node_modules/stylelint/node_modules/js-yaml Completed in 61401ms
3362 timing reifyNode:node_modules/enhanced-resolve Completed in 60729ms
3363 timing reifyNode:node_modules/yargs Completed in 61701ms
3364 timing reifyNode:node_modules/shelljs Completed in 61498ms
3365 timing reifyNode:node_modules/escodegen/node_modules/source-map Completed in 61116ms
3366 timing reifyNode:node_modules/jszip Completed in 61402ms
3367 timing reifyNode:node_modules/postcss Completed in 61593ms
3368 timing reifyNode:node_modules/sgmf-scripts/node_modules/shelljs Completed in 61827ms
3369 timing reifyNode:node_modules/remark-stringify Completed in 61911ms
3370 timing reifyNode:node_modules/chai Completed in 61596ms
3371 timing reifyNode:node_modules/postcss-modules-values/node_modules/postcss Completed in 62197ms
3372 timing reifyNode:node_modules/postcss-scss/node_modules/postcss Completed in 62232ms
3373 timing reifyNode:node_modules/postcss-safe-parser/node_modules/postcss Completed in 62227ms
3374 timing reifyNode:node_modules/postcss-modules-scope/node_modules/postcss Completed in 62210ms
3375 timing reifyNode:node_modules/stylelint/node_modules/postcss Completed in 62474ms
3376 timing reifyNode:node_modules/postcss-modules-local-by-default/node_modules/postcss Completed in 62218ms
3377 timing reifyNode:node_modules/icss-utils/node_modules/postcss Completed in 61909ms
3378 timing reifyNode:node_modules/sugarss/node_modules/postcss Completed in 62507ms
3379 timing reifyNode:node_modules/postcss-modules-extract-imports/node_modules/postcss Completed in 62221ms
3380 timing reifyNode:node_modules/postcss-loader/node_modules/postcss Completed in 62209ms
3381 timing reifyNode:node_modules/postcss-sass/node_modules/postcss Completed in 62271ms
3382 timing reifyNode:node_modules/postcss-reporter/node_modules/postcss Completed in 62264ms
3383 warn deprecated buffer@4.9.1: This version of 'buffer' is out-of-date. You must update to v4.9.2 or newer
3384 timing reifyNode:node_modules/node-libs-browser/node_modules/buffer Completed in 62235ms
3385 timing reifyNode:node_modules/remark-parse Completed in 64660ms
3386 timing reifyNode:node_modules/nan Completed in 64578ms
3387 timing reifyNode:node_modules/bluebird Completed in 64116ms
3388 timing reifyNode:node_modules/sinon Completed in 64900ms
3389 timing reifyNode:node_modules/node-notifier Completed in 64751ms
3390 timing reifyNode:node_modules/proxyquire Completed in 64949ms
3391 timing reifyNode:node_modules/coa Completed in 64496ms
3392 timing reifyNode:node_modules/eslint-plugin-import Completed in 64649ms
3393 timing reifyNode:node_modules/cacache/node_modules/bluebird Completed in 64551ms
3394 timing reifyNode:node_modules/acorn-jsx/node_modules/acorn Completed in 64749ms
3395 timing reifyNode:node_modules/postcss-less Completed in 65491ms
3396 timing reifyNode:node_modules/csso Completed in 65141ms
3397 timing reifyNode:node_modules/table Completed in 66025ms
3398 timing reifyNode:node_modules/crc Completed in 65252ms
3399 timing reifyNode:node_modules/escope Completed in 65347ms
3400 timing reifyNode:node_modules/mocha Completed in 65706ms
3401 warn deprecated istanbul@0.4.5: This module is no longer maintained, try this instead:
3401 warn deprecated npm i nyc
3401 warn deprecated Visit https://istanbul.js.org/integrations for other alternatives.
3402 timing reifyNode:node_modules/istanbul Completed in 65794ms
3403 timing reifyNode:node_modules/autoprefixer Completed in 65865ms
3404 warn deprecated svgo@0.7.2: This SVGO version is no longer supported. Upgrade to v2.x.x.
3405 timing reifyNode:node_modules/svgo Completed in 66839ms
3406 timing reifyNode:node_modules/uri-js Completed in 66975ms
3407 timing reifyNode:node_modules/sgmf-scripts Completed in 67253ms
3408 timing reifyNode:node_modules/stylelint/node_modules/autoprefixer Completed in 67610ms
3409 timing reifyNode:node_modules/istanbul/node_modules/resolve Completed in 68223ms
3410 timing reifyNode:node_modules/schema-utils/node_modules/ajv Completed in 70277ms
3411 timing reifyNode:node_modules/webpack/node_modules/ajv Completed in 70600ms
3412 timing reifyNode:node_modules/terser-webpack-plugin/node_modules/ajv Completed in 70472ms
3413 timing reifyNode:node_modules/har-validator/node_modules/ajv Completed in 69904ms
3414 timing reifyNode:node_modules/stylelint/node_modules/ajv Completed in 70477ms
3415 timing reifyNode:node_modules/resolve Completed in 70397ms
3416 warn deprecated tar@2.2.2: This version of tar is no longer supported, and will not receive security updates. Please upgrade asap.
3417 timing reifyNode:node_modules/tar Completed in 70821ms
3418 timing reifyNode:node_modules/ajv Completed in 70421ms
3419 timing reifyNode:node_modules/neo-async Completed in 70973ms
3420 timing reifyNode:node_modules/type Completed in 71535ms
3421 timing reifyNode:node_modules/node-gyp Completed in 71977ms
3422 timing reifyNode:node_modules/extract-text-webpack-plugin/node_modules/async Completed in 71756ms
3423 timing reifyNode:node_modules/archiver/node_modules/async Completed in 71520ms
3424 timing reifyNode:node_modules/handlebars Completed in 72430ms
3425 timing reifyNode:node_modules/stylelint-scss Completed in 74065ms
3426 timing reifyNode:node_modules/babel-runtime Completed in 75225ms
3427 timing reifyNode:node_modules/eslint Completed in 78354ms
3428 timing reifyNode:node_modules/webpack Completed in 79374ms
3429 timing reifyNode:node_modules/node-sass Completed in 79028ms
3430 warn deprecated webdriverio@4.14.4: outdated version, please use @next
3431 timing reifyNode:node_modules/webdriverio Completed in 86402ms
3432 timing reifyNode:node_modules/stylelint Completed in 86352ms
3433 timing reifyNode:node_modules/cleave.js Completed in 88431ms
3434 timing reifyNode:node_modules/caniuse-lite Completed in 89944ms
3435 timing reifyNode:node_modules/es5-ext Completed in 90823ms
3436 timing reifyNode:node_modules/caniuse-db Completed in 91715ms
3437 timing reifyNode:node_modules/lodash Completed in 92690ms
3438 warn deprecated core-js@2.6.10: core-js@<3.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Please, upgrade your dependencies to the actual version of core-js.
3439 timing reifyNode:node_modules/core-js Completed in 94192ms
3440 timing reify:unpack Completed in 95219ms
3441 timing reify:unretire Completed in 4ms
3442 timing build:queue Completed in 240ms
3443 timing build:link:node_modules/atob Completed in 596ms
3444 timing build:link:node_modules/acorn-jsx/node_modules/acorn Completed in 559ms
3445 timing build:link:node_modules/node-gyp/node_modules/semver Completed in 557ms
3446 timing build:link:node_modules/sass-loader/node_modules/semver Completed in 556ms
3447 timing build:link:node_modules/sgmf-scripts/node_modules/shelljs Completed in 555ms
3448 timing build:link:node_modules/stylelint/node_modules/autoprefixer Completed in 554ms
3449 timing build:link:node_modules/webpack/node_modules/acorn Completed in 552ms
3450 timing build:link:node_modules/acorn Completed in 607ms
3451 timing build:link:node_modules/browserslist Completed in 603ms
3452 timing build:link:node_modules/cssesc Completed in 602ms
3453 timing build:link:node_modules/csso Completed in 634ms
3454 timing build:link:node_modules/errno Completed in 634ms
3455 timing build:link:node_modules/escodegen Completed in 633ms
3456 timing build:link:node_modules/eslint Completed in 631ms
3457 timing build:link:node_modules/esprima Completed in 630ms
3458 timing build:link:node_modules/gonzales-pe Completed in 628ms
3459 timing build:link:node_modules/he Completed in 626ms
3460 timing build:link:node_modules/handlebars Completed in 627ms
3461 timing build:link:node_modules/in-publish Completed in 626ms
3462 timing build:link:node_modules/istanbul Completed in 624ms
3463 timing build:link:node_modules/js-yaml Completed in 623ms
3464 timing build:link:node_modules/jsesc Completed in 622ms
3465 timing build:link:node_modules/json5 Completed in 622ms
3466 timing build:link:node_modules/mkdirp Completed in 620ms
3467 timing build:link:node_modules/mocha Completed in 619ms
3468 timing build:link:node_modules/node-gyp Completed in 618ms
3469 timing build:link:node_modules/miller-rabin Completed in 620ms
3470 timing build:link:node_modules/node-sass Completed in 617ms
3471 timing build:link:node_modules/nopt Completed in 614ms
3472 timing build:link:node_modules/onchange Completed in 613ms
3473 timing build:link:node_modules/regjsparser Completed in 612ms
3474 timing build:link:node_modules/rimraf Completed in 611ms
3475 timing build:link:node_modules/sass-graph Completed in 611ms
3476 timing build:link:node_modules/semver Completed in 610ms
3477 timing build:link:node_modules/sgmf-scripts Completed in 609ms
3478 timing build:link:node_modules/sha.js Completed in 609ms
3479 timing build:link:node_modules/shelljs Completed in 608ms
3480 timing build:link:node_modules/specificity Completed in 607ms
3481 timing build:link:node_modules/sshpk Completed in 607ms
3482 timing build:link:node_modules/strip-indent Completed in 606ms
3483 timing build:link:node_modules/stylelint Completed in 605ms
3484 timing build:link:node_modules/svgo Completed in 604ms
3485 timing build:link:node_modules/terser Completed in 603ms
3486 timing build:link:node_modules/tree-kill Completed in 602ms
3487 timing build:link:node_modules/uglify-js Completed in 602ms
3488 timing build:link:node_modules/uuid Completed in 601ms
3489 timing build:link:node_modules/webdriverio Completed in 599ms
3490 timing build:link:node_modules/watch Completed in 600ms
3491 timing build:link:node_modules/webpack Completed in 598ms
3492 timing build:link:node_modules/which Completed in 597ms
3493 timing build:link:node_modules/window-size Completed in 597ms
3494 timing build:link:node_modules/cosmiconfig/node_modules/esprima Completed in 596ms
3495 timing build:link:node_modules/cosmiconfig/node_modules/js-yaml Completed in 594ms
3496 timing build:link:node_modules/stylelint/node_modules/browserslist Completed in 590ms
3497 timing build:link:node_modules/stylelint/node_modules/esprima Completed in 590ms
3498 timing build:link:node_modules/stylelint/node_modules/js-yaml Completed in 588ms
3499 timing build:link:node_modules/dwupload Completed in 885ms
3500 timing build:link Completed in 891ms
3501 info run node-sass@4.12.0 install node_modules/node-sass node scripts/install.js
3502 info run node-sass@4.12.0 install { code: 0, signal: null }
3503 timing build:run:install:node_modules/node-sass Completed in 2303ms
3504 timing build:run:install Completed in 2304ms
3505 info run core-js@2.6.10 postinstall node_modules/core-js node postinstall || echo "ignore"
3506 info run node-sass@4.12.0 postinstall node_modules/node-sass node scripts/build.js
3507 info run core-js@2.6.10 postinstall { code: 0, signal: null }
3508 timing build:run:postinstall:node_modules/core-js Completed in 822ms
3509 info run node-sass@4.12.0 postinstall { code: 1, signal: null }
3510 timing reify:rollback:createSparse Completed in 23958ms
3511 timing reify:rollback:retireShallow Completed in 0ms
3512 timing command:install Completed in 265527ms
3513 verbose stack Error: command failed
3513 verbose stack at ChildProcess.<anonymous> (C:\Users\vined\AppData\Roaming\npm\node_modules\npm\node_modules\@npmcli\promise-spawn\index.js:64:27)
3513 verbose stack at ChildProcess.emit (node:events:394:28)
3513 verbose stack at maybeClose (node:internal/child_process:1064:16)
3513 verbose stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
3514 verbose pkgid node-sass@4.12.0
3515 verbose cwd E:\plugin-applepay-master
3516 verbose Windows_NT 10.0.19043
3517 verbose argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\vined\\AppData\\Roaming\\npm\\node_modules\\npm\\bin\\npm-cli.js" "install"
3518 verbose node v16.9.1
3519 verbose npm v8.0.0
3520 error code 1
3521 error path E:\plugin-applepay-master\node_modules\node-sass
3522 error command failed
3523 error command C:\WINDOWS\system32\cmd.exe /d /s /c node scripts/build.js
3524 error Building: C:\Program Files\nodejs\node.exe E:\plugin-applepay-master\node_modules\node-gyp\bin\node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
3525 error gyp info it worked if it ends with ok
3525 error gyp verb cli [
3525 error gyp verb cli 'C:\\Program Files\\nodejs\\node.exe',
3525 error gyp verb cli 'E:\\plugin-applepay-master\\node_modules\\node-gyp\\bin\\node-gyp.js',
3525 error gyp verb cli 'rebuild',
3525 error gyp verb cli '--verbose',
3525 error gyp verb cli '--libsass_ext=',
3525 error gyp verb cli '--libsass_cflags=',
3525 error gyp verb cli '--libsass_ldflags=',
3525 error gyp verb cli '--libsass_library='
3525 error gyp verb cli ]
3525 error gyp info using node-gyp@3.8.0
3525 error gyp info using node@16.9.1 | win32 | x64
3525 error gyp verb command rebuild []
3525 error gyp verb command clean []
3525 error gyp verb clean removing "build" directory
3525 error gyp verb command configure []
3525 error gyp verb check python checking for Python executable "python2" in the PATH
3525 error gyp verb `which` failed Error: not found: python2
3525 error gyp verb `which` failed at getNotFoundError (E:\plugin-applepay-master\node_modules\which\which.js:13:12)
3525 error gyp verb `which` failed at F (E:\plugin-applepay-master\node_modules\which\which.js:68:19)
3525 error gyp verb `which` failed at E (E:\plugin-applepay-master\node_modules\which\which.js:80:29)
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\which\which.js:89:16
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\isexe\index.js:42:5
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\isexe\windows.js:36:5
3525 error gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21)
3525 error gyp verb `which` failed python2 Error: not found: python2
3525 error gyp verb `which` failed at getNotFoundError (E:\plugin-applepay-master\node_modules\which\which.js:13:12)
3525 error gyp verb `which` failed at F (E:\plugin-applepay-master\node_modules\which\which.js:68:19)
3525 error gyp verb `which` failed at E (E:\plugin-applepay-master\node_modules\which\which.js:80:29)
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\which\which.js:89:16
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\isexe\index.js:42:5
3525 error gyp verb `which` failed at E:\plugin-applepay-master\node_modules\isexe\windows.js:36:5
3525 error gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:198:21) {
3525 error gyp verb `which` failed code: 'ENOENT'
3525 error gyp verb `which` failed }
3525 error gyp verb check python checking for Python executable "python" in the PATH
3525 error gyp verb `which` succeeded python C:\Python310\python.EXE
3525 error gyp ERR! configure error
3525 error gyp ERR! stack Error: Command failed: C:\Python310\python.EXE -c import sys; print "%s.%s.%s" % sys.version_info[:3];
3525 error gyp ERR! stack File "<string>", line 1
3525 error gyp ERR! stack import sys; print "%s.%s.%s" % sys.version_info[:3];
3525 error gyp ERR! stack ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3525 error gyp ERR! stack SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
3525 error gyp ERR! stack
3525 error gyp ERR! stack at ChildProcess.exithandler (node:child_process:397:12)
3525 error gyp ERR! stack at ChildProcess.emit (node:events:394:28)
3525 error gyp ERR! stack at maybeClose (node:internal/child_process:1064:16)
3525 error gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
3525 error gyp ERR! System Windows_NT 10.0.19043
3525 error gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "E:\\plugin-applepay-master\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
3525 error gyp ERR! cwd E:\plugin-applepay-master\node_modules\node-sass
3525 error gyp ERR! node -v v16.9.1
3525 error gyp ERR! node-gyp -v v3.8.0
3525 error gyp ERR! not ok
3525 error Build failed with error code: 1
3526 verbose exit 1
```
**Below is my package.json**
```
{
"name": "plugin_applepay",
"version": "6.0.0",
"description": "ApplePay plugin for Storefront Reference Architecture",
"main": "index.js",
"scripts": {
"test": "sgmf-scripts --test",
"compile:scss": "sgmf-scripts --compile css",
"compile:js": "sgmf-scripts --compile js",
"build": "npm run compile:js && npm run compile:scss",
"upload": "sgmf-scripts --upload",
"uploadCartridge": "sgmf-scripts --uploadCartridge plugin_applepay",
"lint": "sgmf-scripts --lint js && sgmf-scripts --lint css"
},
"repository": {
"type": "git",
"url": "git+https://github.com/SalesforceCommerceCloud/plugin-applepay.git"
},
"author": "Ilya Volodin <ivolodin@demandware.com>",
"license": "MIT",
"homepage": "https://github.com/SalesforceCommerceCloud/plugin-applepay",
"devDependencies": {
"chai": "^3.5.0",
"cleave.js": "1.5.0",
"css-loader": "^0.28.11",
"eslint": "^3.19.0",
"eslint-config-airbnb-base": "^5.0.3",
"eslint-plugin-import": "^1.16.0",
"mocha": "^5.2.0",
"mocha-junit-reporter": "^1.12.0",
"node-sass": "^4.12.0",
"postcss-loader": "^2.1.6",
"proxyquire": "1.7.4",
"sass-loader": "^7.3.1",
"sgmf-scripts": "^2.3.0",
"shelljs": "^0.7.7",
"sinon": "^1.17.7",
"stylelint": "^8.4.0",
"stylelint-config-standard": "^17.0.0",
"stylelint-scss": "^2.5.0"
},
"browserslist": [
"last 2 versions",
"ie >= 10"
],
"paths": {
"base": "../storefront-reference-architecture/cartridges/app_storefront_base/"
}
}
```
It was working fine until 15 days ago. I reinstalled Windows and it stopped working. Could this have anything to do with the Windows environment variables?
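For context on the traceback above: node-gyp 3.8 probes the interpreter with a Python 2 `print` statement, which is a syntax error on the Python 3.10 it found on PATH. A sketch of the same probe in Python 3 form (the usual fixes are pointing node-gyp at a Python 2 interpreter via `npm config set python`, or upgrading to a node-gyp/node-sass release that supports Python 3):

```python
import sys

# node-gyp 3.8 runs, with Python 2 syntax:
#   python -c 'import sys; print "%s.%s.%s" % sys.version_info[:3]'
# On Python 3, print is a function, so that probe is a SyntaxError.
# The Python 3 form of the same probe:
version = "%s.%s.%s" % sys.version_info[:3]
print(version)
```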
|
2021/10/10
|
[
"https://Stackoverflow.com/questions/69516584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16418162/"
] |
You can create a second list that stores the valid index values.
```
import random
our = [3, 3, 0, 3, 3, 7]
index = []
for i in range(0, len(our)-1) :
if our[i] != 0 :
index.append(i)
# index: [0, 1, 3, 4]
random_index = random.choice(index)
```
**EDIT:** You can perform a sanity check for a non-zero value being present.
The `any()` function returns `True` if any element of an iterable is `True`. `0` is treated as `False` and all non-zero numbers are `True`.
```
valid_index = any(our[:-1])
if valid_index:
index = []
for i in range(0, len(our)-1) :
if our[i] != 0 :
index.append(i)
```
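Equivalently, the valid indices can be built in one pass with `enumerate` (a sketch reusing the sample list; the last element stays excluded, as in the loop above):

```python
import random

our = [3, 3, 0, 3, 3, 7]

# Indices of non-zero values, ignoring the last element
valid = [i for i, v in enumerate(our[:-1]) if v != 0]

random_index = random.choice(valid)  # always points at a non-zero value
```

For this list `valid` is `[0, 1, 3, 4]`.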
|
You can use a while loop to check if the number equals 0 or not.
```
import random
our = [3, 6, 2, 0, 3, 0, 5]
random_number = 0
while random_number == 0:
random_index = random.randint(0, len(our)-2)
random_number = our[random_index]
print(random_number)
```
| 11,202
|
16,276,913
|
at the moment I do:
```
def get_inet_ip():
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('mysite.com', 80))
return s.getsockname()[0]
```
This was based on:
[Finding local IP addresses using Python's stdlib](https://stackoverflow.com/questions/166506/finding-local-ip-addresses-using-pythons-stdlib)
However, this looks a bit dubious. As far as I can tell, it opens a socket to mysite.com:80, and then returns the first address for that socket, assuming it to be an IPv4 address. This seems a bit dodgy... I don't think we can ever guarantee that to be the case.
That's my first question: is it safe? On an IPv6-enabled server, could the IPv6 address ever be returned unexpectedly?
My second question is how to get the IPv6 address in a similar way. I'm going to modify the function to take an optional ipv6 parameter.
|
2013/04/29
|
[
"https://Stackoverflow.com/questions/16276913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2194306/"
] |
The question is, do you just want to connect, or do you really want the address?
If you just want to connect, you can do
```
s = socket.create_connection(('mysite.com', 80))
```
and have the connection established.
However, if you are interested in the address, you can go one of these ways:
```
def get_ip_6(host, port=0):
import socket
# search only for the wanted v6 addresses
result = socket.getaddrinfo(host, port, socket.AF_INET6)
    return result
    # or, just the first answer and only the address:
    # return result[0][4][0]
```
or, to be closer to [another, already presented solution](https://stackoverflow.com/a/16277577/296974):
```
def get_ip_6(host, port=0):
# search for all addresses, but take only the v6 ones
alladdr = socket.getaddrinfo(host,port)
ip6 = filter(
lambda x: x[0] == socket.AF_INET6, # means its ip6
alladdr
)
# if you want just the sockaddr
# return map(lambda x:x[4],ip6)
return list(ip6)[0][4][0]
```
|
You should be using the function [socket.getaddrinfo()](http://docs.python.org/2/library/socket.html#socket.getaddrinfo)
Example code to get IPv6
```
def get_ip_6(host,port=80):
# discard the (family, socktype, proto, canonname) part of the tuple
# and make sure the ips are unique
alladdr = list(
set(
map(
lambda x: x[4],
socket.getaddrinfo(host,port)
)
)
)
ip6 = filter(
lambda x: ':' in x[0], # means its ip6
alladdr
)
return ip6
```
| 11,203
|
4,093,118
|
Is there a way to increment the year on filtered objects using the update() method?
I am using:
```
python 2.6.5
django 1.2.1 final
mysql Ver 14.14 Distrib 5.1.41
```
I know it's possible to do something like this:
```
today = datetime.datetime.today()
for event in Event.objects.filter(end_date__lt=today).iterator():
event.start_date = festival.start_date + datetime.timedelta(365)
event.end_date = festival.end_date + datetime.timedelta(365)
event.save()
```
However, in some cases, I would prefer to use the update() method.
```
# This does not work..
Event.objects.all().update(
start_date=F('start_date') + datetime.timedelta(365),
end_date=F('end_date') + datetime.timedelta(365)
)
```
With the example above, I get:
```
Warning: Truncated incorrect DOUBLE value: '365 0:0:0'
```
The sql query it's trying to make is:
```
UPDATE `events_event` SET `start_date` = `events_event`.`start_date` + 365 days, 0:00:00, `end_date` = `events_event`.`end_date` + 365 days, 0:00:00
```
I found something in the mysql guide, but this is raw sql!
```
SELECT DATE_ADD('2008-12-15', INTERVAL 1 YEAR);
```
Any idea?
|
2010/11/04
|
[
"https://Stackoverflow.com/questions/4093118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/208347/"
] |
One potential cause of *"Warning: Data truncated for column X"* exception is the use of non-whole day values for the timedelta being added to the **DateField** - it is fine in python, but fails when written to the mysql db. If you have a **DateTimeField**, it works too, since the precision of the persisted field matches the precision of the timedelta.
I.e.:
```
>>> from django.db.models import F
>>> from datetime import timedelta
>>> from myapp.models import MyModel
>>> [field for field in MyModel._meta.fields if field.name == 'valid_until'][0]
<django.db.models.fields.DateField object at 0x3d72fd0>
>>> [field for field in MyModel._meta.fields if field.name == 'timestamp'][0]
<django.db.models.fields.DateTimeField object at 0x43756d0>
>>> MyModel.objects.filter(pk=1).update(valid_until=F('valid_until') + timedelta(days=3))
1L
>>> MyModel.objects.filter(pk=1).update(valid_until=F('valid_until') + timedelta(days=3.5))
Traceback (most recent call last):
...
Warning: Data truncated for column 'valid_until' at row 1
>>> MyModel.objects.filter(pk=1).update(timestamp=F('timestamp') + timedelta(days=3.5))
1L
```
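The whole-day constraint can be seen in plain Python, independent of Django: a fractional `timedelta` carries a seconds component that a `DATE` column has no room for:

```python
from datetime import timedelta

whole = timedelta(days=3)
fractional = timedelta(days=3.5)

# The fractional half day is stored as seconds (12 h = 43200 s),
# which a DATE column cannot represent.
print(whole.days, whole.seconds)            # 3 0
print(fractional.days, fractional.seconds)  # 3 43200
```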
|
Quick but ugly:
```
>>> a.created.timetuple()
time.struct_time(tm_year=2000, tm_mon=11, tm_mday=2, tm_hour=2, tm_min=35, tm_se
c=14, tm_wday=3, tm_yday=307, tm_isdst=-1)
>>> time = list(a.created.timetuple())
>>> time[0] = time[0] + 1
>>> time
[2001, 11, 2, 2, 35, 14, 3, 307, -1]
>>>
```
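A cleaner alternative to the `timetuple()` round-trip is `date.replace`, with a guard for February 29 (a sketch; clamping to the 28th is just one possible policy):

```python
from datetime import date

def add_year(d):
    """Return the same date one year later, clamping Feb 29 to Feb 28."""
    try:
        return d.replace(year=d.year + 1)
    except ValueError:  # Feb 29 in a leap year
        return d.replace(year=d.year + 1, day=28)

print(add_year(date(2008, 12, 15)))  # 2009-12-15
print(add_year(date(2020, 2, 29)))   # 2021-02-28
```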
| 11,204
|
58,537,324
|
For multiple `csv` files in a folder, I hope to loop over all files ending with `csv` and merge them into one Excel file; here I give two examples:
**first.csv**
```
date a b
0 2019.1 1.0 NaN
1 2019.2 NaN 2.0
2 2019.3 3.0 2.0
3 2019.4 3.0 NaN
```
**second.csv**
```
date c d
0 2019.1 1.0 NaN
1 2019.2 5.0 2.0
2 2019.3 3.0 7.0
3 2019.4 6.0 NaN
4 2019.5 NaN 10.0
```
**...**
My desired output is like this, merging them based on `date`:
```
date a b c d
0 2019/1/31 1.0 NaN 1.0 NaN
1 2019/2/28 NaN 2.0 5.0 2.0
2 2019/3/31 3.0 2.0 3.0 7.0
3 2019/4/30 3.0 NaN 6.0 NaN
4 2019/5/31 NaN NaN NaN 10.0
```
I have edited the following code, but obviously some parts of the `date` conversion and the merging of the data frames are incorrect:
```
import numpy as np
import pandas as pd
import glob
dfs = pd.DataFrame()
for file_name in glob.glob("*.csv"):
# print(file_name)
df = pd.read_csv(file_name, engine='python', skiprows=2, encoding='utf-8')
df = df.dropna()
df = df.dropna(axis = 1)
df['date'] = pd.to_datetime(df['date'], format='%Y.%m')
...
dfs = pd.merge(df1, df2, on = 'date', how= "outer")
# save the data frame
writer = pd.ExcelWriter('output.xlsx')
dfs.to_excel(writer,'sheet1')
writer.save()
```
Please help me. Thank you.
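For what it's worth, once every file has been read into a data frame, the outer merge over the whole list can be sketched with `functools.reduce` (the two small frames below are illustrative stand-ins for the CSVs above):

```python
from functools import reduce

import pandas as pd

# Illustrative stand-ins for the frames read from first.csv / second.csv
df1 = pd.DataFrame({'date': ['2019.1', '2019.2'], 'a': [1.0, 2.0]})
df2 = pd.DataFrame({'date': ['2019.1', '2019.3'], 'c': [5.0, 6.0]})

# Outer-merge every frame in the list on the shared 'date' column
dfs = reduce(lambda left, right: pd.merge(left, right, on='date', how='outer'),
             [df1, df2])
print(dfs.columns.tolist())  # ['date', 'a', 'c']
```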
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58537324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8410477/"
] |
You can use LINQ to perform your shift; here is a simple method you can use:
```
public int[] shiftRight(int[] array, int shift)
{
var result = new List<int>();
var toTake = array.Take(shift);
var toSkip = array.Skip(shift);
result.AddRange(toSkip);
result.AddRange(toTake);
return result.ToArray();
}
```
|
If this is only for strings and the wraparound is necessary, I would suggest using `str.Substring(0, shift)`
and appending it to `str.Substring(shift)` (don't try to reinvent the wheel).
(Some info about the substring method: [String.Substring](https://learn.microsoft.com/en-us/dotnet/api/system.string.substring?view=netframework-4.8).)
Otherwise, the reason it did not work is that you only saved the first value of the array instead of all the values you wanted to shift.
---
Do not save only the first value in the array
```cs
var x = strArray[0];
```
use
```cs
string[] x = new string[shift];
for (int i = 0; i < shift; i++)
{
x[i] = strArray[i];
}
```
instead so you collect all the values you need to add to the end.
>
> EDIT: forgot the shifting
>
>
>
Shift the old data to the left
```cs
for (int i = 0; i < strArray.Length-shift; i++)
{
strArray[i] = strArray[i+shift];
}
```
And replace
```cs
strArray[strArray.Length - shift] = x;
```
for
```cs
for(int i = 0; i < x.Length; i++){
int newlocation = (strArray.Length - shift)+i;
strArray[newlocation] = x[i];
}
```
also replace the print loop for
```cs
Console.WriteLine(string.Join(" ", strArray));
```
its easier and makes it one line instead of three lines of code.
(also the docs for the Join function [string.Join](https://learn.microsoft.com/en-us/dotnet/api/system.string.join?view=netframework-4.8))
| 11,207
|
67,539,918
|
After installing Flask, When I used
`from flask import Flask`
to check if flask is properly installed or not, it gave the following error
```
>>> from flask import Flask
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/flask/__init__.py", line 3, in <module>
from werkzeug.exceptions import abort
File "/usr/local/lib/python3.5/dist-packages/werkzeug/__init__.py", line 1, in <module>
from .serving import run_simple
File "/usr/local/lib/python3.5/dist-packages/werkzeug/serving.py", line 151
server: "BaseWSGIServer"
^
SyntaxError: invalid syntax
```
How can I solve this issue?
|
2021/05/14
|
[
"https://Stackoverflow.com/questions/67539918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12194111/"
] |
Flask is now at 2.0.0, which has ratcheted forward on requirements.
If you're on a system that is still on Python3.5, your alternative is to install the most recent in the 1.x line, and put
```
Flask==1.1.4
```
in your `requirements.txt`, or
```
venv/bin/pip install Flask==1.1.4
```
to install it manually.
|
The error indicates that you're running Python 3.5. The variable annotations that the library uses weren't [introduced until 3.6](https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations), though.
Upgrade to at least 3.6 to solve this. 3.9 is available too, so unless you need to use an older version for some reason, you might as well just do a full upgrade.
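The offending construct can be reproduced in isolation: `server: "BaseWSGIServer"` is a PEP 526 variable annotation, which only parses from Python 3.6 onwards (a minimal check, independent of werkzeug):

```python
import sys

snippet = 'server: "BaseWSGIServer" = None'  # same construct werkzeug uses

if sys.version_info >= (3, 6):
    compile(snippet, '<snippet>', 'exec')  # parses fine on 3.6+
else:
    try:
        compile(snippet, '<snippet>', 'exec')
    except SyntaxError:
        pass  # exactly the failure shown in the traceback above
```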
| 11,209
|
37,316,731
|
I have a bash script which reads a variable from the environment and then passes it to the python script like this
```
#!/usr/bin/env bash
if [ -n "${my_param}" ]
then
my_param_str="--my_param ${my_param}"
fi
python -u my_script.py ${my_param_str}
```
Corresponding python script look like this
```
parser = argparse.ArgumentParser(description='My script')
parser.add_argument('--my_param',
type=str,
default='')
parsed_args = parser.parse_args()
print(parsed_args.description)
```
I want to provide a string with dash characters, "Some -- string", as an argument. It doesn't work via the bash script, but calling directly through the command line works ok.
```
export my_param="Some -- string"
./launch_my_script.sh
```
gives an error `unrecognized arguments: -- string` and
`python my_script.py --my_param "Some -- string"` works well.
I've tried to play with **nargs** and escape `my_param_str="--my_param '${my_param}'"` this way, but both solutions didn't work.
Any workaround for this case? Or different approach how to handle this?
|
2016/05/19
|
[
"https://Stackoverflow.com/questions/37316731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1744914/"
] |
For the limited example you can basically get the same behavior just using
```
arg = os.environ.get('my_param', '')
```
where the first argument to `get` is the variable name and the second is the default value used should the var not be in the environment.
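Combining the environment lookup with argparse keeps the `--my_param` flag usable while avoiding the word-splitting problem entirely (a sketch following the names in the question):

```python
import argparse
import os

os.environ['my_param'] = 'Some -- string'  # stand-in for the exported variable

parser = argparse.ArgumentParser(description='My script')
parser.add_argument('--my_param', type=str,
                    default=os.environ.get('my_param', ''))

# No command-line value given, so the environment value wins
parsed_args = parser.parse_args([])
print(parsed_args.my_param)  # Some -- string
```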
|
The quotes you write around the string do not get preserved when you assign a Bash string variable:
```
$ export my_param="Some -- string"
$ echo $my_param
Some -- string
```
You need to place the quotes around the variable again when you use it to create the `my_param_str`:
```
my_param_str="--my_param \"${my_param}\""
```
| 11,211
|
50,420,139
|
I am using python to generate a query text which I then send to the `SQL server`. The query is created in a function that accepts a list of strings which are then inserted into the query.
The query looks like:
```
SELECT *
FROM DB
WHERE last_word in ('red', 'phone', 'robin')
```
The issue is that here I have just 3 words, `red`, `phone`, and `robin`, but in another use case I have over 4,000 words and the response takes about 2 hours. How can I rewrite this query to make it more performant?
|
2018/05/18
|
[
"https://Stackoverflow.com/questions/50420139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1367204/"
] |
How many rows do you have in "DB"? Are there more "last\_word"s matching the 4000 words in the IN clause than not? If so, it would be better to use NOT IN, to exclude instead of include. Also, try never to use SELECT \* since the wildcard hurts performance; it's better to explicitly define the columns you want to include in your query.
You could also try to put the 4000 words to match on in a (temporary) table or a CTE and then join on it, since joins usually work better than large loads of data within the IN clause. With this, I still recommend to not use the wildcard in the SELECT statement.
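The temp-table-plus-join idea can be sketched with Python's built-in `sqlite3`; table and column names here are made up for the demo, and with SQL Server you would bulk-insert the words into a real temporary table instead:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE db (last_word TEXT)')
conn.executemany('INSERT INTO db VALUES (?)',
                 [('red',), ('phone',), ('tree',), ('robin',)])

# Load the (potentially thousands of) search words into their own table
conn.execute('CREATE TEMP TABLE words (word TEXT PRIMARY KEY)')
conn.executemany('INSERT INTO words VALUES (?)',
                 [('red',), ('phone',), ('robin',)])

rows = conn.execute(
    'SELECT db.last_word FROM db JOIN words ON db.last_word = words.word'
).fetchall()
print(sorted(r[0] for r in rows))  # ['phone', 'red', 'robin']
```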
|
Try doing something like this:
```
SELECT *
FROM DB INNER JOIN WORDS_TABLE
ON DB.WORDS = WORDS_TABLE.WORDS;
```
Instead of the `*`, select only the columns you want to get.
A `JOIN` in this case will be faster than `IN`, which would otherwise require another inner query if you put the words in a table.
| 11,214
|
22,079,173
|
I can't seem to be able to get the python ldap module installed on my OS X Mavericks 10.9.1 machine.
Kernel details:
uname -a
Darwin 13.0.0 Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE\_X86\_64 x86\_64
I tried what was suggested here:
<http://projects.skurfer.com/posts/2011/python_ldap_lion/>
But when I try to use pip I get a different error
Modules/LDAPObject.c:18:10: fatal error: 'sasl.h' file not found
`#include <sasl.h>`
I also tried what was suggested here:
[python-ldap OS X 10.6 and Python 2.6](https://stackoverflow.com/questions/6475118/python-ldap-os-x-10-6-and-python-2-6)
But with the same error.
I am hoping someone could help me out here.
|
2014/02/27
|
[
"https://Stackoverflow.com/questions/22079173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1865366/"
] |
using pieces from both @hharnisc and @mick-t answers.
```
pip install python-ldap \
--global-option=build_ext \
--global-option="-I$(xcrun --show-sdk-path)/usr/include/sasl"
```
|
I had the same problem. I'm using Macports on my Mac and I have cyrus-sasl2 installed which provides sasl.h in /opt/local/include/sasl/. You can pass options to build\_ext using pip's global-option argument. To pass the include PATH to /opt/local/include/sasl/sasl.h run pip like this:
`pip install python-ldap --global-option=build_ext --global-option="-I/opt/local/include/sasl"`
Alternatively you could point it to whatever the output from `xcrun --show-sdk-path` provides. On my box that's:
`/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk`
Then you need to determine the PATH to the sasl header files. For me that's:
`/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/sasl/`
Let me know if that helps or you need a hand.
| 11,219
|
2,249,162
|
I'm currently indexing my music collection with python. Ideally I'd like my output file to be formatted as;
```
"Artist;
Album;
Tracks - length - bitrate - md5
Artist2;
Album2;
Tracks - length - bitrate - md5"
```
But I can't seem to work out how to achieve this. Any suggestions?
|
2010/02/12
|
[
"https://Stackoverflow.com/questions/2249162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You could do something like the following:
```
<% form_for @user, :url => { :action => "update" } do |user_form| %>
...
<% user_form.fields_for :profiles do |profiles_fields| %>
Phone Number: <%= profiles_fields.text_field :profile_mobile_number %>
<% end %>
<% end %>
```
But since you already have an association, then might as well use '[accepts\_nested\_attributes\_for](http://api.rubyonrails.org/classes/ActiveRecord/NestedAttributes/ClassMethods.html)'
|
You can use '[accepts\_nested\_attributes\_for](http://api.rubyonrails.org/classes/ActiveRecord/NestedAttributes/ClassMethods.html)' to do this; but there's a little trick in forms:
You must use the singular, and call fields\_for for each profile, like this:
```
<% form_for @user do |f| -%>
<% @user.profiles.each do %>
<% f.fields_for :profile_attributes, profile do |ff| -%>
<% end %>
```
Notice that is **:profile\_attributes**, instead of just **:profile**.
| 11,229
|
9,394,947
|
I have been having a few issues using the quadrature function in python 2.7 (part of the scipy.integrate module). The equation I am trying to integrate is simply:
```
x/(d^2) - (x^2)
```
The integration is between limits a and b. However, I need to do the integration at 40 different values of d and am not sure how to pass in the second argument so as to loop the integration over the values of d. Any help would be much appreciated. Also, is quadrature the best way to evaluate this problem?
|
2012/02/22
|
[
"https://Stackoverflow.com/questions/9394947",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1171835/"
] |
```
In [9]: from scipy.integrate import quad
In [10]: a = 0
In [11]: b = 1
In [12]: [quad(lambda x, d: x/(d**2)-x**2, a, b, args=d) for d in range(2, 5)]
Out[12]:
[(-0.20833333333333334, 2.3717550132075781e-15),
(-0.27777777777777773, 3.0886887822595405e-15),
(-0.30208333333333337, 3.3546344203581545e-15)]
```
Change the `for d in range(2, 5)` as required.
|
```
from numpy import arange
from scipy.integrate import quad
beg = 0.
end = 4.
res = []
for d in arange(1., 40.):
res.append(quad(lambda x: x/(d**2.)-(x**2.), beg, end))
```
You can then access the results by
```
print res[0]
```
or even
```
print res
```
| 11,235
|
217,900
|
I want to write unit tests for my application code. I have different methods and now want to test these methods one by one in a Python script,
but I do not know how to write them. Can anyone give me an example of a small unit test in Python?
I am thankful.
|
2008/10/20
|
[
"https://Stackoverflow.com/questions/217900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17451/"
] |
Read the [unit testing framework section](http://www.python.org/doc/2.5.2/lib/module-unittest.html) of the [Python Library Reference](http://www.python.org/doc/2.5.2/lib/lib.html).
A [basic example](http://www.python.org/doc/2.5.2/lib/minimal-example.html) from the documentation:
```
import random
import unittest
class TestSequenceFunctions(unittest.TestCase):
def setUp(self):
self.seq = range(10)
def testshuffle(self):
# make sure the shuffled sequence does not lose any elements
random.shuffle(self.seq)
self.seq.sort()
self.assertEqual(self.seq, range(10))
def testchoice(self):
element = random.choice(self.seq)
self.assert_(element in self.seq)
def testsample(self):
self.assertRaises(ValueError, random.sample, self.seq, 20)
for element in random.sample(self.seq, 5):
self.assert_(element in self.seq)
if __name__ == '__main__':
unittest.main()
```
|
Here's an [example](http://www.python.org/doc/2.5.2/lib/minimal-example.html) and you might want to read a little more on [pythons unit testing](http://www.python.org/doc/2.5.2/lib/module-unittest.html).
| 11,238
|
64,529,777
|
I have developed an Azure ARM template to deploy an Ubuntu Linux machine that once provisioned a bash script will run to install a particular software. The software involves downloading some packages as well as pass an input parameter from the user in order to complete the configuration. The issue I am facing is that the script extension seems to work intermittently. I deployed it successfully once, and now it keeps failing all the time. Here is the error it returns after a few seconds that the custom script starts executing:
```
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "Conflict",
"message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'metaport-onboard'. Error message: \\\"Enable failed: failed to execute command: command terminated with exit status=1\\n[stdout]\\nReading package lists...\\nBuilding dependency tree...\\nReading state information...\\nsoftware-properties-common is already the newest version (0.96.24.32.14).\\nsoftware-properties-common set to manually installed.\\nThe following packages were automatically installed and are no longer required:\\n grub-pc-bin linux-headers-4.15.0-121\\nUse 'sudo apt autoremove' to remove them.\\n0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.\\nReading package lists...\\nBuilding dependency tree...\\nReading state information...\\nSome packages could not be installed. 
This may mean that you have\\nrequested an impossible situation or if you are using the unstable\\ndistribution that some required packages have not yet been created\\nor been moved out of Incoming.\\nThe following information may help to resolve the situation:\\n\\nThe following packages have unmet dependencies:\\n python3-pip : Depends: python3-distutils but it is not installable\\n Recommends: build-essential but it is not installable\\n Recommends: python3-dev (>= 3.2) but it is not installable\\n Recommends: python3-setuptools but it is not installable\\n Recommends: python3-wheel but it is not installable\\n\\n[stderr]\\n+ sudo apt-get -qq -y update\\n+ sudo apt-get -q -y install software-properties-common\\n+ sudo apt-get -q -y install python3-pip\\nE: Unable to correct problems, you have held broken packages.\\nNo passwd entry for user 'mpadmin'\\n\\\"\\r\\n\\r\\nMore information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot \"\r\n }\r\n ]\r\n }\r\n}"
}
]
}
```
Below is the portion of the template where I defined the extension
```
{
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"apiVersion": "2019-12-01",
"location": "[variables('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]",
"[resourceId('Microsoft.Network/natGateways', variables('natGatewayName'))]",
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('virtualMachineSize')]"
},
"osProfile": {
"computerName": "[variables('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPasswordOrKey')]",
"linuxConfiguration": "[if(equals(parameters('authenticationType'), 'password'), json('null'), variables('linuxConfiguration'))]"
},
"storageProfile": {
"imageReference": {
"publisher": "[variables('imagePublisher')]",
"offer": "[variables('imageOffer')]",
"sku": "[variables('imageSKU')]",
"version": "[variables('imageVersion')]"
},
"osDisk": {
"name": "[concat(variables('vmName'), '_OSDisk')]",
"caching": "ReadWrite",
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "[variables('storageAccountType')]"
}
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
}
]
}
},
"resources": [
{
"name": "metaport-onboard",
"type": "extensions",
"apiVersion": "2019-03-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/', variables('vmName'))]",
"[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]",
"[resourceId('Microsoft.Network/natGateways', variables('natGatewayName'))]",
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Extensions",
"type": "CustomScript",
"typeHandlerVersion": "2.1",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"https://raw.githubusercontent.com/willguibr/azure/main/Latest/MetaPort-Standalone-NATGW-v1.0/install_metaport.sh"
]
},
"protectedSettings": {
"commandToExecute": "[concat('sh install_metaport.sh ', parameters('metaTokenCode'))]"
}
}
}
]
}
]
}
```
The full template package is [here](https://github.com/willguibr/azure/tree/main/Latest/MetaPort-Standalone-NATGW-v1.0).
Does anyone have any idea how to prevent this issue, or what correction may be necessary?
|
2020/10/25
|
[
"https://Stackoverflow.com/questions/64529777",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14497508/"
] |
I ended up using this code
```
import * as React from "react";
import Form from "@rjsf/material-ui";
import {FormProps, ObjectFieldTemplateProps} from "@rjsf/core";
import Button from "@material-ui/core/Button/Button";
import styles from './LSForm.module.scss';
import Grid from "@material-ui/core/Grid/Grid";
interface Props extends FormProps<any> {
submitButtonText?: string
showSuccessMessage?: boolean
hide?: boolean
columns?: number
spacing?: 0 | 6 | 4 | 3 | 1 | 2 | 5 | 7 | 8 | 9 | 10 | undefined
}
const LSForm: React.FC<Props> = (props) => {
const getColumns = () => {
if (props.columns === 1) {
return 12;
}
if (props.columns === 2) {
return 6;
}
if (props.columns === 3) {
return 4;
}
if (props.columns === 4) {
return 3;
}
// Default one colum
return 12;
};
const getSpacing = (): 0 | 6 | 4 | 3 | 1 | 2 | 5 | 7 | 8 | 9 | 10 | undefined => {
let spacing: 0 | 6 | 4 | 3 | 1 | 2 | 5 | 7 | 8 | 9 | 10 | undefined = 0;
if (props.spacing != null) {
spacing = props.spacing;
}
return spacing;
};
const ObjectFieldTemplate = (props: ObjectFieldTemplateProps) => {
return (
<div>
<Grid container spacing={getSpacing()}>
{props.title}
{props.description}
{props.properties.map(element => {
return <Grid item xs={getColumns()}><div className="property-wrapper">{element.content}</div></Grid>
})}
</Grid>
</div>
);
};
return (
<>
{ !props.hide &&
<Form {...props} ObjectFieldTemplate={ObjectFieldTemplate}>
{ props.children &&
<div className={styles.actionBar}>
{props.children}
</div>
}
{ !props.children &&
<div className={styles.actionBar}>
<Button type={"submit"} disabled={props.disabled} variant="contained" color="primary">{ props.submitButtonText ? props.submitButtonText : "Speichern"}</Button>
</div>
}
</Form>
}
</>
);
};
export default LSForm;
```
Use it as follows:
...
```
return (
<>
<LSForm schema={schema}
hide={hideForm}
spacing={3}
columns={3}
uiSchema={uiSchema}
onSubmit={(data) => onSubmit(data)}>
<Button type={"submit"} variant="contained" color="primary">Login</Button>
<Button variant="contained" color="primary">Forgot Password</Button>
</LSForm>
</>
);
```
...
|
I found a solution, which is not my preferred one:
```
const ObjectFieldTemplate = (props: ObjectFieldTemplateProps) => {
return (
<div>
{props.title}
{props.description}
{props.properties.map(element => {
return <div className="property-wrapper">{element.content}</div>
})}
</div>
);
}
<Form ObjectFieldTemplate={ObjectFieldTemplate}>
```
After several hours of googling, I found another solution:
```
const uiSchema = {
"email": {
"ui:widget": "email",
'ui:column': 'xs6'
},
"password": {
"ui:widget": "password",
'ui:column': 'xs6'
}
};
```
Problem here: `ui:column: xs6` only works with the Bootstrap theme. Because I am using Material UI, this approach cannot be used, or I am missing something to get it running.
I would really appreciate more or better solutions.
| 11,241
|
68,835,056
|
I have a python script that I compiled to an EXE, one of the purposes of it is to extract a 7z file and save it to a destination.
If I'm running it from PyCharm everything works great. This is the code:
```
def delete_old_version_rename_new(self):
winutils.delete(self.old_version)
print("Extracting...")
Archive(f"{self.new_version}_0.7z").extractall(f"{self.destination}")
print("Finished")
os.rename(self.new_version, self.new_name)
```
I used pyinstaller to compile the program with the command `pyinstaller --onefile --paths {path to site packages} main.py`
Thanks!
|
2021/08/18
|
[
"https://Stackoverflow.com/questions/68835056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14895566/"
] |
Single-file executables self-extract to a temporary folder and run from there, so any relative paths will be relative to that folder *not* the location of executable you originally ran.
My solution is a code snippet such as this:
```
if getattr(sys, 'frozen', False):
app_path = os.path.dirname(sys.executable)
else:
app_path = os.path.dirname(os.path.abspath(__file__))
```
which gives you the location of the executable in the single-file executable case or the location of the script in the case of running from source.
See the PyInstaller documentation [here](https://pyinstaller.readthedocs.io/en/stable/runtime-information.html).
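Combining that snippet with the extraction code from the question, a minimal sketch (the `get_app_path` helper name is mine, not from PyInstaller, and the fallback to the working directory is an assumption for contexts where `__file__` is undefined):

```python
import os
import sys

def get_app_path():
    # Frozen by PyInstaller: sys.executable points at the bundled EXE,
    # so paths built from it survive the temp-folder self-extraction.
    if getattr(sys, 'frozen', False):
        return os.path.dirname(sys.executable)
    # Running from source: use the directory of this script, falling
    # back to the working directory if __file__ is not defined.
    script = globals().get('__file__') or os.getcwd()
    return os.path.dirname(os.path.abspath(script))

# e.g. build the archive path relative to the EXE instead of relying
# on the current working directory:
# archive = os.path.join(get_app_path(), f"{self.new_version}_0.7z")
```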
|
Eventually I just used a 7z command line I took from <https://superuser.com/questions/95902/7-zip-and-unzipping-from-command-line>
and used `os.system()` to invoke it.
Since my program is command-line based it worked even better, since it provides status on the extraction process.
The only downside is that I have to move `7z.exe` to the directory of my program.
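A hedged sketch of that approach; the flag layout follows 7-Zip's documented `x`/`-o`/`-y` syntax, and the helper names are my own:

```python
import os

def build_7z_command(archive, destination, seven_zip="7z.exe"):
    # 7-Zip syntax: 7z x <archive> -o<outdir> -y
    # 'x' extracts with full paths; note there is no space after -o,
    # and -y auto-answers yes to any prompts.
    return f'"{seven_zip}" x "{archive}" -o"{destination}" -y'

def extract_archive(archive, destination):
    # os.system streams 7z's progress output straight to the console,
    # which suits a command-line program; returns the exit status.
    return os.system(build_7z_command(archive, destination))
```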
| 11,242
|
49,677,269
|
I'm trying to create an email template in Django which uses [Materialize.css](http://materializecss.com/). Here is the template code:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>New Activation</title>
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0-beta/css/materialize.min.css">
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<style>
/* Use the same logo size and positioning as in the eBay email to families */
img.logo {
width: 80px;
margin: 30px 20px;
}
body {
background-color: #f3f3f3;
}
/* Align the column with the logo */
.col.lucy-col {
padding-left: 20px;
padding-right: 20px;
}
.card-title h4 {
margin-top: 0px;
}
/* Use Materialize's default light blue color for card-action links (instead of an orange one) */
.card-action.lucy-card-action a {
color: #039be5 !important;
}
/* Make the table more compact vertically */
td {
padding-top: 10px;
padding-bottom: 10px;
}
</style>
</head>
<body>
<img class="logo" src="https://s3-us-west-1.amazonaws.com/lucy-prod/images/logo.png" alt="LUCY"/>
<!-- Materialize table within a Materialize card (cf. http://materializecss.com/cards.html and http://materializecss.com/table.html) -->
<div class="row">
<div class="col lucy-col s12 m6">
<div class="card">
<div class="card-content">
<span class="card-title"><h4>New Activation</h4></span>
<table class="striped">
<tbody>
<tr>
<td><b>Name</b></td>
<td>{{ first_name }} {{ last_name }}</td>
</tr>
<tr>
<td><b>Birth parent</b></td>
<td>{{ birth_parent }}</td>
</tr>
<tr>
<td><b>Email</b></td>
<td><a href="mailto:{{email}}">{{ email }}</a></td>
</tr>
<tr>
<td><b>Phone</b></td>
<td>{{ phone }}</td>
</tr>
<tr>
<td><b>Address</b></td>
<td>{{ address|linebreaksbr }}</td>
</tr>
<tr>
<td><b>Company</b></td>
<td>{{ company }}</td>
</tr>
<tr>
<td><b>Company email</b></td>
<td><a href="mailto:{{ company_email }}">{{ company_email }}</a></td>
</tr>
<tr>
<td><b>Stage</b></td>
<td>{{ stage }}</td>
</tr>
<tr>
<td><b>Baby arrived?</b></td>
<td>{{ date }}</td>
</tr>
<tr>
<td><b>First child?</b></td>
<td>{{ is_first_child }}</td>
</tr>
<tr>
<td><b>Tell us more</b></td>
<td>{{ tell_us_more|linebreaksbr }}</td>
</tr>
</tbody>
</table>
</div>
<div class="card-action lucy-card-action">
<a href={{ url }}>Case Management</a>
</div>
</div>
</div>
</div>
</body>
</html>
```
When I hook it up to a view (passing in some dummy context), here is how it looks:
[](https://i.stack.imgur.com/TE033.png)
I'm trying to get this into an email using [`premailer`](https://pypi.python.org/pypi/premailer/3.1.1). Here is the code snippet where I'm sending the email:
```
from premailer import transform
from django.core.mail import EmailMultiAlternatives
...
message_html = transform(render_to_string('emails/activate_to_delivery.html', context))
email = EmailMultiAlternatives(
subject='A new family has activated!',
body=message_text,
from_email=settings.DEFAULT_FROM_EMAIL,
to=self._get_recipients(family),
reply_to=[family.point_of_contact_email])
email.attach_alternative(message_html, "text/html")
email.send()
```
However, when I send an email (through a `TestCase`), I get a bunch of `WARNING`s and `ERROR`s (of which I've included a subset below):
```
(venv) Kurts-MacBook-Pro-2:lucy-web kurtpeek$ python manage.py test activation.tests.ActivationEmailTest
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
WARNING Property: Unknown Property name. [18:3: word-wrap]
WARNING Property: Unknown Property name. [6:28081: -ms-text-size-adjust]
WARNING Property: Unknown Property name. [6:28107: -webkit-text-size-adjust]
WARNING Property: Unknown Property name. [6:28301: -webkit-box-sizing]
WARNING Property: Unknown Property name. [6:28463: -webkit-text-decoration-skip]
WARNING Property: Unknown Property name. [6:28557: -webkit-text-decoration]
WARNING Property: Unknown Property name. [6:28598: -moz-text-decoration]
ERROR Property: Invalid value for "CSS Level 2.1" property: underline dotted [6:28636: text-decoration]
WARNING Property: Unknown Property name. [6:29334: -webkit-appearance]
ERROR Property: Invalid value for "CSS3 Basic User Interface Module" property: 1px dotted ButtonText [6:29628: outline]
WARNING Property: Unknown Property name. [6:29704: -webkit-box-sizing]
WARNING Property: Unknown Property name. [6:29938: -webkit-box-sizing]
WARNING Property: Unknown Property name. [6:30114: -webkit-appearance]
WARNING Property: Unknown Property name. [6:30252: -webkit-appearance]
WARNING Property: Unknown Property name. [6:30305: -webkit-appearance]
WARNING Property: Unknown Property name. [6:30474: -webkit-box-sizing]
WARNING Property: Unknown Property name. [6:30545: -webkit-box-sizing]
ERROR Property: Invalid value for "CSS3 Basic User Interface Module" property: inherit [6:30572: box-sizing]
WARNING Property: Unknown Property name.
...
WARNING Property: Unknown Property name. [1:18: -ms-text-size-adjust]
WARNING Property: Unknown Property name. [1:44: -webkit-text-size-adjust]
WARNING Property: Unknown Property name. [1:30: -webkit-text-decoration-skip]
WARNING Property: Unknown Property name. [1:1: -webkit-box-sizing]
WARNING Property: Unknown Property name. [1:36: -webkit-tap-highlight-color]
ERROR Property: Invalid value for "CSS Level 2.1" property: 2.28rem [1:1: font-size]
ERROR Property: Invalid value for "CSS Level 2.1" property: 1.52rem 0 0.912rem 0 [1:36: margin]
WARNING Property: Unknown Property name. [1:1: -webkit-box-shadow]
ERROR Property: Invalid value for "CSS Level 2.1" property: 0.5rem 0 1rem 0 [1:19: margin]
WARNING Property: Unknown Property name. [1:64: -webkit-transition]
WARNING Property: Unknown Property name. [1:108: transition]
WARNING Property: Unknown Property name. [1:12: -webkit-box-sizing]
ERROR Property: Invalid value for "CSS Level 2.1" property: 0 0.75rem [1:64: padding]
.
----------------------------------------------------------------------
Ran 1 test in 7.051s
OK
Destroying test database for alias 'default'...
```
If I check my inbox, the email looks like this:
[](https://i.stack.imgur.com/9lUOR.png)
Is there any way to fix this - that is, to use Materialize in emails? (In my case, it only has to work for GMail).
|
2018/04/05
|
[
"https://Stackoverflow.com/questions/49677269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/995862/"
] |
E-mails do not support all CSS functions, I recommend you create your own CSS with alternative functions to achieve the same result.
Here's a handy website on which you can check what CSS you can and can't use for certain e-mail clients.
<https://www.campaignmonitor.com/css/>
|
Just to provide you an option, look into MJML.
It can be used programmatically with Node.
It can create responsive emails that render consistently across all the common email clients, via what is essentially a combination of HTML and CSS with baked-in fixes for the various clients' quirks.
<https://mjml.io/>
"MJML is a markup language designed to reduce the pain of coding a responsive email. Its semantic syntax makes it easy and straightforward and its rich standard components library speeds up your development time and lightens your email codebase. MJML’s open-source engine generates high quality responsive HTML compliant with best practices."
Example:
[](https://i.stack.imgur.com/FFZql.png)
| 11,243
|
19,601,797
|
I will answer any questions I can.
Basically I have a list of 70 words that I am looking for in over 500 files, and I need to replace them with new words and numbers.
i.e. find "hello" and replace it with "hello 233.4", but with 70 words/numbers and 500+ files.
I found an informative post here, but I have been reading about sys.argv, re's, searches, replaces, etc., and I cannot understand this bit of code. I have been "calling" (I think) it from a "cmd" window on Windows 7 with scriptname.py "-i" and "-o"...
If someone could put the example input search-list path "c:/input/file/path/searchlist.txt" and the example path for the file to be searched "c:/search/this/file/searchme.txt" in their correct positions, please! (I will try to get it to repeat through every file in a folder on my own, and highlight or bold the replacements on my own.)
I have tried many combinations... I could go over every modification I've made, and could type for days/pages/days/pages... each day/page getting dumber and dumber every time!
Thanks... OR IF YOU KNOW OF A DIFFERENT WAY, PLEASE ADVISE.
here is the link to the original post:
[Use Python to search one .txt file for a list of words or phrases (and show the context)](https://stackoverflow.com/questions/3007889/use-python-to-search-one-txt-file-for-a-list-of-words-or-phrases-and-show-the)
here is the code from the original post:
```
import re
import sys
def main():
if len(sys.argv) != 3:
print("Usage: %s fileofstufftofind filetofinditin" % sys.argv[0])
sys.exit(1)
with open(sys.argv[1]) as f:
patterns = [r'\b%s\b' % re.escape(s.strip()) for s in f]
there = re.compile('|'.join(patterns))
with open(sys.argv[2]) as f:
for i, s in enumerate(f):
if there.search(s):
print("Line %s: %r" % (i, s))
main()
```
|
2013/10/26
|
[
"https://Stackoverflow.com/questions/19601797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2779846/"
] |
The code that you posted above is probably too complex for what you need for your assignment. Perhaps something simpler like the following is easier to understand:
```
# example variables
word_mapping = [['horse', 'donkey'], ['left', 'right']]
filename = 'C:/search/this/file/searchme.txt'
# load the text from the file with 'r' for "reading"
file = open(filename, 'r')
text = file.read()
file.close()
# replace words in the text
for find_word, replacement in word_mapping:
text = text.replace(find_word, replacement)
# save the modified text to the file, 'w' for "writing"
file = open(filename, 'w')
file.write(text)
file.close()
```
---
For loading your list of words to replace, you could simply do something like:
```
words_path = 'C:/input/file/path/searchlist.txt'
with open(words_path) as f:
word_mapping = [line.split() for line in f]
```
`str.split()` splits a string on whitespace (spaces, tabs) by default, but you can split on other characters or even "words". If you have for example a comma separated file you use `line.split(',')` and it splits on comma's.
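One caveat worth hedging: if the replacement itself contains spaces (as in `hello` → `hello 233.4`), a plain `split()` yields three pieces instead of two. Using `maxsplit=1` keeps a multi-word replacement intact:

```python
# assuming each line looks like "<word-to-find> <replacement ...>"
line = "hello hello 233.4"
find_word, replacement = line.split(maxsplit=1)
print(find_word)    # hello
print(replacement)  # hello 233.4
```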
---
As an explanation to that code you posted above.. There are a couple of separate things happening, so lets break it down in a couple of pieces.
```
if len(sys.argv) != 3:
print("Usage: %s fileofstufftofind filetofinditin" % sys.argv[0])
sys.exit(1)
```
This particular script receives the paths to the wordslist and the target file as command line arguments, so you can run this script as `python script_name.py wordslist_file target_file`. In other words you don't hardcode the file paths in the script, but let the user provide them at run time.
This first part of the code checks how many command line parameters have been passed to the script by checking the length of `sys.argv`, which is a list containing the command line parameters as strings. When the number of command line parameters is not equal to 3, an error message is printed. The first (or zeroth) argument is the filename of the script, which is why `sys.argv[0]` is printed as part of the error message.
```
with open(sys.argv[1]) as f:
patterns = [r'\b%s\b' % re.escape(s.strip()) for s in f]
there = re.compile('|'.join(patterns))
```
This opens a file with words (with filename equal to `sys.argv[1]`) and compiles regular expression objects for them. Regular expressions give you more control over what words are matched, but it has its own "mini-language" which can be quite confusing if you don't have experience with it. Note that this script only finds words and doesn't replace them, so the file with words it uses contains only one "word" per line.
```
with open(sys.argv[2]) as f:
for i, s in enumerate(f):
if there.search(s):
print("Line %s: %r" % (i, s))
```
This opens the target file (filename in the second command line parameter `sys.argv[2]` and loops over the lines in that file. If a line contains a word from the wordslist the whole line is printed.
|
Maybe you can try this...
[Find all files in a directory with extension .txt in Python](https://stackoverflow.com/questions/3964681/find-all-files-in-directory-with-extension-txt-with-python)
Put all 500 files in the same directory and process from there.
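Putting that together with the replacement logic from the other answer, a sketch that applies the same find/replace pairs to every `.txt` file in one directory (the folder path and the `.txt` filter are assumptions):

```python
import glob
import os

def replace_in_folder(folder, word_mapping):
    # apply every find/replace pair to each .txt file in the folder,
    # rewriting the file in place
    for path in glob.glob(os.path.join(folder, "*.txt")):
        with open(path) as f:
            text = f.read()
        for find_word, replacement in word_mapping:
            text = text.replace(find_word, replacement)
        with open(path, "w") as f:
            f.write(text)
```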
| 11,244
|
48,600,481
|
I'm working with the following string:
```
'"name": "Gnosis", \n "symbol": "GNO", \n "rank": "99", \n "price_usd": "175.029", \n "price_btc": "0.0186887", \n "24h_volume_usd": "753877.0"'
```
and I have to use `re.sub()` in python to replace only the double quotes (`"`) that are enclosing the numbers, in order to parse it later in JSON. I've tried with some regular expressions, but without success. Here is my best attempt:
```
exp = re.compile(r': (")\D+\.*\D*(")', re.MULTILINE)
response = re.sub(exp, "", string)
```
I've searched a lot for a similar problem, but have not found a matching question.
### EDIT:
Finally I've used (thanks to [S. Kablar](https://stackoverflow.com/users/6101071/s-kablar)):
```
formatted = re.sub(r'"(-*\d+(?:\.\d+)?)"', r"\1", string)
parsed = json.loads(formatted)
```
The problem is that [this endpoint](https://api.coinmarketcap.com/v1/ticker/) returns a badly formatted string as JSON.
Other users [answered](https://stackoverflow.com/a/48600555/9167585) "Parse the string first with json, and later convert numbers to float" with a for loop, which I think is a very inefficient way to do it; also, you are forced to choose between int and float for your response. To settle the doubt, I've written [this gist](https://gist.github.com/mondeja/a8215d34ee1c0c850d7b3f64ab6b2260) where I show comparisons between the different approaches with benchmarking, and for now I'm going to trust the regex in this case.
Thanks everyone for your help
|
2018/02/03
|
[
"https://Stackoverflow.com/questions/48600481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9167585/"
] |
**Regex**: [`"(-?\d+(?:[\.,]\d+)?)"`](https://regex101.com/r/ySz6v5/3) **Substitution**: `\1`
Details:
* `()` Capturing group
* `(?:)` Non capturing group
* `\d` Matches a digit (equal to `[0-9]`)
* `+` Matches between one and unlimited times
* `?` Matches between zero and one times
* `\1` Group 1.
**Python code**:
```
def remove_quotes(text):
return re.sub(r"\"(-?\d+(?:[\.,]\d+)?)\"", r'\1', text)
remove_quotes('"percent_change_7d": "-23.43"')  # returns '"percent_change_7d": -23.43'
```
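To get from the question's string all the way to Python objects, a sketch (wrapping the fragment in braces is my addition, since the original string is not by itself a complete JSON object):

```python
import json
import re

fragment = '"rank": "99", "price_usd": "175.029", "percent_change_7d": "-23.43"'
# strip the quotes around purely numeric values, then parse as JSON
unquoted = re.sub(r'"(-?\d+(?:\.\d+)?)"', r'\1', fragment)
data = json.loads('{%s}' % unquoted)
print(data)  # {'rank': 99, 'price_usd': 175.029, 'percent_change_7d': -23.43}
```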
|
Parse the string first with json, and later convert the numeric strings to numbers:
```
string = '{"name": "Gnosis", \n "symbol": "GNO", \n "rank": "99", \n "price_usd": "175.029", \n "price_btc": "0.0186887", \n "24h_volume_usd": "753877.0"}'
data = json.loads(string)
response = {}
for key, value in data.items():
try:
value = int(value) if value.strip().isdigit() else float(value)
except ValueError:
pass
response[key] = value
```
| 11,245
|
36,839,650
|
I want to execute one python script in Jupyter, but I don't want to use the web browser (IPython Interactive terminal), I want to run a single command in the Linux terminal to load & run the python script, so that I can get the output from Jupyter.
I tried to run `jupyter notebook %run <my_script.py>`, but it seems jupyter doesn't recognize the `%run` argument.
Is it possible to do that?
|
2016/04/25
|
[
"https://Stackoverflow.com/questions/36839650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6204018/"
] |
You can use the `jupyter console -i` command to run an interactive jupyter session in your terminal. From there you can run `import <my_script.py>`. Do note that this is not the intended use case of either jupyter or the notebook environment. You should run scripts using your normal python interpreter instead.
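For behavior closer to `%run` from a plain Python session, the standard library's `runpy` works too; a sketch (the script name and contents here are stand-ins, not from the question):

```python
import os
import runpy
import tempfile

# create a stand-in for my_script.py
script = os.path.join(tempfile.gettempdir(), "my_script.py")
with open(script, "w") as f:
    f.write("result = 21 * 2\n")

# execute the file as if it were run as __main__, like %run does,
# and get back the resulting module globals
namespace = runpy.run_path(script, run_name="__main__")
print(namespace["result"])  # 42
```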
|
You can run this command to run an interactive jupyter session in your terminal.
>
> jupyter notebook
>
>
>
| 11,248
|
20,212,894
|
I have a Flask server running through port 5000, and it's fine. I can access it at <http://example.com:5000>
But is it possible to simply access it at <http://example.com>? I'm assuming that means I have to change the port from 5000 to 80. But when I try that on Flask, I get this error message when I run it.
```
Traceback (most recent call last):
File "xxxxxx.py", line 31, in <module>
app.run(host="0.0.0.0", port=int("80"), debug=True)
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 772, in run
run_simple(host, port, self, **options)
File "/usr/local/lib/python2.6/dist-packages/werkzeug/serving.py", line 706, in run_simple
test_socket.bind((hostname, port))
File "<string>", line 1, in bind
socket.error: [Errno 98] Address already in use
```
Running `lsof -i :80` returns
```
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
apache2 467 root 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 4413 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14346 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14570 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14571 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
apache2 14573 www-data 3u IPv4 92108840 0t0 TCP *:www (LISTEN)
```
Do I need to kill these processes first? Is that safe? Or is there another way to keep Flask running on port 5000 but have the main website domain redirect somehow?
|
2013/11/26
|
[
"https://Stackoverflow.com/questions/20212894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3024025/"
] |
1- Stop other applications that are using port 80.
2- run application with port 80 :
```
if __name__ == '__main__':
app.run(host='0.0.0.0', port=80)
```
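Before freeing the port, a quick stdlib check can confirm what the `lsof` output in the question already shows, namely that another process (here apache2) holds port 80. A sketch:

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    # binding fails with EADDRINUSE when another process
    # (e.g. apache2) already owns the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# port_in_use(80) -> True while apache2 is listening on port 80
```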
|
You don't need to change the port number for your application; just configure your www server (nginx or Apache) to proxy requests to the Flask port. Pay attention to `uWSGI`.
| 11,251
|