| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
64,637,084
|
Could anyone please help me with why I am getting the error below? Everything worked before when I used the same logic; the problem started after I converted my date columns to the appropriate data type.
Below is the line of code I am trying to run:
```
data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
```
Error being received:
```
AttributeError Traceback (most recent call last)
<ipython-input-93-f0a22bfffeee> in <module>
----> 1 data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
3846 else:
3847 values = self.astype(object).values
-> 3848 mapped = lib.map_infer(values, f, convert=convert_dtype)
3849
3850 if len(mapped) and isinstance(mapped[0], Series):
pandas\_libs\lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-93-f0a22bfffeee> in <lambda>(x)
----> 1 data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
AttributeError: 'Timestamp' object has no attribute 'find'
```
`ValueError: time data '30/09/2020' does not match format '%d-%m-%Y'`
Many thanks.
|
2020/11/01
|
[
"https://Stackoverflow.com/questions/64637084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14559396/"
] |
I assumed that you don't want to repeat the Add button, so I removed the button from the template; that way, only the input fields are added.
Here you can find a working example: <https://stackblitz.com/edit/js-gp6xjx?file=index.html>
```js
// collected records, one object per row of inputs
const data = [];

// append a fresh row of name/description inputs to the form
const appendContent = () => {
  let form_content = document.querySelector(".form-content");
  const template = document.createElement('template');
  const html = `<div class="row">
    <div class="form-group">
      <input type="text" placeholder="Enter Name" class="form-control" name="name">
    </div>
    <div class="form-group">
      <input type="text" placeholder="Enter Description" class="form-control" name="description">
    </div>
  </div>`;
  template.innerHTML = html.trim();
  form_content.appendChild(template.content.firstChild);
}

const fake_button = document.querySelector('.btn');
fake_button.addEventListener('click', e => {
  record(e);       // save the values typed so far
  appendContent(); // then add a new empty row
});

// read every input in the form and store the values as one object
const record = (e) => {
  const form = e.target.parentElement.parentElement;
  const inputs = form.querySelectorAll('input');
  const obj = {};
  Array.from(inputs).forEach(input => {
    obj[input.name] = input.value;
  })
  data.push(obj);
  console.log(data);
}

// start with one empty row
appendContent();
```
```css
.btn {
display: block;
width: 70px;
cursor: pointer;
border: 1px solid #333;
padding: 5px 10px;
text-align: center;
}
```
```html
<form action="" method="post" id="create-db-form">
<div class="form-content"></div>
<div class="form-group btn-with-margin">
<span class="btn btn-primary">Add</span>
</div>
<button type="submit">Save</button>
</form>
```
|
Alright, so I was fiddling with it a little bit; I'm not very experienced with pure JavaScript. I came up with a few ideas:
1 - Separate submit and "add field" buttons.
When you press "add field", it just adds new fields inside your form, which will later be submitted as part of a complete form.
2 - Indexed forms.
The idea here is to create a form of forms, with each pair of inputs being a form itself. Later, when you want to submit, you just query all the forms, and each result will be under a `name = index` prop.
I found this snippet to retrieve form values:
```
function getFormValues() {
  var params = [];
  // walk every element of the form named "myform" and collect name/value pairs
  for (var i = 0; i < document.myform.elements.length; i++) {
    var fieldName = document.myform.elements[i].name;
    var fieldValue = document.myform.elements[i].value;
    params.push({[fieldName]: fieldValue});
  }
  console.log(params);
}
```
I was testing it in this fiddle: <https://jsfiddle.net/968cL1uz/28/>
I only posted an answer to be able to write more about your problem; I'm sorry if this still doesn't solve it. Let's keep working on it :)
|
44,872,673
|
Let's say I have this code in `test.py`:
```
import sys
a = 'alfa'
b = 'beta'
c = 'gamma'
d = 'delta'
print(sys.argv[1])
```
Running `python test.py a` would then return `a`. How can I make it return `alfa` instead?
|
2017/07/02
|
[
"https://Stackoverflow.com/questions/44872673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4931616/"
] |
Using a dictionary that maps to those strings:
```
mapping = {'a': 'alfa', 'd': 'delta', 'b': 'beta', 'c': 'gamma'}
```
Then when you get your `sys.argv[1]` just access the value from your dictionary as:
```
print(mapping.get(sys.argv[1]))
```
Demo:
File: `so_question.py`
```
import sys
mapping = {'a': 'alfa', 'd': 'delta', 'b': 'beta', 'c': 'gamma'}
user_var = sys.argv[1]
user_var_value = mapping.get(user_var)
print("user_var_value is: {}".format(user_var_value))
```
In a shell:
```
▶ python so_question.py a
user_var_value is: alfa
```
|
You can also use `globals()` or `locals()`:
```
import sys
a = 'alfa'
b = 'beta'
c = 'gamma'
d = 'delta'
print(globals().get(sys.argv[1]))
# or
print(locals().get(sys.argv[1]))
```
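`dict.get` also accepts a default value, which avoids printing `None` when the argument doesn't name a variable:
```
print(globals().get(sys.argv[1], 'no such variable'))
```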
|
50,809,096
|
A few days ago I started getting the following error when using pip (1,2 or 3) to install.
```
Traceback (most recent call last):
  File "/home/c4pta1n/.local/bin/pip", line 7, in <module>
    from pip._internal import main
  File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/__init__.py", line 42, in <module>
    from pip._internal import cmdoptions
  File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/cmdoptions.py", line 16, in <module>
    from pip._internal.index import (
  File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/index.py", line 15, in <module>
    from pip._vendor import html5lib, requests, six
  File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_vendor/requests/__init__.py", line 86, in <module>
    from pip._vendor.urllib3.contrib import pyopenssl
  File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py", line 46, in <module>
    import OpenSSL.SSL
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/crypto.py", line 13, in <module>
    from cryptography.hazmat.primitives.asymmetric import dsa, rsa
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py", line 12, in <module>
    @six.add_metaclass(abc.ABCMeta)
AttributeError: 'module' object has no attribute 'add_metaclass'
```
```
pip3 install pip --ignore-installed six
Traceback (most recent call last):
File "/usr/local/bin/pip3", line 11, in <module>
load_entry_point('pip==10.0.1', 'console_scripts', 'pip3')()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 476, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
return ep.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2318, in load
return self.resolve()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2324, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import (
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/index.py", line 15, in <module>
from pip._vendor import html5lib, requests, six
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 86, in <module>
from pip._vendor.urllib3.contrib import pyopenssl
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/contrib/pyopenssl.py", line 46, in <module>
import OpenSSL.SSL
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/crypto.py", line 13, in <module>
from cryptography.hazmat.primitives.asymmetric import dsa, rsa
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py", line 12, in <module>
@six.add_metaclass(abc.ABCMeta)
AttributeError: 'module' object has no attribute 'add_metaclass'
```
I have been researching and trying to troubleshoot this issue and I have not been able to narrow down the issue.
Just prior to noticing this issue, I had updated my Debian system using the standard repository with no issues of note. I had also updated a few pip modules using `pip3 install --upgrade`; I believe the modules I updated were scapy and requests.
I am unable to use pip for any command that I have tried, even `pip list`, with any version of pip through 3.6.
I have uninstalled and reinstalled pip and virtualenv, and tried to manually remove the `six.add_metaclass-1.0*` folder from my distutils folder.
Nothing I have tried has made any change that I can see, and I have not been able to confirm that any issue I have seen written about is similar or related to this specific one.
I am hoping for help narrowing this problem down and correcting it, or to be pointed in the direction of any information that could help me.
|
2018/06/12
|
[
"https://Stackoverflow.com/questions/50809096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8213561/"
] |
[six 1.3.0](https://github.com/benjaminp/six/blob/1.3.0/six.py) doesn't have `add_metaclass`; it was released back in 2013. It's really time to upgrade it.
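As a rough sketch of the check and the fix (if pip itself is broken, you may need your distro's package manager or a manual download instead):
```
python -c "import six; print(six.__version__)"
pip install --upgrade six
```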
|
I found the answer to my issue. Apparently, some Linux distributions have specific versions of pip and six that have to be installed through the distro package manager directly in order to work. There are some nuanced changes in how Debian makes use of pip, especially regarding updates, and these changes are coded into their package manager, not into pip. When I recompiled Python, I had uninstalled the entire Python framework and went to the source URLs to recompile Python and to download pip and the other dependencies. I figured that since I was installing directly from source it would be fine... If you are using CentOS, Debian, Red Hat, and maybe others, you must install pip from the package manager managed by your distro in order to avoid running into this error somewhere down the line.
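On a Debian-based system, that boils down to something like the following; the Python 2-era package names here are an assumption and may differ on your release:
```
sudo apt-get install --reinstall python-pip python-six
```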
|
28,849,386
|
How do I remove the `T` from the time format `%Y-%m-%dT%H:%M:%S` in Python?
I am using it in my HTML as
```
<b>Start:{{ start.date_start }}<br/>
```
|
2015/03/04
|
[
"https://Stackoverflow.com/questions/28849386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4631100/"
] |
You can register a custom template filter that replaces the `T` with a space:
```
@register.filter
def isotime(datestring):
    datestring = str(datestring)
    return datestring.replace("T", " ")
```
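Assuming the filter is registered in a template tag library (the `my_filters` name below is just a placeholder), it would be applied in the template like this:
```
{% load my_filters %}
<b>Start: {{ start.date_start|isotime }}</b><br/>
```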
|
Manually format the datetime, don't rely on the default `str()` formatting. You can use [`datetime.datetime.isoformat()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.isoformat) for example, passing in a space as the separator:
```
<b>Start:{{ start.date_start.isoformat(' ') }}<br/>
```
or you can use [`datetime.datetime.strftime()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strftime) to control formatting more finely:
```
<b>Start:{{ start.date_start.strftime('%Y-%m-%d %H:%M:%S') }}<br/>
```
|
22,882,427
|
I want to take a string input with `raw_input` and use this value in another line for taking input in Python. My code is below:
```
p1 = raw_input('Enter the name of Player 1 :')
p2 = raw_input('Enter the name of Player 2 :')
p1 = input('Welcome %s > Enter your no:') % p1
```
Here, in place of `%s`, I want to put the value of `p1`.
Thanks in advance.
|
2014/04/05
|
[
"https://Stackoverflow.com/questions/22882427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3494397/"
] |
You can do (the vast majority will agree that this is the best way):
```
p1 = input('Welcome {0} > Enter your no:'.format(p1))
```
|
Try
```
input("Welcome " + p1 + "> Enter your no:")
```
It concatenates the value of `p1` into the prompt string.
Also see [here](https://docs.python.org/2/library/string.html)
```
input("Welcome {0}, {1} > Enter your no".format(p1, p2)) #you can have multiple values
```
**EDIT**
Note that using `+` is [discouraged](http://legacy.python.org/dev/peps/pep-0008/#programming-recommendations).
|
22,882,427
|
I want to take a string input with `raw_input` and use this value in another line for taking input in Python. My code is below:
```
p1 = raw_input('Enter the name of Player 1 :')
p2 = raw_input('Enter the name of Player 2 :')
p1 = input('Welcome %s > Enter your no:') % p1
```
Here, in place of `%s`, I want to put the value of `p1`.
Thanks in advance.
|
2014/04/05
|
[
"https://Stackoverflow.com/questions/22882427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3494397/"
] |
You can do (the vast majority will agree that this is the best way):
```
p1 = input('Welcome {0} > Enter your no:'.format(p1))
```
|
This doesn't work because Python interprets
```
p1 = input('Welcome %s > Enter your no:') % p1
```
as:
1. Get input, using the prompt `'Welcome %s > Enter your no:'`;
2. Try to insert `p1` into the *text returned by* `input`, which will cause a `TypeError` unless the user's number includes `'%s'`; and
3. Assign the result of that formatting back to `p1`.
The minimal fix here is:
```
p1 = input('Welcome %s > Enter your no:' % p1)
```
which will carry out the `%` formatting *before* using the string as a prompt, but I agree with the other answers that `str.format` is the preferred method for this.
|
22,882,427
|
I want to take a string input with `raw_input` and use this value in another line for taking input in Python. My code is below:
```
p1 = raw_input('Enter the name of Player 1 :')
p2 = raw_input('Enter the name of Player 2 :')
p1 = input('Welcome %s > Enter your no:') % p1
```
Here, in place of `%s`, I want to put the value of `p1`.
Thanks in advance.
|
2014/04/05
|
[
"https://Stackoverflow.com/questions/22882427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3494397/"
] |
Try
```
input("Welcome " + p1 + "> Enter your no:")
```
It concatenates the value of `p1` into the prompt string.
Also see [here](https://docs.python.org/2/library/string.html)
```
input("Welcome {0}, {1} > Enter your no".format(p1, p2)) #you can have multiple values
```
**EDIT**
Note that using `+` is [discouraged](http://legacy.python.org/dev/peps/pep-0008/#programming-recommendations).
|
This doesn't work because Python interprets
```
p1 = input('Welcome %s > Enter your no:') % p1
```
as:
1. Get input, using the prompt `'Welcome %s > Enter your no:'`;
2. Try to insert `p1` into the *text returned by* `input`, which will cause a `TypeError` unless the user's number includes `'%s'`; and
3. Assign the result of that formatting back to `p1`.
The minimal fix here is:
```
p1 = input('Welcome %s > Enter your no:' % p1)
```
which will carry out the `%` formatting *before* using the string as a prompt, but I agree with the other answers that `str.format` is the preferred method for this.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Update your `pip` first:
```
pip install --upgrade pip
```
for Python 3:
```
pip3 install --upgrade pip
```
|
I tried everything said here without any luck, but found a workaround.
After running this command (and failing): `bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg`,
go to the temporary directory the tool created (given in the output of that command), then execute `python setup.py bdist_wheel`. The `.whl` file will be in the `dist` folder.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Install the [`wheel` package](https://pypi.python.org/pypi/wheel) first:
```
pip install wheel
```
The documentation isn't overly clear on this, but *"the wheel project provides a bdist\_wheel command for setuptools"* actually means *"the wheel **package**..."*.
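A minimal sanity check, assuming a working network connection:
```
pip install wheel
python -c "import wheel; print(wheel.__version__)"
python setup.py bdist_wheel
```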
|
I also ran into the error message `invalid command 'bdist_wheel'`
It turns out the package setup.py used distutils rather than setuptools.
Changing it as follows enabled me to build the wheel.
```
#from distutils.core import setup
from setuptools import setup
```
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
I also ran into this all of a sudden, after it had previously worked, and it was because I was inside a virtualenv, and `wheel` wasn’t installed in the virtualenv.
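So, with the virtualenv active, roughly (the `venv` path is just an example):
```
source venv/bin/activate
pip install wheel
python setup.py bdist_wheel
```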
|
Throwing in another answer: Try checking your `PYTHONPATH`.
First, try to install `wheel` again:
```
pip install wheel
```
This should tell you where wheel is installed, e.g.:
```
Requirement already satisfied: wheel in /usr/local/lib/python3.5/dist-packages
```
Then add the location of wheel to your `PYTHONPATH`:
```
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.5/dist-packages/wheel
```
Now building a wheel should work fine.
```
python setup.py bdist_wheel
```
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Throwing in another answer: Try checking your `PYTHONPATH`.
First, try to install `wheel` again:
```
pip install wheel
```
This should tell you where wheel is installed, e.g.:
```
Requirement already satisfied: wheel in /usr/local/lib/python3.5/dist-packages
```
Then add the location of wheel to your `PYTHONPATH`:
```
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.5/dist-packages/wheel
```
Now building a wheel should work fine.
```
python setup.py bdist_wheel
```
|
I tried everything said here without any luck, but found a workaround.
After running this command (and failing): `bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg`,
go to the temporary directory the tool created (given in the output of that command), then execute `python setup.py bdist_wheel`. The `.whl` file will be in the `dist` folder.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Install the [`wheel` package](https://pypi.python.org/pypi/wheel) first:
```
pip install wheel
```
The documentation isn't overly clear on this, but *"the wheel project provides a bdist\_wheel command for setuptools"* actually means *"the wheel **package**..."*.
|
Throwing in another answer: Try checking your `PYTHONPATH`.
First, try to install `wheel` again:
```
pip install wheel
```
This should tell you where wheel is installed, e.g.:
```
Requirement already satisfied: wheel in /usr/local/lib/python3.5/dist-packages
```
Then add the location of wheel to your `PYTHONPATH`:
```
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.5/dist-packages/wheel
```
Now building a wheel should work fine.
```
python setup.py bdist_wheel
```
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Install the [`wheel` package](https://pypi.python.org/pypi/wheel) first:
```
pip install wheel
```
The documentation isn't overly clear on this, but *"the wheel project provides a bdist\_wheel command for setuptools"* actually means *"the wheel **package**..."*.
|
It could also be that you have a Python 3-only system.
In that case you have installed the necessary packages via `pip3 install`, like `pip3 install wheel`.
You'll need to build your package using python3 specifically:
```
python3 setup.py sdist
python3 setup.py bdist_wheel
```
Cheers.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
I also ran into the error message `invalid command 'bdist_wheel'`
It turns out the package setup.py used distutils rather than setuptools.
Changing it as follows enabled me to build the wheel.
```
#from distutils.core import setup
from setuptools import setup
```
|
It could also be that you have a Python 3-only system.
In that case you have installed the necessary packages via `pip3 install`, like `pip3 install wheel`.
You'll need to build your package using python3 specifically:
```
python3 setup.py sdist
python3 setup.py bdist_wheel
```
Cheers.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Update your `pip` first:
```
pip install --upgrade pip
```
for Python 3:
```
pip3 install --upgrade pip
```
|
Throwing in another answer: Try checking your `PYTHONPATH`.
First, try to install `wheel` again:
```
pip install wheel
```
This should tell you where wheel is installed, e.g.:
```
Requirement already satisfied: wheel in /usr/local/lib/python3.5/dist-packages
```
Then add the location of wheel to your `PYTHONPATH`:
```
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.5/dist-packages/wheel
```
Now building a wheel should work fine.
```
python setup.py bdist_wheel
```
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
I also ran into this all of a sudden, after it had previously worked, and it was because I was inside a virtualenv, and `wheel` wasn’t installed in the virtualenv.
|
It could also be that you have a Python 3-only system.
In that case you have installed the necessary packages via `pip3 install`, like `pip3 install wheel`.
You'll need to build your package using python3 specifically:
```
python3 setup.py sdist
python3 setup.py bdist_wheel
```
Cheers.
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Install the [`wheel` package](https://pypi.python.org/pypi/wheel) first:
```
pip install wheel
```
The documentation isn't overly clear on this, but *"the wheel project provides a bdist\_wheel command for setuptools"* actually means *"the wheel **package**..."*.
|
I also ran into this all of a sudden, after it had previously worked, and it was because I was inside a virtualenv, and `wheel` wasn’t installed in the virtualenv.
|
39,137,179
|
I am working on a Rails application that needs to run a single Python script whenever a button is clicked on our app's home page. I am trying to figure out a way to have Rails run this script, and both of my attempts so far have failed.
My first try was to use the `exec(..)` command to just run `python script.py`, but when I do this it seems to run the file and then terminate the Rails server, so I need to manually reboot it each time.
My second try was to install the gem RubyPython and attempt it from there, but I am at a loss as to what to do once I have it running. I cannot find any examples of people using it to run or load a Python script.
Any help with this would be appreciated.
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39137179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5805587/"
] |
Here are ways to execute a shell command from Ruby:
```
`python pythonscript.py`
```
or
```
system( "python pythonscript.py" )
```
or
```
exec(" python pythonscript.py")
```
`exec` replaces the current process by running the given external command.
It returns nothing; the current process is replaced and never continues.
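For example, backticks run the script in a subprocess and capture its standard output without replacing the Rails process (the script name is assumed):
```
output = `python pythonscript.py`  # runs in a subshell; Rails keeps running
puts output                        # whatever the script printed
```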
|
`exec` replaces the current process with the new one. You want to run it as a subprocess. See [When to use each method of launching a subprocess in Ruby](https://stackoverflow.com/questions/7212573/when-to-use-each-method-of-launching-a-subprocess-in-ruby) for an overview; I suggest using either backticks for a simple process, or `popen`/`popen3` when more control is required.
Alternately, you can use [`rubypython`](https://github.com/halostatue/rubypython), the Ruby-Python bridge gem, to execute Python from Ruby itself (especially if you'll be executing the same Python function repeatedly). To do so, you would need to make your script into a proper Python module, start the bridge when the Rails app starts, then use `RubyPython.import` to get a reference to your module into your Ruby code. The examples on the gem's GitHub page are quite simple, and should be sufficient for your purpose.
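For illustration, a minimal sketch of the bridge approach, assuming your code lives in a `script.py` module exposing a `main()` function (both names are hypothetical):
```
require 'rubypython'

RubyPython.start                      # boot the embedded Python interpreter
script = RubyPython.import('script')  # imports script.py from the Python path
result = script.main                  # call into the Python module
puts result.rubify                    # convert the Python object to a Ruby one
RubyPython.stop                       # shut the interpreter down
```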
|
65,148,247
|
For an unknown reason, I ran into a Docker error when I tried to run `docker-compose up` on my project this morning.
My web container isn't able to connect to the db host, and `nc` keeps returning:
> web\_1 | nc: bad address 'db'
There is the relevant part of my docker-compose definition :
```yaml
version: '3.2'
services:
web:
build: ./app
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./app/:/usr/src/app/
ports:
- 8000:8000
env_file:
- ./.env.dev
depends_on:
- db
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=
- POSTGRES_PASSWORD=
- POSTGRES_DB=
mailhog:
# mailhog declaration
volumes:
postgres_data:
```
I suspected the network was broken, and it actually is. This is what I get when I inspect the Docker network for this project (`docker network inspect my_docker_network`):
```json
[
{
"Name": "my_docker_network",
"Id": "f09c148d9f3253d999e276c8b1061314e5d3e1f305f6124666e2e32a8e0d9efd",
"Created": "2020-11-18T13:30:29.710456682-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {}, // <=== This is empty !
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "my-project"
}
}
]
```
### Versions
Docker: 18.09.1, build 4c52b90
Docker-compose: 1.21.0, build unknown
|
2020/12/04
|
[
"https://Stackoverflow.com/questions/65148247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950328/"
] |
I was able to fix that by running `docker-compose down && docker-compose up`, but that could be pretty bad if your `down` removed all your volumes, and with them your data...
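For what it's worth, a plain `docker-compose down` does not remove named volumes; only the `-v`/`--volumes` flag does. So the sequence below should keep `postgres_data` intact:
```
docker-compose down   # removes containers and the default network, keeps named volumes
docker-compose up -d  # recreates the network and reattaches the containers
```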
The network inspection now looks fine:
```json
[
{
"Name": "my_docker_network",
"Id": "236c45042b03c3a2922d9a9fabf644048901c66b3c1fd15507aca2c464c1d7ef",
"Created": "2020-12-04T12:04:40.765889533-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0939787203f2e222f2db380e8d5b36928e95bc7242c58df56b3e6e419efdd280": {
"Name": "my_docker_db_1",
"EndpointID": "af206a7e957682d3d9aee2ec0ffae2c51638cbe8821d3b20eb786165a0159c9d",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"ae90bd27539e89d0b26e0768aec765431ee623f45856e13797f3ba0262cca3f2": {
"Name": "my_docker_web_1",
"EndpointID": "09b5cefed6c5b49d31497419fd5784dcd887a23875e6c998209615c7ec8863f4",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"f2d3e46ab544b146bdc0aafba9fddb4e6c9d9ffd02c2015627516c7d6ff17567": {
"Name": "my_docker_mailhog_1",
"EndpointID": "242a693e6752f05985c377cd7c30f6781f0576bcd5ffede98f77f82efff8c78f",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "my_docker_project"
}
}
]
```
But: **does someone have any idea what happened, and how to prevent this problem from reappearing?**
|
I had the same problem, but with the rabbitmq service in my compose file. At first I solved it by deleting all existing containers and volumes on my machine (but it happened again now and then); later I updated the rabbitmq image version to latest in `docker-compose.yml`:
```
image: rabbitmq:latest
```
and the problem did not reappear afterwards...
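If you do go the cleanup route instead, this is roughly what deleting the containers and volumes looks like (destructive, use with care):
```
docker-compose down
docker container prune   # removes all stopped containers
docker volume prune      # removes all dangling volumes -- data loss!
docker-compose up -d
```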
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow unblocking specific URLs temporarily. So I need to know which URLs to unblock to install Python, and which URLs to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively, I would also be happy if I could just find a URL from which to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
You can do all that in one loop; that would be much faster. To know the correct position to put each number in, add an extra counter for each array.
### Your kind of approach
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int[] odd = new int[10];
int[] even = new int[10];
int oddPos = 0;
int evenPos = 0;
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
However, this would not be the best solution, since you (in most cases) cannot determine the lengths of the `odd` and `even` arrays beforehand. In that case you should use ArrayLists, count the values of each kind first, or something else.
### More dynamic approach
As stated before, you need to determine the sizes of the arrays first:
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int oddCount = 0, evenCount = 0;
int oddPos = 0, evenPos = 0;
//get the count of each type
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0)
oddCount++;
else
evenCount++;
}
//define arrays in correct sizes
int[] odd = new int[oddCount];
int[] even = new int[evenCount];
//put values in arrays
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
|
The approach for detecting `odd` and `even` numbers is correct, but I think the problem with the code you wrote is that the lengths of the `odd` and `even` arrays can't be determined up front. For this, I suggest using `ArrayList<Integer>`. Let's say you get the array as a function input and want arrays in the output (I'll extract both arrays in one pass for better performance, but separating the extraction into one function per list is also fine, depending on what you're going to do with them).
### Solution
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class Test {
public static Integer[][] separateOddnEven(int[] input) {
Integer[][] output = new Integer[2][];
List<Integer> odds = new ArrayList<>();
List<Integer> evens = new ArrayList<>();
for (int i = 0; i < input.length; ++i) {
int temp = input[i];
if (temp % 2 == 0)
evens.add(temp);
else
odds.add(temp);
}
// alternative is to use these Arraylists directly
output[0] = new Integer[odds.size()];
output[1] = new Integer[evens.size()];
output[0] = odds.toArray(output[0]);
output[1] = evens.toArray(output[1]);
return output; // index 0 has odd numbers and index 1 has even numbers.
}
public static void main(String[] args) {
int[] input = {0, 21, 24, 22, 14, 15, 16, 18};
Integer[][] output = separateOddnEven(input);
System.out.println("odd numbers :");
System.out.println(Arrays.toString(output[0]));
System.out.println("even numbers :");
System.out.println(Arrays.toString(output[1]));
}
}
```
### output :
```
odd numbers :
[21, 15]
even numbers :
[0, 24, 22, 14, 16, 18]
```
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow unblocking specific URLs temporarily. So I need to know which URLs to unblock to install Python, and which URLs to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively, I would also be happy if I could just find a URL from which to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
You can do all that in one loop; that would be much faster. To know the correct position to put each number in, add an extra counter for each array.
### Your kind of approach
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int[] odd = new int[10];
int[] even = new int[10];
int oddPos = 0;
int evenPos = 0;
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
However, this would not be the best solution, since you (in most cases) cannot determine the lengths of the `odd` and `even` arrays beforehand. In that case you should use ArrayLists, count the values of each kind first, or something else.
### More dynamic approach
As stated before, you need to determine the sizes of the arrays first:
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int oddCount = 0, evenCount = 0;
int oddPos = 0, evenPos = 0;
//get the count of each type
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0)
oddCount++;
else
evenCount++;
}
//define arrays in correct sizes
int[] odd = new int[oddCount];
int[] even = new int[evenCount];
//put values in arrays
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
|
*With lambdas (3 lines):*
```java
int[] nums = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
```
separate even and odd `nums` with `partitioningBy`:
```java
Map<Boolean, List<Integer>> map = IntStream.of(nums)
.boxed().collect(partitioningBy(n -> (n & 1) == 0));
```
…and transform the resulting `List<Integer>` for even and odd to `int[]`:
```java
int[] even = map.get(true).stream().mapToInt(i -> i).toArray();
int[] odd = map.get(false).stream().mapToInt(i -> i).toArray();
System.out.println("even numbers: " + Arrays.toString(even));
System.out.println("odd numbers: " + Arrays.toString(odd));
```
```
even numbers: [2, 4, 6, 8, 10, 12, 14, 16]
odd numbers: [1, 3, 5, 7, 9, 11, 13, 15]
```
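For completeness, the snippet above assumes roughly these imports:
```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.IntStream;
import static java.util.stream.Collectors.partitioningBy;
```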
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow unblocking specific URLs temporarily. So I need to know which URLs to unblock to install Python, and which URLs to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively, I would also be happy if I could just find a URL from which to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
You can do all that in one loop; that would be much faster. To know the correct position to put each number in, add an extra counter for each array.
### Your kind of approach
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int[] odd = new int[10];
int[] even = new int[10];
int oddPos = 0;
int evenPos = 0;
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
However, this would not be the best solution, since you (in most cases) cannot determine the lengths of the `odd` and `even` arrays beforehand. In that case you should use ArrayLists, count the values of each kind first, or something else.
### More dynamic approach
As stated before, you need to determine the sizes of the arrays first:
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int oddCount = 0, evenCount = 0;
int oddPos = 0, evenPos = 0;
//get the count of each type
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0)
oddCount++;
else
evenCount++;
}
//define arrays in correct sizes
int[] odd = new int[oddCount];
int[] even = new int[evenCount];
//put values in arrays
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
|
You can collect a 2d array with two rows: *even* and *odd* as follows:
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
```
```java
// a 2d array of two rows: even and odd
int[][] arr = new int[2][];
// process a 1d array and fill a 2d array
Arrays.stream(num).boxed()
// Map<Integer,List<Integer>>
.collect(Collectors.toMap(
// key: 0 - even, 1 - odd
n -> n % 2,
// value - a list of one
// element, i.e. number
n -> new ArrayList<>(List.of(n)),
// merge duplicates
(list1, list2) -> {
list1.addAll(list2);
return list1;
}))
// fill the rows of a 2d array: even and odd
.forEach((key, value) -> arr[key] = value.stream()
.mapToInt(Integer::intValue).toArray());
```
```java
// output
System.out.println("Even: " + Arrays.toString(arr[0]));
// Even: [2, 4, 6, 8, 10, 12, 14, 16]
System.out.println("Odd: " + Arrays.toString(arr[1]));
// Odd: [1, 3, 5, 7, 9, 11, 13, 15]
```
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow unblocking specific URLs temporarily. So I need to know which URLs to unblock to install Python, and which URLs to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively, I would also be happy if I could just find a URL from which to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
The approach for detecting `odd` and `even` numbers is correct, but I think the problem with the code you wrote is that the lengths of the `odd` and `even` arrays can't be determined up front. For this, I suggest using `ArrayList<Integer>`. Let's say you get the array as a function input and want arrays in the output (I'll extract both arrays in one pass for better performance, but separating the extraction into one function per list is also fine, depending on what you're going to do with them).
### Solution
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class Test {
public static Integer[][] separateOddnEven(int[] input) {
Integer[][] output = new Integer[2][];
List<Integer> odds = new ArrayList<>();
List<Integer> evens = new ArrayList<>();
for (int i = 0; i < input.length; ++i) {
int temp = input[i];
if (temp % 2 == 0)
evens.add(temp);
else
odds.add(temp);
}
// alternative is to use these Arraylists directly
output[0] = new Integer[odds.size()];
output[1] = new Integer[evens.size()];
output[0] = odds.toArray(output[0]);
output[1] = evens.toArray(output[1]);
return output; // index 0 has odd numbers and index 1 has even numbers.
}
public static void main(String[] args) {
int[] input = {0, 21, 24, 22, 14, 15, 16, 18};
Integer[][] output = separateOddnEven(input);
System.out.println("odd numbers :");
System.out.println(Arrays.toString(output[0]));
System.out.println("even numbers :");
System.out.println(Arrays.toString(output[1]));
}
}
```
### output :
```
odd numbers :
[21, 15]
even numbers :
[0, 24, 22, 14, 16, 18]
```
|
*With lambdas (3 lines):*
```java
int[] nums = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
```
separate even and odd `nums` with `partitioningBy`:
```java
Map<Boolean, List<Integer>> map = IntStream.of(nums)
.boxed().collect(partitioningBy(n -> (n & 1) == 0));
```
…and transform the resulting `List<Integer>` for even and odd to `int[]`:
```java
int[] even = map.get(true).stream().mapToInt(i -> i).toArray();
int[] odd = map.get(false).stream().mapToInt(i -> i).toArray();
System.out.println("even numbers: " + Arrays.toString(even));
System.out.println("odd numbers: " + Arrays.toString(odd));
```
```
even numbers: [2, 4, 6, 8, 10, 12, 14, 16]
odd numbers: [1, 3, 5, 7, 9, 11, 13, 15]
```
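For completeness, the snippet above assumes roughly these imports:
```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.IntStream;
import static java.util.stream.Collectors.partitioningBy;
```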
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow unblocking specific URLs temporarily. So I need to know which URLs to unblock to install Python, and which URLs to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively, I would also be happy if I could just find a URL from which to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
The approach for detecting `odd` and `even` numbers is correct, but I think the problem with the code you wrote is that the lengths of the `odd` and `even` arrays can't be determined up front. For this, I suggest using `ArrayList<Integer>`. Let's say you get the array as a function input and want arrays in the output (I'll extract both arrays in one pass for better performance, but separating the extraction into one function per list is also fine, depending on what you're going to do with them).
### Solution
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class Test {
public static Integer[][] separateOddnEven(int[] input) {
Integer[][] output = new Integer[2][];
List<Integer> odds = new ArrayList<>();
List<Integer> evens = new ArrayList<>();
for (int i = 0; i < input.length; ++i) {
int temp = input[i];
if (temp % 2 == 0)
evens.add(temp);
else
odds.add(temp);
}
// alternative is to use these Arraylists directly
output[0] = new Integer[odds.size()];
output[1] = new Integer[evens.size()];
output[0] = odds.toArray(output[0]);
output[1] = evens.toArray(output[1]);
return output; // index 0 has odd numbers and index 1 has even numbers.
}
public static void main(String[] args) {
int[] input = {0, 21, 24, 22, 14, 15, 16, 18};
Integer[][] output = separateOddnEven(input);
System.out.println("odd numbers :");
System.out.println(Arrays.toString(output[0]));
System.out.println("even numbers :");
System.out.println(Arrays.toString(output[1]));
}
}
```
### output :
```
odd numbers :
[21, 15]
even numbers :
[0, 24, 22, 14, 16, 18]
```
|
You can collect a 2d array with two rows: *even* and *odd* as follows:
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
```
```java
// a 2d array of two rows: even and odd
int[][] arr = new int[2][];
// process a 1d array and fill a 2d array
Arrays.stream(num).boxed()
// Map<Integer,List<Integer>>
.collect(Collectors.toMap(
// key: 0 - even, 1 - odd
n -> n % 2,
// value - a list of one
// element, i.e. number
n -> new ArrayList<>(List.of(n)),
// merge duplicates
(list1, list2) -> {
list1.addAll(list2);
return list1;
}))
// fill the rows of a 2d array: even and odd
.forEach((key, value) -> arr[key] = value.stream()
.mapToInt(Integer::intValue).toArray());
```
```java
// output
System.out.println("Even: " + Arrays.toString(arr[0]));
// Even: [2, 4, 6, 8, 10, 12, 14, 16]
System.out.println("Odd: " + Arrays.toString(arr[1]));
// Odd: [1, 3, 5, 7, 9, 11, 13, 15]
```
|
70,187,603
|
I am able to create a VM from a gallery image via az cli commands with:
```
az vm create --resource-group $RG2 \
--name $VM_NAME --image $(az sig image-version show \
--resource-group $RG \
--gallery-name $SIG \
--gallery-image-definition $SIG_IMAGE_DEFINITION \
--gallery-image-version $VERSION \
--query id -o tsv) \
--size $SIZE \
--public-ip-address "" \
--assign-identity $(az identity show --resource-group $RG2 --name $IDENTITY --query id -o tsv) \
--ssh-key-values $SSH_KEY_PATH \
--authentication-type ssh \
--admin-username admin
```
This works great. I am trying to do the same with Python.
I see examples where they create everything: resource groups, NICs, subnets, vnets, etc., but that is not what I need. I am literally trying to do what this az cli command does. Is there a way to do this with Python?
How do we address setting the public IP to nothing so it does not create one? I want it to use the vnet, subnet, etc. that the resource group already has defined, just like the az cli does.
|
2021/12/01
|
[
"https://Stackoverflow.com/questions/70187603",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/372429/"
] |
You can modify the [example script](https://learn.microsoft.com/en-us/azure/developer/python/azure-sdk-example-virtual-machines?tabs=cmd) in our doc to do this. Essentially, you need to get rid of step 4 and modify step 5 so it does not send a public IP when creating the NIC. This has been validated in my own subscription.
```
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
import os
print(f"Provisioning a virtual machine...some operations might take a minute or two.")
# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()
# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
# Step 1: Provision a resource group
# Obtain the management object for resources, using the credentials from the CLI login.
resource_client = ResourceManagementClient(credential, subscription_id)
# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
LOCATION = "westus2"
# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{
"location": LOCATION
}
)
print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")
# For details on the previous code, see Example: Provision a resource group
# at https://learn.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
# Step 2: provision a virtual network
# A virtual machine requires a network interface client (NIC). A NIC requires
# a virtual network and subnet along with an IP address. Therefore we must provision
# these downstream components first, then provision the NIC, after which we
# can provision the VM.
# Network and IP address names
VNET_NAME = "python-example-vnet"
SUBNET_NAME = "python-example-subnet"
IP_NAME = "python-example-ip"
IP_CONFIG_NAME = "python-example-ip-config"
NIC_NAME = "python-example-nic"
# Obtain the management object for networks
network_client = NetworkManagementClient(credential, subscription_id)
# Provision the virtual network and wait for completion
poller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME,
{
"location": LOCATION,
"address_space": {
"address_prefixes": ["10.0.0.0/16"]
}
}
)
vnet_result = poller.result()
print(f"Provisioned virtual network {vnet_result.name} with address prefixes {vnet_result.address_space.address_prefixes}")
# Step 3: Provision the subnet and wait for completion
poller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME, SUBNET_NAME,
{ "address_prefix": "10.0.0.0/24" }
)
subnet_result = poller.result()
print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")
# Step 4: Provision an IP address and wait for completion
# Removed as not needed
# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,
NIC_NAME,
{
"location": LOCATION,
"ip_configurations": [ {
"name": IP_CONFIG_NAME,
"subnet": { "id": subnet_result.id }
}]
}
)
nic_result = poller.result()
print(f"Provisioned network interface client {nic_result.name}")
# Step 6: Provision the virtual machine
# Obtain the management object for virtual machines
compute_client = ComputeManagementClient(credential, subscription_id)
VM_NAME = "ExampleVM"
USERNAME = "azureuser"
PASSWORD = "ChangePa$$w0rd24"
print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")
# Provision the VM with an Ubuntu 16.04 image on a Standard DS1 v2 size,
# attaching the NIC created above (no public IP address is created).
poller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,
{
"location": LOCATION,
"storage_profile": {
"image_reference": {
"publisher": 'Canonical',
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"hardware_profile": {
"vm_size": "Standard_DS1_v2"
},
"os_profile": {
"computer_name": VM_NAME,
"admin_username": USERNAME,
"admin_password": PASSWORD
},
"network_profile": {
"network_interfaces": [{
"id": nic_result.id,
}]
}
}
)
vm_result = poller.result()
print(f"Provisioned virtual machine {vm_result.name}")
```
|
```
resource_name = f"myserver{random.randint(1000, 9999)}"
VNET_NAME = "myteam-vpn-vnet"
SUBNET_NAME = "myteam-subnet"
IP_NAME = resource_name + "-ip"
IP_CONFIG_NAME = resource_name + "-ip-config"
NIC_NAME = resource_name + "-nic"
subnet = network_client.subnets.get(resource_group_name, VNET_NAME, SUBNET_NAME)
# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(resource_group_name,
NIC_NAME,
{
"location": location,
"ip_configurations": [{
"name": IP_CONFIG_NAME,
"subnet": { "id": Subnet.id },
}]
}
)
nic_result = poller.result()
```
Yes, we removed the public IP address provisioning (step 4) as suggested, so the NIC no longer references a public IP. The NIC was then applied in the VM creation as follows:
```
"network_profile": {
"network_interfaces": [
{
"id": nic_result.id
}
]
},
```
|
61,037,527
|
I want to run my code on the GPU provided by Kaggle. I am able to run my code on the CPU, but I can't seem to migrate it properly to run on the Kaggle GPU. On running this:
```
with tf.device("/device:GPU:0"):
hist = model.fit(x=X_train, y=Y_train, validation_data=(X_test, Y_test), batch_size=25, epochs=20, callbacks=callbacks_list)
```
I am getting this error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-28-cdb8b009cd85> in <module>
1 with tf.device("/device:GPU:0"):
----> 2 hist = model.fit(x=X_train, y=Y_train, validation_data=(X_test, Y_test), batch_size=25, epochs=20, callbacks=callbacks_list)
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 self._assert_compile_was_called()
818 self._check_call_args('evaluate')
--> 819
820 func = self._select_training_loop(x)
821 return func.evaluate(
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233
234 recreate_training_iterator = (
--> 235 training_data_adapter.should_recreate_iterator(steps_per_epoch))
236 if not steps_per_epoch:
237 # TODO(b/139762795): Add step inference for when steps is None to
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 class_weights=None,
592 shuffle=False,
--> 593 steps=None,
594 distribution_strategy=None,
595 max_queue_size=10,
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 """Provide a scope for running one batch."""
705 batch_logs = {'batch': step, 'size': size}
--> 706 self.callbacks._call_batch_hook(
707 mode, 'begin', step, batch_logs)
708 self.progbar.on_batch_begin(step, batch_logs)
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
355 sample_weights = _process_numpy_inputs(sample_weights)
356
--> 357 # If sample_weights are not specified for an output use 1.0 as weights.
358 if (sample_weights is not None and
359 any([sw is None for sw in sample_weights])):
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in slice_inputs(self, indices_dataset, inputs)
381 if steps and not batch_size:
382 batch_size = int(math.ceil(num_samples/steps))
--> 383
384 if not batch_size:
385 raise ValueError(
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in from_tensors(tensors)
564 existing iterators.
565
--> 566 Args:
567 unused_dummy: Ignored value.
568
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in __init__(self, element)
2763 init_args: A nested structure representing the arguments to `init_func`.
2764 init_func: A TensorFlow function that will be called on `init_args` each
-> 2765 time a C++ iterator over this dataset is constructed. Returns a nested
2766 structure representing the "state" of the dataset.
2767 next_func: A TensorFlow function that will be called on the result of
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/util/structure.py in normalize_element(element)
111 ops.convert_to_tensor(t, name="component_%d" % i))
112 return nest.pack_sequence_as(element, normalized_components)
--> 113
114
115 def convert_legacy_structure(output_types, output_shapes, output_classes):
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1312 return ret
1313 raise TypeError("%sCannot convert %r with type %s to Tensor: "
-> 1314 "no conversion function registered." %
1315 (_error_prefix(name), value, type(value)))
1316
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_conversion_registry.py in _default_conversion_function(***failed resolving arguments***)
50 def _default_conversion_function(value, dtype, name, as_ref):
51 del as_ref # Unused.
---> 52 return constant_op.constant(value, dtype, name=name)
53
54
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name)
256 return _eager_fill(shape.as_list(), t, ctx)
257 raise TypeError("Eager execution of tf.constant with unsupported shape "
--> 258 "(value has %d elements, shape is %s with %d elements)." %
259 (num_t, shape, shape.num_elements()))
260 g = ops.get_default_graph()
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
264 value, dtype=dtype, shape=shape, verify_shape=verify_shape,
265 allow_broadcast=allow_broadcast))
--> 266 dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
267 const_tensor = g.create_op(
268 "Const", [], [dtype_value.type],
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
95 ctx.ensure_initialized()
---> 96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
98
RuntimeError: Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.
```
I have also tried installing different TensorFlow versions (the latest tensorflow, tensorflow-gpu, tensorflow-gpu==1.12), but with no success.
Though I am able to list out the CPUs and GPUs by using
`from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())`
Please help!
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61037527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9872938/"
] |
Can you try updating `@material-ui/core` by running:
```
npm update
```
|
As described in the Material-UI project [CHANGELOG](https://github.com/mui-org/material-ui/releases/tag/v4.9.9) of the latest version (which is **v4.9.9** at the time I'm writing this answer), there is a change related to `createSvgIcon`:
[](https://i.stack.imgur.com/n8NYJ.png)
The complete conversation of the team can be found [here](https://github.com/mui-org/material-ui/pull/20308).
**When did I encounter the problem?**
While running a React project, I wanted to use the [Autocomplete](https://material-ui.com/components/autocomplete/) component from `@material-ui/lab`.
**How did I solve it?**
I upgraded `@material-ui/core` package to v4.9.9 using this command:
`yarn upgrade @material-ui/core --latest`
|
67,327,106
|
I am trying to load a serialized xgboost model from a pickle file.
```
import pickle
def load_pkl(fname):
with open(fname, 'rb') as f:
obj = pickle.load(f)
return obj
model = load_pkl('model_0_unrestricted.pkl')
```
While printing the model object, I am getting the following error on Linux (AWS SageMaker notebook):
```
~/anaconda3/envs/python3/lib/python3.6/site-packages/xgboost/sklearn.py in get_params(self, deep)
436 if k == 'type' and type(self).__name__ != v:
437 msg = 'Current model type: {}, '.format(type(self).__name__) + \
--> 438 'type of model in file: {}'.format(v)
439 raise TypeError(msg)
440 if k == 'type':
~/anaconda3/envs/python3/lib/python3.6/site-packages/sklearn/base.py in get_params(self, deep)
193 out = dict()
194 for key in self._get_param_names():
--> 195 value = getattr(self, key)
196 if deep and hasattr(value, 'get_params'):
197 deep_items = value.get_params().items()
AttributeError: 'XGBClassifier' object has no attribute 'use_label_encoder'
```
Can you please help to fix the issue?
It is working fine on my local Mac.
Ref: xgboost:1.4.1 installation log (Mac)
```
Collecting xgboost
Downloading xgboost-1.4.1-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl (1.2 MB)
```
But not working on AWS
Ref: xgboost:1.4.1 installation log (SM Notebook, linux machine)
```
Collecting xgboost
Using cached xgboost-1.4.1-py3-none-manylinux2010_x86_64.whl (166.7 MB)
```
Thanks
|
2021/04/30
|
[
"https://Stackoverflow.com/questions/67327106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2140489/"
] |
Looks like you upgraded xgboost.
You may consider downgrading to 1.2.0 by:
```
pip install xgboost==1.2.0
```
|
I tried testing in a notebook running on Ubuntu and it seems to work fine; however, can you check how you are initializing your classifier? This is what I tried:
```
import numpy as np
import pickle
from scipy.stats import uniform, randint
from sklearn.datasets import load_breast_cancer, load_diabetes, load_wine
from sklearn.metrics import auc, accuracy_score, confusion_matrix, mean_squared_error
from sklearn.model_selection import cross_val_score, GridSearchCV, KFold,RandomizedSearchCV, train_test_split
import xgboost as xgb
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=45)
xgb_model.fit(X, y)
pickle.dump(xgb_model, open("xgb_model.pkl", "wb"))
```
Load the model back using your function and output it:
```
def load_pkl(fname):
with open(fname, 'rb') as f:
obj = pickle.load(f)
return obj
model = load_pkl('xgb_model.pkl')
model
```
Below is the output:
```
XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, gamma=0, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.300000012, max_delta_step=0, max_depth=6,
min_child_weight=1, missing=nan, monotone_constraints='()',
n_estimators=100, n_jobs=8, num_parallel_tree=1, random_state=45,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
```
|
67,327,106
|
I am trying to load a serialized xgboost model from a pickle file.
```
import pickle
def load_pkl(fname):
with open(fname, 'rb') as f:
obj = pickle.load(f)
return obj
model = load_pkl('model_0_unrestricted.pkl')
```
While printing the model object, I am getting the following error on Linux (AWS SageMaker notebook):
```
~/anaconda3/envs/python3/lib/python3.6/site-packages/xgboost/sklearn.py in get_params(self, deep)
436 if k == 'type' and type(self).__name__ != v:
437 msg = 'Current model type: {}, '.format(type(self).__name__) + \
--> 438 'type of model in file: {}'.format(v)
439 raise TypeError(msg)
440 if k == 'type':
~/anaconda3/envs/python3/lib/python3.6/site-packages/sklearn/base.py in get_params(self, deep)
193 out = dict()
194 for key in self._get_param_names():
--> 195 value = getattr(self, key)
196 if deep and hasattr(value, 'get_params'):
197 deep_items = value.get_params().items()
AttributeError: 'XGBClassifier' object has no attribute 'use_label_encoder'
```
Can you please help to fix the issue?
It is working fine on my local Mac.
Ref: xgboost:1.4.1 installation log (Mac)
```
Collecting xgboost
Downloading xgboost-1.4.1-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl (1.2 MB)
```
But not working on AWS
Ref: xgboost:1.4.1 installation log (SM Notebook, linux machine)
```
Collecting xgboost
Using cached xgboost-1.4.1-py3-none-manylinux2010_x86_64.whl (166.7 MB)
```
Thanks
|
2021/04/30
|
[
"https://Stackoverflow.com/questions/67327106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2140489/"
] |
Looks like you upgraded xgboost.
You may consider downgrading to 1.2.0 by:
```
pip install xgboost==1.2.0
```
|
I suspect the pickled model you are loading was modified in some way to have that additional attribute prior to being saved. Either that or, as @vbhatt said, you may be modifying some aspect of your classifier prior to loading it in. This has happened to me before when using custom models in PyTorch Lightning.
If you haven't modified the base model at all, please ensure that you are using the same xgboost version from within the notebook as well; it could be that the venv in the notebook has a different version.
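A quick way to confirm, as a minimal sketch (run it in both environments and compare):
```python
import sys

import sklearn
import xgboost

# A mismatch in either library between the pickling and unpickling
# environments can surface as missing-attribute errors like this one.
print(sys.version)
print("xgboost:", xgboost.__version__)
print("scikit-learn:", sklearn.__version__)
```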
|
52,104,644
|
I have the following function which basically asks user to enter the choice for "X" or "O". I used the while loop to keep asking user until I get the answer that's either "X" or "O".
```
def player_input():
choice = ''
while choice != "X" and choice != "O":
choice = input("Player 1, choose X or O: ")
pl1 = choice
if pl1 == "X":
pl2 = "O"
else:
pl2 = "X"
return (pl1, pl2)
```
The above code works fine, but I don't quite understand how that 'and' works in this particular scenario. If I understand it right, 'and' means both conditions have to be true. However, choice can only be either "X" or "O" at any given time.
Please help me understand this. Apologies in advance if you think this is a dumb question. I am new to python and programming in general.
Thank you!
|
2018/08/30
|
[
"https://Stackoverflow.com/questions/52104644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9114293/"
] |
Loop indexing is well known in Python to be an incredibly slow operation. By replacing a loop with array slicing, and a list with a NumPy array, we see a speedup of roughly 3x:
```
import numpy as np
import timeit
def generate_primes_original(limit):
boolean_list = [False] * 2 + [True] * (limit - 1)
for n in range(2, int(limit ** 0.5 + 1)):
if boolean_list[n] == True:
for i in range(n ** 2, limit + 1, n):
boolean_list[i] = False
    return np.array(boolean_list, dtype=bool)
def generate_primes_fast(limit):
boolean_list = np.array([False] * 2 + [True] * (limit - 1),dtype=bool)
for n in range(2, int(limit ** 0.5 + 1)):
if boolean_list[n]:
boolean_list[n*n:limit+1:n] = False
return boolean_list
limit = 1000
print(timeit.timeit("generate_primes_fast(%d)"%limit, setup="from __main__ import generate_primes_fast"))
# 30.90620080102235 seconds
print(timeit.timeit("generate_primes_original(%d)"%limit, setup="from __main__ import generate_primes_original"))
# 91.12803511600941 seconds
assert np.array_equal(generate_primes_fast(limit),generate_primes_original(limit))
# [nothing to stdout - they are equal]
```
To gain even more speed, one option is to use [numpy vectorization](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html). Looking at the outer loop, it's not immediately obvious how one could vectorize that.
Second, you will see dramatic speed-ups if you port to [Cython](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html#numpy-tutorial), which should be a fairly seamless process.
Edit: you may also see improvements by changing things like `n**2 => math.pow(n,2)`, but minor improvements like that are inconsequential compared to the bigger problem, which is the iterator.
|
If you are still using Python 2, use `xrange` instead of `range` for greater speed.
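As a minimal illustration (Python 2 only; in Python 3, `range` is already lazy):
```python
# Python 2: range(10**6) materializes a million-element list up front,
# while xrange(10**6) yields one value at a time.
total = 0
for i in xrange(10**6):
    total += i
print total
```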
|
32,478,825
|
I am using Python and scikit-learn to find the cosine similarity between two strings (specifically, names). The program is able to find the similarity score between two strings, but when strings are abbreviated, it shows some undesirable output.
e.g. String1 = "K KAPOOR", String2 = "L KAPOOR"
The cosine similarity score of these strings is 1 (maximum) while the two strings are entirely different names. Is there a way to modify it, in order to get some desired results?
My code is:
```
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 9 14:40:21 2015
@author: gauge
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
documents=("K KAPOOR","L KAPOOR")
tfidf_vectorizer=TfidfVectorizer()
tfidf_matrix=tfidf_vectorizer.fit_transform(documents)
#print tfidf_matrix.shape
cs=cosine_similarity(tfidf_matrix[0:1],tfidf_matrix)
print cs
```
|
2015/09/09
|
[
"https://Stackoverflow.com/questions/32478825",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4994653/"
] |
As mentioned in the other answer, the cosine similarity is one because the two strings have **the exact same representation**.
That means that this code:
```
tfidf_vectorizer=TfidfVectorizer()
tfidf_matrix=tfidf_vectorizer.fit_transform(documents)
```
produces, well:
```
print(tfidf_matrix.toarray())
[[ 1.]
[ 1.]]
```
This means that the two strings/documents (here the rows in the array) have the same representation.
That is because the `TfidfVectorizer` tokenizes your document using **word tokens**, and keeps only words with **at least 2 characters**.
So you could do one of the following:
1. Use:
```
tfidf_vectorizer=TfidfVectorizer(analyzer="char")
```
to get character n-grams instead of word n-grams.
2. Change the token pattern so that it keeps one-letter tokens:
```
tfidf_vectorizer=TfidfVectorizer(token_pattern=u'(?u)\\b\w+\\b')
```
This is just a simple modification from the default pattern you can see in the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html). Note that I had to escape the `\b` occurrences in the regular expression as I was getting an 'empty vocabulary' error.
Hope this helps.
|
>
> String1 ="K KAPOOR", String2="L KAPOOR" The cosine similarity score of these strings is 1 (maximum) while the two strings are entirely different names. Is there a way to modify it, in order to get some desired results.
>
>
>
**It depends.** You are facing this issue because the vector representations of these two strings are exactly the same.
The cosine similarity between the two strings is **1** because they are the **same**. Not because they are the same strings, but because they are represented by the **same vector**.
If you want them to be different, then you need to represent them differently. To do that you need to train your algorithm with enough words that occur multiple times in a corpus.
Also, it is highly likely that both of these strings would be converted to something like 'KAPOOR' in the preprocessing.
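To make this concrete, here is a minimal sketch that reuses the question's setup with the character-level analyzer suggested in the other answer; the two names then stop sharing a vector:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ("K KAPOOR", "L KAPOOR")
# Character unigrams give the leading initials their own features,
# so "K KAPOOR" and "L KAPOOR" no longer map to the same vector.
tfidf_vectorizer = TfidfVectorizer(analyzer="char")
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)
print(cosine_similarity(tfidf_matrix[0:1], tfidf_matrix))  # second score < 1
```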
|
2,545,655
|
Using Python 2.6.4, windows
With the following script I want to test a certain xmlrpc server. I call a non-existent function and hope for a traceback with an error. Instead, the function does not return. What could be the cause?
```
import xmlrpclib
s = xmlrpclib.Server("http://127.0.0.1:80", verbose=True)
s.functioncall()
```
The output is:
```
send: 'POST /RPC2 HTTP/1.0\r\nHost: 127.0.0.1:80\r\nUser-Agent: xmlrpclib.py/1.0
.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: 106\r\n\
r\n'
send: "<?xml version='1.0'?>\n<methodCall>\n<methodName>functioncall</methodName
>\n<params>\n</params>\n</methodCall>\n"
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: text/xml
header: Cache-Control: no-cache
header: Content-Length: 376
header: Date: Tue, 30 Mar 2010 13:27:21 GMT
body: '<?xml version="1.0"?>\r\n<methodResponse>\r\n<fault>\r\n<value>\r\n<struc
t>\r\n<member>\r\n<name>faultCode</name>\r\n<value><i4>1</i4></value>\r\n</membe
r>\r\n<member>\r\n<name>faultString</name>\r\n<value><string>PVSS00ctrl (2), 2
010.03.30 15:27:21.395, CTRL, SEVERE, 72, Function not defined, functioncall
, , \n</string></value>\r\n</member>\r\n</struct>\r\n</value>\r\n</fault>\r\n</m
ethodResponse>\r\n'
```
(here the program hangs and does not return until I kill the server)
Edit: the server is written in C++, using its own XML-RPC library.
Edit: I found an issue that looks like the same problem: <http://bugs.python.org/issue1727418>
|
2010/03/30
|
[
"https://Stackoverflow.com/questions/2545655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/80500/"
] |
As you noticed, this is a bug in the server (the client claims to understand 1.0 and the server ignores that and responds in 1.1 anyway, so doesn't close the socket). Python has a workaround for such buggy servers in 2.7 and 3.2, see [this issue](http://bugs.python.org/issue6267), but that workaround wasn't in 2.6.4. Unfortunately, from 2.6.5's [NEWS.txt](http://www.python.org/download/releases/2.6.5/NEWS.txt) it looks like we haven't backported it to 2.6.5 either. The patch for the workaround in 2.7 is [here](http://svn.python.org/view?view=rev&revision=73638), perhaps you can try applying it to 2.6.5 yourself if it's just impossible to fix the buggy server...?
|
Most likely, the server you're testing does not close the TCP connection once it has sent the response back to your client. Thus the client hangs, waiting for the server to close the connection before it can return from the function.
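If fixing the server is not an option, one possible client-side mitigation is a global socket timeout, sketched below for Python 2.6 (untested against this particular server; the 10-second value is an arbitrary choice):
```python
import socket
import xmlrpclib

# Give up on reads after 10 seconds instead of waiting forever
# for a server that never closes the connection.
socket.setdefaulttimeout(10)

s = xmlrpclib.Server("http://127.0.0.1:80")
try:
    s.functioncall()
except socket.timeout:
    print "server kept the connection open; timed out"
```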
|
59,524,498
|
I am trying to create a seaborn FacetGrid to plot the distribution of all columns in my dataFrame decathlon. The data looks as such:
```
P100m Plj Psp Phj P400m P110h Ppv Pdt Pjt P1500
0 938 1061 773 859 896 911 880 732 757 752
1 839 975 870 749 887 878 880 823 863 741
2 814 866 841 887 921 939 819 778 884 691
3 872 898 789 878 848 879 790 790 861 804
4 892 913 742 803 816 869 1004 789 854 699
... ... ... ... ... ... ... ... ... ...
7963 755 760 604 714 812 794 482 571 539 780
7964 830 845 524 767 786 783 601 573 562 535
7965 819 804 653 840 791 699 659 461 448 632
7966 804 720 539 758 830 782 731 487 425 729
7967 687 809 692 714 565 741 804 527 738 523
```
I am relatively new to python and I can't understand my error. My attempt to format the data and create the grid is as such:
```
import seaborn as sns
df_stacked = decathlon.stack().reset_index(1).rename({'level_1': 'column', 0: 'values'}, axis=1)
g = sns.FacetGrid(df_stacked, row = 'column')
g = g.map(plt.hist, "values")
```
However, I receive the following error:
```
ValueError: Axes instance argument was not found in a figure
```
Can anyone explain what exactly this error means and how I would go about fixing it?
**EDIT**
`df_stacked` looks as such:
```
column values
0 P100m 938
0 Plj 1061
0 Psp 773
0 Phj 859
0 P400m 896
... ...
7967 P110h 741
7967 Ppv 804
7967 Pdt 527
7967 Pjt 738
7967 P1500 523
```
|
2019/12/30
|
[
"https://Stackoverflow.com/questions/59524498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10574250/"
] |
I encountered a similar issue when running a Jupyter notebook.
My solution involved:
1. Restart the notebook
2. Re-run the imports `%matplotlib inline; import matplotlib.pyplot as plt`
|
As you did not post a full working example, it's a bit of guessing.
What might go wrong is the line `g = g.map(plt.hist, "values")`, because the error comes from deep within matplotlib. You can see this [here](https://stackoverflow.com/questions/40399631/valueerror-axes-instance-argument-was-not-found-in-a-figure) in this SO question, where another function outside matplotlib, `pylab.sca(axes[i])`, triggers the same matplotlib error.
Likely you installed/updated something in your (conda?) environment (changes in environment paths?) and it was only picked up after the next restart.
I also wonder how you came up with `plt.hist` ... fully typed it should resemble `matplotlib.pyplot.hist` ... but guessing... (waiting for your updated example code).
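For reference, here is a minimal, self-contained version of the question's approach that should work in a fresh session (column names taken from the question; package versions are assumed to be compatible):
```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Two of the question's columns, trimmed to a few rows for brevity.
decathlon = pd.DataFrame({'P100m': [938, 839, 814], 'Plj': [1061, 975, 866]})
df_stacked = decathlon.stack().reset_index(1).rename({'level_1': 'column', 0: 'values'}, axis=1)

g = sns.FacetGrid(df_stacked, row='column')
g.map(plt.hist, 'values')
plt.show()
```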
|
60,182,791
|
I have tried uploading a file to Google Drive from my local system using a Python script, but I keep getting HttpError 403. The script is as follows:
```python
from googleapiclient.http import MediaFileUpload
from googleapiclient import discovery
import httplib2
import auth
SCOPES = "https://www.googleapis.com/auth/drive"
CLIENT_SECRET_FILE = "client_secret.json"
APPLICATION_NAME = "test"
authInst = auth.auth(SCOPES, CLIENT_SECRET_FILE, APPLICATION_NAME)
credentials = authInst.getCredentials()
http = credentials.authorize(httplib2.Http())
drive_serivce = discovery.build('drive', 'v3', credentials=credentials)
file_metadata = {'name': 'gb1.png'}
media = MediaFileUpload('./gb.png',
mimetype='image/png')
file = drive_serivce.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
print('File ID: %s' % file.get('id'))
```
The error is :
```python
googleapiclient.errors.HttpError: <HttpError 403 when requesting
https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&alt=json&fields=id
returned "Insufficient Permission: Request had insufficient authentication scopes.">
```
Am I using the right scope in the code or missing anything ?
I also tried a script I found online and it is working fine but the issue is that it takes a static token, which expires after some time. So how can I refresh the token dynamically?
Here is my code:
```python
import json
import requests
headers = {
"Authorization": "Bearer TOKEN"}
para = {
"name": "account.csv",
"parents": ["FOLDER_ID"]
}
files = {
'data': ('metadata', json.dumps(para), 'application/json; charset=UTF-8'),
'file': ('mimeType', open("./test.csv", "rb"))
}
r = requests.post(
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
headers=headers,
files=files
)
print(r.text)
```
|
2020/02/12
|
[
"https://Stackoverflow.com/questions/60182791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7640700/"
] |
Try with this:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
string[] sl = str.Trim().Split(new string[] { "\" \"" }, StringSplitOptions.None);
foreach (string st in sl) {
Console.WriteLine(st);
}
```
**Output:**
>
> email from Ram at 10:10 am
>
>
> email from Ramesh at 10:15 am
>
>
> email from Rajan at 10:20 am
>
>
> email from Rakesh at 10:25 am
>
>
>
Check results here: <https://dotnetfiddle.net/5zlfJf>
|
It is possible to use an additional `"` because, inside a verbatim string literal, a doubled quote is interpreted by the compiler as a single `"`:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
var splitted = str.Split(new string[] { @""" """ }, StringSplitOptions.None);
```
or another way:
Try to use `Split`:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
var splitted = str.Split(new []{ '"'}, StringSplitOptions.RemoveEmptyEntries)
.Where(s=> !string.IsNullOrWhiteSpace(s)).ToList();
```
|
9,833,152
|
>
> **Possible Duplicate:**
>
> [RegEx match open tags except XHTML self-contained tags](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags)
>
>
>
If I have a string that looks something like...
```
"<tr><td>123</td><td>234</td>...<td>697</td></tr>"
```
Basically a table row with n cells.
What's the easiest way in python to get the values of each cell. That is I just want the values "123", "234", "697" stored in a list or array or what ever is easiest.
I've tried to use regular expressions, when I use
```
re.match
```
I am not able to get it to find anything. If I try with
```
re.search
```
I can only get the first cell. But I want to get all the cells. If I can't do this with n cells, how would you do it with a fixed number of cells?
|
2012/03/23
|
[
"https://Stackoverflow.com/questions/9833152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399523/"
] |
If that markup is part of a larger set of markup, you should prefer a tool with an HTML parser.
One such tool is [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/).
Here's one way to find what you need using that tool:
```
>>> markup = '''"<tr><td>123</td><td>234</td>...<td>697</td></tr>"'''
>>> from bs4 import BeautifulSoup as bs
>>> soup = bs(markup)
>>> for i in soup.find_all('td'):
... print(i.text)
```
Result:
```
123
234
697
```
|
Don't do this. Just use a proper HTML parser, and use something like xpath to get the elements you want.
A lot of people like lxml. For this task, you will probably want to use the BeautifulSoup backend, or use BeautifulSoup directly, because this is presumably not markup from a source known to generate well-formed, valid documents.
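For instance, a minimal sketch of the lxml route (the row is wrapped in a `<table>` so the lenient HTML parser keeps the cells):
```python
from lxml import html

markup = "<table><tr><td>123</td><td>234</td><td>697</td></tr></table>"
# One XPath query pulls the text of every cell.
cells = html.fromstring(markup).xpath("//td/text()")
print(cells)  # ['123', '234', '697']
```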
|
9,833,152
|
>
> **Possible Duplicate:**
>
> [RegEx match open tags except XHTML self-contained tags](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags)
>
>
>
If I have a string that looks something like...
```
"<tr><td>123</td><td>234</td>...<td>697</td></tr>"
```
Basically a table row with n cells.
What's the easiest way in python to get the values of each cell. That is I just want the values "123", "234", "697" stored in a list or array or what ever is easiest.
I've tried to use regular expressions, when I use
```
re.match
```
I am not able to get it to find anything. If I try with
```
re.search
```
I can only get the first cell. But I want to get all the cells. If I can't do this with n cells, how would you do it with a fixed number of cells?
|
2012/03/23
|
[
"https://Stackoverflow.com/questions/9833152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399523/"
] |
If that markup is part of a larger set of markup, you should prefer a tool with a HTML parser.
One such tool is [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/).
Here's one way to find what you need using that tool:
```
>>> markup = '''"<tr><td>123</td><td>234</td>...<td>697</td></tr>"'''
>>> from bs4 import BeautifulSoup as bs
>>> soup = bs(markup)
>>> for i in soup.find_all('td'):
... print(i.text)
```
Result:
```
123
234
697
```
|
When using [lxml](https://lxml.de/), an element tree is created. Each element in the tree holds information about a tag.
```
from lxml import etree

root = etree.XML("<root><a x='123'>aText<b/><c/><b/></a></root>")
elements = root.findall(".//a")  # all <a> elements anywhere in the tree
tag = elements[0].tag            # 'a'
attr = elements[0].attrib        # {'x': '123'}
```
|
58,350,100
|
I am trying to solve this [Dynamic Array problem](https://www.hackerrank.com/challenges/dynamic-array/problem?isFullScreen=true) on HackerRank. This is my code:
```py
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'dynamicArray' function below.
#
# The function is expected to return an INTEGER_ARRAY.
# The function accepts following parameters:
# 1. INTEGER n
# 2. 2D_INTEGER_ARRAY queries
#
def dynamicArray(n, queries):
lastAnswer = 0
a = []
array_result = []
for k in range(n):
a.append([])
for i in queries:
x = i[1]
y = i[2]
if i[0] == 1:
seq = ((x ^ lastAnswer) % n)
a[seq].append(y)
elif i[0] == 2:
seq = ((x ^ lastAnswer) % n)
lastAnswer = a[seq][y]
array_result.append(lastAnswer)
return array_result
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
first_multiple_input = input().rstrip().split()
n = int(first_multiple_input[0])
q = int(first_multiple_input[1])
queries = [] # 1 0 5, 1 1 7, 1 0 3, ...
for _ in range(q):
queries.append(list(map(int, input().rstrip().split())))
result = dynamicArray(n, queries)
fptr.write('\n'.join(map(str, result)))
fptr.write('\n')
fptr.close()
```
I am getting a runtime error:
>
> Traceback (most recent call last):
>
>
> File "Solution.py", line 50, in
>
>
> fptr.write('\n'.join(map(str, result)))
>
>
> TypeError: 'NoneType' object is not iterable
>
>
>
Can anyone help me with this, I can't seem to find a solution.
This is the input:
>
> 2 5
>
>
> 1 0 5
>
>
> 1 1 7
>
>
> 1 0 3
>
>
> 2 1 0
>
>
> 2 1 1
>
>
>
Thanks.
>
> Update: It seems like this input is working now, thanks to @cireo, but the code is not working for other test cases. What is the problem with this code?
>
>
>
|
2019/10/12
|
[
"https://Stackoverflow.com/questions/58350100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8598839/"
] |
You can try this; it works totally fine (no runtime error)
===========================================================
>
> Replace your dynamicArray function with this code. Hopefully this will be helpful for you (^\_^).
>
>
>
```
def dynamicArray(n, queries):
    col = [[] for i in range(n)]
    res = []
    lastanswer = 0
    for q in queries:
        data = (q[1] ^ lastanswer) % n
        if q[0] == 1:
            col[data].append(q[2])
        elif q[0] == 2:
            ind_x = q[2] % len(col[data])
            lastanswer = col[data][ind_x]
            res.append(lastanswer)
    return res
```
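As a quick sanity check, running the function above on the sample input from the question should reproduce the expected output:
```python
queries = [[1, 0, 5], [1, 1, 7], [1, 0, 3], [2, 1, 0], [2, 1, 1]]
print(dynamicArray(2, queries))  # [7, 3]
```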
|
The answer to your question lies in the boilerplate provided by hackerrank.
`# The function is expected to return an INTEGER_ARRAY.`
You can also see that `result = dynamicArray(n, queries)` is expected to return a list of integers from `map(str, result)`, which throws the exception.
In your code you do `print(lastAnswer)`, but you probably want
```
+ ret = []
...
- print(lastAnswer)
+ ret.append(lastAnswer)
+ return ret
```
instead.
Since you do not return anything, the function returns `None` by default, which cannot be iterated over by `map`.
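A minimal reproduction of that failure mode:
```python
def no_return():
    lastAnswer = 42
    print(lastAnswer)  # prints, but the function still returns None

result = no_return()
'\n'.join(map(str, result))  # TypeError: 'NoneType' object is not iterable
```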
|
58,350,100
|
I am trying to solve this [Dynamic Array problem](https://www.hackerrank.com/challenges/dynamic-array/problem?isFullScreen=true) on HackerRank. This is my code:
```py
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'dynamicArray' function below.
#
# The function is expected to return an INTEGER_ARRAY.
# The function accepts following parameters:
# 1. INTEGER n
# 2. 2D_INTEGER_ARRAY queries
#
def dynamicArray(n, queries):
lastAnswer = 0
a = []
array_result = []
for k in range(n):
a.append([])
for i in queries:
x = i[1]
y = i[2]
if i[0] == 1:
seq = ((x ^ lastAnswer) % n)
a[seq].append(y)
elif i[0] == 2:
seq = ((x ^ lastAnswer) % n)
lastAnswer = a[seq][y]
array_result.append(lastAnswer)
return array_result
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
first_multiple_input = input().rstrip().split()
n = int(first_multiple_input[0])
q = int(first_multiple_input[1])
queries = [] # 1 0 5, 1 1 7, 1 0 3, ...
for _ in range(q):
queries.append(list(map(int, input().rstrip().split())))
result = dynamicArray(n, queries)
fptr.write('\n'.join(map(str, result)))
fptr.write('\n')
fptr.close()
```
I am getting a runtime error:
>
> Traceback (most recent call last):
>
>
> File "Solution.py", line 50, in
>
>
> fptr.write('\n'.join(map(str, result)))
>
>
> TypeError: 'NoneType' object is not iterable
>
>
>
Can anyone help me with this, I can't seem to find a solution.
This is the input:
>
> 2 5
>
>
> 1 0 5
>
>
> 1 1 7
>
>
> 1 0 3
>
>
> 2 1 0
>
>
> 2 1 1
>
>
>
Thanks.
>
> Update: It seems like this input is working now, thanks to @cireo, but the code is not working for other test cases. What is the problem with this code?
>
>
>
|
2019/10/12
|
[
"https://Stackoverflow.com/questions/58350100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8598839/"
] |
The answer to your question lies in the boilerplate provided by hackerrank.
`# The function is expected to return an INTEGER_ARRAY.`
You can also see that `result = dynamicArray(n, queries)` is expected to return a list of integers from `map(str, result)`, which throws the exception.
In your code you do `print(lastAnswer)`, but you probably want
```
+ ret = []
...
- print(lastAnswer)
+ ret.append(lastAnswer)
+ return ret
```
instead.
Since you do not return anything, the function returns `None` by default, which cannot be iterated over by `map`.
|
```
def dynamicArray(n, queries):
# Write your code here
arr=[[]for i in range(0,n)]
lastAnswer=0
answers=[]
for query in queries:
if query[0]==1:
idx= (query[1]^lastAnswer)%n
arr[idx].append(query[2])
if query[0]==2:
idx= (query[1]^lastAnswer)%n
lastAnswer= arr[idx][query[2]% len(arr[idx])]
answers.append(lastAnswer)
return answers
```
|
58,350,100
|
I am trying to solve this [Dynamic Array problem](https://www.hackerrank.com/challenges/dynamic-array/problem?isFullScreen=true) on HackerRank. This is my code:
```py
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'dynamicArray' function below.
#
# The function is expected to return an INTEGER_ARRAY.
# The function accepts following parameters:
# 1. INTEGER n
# 2. 2D_INTEGER_ARRAY queries
#
def dynamicArray(n, queries):
lastAnswer = 0
a = []
array_result = []
for k in range(n):
a.append([])
for i in queries:
x = i[1]
y = i[2]
if i[0] == 1:
seq = ((x ^ lastAnswer) % n)
a[seq].append(y)
elif i[0] == 2:
seq = ((x ^ lastAnswer) % n)
lastAnswer = a[seq][y]
array_result.append(lastAnswer)
return array_result
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
first_multiple_input = input().rstrip().split()
n = int(first_multiple_input[0])
q = int(first_multiple_input[1])
queries = [] # 1 0 5, 1 1 7, 1 0 3, ...
for _ in range(q):
queries.append(list(map(int, input().rstrip().split())))
result = dynamicArray(n, queries)
fptr.write('\n'.join(map(str, result)))
fptr.write('\n')
fptr.close()
```
I am getting a runtime error:
>
> Traceback (most recent call last):
>
>
> File "Solution.py", line 50, in
>
>
> fptr.write('\n'.join(map(str, result)))
>
>
> TypeError: 'NoneType' object is not iterable
>
>
>
Can anyone help me with this, I can't seem to find a solution.
This is the input:
>
> 2 5
>
>
> 1 0 5
>
>
> 1 1 7
>
>
> 1 0 3
>
>
> 2 1 0
>
>
> 2 1 1
>
>
>
Thanks.
>
> Update: It seems like this input is working now, thanks to @cireo, but the code is not working for other test cases. What is the problem with this code?
>
>
>
|
2019/10/12
|
[
"https://Stackoverflow.com/questions/58350100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8598839/"
] |
You can try this; it works totally fine (no runtime error)
===========================================================
>
> Replace your dynamicArray function with this code. Hopefully this will be helpful for you (^\_^).
>
>
>
```
def dynamicArray(n, queries):
    col = [[] for i in range(n)]
    res = []
    lastanswer = 0
    for q in queries:
        data = (q[1] ^ lastanswer) % n
        if q[0] == 1:
            col[data].append(q[2])
        elif q[0] == 2:
            ind_x = q[2] % len(col[data])
            lastanswer = col[data][ind_x]
            res.append(lastanswer)
    return res
```
|
```
def dynamicArray(n, queries):
# Write your code here
arr=[[]for i in range(0,n)]
lastAnswer=0
answers=[]
for query in queries:
if query[0]==1:
idx= (query[1]^lastAnswer)%n
arr[idx].append(query[2])
if query[0]==2:
idx= (query[1]^lastAnswer)%n
lastAnswer= arr[idx][query[2]% len(arr[idx])]
answers.append(lastAnswer)
return answers
```
|
23,211,546
|
I had asked a similar question [here](https://stackoverflow.com/questions/23159053/re-read-a-file-from-start-after-the-program-finishes-reading-it-python/23159107) and the answer that I get was to use the `seek()` method. Now I am doing the following:
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
input1.seek(0)
reader = csv.reader(input1, delimiter="\t")
for row in reader:
#Read the CSV row by row.
```
However, I want to navigate to the first record of the CSV **within the same for loop**. I know that my loop won't terminate that way, but that's precisely what I want. I don't want the `for` loop to end; if it reaches the last record I want to navigate back to the first record and read the whole file all over again (and keep reading it). How do I do that?
Thanks!
|
2014/04/22
|
[
"https://Stackoverflow.com/questions/23211546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3534055/"
] |
For simplicity, create a generator:
```
import csv

def repeated_reader(input, reader):
    while True:
        input.seek(0)
        for row in reader:
            yield row

with open("total.csv", 'rb') as input1:
    reader = csv.reader(input1, delimiter="\t")
    for row in repeated_reader(input1, reader):
        print(row)  # read the CSV row by row
```
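Usage sketch, reusing `repeated_reader` from above: the stream never ends, so bound it explicitly when trying it out (file name and delimiter taken from the question):
```python
import csv
from itertools import islice

with open("total.csv", 'rb') as input1:
    reader = csv.reader(input1, delimiter="\t")
    # Take only the first 10 rows from the otherwise endless stream.
    for row in islice(repeated_reader(input1, reader), 10):
        print(row)
```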
|
Does it have to be in the `for`-loop? You could achieve this behaviour like this (untested):
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
reader = csv.reader(input1, delimiter="\t")
while True:
input1.seek(0)
for row in reader:
#Read the CSV row by row.
```
|
23,211,546
|
I had asked a similar question [here](https://stackoverflow.com/questions/23159053/re-read-a-file-from-start-after-the-program-finishes-reading-it-python/23159107) and the answer that I get was to use the `seek()` method. Now I am doing the following:
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
input1.seek(0)
reader = csv.reader(input1, delimiter="\t")
for row in reader:
#Read the CSV row by row.
```
However, I want to navigate to the first record of the CSV **within the same for loop**. I know that my loop won't terminate that way, but that's precisely what I want. I don't want the `for` loop to end; if it reaches the last record I want to navigate back to the first record and read the whole file all over again (and keep reading it). How do I do that?
Thanks!
|
2014/04/22
|
[
"https://Stackoverflow.com/questions/23211546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3534055/"
] |
Does it have to be in the `for`-loop? You could achieve this behaviour like this (untested):
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
reader = csv.reader(input1, delimiter="\t")
while True:
input1.seek(0)
for row in reader:
#Read the CSV row by row.
```
|
I actually calculated the total number of rows in the CSV and when I was on the last row I did `input1.seek(0)`
```
import csv
import time

row_count = sum(1 for row in csv.reader(open('total.csv')))
print row_count
row_count2 = 0
with open("total.csv", 'rb') as input1:
    time.sleep(3)
    input1.seek(0)
    reader = csv.reader(input1, delimiter="\t")
    for row in reader:
        row_count2 += 1
        # read the CSV row by row
        if row_count2 == row_count:
            row_count2 = 0
            time.sleep(3)
            input1.seek(0)
```
|
23,211,546
|
I had asked a similar question [here](https://stackoverflow.com/questions/23159053/re-read-a-file-from-start-after-the-program-finishes-reading-it-python/23159107) and the answer that I get was to use the `seek()` method. Now I am doing the following:
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
input1.seek(0)
reader = csv.reader(input1, delimiter="\t")
for row in reader:
#Read the CSV row by row.
```
However, I want to navigate to the first record of the CSV **within the same for loop**. I know that my loop won't terminate that way, but that's precisely what I want. I don't want the `for` loop to end; if it reaches the last record I want to navigate back to the first record and read the whole file all over again (and keep reading it). How do I do that?
Thanks!
|
2014/04/22
|
[
"https://Stackoverflow.com/questions/23211546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3534055/"
] |
For simplicity, create a generator:
```
import csv

def repeated_reader(input, reader):
    while True:
        input.seek(0)
        for row in reader:
            yield row

with open("total.csv", 'rb') as input1:
    reader = csv.reader(input1, delimiter="\t")
    for row in repeated_reader(input1, reader):
        print(row)  # read the CSV row by row
```
|
I actually calculated the total number of rows in the CSV and when I was on the last row I did `input1.seek(0)`
```
import csv
import time

row_count = sum(1 for row in csv.reader(open('total.csv')))
print row_count
row_count2 = 0
with open("total.csv", 'rb') as input1:
    time.sleep(3)
    input1.seek(0)
    reader = csv.reader(input1, delimiter="\t")
    for row in reader:
        row_count2 += 1
        # read the CSV row by row
        if row_count2 == row_count:
            row_count2 = 0
            time.sleep(3)
            input1.seek(0)
```
|
49,783,902
|
In python, if I use a ternary operator:
```
x = a if <condition> else b
```
Is `a` executed even if `condition` is false? Or is `condition` evaluated first, and then either `a` or `b` depending on the result?
|
2018/04/11
|
[
"https://Stackoverflow.com/questions/49783902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3754760/"
] |
The condition is evaluated first, if it is False, `a` is not evaluated: [documentation](https://docs.python.org/3/reference/expressions.html#conditional-expressions).
|
Only the branch selected by the condition gets evaluated. For example:
```
condition = True
print(2 if condition else 1/0)
#Output is 2
print((1/0, 2)[condition])
#ZeroDivisionError is raised
```
Even though `1/0` would raise an error, it is never evaluated because the condition was True at evaluation time. (Note that the tuple form `(1/0, 2)[condition]` evaluates both elements up front, which is why it raises.)
The same happens the other way around:
```
condition = False
print(1/0 if condition else 2)
#Output is 2
```
|
58,512,790
|
I'm wanting to wrap some c++ code in python using swig, and I need to be able to use numpy.i to convert numpy arrays to vectors.
This has been quite the frustrating process, as I haven't been able to find any useful info online as to where I actually get numpy.i from.
This is what I currently have running:
numpy 1.17.3
swig 2.0.12
python 3.7.3
Debian 4.9.2
From reading <https://docs.scipy.org/doc/numpy/reference/swig.interface-file.html> I'm told that numpy.i should be located in tools/swig/numpy.i, though the only place on my machine that I can find numpy.i is in a python 2.7 folder which I've upgraded from. My working version of python (3.7.3) holds no such file.
```
$ locate numpy.i
/usr/lib/python2.7/dist-packages/instant/swig/numpy.i
```
**What I've tried:**
* copying the numpy.i (as described above) into my working folder. This is at least recognized by my test.i file when I call %include "numpy.i", but it doesn't seem to allow usage of numpy.i calls.
* Copying this code <https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i> into a new file called numpy.i and putting that in my folder, but I get lots of errors when I try to run it.
**Is there a standard way to get the proper numpy.i version? Where would I download it from, and where should I put it?**
I've included some code below as reference:
test.i:
```
%module test
%{
#define SWIG_FILE_WITH_INIT
#include "test.h"
%}
%include "numpy.i" //this doesn't seem to do anything
%init %{
import_array();
%}
%apply (int DIM1) {(char x)}; //this doesn't seem to do anything
%include "test.h"
```
test.h:
```
#include <iostream>
void char_print(char x);
```
test.cpp:
```
#include "test.h"
void char_print(char x) {
std::cout << x << std::endl;
return;
}
```
tester.py:
```
import test
test.char_print(5) #nothing is printed, since this isn't converted properly to a char.
```
This is just a simple example, but I've tried using numpy.i in many different ways (including copying and pasting other people's code that works for them) but it consistently doesn't change anything whether I have it in my test.i file or not.
**Where/how do I get numpy.i?**
|
2019/10/22
|
[
"https://Stackoverflow.com/questions/58512790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12114274/"
] |
**Problem:** The numpy.i file I copied over from the python2.7 package isn't compatible, and the compatible version isn't included in the installation package when you go through anaconda (still not sure why they'd do that).
**Answer:** Find which version of numpy you're running, then go here (<https://github.com/numpy/numpy/releases>) and download the numpy-[your\_version].zip file, then specifically copy the numpy.i file, found in numpy-[your\_version]/tools/swig/. Now paste that numpy.i into your project working directory.
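To find which release archive to grab, a minimal check (note that numpy.i itself is not installed with the package; only the C headers are):
```python
import numpy

print(numpy.__version__)    # match this against the release on GitHub
print(numpy.get_include())  # installed headers live here; numpy.i does not
```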
|
You should download the new numpy.i file from <https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i>. This numpy.i file does not use the `PyFile_Check` function, which Python 3 does not support. If you still use
`/usr/lib/python2.7/dist-packages/instant/swig/numpy.i`, your code may fail with the error `undefined symbol: PyFile_Check`, because only Python 2 supports the `PyFile_Check` function.
By the way, when the error `undefined symbol: PyFile_Check` occurs, it is not necessarily a problem with SWIG.
|
45,966,355
|
I would like to write a function which efficiently performs this "strange" sort (I apologize for the pseudocode; it seems to me to be the clearest way to introduce the problem):
```
l=[[A,B,C,...]]
while some list in l is not sorted (increasingly) do
find a non-sorted list (say A) in l
find the first two non-sorted elements of A (i.e. A=[...,b,a,...] with b>a)
l=[[...,a,b,...],[...,b+a,...],B,C,...]
```
Two important things should be mentioned:
1. The sorting is dependent on the choice of the first two
non-sorted elements: `if A=[...,b,a,r,...], r<a<b` and we choose to
sort with respect to `(a,r)`, then the final result won't be the same. This is
why we fix the two first non-sorted elements of `A`.
2. Sorting this way always comes to an end.
An example:
```
In: Sort([[4,5,3,10]])
Out: [[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
```
since
```
(a,b)=(5,3): [4,5,3,10]->[[4,3,5,10],[4,8,10]]
(a,b)=(4,3): [[4,3,5,10],[4,8,10]]->[[3,4,5,10],[7,5,10],[4,8,10]]
(a,b)=(7,5): [[3,4,5,10],[7,5,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[12,10],[4,8,10]]
(a,b)=(12,10): [[3,4,5,10],[5,7,10],[12,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
```
Thank you for your help!
**EDIT**
Why am I considering this problem:
I am trying to do some computations with the Universal Enveloping Algebra of a Lie algebra. This is a mathematical object generated by products of some generators x\_1, ..., x\_n. We have a nice description of a generating set (it amounts to the ordered lists in the question), but when exchanging two generators, we need to take into account the commutator of these two elements (this is the sum of the elements in the question). I haven't given a solution to this question because it would be close to the worst one you can think of. I would like to know how you would implement this in a good way, so that it is Pythonic and fast. I am not asking for a complete solution, only some clues. I am willing to solve it by myself.
|
2017/08/30
|
[
"https://Stackoverflow.com/questions/45966355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7871040/"
] |
Here's a simple implementation that could use some improvement:
```
def strange_sort(lists_to_sort):
# reverse so pop and append can be used
lists_to_sort = lists_to_sort[::-1]
sorted_list_of_lists = []
while lists_to_sort:
l = lists_to_sort.pop()
i = 0
# l[:i] is sorted
while i < len(l) - 1:
if l[i] > l[i + 1]:
# add list with element sum to stack
lists_to_sort.append(l[:i] + [l[i] + l[i + 1]] + l[i + 2:])
# reverse elements
l[i], l[i + 1] = l[i + 1], l[i]
# go back if necessary
            if i > 0 and l[i - 1] > l[i]:
i -= 1
continue
# move on to next index
i += 1
# done sorting list
sorted_list_of_lists.append(l)
return sorted_list_of_lists
print(strange_sort([[4,5,3,10]]))
```
This keeps track of which lists are left to sort by using a stack. The time complexity is pretty good, but I don't think it's ideal.
|
Firstly, you would have to implement a `while` loop that checks whether all of the numbers inside the lists are sorted. I will be using `all`, which checks if all the objects inside a sequence are truthy.
```
def a_sorting_function_of_some_sort(list_to_sort):
while not all([all([number <= numbers_list[numbers_list.index(number) + 1] for number in numbers_list
if not number == numbers_list[-1]])
for numbers_list in list_to_sort]):
for numbers_list in list_to_sort:
# There's nothing to do if the list contains just one number
if len(numbers_list) > 1:
for number in numbers_list:
number_index = numbers_list.index(number)
try:
next_number_index = number_index + 1
next_number = numbers_list[next_number_index]
# If IndexError is raised here, it means we don't have any other numbers to check against,
# so we break this numbers iteration to go to the next list iteration
except IndexError:
break
if not number < next_number:
numbers_list_index = list_to_sort.index(numbers_list)
list_to_sort.insert(numbers_list_index + 1, [*numbers_list[:number_index], number + next_number,
*numbers_list[next_number_index + 1:]])
numbers_list[number_index] = next_number
numbers_list[next_number_index] = number
# We also need to break after parsing unsorted numbers
break
return list_to_sort
```
|
55,218,096
|
Right now I am trying to write a Python script which gives a binary result: either my machine is connected to Corporate\_VPN (Connection\_Name) or it is not connected to Corporate\_VPN.
I have tried the few articles and posts which I could find, but with no success.
Here are some:
I have tried this post: [Getting Connected VPN Name in Python](https://stackoverflow.com/questions/36816282/getting-connected-vpn-name-in-python)
And tried:
```
import NetworkManager
for conn in NetworkManager.NetworkManager.ActiveConnections:
print('Name: %s; vpn?: %s' % (conn.Id, conn.Vpn))
```
I am getting this error:
```
ImportError
Traceback (most recent call last)
<ipython-input-6-52b1e422fff2> in <module>()
----> 1 import NetworkManager
2
3 for conn in NetworkManager.NetworkManager.ActiveConnections:
4 print('Name: %s; vpn?: %s' % (conn.Id, conn.Vpn))
ImportError: No module named 'NetworkManager'
```
When I tried to `pip install python-networkmanager` I got this error:
```
Failed building wheel for dbus-python
Running setup.py clean for dbus-python
Successfully built python-networkmanager
Failed to build dbus-python
Installing collected packages: dbus-python, python-networkmanager
Running setup.py install for dbus-python ... error
Complete output from command C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-p1feeotm\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-91dmsyv1\install-record.txt --single-version-externally-managed --compile:
running install
running build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\build\temp.win-amd64-3.6
error: [WinError 193] %1 is not a valid Win32 application
----------------------------------------
Command "C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-p1feeotm\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-91dmsyv1\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\
```
Later, when I tried to `pip install dbus-python`, I got this error:
```
Failed building wheel for dbus-python
Running setup.py clean for dbus-python
Failed to build dbus-python
Installing collected packages: dbus-python
Running setup.py install for dbus-python ... error
Complete output from command C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-lp5w3k60\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-7mvtqy_d\install-record.txt --single-version-externally-managed --compile:
running install
running build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\build\temp.win-amd64-3.6
error: [WinError 193] %1 is not a valid Win32 application
----------------------------------------
Command "C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-lp5w3k60\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-7mvtqy_d\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\
```
I have also tried the following post as well, with no help: <https://www.reddit.com/r/learnpython/comments/5qkpu1/python_script_to_check_if_connected_to_vpn_or_not/>
```
host = *******
ping = subprocess.Popen(["ping.exe","-n","1","-w","1",host],stdout = subprocess.PIPE).communicate()[0]
if ('unreachable' in str(ping)) or ('timed' in str(ping)) or ('failure' in str(ping)):
ping_chk = 0
else:
ping_chk = 1
if ping_chk == 1:
print ("VPN Connected")
else:
print ("VPN Not Connected")
```
It throws this error:
```
File "<ipython-input-5-6f992511172f>", line 1
host = 192.168.*.*
^
SyntaxError: invalid syntax
```
I am not sure what I am doing wrong right now.
Note: I am doing all this on a corporate VPN connection.
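As a side note on the last traceback: `192.168.*.*` is not a valid Python literal, so the parser stops right there; the address has to be a quoted string. A minimal sketch of the same check with a placeholder address (the actual host value is an assumption here):
```
import subprocess

host = "192.168.0.1"  # placeholder; the address must be a quoted string

# -n 1: send one echo request; -w 1: per-reply wait time (Windows ping.exe flags)
ping = subprocess.Popen(["ping.exe", "-n", "1", "-w", "1", host],
                        stdout=subprocess.PIPE).communicate()[0]

output = str(ping)
if ("unreachable" in output) or ("timed" in output) or ("failure" in output):
    print("VPN Not Connected")
else:
    print("VPN Connected")
```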
|
2019/03/18
|
[
"https://Stackoverflow.com/questions/55218096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8042963/"
] |
For three numbers specifically, there are two basic approaches:
* you can sort the three numbers and return the middle number from the sorted array. For this, a three-stage sorting network is generally useful. To build this, use this primitive which swaps `r0` and `r1` if `r0` is larger than `r1`, using `r3` as a temporary register:
```
cmp r0, r1 # if (r0 > r1)
movgt r3, r1 # r3 = r1
movgt r1, r0 # r1 = r0
movgt r0, r3 # r0 = r3
```
* alternatively, you can compute the maximum and minimum of the three numbers and subtract them from their sum, yielding the number in the middle. For example, if the three numbers are in `r0`, `r1`, and `r2`, this can be done by:
```
cmp r0, r1 # if (r0 > r1)
movgt r3, r0 # then r3 = r0 (r3 is max)
movgt r4, r1 # then r4 = r1 (r4 is min)
movle r3, r1 # else r3 = r1
movle r4, r0 # else r4 = r0
cmp r2, r3 # if (r2 > r3)
movgt r3, r2 # then r3 = r2
cmp r4, r2 # if (r4 > r2)
movgt r4, r2 # then r4 = r2
add r5, r0, r1 # r5 = r0 + r1 (r5 is middle)
add r5, r5, r2 # r5 = r5 + r2
sub r5, r5, r3 # r5 = r5 - r3
sub r5, r5, r4 # r5 = r5 - r4
```
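The arithmetic behind the second approach is language-agnostic, so here is a quick Python sketch just to sanity-check the identity (not part of the assembly itself):
```
def middle_of_three(a, b, c):
    # the middle value is the total minus the two extremes
    return a + b + c - max(a, b, c) - min(a, b, c)

assert middle_of_three(3, 1, 2) == 2
assert middle_of_three(5, 5, 1) == 5
```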
|
What was the exact problem you encountered? Your `CMP` instruction is fine, and will set the status flags depending on the relative values of `R0` and `R1`, so you can then use a conditional branch (e.g. `BHI` or `BGT`) or one of the `IT` family of instructions that will allow you to execute other instructions conditionally using the same condition codes.
There's a quick reference for the Thumb-2 instruction set (as used by most of the Cortex-M devices, which I'll assume you're using in the absence of any other information) [here](http://infocenter.arm.com/help/topic/com.arm.doc.qrc0001m/QRC0001_UAL.pdf), and there's plenty more documentation on conditional branching and conditional execution on the ARM Infocenter site, for example [here](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0552a/CIHDFHCC.html) for the Cortex-M3.
|
56,744,322
|
I am creating an e-commerce site, and when I try adding an item to the cart it returns the error shown in the traceback below.
It is complaining about this line of code in the view:
```
else:
order.items.add(order_item)
```
View
```
def add_to_cart(request, slug):
item = get_object_or_404(Item, slug=slug)
order_item = OrderItem.objects.get_or_create(
item=item,
user = request.user,
ordered = False
)
order_qs = Order.objects.filter(user=request.user, ordered=False)
if order_qs.exists():
order = order_qs[0]
#check if the order item is in the order
if order.items.filter(item__slug=item.slug).exists():
order_item.quantity += 1
order_item.save()
else:
order.items.add(order_item)
else:
ordered_date = timezone.now()
order = Order.objects.create(user=request.user, ordered_date=ordered_date)
order.items.add(order_item)
return redirect("core:product", slug=slug)
```
Model
```
class OrderItem(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
ordered = models.BooleanField(default=False)
item = models.ForeignKey(Item, on_delete=models.CASCADE)
quantity = models.IntegerField(default=1)
def __str__(self):
return f"{self.quantity} of {self.item.title}"
class Order(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
items = models.ManyToManyField(OrderItem)
start_date = models.DateTimeField(auto_now_add= True)
ordered_date = models.DateTimeField()
ordered = models.BooleanField(default=False)
def __str__(self):
return self.user.username
```
Traceback
```
Internal Server Error: /add-to-cart/pants-2/
Traceback (most recent call last):
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/core/views.py", line 38, in add_to_cart
order.items.add(order_item)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 965, in add
through_defaults=through_defaults,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1092, in _add_items
'%s__in' % target_field_name: new_ids,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/query.py", line 892, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/query.py", line 910, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1290, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1318, in _add_q
split_subq=split_subq, simple_col=simple_col,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1251, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1116, in build_lookup
lookup = lookup_class(lhs, rhs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/lookups.py", line 20, in __init__
self.rhs = self.get_prep_lookup()
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_lookups.py", line 59, in get_prep_lookup
self.rhs = [target_field.get_prep_value(v) for v in self.rhs]
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_lookups.py", line 59, in <listcomp>
self.rhs = [target_field.get_prep_value(v) for v in self.rhs]
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 966, in get_prep_value
return int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'OrderItem'
[24/Jun/2019 20:49:06] "GET /add-to-cart/pants-2/ HTTP/1.1" 500 150701
```
I need it to create a new item in the cart if it doesn't exist, and if it already exists in the cart, to increment that item's quantity by 1 instead of creating the same thing all over again.
|
2019/06/24
|
[
"https://Stackoverflow.com/questions/56744322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10374065/"
] |
The Django [**`get_or_create(..)`** [Django-doc]](https://docs.djangoproject.com/en/dev/ref/models/querysets/#get-or-create), does *not* return a model instance, it returns a 2-tuple with the object, and a boolean (whether it created a record or not). Or as written in the documentation:
>
> (..)
>
>
> Returns a tuple of `(object, created)`, where `object` is the
> retrieved or created object and `created` is a boolean specifying
> whether a new object was created.
>
>
> (..)
>
>
>
You can easily fix this by using iterable unpacking however:
```
def add_to_cart(request, slug):
item = get_object_or_404(Item, slug=slug)
order_item, __ = OrderItem.objects.get_or_create(
item=item,
user = request.user,
ordered = False
)
order_qs = Order.objects.filter(user=request.user, ordered=False)
if order_qs.exists():
order = order_qs[0]
#check if the order item is in the order
if order.items.filter(item__slug=item.slug).exists():
order_item.quantity += 1
order_item.save()
else:
order.items.add(order_item)
else:
ordered_date = timezone.now()
order = Order.objects.create(user=request.user, ordered_date=ordered_date)
order.items.add(order_item)
return redirect("core:product", slug=slug)
```
Here we thus assign the result of `OrderItem.objects.get_or_create(..)` to `order_item, __`, with `__` a "throw away variable".
|
Add `created` to the assignment, as below:
```
order_item, created = OrderItem.objects.get_or_create(
item=item,
user = request.user,
ordered = False
)
```
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char,position,string):
return string[:position] + char + string[position:]
```
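For example, applying it repeatedly, working from the highest position down so earlier indexes don't shift (the 8-4-4-4 positions below are assumptions matching the question's target layout):
```
def insert(char, position, string):
    return string[:position] + char + string[position:]

s = "12345678aaaa12345678bbbbbbbb"
for pos in (20, 16, 12, 8):  # highest first so indexes stay valid
    s = insert("-", pos, s)
print(s)  # 12345678-aaaa-1234-5678-bbbbbbbb
```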
|
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted in between, as per your wish.
Consider the string s = "12345678aaaa12345678bbbbbbbb"
Giving `s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphens as you wish by adjusting the `:` values.
For more methods to add the hyphen, refer to this question thread for answers on how to insert a hyphen:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted in between, as per your wish.
Consider the string s = "12345678aaaa12345678bbbbbbbb"
Giving `s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphens as you wish by adjusting the `:` values.
For more methods to add the hyphen, refer to this question thread for answers on how to insert a hyphen:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
|
Simplest solution:
```
str = '12345678aaaa12345678bbbbbbbb'
indexes = [8, 4, 4, 4]
i = -1
for index in indexes:
i = i + index + 1
str = str[:i] + '-' + str[i:]
print str
```
Prints: `12345678-aaaa-1234-5678-bbbbbbbb`
You are free to change `indexes` array to achieve what you want.
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted in between, as per your wish.
Consider the string s = "12345678aaaa12345678bbbbbbbb"
Giving `s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphens as you wish by adjusting the `:` values.
For more methods to add the hyphen, refer to this question thread for answers on how to insert a hyphen:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
|
You can follow this process:
```
def insert_(str, idx):
strlist = list(str)
strlist.insert(idx, '-')
return ''.join(strlist)
str = '12345678aaaa12345678bbbbbbbb'
indexes = [8, 4, 4, 4]
resStr = ""
idx = 0
for val in indexes:
idx += val
resStr = insert_(str,idx)
str = resStr
idx += 1
print(str)
```
output :
```
12345678-aaaa-1234-5678-bbbbbbbb
```
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted in between, as per your wish.
Consider the string s = "12345678aaaa12345678bbbbbbbb"
Giving `s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphens as you wish by adjusting the `:` values.
For more methods to add the hyphen, refer to this question thread for answers on how to insert a hyphen:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
|
This doesn't exactly create the string you want, but I'm posting it anyway.
It finds all the indexes where digit becomes alpha and vice versa.
Then it inserts "-" at these indexes.
```
a = "12345678aaaa12345678bbbbbbbb"
lst = list(a)
index = []
for ind,i in enumerate(list(a)[:-1]):
if (i.isdigit() and lst[ind+1].isalpha()) or (i.isalpha() and lst[ind+1].isdigit()):
index.append(ind)
for i in index[::-1]:
lst.insert(i+1,"-")
''.join(lst)
```
'12345678-aaaa-12345678-bbbbbbbb'
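As a side note, a single `re.sub` with zero-width lookarounds can insert the dashes at the same digit/letter boundaries (a sketch of an alternative technique, assuming only lowercase letters as in the sample):
```
import re

a = "12345678aaaa12345678bbbbbbbb"
# insert '-' wherever a digit is followed by a letter, or a letter by a digit
print(re.sub(r"(?<=\d)(?=[a-z])|(?<=[a-z])(?=\d)", "-", a))
# 12345678-aaaa-12345678-bbbbbbbb
```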
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted in between, as per your wish.
Consider the string s = "12345678aaaa12345678bbbbbbbb"
Giving `s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphens as you wish by adjusting the `:` values.
For more methods to add the hyphen, refer to this question thread for answers on how to insert a hyphen:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
|
If you want to do this in one pass, you can do it like this:
```
str = "12345678aaaa12345678bbbbbbbb"
def insert(char,positions,string):
result = ""
for post in range(0, len(positions)):
print(positions[post])
if post == 0:
result += string[:positions[post]] + char
elif post == (len(positions) -1 ):
result += string[positions[post-1]:positions[post]] + char + string[positions[post]:]
else:
result += string[positions[post-1]:positions[post]] + char
print(result)
return result
insert("-", [8, 12, 16, 20], str)
```
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char,position,string):
return string[:position] + char + string[position:]
```
|
Simplest solution:
```
str = '12345678aaaa12345678bbbbbbbb'
indexes = [8, 4, 4, 4]
i = -1
for index in indexes:
i = i + index + 1
str = str[:i] + '-' + str[i:]
print str
```
Prints: `12345678-aaaa-1234-5678-bbbbbbbb`
You are free to change `indexes` array to achieve what you want.
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char,position,string):
return string[:position] + char + string[position:]
```
|
You can follow this process:
```
def insert_(str, idx):
strlist = list(str)
strlist.insert(idx, '-')
return ''.join(strlist)
str = '12345678aaaa12345678bbbbbbbb'
indexes = [8, 4, 4, 4]
resStr = ""
idx = 0
for val in indexes:
idx += val
resStr = insert_(str,idx)
str = resStr
idx += 1
print(str)
```
output :
```
12345678-aaaa-1234-5678-bbbbbbbb
```
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char,position,string):
return string[:position] + char + string[position:]
```
|
This doesn't exactly create the string you want, but I'm posting it anyway.
It finds all the indexes where digit becomes alpha and vice versa.
Then it inserts "-" at these indexes.
```
a = "12345678aaaa12345678bbbbbbbb"
lst = list(a)
index = []
for ind,i in enumerate(list(a)[:-1]):
if (i.isdigit() and lst[ind+1].isalpha()) or (i.isalpha() and lst[ind+1].isdigit()):
index.append(ind)
for i in index[::-1]:
lst.insert(i+1,"-")
''.join(lst)
```
'12345678-aaaa-12345678-bbbbbbbb'
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char,position,string):
return string[:position] + char + string[position:]
```
|
If you want to do this in one pass, you can do it like this:
```
str = "12345678aaaa12345678bbbbbbbb"
def insert(char,positions,string):
result = ""
for post in range(0, len(positions)):
print(positions[post])
if post == 0:
result += string[:positions[post]] + char
elif post == (len(positions) -1 ):
result += string[positions[post-1]:positions[post]] + char + string[positions[post]:]
else:
result += string[positions[post-1]:positions[post]] + char
print(result)
return result
insert("-", [8, 12, 16, 20], str)
```
|
14,412,907
|
I'm trying to scrape the [NDTV](http://en.wikipedia.org/wiki/NDTV) website for news titles. [This](http://archives.ndtv.com/articles/2012-01.html) is the page I'm using as a HTML source. I'm using BeautifulSoup (bs4) to handle the HTML code, and I've got everything working, except my code breaks when I encounter the hindi titles in the page I linked to.
My code so far is :
```
import urllib2
from bs4 import BeautifulSoup
htmlUrl = "http://archives.ndtv.com/articles/2012-01.html"
FileName = "NDTV_2012_01.txt"
fptr = open(FileName, "w")
fptr.seek(0)
page = urllib2.urlopen(htmlUrl)
soup = BeautifulSoup(page, from_encoding="UTF-8")
li = soup.findAll( 'li')
for link_tag in li:
hypref = link_tag.find('a').contents[0]
strhyp = str(hypref)
fptr.write(strhyp)
fptr.write("\n")
```
The error I get is:
```
Traceback (most recent call last):
File "./ScrapeTemplate.py", line 30, in <module>
strhyp = str(hypref)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128)
```
I got the same error even when I didn't include the `from_encoding` parameter. I initially used it as `fromEncoding`, but python warned me that it was deprecated usage.
How do I fix this? From what I've read I need to either avoid the hindi titles or explicitly encode it into non-ascii text, but I don't know how to do that. Any help would be greatly appreciated!
|
2013/01/19
|
[
"https://Stackoverflow.com/questions/14412907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1765768/"
] |
What you see is a NavigableString instance (which is derived from the Python unicode type):
```
(Pdb) hypref.encode('utf-8')
'NDTV'
(Pdb) hypref.__class__
<class 'bs4.element.NavigableString'>
(Pdb) hypref.__class__.__bases__
(<type 'unicode'>, <class 'bs4.element.PageElement'>)
```
You need to convert to utf-8 using
```
hypref.encode('utf-8')
```
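In the context of the original loop, that would look roughly like this (a sketch assuming every `li` really contains an `a` tag, as in the original script):
```
for link_tag in li:
    hypref = link_tag.find('a').contents[0]
    strhyp = hypref.encode('utf-8')  # UTF-8 bytes, safe to write in Python 2
    fptr.write(strhyp)
    fptr.write("\n")
```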
|
```
strhyp = hypref.encode('utf-8')
```
<http://joelonsoftware.com/articles/Unicode.html>
|
29,318,565
|
I am writing a raingauge precipitation calculator based on the radius of the raingauge. When I run my script, I get this error message:
```
Type de raingauge radius [cm]: 5.0
Traceback (most recent call last):
File "pluviometro.py", line 27, in <module>
area_bocal = (pi * (raio_bocal * raio_bocal)) # cm.cm
TypeError: can't multiply sequence by non-int of type 'str'
```
I am using
`raio_bocal = input("Type de raingauge radius [cm]:")`
for data input. When using Python 2 the conversion was automatic.
How can I get a `float` value from the input using Python 3?
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29318565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/824522/"
] |
As mentioned in the [docs](https://docs.python.org/3/library/functions.html#input)
>
> The function then reads a line from input, **converts it to a string** (stripping a trailing newline), and returns that
>
>
>
So you need to type cast it to `float` explicitly
```
raio_bocal = float(input("Type de raingauge radius [cm]:"))
```
|
You need to cast to float; input returns a string in Python 3:
```
float(input("Type de raingauge radius [cm]:"))
```
It's probably safer to use a while loop with a try/except when casting input:
```
while True:
inp = input("Type de raingauge radius [cm]:")
try:
raio_bocal = float(inp)
break
except ValueError:
print("Invalid input")
```
|
35,258,492
|
I have a directory containing a certificate bundle, a Python script and a Node script. Both scripts make a GET request to the same URL and are provided with the same certificate bundle. The Python script makes the request as expected however the node script throws this error:
>
> { [Error: unable to verify the first certificate] code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' }
>
>
>
The Python script *(Python 3.4.3 and the [requests](https://github.com/kennethreitz/requests) library)*:
```
import requests
print(requests.get(url, verify='/tmp/cert/cacert.pem'))
```
The node script *(Node 4.2.6 and the [request](https://github.com/request/request) library)*:
```
var fs = require('fs');
var request = require('request');
request.get({
url: url,
agentOptions: {
ca: fs.readFileSync('/tmp/cert/cacert.pem')
}
}, function (error, response, body) {
if (error) {
console.log(error);
} else {
console.log(body);
}
});
```
Both are using the same OpenSSL version:
```
$ python -c 'import ssl; print(ssl.OPENSSL_VERSION)'
OpenSSL 1.0.2e-fips 3 Dec 2015
$ node -pe process.versions.openssl
1.0.2e
```
I don't believe the problem to be with the certificate bundle and I don't want to turn off host verification in Node.
Does anybody know why Node is throwing this error?
|
2016/02/07
|
[
"https://Stackoverflow.com/questions/35258492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066031/"
] |
The [documentation](https://nodejs.org/api/https.html#https_https_request_options_callback) describes the `ca` option as follows:
>
> **ca: A string, Buffer or array of strings or Buffers of trusted certificates in PEM format. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.**
>
>
>
So it doesn't expect a CA bundle. The fix is simple, however: just split the bundle like so:
```
var fs = require('fs');
var request = require('request');
var certs = fs.readFileSync('/tmp/cert/cacert.pem').toString().split("\n\n");
request.get({
url: url,
agentOptions: {
ca: certs
}
}, function (error, response, body) {
if (error) {
console.log(error);
} else {
console.log(body);
}
});
```
|
Maybe you can use this module, which fixes the problem by bundling the root certificates usually trusted by browsers:
<https://www.npmjs.com/package/ssl-root-cas>
|
43,935,569
|
My device will send JSON data like this:
```
[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00004c4978dbf5b4", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000000000c3", "data":"4702483016331210179183", "frameCnt":1, "fport":2}]
```
but sometimes I receive multiple JSON payloads (two or more):
```
[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00001c497b48dbf5", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000050100e8", "data":"4702483016331210179183", "frameCnt":1, "fport":2}],[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00001c497b48dbf5", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000050100e8", "data":"4702483016331210179183", "frameCnt":1, "fport":2}]
```
When I parse the multiple JSON payloads with
```
json_Dict = json.loads(jsonData)
```
then I get:
```
File "/usr/lib/python2.7/json/decoder.py", line 369, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 303 - line 1 column 1818 (char 302 - 1817)
```
How can I parse each of the multiple JSON payloads?
Thanks for your help.
|
2017/05/12
|
[
"https://Stackoverflow.com/questions/43935569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8002033/"
] |
Because you have multiple top-level JSON arrays, you should wrap them in a list:
```
json_List = json.loads('[' + jsonData + ']')
```
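Once wrapped, the result is a list of the device's one-element lists, so iterating it is straightforward (a sketch; the field names are taken from the sample payload in the question):
```
import json

json_List = json.loads('[' + jsonData + ']')
for record_list in json_List:
    for record in record_list:
        print(record["macAddr"], record["data"])
```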
|
Paste it into a tool like [JSONLINT](https://jsonlint.com/) and you get:
>
> Error: Parse error on line 17:
> ...": 1, "fport": 2}], [{ "channel": 924
> ---------------------^
> Expecting 'EOF', got ','
>
>
>
which is the cause of your error. This is not *valid* JSON.
The correct structure would be something like `[[...],[...]]`. You have `[...],[...]`, which is not correct.
|
5,524,241
|
I have two custom Django fields, a `JSONField` and a `CompressedField`, both of which work well. I would like to also have a `CompressedJSONField`, and I was rather hoping I could do this:
```
class CompressedJSONField(JSONField, CompressedField):
pass
```
but on import I get:
```
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
I can find information about using models with multiple inheritance in Django, but nothing about doing the same with fields. Should this be possible? Or should I just give up at this stage?
**edit:**
Just to be clear, I don't *think* this has anything to do with the specifics of my code, as the following code has exactly the same problem:
```
class CustomField(models.TextField, models.CharField):
pass
```
**edit 2:**
I'm using Python 2.6.6 and Django 1.3 at present. Here is the full code of my stripped-right-down test example:
`customfields.py`
-----------------
```
from django.db import models
class CompressedField(models.TextField):
""" Standard TextField with automatic compression/decompression. """
__metaclass__ = models.SubfieldBase
description = 'Field which compresses stored data.'
def to_python(self, value):
return value
def get_db_prep_value(self, value, **kwargs):
return super(CompressedField, self)\
.get_db_prep_value(value, prepared=True)
class JSONField(models.TextField):
""" JSONField with automatic serialization/deserialization. """
__metaclass__ = models.SubfieldBase
description = 'Field which stores a JSON object'
def to_python(self, value):
return value
def get_db_prep_save(self, value, **kwargs):
return super(JSONField, self).get_db_prep_save(value, **kwargs)
class CompressedJSONField(JSONField, CompressedField):
pass
```
`models.py`
-----------
```
from django.db import models
from customfields import CompressedField, JSONField, CompressedJSONField
class TestModel(models.Model):
name = models.CharField(max_length=150)
compressed_field = CompressedField()
json_field = JSONField()
compressed_json_field = CompressedJSONField()
def __unicode__(self):
return self.name
```
as soon as I add the `compressed_json_field = CompressedJSONField()` line I get errors when initializing Django.
|
2011/04/02
|
[
"https://Stackoverflow.com/questions/5524241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/88411/"
] |
After doing a few quick tests I found that if you remove the `__metaclass__` from the JSON and Compressed fields and put it in the CompressedJSON field, it compiles. If you then need the JSON or Compressed fields on their own, subclass them and just add `__metaclass__ = models.SubfieldBase`.
I have to admit that I didn't do any heavy testing with this:
```
from django.db import models
class CompressedField(models.TextField):
""" Standard TextField with automatic compression/decompression. """
description = 'Field which compresses stored data.'
def to_python(self, value):
return value
def get_db_prep_value(self, value, **kwargs):
return super(CompressedField, self).get_db_prep_value(value, prepared=True)
class JSONField(models.TextField):
""" JSONField with automatic serialization/deserialization. """
description = 'Field which stores a JSON object'
def to_python(self, value):
return value
def get_db_prep_save(self, value, **kwargs):
return super(JSONField, self).get_db_prep_save(value, **kwargs)
class CompressedJSONField(JSONField, CompressedField):
__metaclass__ = models.SubfieldBase
class TestModel(models.Model):
name = models.CharField(max_length=150)
#compressed_field = CompressedField()
#json_field = JSONField()
compressed_json_field = CompressedJSONField()
def __unicode__(self):
return self.name
```
If you then want to use the JSON and Compressed fields separately, I assume this idea will work:
```
class JSONFieldSubClass(JSONField):
__metaclass__ = models.SubfieldBase
```
Honestly ... I don't really understand any of this.
**EDIT base method hack**
```
class CompressedJSONField(JSONField, CompressedField):
__metaclass__ = models.SubfieldBase
def to_python(self, value):
value = JSONField.to_python(self, value)
value = CompressedField.to_python(self, value)
return value
```
The other way is to give the `to_python()` methods on the base classes unique names and call them in your inherited class's `to_python()` method,
or maybe check out this [answer](https://stackoverflow.com/questions/2611892/get-python-class-parents/2611897#2611897).
**EDIT**
After some reading: if you implement a call to `super(class, self).method(args)` in the first base's `to_python()`, then it will call the second base's. If you use `super` consistently you shouldn't have any problems. <http://docs.python.org/library/functions.html#super> is worth checking out, and so is <http://www.artima.com/weblogs/viewpost.jsp?thread=237121>.
```
class base1(object):
def name(self, value):
print "base1", value
super(base1, self).name(value)
def to_python(self, value):
value = value + " base 1 "
if(hasattr(super(base1, self), "to_python")):
value = super(base1, self).to_python(value)
return value
class base2(object):
def name(self, value):
print "base2", value
def to_python(self, value):
value = value + " base 2 "
if(hasattr(super(base2, self), "to_python")):
value = super(base2, self).to_python(value)
return value
class superClass(base1, base2):
def name(self, value):
super(superClass, self).name(value)
print "super Class", value
```
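Called on the subclass, the chain then runs left to right through the MRO (a quick usage sketch in the same Python 2 style as above):
```
sc = superClass()
print sc.to_python("start:")  # -> start: base 1  base 2
sc.name("hello")              # prints: base1 hello / base2 hello / super Class hello
```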
|
It is hard to understand when exactly you are getting that error. But looking at the Django code, there is a similar implementation (multiple inheritance);
refer to **class ImageFieldFile(ImageFile, FieldFile)**
in django/db/models/fields.
|
430,226
|
I need to poll a web service, in this case twitter's API, and I'm wondering what the conventional wisdom is on this topic. I'm not sure whether this is important, but I've always found feedback useful in the past.
A couple scenarios I've come up with:
1. The querying process starts every X seconds, eg a cron job runs a python script
2. A process continually loops and queries at each iteration, eg ... well, here is where I enter unfamiliar territory. Do I just run a python script that doesn't end?
Thanks for your advice.
ps - regarding the particulars of twitter: I know that it sends emails for following and direct messages, but sometimes one might want the flexibility of parsing @replies. In those cases, I believe polling is as good as it gets.
pps - twitter limits bots to 100 requests per 60 minutes. I don't know if this also limits web scraping or rss feed reading. Anyone know how easy or hard it is to be whitelisted?
Thanks again.
|
2009/01/10
|
[
"https://Stackoverflow.com/questions/430226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
"Do I just run a python script that doesn't end?"
How is this unfamiliar territory?
```
import time

polling_interval = 36.0  # seconds between polls (100 requests in 3600 seconds)

running = True
while running:
    start = time.time()  # wall-clock time; time.clock() measures CPU time on Unix
    poll_twitter()
    anything_else_that_seems_important()
    work_duration = time.time() - start
    # guard against a negative argument if an iteration overruns the interval
    time.sleep(max(0.0, polling_interval - work_duration))
```
It's just a loop.
|
You should have a page that acts as a ping or heartbeat page. Then you have another process that "tickles" or hits that page; usually you can set this up in the control panel of your web host, or use a cron job if you have local access. The script can keep statistics on how often it has polled in a database or some other data store, and then you poll the service as often as you really need to, of course limiting it to whatever the provider's limit is. You definitely don't want to (and certainly don't want to rely on) a Python script that "doesn't end." :)
|
42,673,016
|
I tried to make a checkbutton which is supposed to activate a function "rond", but it's not working... What have I done wrong?
```
from tkinter import*
def rond():
if okok.get()==1:
print("ok")
okok = BooleanVar()
okok.set(0)
root = Tk()
can = Canvas(root, width=200, height=150, bg="light yellow")
can.bind("<ButtonPress-1>", variable=okok, onvalue=1, offvalue=0, command=rond)
can.pack(side="top")
root.mainloop()
```
After it has run, this appears:
```
Traceback (most recent call last):
  File "/PycharmProjects/untitled/testtest.py", line 7, in <module>
    okok = BooleanVar()
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tkinter/__init__.py", line 389, in __init__
    Variable.__init__(self, master, value, name)
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tkinter/__init__.py", line 233, in __init__
    self._root = master._root()
AttributeError: 'NoneType' object has no attribute '_root'
```
|
2017/03/08
|
[
"https://Stackoverflow.com/questions/42673016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7678528/"
] |
There are three problems:
1. The exception you are getting is because you have to create `root = Tk()` before the `BooleanVar`.
2. As already noted, you should use the [`Checkbutton`](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/checkbutton.html) widget instead of `Canvas`. The `command` then goes directly into the constructor; no `bind` neaded. Also, your `onvalue` and `offvalue` are the same as the default, so those are not really needed, either.
```
can = Checkbutton(root, width=20, height=15, bg="light yellow",
variable=okok, onvalue=1, offvalue=0, command=rond)
```
3. Without an image icon, the `width` and `height` will be in characters (i.e. lines and columns of text), so the numbers you entered are much too high. Alternatively, provide an image icon.
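Putting the three fixes together, a minimal runnable sketch might look like this (the label text and widget sizes are arbitrary assumptions):
```
from tkinter import *

root = Tk()  # must exist before any BooleanVar is created

okok = BooleanVar()
okok.set(0)

def rond():
    if okok.get() == 1:
        print("ok")

can = Checkbutton(root, text="rond", width=20, height=2, bg="light yellow",
                  variable=okok, command=rond)
can.pack(side="top")

root.mainloop()
```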
|
It looks like you're using a canvas and not the check button. I would try something like this:
`cbutton = Checkbutton(root, etc, etc)`
or check out effbot.org for a good resource.
|
70,150,128
|
This is my project structure
[](https://i.stack.imgur.com/BrsjM.png)
I am able to access the default SQLite database `db.sqlite3` created by Django, by importing the models directly inside of my views files
Like - `from basic.models import table1`
Now, I have another database called `UTF.db` which was created by someone else, and I want to access its data and perform normal QuerySet operations on the retrieved data.
The problem is I don't know how to import the tables inside that database, as they are not defined in any models file in my project, since it was created by someone else.
I tried adding the tables inside the `UTF.db` database to a `models.py` file by first adding it to the `settings.py` file like the following
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
},
'otherdb':{
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'UTF.db',
}
}
```
And then using the `inspectdb` command to add the tables to an existing `models.py` file
The command I tried out -
`python manage.py inspectdb > models.py`
But, that just causes my models.py file to get emptied out
Does anyone know how this can be solved?
In the end, I wish to import the table data inside of my views files by importing the respective model
|
2021/11/29
|
[
"https://Stackoverflow.com/questions/70150128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11827709/"
] |
You can specify a specific database, as described in the [documentation](https://docs.djangoproject.com/en/3.2/ref/django-admin/#cmdoption-inspectdb-database):
```
python manage.py inspectdb --database=otherdb > your_app/models.py
```
Also, if possible, putting the otherdb models in a different app is better.
|
You can attach the second database to the first one and use it from within the first one. You can use tables from both databases in a single SQL query.
Here is the doc <https://www.sqlite.org/lang_attach.html>.
```
attach database '/path/to/dbfile.sqlite' as db_remote;
select *
from some_table
join db_remote.remote_table on ....
detach database db_remote;
```
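If you need the same trick outside Django, the attach can also be issued from Python's built-in `sqlite3` module (a sketch; the paths are placeholders):
```
import sqlite3

conn = sqlite3.connect("db.sqlite3")
conn.execute("ATTACH DATABASE 'UTF.db' AS db_remote")

# tables from both files are now visible in one connection
for (name,) in conn.execute(
        "SELECT name FROM db_remote.sqlite_master WHERE type='table'"):
    print(name)

conn.execute("DETACH DATABASE db_remote")
```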
|
70,150,128
|
This is my project structure
[](https://i.stack.imgur.com/BrsjM.png)
I am able to access the default SQLite database `db.sqlite3` created by Django, by importing the models directly inside of my views files
Like - `from basic.models import table1`
Now, I have another database called `UTF.db` which was created by someone else, and I want to access its data and perform normal QuerySet operations on the retrieved data.
The problem is I don't know how to import the tables inside that database, as they are not defined in any models file in my project, since it was created by someone else.
I tried adding the tables inside the `UTF.db` database to a `models.py` file by first adding it to the `settings.py` file like the following
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
},
'otherdb':{
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'UTF.db',
}
}
```
And then using the `inspectdb` command to add the tables to an existing `models.py` file
The command I tried out -
`python manage.py inspectdb > models.py`
But, that just causes my models.py file to get emptied out
Does anyone know how this can be solved?
In the end, I wish to import the table data inside of my views files by importing the respective model
|
2021/11/29
|
[
"https://Stackoverflow.com/questions/70150128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11827709/"
] |
You can specify a specific database, as described in the [documentation](https://docs.djangoproject.com/en/3.2/ref/django-admin/#cmdoption-inspectdb-database):
```
python manage.py inspectdb --database=otherdb > your_app/models.py
```
Also, if possible, putting the otherdb models in a different app is better.
|
@sevdimali's answer works fine to create the new models from an existing database
Once that is done you need to use the command -
`python manage.py makemigrations`
To add the changes to the migrations folder
Then use the command
`python manage.py migrate --fake-initial`
To add the new changes to your database
**Note**: Use --fake-initial to ignore the already created tables while adding your new tables (so that they don't get recreated)
If there are no pre-existing models you can use
`python manage.py migrate`
Just like you do it normally
---
Now, when importing the database model inside of your views files, you can write
`from Appname.models import Tablename`
Where Appname is the App in which you added your new models
And to access its data:
`qs=Tablename.objects.using('otherdb').all()`
**Note**: `otherdb` is the name of the external database specified in the `settings.py` file
Now you can access all of your data in a similar way!!
|
6,282,519
|
I'm not sure if I'm even asking this question correctly. I just built my first real program and I want to make it available to people in my office. I'm not sure if I will have access to the shared server, but I was hoping I could simply package the program (I hope I'm using this term correctly) and upload it to a website for my coworkers to download.
I know how to zip a file, but something tells me it's a little more complicated than that :) In fact, some of the people in my office who need the program installed do not have python on their computers already, and I would rather avoid asking everyone to install python before downloading my .py files from my hosting server.
So, is there an easy way to package my program, along with python and the other dependencies, for simple distribution from a website? I tried searching for the answer but I can't find exactly what I'm looking for. Oh, and since this is the first time I have done this- are there any precautions I need to take when sharing these files so that everything runs smoothly?
|
2011/06/08
|
[
"https://Stackoverflow.com/questions/6282519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1382299/"
] |
[PyInstaller](http://www.pyinstaller.org/) or [py2exe](http://www.py2exe.org/) can package your Python program.
PyInstaller is actively maintained; py2exe has not been updated for at least a year. I've used each with success.
There is also [cx_Freeze](http://cx-freeze.sourceforge.net/), which I have not used.
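For instance, with PyInstaller a single-file executable is typically produced with something like this (the script name is a placeholder):
```
pip install pyinstaller
pyinstaller --onefile yourscript.py
```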
|
Take a look at <http://www.py2exe.org/>
|
63,890,399
|
I am working with a dataframe which looks similar to this
```
Ind Pos Sample Ct LogConc RelConc
1 B1 wt1A 26.93 -2.0247878 0.009445223
2 B2 wt1A 27.14 -2.0960951 0.008015026
3 B3 wt1B 26.76 -1.9670628 0.010787907
4 B4 wt1B 26.94 -2.0281834 0.009371662
5 B5 wt1C 26.01 -1.7123939 0.019391264
6 B6 wt1C 26.08 -1.7361630 0.018358492
7 B7 wt1D 25.68 -1.6003396 0.025099232
8 B8 wt1D 25.75 -1.6241087 0.023762457
9 B9 wt1E 22.11 -0.3881154 0.409151879
10 B10 wt1E 22.21 -0.4220713 0.378380453
11 B11 dko1A 22.20 -0.4186757 0.381350463
12 B12 dko1A 22.10 -0.3847199 0.412363423
```
My goal is to calculate the sample-wise average of RelConc, which would result in a dataframe looking something like this.
```
Ind Pos Sample Ct LogConc RelConc AverageRelConc
1 B1 wt1A 26.93 -2.0247878 0.009445223 0.008730124
2 B2 wt1A 27.14 -2.0960951 0.008015026 0.008730124
3 B3 wt1B 26.76 -1.9670628 0.010787907 0.010079785
4 B4 wt1B 26.94 -2.0281834 0.009371662 0.010079785
5 B5 wt1C 26.01 -1.7123939 0.019391264 0.018874878
6 B6 wt1C 26.08 -1.7361630 0.018358492 0.018874878
7 B7 wt1D 25.68 -1.6003396 0.025099232 0.024430845
8 B8 wt1D 25.75 -1.6241087 0.023762457 0.024430845
9 B9 wt1E 22.11 -0.3881154 0.409151879 0.393766166
10 B10 wt1E 22.21 -0.4220713 0.378380453 0.393766166
11 B11 dko1A 22.20 -0.4186757 0.381350463 0.396856943
12 B12 dko1A 22.10 -0.3847199 0.412363423 0.396856943
```
I am fairly new to R and have no idea how to accomplish such a seemingly simple task. In Python, I'd probably loop through each row, check if I have encountered a new sample name, and then calculate the average for all samples above. However, this seems not very "R-like".
If somebody could point me to a solution, I'd be very happy!
Cheers!
|
2020/09/14
|
[
"https://Stackoverflow.com/questions/63890399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11637268/"
] |
In `base R`, we can use `ave` and it is very fast
```
df1$AverageRelConc <- with(df1, ave(RelConc, Sample))
```
-output
```
df1$AverageRelConc
#[1] 0.008730125 0.008730125 0.010079784 0.010079784 0.018874878 0.018874878 0.024430844 0.024430844 0.393766166 0.393766166
#[11] 0.396856943 0.396856943
```
---
Or using `tidyverse`, we group by 'Sample' and get the `mean` of 'RelConc'
```
library(dplyr)
df1 %>%
group_by(Sample) %>%
mutate(AverageRelConc = mean(RelConc, na.rm = TRUE))
```
-output
```
# A tibble: 12 x 7
# Groups: Sample [6]
# Ind Pos Sample Ct LogConc RelConc AverageRelConc
# <int> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
# 1 1 B1 wt1A 26.9 -2.02 0.00945 0.00873
# 2 2 B2 wt1A 27.1 -2.10 0.00802 0.00873
# 3 3 B3 wt1B 26.8 -1.97 0.0108 0.0101
# 4 4 B4 wt1B 26.9 -2.03 0.00937 0.0101
# 5 5 B5 wt1C 26.0 -1.71 0.0194 0.0189
# 6 6 B6 wt1C 26.1 -1.74 0.0184 0.0189
# 7 7 B7 wt1D 25.7 -1.60 0.0251 0.0244
# 8 8 B8 wt1D 25.8 -1.62 0.0238 0.0244
# 9 9 B9 wt1E 22.1 -0.388 0.409 0.394
#10 10 B10 wt1E 22.2 -0.422 0.378 0.394
#11 11 B11 dko1A 22.2 -0.419 0.381 0.397
#12 12 B12 dko1A 22.1 -0.385 0.412 0.397
```
### data
```
df1 <- structure(list(Ind = 1:12, Pos = c("B1", "B2", "B3", "B4", "B5",
"B6", "B7", "B8", "B9", "B10", "B11", "B12"), Sample = c("wt1A",
"wt1A", "wt1B", "wt1B", "wt1C", "wt1C", "wt1D", "wt1D", "wt1E",
"wt1E", "dko1A", "dko1A"), Ct = c(26.93, 27.14, 26.76, 26.94,
26.01, 26.08, 25.68, 25.75, 22.11, 22.21, 22.2, 22.1), LogConc = c(-2.0247878,
-2.0960951, -1.9670628, -2.0281834, -1.7123939, -1.736163, -1.6003396,
-1.6241087, -0.3881154, -0.4220713, -0.4186757, -0.3847199),
RelConc = c(0.009445223, 0.008015026, 0.010787907, 0.009371662,
0.019391264, 0.018358492, 0.025099232, 0.023762457, 0.409151879,
0.378380453, 0.381350463, 0.412363423)), class = "data.frame",
row.names = c(NA,
-12L))
```
|
Try this `tidyverse` option:
```
library(tidyverse)
#Code
df %>% group_by(Sample) %>%
mutate(AvgRelConc=mean(RelConc,na.rm=T))
```
Output:
```
# A tibble: 12 x 7
# Groups: Sample [6]
Ind Pos Sample Ct LogConc RelConc AvgRelConc
<int> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 B1 wt1A 26.9 -2.02 0.00945 0.00873
2 2 B2 wt1A 27.1 -2.10 0.00802 0.00873
3 3 B3 wt1B 26.8 -1.97 0.0108 0.0101
4 4 B4 wt1B 26.9 -2.03 0.00937 0.0101
5 5 B5 wt1C 26.0 -1.71 0.0194 0.0189
6 6 B6 wt1C 26.1 -1.74 0.0184 0.0189
7 7 B7 wt1D 25.7 -1.60 0.0251 0.0244
8 8 B8 wt1D 25.8 -1.62 0.0238 0.0244
9 9 B9 wt1E 22.1 -0.388 0.409 0.394
10 10 B10 wt1E 22.2 -0.422 0.378 0.394
11 11 B11 dko1A 22.2 -0.419 0.381 0.397
12 12 B12 dko1A 22.1 -0.385 0.412 0.397
```
Some data used:
```
#Data
df <- structure(list(Ind = 1:12, Pos = c("B1", "B2", "B3", "B4", "B5",
"B6", "B7", "B8", "B9", "B10", "B11", "B12"), Sample = c("wt1A",
"wt1A", "wt1B", "wt1B", "wt1C", "wt1C", "wt1D", "wt1D", "wt1E",
"wt1E", "dko1A", "dko1A"), Ct = c(26.93, 27.14, 26.76, 26.94,
26.01, 26.08, 25.68, 25.75, 22.11, 22.21, 22.2, 22.1), LogConc = c(-2.0247878,
-2.0960951, -1.9670628, -2.0281834, -1.7123939, -1.736163, -1.6003396,
-1.6241087, -0.3881154, -0.4220713, -0.4186757, -0.3847199),
RelConc = c(0.009445223, 0.008015026, 0.010787907, 0.009371662,
0.019391264, 0.018358492, 0.025099232, 0.023762457, 0.409151879,
0.378380453, 0.381350463, 0.412363423)), class = "data.frame", row.names = c(NA,
-12L))
```
Or you could use `aggregate()`, save the results in a different dataframe, and after that join with the original `df`:
```
#Compute means
dfmeans <- aggregate(RelConc~Sample,df,mean,na.rm=T)
#Now match
df$AvgRelConc <- dfmeans[match(df$Sample,dfmeans$Sample),"RelConc"]
```
Output:
```
Ind Pos Sample Ct LogConc RelConc AvgRelConc
1 1 B1 wt1A 26.93 -2.0247878 0.009445223 0.008730125
2 2 B2 wt1A 27.14 -2.0960951 0.008015026 0.008730125
3 3 B3 wt1B 26.76 -1.9670628 0.010787907 0.010079784
4 4 B4 wt1B 26.94 -2.0281834 0.009371662 0.010079784
5 5 B5 wt1C 26.01 -1.7123939 0.019391264 0.018874878
6 6 B6 wt1C 26.08 -1.7361630 0.018358492 0.018874878
7 7 B7 wt1D 25.68 -1.6003396 0.025099232 0.024430844
8 8 B8 wt1D 25.75 -1.6241087 0.023762457 0.024430844
9 9 B9 wt1E 22.11 -0.3881154 0.409151879 0.393766166
10 10 B10 wt1E 22.21 -0.4220713 0.378380453 0.393766166
11 11 B11 dko1A 22.20 -0.4186757 0.381350463 0.396856943
12 12 B12 dko1A 22.10 -0.3847199 0.412363423 0.396856943
```
|
4,666,527
|
Does anyone have some good resources on learning more advanced regular expressions?
I keep having problems where I want to make sure something is not enclosed in quotation marks.
For example, I am trying to make an expression that will match lines in a Python file containing an equality, such as
```
a = 4
```
which is easy enough, but I am having trouble devising an expression that would be able to separate out multiple terms or ones wrapped in quotes like these:
```
a, b = b, a
a,b = "You say yes, ", "i say no"
```
|
2011/01/12
|
[
"https://Stackoverflow.com/questions/4666527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392485/"
] |
Parsing code with regular expressions is generally not a good idea, as the grammar of a programming language is not a regular language. I'm not much of a python programmer, but I think you would be a lot better off parsing python code with python modules such as [this one](http://docs.python.org/library/parser.html) or [this one](http://docs.python.org/library/ast.html)
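For example, a short sketch with the built-in `ast` module (the sample statement is taken from the question) pulls the targets and values out of an assignment without any regex:
```
import ast

source = 'a, b = "You say yes, ", "i say no"'
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Assign):
        print(ast.dump(node.targets[0]))  # left-hand side (a Tuple of Names)
        print(ast.dump(node.value))       # right-hand side (a Tuple of constants)
```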
|
Python has an excellent [Language Reference](http://docs.python.org/reference/index.html) that also includes [descriptions of the lexical analysis and syntax](http://docs.python.org/reference/introduction.html#notation).
In your case both statements are [assignments](http://docs.python.org/reference/simple_stmts.html#assignment-statements) with a [list of targets](http://docs.python.org/reference/simple_stmts.html#grammar-token-target_list) on the left-hand side and a [list of expressions](http://docs.python.org/reference/expressions.html#grammar-token-expression_list) on the right-hand side.
But since parts of that grammar are context-free and not regular, you can’t use regular expressions (unless they support some kind of recursive patterns). So better use a proper parser [as Jonas H suggested](https://stackoverflow.com/questions/4666527/reqular-expression-to-seperate-equalitys/4666575#4666575).
|
4,666,527
|
Does anyone have some good resources on learning more advanced regular expressions?
I keep having problems where I want to make sure something is not enclosed in quotation marks.
For example, I am trying to make an expression that will match lines in a Python file containing an equality, such as
```
a = 4
```
which is easy enough, but I am having trouble devising an expression that would be able to separate out multiple terms or ones wrapped in quotes like these:
```
a, b = b, a
a,b = "You say yes, ", "i say no"
```
|
2011/01/12
|
[
"https://Stackoverflow.com/questions/4666527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392485/"
] |
I think that you have to tokenize the expression for correct evaluation, but you can detect the pattern using the following regex:
```
r'\s+(\w+)(\s*,\s*\w+)*\s*=\s*(.*?)(\s*,\s*.*?)*'
```
If group(2) and group(4) are not empty, you have to tokenize the expression.
Note that if you have `a,b = f(b,a), g(a,b)`, it is hard to analyze.
|
Python has an excellent [Language Reference](http://docs.python.org/reference/index.html) that also includes [descriptions of the lexical analysis and syntax](http://docs.python.org/reference/introduction.html#notation).
In your case both statements are [assignments](http://docs.python.org/reference/simple_stmts.html#assignment-statements) with a [list of targets](http://docs.python.org/reference/simple_stmts.html#grammar-token-target_list) on the left-hand side and a [list of expressions](http://docs.python.org/reference/expressions.html#grammar-token-expression_list) on the right-hand side.
But since parts of that grammar are context-free and not regular, you can’t use regular expressions (unless they support some kind of recursive patterns). So better use a proper parser [as Jonas H suggested](https://stackoverflow.com/questions/4666527/reqular-expression-to-seperate-equalitys/4666575#4666575).
|
55,338,811
|
I'm currently working on a small project to learn Python. This project creates a random forest, then sets the forest on fire to simulate a forest fire. So I managed to create the forest using a function. The forest is just an array of 0s and 1s: 0 to represent water, 1 to represent a tree.
So now I'm really stuck on how I can simulate a forest fire with my array. I do know the logic behind how the fire should be started and spread, but I do not know how I should write it as code.
The logic is that:
1. I'll use 2 to represent a fire, and 3 to represent burnt areas. So when trees get burned, all the 1s in the array will become 2, then followed by 3. Water, represented by 0, will not be affected. I think this part needs to be done with a for-loop. So one iteration of the loop will change 1 into 2, then the next for-loop will change 2 into 3, and repeat till the end of the array.
2. The fire needs to start from the center of the forest, so I need to figure out the positional index of the center of the array and check that if it is 1, and not 0 to initiate the fire. This can be done with a if-else condition.
3. The fire will then spread outwards to adjacent 1s in north, south, east, west direction, and so on and so forth.
So I'm having trouble writing up the loops to replace 1 with 2, then 2 with 3 such that it spreads from one tree to another.
I managed to write a function to create the random forest. The problem is with setting the forest on fire. I've tried to write some for-loops, but I really have no idea how I should approach this problem (see the sketch after the sample output below for the kind of spread step I mean).
```
#Define parameters for createForest Function. Sets the parameters for the forest too.
width = int(5)
height = int(5)
density = float(0.7) # probability of spawning a tree
forest = [[]]
#Making a random forest
def createForest(width, height, density):
forest = np.random.choice(2, size=(width, height), p=[(1-density), density])
return forest
print(createForest(width, height, density))
forest = createForest(width, height, density) # updates forest into the list
```
This would print out an array of 0s and 1s in random order:
```
[[1 0 1 1 1]
[1 1 1 1 1]
[0 0 1 1 1]
[1 1 1 1 1]
[1 1 1 1 1]]
```
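For reference, the spread step I have in mind would look roughly like this; a minimal sketch, assuming 4-neighbour spread and that the centre cell has already been set to 2:
```
def spread_fire(forest):
    # One tick: burning cells (2) become burnt (3) and ignite adjacent trees (1)
    height, width = len(forest), len(forest[0])
    to_ignite, to_burn_out = [], []
    for r in range(height):
        for c in range(width):
            if forest[r][c] == 2:
                to_burn_out.append((r, c))
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < height and 0 <= nc < width and forest[nr][nc] == 1:
                        to_ignite.append((nr, nc))
    for r, c in to_burn_out:
        forest[r][c] = 3   # burning -> burnt
    for r, c in to_ignite:
        forest[r][c] = 2   # tree -> burning
    return forest
```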
|
2019/03/25
|
[
"https://Stackoverflow.com/questions/55338811",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11255128/"
] |
Use `not exists`:
```
select mygroup from table_name t1
where not exists( select 1 from table_name t2 where t1.var2=t2.var1
and t1.mygroup=t2.mygroup)
and t1.var2 is not null
```
|
Another approach is to use a CTE and temp tables:
1. Find the var2 values that are not included in var1 for the same mygroup.
2. List the mygroups whose var2 is in the list you found in step 1.
Try below:
```
create table #temp (mygroup int, var1 int, var2 int)
insert into #temp values
(1 , 1, null),
(1 , 2, 1),
(1 , 3, 2),
(1 , 4, null),
(2 , 23, 23 ),
(2 , 24, 20 ),
(2 , 26, null),
(3 , 30, 10),
(3 , 20, null),
(3 , 10, null)
;with cte as (
select t.mygroup, t.var1, t2.var2
from #temp t
inner join #temp t2 on t2.var2=t.var1 and t2.mygroup = t.mygroup
)
select var2
into #notIncludeList
from #temp
where var2 not in (select var1 from cte)
select mygroup
from #temp
where var2 in (select var2 from #notIncludeList)
group by mygroup
```
This solution worked in MS SQL Server 2014.
|
59,289,903
|
Please help me make sense of this big fat error output. At this point I don't know which end is up. I have been spinning my wheels for days on this.
This is **not** the first/only package installation that has given me these errors, but the project ran fine anyway, so I ignored it. Now I want a new package, and it won't install. I did not set up this project.
Using *React, Webpack, and Yarn* on *MacOS 10.14.6* with *Node* running through `nvm`. I have *Xcode command line tools* installed independently, and full *XCode* installed. I have not disturbed the `yarn.lock` file since receiving this project, but I believe I did update a few packages through yarn at some point.
```
myprojectweb $ yarn add redux-persist
yarn add v1.19.1
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
warning " > @date-io/moment@0.0.2" has incorrect peer dependency "moment@^2.22.2".
warning " > @material-ui/icons@3.0.2" has incorrect peer dependency "@material-ui/core@^3.0.0".
warning " > @material-ui/pickers@3.1.2" has unmet peer dependency "@date-io/core@^1.3.6".
warning " > connected-react-router@6.5.0" has unmet peer dependency "react-router@^4.3.1 || ^5.0.0".
warning " > mdi-material-ui@5.7.0" has incorrect peer dependency "@material-ui/core@^1.0.0 || ^3.0.0".
warning " > redux-persist@6.0.0" has incorrect peer dependency "redux@>4.0.0".
[4/4] Building fresh packages...
[1/4] ⠐ node-sass
[2/4] ⠐ deasync
[-/4] ⠐ waiting...
error /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync: Command failed.
Exit code: 1
Command: node ./build.js
Arguments:
Directory: /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync
Output:
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.5
gyp info using node@12.13.0 | darwin | x64
gyp info find Python using Python version 2.7.16 found at "/usr/bin/python"
gyp info spawn /usr/bin/python
gyp info spawn args [
gyp info spawn args '/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/csf/Library/Caches/node-gyp/12.13.0',
gyp info spawn args '-Dnode_gyp_dir=/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/csf/Library/Caches/node-gyp/12.13.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
CXX(target) Release/obj.target/deasync/src/deasync.o
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:22:1: warning: 'ToBoolean' is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
X(Boolean)
^
../../nan/nan_converters_43_inl.h:18:12: note: expanded from macro 'X'
val->To ## TYPE(isolate->GetCurrentContext()) \
^
<scratch space>:213:1: note: expanded from here
ToBoolean
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2567:3: note: 'ToBoolean' has been explicitly marked deprecated here
V8_DEPRECATED("ToBoolean can never throw. Use Local version.",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:40:1: warning: 'BooleanValue' is deprecated: BooleanValue can never throw. Use Isolate version. [-Wdeprecated-declarations]
X(bool, Boolean)
^
../../nan/nan_converters_43_inl.h:37:15: note: expanded from macro 'X'
return val->NAME ## Value(isolate->GetCurrentContext()); \
^
<scratch space>:220:1: note: expanded from here
BooleanValue
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2605:3: note: 'BooleanValue' has been explicitly marked deprecated here
V8_DEPRECATED("BooleanValue can never throw. Use Isolate version.",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:103:42: error: no viable conversion from 'v8::Isolate *' to 'Local<v8::Context>'
return scope.Escape(v8::Function::New( isolate
^~~~~~~
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'v8::Isolate *' to 'const v8::Local<v8::Context> &' for 1st argument
class Local {
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'v8::Isolate *' to 'v8::Local<v8::Context> &&' for 1st argument
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:187:13: note: candidate template ignored: could not match 'Local<type-parameter-0-0>' against 'v8::Isolate *'
V8_INLINE Local(Local<S> that)
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:4171:22: note: passing argument to parameter 'context' here
Local<Context> context, FunctionCallback callback,
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:37: error: too few arguments to function call, expected 2, have 1
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~~~~~~ ^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:5426:3: note: 'New' declared here
static Local<Value> New(Isolate* isolate, Local<String> value);
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:58: error: expected '(' for function-style cast or type construction
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~^
../../nan/nan_implementation_12_inl.h:337:60: error: expected expression
return v8::StringObject::New(value).As<v8::StringObject>();
^
```
**... MORE SIMILAR TRACE ERRORS, TOO MANY CHARACTERS FOR STACKOVERFLOW ...**
```
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:563:3: note: 'MarkIndependent' has been explicitly marked deprecated here
V8_DEPRECATED(
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:2690:
../../nan/nan_object_wrap.h:124:26: error: no member named 'IsNearDeath' in 'Nan::Persistent<v8::Object, v8::NonCopyablePersistentTraits<v8::Object> >'
assert(wrap->handle_.IsNearDeath());
~~~~~~~~~~~~~ ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/assert.h:93:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __FILE__, __LINE__, #e) : (void)0)
^
9 warnings and 8 errors generated.
make: *** [Release/obj.target/deasync/src/deasync.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Darwin 18.7.0
gyp ERR! command "/Users/csf/.nvm/versions/node/v12.13.0/bin/node" "/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync
gyp ERR! node -v v12.13.0
gyp ERR! node-gyp -v v5.0.5
warning Error running install script for optional dependency: "/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents: Command failed.
Exit code: 1
Command: node install
Arguments:
Directory: /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
Output:
node-pre-gyp info it worked if it ends with ok
node-pre-gyp info using node-pre-gyp@0.10.0
node-pre-gyp info using node@12.13.0 | darwin | x64
node-pre-gyp info check checked for \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\" (not found)
node-pre-gyp http GET https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp http 404 https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp WARN Tried to download(404): https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.4 and node@12.13.0 (node-v72 ABI, unknown) (falling back to source compile with node-gyp)
node-pre-gyp http 404 status code downloading tarball https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info ok
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info find Python using Python version 2.7.16 found at \"/usr/bin/python\"
gyp info spawn /usr/bin/python
gyp info spawn args [
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/csf/Library/Caches/node-gyp/12.13.0',
gyp info spawn args '-Dnode_gyp_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/csf/Library/Caches/node-gyp/12.13.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info ok
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
SOLINK_MODULE(target) Release/.node
CXX(target) Release/obj.target/fse/fsevents.o
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:22:1: warning: 'ToBoolean' is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
X(Boolean)
^
../../nan/nan_converters_43_inl.h:18:12: note: expanded from macro 'X'
val->To ## TYPE(isolate->GetCurrentContext()) \\\n ^
<scratch space>:213:1: note: expanded from here
ToBoolean
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2567:3: note: 'ToBoolean' has been explicitly marked deprecated here
V8_DEPRECATED(\"ToBoolean can never throw. Use Local version.\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:40:1: warning: 'BooleanValue' is deprecated: BooleanValue can never throw. Use Isolate version. [-Wdeprecated-declarations]
X(bool, Boolean)
^
../../nan/nan_converters_43_inl.h:37:15: note: expanded from macro 'X'
return val->NAME ## Value(isolate->GetCurrentContext()); \\\n ^
<scratch space>:220:1: note: expanded from here
BooleanValue
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2605:3: note: 'BooleanValue' has been explicitly marked deprecated here
V8_DEPRECATED(\"BooleanValue can never throw. Use Isolate version.\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:103:42: error: no viable conversion from 'v8::Isolate *' to 'Local<v8::Context>'
return scope.Escape(v8::Function::New( isolate
^~~~~~~
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'v8::Isolate *' to 'const v8::Local<v8::Context> &' for 1st argument
class Local {
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'v8::Isolate *' to 'v8::Local<v8::Context> &&' for 1st argument
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:187:13: note: candidate template ignored: could not match 'Local<type-parameter-0-0>' against 'v8::Isolate *'
V8_INLINE Local(Local<S> that)
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:4171:22: note: passing argument to parameter 'context' here
Local<Context> context, FunctionCallback callback,
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:37: error: too few arguments to function call, expected 2, have 1
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~~~~~~ ^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:5426:3: note: 'New' declared here
static Local<Value> New(Isolate* isolate, Local<String> value);
^
```
**... MORE SIMILAR TRACE ERRORS, TOO MANY CHARACTERS FOR STACKOVERFLOW ...**
```
In file included from ../fsevents.cc:82:
../src/constants.cc:107:11: warning: 'Set' is deprecated: Use maybe version [-Wdeprecated-declarations]
object->Set(Nan::New<v8::String>(\"kFSEventStreamEventFlagItemIsDir\").ToLocalChecked(), Nan::New<v8::Integer>(kFSEventStreamEventFlagItemIsDir));
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:3402:3: note: 'Set' has been explicitly marked deprecated here
V8_DEPRECATED(\"Use maybe version\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:82:
../src/constants.cc:108:11: warning: 'Set' is deprecated: Use maybe version [-Wdeprecated-declarations]
object->Set(Nan::New<v8::String>(\"kFSEventStreamEventFlagItemIsSymlink\").ToLocalChecked(), Nan::New<v8::Integer>(kFSEventStreamEventFlagItemIsSymlink));
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:3402:3: note: 'Set' has been explicitly marked deprecated here
V8_DEPRECATED(\"Use maybe version\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
../fsevents.cc:85:16: error: variable has incomplete type 'void'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
../fsevents.cc:85:31: error: no member named 'Handle' in namespace 'v8'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
~~~~^
../fsevents.cc:85:48: error: expected '(' for function-style cast or type construction
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
~~~~~~~~~~^
../fsevents.cc:85:50: error: use of undeclared identifier 'exports'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
../fsevents.cc:85:58: error: expected ';' after top level declarator
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
;
30 warnings and 14 errors generated.
make: *** [Release/obj.target/fse/fsevents.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/lib/build.js:196:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Darwin 18.7.0
gyp ERR! command \"/Users/csf/.nvm/versions/node/v12.13.0/bin/node\" \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js\" \"build\" \"--fallback-to-build\" \"--module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\" \"--module_name=fse\" \"--module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64\" \"--napi_version=5\" \"--node_abi_napi=napi\"
gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
gyp ERR! node -v v12.13.0
gyp ERR! node-gyp -v v5.0.3
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/Users/csf/.nvm/versions/node/v12.13.0/bin/node /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node --module_name=fse --module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64 --napi_version=5 --node_abi_napi=napi' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:210:5)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:1021:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
node-pre-gyp ERR! System Darwin 18.7.0
node-pre-gyp ERR! command \"/Users/csf/.nvm/versions/node/v12.13.0/bin/node\" \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\" \"install\" \"--fallback-to-build\"
node-pre-gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
node-pre-gyp ERR! node -v v12.13.0
node-pre-gyp ERR! node-pre-gyp -v v0.10.0
node-pre-gyp ERR! not ok
Failed to execute '/Users/csf/.nvm/versions/node/v12.13.0/bin/node /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node --module_name=fse --module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64 --napi_version=5 --node_abi_napi=napi' (1)"
info This module is OPTIONAL, you can safely ignore this error
myprojectweb $
```
|
2019/12/11
|
[
"https://Stackoverflow.com/questions/59289903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9068961/"
] |
I don't know *why* this worked, but running a regular `yarn upgrade` cleared the errors. I still got warnings about dependencies.
I should have saved the terminal output from `yarn outdated` before and after the upgrade, but alas, I did not.
I still see a few mismatched dependencies.
|
deasync tries to compile itself if it does not find a precompiled version for the current Node version.
This compilation has additional requirements, so it is easier to use deasync/Node combinations for which precompiled packages exist:
* <https://github.com/abbr/deasync/issues/106>
* <https://github.com/abbr/deasync-bin>
|
64,870,829
|
Let's say I have a list
`list = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']`
if you do `list.sort()`
you get back
`['aa', 'aa', 'aaa', 'bb', 'bbb', 'bbbb', 'cc']`
Is there a way in **python 3** to get
`['aaa', 'aa', 'aa', 'bbbb', 'bbb', 'bb', 'cc']`
So within the same lexicographical group order, pick the one with the bigger length first.
Thanks a ton for your help!
|
2020/11/17
|
[
"https://Stackoverflow.com/questions/64870829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10688867/"
] |
You can redefine what less-than means for the strings with a custom class. Use that class as the key for `list.sort` or `sorted`.
```
class C:
def __init__(self, val):
self.val = val
def __lt__(self, other):
min_len = min((len(self.val), len(other.val)))
if self.val[:min_len] == other.val[:min_len]:
return len(self.val) > len(other.val)
else:
return self.val < other.val
lst = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']
slist = sorted(lst, key=C)
print(slist)
```
|
Tuples are ordered lexicographically, so you can use a tuple of (first character of string, negative length) as the sort key:
```python
list.sort(key=lambda s: (s[0], -len(s)))
```
|
64,870,829
|
Let's say I have a list
`list = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']`
if you do `list.sort()`
you get back
`['aa', 'aa', 'aaa', 'bb', 'bbb', 'bbbb', 'cc']`
Is there a way in **python 3** to get
`['aaa', 'aa', 'aa', 'bbbb', 'bbb', 'bb', 'cc']`
So within the same lexicographical group order, pick the one with the bigger length first.
Thanks a ton for your help!
|
2020/11/17
|
[
"https://Stackoverflow.com/questions/64870829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10688867/"
] |
You can redefine what less-than means for the strings with a custom class. Use that class as the key for `list.sort` or `sorted`.
```
class C:
def __init__(self, val):
self.val = val
def __lt__(self, other):
min_len = min((len(self.val), len(other.val)))
if self.val[:min_len] == other.val[:min_len]:
return len(self.val) > len(other.val)
else:
return self.val < other.val
lst = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']
slist = sorted(lst, key=C)
print(slist)
```
|
You were able to explain in words how to compare two strings, so you can write this comparison function in python. However, since python 3, `.sort()` and `sorted()` both expect a **`key`**, rather than a comparison function.
* You can turn the comparison function into a key by using a class and defining its method `.__lt__`, as explained in [tdelaney's answer](https://stackoverflow.com/a/64871192/3080723);
* Or you can use [`functools.cmp_to_key`](https://docs.python.org/3/library/functools.html#functools.cmp_to_key), which was designed specifically for this purpose.
`cmp_to_key` expects a comparison function which returns -1, 0 or +1 respectively for less than, equal or greater than.
```py
def custom_compare_strings(s1, s2):
length = min(len(s1), len(s2))
if s1[:length] == s2[:length]:
return len(s2) - len(s1)
else:
return -1 if s1 < s2 else +1
lst = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']
import functools
slist = sorted(lst, key=functools.cmp_to_key(custom_compare_strings))
print(slist)
# ['aaa', 'aa', 'aa', 'bbbb', 'bbb', 'bb', 'cc']
```
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
You can use string formatting for that.
In your JSON string, replace `random_uuid` with `%s`, then do:
```
payload = payload % random_uuid
```
Another option is to use `json.dumps` to create the json:
```
payload_dict = {
'id': random_uuid,
...
}
payload = json.dumps(payload_dict)
```
|
This code may help.
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "%s",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}''' % random_uuid
headers = {
'content-type': "application/x-www-form-urlencoded",
}
print payload
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
You can use string formatting for that.
In your JSON string, replace `random_uuid` with `%s`, then do:
```
payload = payload % random_uuid
```
Another option is to use `json.dumps` to create the json:
```
payload_dict = {
'id': random_uuid,
...
}
payload = json.dumps(payload_dict)
```
|
Use `str.format` instead. Note that the literal braces in the JSON have to be doubled (`{{` and `}}`) so that `format` does not treat them as replacement fields:
```
payload = '''json={{
"": "0",
"credentials": {{
    "scope": "GLOBAL",
    "id": "{0}",
    "username": "testuser3",
    "password": "bar",
    "description": "biz",
    "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}}
}}'''.format(random_uuid)
```
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
You can use string formatting for that.
In your JSON string, replace `random_uuid` with `%s`, then do:
```
payload = payload % random_uuid
```
Another option is to use `json.dumps` to create the json:
```
payload_dict = {
'id': random_uuid,
...
}
payload = json.dumps(payload_dict)
```
|
You could use direct JSON entry into a `dict`:
```
payload = {
    "": "0",
    "credentials": {
        "scope": "GLOBAL",
        "id": random_uuid,
        "username": "testuser3",
        "password": "bar",
        "description": "biz",
        "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
    }
}
```
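A hedged sketch of how this dict could then be serialized and sent, assuming the endpoint still expects a form field named `json` as in the original script; note the UUID object has to be stringified first:
```python
import json
import requests

payload["credentials"]["id"] = str(random_uuid)  # UUID objects are not JSON-serializable
response = requests.post(url, data={"json": json.dumps(payload)},
                         headers={"content-type": "application/x-www-form-urlencoded"})
print(response.text)
```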
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
Use `str.format` instead. Note that the literal braces in the JSON have to be doubled (`{{` and `}}`) so that `format` does not treat them as replacement fields:
```
payload = '''json={{
"": "0",
"credentials": {{
    "scope": "GLOBAL",
    "id": "{0}",
    "username": "testuser3",
    "password": "bar",
    "description": "biz",
    "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}}
}}'''.format(random_uuid)
```
|
This code may help.
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "%s",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}''' % random_uuid
headers = {
'content-type': "application/x-www-form-urlencoded",
}
print payload
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
You could use direct JSON entry into a `dict`:
```
payload = {
    "": "0",
    "credentials": {
        "scope": "GLOBAL",
        "id": random_uuid,
        "username": "testuser3",
        "password": "bar",
        "description": "biz",
        "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
    }
}
```
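A hedged sketch of how this dict could then be serialized and sent, assuming the endpoint still expects a form field named `json` as in the original script; note the UUID object has to be stringified first:
```python
import json
import requests

payload["credentials"]["id"] = str(random_uuid)  # UUID objects are not JSON-serializable
response = requests.post(url, data={"json": json.dumps(payload)},
                         headers={"content-type": "application/x-www-form-urlencoded"})
print(response.text)
```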
|
This code may help.
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "%s",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}''' % random_uuid
headers = {
'content-type': "application/x-www-form-urlencoded",
}
print payload
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
|
23,414,509
|
I have a common problem: I have some data and I want to search in it. My issue is that I don't know the proper data structures and algorithms suitable for this situation.
There are two kinds of objects - `Process` and `Package`. Both have some properties, but they are only data structures (they don't have any methods). Next, there are a PackageManager and something that can be called a ProcessManager, which both have a function returning the list of files that belong to some `Package` or the files that are used by some `Process`.
So semantically, we can imagine these data as
### Packages:
* Package\_1
+ file\_1
+ file\_2
+ file\_3
* Package\_2
+ file\_4
+ file\_5
+ file\_6
Actually a file that belongs to Package\_k can *not* belong to Package\_l for k != l :-)
### Processes:
* Process\_1
+ file\_2
+ file\_3
* Process\_2
+ file\_1
Files used by processes correspond to files owned by packages. Also, the rule above doesn't apply here as it does for packages - that means `n` processes can use the same file at the same time.
Now for the task: I need to find matches between processes and packages - for a given list of packages, I need to find the list of processes which use any of the files owned by those packages.
My temporary solution was making a list of `[package_name, package_files]` and a list of `[process_name, process_files]`, and for every file from every package I searched through every file of every process looking for a match, but of course this could only be a temporary solution considering the horrible time complexity (even when I sort the files and use binary search).
What can you recommend for this kind of searching, please?
(I am coding it in python)
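A hedged sketch of the usual fix: build a reverse index from file to owning package in a dict, so every process file becomes a constant-time lookup instead of a scan. The `packages` and `processes` names below stand for the `[package_name, package_files]` and `[process_name, process_files]` lists described above:
```python
# reverse index: file -> owning package (each file has at most one owner)
file_to_package = {}
for package_name, package_files in packages:
    for f in package_files:
        file_to_package[f] = package_name

def processes_using(package_names, processes):
    wanted = set(package_names)
    # a process matches if any of its files is owned by a wanted package
    return [name for name, files in processes
            if any(file_to_package.get(f) in wanted for f in files)]
```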
|
2014/05/01
|
[
"https://Stackoverflow.com/questions/23414509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3285282/"
] |
I would move the case into its own method on your Team model.
```
class Team
def tree(type)
...
end
end
```
Then in your controller you could just have the following
```
if @team = fetch_team
@output = @team.tree(params[:tree])
render json: @output
else
render json: {message: "team: '#{params[:id]}' not found"}, status: 404
end
```
|
You could write
```
if @team = fetch_team
@output = case params[:tree]
when 'parents' then @team.ancestor_ids
when 'children' then @team.child_ids
when 'full' then @team.full_tree
when nil then @team
else {message: "requested query parameter: '#{params[:tree]}' not defined"}
end
render json: @output
else
render json: {message: "team: '#{params[:id]}' not found"}, status: 404
end
```
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> Training and 30 -> Test). What is the best approach to do so? I am using python.
---
I wanted an approach to programmatically split the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in python keywords and functions, only to fail. Lastly I got the idea of how to approach it. Also, cross-validation is a good option to be considered for building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
That's quite simple if you use numpy: first load the documents into a numpy array, and then:
```
import numpy as np
docs = np.array([
'one', 'two', 'three', 'four', 'five',
'six', 'seven', 'eight', 'nine', 'ten',
])
idx = np.hstack((np.ones(7), np.zeros(3))) # generate indices
np.random.shuffle(idx) # shuffle to make training data and test data random
train = docs[idx == 1]
test = docs[idx == 0]
print(train)
print(test)
```
the result:
```
['one' 'two' 'three' 'six' 'eight' 'nine' 'ten']
['four' 'five' 'seven']
```
|
Just make a list of the filenames using `os.listdir()`. Use `random.shuffle()` to shuffle the list, and then `training_files = filenames[:700]` and `testing_files = filenames[700:]`.
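A minimal sketch of that, with a placeholder directory path:
```python
import os
import random

filenames = os.listdir('/path/to/documents')  # placeholder path
random.shuffle(filenames)                     # shuffles the list in place
training_files = filenames[:700]
testing_files = filenames[700:]
```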
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> Training and 30 -> Test). What is the best approach to do so? I am using python.
---
I wanted an approach to programmatically split the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in python keywords and functions, only to fail. Lastly I got the idea of how to approach it. Also, cross-validation is a good option to be considered for building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
Not sure *exactly* what you're after, so I'll try to be comprehensive. There will be a few steps:
1. Get a list of the files
2. Randomize the files
3. Split files into training and testing sets
4. Do the thing
1. Get a list of the files
==========================
Let's assume that your files all have the extension `.data` and they're all in the folder `/ml/data/`. What we want to do is get a list of all of these files. This is done simply with the `os` module. I'm assuming you have no subdirectories; this would change if there were.
```
import os
def get_file_list_from_dir(datadir):
all_files = os.listdir(os.path.abspath(datadir))
data_files = list(filter(lambda file: file.endswith('.data'), all_files))
return data_files
```
So if we were to call `get_file_list_from_dir('/ml/data')`, we would get back a list of all the `.data` files in that directory (equivalent in the shell to the glob `/ml/data/*.data`).
2. Randomize the files
======================
We don't want the sampling to be predictable, as that is considered a poor way to train an ML classifier.
```
from random import shuffle
def randomize_files(file_list):
shuffle(file_list)
```
Note that `random.shuffle` performs an *in-place* shuffling, so it modifies the existing list. (Of course this function is rather silly since you could just call `shuffle` instead of `randomize_files`; you can write this into another function to make it make more sense.)
3. Split files into training and testing sets
=============================================
I'll assume a 70:30 ratio instead of any specific number of documents. So:
```
from math import floor
def get_training_and_testing_sets(file_list):
split = 0.7
split_index = floor(len(file_list) * split)
training = file_list[:split_index]
testing = file_list[split_index:]
return training, testing
```
4. Do the thing
===============
This is the step where you open each file and do your training and testing. I'll leave this to you!
---
Cross-Validation
================
Out of curiosity, have you considered using [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))? This is a method of splitting your data so that you use every document for training and testing. You can customize how many documents are used for training in each "fold". I could go more into depth on this if you like, but I won't if you don't want to do it.
Edit: Alright, since you requested I will explain this a little bit more.
So we have a 1000-document set of data. The idea of cross-validation is that you can use *all* of it for both training and testing — just not at once. We split the dataset into what we call "folds". The number of folds determines the size of the training and testing sets at any given point in time.
Let's say we want a 10-fold cross-validation system. This means that the training and testing algorithms will run ten times. The first time will train on documents 1-100 and test on 101-1000. The second fold will train on 101-200 and test on 1-100 and 201-1000.
If we did, say, a 40-fold CV system, the first fold would train on documents 1-25 and test on 26-1000, the second fold would train on 26-50 and test on 1-25 and 51-1000, and so on.
To implement such a system, we would still need to do steps (1) and (2) from above, but step (3) would be different. Instead of splitting into just two sets (one for training, one for testing), we could turn the function into a [*generator*](https://wiki.python.org/moin/Generators) — a function which we can iterate through like a list.
```
def cross_validate(data_files, folds):
if len(data_files) % folds != 0:
raise ValueError(
"invalid number of folds ({}) for the number of "
"documents ({})".format(folds, len(data_files))
)
fold_size = len(data_files) // folds
for split_index in range(0, len(data_files), fold_size):
training = data_files[split_index:split_index + fold_size]
testing = data_files[:split_index] + data_files[split_index + fold_size:]
yield training, testing
```
That `yield` keyword at the end is what makes this a generator. To use it, you would use it like so:
```
def ml_function(datadir, num_folds):
data_files = get_file_list_from_dir(datadir)
randomize_files(data_files)
for train_set, test_set in cross_validate(data_files, num_folds):
do_ml_training(train_set)
do_ml_testing(test_set)
```
Again, it's up to you to implement the actual functionality of your ML system.
As a disclaimer, I'm no expert by any means, haha. But let me know if you have any questions about anything I've written here!
|
Just make a list of the filenames using `os.listdir()`. Use `random.shuffle()` to shuffle the list, and then `training_files = filenames[:700]` and `testing_files = filenames[700:]`.
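A minimal sketch of that, with a placeholder directory path:
```python
import os
import random

filenames = os.listdir('/path/to/documents')  # placeholder path
random.shuffle(filenames)                     # shuffles the list in place
training_files = filenames[:700]
testing_files = filenames[700:]
```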
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> Training and 30 -> Test). What is the best approach to do so? I am using python.
---
I wanted an approach to programmatically split the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in python keywords and functions, only to fail. Lastly I got the idea of how to approach it. Also, cross-validation is a good option to be considered for building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
Not sure *exactly* what you're after, so I'll try to be comprehensive. There will be a few steps:
1. Get a list of the files
2. Randomize the files
3. Split files into training and testing sets
4. Do the thing
1. Get a list of the files
==========================
Let's assume that your files all have the extension `.data` and they're all in the folder `/ml/data/`. What we want to do is get a list of all of these files. This is done simply with the `os` module. I'm assuming you have no subdirectories; this would change if there were.
```
import os
def get_file_list_from_dir(datadir):
all_files = os.listdir(os.path.abspath(datadir))
data_files = list(filter(lambda file: file.endswith('.data'), all_files))
return data_files
```
So if we were to call `get_file_list_from_dir('/ml/data')`, we would get back a list of all the `.data` files in that directory (equivalent in the shell to the glob `/ml/data/*.data`).
2. Randomize the files
======================
We don't want the sampling to be predictable, as that is considered a poor way to train an ML classifier.
```
from random import shuffle
def randomize_files(file_list):
shuffle(file_list)
```
Note that `random.shuffle` performs an *in-place* shuffling, so it modifies the existing list. (Of course this function is rather silly since you could just call `shuffle` instead of `randomize_files`; you can write this into another function to make it make more sense.)
3. Split files into training and testing sets
=============================================
I'll assume a 70:30 ratio instead of any specific number of documents. So:
```
from math import floor
def get_training_and_testing_sets(file_list):
split = 0.7
split_index = floor(len(file_list) * split)
training = file_list[:split_index]
testing = file_list[split_index:]
return training, testing
```
4. Do the thing
===============
This is the step where you open each file and do your training and testing. I'll leave this to you!
---
Cross-Validation
================
Out of curiosity, have you considered using [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))? This is a method of splitting your data so that you use every document for training and testing. You can customize how many documents are used for training in each "fold". I could go more into depth on this if you like, but I won't if you don't want to do it.
Edit: Alright, since you requested I will explain this a little bit more.
So we have a 1000-document set of data. The idea of cross-validation is that you can use *all* of it for both training and testing — just not at once. We split the dataset into what we call "folds". The number of folds determines the size of the training and testing sets at any given point in time.
Let's say we want a 10-fold cross-validation system. This means that the training and testing algorithms will run ten times. The first time will train on documents 1-100 and test on 101-1000. The second fold will train on 101-200 and test on 1-100 and 201-1000.
If we did, say, a 40-fold CV system, the first fold would train on documents 1-25 and test on 26-1000, the second fold would train on 26-50 and test on 1-25 and 51-1000, and so on.
To implement such a system, we would still need to do steps (1) and (2) from above, but step (3) would be different. Instead of splitting into just two sets (one for training, one for testing), we could turn the function into a [*generator*](https://wiki.python.org/moin/Generators) — a function which we can iterate through like a list.
```
def cross_validate(data_files, folds):
if len(data_files) % folds != 0:
raise ValueError(
"invalid number of folds ({}) for the number of "
"documents ({})".format(folds, len(data_files))
)
fold_size = len(data_files) // folds
for split_index in range(0, len(data_files), fold_size):
training = data_files[split_index:split_index + fold_size]
testing = data_files[:split_index] + data_files[split_index + fold_size:]
yield training, testing
```
That `yield` keyword at the end is what makes this a generator. To use it, you would use it like so:
```
def ml_function(datadir, num_folds):
data_files = get_file_list_from_dir(datadir)
randomize_files(data_files)
for train_set, test_set in cross_validate(data_files, num_folds):
do_ml_training(train_set)
do_ml_testing(test_set)
```
Again, it's up to you to implement the actual functionality of your ML system.
As a disclaimer, I'm no expert by any means, haha. But let me know if you have any questions about anything I've written here!
|
That's quite simple if you use numpy: first load the documents into a numpy array, and then:
```
import numpy as np
docs = np.array([
'one', 'two', 'three', 'four', 'five',
'six', 'seven', 'eight', 'nine', 'ten',
])
idx = np.hstack((np.ones(7), np.zeros(3))) # generate indices
np.random.shuffle(idx) # shuffle to make training data and test data random
train = docs[idx == 1]
test = docs[idx == 0]
print(train)
print(test)
```
the result:
```
['one' 'two' 'three' 'six' 'eight' 'nine' 'ten']
['four' 'five' 'seven']
```
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> Training and 30 -> Test). What is the best approach to do so? I am using python.
---
I wanted an approach to programmatically split the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in python keywords and functions, only to fail. Lastly I got the idea of how to approach it. Also, cross-validation is a good option to be considered for building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
That's quite simple if you use numpy: first load the documents into a numpy array, and then:
```
import numpy as np
docs = np.array([
'one', 'two', 'three', 'four', 'five',
'six', 'seven', 'eight', 'nine', 'ten',
])
idx = np.hstack((np.ones(7), np.zeros(3))) # generate indices
np.random.shuffle(idx) # shuffle to make training data and test data random
train = docs[idx == 1]
test = docs[idx == 0]
print(train)
print(test)
```
the result:
```
['one' 'two' 'three' 'six' 'eight' 'nine' 'ten']
['four' 'five' 'seven']
```
|
You can use the `train_test_split` method provided by sklearn. See the documentation here:
<http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html>
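A minimal sketch, assuming `filenames` is your list of 1000 document names:
```python
from sklearn.model_selection import train_test_split

# 70:30 split; random_state makes the shuffle reproducible
train_files, test_files = train_test_split(filenames, test_size=0.3, random_state=42)
```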
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> Training and 30 -> Test). What is the best approach to do so? I am using python.
---
I wanted an approach to programmatically split the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in python keywords and functions, only to fail. Lastly I got the idea of how to approach it. Also, cross-validation is a good option to be considered for building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
Not sure *exactly* what you're after, so I'll try to be comprehensive. There will be a few steps:
1. Get a list of the files
2. Randomize the files
3. Split files into training and testing sets
4. Do the thing
1. Get a list of the files
==========================
Let's assume that your files all have the extension `.data` and they're all in the folder `/ml/data/`. What we want to do is get a list of all of these files. This is done simply with the `os` module. I'm assuming you have no subdirectories; this would change if there were.
```
import os
def get_file_list_from_dir(datadir):
all_files = os.listdir(os.path.abspath(datadir))
data_files = list(filter(lambda file: file.endswith('.data'), all_files))
return data_files
```
So if we were to call `get_file_list_from_dir('/ml/data')`, we would get back a list of all the `.data` files in that directory (equivalent in the shell to the glob `/ml/data/*.data`).
2. Randomize the files
======================
We don't want the sampling to be predictable, as that is considered a poor way to train an ML classifier.
```
from random import shuffle
def randomize_files(file_list):
shuffle(file_list)
```
Note that `random.shuffle` performs an *in-place* shuffling, so it modifies the existing list. (Of course this function is rather silly since you could just call `shuffle` instead of `randomize_files`; you can write this into another function to make it make more sense.)
3. Split files into training and testing sets
=============================================
I'll assume a 70:30 ratio instead of any specific number of documents. So:
```
from math import floor
def get_training_and_testing_sets(file_list):
split = 0.7
split_index = floor(len(file_list) * split)
training = file_list[:split_index]
testing = file_list[split_index:]
return training, testing
```
4. Do the thing
===============
This is the step where you open each file and do your training and testing. I'll leave this to you!
---
Cross-Validation
================
Out of curiosity, have you considered using [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))? This is a method of splitting your data so that you use every document for training and testing. You can customize how many documents are used for training in each "fold". I could go more into depth on this if you like, but I won't if you don't want to do it.
Edit: Alright, since you requested I will explain this a little bit more.
So we have a 1000-document set of data. The idea of cross-validation is that you can use *all* of it for both training and testing — just not at once. We split the dataset into what we call "folds". The number of folds determines the size of the training and testing sets at any given point in time.
Let's say we want a 10-fold cross-validation system. This means that the training and testing algorithms will run ten times. The first time will train on documents 1-100 and test on 101-1000. The second fold will train on 101-200 and test on 1-100 and 201-1000.
If we did, say, a 40-fold CV system, the first fold would train on documents 1-25 and test on 26-1000, the second fold would train on 26-50 and test on 1-25 and 51-1000, and so on.
To implement such a system, we would still need to do steps (1) and (2) from above, but step (3) would be different. Instead of splitting into just two sets (one for training, one for testing), we could turn the function into a [*generator*](https://wiki.python.org/moin/Generators) — a function which we can iterate through like a list.
```
def cross_validate(data_files, folds):
    if len(data_files) % folds != 0:
        raise ValueError(
            "invalid number of folds ({}) for the number of "
            "documents ({})".format(folds, len(data_files))
        )
    fold_size = len(data_files) // folds
    for split_index in range(0, len(data_files), fold_size):
        training = data_files[split_index:split_index + fold_size]
        testing = data_files[:split_index] + data_files[split_index + fold_size:]
        yield training, testing
```
That `yield` keyword at the end is what makes this a generator. To use it, you would write something like:
```
def ml_function(datadir, num_folds):
    data_files = get_file_list_from_dir(datadir)
    randomize_files(data_files)
    for train_set, test_set in cross_validate(data_files, num_folds):
        do_ml_training(train_set)
        do_ml_testing(test_set)
```
Again, it's up to you to implement the actual functionality of your ML system.
As a disclaimer, I'm no expert by any means, haha. But let me know if you have any questions about anything I've written here!
|
You can use the train\_test\_split method provided by sklearn. See documentation here:
<http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html>
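For example (a minimal sketch reproducing a 70:30 split; `file_list` is assumed to be your list of document paths):

```
from sklearn.model_selection import train_test_split

train_files, test_files = train_test_split(file_list, test_size=0.3, random_state=42)
```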
|
39,516,760
|
I have a string that I pull from a REST API that is actually a JSON.
I can't use `req.json()` as python doesn't format json correctly i.e. it is using single quotes and not double quotes, plus it puts a unicode symbol where there shouldn't be one. This means I can't use it to respond back to REST as the JSON is not formatted correctly.
However `r.text` prints json that I could use, if I could just tell python: "this is a json and not a string, take it just as it is and use it as a json".
Is there any way I could do this? Or is there any way to tell Python to properly format the json object as per the json spec (i.e. not have unicode characters, and use double quotes)?
EDIT:
Apparently this wasn't clear, I apologize.
The issue is that I have to send back a proper JSON formatted object and NOT python object. Here is what I get:
r.text:
{"domain":"example.com", "Link":null, "monitor":"true"}
r.json():
{u'domain': u'example.com', u'Link': None, u'monitor': True}
This is NOT proper JSON formatting. You can't have the unicode character, it isn't None it is null, and it isn't True it is true. You also should have double and not single quotes (not as big a deal, I think).
Hope this clarifies my issues.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39516760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1164102/"
] |
You can check if a string is valid json by catching the error.
```
import json
def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True
```
Test cases:
```
print is_json("{}") #prints True
print is_json("{asdf}") #prints False
print is_json('{ "age":100}') #prints True
print is_json("{'age':100 }") #prints False
print is_json("{\"age\":100 }") #prints True
print is_json('{"age":100 }') #prints True
print is_json('{"foo":[5,6.8],"foo":"bar"}') #prints True
```
|
```
import json
request_as_json = json.loads(r.text)
```
Then you can call things like `request_as_json['key']`
["More Info Here"](http://docs.python.org/library/json.html#json.loads)
|
39,516,760
|
I have a string that I pull from a REST API that is actually a JSON.
I can't use `req.json()` as python doesn't format json correctly i.e. it is using single quotes and not double quotes, plus it puts a unicode symbol where there shouldn't be one. This means I can't use it to respond back to REST as the JSON is not formatted correctly.
However `r.text` prints json that I could use, if I could just tell python: "this is a json and not a string, take it just as it is and use it as a json".
Is there any way I could do this? Or is there any way to tell Python to properly format the json object as per the json spec (i.e. not have unicode characters, and use double quotes)?
EDIT:
Apparently this wasn't clear, I apologize.
The issue is that I have to send back a proper JSON formatted object and NOT python object. Here is what I get:
r.text:
{"domain":"example.com", "Link":null, "monitor":"true"}
r.json():
{u'domain': u'example.com', u'Link': None, u'monitor': True}
This is NOT proper JSON formatting. You can't have the unicode character, it isn't None it is null, and it isn't True it is true. You also should have double and not single quotes (not as big a deal, I think).
Hope this clarifies my issues.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39516760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1164102/"
] |
You can check if a string is valid json by catching the error.
```
import json
def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True
```
Test cases:
```
print is_json("{}") #prints True
print is_json("{asdf}") #prints False
print is_json('{ "age":100}') #prints True
print is_json("{'age':100 }") #prints False
print is_json("{\"age\":100 }") #prints True
print is_json('{"age":100 }') #prints True
print is_json('{"foo":[5,6.8],"foo":"bar"}') #prints True
```
|
If you need to access the data in the JSON message, `req.json()` already does what you need. It parses the JSON message into a Python data structure, generally some nested format of lists and dicts. The result doesn't look like valid JSON text because it's not JSON any more; it's a data structure you can actually index and get data out of.
If you need a JSON string - a sequence of characters with braces and colons and double-quotation marks - perhaps because you're planning to send it over the network again, `req.text` is what you need.
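And if you want to go from the parsed Python structure back to a spec-compliant JSON string (double quotes, `null`, `true`), `json.dumps` does exactly that. A quick sketch, reusing the `r` from the question:

```
import json

data = json.loads(r.text)   # Python objects: None, True, u'...'
payload = json.dumps(data)  # JSON text again: null, true, "..."
print(payload)
```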
|
56,590,075
|
I'm trying to read a timeseries of a single [WRF](https://www.mmm.ucar.edu/weather-research-and-forecasting-model) output variable. The time series is distributed, one timestamp per file, across more than 5000 netCDF files. Each file contains roughly 200 variables.
Is there a way to call xarray.open\_mfdataset() for only the variable I'm interested in? I can specify a single variable by providing a list to the 'data\_vars' argument, but it still reads everything for the 'minimal' case. For my files the 'minimal' case includes almost everything and is thus relatively slow.
Is my best bet to create a single netCDF file containing my variable of interest with something like [ncrcat](http://nco.sourceforge.net/nco.html#Concatenation), or is there a more streamlined way to do this entirely within xarray (or some other python tool)?
My netCDF files are netCDF4 (not netCDF4-classic), which seems to rule out [netCDF4.MFDataset()](http://unidata.github.io/netcdf4-python/netCDF4/index.html#netCDF4.MFDataset).
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56590075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854821/"
] |
I'm not sure why providing the `data_vars=` argument still reads all data - I experienced the same issue reading WRF output. My workaround was to make a list of all the variables I didn't need (all 200+) and feed that to the `drop_variables=` argument. You can get a list of all variables and then just delete or comment out the ones you want to keep.
```
varlist = list(ds.variables)
```
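A sketch of that workaround end to end (the variable name `T2` is a hypothetical stand-in for whatever you want to keep, and this assumes your xarray version forwards `drop_variables` through `open_mfdataset`):

```
import xarray as xr

keep = ['T2']  # hypothetical variable(s) of interest
with xr.open_dataset(file_list[0]) as ds_one:
    drop = [v for v in ds_one.data_vars if v not in keep]
ds = xr.open_mfdataset(file_list, drop_variables=drop)
```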
|
As a follow-up for those who find this thread later:
Based on the documentation (but a bit hidden), the "data\_vars=" argument only works with Python 3.9.
|
56,590,075
|
I'm trying to read a timeseries of a single [WRF](https://www.mmm.ucar.edu/weather-research-and-forecasting-model) output variable. The time series is distributed, one timestamp per file, across more than 5000 netCDF files. Each file contains roughly 200 variables.
Is there a way to call xarray.open\_mfdataset() for only the variable I'm interested in? I can specify a single variable by providing a list to the 'data\_vars' argument, but it still reads everything for the 'minimal' case. For my files the 'minimal' case includes almost everything and is thus relatively slow.
Is my best bet to create a single netCDF file containing my variable of interest with something like [ncrcat](http://nco.sourceforge.net/nco.html#Concatenation), or is there a more streamlined way to do this entirely within xarray (or some other python tool)?
My netCDF files are netCDF4 (not netCDF4-classic), which seems to rule out [netCDF4.MFDataset()](http://unidata.github.io/netcdf4-python/netCDF4/index.html#netCDF4.MFDataset).
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56590075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854821/"
] |
I'm not sure why providing the `data_vars=` argument still reads all data - I experienced the same issue reading WRF output. My workaround was to make a list of all the variables I didn't need (all 200+) and feed that to the `drop_variables=` argument. You can get a list of all variables and then just delete or comment out the ones you want to keep.
```
varlist = list(ds.variables)
```
|
Another option is to define a preprocessing function that defines the variables to keep via the "preprocess" keyword argument, e.g.:
```
preprocess=lambda ds: ds[variablelist]
```
|
56,590,075
|
I'm trying to read a timeseries of a single [WRF](https://www.mmm.ucar.edu/weather-research-and-forecasting-model) output variable. The time series is distributed, one timestamp per file, across more than 5000 netCDF files. Each file contains roughly 200 variables.
Is there a way to call xarray.open\_mfdataset() for only the variable I'm interested in? I can specify a single variable by providing a list to the 'data\_vars' argument, but it still reads everything for the 'minimal' case. For my files the 'minimal' case includes almost everything and is thus relatively slow.
Is my best bet to create a single netCDF file containing my variable of interest with something like [ncrcat](http://nco.sourceforge.net/nco.html#Concatenation), or is there a more streamlined way to do this entirely within xarray (or some other python tool)?
My netCDF files are netCDF4 (not netCDF4-classic), which seems to rule out [netCDF4.MFDataset()](http://unidata.github.io/netcdf4-python/netCDF4/index.html#netCDF4.MFDataset).
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56590075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854821/"
] |
Another option is to define a preprocessing function that defines the variables to keep via the "preprocess" keyword argument, e.g.:
```
preprocess=lambda ds: ds[variablelist]
```
|
As a follow-up for those who find this thread later:
Based on the documentation (but a bit hidden), the "data\_vars=" argument only works with Python 3.9.
|
9,197,385
|
I'm using AWS for the first time and have just installed boto for Python. I'm stuck at the step where it advises to:
"You can place this file either at /etc/boto.cfg for system-wide use or in the home directory of the user executing the commands as ~/.boto."
Honestly, I have no idea what to do. First, I can't find the boto.cfg, and second, I'm not sure which command to execute for the second option.
Also, when I deploy the application to my server, I'm assuming I need to do the same thing there too...
|
2012/02/08
|
[
"https://Stackoverflow.com/questions/9197385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815878/"
] |
>
> "You can place this file either at /etc/boto.cfg for system-wide use
> or in the home directory of the user executing the commands as
> ~/.boto."
>
>
>
The former simply means that you might create a configuration file named `boto.cfg` within directory `/etc` (i.e. it won't necessarily be there already, depending on how [boto](https://github.com/boto/boto/) has been installed on your particular system).
The latter is indeed phrased a bit unfortunately - `~/.boto` means that boto will look for a configuration file named `.boto` within the home directory of the user executing the commands (i.e. Python scripts) which use the boto library.
You can read more about this in the boto wiki article [BotoConfig](http://docs.pythonboto.org/en/latest/boto_config_tut.html), e.g. regarding the question at hand:
>
> A boto config file is simply a .ini format configuration file that
> specifies values for options that control the behavior of the boto
> library. Upon startup, the boto library looks for configuration files
> in the following locations and in the following order:
>
>
> 1. /etc/boto.cfg - for site-wide settings that all users on this machine
> will use
> 2. ~/.boto - for user-specific settings
>
>
>
You'll also need to prepare a corresponding configuration file on the server your application is deployed to.
Good luck!
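For reference, a minimal boto config file typically looks like this (placeholder values shown):

```
[Credentials]
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_secret_key>
```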
|
For those who want to configure the credentials in Windows:
1. Create your file with the name you want (e.g. boto\_config.cfg) and place it in a location of your choice (e.g. C:\Users\\configs).
2. Create an environment variable with Name='BOTO\_CONFIG' and Value=file\_location/file\_name.
3. Boto is now ready to work with the credentials automatically configured!
* To create environment variables in Windows, follow this tutorial: <http://www.onlinehowto.net/Tutorials/Windows-7/Creating-System-Environment-Variables-in-Windows-7/1705>
|
9,197,385
|
I'm using AWS for the first time and have just installed boto for Python. I'm stuck at the step where it advises to:
"You can place this file either at /etc/boto.cfg for system-wide use or in the home directory of the user executing the commands as ~/.boto."
Honestly, I have no idea what to do. First, I can't find the boto.cfg, and second, I'm not sure which command to execute for the second option.
Also, when I deploy the application to my server, I'm assuming I need to do the same thing there too...
|
2012/02/08
|
[
"https://Stackoverflow.com/questions/9197385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815878/"
] |
>
> "You can place this file either at /etc/boto.cfg for system-wide use
> or in the home directory of the user executing the commands as
> ~/.boto."
>
>
>
The former simply means that you might create a configuration file named `boto.cfg` within directory `/etc` (i.e. it won't necessarily be there already, depending on how [boto](https://github.com/boto/boto/) has been installed on your particular system).
The latter is indeed phrased a bit unfortunately - `~/.boto` means that boto will look for a configuration file named `.boto` within the home directory of the user executing the commands (i.e. Python scripts) which use the boto library.
You can read more about this in the boto wiki article [BotoConfig](http://docs.pythonboto.org/en/latest/boto_config_tut.html), e.g. regarding the question at hand:
>
> A boto config file is simply a .ini format configuration file that
> specifies values for options that control the behavior of the boto
> library. Upon startup, the boto library looks for configuration files
> in the following locations and in the following order:
>
>
> 1. /etc/boto.cfg - for site-wide settings that all users on this machine
> will use
> 2. ~/.boto - for user-specific settings
>
>
>
You'll also need to prepare a corresponding configuration file on the server your application is deployed to.
Good luck!
|
For anyone looking for information on the now-current `boto3`: it does not use a separate configuration file but rather respects the default one created by the aws cli when running `aws configure` (i.e., it will look at `~/.aws/config`).
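For reference, `aws configure` writes files along these lines (placeholder values shown):

```
# ~/.aws/credentials
[default]
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_secret_key>

# ~/.aws/config
[default]
region = us-east-1
```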
|
45,382,917
|
I cannot successfully run the `optimize_for_inference` module on a simple, saved TensorFlow graph (Python 2.7; package installed by `pip install tensorflow-gpu==1.0.1`).
Background
==========
Saving TensorFlow Graph
-----------------------
Here's my Python script to generate and save a simple graph to add 5 to my input `x` `placeholder` operation.
```
import tensorflow as tf
# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)
    saver.save(sess=sess, save_path="test_model")
```
Restoring TensorFlow Graph
--------------------------
I have a simple restore script that recreates the saved graph and restores graph params. Both the save/restore scripts produce the same output.
```
import tensorflow as tf
# Restore simple graph and test model output
G = tf.Graph()
with tf.Session(graph=G) as sess:
    # recreate saved graph (structure)
    saver = tf.train.import_meta_graph('./test_model.meta')
    # restore net params
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    x = G.get_operation_by_name("x").outputs[0]
    y = G.get_operation_by_name("y").outputs
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out[0])
```
Optimization Attempt
--------------------
But, while I don't expect much in terms of optimization, when I try to optimize the graph for inference, I get the following error message. The expected output node does not appear to be in the saved graph.
```
$ python -m tensorflow.python.tools.optimize_for_inference --input test_model.data-00000-of-00001 --output opt_model --input_names=x --output_names=y
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 141, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 90, in main
FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference_lib.py", line 91, in optimize_for_inference
placeholder_type_enum)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/strip_unused_lib.py", line 71, in strip_unused
output_node_names)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 141, in extract_sub_graph
assert d in name_to_node_map, "%s is not in graph" % d
AssertionError: y is not in graph
```
Further investigation led me to inspect the checkpoint of the saved graph, which only shows 1 tensor (`a`, no `x` and no `y`).
```
(tf-1.0.1) $ python -m tensorflow.python.tools.inspect_checkpoint --file_name ./test_model --all_tensors
tensor_name: a
5.0
```
Specific Questions
==================
1. Why do I not see `x` and `y` in the checkpoint? Is it because they are operations and not tensors?
2. Since I need to provide input and output names to the `optimize_for_inference` module, how do I build the graph so I can reference the input and output nodes?
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45382917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5874320/"
] |
**Here is the detailed guide on how to optimize for inference:**
The `optimize_for_inference` module takes a `frozen binary GraphDef` file as input and outputs the `optimized GraphDef` file which you can use for inference. And to get the `frozen binary GraphDef` file you need to use the module `freeze_graph`, which takes a `GraphDef proto`, a `SaverDef proto` and a set of variables stored in a checkpoint file. The steps to achieve that are given below:
1. Saving tensorflow graph
==========================
```
import tensorflow as tf
# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    # Save GraphDef
    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')
    # Save checkpoint
    saver.save(sess=sess, save_path="test_model")
```
2. Freeze graph
===============
```
python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y
```
3. Optimize for inference
=========================
```
python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y
```
4. Using Optimized graph
========================
```
with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

G = tf.Graph()
with tf.Session(graph=G) as sess:
    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])
    print('Operations in Optimized Graph:')
    print([op.name for op in G.get_operations()])
    x = G.get_tensor_by_name('import/x:0')
    out = sess.run(y, feed_dict={x: 1.0})
    print(out)

# Output
# Operations in Optimized Graph:
# ['import/x', 'import/a', 'import/y']
# 6.0
```
5. For multiple output names
============================
If there are multiple output nodes, then specify `output_node_names = 'boxes,scores,classes'` and import the graph by,
```
boxes,scores,classes, = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])
```
|
1. You are doing it wrong: `input` expects a GraphDef file for the [script](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py), not the data part of the checkpoint. You need to freeze the model to a `.pb` file (or get the prototxt for the graph) and use the optimize-for-inference script. From the script's documentation:
> This script takes either a frozen binary GraphDef file (where the weight variables have been converted into constants by the freeze_graph script), or a text GraphDef proto file (the weight variables are stored in a separate checkpoint file), and outputs a new GraphDef with the optimizations applied.
2. Get the graph proto file using [write\_graph](https://www.tensorflow.org/api_docs/python/tf/train/write_graph)
3. Get the frozen model with [freeze\_graph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I use both Komodo Edit and Notepad++.
Notepad++ is a lot quicker to launch and it's more lightweight, so I often use it for quick one-off editing.
I use Komodo Edit for major projects, like my django and wxPython applications. KE is a full-featured IDE, so it has a lot more features.
Main advantages of Komodo Edit for programming Python:
* Manage groups of files as projects
* Use custom commands to run files, run nosetests/pylint, etc.
* Auto complete & syntax checking
* Mozilla extension system, with several useful extensions available
* Write macros in JavaScript or Python
* Spell checking
Some of the little things that Notepad++ is missing for Python development:
* Doesn't auto-indent after a colon
* ~~You can't set tabs/spaces on a file-type basis (I like to use tabs for HTML)~~
* ~~No code completion or tooltips~~
* No on-the-fly syntax checking
|
I haven't used Komodo yet (the download never quite finished on the slow connection I was on at the time), but I use Eclipse with PyDev regularly and enjoy the "IDE" features described by the other respondents. However, I'm also regularly frustrated by how much of a resource hog it is.
I downloaded Notepad++ recently (much smaller download size ;-) ) and have been enjoying it quite a bit. The editor itself is nice and fast and it looks to be extensible. I'm hoping to copy some of my favorite features from the IDE into Notepad++ and migrate, at some distant point in the future.
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I have worked a bit with Python programming for Google App Engine, which I started out in Notepad++ and then recently shifted over to Komodo using two excellent startup tutorials - both of which are conveniently linked from [this blog post](http://blogs.activestate.com/2008/04/komodo-does-it "Komodo does it all: Google App Engine") (direct: [here](http://abaditya.com/2008/04/19/a-pseudo-ide-for-google-app-engine-komodo-edit/ "A pseudo IDE for Google App Engine") and [here](http://www.toddnemet.com/using-komodo-edit-as-an-ide-for-google-app-engine/ "Using Komodo Edit as an IDE for Google App Engine")).
* Komodo supports the basic organization of your work into Projects, which Notepad++ does not (apart from physical folder organization).
* The custom commands toolbar is useful to keep track of numerous frequently-used commands and even link to URLs (like online documentation and the like).
* It has a working (if sometimes clunky) code-completion mechanism.

In short, it's an IDE which provides all the benefits thereof.
Notepad++ is simpler, much MUCH faster to load, and does support some basic configurable run commands; it's a fine choice if you like doing all your execution and debugging right in the command line or Python shell. My advice is to try both!
|
Downloaded both myself. Like Komodo better.
Komodo Pros: Like it better. Does more. Looks like an IDE. Edits Django templates.
Notepad++ Cons: Don't like it as much. Does less. Looks less like an IDE.
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
As far as I know, Notepad++ doesn't show you the docstring each method has.
|
If I had to choose between Notepad++ and Komodo, I would choose PyScripter ;.)
Seriously, I consider PyScripter a great alternative...
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I use Komodo Edit. The main reasons are: Intellisense (not as good as Visual Studio's, but Python's a hard language to do intellisense for) and cross-platform compatibility. It's nice being able to use the same editor on my Windows machine, my Linux machine, and my MacBook with little to no change in feel.
|
A downside I found of Notepad++ for Python is that it tends (for me) to silently mix tabs and spaces. I know this is configurable, but it caught me out, especially when trying to work with other people using different editors / IDE's, so take care.
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I just downloaded and started using Komodo Edit. I've been using Notepad++ for awhile. Here is what I think about some of the features:
Komodo Edit Pros:
* You can jump to a function definition, even if it's in another file (I love this)
* There is a plugin that displays the list of classes, functions and such for the current file on the side. Notepad++ used to have a plugin like this, but it no longer works with the current version and hasn't been updated in a while.
Notepad++ Pros:
* If you select a word, it will highlight all of those words in the current document (makes it easier to find misspellings), without having to hit `Ctrl`+`F`.
* When working with HTML, when the cursor is on/in a tag, the starting and ending tags are both highlighted
Anyone know if either of those last 2 things is possible in Komodo Edit?
|
If I had to choose between Notepad++ and Komodo, I would choose PyScripter ;.)
Seriously, I consider PyScripter a great alternative...
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
As far as I know, Notepad++ doesn't show you the docstring each method has.
|
Downloaded both myself. Like Komodo better.
Komodo Pros: Like it better. Does more. Looks like an IDE. Edits Django templates.
Notepad++ Cons: Don't like it as much. Does less. Looks less like an IDE.
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I use Komodo Edit. The main reasons are: Intellisense (not as good as Visual Studio's, but Python's a hard language to do intellisense for) and cross-platform compatibility. It's nice being able to use the same editor on my Windows machine, my Linux machine, and my MacBook with little to no change in feel.
|
I haven't used Komodo yet (the download never quite finished on the slow connection I was on at the time), but I use Eclipse with PyDev regularly and enjoy the "IDE" features described by the other respondents. However, I'm also regularly frustrated by how much of a resource hog it is.
I downloaded Notepad++ recently (much smaller download size ;-) ) and have been enjoying it quite a bit. The editor itself is nice and fast and it looks to be extensible. I'm hoping to copy some of my favorite features from the IDE into Notepad++ and migrate, at some distant point in the future.
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I have worked a bit with Python programming for Google App Engine, which I started out in Notepad++ and then recently shifted over to Komodo using two excellent startup tutorials - both of which are conveniently linked from [this blog post](http://blogs.activestate.com/2008/04/komodo-does-it "Komodo does it all: Google App Engine") (direct: [here](http://abaditya.com/2008/04/19/a-pseudo-ide-for-google-app-engine-komodo-edit/ "A pseudo IDE for Google App Engine") and [here](http://www.toddnemet.com/using-komodo-edit-as-an-ide-for-google-app-engine/ "Using Komodo Edit as an IDE for Google App Engine")).
* Komodo supports the basic organization of your work into Projects, which Notepad++ does not (apart from physical folder organization).
* The custom commands toolbar is useful to keep track of numerous frequently-used commands and even link to URLs (like online documentation and the like).
* It has a working (if sometimes clunky) code-completion mechanism.

In short, it's an IDE which provides all the benefits thereof.
Notepad++ is simpler, much MUCH faster to load, and does support some basic configurable run commands; it's a fine choice if you like doing all your execution and debugging right in the command line or Python shell. My advice is to try both!
|
If I had to choose between Notepad++ and Komodo, I would choose PyScripter ;.)
Seriously, I consider PyScripter a great alternative...
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need pros and cons for Python development between these two editors...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I have worked a bit with Python programming for Google App Engine, which I started out in Notepad++ and then recently shifted over to Komodo using two excellent startup tutorials - both of which are conveniently linked from [this blog post](http://blogs.activestate.com/2008/04/komodo-does-it "Komodo does it all: Google App Engine") (direct: [here](http://abaditya.com/2008/04/19/a-pseudo-ide-for-google-app-engine-komodo-edit/ "A pseudo IDE for Google App Engine") and [here](http://www.toddnemet.com/using-komodo-edit-as-an-ide-for-google-app-engine/ "Using Komodo Edit as an IDE for Google App Engine")).
* Komodo supports the basic organization of your work into Projects, which Notepad++ does not (apart from physical folder organization).
* The custom commands toolbar is useful to keep track of numerous frequently-used commands and even link to URLs (like online documentation and the like).
* It has a working (if sometimes clunky) code-completion mechanism.

In short, it's an IDE which provides all the benefits thereof.
Notepad++ is simpler, much MUCH faster to load, and does support some basic configurable run commands; it's a fine choice if you like doing all your execution and debugging right in the command line or Python shell. My advice is to try both!
|
A downside I found of Notepad++ for Python is that it tends (for me) to silently mix tabs and spaces. I know this is configurable, but it caught me out, especially when trying to work with other people using different editors / IDE's, so take care.
|