| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
You were close, but you need to assign the revised object to a new variable. Also, you probably want to aggregate arrays, since there are multiple 'completed' items. This first creates the base object, then populates it, using `reduce()` for both steps:
```
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
```
```js
const todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
console.log(revised);
```
|
You can group by `status` by destructuring each `todo` and spreading the previous value for that status, or starting a new array, using the [nullish coalescing operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing_operator) (`??`). If your version of JS doesn't support that operator, you can fall back to a [logical OR](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_OR) (`||`).
```js
const todos = [
{ id: 'a' , name: 'Buy dog' , action: 'a' , status: 'deleted' },
{ id: 'b' , name: 'Buy food' , tooltip: null , status: 'completed' },
{ id: 'c' , name: 'Heal dog' , tooltip: null , status: 'completed' },
{ id: 'd' , name: 'Todo this' , action: 'd' , status: 'completed' },
{ id: 'e' , name: 'Todo that' , action: 'e' , status: 'todo' }
];
const grouped = todos.reduce((acc, { status, ...rest }) => ({
...acc,
[status]: [...(acc[status] ?? []), { ...rest, status }]
}), {});
console.log(grouped);
```
```css
.as-console-wrapper { top: 0; max-height: 100% !important; }
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
You were close, but you need to assign the revised object to a new variable. Also, you probably want to aggregate arrays, since there are multiple 'completed' items. This first creates the base object, then populates it, using `reduce()` for both steps:
```
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
```
```js
const todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
console.log(revised);
```
|
You're only getting the most recent 'completed' element from the original array because each value in your result is a single object rather than an array, so later items with the same status overwrite earlier ones. It might be easier to start with a simple `forEach` instead of `reduce` to see if that gets you what you need:
```js
const todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
let res = {};
todos.forEach(todo => {
  res[todo.status] = res[todo.status] || [];
  res[todo.status].push(todo);
});
console.log(res);
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted'},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed'},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed'},{id: 'd',name: 'Todo this',action: 'd',status: 'completed'},{id: 'e',name: 'Todo that',action: 'e',status: 'todo'}];
const result = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: acc[todo.status]
      ? [...acc[todo.status], todo]
: [todo]
};
}, {});
console.log(result)
```
Not sure if this result is what you are looking for. In the end we have an object with each status as a key, and the value is an array that acts as the grouping.
|
```
function getByValue(arr, value) {
  var result = arr.filter(function (o) { return o.status == value; });
  return result.length ? result[0] : null; // first match, or null if none
}
todo_obj = getByValue(arr, 'todo')
deleted_obj = getByValue(arr, 'deleted')
completed_obj = getByValue(arr, 'completed')
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted'},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed'},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed'},{id: 'd',name: 'Todo this',action: 'd',status: 'completed'},{id: 'e',name: 'Todo that',action: 'e',status: 'todo'}];
const result = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: acc[todo.status]
      ? [...acc[todo.status], todo]
: [todo]
};
}, {});
console.log(result)
```
Not sure if this result is what you are looking for. In the end we have an object with each status as a key, and the value is an array that acts as the grouping.
|
Your code is mostly fine; you just need to capture the return value of `reduce`: it doesn't mutate `todos`, it returns a new value.
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted',},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed',},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed',},{id: 'd',name: 'Todo this',action: 'd',status: 'completed',},{id: 'e',name: 'Todo that',action: 'e',status: 'todo',},];
const out = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: { ...acc[todo.status], ...todo }
};
}, {});
console.log(out);
```
```css
.as-console-wrapper {min-height:100%} /* make preview prettier */
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted'},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed'},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed'},{id: 'd',name: 'Todo this',action: 'd',status: 'completed'},{id: 'e',name: 'Todo that',action: 'e',status: 'todo'}];
const result = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: acc[todo.status]
      ? [...acc[todo.status], todo]
: [todo]
};
}, {});
console.log(result)
```
Not sure if this result is what you are looking for. In the end we have an object with each status as a key, and the value is an array that acts as the grouping.
|
You haven't been very clear in your question; I assume you want an output that looks like `{todo: [], completed: [], deleted: []}`.
In that case, here is a simple solution.
```js
var todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
var result = todos.reduce(function(result, item) {
var statusList = result[item.status];
if (!statusList) {
statusList = [];
result[item.status] = statusList;
}
statusList.push(item);
// or use this to create a deep copy
// statusList.push(JSON.parse(JSON.stringify(item)));
return result;
}, {});
console.log(result);
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted'},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed'},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed'},{id: 'd',name: 'Todo this',action: 'd',status: 'completed'},{id: 'e',name: 'Todo that',action: 'e',status: 'todo'}];
const result = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: acc[todo.status]
      ? [...acc[todo.status], todo]
: [todo]
};
}, {});
console.log(result)
```
Not sure if this result is what you are looking for. In the end we have an object with each status as a key, and the value is an array that acts as the grouping.
|
You can group by `status` by destructuring each `todo` and spreading the previous value for that status, or starting a new array, using the [nullish coalescing operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing_operator) (`??`). If your version of JS doesn't support that operator, you can fall back to a [logical OR](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_OR) (`||`).
```js
const todos = [
{ id: 'a' , name: 'Buy dog' , action: 'a' , status: 'deleted' },
{ id: 'b' , name: 'Buy food' , tooltip: null , status: 'completed' },
{ id: 'c' , name: 'Heal dog' , tooltip: null , status: 'completed' },
{ id: 'd' , name: 'Todo this' , action: 'd' , status: 'completed' },
{ id: 'e' , name: 'Todo that' , action: 'e' , status: 'todo' }
];
const grouped = todos.reduce((acc, { status, ...rest }) => ({
...acc,
[status]: [...(acc[status] ?? []), { ...rest, status }]
}), {});
console.log(grouped);
```
```css
.as-console-wrapper { top: 0; max-height: 100% !important; }
```
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted'},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed'},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed'},{id: 'd',name: 'Todo this',action: 'd',status: 'completed'},{id: 'e',name: 'Todo that',action: 'e',status: 'todo'}];
const result = todos.reduce((acc, todo) => {
return {
...acc,
[todo.status]: acc[todo.status]
      ? [...acc[todo.status], todo]
: [todo]
};
}, {});
console.log(result)
```
Not sure if this result is what you are looking for. In the end we have an object with each status as a key, and the value is an array that acts as the grouping.
|
You're only getting the most recent 'completed' element from the original array because each value in your result is a single object rather than an array, so later items with the same status overwrite earlier ones. It might be easier to start with a simple `forEach` instead of `reduce` to see if that gets you what you need:
```js
const todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
let res = {};
todos.forEach(todo => {
  res[todo.status] = res[todo.status] || [];
  res[todo.status].push(todo);
});
console.log(res);
```
|
34,708,302
|
I've got a list of dictionaries in a JSON file that looks like this:
```
[{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}, ...]
```
but I'm struggling to import it into a pandas data frame — this should be pretty easy, but I'm blanking on it. Anyone able to set me straight here?
Likewise, what's the best way to simply read it into a list of dictionaries to use w/in python?
|
2016/01/10
|
[
"https://Stackoverflow.com/questions/34708302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4309943/"
] |
You can use [`from_dict`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html):
```
import pandas as pd
lis = [{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}]
print(pd.DataFrame.from_dict(lis))
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
Or you can use `DataFrame` constructor:
```
import pandas as pd
lis = [{"url": "http://www.URL1.com", "date": "2001-01-01"}, {"url": "http://www.URL2.com", "date": "2001-01-02"}]
print(pd.DataFrame(lis))
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
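If the `date` column should be real timestamps rather than strings, `pd.to_datetime` can convert it after either construction path. A small sketch, reusing the example data from above:

```python
import pandas as pd

lis = [{"url": "http://www.URL1.com", "date": "2001-01-01"},
       {"url": "http://www.URL2.com", "date": "2001-01-02"}]

df = pd.DataFrame(lis)
# 'date' arrives as plain strings; convert it to datetime64 so date
# arithmetic and sorting behave as expected
df['date'] = pd.to_datetime(df['date'])
print(df.dtypes)
```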
|
While `from_dict` will work here, the prescribed way would be to use [`pd.read_json`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html) with `orient='records'`. This parses an input that is
> list-like `[{column -> value}, ... , {column -> value}]`
Example: say this is the text of `lis.json`:
```
[{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}]
```
To pass the file path itself as input rather than a list as in @jezrael's answer:
```
print(pd.read_json('lis.json', orient='records'))
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
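For the second part of the question (reading the file into a plain list of dictionaries without pandas), the standard-library `json` module is enough. A minimal sketch, with the file contents inlined here for illustration:

```python
import json

# Inlined stand-in for the contents of lis.json
raw = ('[{"url": "http://www.URL1.com", "date": "2001-01-01"},'
       ' {"url": "http://www.URL2.com", "date": "2001-01-02"}]')
lis = json.loads(raw)

# From an actual file, the equivalent would be:
#   with open('lis.json') as f:
#       lis = json.load(f)
print(lis[0]['url'])
```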
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
To show the progress bar:
```
from tqdm import tqdm
for x in tqdm(my_list):
# do something with x
#### In case using with enumerate:
for i, x in enumerate( tqdm(my_list) ):
# do something with i and x
```
[tqdm progress bar screenshot](https://i.stack.imgur.com/H9sUC.png)
*Some notes on the attached picture*:
`49%`: It already finished 49% of the whole process
`979/2000`: Working on the 979th element/iteration, out of 2000 elements/iterations
`01:50`: It's been running for 1 minute and 50 seconds
`01:55`: Estimated time left to run
`8.81 it/s`: On average, it processes 8.81 elements per second
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
I think this could be solved most elegantly in this manner:
```
import progressbar
bar = progressbar.ProgressBar(maxval=len(members)).start()
for idx, member in enumerate(members):
...
bar.update(idx)
```
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
To show the progress bar:
```
from tqdm import tqdm
for x in tqdm(my_list):
# do something with x
#### In case using with enumerate:
for i, x in enumerate( tqdm(my_list) ):
# do something with i and x
```
[tqdm progress bar screenshot](https://i.stack.imgur.com/H9sUC.png)
*Some notes on the attached picture*:
`49%`: It already finished 49% of the whole process
`979/2000`: Working on the 979th element/iteration, out of 2000 elements/iterations
`01:50`: It's been running for 1 minute and 50 seconds
`01:55`: Estimated time left to run
`8.81 it/s`: On average, it processes 8.81 elements per second
|
The [rich module](https://pypi.org/project/rich/) has also a progress bar that can be included in your for loop:
```
import time # for demonstration only
from rich.progress import track
members = ['Liam', 'Olivia', 'Noah', 'Emma', 'Oliver', 'Charlotte'] # for demonstration only
for member in track(members):
# your code here
print(member) # for demonstration only
time.sleep(1.5) # for demonstration only
```
Note: `time` is only used to get the delay for the screenshot.
Here's a screenshot from within the run:
[rich progress bar screenshot](https://i.stack.imgur.com/s84Qn.png)
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
The basic idea of a progress bar from a loop is to insert points within the loop to update the progress bar. An example would be something like this:
```
membersProcessed = 0
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
membersProcessed += 1
    print('Progress: {}/{} members processed'.format(membersProcessed, len(members)))
```
Maybe this helps. You could also make it more fine-grained by adding update points after individual commands within the for loop.
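The same counter can be redrawn in place instead of printing a new line per iteration, using a carriage return. A stdlib-only sketch (the `members` list here is a hypothetical stand-in for the real one):

```python
import sys

members = ['m1', 'm2', 'm3']  # hypothetical stand-in for the real member list
total = len(members)

for done, member in enumerate(members, start=1):
    # ... per-member work goes here ...
    # '\r' moves the cursor back to the start of the line, so the
    # counter overwrites itself instead of scrolling the terminal
    sys.stdout.write('\rProgress: {}/{} members processed'.format(done, total))
    sys.stdout.flush()
sys.stdout.write('\n')
```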
|
The [rich module](https://pypi.org/project/rich/) has also a progress bar that can be included in your for loop:
```
import time # for demonstration only
from rich.progress import track
members = ['Liam', 'Olivia', 'Noah', 'Emma', 'Oliver', 'Charlotte'] # for demonstration only
for member in track(members):
# your code here
print(member) # for demonstration only
time.sleep(1.5) # for demonstration only
```
Note: `time` is only used to get the delay for the screenshot.
Here's a screenshot from within the run:
[rich progress bar screenshot](https://i.stack.imgur.com/s84Qn.png)
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
The [rich module](https://pypi.org/project/rich/) has also a progress bar that can be included in your for loop:
```
import time # for demonstration only
from rich.progress import track
members = ['Liam', 'Olivia', 'Noah', 'Emma', 'Oliver', 'Charlotte'] # for demonstration only
for member in track(members):
# your code here
print(member) # for demonstration only
time.sleep(1.5) # for demonstration only
```
Note: `time` is only used to get the delay for the screenshot.
Here's a screenshot from within the run:
[rich progress bar screenshot](https://i.stack.imgur.com/s84Qn.png)
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
Using [tqdm](https://github.com/tqdm/):
```
from tqdm import tqdm
for member in tqdm(members):
# current contents of your for loop
```
`tqdm()` takes `members` and iterates over it, but each time it yields a new member (between each iteration of the loop), it also updates a progress bar on your command line. That makes this actually quite similar to Matthias' solution (printing stuff at the end of each loop iteration), but the progressbar update logic is nicely encapsulated inside `tqdm`.
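When the wrapped-iterable form doesn't fit (for example, the loop body itself decides when progress happens), `tqdm` also supports manual updates via `total` and `update()`. A small sketch with a placeholder list standing in for the real members:

```python
from tqdm import tqdm

members = ['a', 'b', 'c']  # placeholder for the real member list
processed = []

# total tells tqdm how many steps to expect; update(1) advances the bar
with tqdm(total=len(members)) as pbar:
    for member in members:
        processed.append(member)  # stands in for the per-member work
        pbar.update(1)
```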
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
To show the progress bar:
```
from tqdm import tqdm
for x in tqdm(my_list):
# do something with x
# In case you use it with enumerate:
for i, x in enumerate(tqdm(my_list)):
# do something with i and x
```
[](https://i.stack.imgur.com/H9sUC.png)
*Some notes on the attached picture*:
`49%`: It already finished 49% of the whole process
`979/2000`: Working on the 979th element/iteration, out of 2000 elements/iterations
`01:50`: It's been running for 1 minute and 50 seconds
`01:55`: Estimated time left to run
`8.81 it/s`: On average, it processes 8.81 elements per second
|
The basic idea of a progress bar from a loop is to insert points within the loop to update the progress bar. An example would be something like this:
```
membersProcessed = 0
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
membersProcessed += 1
    print('Progress: {}/{} members processed'.format(membersProcessed, len(members)))
```
Maybe this helps.
And you could include a more detailed one by adding points after certain commands within the for loop as well.
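If you want the printed counter to also estimate time remaining, you can derive a rough ETA from the average time per processed item. This is a sketch with made-up names; it assumes per-item time is roughly constant:

```python
import time

def eta_seconds(start_time, done, total, now=None):
    """Estimate remaining seconds from the average time per finished item."""
    if done == 0:
        return None  # no data yet, cannot estimate
    now = time.monotonic() if now is None else now
    per_item = (now - start_time) / done
    return per_item * (total - done)

start = time.monotonic()
items = range(3)
for done, item in enumerate(items, 1):
    time.sleep(0.01)  # stand-in for real work, e.g. an API request
    remaining = eta_seconds(start, done, len(items))
    print('Progress: {}/{}, ~{:.1f}s left'.format(done, len(items), remaining))
```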
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
The basic idea of a progress bar from a loop is to insert points within the loop to update the progress bar. An example would be something like this:
```
membersProcessed = 0
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
membersProcessed += 1
    print('Progress: {}/{} members processed'.format(membersProcessed, len(members)))
```
Maybe this helps.
And you could include a more detailed one by adding points after certain commands within the for loop as well.
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):  # one-step outer bar; wrap members directly for per-item progress
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
I think this could most elegantly be solved in this manner:
```
import progressbar
bar = progressbar.ProgressBar(maxval=len(members)).start()
for idx, member in enumerate(members):
...
bar.update(idx)
```
|
The [rich module](https://pypi.org/project/rich/) has also a progress bar that can be included in your for loop:
```
import time # for demonstration only
from rich.progress import track
members = ['Liam', 'Olivia', 'Noah', 'Emma', 'Oliver', 'Charlotte'] # for demonstration only
for member in track(members):
# your code here
print(member) # for demonstration only
time.sleep(1.5) # for demonstration only
```
Note: `time` is only used to get the delay for the screenshot.
Here's a screenshot from within the run:
[](https://i.stack.imgur.com/s84Qn.png)
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
To show the progress bar:
```
from tqdm import tqdm
for x in tqdm(my_list):
# do something with x
# In case you use it with enumerate:
for i, x in enumerate(tqdm(my_list)):
# do something with i and x
```
[](https://i.stack.imgur.com/H9sUC.png)
*Some notes on the attached picture*:
`49%`: It already finished 49% of the whole process
`979/2000`: Working on the 979th element/iteration, out of 2000 elements/iterations
`01:50`: It's been running for 1 minute and 50 seconds
`01:55`: Estimated time left to run
`8.81 it/s`: On average, it processes 8.81 elements per second
|
I think this could most elegantly be solved in this manner:
```
import progressbar
bar = progressbar.ProgressBar(maxval=len(members)).start()
for idx, member in enumerate(members):
...
bar.update(idx)
```
|
72,216,546
|
Used the standard python installation file (3.9.12) for windows.
The installation file has a built-in option for pip installation:
[](https://i.stack.imgur.com/CEH51.png)
The resulting Python is without pip.
Then I tried to install pip directly by using the "get-pip.py" file.
This action resulted in:
>
> WARNING: pip is configured with locations that require TLS/SSL,
> however the ssl module in Python is not available. WARNING: Retrying
> (Retry(total=4, connect=None, read=None, redirect=None, status=None))
> after connection broken by 'SSLError("Can't connect to HTTPS URL
> because the SSL module is not available.")': /simple/pip/ WARNING:
> Retrying (Retry(total=3, connect=None, read=None, redirect=None,
> status=None)) after connection broken by 'SSLError("Can't connect to
> HTTPS URL because the SSL module is not available.")': /simple/pip/
> WARNING: Retrying (Retry(total=2, connect=None, read=None,
> redirect=None, status=None)) after connection broken by
> 'SSLError("Can't connect to HTTPS URL because the SSL module is not
> available.")': /simple/pip/ ERROR: Operation cancelled by user
>
>
>
What can I do?
|
2022/05/12
|
[
"https://Stackoverflow.com/questions/72216546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8812406/"
] |
One should add `min-h-0` to `<main>`. I swear I tried everything.
|
Add `overflow-y-scroll h-3/5` to the div containing the paragraphs.
Please find below the solution code and watch result in full screen.
```html
<script src="https://cdn.tailwindcss.com" ></script>
<div class="flex h-full flex-col">
<mark class="bg-red-900 p-2 font-semibold text-white"> Warning message </mark>
<main class="flex flex-1 bg-green-100">
<div class="flex flex-[4] bg-slate-100">
<div class="flex flex-col bg-slate-200">
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
</div>
<div class="flex flex-1 flex-col bg-slate-400">
<div class="flex flex-row-reverse border-y p-2">
<h2 class="flex items-center font-black">F0KLM</h2>
<div class="flex flex-1 flex-col">
<span class="text-sm">User</span>
<span class="text-sm">11:30</span>
</div>
</div>
</div>
</div>
<div class="flex min-h-0 flex-[7] flex-col bg-red-100">
<div class="flex items-center justify-between p-4">
<div class="flex gap-4">
<h1 class="font-black">F0KLM</h1>
<BadgeStatus status={"new"} />
</div>
<div class="rounded-md bg-black p-2 text-white">11:24</div>
</div>
<div class="mx-4 flex flex-1 flex-col gap-2 overflow-y-auto bg-white">
<div class="rounded-md border p-2">Header</div>
<div class="rounded-md border p-2 overflow-y-scroll h-3/5">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut
labore et dolore magna aliqua. At erat pellentesque adipiscing commodo elit at imperdiet.
Vulputate mi sit amet mauris commodo quis imperdiet massa tincidunt. Feugiat in fermentum
posuere urna nec tincidunt praesent semper feugiat. Arcu felis bibendum ut tristique et
egestas quis ipsum.</p>
Pharetra
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
</div>
<div class="rounded-md border p-2">Info</div>
</div>
<div class="flex justify-end gap-4 p-4">
<button variant="ghost">Ok</button>
<button variant="ghost">Reject</button>
</div>
</div>
</main>
<nav class="flex justify-around bg-white py-2">
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
</nav>
</div>
```
|
55,436,896
|
I have a table of IDs and dates of quarterly data, and I would like to reindex this to daily (weekdays).
Example table:
[](https://i.stack.imgur.com/bLvFf.png)
I'm trying to figure out a pythonic or pandas way to reindex to a higher frequency date range e.g. daily and forward fill any NaNs.
so far have tried:
```
df = pd.read_sql('select date, id, type, value from db_table' con=conn, index_col=['date', 'id', 'type'])
dates = pd.bdate_range(start, end)
new_idx = pd.MultiIndex.from_product([dates, df.index.get_level_values(1), df.index.get_level_values(2)]
new_df = df.reindex(new_idx)
#this just hangs
new_df = new_df.groupby(level=1).fillna(method='ffill')
```
to no avail. I either get a
`Exception: cannot handle a non-unique multi-index!`
Or, if the dates are consistent between ids and types the individual dates are reproduced multiple times (which sounds like a bug?)
Ultimately I would just like to group the table by date, id and type and have a consistent date index across ids and types.
Is there a way to do this in pandas?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55436896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5615327/"
] |
Yes you can do with `merge`
```
new_idx_frame=new_idx.to_frame()
new_idx_frame.columns=['date', 'id', 'type']
Yourdf=df.reset_index().merge(new_idx_frame,how='right',sort =True).groupby('id').ffill()# here I am using toy data
Out[408]:
id date type value
0 1 1 1 NaN
1 1 1 2 NaN
2 2 1 1 666666.0
3 2 1 2 99999.0
4 1 2 1 -1.0
5 1 2 1 -1.0
6 1 2 2 -1.0
7 2 2 1 99999.0
8 2 2 2 99999.0
```
---
Sample data
```
df=pd.DataFrame({'date':[1,1,2,2],'id':[2,2,1,1],'type':[2,1,1,1],'value':[99999,666666,-1,-1]})
df=df.set_index(['date', 'id', 'type'])
new_idx = pd.MultiIndex.from_product([[1,2], [1,2],[1,2]])
```
|
Wen-Ben's answer is almost there - thank you for that. The only thing missing is grouping by ['id', 'type'] when doing the forward fill.
Further, when creating the new MultiIndex, in my use case the levels should have unique values:
```
new_idx = pd.MultiIndex.from_product([dates, df.index.get_level_values(1).unique(), df.index.get_level_values(2).unique()])
```
|
66,491,254
|
I have created python desktop software. Now I want to market that as a product. But my problem is, anyone can decompile my exe file and they will get the actual code.
So is there any way to encrypt my code and convert it to an exe before deployment? I have tried different ways,
but nothing is working. Is there any way to do that? Thanks in advance.
|
2021/03/05
|
[
"https://Stackoverflow.com/questions/66491254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14155439/"
] |
This [link](https://wiki.python.org/moin/Asking%20for%20Help/How%20do%20you%20protect%20Python%20source%20code%3F) has most of the info you need.
But since links are discouraged here:
There is py2exe, which compiles your code into an .exe file, but afaik it's not difficult to reverse-engineer the code from the exe file.
You can of course make your code more difficult to understand. Rename your classes and functions to be non-sensical (e.g. rename `print(s)` to `delete(s)` or to `a()`); people will then have a difficult time.
You can also avoid all of that by using SaaS (Software as a Service), where you can host your code online on a server and get paid by people using it.
Or consider open-sourcing it :)
|
You can install pyinstaller via `pip install pyinstaller` (make sure to also add it to your environment variables), then open a shell in the folder where your file is (shift+right-click somewhere where no file is and "open PowerShell here") and run `pyinstaller --onefile YOUR_FILE`.
If a `dist` folder is created, take the exe file out of it and delete the `build` folder and the `.spec` file.
And there you go with your standalone exe file.
|
4,331,348
|
If I have a python module that has a bunch of functions, say like this:
```
#funcs.py
def foo() :
print "foo!"
def bar() :
print "bar!"
```
And I have another module that is designed to parse a list of functions from a string and run those functions:
```
#parser.py
from funcs import *
def execute(command):
command = command.split()
for c in command:
function = globals()[c]
function()
```
Then I can open up python and do the following:
```
>>> import parser
>>> parser.execute("foo bar bar foo")
foo!
bar!
bar!
foo!
```
I want to add a convenience function to `funcs.py` that allows a list of functions to be called as a function itself:
```
#funcs.py (new version)
import parser
def foo() :
print "foo!"
def bar() :
print "bar!"
def parse(commands="foo foo") :
parser.execute(commands)
```
Now I can recursively parse from the parser itself:
```
>>> import parser
>>> parser.execute("parse")
foo!
foo!
>>> parser.execute("parse bar parse")
foo!
foo!
bar!
foo!
foo!
```
But for some reason I can't just run `parse` from `funcs`, as I get a key error:
```
>>> import funcs
>>> funcs.parse("foo bar")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "funcs.py", line 11, in parse
parser.execute(commands)
File "parser.py", line 6, in execute
function = globals()[c]
KeyError: 'foo'
```
So even though `foo` should be imported into `parser.py` through the `from funcs import *` line, I'm not finding `foo` in the `globals()` of `parser.py` when it is used through `funcs.py`. How could this happen?
I should finally point out that importing `parser` and then `funcs` (but only in that order) allows it to work as expected:
```
>>> import parser
>>> import funcs
>>> funcs.parse("foo bar")
foo!
bar!
```
|
2010/12/02
|
[
"https://Stackoverflow.com/questions/4331348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/67829/"
] |
`import module_name` does something fundamentally different from what `from module_name import *` does.
The former creates a global named `module_name`, which is of type `module` and which contains the module's names, accessed as attributes. The latter creates a global for each of those names within `module_name`, but not for `module_name` itself.
Thus, when you `import funcs`, `foo` and `bar` are not put into `globals()`, and therefore are not found when `execute` looks for them.
Cyclic dependencies like this (trying to have `parser` import names from `funcs` while `funcs` also imports `parser`) are bad. Explicit is better than implicit. Don't try to create this much magic. Tell `parse()` what functions are available.
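A minimal sketch of that last suggestion: pass the parser an explicit mapping of the functions that are available, instead of relying on `globals()`. All names below are illustrative:

```python
def make_executor(allowed):
    """Return an execute() that dispatches only to explicitly registered functions."""
    def execute(command):
        for name in command.split():
            allowed[name]()  # a KeyError here means the name was never registered
    return execute

def foo():
    print("foo!")

def bar():
    print("bar!")

# The mapping makes the dependency explicit -- no import cycle needed.
execute = make_executor({"foo": foo, "bar": bar})
execute("foo bar bar foo")  # prints foo!, bar!, bar!, foo!
```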
|
* Print the globals after you import `parser` to see what it did.
* `parser` is also the name of a built-in module, and usually the built-in `parser` gets loaded, not yours.
I would change the name so you do not have problems.
* You're importing `funcs`, but `parser` imports `*` from `funcs`?
Think carefully about what order you are importing modules in and where you need them.
|
4,331,348
|
If I have a python module that has a bunch of functions, say like this:
```
#funcs.py
def foo() :
print "foo!"
def bar() :
print "bar!"
```
And I have another module that is designed to parse a list of functions from a string and run those functions:
```
#parser.py
from funcs import *
def execute(command):
command = command.split()
for c in command:
function = globals()[c]
function()
```
Then I can open up python and do the following:
```
>>> import parser
>>> parser.execute("foo bar bar foo")
foo!
bar!
bar!
foo!
```
I want to add a convenience function to `funcs.py` that allows a list of functions to be called as a function itself:
```
#funcs.py (new version)
import parser
def foo() :
print "foo!"
def bar() :
print "bar!"
def parse(commands="foo foo") :
parser.execute(commands)
```
Now I can recursively parse from the parser itself:
```
>>> import parser
>>> parser.execute("parse")
foo!
foo!
>>> parser.execute("parse bar parse")
foo!
foo!
bar!
foo!
foo!
```
But for some reason I can't just run `parse` from `funcs`, as I get a key error:
```
>>> import funcs
>>> funcs.parse("foo bar")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "funcs.py", line 11, in parse
parser.execute(commands)
File "parser.py", line 6, in execute
function = globals()[c]
KeyError: 'foo'
```
So even though `foo` should be imported into `parser.py` through the `from funcs import *` line, I'm not finding `foo` in the `globals()` of `parser.py` when it is used through `funcs.py`. How could this happen?
I should finally point out that importing `parser` and then `funcs` (but only in that order) allows it to work as expected:
```
>>> import parser
>>> import funcs
>>> funcs.parse("foo bar")
foo!
bar!
```
|
2010/12/02
|
[
"https://Stackoverflow.com/questions/4331348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/67829/"
] |
`import module_name` does something fundamentally different from what `from module_name import *` does.
The former creates a global named `module_name`, which is of type `module` and which contains the module's names, accessed as attributes. The latter creates a global for each of those names within `module_name`, but not for `module_name` itself.
Thus, when you `import funcs`, `foo` and `bar` are not put into `globals()`, and therefore are not found when `execute` looks for them.
Cyclic dependencies like this (trying to have `parser` import names from `funcs` while `funcs` also imports `parser`) are bad. Explicit is better than implicit. Don't try to create this much magic. Tell `parse()` what functions are available.
|
Your "parser" is a pretty bad idea.
Do this instead.
```
def execute(*functions):
for function in functions:
function()
```
Then you can open up python and do the following:
```
>>> import parser
>>> from funcs import foo, bar
>>> parser.execute(foo, bar, bar, foo)
```
Life will be simpler without using "strings" where the function itself is what you really meant.
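If you do need string commands (say, they come from user input), an explicit registry keeps the string-to-function mapping visible instead of hiding it in `globals()`. This is a sketch with made-up names:

```python
REGISTRY = {}

def register(fn):
    """Decorator: record a function in the registry under its own name."""
    REGISTRY[fn.__name__] = fn
    return fn

@register
def foo():
    return "foo!"

@register
def bar():
    return "bar!"

def execute(commands):
    """Run each named command in order and collect the results."""
    return [REGISTRY[name]() for name in commands.split()]
```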
|
73,066,303
|
I have Python 3.10 and Selenium 4.3.0, and I would like to click on the following element:
```
<a href="#" onclick="fireLoginOrRegisterModalRequest('sign_in');ga('send', 'event', 'main_navigation', 'login', '1st_level');"> Mein Konto </a>
```
Unfortunately,
`driver.find_element(By.CSS_SELECTOR,"a[onclick^='fireLoginOrRegisterModalRequest'][onclick*='login']").click()`
does not work. See the following error message. The other approaches don't work either, and I get a similar error message.
```
"C:\Users\...\AppData\Local\Programs\Python\Python310\python.exe" "C:/Users/.../PycharmProjects/bot/main.py"
C:\Users\...\PycharmProjects\bot\main.py:7: DeprecationWarning: executable_path has been deprecated, please pass in a Service object
driver = webdriver.Chrome('./chromedriver.exe')
Traceback (most recent call last):
File "C:\Users\...\PycharmProjects\bot\main.py", line 15, in <module>
mkButton = driver.find_element(By.CSS_SELECTOR, "a[onclick^='fireLoginOrRegisterModalRequest'][onclick*='login']").click()
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 88, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 396, in _execute
return self._parent.execute(command, params)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 435, in execute
self.error_handler.check_response(response)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
(Session info: chrome=103.0.5060.114)
Stacktrace:
Backtrace:
Ordinal0 [0x00D35FD3+2187219]
Ordinal0 [0x00CCE6D1+1763025]
Ordinal0 [0x00BE3D40+802112]
Ordinal0 [0x00C12C03+994307]
Ordinal0 [0x00C089B3+952755]
Ordinal0 [0x00C2CB8C+1100684]
Ordinal0 [0x00C08394+951188]
Ordinal0 [0x00C2CDA4+1101220]
Ordinal0 [0x00C3CFC2+1167298]
Ordinal0 [0x00C2C9A6+1100198]
Ordinal0 [0x00C06F80+946048]
Ordinal0 [0x00C07E76+949878]
GetHandleVerifier [0x00FD90C2+2721218]
GetHandleVerifier [0x00FCAAF0+2662384]
GetHandleVerifier [0x00DC137A+526458]
GetHandleVerifier [0x00DC0416+522518]
Ordinal0 [0x00CD4EAB+1789611]
Ordinal0 [0x00CD97A8+1808296]
Ordinal0 [0x00CD9895+1808533]
Ordinal0 [0x00CE26C1+1844929]
BaseThreadInitThunk [0x7688FA29+25]
RtlGetAppContainerNamedObjectPath [0x77D57A9E+286]
RtlGetAppContainerNamedObjectPath [0x77D57A6E+238]
Process finished with exit code 1
```
|
2022/07/21
|
[
"https://Stackoverflow.com/questions/73066303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19163184/"
] |
It turns out that despite opening up to all IP-addresses, the networkpolicy does not allow egress to the DNS pod, which is in another namespace.
```
# Identifying DNS pod
kubectl get pods -A | grep dns
# Identifying DNS pod label
kubectl describe pods -n kube-system coredns-64cfd66f7-rzgwk
```
Next I add the dns label to the egress policy:
```
# network_policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all
namespace: mytestnamespace
spec:
podSelector: {}
policyTypes:
- Egress
- Ingress
egress:
- to:
- ipBlock:
cidr: "0.0.0.0/0"
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "kube-system"
- podSelector:
matchLabels:
k8s-app: "kube-dns"
```
I apply the network policy and test the curl calls:
```
# Setting up policy
kubectl apply -f network_policy.yaml
# Testing curl call
kubectl -n mytestnamespace exec service-c-78f784b475-qsdqg -- bin/bash -c 'curl www.google.com'
```
SUCCESS! Now I can make egress calls, next I just have to block the appropriate IP-addresses in the private network.
|
The reason is, `curl` attempts to form a 2-way TCP connection with the HTTP server at `www.google.com`. This means, *both* egress and ingress traffic need to be allowed on your policy. Currently, only out-bound traffic is allowed. Maybe, you'll be able to see this in more detail if you ran curl in verbose mode:
```
kubectl -n mytestnamespace exec service-c-78f784b475-qsdqg -- bin/bash -c 'curl www.google.com' -v
```
You can then see the communication back and forth marked by arrows `>` (out-going) and `<` (in-coming). You'll notice only `>` arrows will be listed and no `<` traffic will be shown.
>
> Note: If you do something simpler like `ping google.com` it might work, since this is a simple state-less communication.
>
>
>
In order to have this, you can simply add an allow-all ingress rule to your policy like:
```
ingress:
- {}
```
Also, there's a simpler way to allow all egress traffic simply as:
```
egress:
- {}
```
I hope this helps. You can read more about policies [here](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
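Putting the two shorthands together, a complete allow-all policy for the question's namespace would look roughly like this (a sketch assembled from the snippets above):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all
  namespace: mytestnamespace
spec:
  podSelector: {}      # applies to every pod in the namespace
  policyTypes:
    - Egress
    - Ingress
  egress:
    - {}               # allow all outgoing traffic
  ingress:
    - {}               # allow all incoming traffic
```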
|
61,243,561
|
I wrote some classes for a shopping cart: Customer, Order, and some functions for discounts on the orders. These are the final lines of my code.
```
anu = Customer('anu', 4500) # 1
user_cart = [Lineitem('apple', 7, 7), Lineitem('orange', 5, 6)] # 2
order1 = Order(anu, user_cart) # 2
print(type(joe)) # 4
print(order1) # 5
format() # 6
```
Guys, I know that `format` throws an error for not passing any argument. What I am asking is why the error comes first, and if it comes first, how does the rest of the code execute fine. I think the Python interpreter keeps executing code and, when it finds a bug, it deletes all the output and throws that error.
This is my output.
```
Traceback (most recent call last):
File "C:\Users\User\PyCharm Community Edition with Anaconda plugin 2019.3.1\plugins\python-ce\helpers\pydev\pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Users\User\PyCharm Community Edition with Anaconda plugin 2019.3.1\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/programming/python/progs/my_programs/abc_module/abstract_classes.py", line 97, in <module>
format()
TypeError: format expected at least 1 argument, got 0
<class '__main__.Customer'>
Congratulations! your order was successfully processed.
Customer name: anu
Fidelity points: 4500 points
Order :-
product count: 2
product : apple
amount : 7 apple(s)
price per unit : 7$ per 1 apple
total price : 49$
product : orange
amount : 5 orange(s)
price per unit : 6$ per 1 orange
total price : 30$
subtotal : 79$
promotion : Fidelity Deal
discount amount : 24.88$
total : 54.115$
```
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61243561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12857878/"
] |
You can do this by awaiting the listen even, wrapping it in a promise, and calling the promise resolve as the callback to the server listen
```js
const app = express();
let server;
await new Promise(resolve => server = app.listen(0, "127.0.0.1", resolve));
this.global.server = server;
```
You could also put a custom callback that will just call the promise resolver as the third argument to the `app.listen()` and it should run that code then call resolve if you need some sort of diagnostics.
|
Extending [Robert Mennell](https://stackoverflow.com/users/5377773) answer:
>
> You could also put a custom callback that will just call the promise resolver as the third argument to the `app.listen()` and it should run that code then call resolve if you need some sort of diagnostics.
>
>
>
```js
let server;
const app = express();
await new Promise(function(resolve) {
server = app.listen(0, "127.0.0.1", function() {
console.log(`Running express server on '${JSON.stringify(server.address())}'...`);
resolve();
});
});
this.global.server = server;
```
Then, you can access the `this.global.server` in your tests files to get the server port/address: [Is it possible to create an Express.js server in my Jest test suite?](https://stackoverflow.com/a/61260044/4934640)
|
57,122,160
|
I am setting up a django rest api and need to integrate social login feature.I followed the following link
[Simple Facebook social Login using Django Rest Framework.](https://medium.com/@katherinekimetto/simple-facebook-social-login-using-django-rest-framework-e2ac10266be1)
After setting everything up, when I migrate (7th step: `python manage.py migrate`), I get the error `ModuleNotFoundError: No module named 'rest_frameworkoauth2_provider'`. Is there any package I missed? I can't figure out the error.
|
2019/07/20
|
[
"https://Stackoverflow.com/questions/57122160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5113584/"
] |
Firefox uses different flags. I am not sure exactly what your aim is but I am assuming you are trying to avoid some website detecting that you are using selenium.
There are different methods to avoid websites detecting the use of Selenium.
1) The value of navigator.webdriver is set to true by default when using Selenium. This variable will be present in Chrome as well as Firefox. This variable should be set to "undefined" to avoid detection.
2) A proxy server can also be used to avoid detection.
3) Some websites are able to use the state of your browser to determine if you are using Selenium. You can set Selenium to use a custom browser profile to avoid this.
The code below uses all three of these approaches.
```
profile = webdriver.FirefoxProfile('C:\\Users\\You\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\something.default-release')
PROXY_HOST = "12.12.12.123"
PROXY_PORT = "1234"
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", PROXY_HOST)
profile.set_preference("network.proxy.http_port", int(PROXY_PORT))
profile.set_preference("dom.webdriver.enabled", False)
profile.set_preference('useAutomationExtension', False)
profile.update_preferences()
desired = DesiredCapabilities.FIREFOX
driver = webdriver.Firefox(firefox_profile=profile, desired_capabilities=desired)
```
|
You may try:
```
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from selenium.webdriver import DesiredCapabilities
from selenium.webdriver import Firefox
profile = FirefoxProfile()
profile.set_preference('devtools.jsonview.enabled', False)
profile.update_preferences()
desired = DesiredCapabilities.FIREFOX
driver = Firefox(firefox_profile=profile, desired_capabilities=desired)
```
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
It's hard to answer, not knowing exactly what's allowed and what's not. But how about this O(N^2) solution?
```
[x for x in correct_order if x in messed_order] + [x for x in messed_order if x not in correct_order]
```
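To make the behavior concrete, here is a quick check of this comprehension against the example lists from the question (the trailing items come out in their `messed_order` order, which the question says is acceptable):

```python
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]

# Items known to correct_order first, in correct_order's order,
# then everything else in its original messed_order order.
ordered = ([x for x in correct_order if x in messed_order]
           + [x for x in messed_order if x not in correct_order])
print(ordered)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 44, 66]
```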
|
Does the exact order of the 55/66/44 items matter, or do they just need to be listed at the end? If the order doesn't matter you could do this:
```
[i for i in correct_order if i in messed_order] +
list(set(messed_order) - set(correct_order))
```
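A quick sanity check of this set-based variant with the question's lists; since the order of the extra items depends on set iteration order, only membership of the tail is asserted here:

```python
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]

result = ([i for i in correct_order if i in messed_order]
          + list(set(messed_order) - set(correct_order)))

assert result[:11] == correct_order          # known items, in order
assert sorted(result[11:]) == [44, 55, 66]   # extras, in any order
```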
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
It's hard to answer, not knowing exactly what's allowed and what's not. But how about this O(N^2) solution?
```
[x for x in correct_order if x in messed_order] + [x for x in messed_order if x not in correct_order]
```
|
Here is one that **destroys `messed_order`**
```
[messed_order.remove(i) or i for i in correct_order if i in messed_order] + messed_order
```
This one sorts `messed_order` in place
```
messed_order.sort(key=(correct_order+messed_order).index)
```
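Both one-liners can be verified against the question's data; the in-place `sort` keys each element by its first index in the concatenated list, so known items sort to the front in `correct_order`'s order:

```python
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]

# Known items get keys 0..10; the extras (55, 44, 66) keep their
# relative order via their later first index in the concatenated list.
messed_order.sort(key=(correct_order + messed_order).index)
print(messed_order)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 44, 66]
```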
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
It's hard to answer, not knowing exactly what's allowed and what's not. But how about this O(N^2) solution?
```
[x for x in correct_order if x in messed_order] + [x for x in messed_order if x not in correct_order]
```
|
Not to detract from the answers already given, but it's python - you *aren't* arbitrarily restricted from using loops:
```
for item in correct_order: messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
is as valid as putting the loop on two lines.
Alternatively, this is Zope - if you can't do it in a single "python:" expression, yes you can use a helper script. Scripts are found by acquisition, so a template containing something like:
```
<tag tal:define="abc context/script">
```
will look up *either* an attribute 'script' of the current object (*context*) [which could be a method or a property], or a "Script (Python)" object named *script* in the current folder, or in any ancestor folder! In fact, it doesn't even need to be a script object - though for your purpose it needs to be some object that returns a list.
Far from "the Department of Arbitrary Restrictions" as Thanatos put it, it's more as if there aren't enough restrictions!
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
It's hard to answer, not knowing exactly what's allowed and what's not. But how about this O(N^2) solution?
```
[x for x in correct_order if x in messed_order] + [x for x in messed_order if x not in correct_order]
```
|
Create a `Script (Python)` object in your skin and use that as a function. TALES expressions are limited for a reason: they are there only to help you create HTML or XML markup, not do full-scale business logic. Better still, create a proper browser view and avoid the severe restrictions laid on Through-The-Web editable code.
Also, you are misrepresenting or misunderstanding TALES. You *can* use builtin methods like sorted and set. And instead of `if` you can use test(condition, iftrue, iffalse) or a good old `condition and iftrue or iffalse` with the limitation that the result of `iftrue` must itself evaluate to true.
Even better, you can access a limited set of Python modules via the `modules` dictionary, such as `modules['string']`. You'll need to make additional security declarations in a filesystem python module to extend this though.
See the [Python TALES expression section](http://docs.zope.org/zope2/zope2book/AppendixC.html#tales-python-expressions) of the TAL documentation. Note that the list of built-ins accessible to TALES listed there has since been expanded to cover newer python versions.
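The limitation mentioned for the `condition and iftrue or iffalse` idiom can be demonstrated in plain Python (the values here are illustrative, not part of TALES):

```python
# Works when iftrue is truthy:
label = (5 > 0) and "positive" or "non-positive"
assert label == "positive"

# Breaks when iftrue is falsy: 0 is skipped and the "else"
# branch wins even though the condition is True.
value = True and 0 or "fallback"
assert value == "fallback"  # not 0!
```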
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
Create a `Script (Python)` object in your skin and use that as a function. TALES expressions are limited for a reason: they are there only to help you create HTML or XML markup, not do full-scale business logic. Better still, create a proper browser view and avoid the severe restrictions laid on Through-The-Web editable code.
Also, you are misrepresenting or misunderstanding TALES. You *can* use builtin methods like sorted and set. And instead of `if` you can use test(condition, iftrue, iffalse) or a good old `condition and iftrue or iffalse` with the limitation that the result of `iftrue` must itself evaluate to true.
Even better, you can access a limited set of Python modules via the `modules` dictionary, such as `modules['string']`. You'll need to make additional security declarations in a filesystem python module to extend this though.
See the [Python TALES expression section](http://docs.zope.org/zope2/zope2book/AppendixC.html#tales-python-expressions) of the TAL documentation. Note that the list of built-ins accessible to TALES listed there has since been expanded to cover newer python versions.
|
Does the exact order of the 55/66/44 items matter, or do they just need to be listed at the end? If the order doesn't matter you could do this:
```
[i for i in correct_order if i in messed_order] +
list(set(messed_order) - set(correct_order))
```
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
Create a `Script (Python)` object in your skin and use that as a function. TALES expressions are limited for a reason: they are there only to help you create HTML or XML markup, not do full-scale business logic. Better still, create a proper browser view and avoid the severe restrictions laid on Through-The-Web editable code.
Also, you are misrepresenting or misunderstanding TALES. You *can* use builtin methods like sorted and set. And instead of `if` you can use test(condition, iftrue, iffalse) or a good old `condition and iftrue or iffalse` with the limitation that the result of `iftrue` must itself evaluate to true.
Even better, you can access a limited set of Python modules via the `modules` dictionary, such as `modules['string']`. You'll need to make additional security declarations in a filesystem python module to extend this though.
See the [Python TALES expression section](http://docs.zope.org/zope2/zope2book/AppendixC.html#tales-python-expressions) of the TAL documentation. Note that the list of built-ins accessible to TALES listed there has since been expanded to cover newer python versions.
|
Here is one that **destroys `messed_order`**
```
[messed_order.remove(i) or i for i in correct_order if i in messed_order] + messed_order
```
This one sorts `messed_order` in place
```
messed_order.sort(key=(correct_order+messed_order).index)
```
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one at a time, saving returns into another variable)
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question
I have a 'correct\_order' list, with these values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with these values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as a base. The order of the rest of the items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), which doesn't have a helper python script (I think that's not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
Create a `Script (Python)` object in your skin and use that as a function. TALES expressions are limited for a reason: they are there only to help you create HTML or XML markup, not do full-scale business logic. Better still, create a proper browser view and avoid the severe restrictions laid on Through-The-Web editable code.
Also, you are misrepresenting or misunderstanding TALES. You *can* use builtin methods like sorted and set. And instead of `if` you can use test(condition, iftrue, iffalse) or a good old `condition and iftrue or iffalse` with the limitation that the result of `iftrue` must itself evaluate to true.
Even better, you can access a limited set of Python modules via the `modules` dictionary, such as `modules['string']`. You'll need to make additional security declarations in a filesystem python module to extend this though.
See the [Python TALES expression section](http://docs.zope.org/zope2/zope2book/AppendixC.html#tales-python-expressions) of the TAL documentation. Note that the list of built-ins accessible to TALES listed there has since been expanded to cover newer python versions.
|
Not to detract from the answers already given, but it's python - you *aren't* arbitrarily restricted from using loops:
```
for item in correct_order: messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
is as valid as putting the loop on two lines.
Alternatively, this is Zope - if you can't do it in a single "python:" expression, yes you can use a helper script. Scripts are found by acquisition, so a template containing something like:
```
<tag tal:define="abc context/script">
```
will look up *either* an attribute 'script' of the current object (*context*) [which could be a method or a property], or a "Script (Python)" object named *script* in the current folder, or in any ancestor folder! In fact, it doesn't even need to be a script object - though for your purpose it needs to be some object that returns a list.
Far from "the Department of Arbitrary Restrictions" as Thanatos put it, it's more as if there aren't enough restrictions!
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote, however I have no idea of how to escape this problem so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\`, changing the double quotes to single quotes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do
`json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})` in my python code, and yet I am getting this problem.
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
you're trying to write a `tuple` to your file.
While it would work for `print` since `print` knows how to print several arguments, `write` is more strict.
Just format your `var` properly.
```
var = "\n{}\n".format(test)
```
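A minimal check of this formatting (the `test` value here is a made-up example):

```python
test = "asd"
var = "\n{}\n".format(test)   # a single string, not a tuple
assert var == "\nasd\n"

with open('output.txt', 'w') as f:
    f.write(var)              # write() accepts exactly one string
```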
|
I assume you want to concatinate the string, so use `+` instead of `,`
```
import pandas as pd
import os
#path = '/test'
#os.chdir(path)
def writeScores(test):
with open('output.txt', 'w') as f:
var = "\n" + test + "\n"
f.write(var)
writeScores("asd")
```
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote, however I have no idea of how to escape this problem so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\`, changing the double quotes to single quotes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do
`json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})` in my python code, and yet I am getting this problem.
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
you're trying to write a `tuple` to your file.
While it would work for `print` since `print` knows how to print several arguments, `write` is more strict.
Just format your `var` properly.
```
var = "\n{}\n".format(test)
```
|
Instead of a string, you have created a tuple here with comma-separated values, like:
```
var = "\n", test, "\n"
```
You can't write that tuple directly to the file, so let's join the values inside that tuple into a single string:
```
with open('output.txt', 'w') as fp:
    var = "\n", test, "\n"
    fp.write(''.join('%s' % x for x in var))
```
and the much better way is answered by @Jean-François Fabre
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote, however I have no idea of how to escape this problem so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\`, changing the double quotes to single quotes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do
`json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})` in my python code, and yet I am getting this problem.
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
You have this error because your first argument is not a string; you should pass it a string. You can use this simple script, and you can also change "rw" to "w+",
or use 'a+' for appending (not erasing existing content):
```
import os
writepath = 'some/path/to/file.txt'
mode = 'a' if os.path.exists(writepath) else 'w'
with open(writepath, mode) as f:
f.write('Freeman Was Here!\n')
```
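A sketch of that mode selection using a temporary directory (the path and message here are illustrative):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    writepath = os.path.join(d, 'file.txt')
    for _ in range(2):
        # 'w' on the first run (file absent), 'a' afterwards
        mode = 'a' if os.path.exists(writepath) else 'w'
        with open(writepath, mode) as f:
            f.write('Freeman Was Here!\n')
    with open(writepath) as f:
        count = f.read().count('Freeman Was Here!')

print(count)  # 2 -- the second run appended instead of overwriting
```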
|
I assume you want to concatinate the string, so use `+` instead of `,`
```
import pandas as pd
import os
#path = '/test'
#os.chdir(path)
def writeScores(test):
with open('output.txt', 'w') as f:
var = "\n" + test + "\n"
f.write(var)
writeScores("asd")
```
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote, however I have no idea of how to escape this problem so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\`, changing the double quotes to single quotes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do
`json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})` in my python code, and yet I am getting this problem.
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
I assume you want to concatinate the string, so use `+` instead of `,`
```
import pandas as pd
import os
#path = '/test'
#os.chdir(path)
def writeScores(test):
with open('output.txt', 'w') as f:
var = "\n" + test + "\n"
f.write(var)
writeScores("asd")
```
|
Instead of a string, you have created a tuple here with comma-separated values, like:
```
var = "\n", test, "\n"
```
You can't write that tuple directly to the file, so let's join the values inside that tuple into a single string:
```
with open('output.txt', 'w') as fp:
    var = "\n", test, "\n"
    fp.write(''.join('%s' % x for x in var))
```
and the much better way is answered by @Jean-François Fabre
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote, however I have no idea of how to escape this problem so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\`, changing the double quotes to single quotes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do
`json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})` in my python code, and yet I am getting this problem.
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
You have this error because your first argument is not a string; you should pass it a string. You can use this simple script, and you can also change "rw" to "w+",
or use 'a+' for appending (not erasing existing content):
```
import os
writepath = 'some/path/to/file.txt'
mode = 'a' if os.path.exists(writepath) else 'w'
with open(writepath, mode) as f:
f.write('Freeman Was Here!\n')
```
|
Instead of a string, you have created a tuple here with comma-separated values, like:
```
var = "\n", test, "\n"
```
You can't write that tuple directly to the file, so let's join the values inside that tuple into a single string:
```
with open('output.txt', 'w') as fp:
    var = "\n", test, "\n"
    fp.write(''.join('%s' % x for x in var))
```
and the much better way is answered by @Jean-François Fabre
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in a virtual environment (env), which is active
3. In my media\_api\_server/cron.py I have the following function:
```
def cronSendEmail():
print("Hello")
return true
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything seems to be defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
Your code actually works. You may think that `print("Hello")` should appear in stdout, but it doesn't work that way, because cron doesn't use `stdout` and `stderr` for its output. To see actual results you should point to a log file in the `CRONJOBS` list: just put `'>> /path/to/log/file.log'` as the last argument, e.g.:
```
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '>> /path/to/log/file.log')
]
```
Also it might be helpful to redirect your errors to stdout too. For this you need to add `CRONTAB_COMMAND_SUFFIX = '2>&1'` to your `settings.py`.
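Putting both suggestions together, the relevant part of `settings.py` would look roughly like this (the log path is a placeholder):

```python
# settings.py (sketch): run the job every minute and append its stdout
# to a log file; the trailing suffix redirects stderr there as well.
CRONJOBS = [
    ('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '>> /path/to/log/file.log')
]
CRONTAB_COMMAND_SUFFIX = '2>&1'
```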
|
Try changing the crontab so that its first line is:
```
SHELL=/bin/bash
```
Create the new line at crontab with:
```
./manage.py crontab add
```
Then edit the line created by the crontab library with the command:
>
> crontab -e
>
>
>
```
*/1 * * * * source /home/app/env/bin/activate && /home/app/manage.py crontab run 123HASHOFTASK123
```
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in an active virtual environment (env)
3. In my media\_api\_server/cron.py I have the following function:
```
def cronSendEmail():
print("Hello")
return true
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything seems to be defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
Your code actually works. You may think that `print("Hello")` should appear in stdout, but it doesn't work that way, because cron doesn't use `stdout` and `stderr` for its output. To see actual results you should point to a log file in the `CRONJOBS` list: just put `'>> /path/to/log/file.log'` as the last argument, e.g.:
```
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '>> /path/to/log/file.log')
]
```
Also it might be helpful to redirect your errors to stdout too. For this you need to add `CRONTAB_COMMAND_SUFFIX = '2>&1'` to your `settings.py`.
|
`source` sometimes doesn't work in shell scripts;
use `.` (dot) instead:
```
*/1 * * * * . /home/app/env/bin/activate e.t.c.
```
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in an active virtual environment (env)
3. In my media\_api\_server/cron.py I have the following function:
```
def cronSendEmail():
print("Hello")
return true
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything seems to be defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
Your code actually works. You may think that `print("Hello")` should appear in stdout, but it doesn't work that way, because cron doesn't use `stdout` and `stderr` for its output. To see actual results you should point to a log file in the `CRONJOBS` list: just put `'>> /path/to/log/file.log'` as the last argument, e.g.:
```
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '>> /path/to/log/file.log')
]
```
Also it might be helpful to redirect your errors to stdout too. For this you need to add `CRONTAB_COMMAND_SUFFIX = '2>&1'` to your `settings.py`.
|
An alternative way, directly in the OS (in case you are running on a Unix system), so that you won't have to fiddle with Python libraries: provided your OS user has all permissions on (or owns) the Django project, you can do it on the command line, in the directory where your project folder is:
```
user@server:~$ crontab -l
0 6 * * * python PROJECT/manage.py shell -c "<--import statements; function calls;-->"
```
Just sharing, because recently I had a similar situation and this turned out to be a clean workaround.
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in an active virtual environment (env)
3. In my media\_api\_server/cron.py I have the following function:
```
def cronSendEmail():
print("Hello")
return true
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything seems to be defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
`source` sometimes doesn't work in shell scripts;
use `.` (dot) instead:
```
*/1 * * * * . /home/app/env/bin/activate e.t.c.
```
|
Try changing the crontab so that its first line is:
```
SHELL=/bin/bash
```
Create the new line at crontab with:
```
./manage.py crontab add
```
Then edit the line created by the crontab library with the command:
>
> crontab -e
>
>
>
```
*/1 * * * * source /home/app/env/bin/activate && /home/app/manage.py crontab run 123HASHOFTASK123
```
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in an active virtual environment (env)
3. In my media\_api\_server/cron.py I have the following function:
```
def cronSendEmail():
print("Hello")
return true
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything seems to be defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
`source` sometimes doesn't work in shell scripts;
use `.` (dot) instead:
```
*/1 * * * * . /home/app/env/bin/activate e.t.c.
```
|
An alternative way, directly in the OS (in case you are running on a Unix system), so that you won't have to fiddle with Python libraries: provided your OS user has all permissions on (or owns) the Django project, you can do it on the command line, in the directory where your project folder is:
```
user@server:~$ crontab -l
0 6 * * * python PROJECT/manage.py shell -c "<--import statements; function calls;-->"
```
Just sharing, because recently I had a similar situation and this turned out to be a clean workaround.
|
55,476,131
|
I'm trying to implement the fastai pretrained language model, and it requires torch to work. After running the code, I got a problem with the import of torch.\_C.
I run it on Linux, Python 3.7.1, installed via pip: torch 1.0.1.post2, CUDA V7.5.17. I'm getting this error:
```
Traceback (most recent call last):
File "pretrain_lm.py", line 7, in <module>
import fastai
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/__init__.py", line 1, in <module>
from .basic_train import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/basic_train.py", line 2, in <module>
from .torch_core import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/torch_core.py", line 2, in <module>
from .imports.torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/__init__.py", line 2, in <module>
from .torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/torch.py", line 1, in <module>
import torch, torch.nn.functional as F
File "/home/andira/anaconda3/lib/python3.7/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
So I tried to run this line:
```
from torch._C import *
```
and got same result
```
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
I checked `/home/andira/anaconda3/lib/python3.7/site-packages/torch/lib` and there are only the `libcaffe2_gpu.so` and `libshm.so` files; I can't find libtorch\_python.so either. My question is: what actually is libtorch\_python.so? I've read some articles, and most of them talked about *undefined symbol*, not *cannot open shared object file: No such file or directory* like mine. I'm new to Python and torch, so I really appreciate your answer.
|
2019/04/02
|
[
"https://Stackoverflow.com/questions/55476131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6332850/"
] |
My problem is solved. I uninstalled torch twice:
```
pip uninstall torch
pip uninstall torch
```
and then reinstalled it:
```
pip install torch==1.0.1.post2
```
|
Try using a newer `pytorch` version. Upgrade the `pytorch` library using the following command:
```
pip install -U torch==1.5
```
If you are working on Colab then use the following command,
```
!pip install -U torch==1.5
```
If you are still facing issues with the library, install the `detectron2` library as well:
```
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
```
|
55,476,131
|
I'm trying to implement the fastai pretrained language model, and it requires torch to work. After running the code, I got a problem with the import of torch.\_C.
I run it on Linux, Python 3.7.1, installed via pip: torch 1.0.1.post2, CUDA V7.5.17. I'm getting this error:
```
Traceback (most recent call last):
File "pretrain_lm.py", line 7, in <module>
import fastai
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/__init__.py", line 1, in <module>
from .basic_train import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/basic_train.py", line 2, in <module>
from .torch_core import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/torch_core.py", line 2, in <module>
from .imports.torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/__init__.py", line 2, in <module>
from .torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/torch.py", line 1, in <module>
import torch, torch.nn.functional as F
File "/home/andira/anaconda3/lib/python3.7/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
So I tried to run this line:
```
from torch._C import *
```
and got same result
```
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
I checked `/home/andira/anaconda3/lib/python3.7/site-packages/torch/lib` and there are only the `libcaffe2_gpu.so` and `libshm.so` files; I can't find libtorch\_python.so either. My question is: what actually is libtorch\_python.so? I've read some articles, and most of them talked about *undefined symbol*, not *cannot open shared object file: No such file or directory* like mine. I'm new to Python and torch, so I really appreciate your answer.
|
2019/04/02
|
[
"https://Stackoverflow.com/questions/55476131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6332850/"
] |
My problem is solved. I uninstalled torch twice:
```
pip uninstall torch
pip uninstall torch
```
and then reinstalled it:
```
pip install torch==1.0.1.post2
```
|
I ran into this error when I accidentally overwrote `pytorch` with one from a different channel. My original `pytorch` installation was from the `pytorch` channel, and in a later update it was overwritten with the one from `conda-forge`. I got this error even though the version was the same. After reinstalling `pytorch` from the `pytorch` channel, the error was gone.
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can add DJANGO\_SETTINGS\_MODULE as an environment variable:
In the menu: Run -> Edit Configurations -> Templates -> Python Tests -> Unittests
[](https://i.stack.imgur.com/XKKTG.png)
And delete old "Unittests for tests...." entries.
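Under the hood, that run-configuration field just exports an environment variable before the test runner starts; the equivalent in plain Python would be the following (the module path `myproject.settings` is a placeholder for your own project):

```python
import os

# What the PyCharm template does for you: export DJANGO_SETTINGS_MODULE
# so that django.conf.settings can locate INSTALLED_APPS at import time.
os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings"  # placeholder
assert os.environ["DJANGO_SETTINGS_MODULE"] == "myproject.settings"
```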
|
You can specify the settings in your test command.
Assuming you're in the xyz directory, and the structure is:
```
/xyz
- manage.py
- xyz/
- settings.py
```
The following command should work
```
python manage.py test --settings=xyz.settings
```
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can add DJANGO\_SETTINGS\_MODULE as an environment variable:
In the menu: Run -> Edit Configurations -> Templates -> Python Tests -> Unittests
[](https://i.stack.imgur.com/XKKTG.png)
And delete old "Unittests for tests...." entries.
|
Edited: for this method to work, Django support should be enabled in PyCharm. I guess it should be possible to set up an equivalent template in the Community Edition of PyCharm.
**Method with Django support enabled:**
I find that the most convenient way, which also allows you to click on a particular test case and run it directly within PyCharm without having to set the settings every time, is the following:
->Edit configuration (Run/Debug configurations)
->Templates and select "Django Tests"
->Tick "Custom settings" and then browse to the settings you want use.
Then when you launch tests directly within PyCharm, it will use this as a template.
**If you test with any other method supported by PyCharm**, you can pick the testing framework in PyCharm: [Choose testing framework](https://www.jetbrains.com/help/pycharm/choosing-your-testing-framework.html)
and then create a template for it.
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can add DJANGO\_SETTINGS\_MODULE as an environment variable:
In the menu: Run -> Edit Configurations -> Templates -> Python Tests -> Unittests
[](https://i.stack.imgur.com/XKKTG.png)
And delete old "Unittests for tests...." entries.
|
If you use django and pytest, then I recommend the plugin [pytest-django](https://pytest-django.readthedocs.io/en/latest/)
It provides a simple way to set DJANGO\_SETTINGS\_MODULE via configuration.
See [configuring django](https://pytest-django.readthedocs.io/en/latest/configuring_django.html#pytest-ini-settings)
```
[pytest]
DJANGO_SETTINGS_MODULE = test_settings
```
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can specify the settings in your test command.
Assuming you're in the xyz directory, and the structure is:
```
/xyz
- manage.py
- xyz/
- settings.py
```
The following command should work
```
python manage.py test --settings=xyz.settings
```
|
Edited: for this method to work, Django support should be enabled in PyCharm. I guess it should be possible to set up an equivalent template in the Community Edition of PyCharm.
**Method with Django support enabled:**
I find that the most convenient way, which also allows you to click on a particular test case and run it directly within PyCharm without having to set the settings every time, is the following:
->Edit configuration (Run/Debug configurations)
->Templates and select "Django Tests"
->Tick "Custom settings" and then browse to the settings you want use.
Then when you launch tests directly within PyCharm, it will use this as a template.
**If you test with any other method supported by PyCharm**, you can pick the testing framework in PyCharm: [Choose testing framework](https://www.jetbrains.com/help/pycharm/choosing-your-testing-framework.html)
and then create a template for it.
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can specify the settings in your test command.
Assuming you're in the xyz directory, and the structure is:
```
/xyz
- manage.py
- xyz/
- settings.py
```
The following command should work
```
python manage.py test --settings=xyz.settings
```
|
If you use django and pytest, then I recommend the plugin [pytest-django](https://pytest-django.readthedocs.io/en/latest/)
It provides a simple way to set DJANGO\_SETTINGS\_MODULE via configuration.
See [configuring django](https://pytest-django.readthedocs.io/en/latest/configuring_django.html#pytest-ini-settings)
```
[pytest]
DJANGO_SETTINGS_MODULE = test_settings
```
|
60,693,395
|
I created a small app in Django, and runserver and admin work fine.
I wrote some tests which I can call with `python manage.py test`, and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
Edited: For this method to work, Django support must be enabled in PyCharm. It should be possible to set up an equivalent template in the Community Edition of PyCharm.
**Method with django support enabled:**
I find the most convenient way, which also lets you click on a particular test case and run it directly within PyCharm without having to set the settings every time, is the following:
->Edit configuration (Run/Debug configurations)
->Templates and select "Django Tests"
->Tick "Custom settings" and then browse to the settings you want use.
Then when you launch tests directly within pycharm it will use it as a template.
**If you test with any other supported method by pycharm**, you can pick testing framework in pycharm: [Choose testing framework](https://www.jetbrains.com/help/pycharm/choosing-your-testing-framework.html)
and then create a template for it.
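If you'd rather not rely on PyCharm run templates at all, a `conftest.py` at the project root can export the settings module before any test imports Django models. This is a minimal sketch; `myproject.settings` is a placeholder for your actual settings module, and a plugin such as pytest-django would still handle the actual Django initialization:

```python
# conftest.py at the project root; a hypothetical sketch.
# Replace "myproject.settings" with your real settings module path.
import os

# pytest imports conftest.py before any test module, so this runs
# before anything touches django.conf.settings.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
```

Because conftest.py is discovered automatically, this also works when a single test is launched from the IDE with "Run".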
|
If you use django and pytest, then I recommend the plugin [pytest-django](https://pytest-django.readthedocs.io/en/latest/)
It provides a simple way to set DJANGO\_SETTINGS\_MODULE via configuration.
See [configuring django](https://pytest-django.readthedocs.io/en/latest/configuring_django.html#pytest-ini-settings)
```
[pytest]
DJANGO_SETTINGS_MODULE = test_settings
```
|
34,116,682
|
I am trying to save an image with python that is Base64 encoded. Here the string is to large to post but here is the image
[](https://i.stack.imgur.com/JYHLV.jpg)
When received by Python the last 2 characters are `==`, but the string is not formatted, so I do this:
```
import base64
data = "data:image/png;base64," + photo_base64.replace(" ", "+")
```
And then I do this
```
imgdata = base64.b64decode(data)
filename = 'some_image.jpg' # I assume you have a way of picking unique filenames
with open(filename, 'wb') as f:
f.write(imgdata)
```
But this causes this error
```
Traceback (most recent call last):
File "/var/www/cgi-bin/save_info.py", line 83, in <module>
imgdata = base64.b64decode(data)
File "/usr/lib64/python2.7/base64.py", line 76, in b64decode
raise TypeError(msg)
TypeError: Incorrect padding
```
I also printed out the length of the string once `data:image/png;base64,` has been added and the spaces replaced with `+`; it has a length of `34354`. I have tried a bunch of different images, but for all of them, when I try to open the saved file it says the file is damaged.
What is happening and why is the file corrupt?
Thanks
**EDIT**
Here is some base64 that also failed
```
iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAADBQTFRFA6b1q Ci5/f2lt/9yu3 Y8v2cMpb1/DSJbz5i9R2NLwfLrWbw m T8I8////////SvMAbAAAABB0Uk5T////////////////////AOAjXRkAAACYSURBVHjaLI8JDgMgCAQ5BVG3//9t0XYTE2Y5BPq0IGpwtxtTP4G5IFNMnmEKuCopPKUN8VTNpEylNgmCxjZa2c1kafpHSvMkX6sWe7PTkwRX1dY7gdyMRHZdZ98CF6NZT2ecMVaL9tmzTtMYcwbP y3XeTgZkF5s1OSHwRzo1fkILgWC5R0X4BHYu7t/136wO71DbvwVYADUkQegpokSjwAAAABJRU5ErkJggg==
```
This is what I receive in my python script from the POST Request
Note I have not replaced the spaces with +'s
|
2015/12/06
|
[
"https://Stackoverflow.com/questions/34116682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3630528/"
] |
There is no need to prepend `data:image/png;base64,`. I tried the code below and it works fine.
```
import base64
data = 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAADBQTFRFA6b1q Ci5/f2lt/9yu3 Y8v2cMpb1/DSJbz5i9R2NLwfLrWbw m T8I8////////SvMAbAAAABB0Uk5T////////////////////AOAjXRkAAACYSURBVHjaLI8JDgMgCAQ5BVG3//9t0XYTE2Y5BPq0IGpwtxtTP4G5IFNMnmEKuCopPKUN8VTNpEylNgmCxjZa2c1kafpHSvMkX6sWe7PTkwRX1dY7gdyMRHZdZ98CF6NZT2ecMVaL9tmzTtMYcwbP y3XeTgZkF5s1OSHwRzo1fkILgWC5R0X4BHYu7t/136wO71DbvwVYADUkQegpokSjwAAAABJRU5ErkJggg=='.replace(' ', '+')
imgdata = base64.b64decode(data)
filename = 'some_image.jpg' # I assume you have a way of picking unique filenames
with open(filename, 'wb') as f:
f.write(imgdata)
```
|
If you prepend **data:image/png;base64,** to the data, you get an error. If your string contains it, you must remove it:
```
new_data = initial_data.replace('data:image/png;base64,', '')
```
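Putting the two answers together: the "Incorrect padding" TypeError comes from either the data-URI prefix, spaces that were originally `+` characters, or a payload whose length is not a multiple of 4. A defensive decoder (a sketch; the helper name is mine) can handle all three:

```python
import base64

def decode_data_uri(data: str) -> bytes:
    """Decode a base64 payload that may carry a data-URI prefix,
    '+' characters mangled into spaces, and missing '=' padding."""
    # Strip any "data:image/...;base64," prefix first.
    if data.startswith("data:") and "," in data:
        data = data.split(",", 1)[1]
    # Spaces in URL-encoded form data were originally '+'.
    data = data.replace(" ", "+")
    # b64decode needs the length to be a multiple of 4.
    data += "=" * (-len(data) % 4)
    return base64.b64decode(data)
```

With this, `decode_data_uri("data:image/png;base64,aGk")` returns `b'hi'` even though the prefix is present and the padding is missing.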
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
I installed Robot Framework on the Jenkins server with "sudo pip3 install robotframework", and now my Jenkins pipeline script can run my robot scripts.
|
I'm not very comfortable with the Windows environment, so let me give my two cents:
1) Try to set the PATH or PYTHONPATH to the location where your robot file is.
2) Try to run robot as a Python module. I saw that you tried it above, but take a look at the RF User Guide to see if you are doing something wrong:
<https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using-robot-and-rebot-scripts>
maybe just
```
python -m robot ....
```
is fine
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
Thanks a lot, it works for me. Just write the following in the terminal:
```sh
python -m robot "your file name"
```
In this case the file name is `TC1.robot`, so the command would be:
```sh
python -m robot TC1.robot
```
|
I'm not very comfortable with the Windows environment, so let me give my two cents:
1) Try to set the PATH or PYTHONPATH to the location where your robot file is.
2) Try to run robot as a Python module. I saw that you tried it above, but take a look at the RF User Guide to see if you are doing something wrong:
<https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using-robot-and-rebot-scripts>
maybe just
```
python -m robot ....
```
is fine
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
On Linux I was getting an error after installing Robot Framework with
```
sudo pip install robotframework
```
The command below worked for me instead:
```
sudo pip3 install robotframework
```
|
I'm not very comfortable with the Windows environment, so let me give my two cents:
1) Try to set the PATH or PYTHONPATH to the location where your robot file is.
2) Try to run robot as a Python module. I saw that you tried it above, but take a look at the RF User Guide to see if you are doing something wrong:
<https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using-robot-and-rebot-scripts>
maybe just
```
python -m robot ....
```
is fine
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
I installed Robot Framework on the Jenkins server with "sudo pip3 install robotframework", and now my Jenkins pipeline script can run my robot scripts.
|
Thanks, it worked. First I cd'd to the site-packages directory where Robot is installed and ran it with the Python -m command:
```
cd Users\babo\AppData\Roaming\Python\Python27\site-packages\robot
C:\Python27\python.exe -m robot.run -d Results C:\Users\bab\Robot_Sframe\E2EAutomation\Test_Suite\Enrollment_834.robo
```
We can close this.
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
Thanks a lot, it works for me. Just write the following in the terminal:
```sh
python -m robot "your file name"
```
In this case the file name is `TC1.robot`, so the command would be:
```sh
python -m robot TC1.robot
```
|
Thanks, it worked. First I cd'd to the site-packages directory where Robot is installed and ran it with the Python -m command:
```
cd Users\babo\AppData\Roaming\Python\Python27\site-packages\robot
C:\Python27\python.exe -m robot.run -d Results C:\Users\bab\Robot_Sframe\E2EAutomation\Test_Suite\Enrollment_834.robo
```
We can close this.
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
On Linux I was getting an error after installing Robot Framework with
```
sudo pip install robotframework
```
The command below worked for me instead:
```
sudo pip3 install robotframework
```
|
Thanks, it worked. First I cd'd to the site-packages directory where Robot is installed and ran it with the Python -m command:
```
cd Users\babo\AppData\Roaming\Python\Python27\site-packages\robot
C:\Python27\python.exe -m robot.run -d Results C:\Users\bab\Robot_Sframe\E2EAutomation\Test_Suite\Enrollment_834.robo
```
We can close this.
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
Thanks a lot, it works for me. Just write the following in the terminal:
```sh
python -m robot "your file name"
```
In this case the file name is `TC1.robot`, so the command would be:
```sh
python -m robot TC1.robot
```
|
I installed Robot Framework on the Jenkins server with "sudo pip3 install robotframework", and now my Jenkins pipeline script can run my robot scripts.
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
On Linux I was getting an error after installing Robot Framework with
```
sudo pip install robotframework
```
The command below worked for me instead:
```
sudo pip3 install robotframework
```
|
I installed Robot Framework on the Jenkins server with "sudo pip3 install robotframework", and now my Jenkins pipeline script can run my robot scripts.
|
56,838,851
|
I am getting an error when executing robot scripts through CMD (Windows):
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot libraries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is very key to integrate my scripts with Jenkins
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
Thanks a lot, it works for me. Just write the following in the terminal:
```sh
python -m robot "your file name"
```
In this case the file name is `TC1.robot`, so the command would be:
```sh
python -m robot TC1.robot
```
|
On Linux I was getting an error after installing Robot Framework with
```
sudo pip install robotframework
```
The command below worked for me instead:
```
sudo pip3 install robotframework
```
|
56,646,940
|
I load a saved h5 model and want to save the model as pb.
The model is saved during training with the `tf.keras.callbacks.ModelCheckpoint` callback function.
TF version: 2.0.0a
**edit**: same issue also with 2.0.0-beta1
My steps to save a pb:
1. I first set `K.set_learning_phase(0)`
2. then I load the model with `tf.keras.models.load_model`
3. Then, I define the `freeze_session()` function.
4. (Optionally, I compile the model.)
5. Then using the `freeze_session()` function with `tf.keras.backend.get_session`
**The error** I get, with and without compiling:
>
> AttributeError: module 'tensorflow.python.keras.api.\_v2.keras.backend'
> has no attribute 'get\_session'
>
>
>
**My Question:**
1. Does TF2 not have the `get_session` anymore?
(I know that `tf.contrib.saved_model.save_keras_model` does not exist anymore, and I also tried `tf.saved_model.save`, which did not really work)
2. Or does `get_session` only work when I actually train the model and just loading the h5 does not work
**Edit**: Also with a freshly trained session, no get\_session is available.
* If so, how would I go about to convert the h5 without training to pb? Is there a good tutorial?
Thank you for your help
---
**update**:
Since the official release of TF2.x, the graph/session concept has changed; the SavedModel API should be used.
You can use the `tf.compat.v1.disable_eager_execution()` with TF2.x and it will result in a pb file. However, I am not sure what kind of pb file type it is, as saved model composition changed from TF1 to TF2. I will keep digging.
|
2019/06/18
|
[
"https://Stackoverflow.com/questions/56646940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6510273/"
] |
I do save the model to `pb` from `h5` model:
```py
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
output_names = [out.op.name for out in model.outputs]
input_graph_def = sess.graph.as_graph_def()
for node in input_graph_def.node:
node.device = ""
graph = graph_util.remove_training_nodes(input_graph_def)
graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
tf.io.write_graph(graph_frozen, '/path/to/pb/model.pb', as_text=False)
logging.info("save pb successfully!")
```
I use TF2 to convert model like:
1. pass `keras.callbacks.ModelCheckpoint(save_weights_only=True)` to `model.fit` and save `checkpoint` while training;
2. After training, `self.model.load_weights(self.checkpoint_path)` load `checkpoint`;
3. `self.model.save(h5_path, overwrite=True, include_optimizer=False)` save as `h5`;
4. convert `h5` to `pb` just like above;
|
I'm wondering the same thing, as I'm trying to use get\_session() and set\_session() to free up GPU memory. These functions seem to be missing and [aren't in the TF2.0 Keras documentation](http://faroit.com/keras-docs/2.0.0/backend/). I imagine it has something to do with Tensorflow's switch to eager execution, as direct session access is no longer required.
|
56,646,940
|
I load a saved h5 model and want to save the model as pb.
The model is saved during training with the `tf.keras.callbacks.ModelCheckpoint` callback function.
TF version: 2.0.0a
**edit**: same issue also with 2.0.0-beta1
My steps to save a pb:
1. I first set `K.set_learning_phase(0)`
2. then I load the model with `tf.keras.models.load_model`
3. Then, I define the `freeze_session()` function.
4. (Optionally, I compile the model.)
5. Then using the `freeze_session()` function with `tf.keras.backend.get_session`
**The error** I get, with and without compiling:
>
> AttributeError: module 'tensorflow.python.keras.api.\_v2.keras.backend'
> has no attribute 'get\_session'
>
>
>
**My Question:**
1. Does TF2 not have the `get_session` anymore?
(I know that `tf.contrib.saved_model.save_keras_model` does not exist anymore, and I also tried `tf.saved_model.save`, which did not really work)
2. Or does `get_session` only work when I actually train the model and just loading the h5 does not work
**Edit**: Also with a freshly trained session, no get\_session is available.
* If so, how would I go about to convert the h5 without training to pb? Is there a good tutorial?
Thank you for your help
---
**update**:
Since the official release of TF2.x, the graph/session concept has changed; the SavedModel API should be used.
You can use the `tf.compat.v1.disable_eager_execution()` with TF2.x and it will result in a pb file. However, I am not sure what kind of pb file type it is, as saved model composition changed from TF1 to TF2. I will keep digging.
|
2019/06/18
|
[
"https://Stackoverflow.com/questions/56646940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6510273/"
] |
I do save the model to `pb` from `h5` model:
```py
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
output_names = [out.op.name for out in model.outputs]
input_graph_def = sess.graph.as_graph_def()
for node in input_graph_def.node:
node.device = ""
graph = graph_util.remove_training_nodes(input_graph_def)
graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
tf.io.write_graph(graph_frozen, '/path/to/pb/model.pb', as_text=False)
logging.info("save pb successfully!")
```
I use TF2 to convert model like:
1. pass `keras.callbacks.ModelCheckpoint(save_weights_only=True)` to `model.fit` and save `checkpoint` while training;
2. After training, `self.model.load_weights(self.checkpoint_path)` load `checkpoint`;
3. `self.model.save(h5_path, overwrite=True, include_optimizer=False)` save as `h5`;
4. convert `h5` to `pb` just like above;
|
use
```
from tensorflow.compat.v1.keras.backend import get_session
```
in keras 2 & tensorflow 2.2
then call
```
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
from tensorflow.compat.v1.keras.backend import get_session
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with get_session() as sess:
output_names = [out.op.name for out in model.outputs]
input_graph_def = sess.graph.as_graph_def()
for node in input_graph_def.node:
node.device = ""
graph = graph_util.remove_training_nodes(input_graph_def)
graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
tf.io.write_graph(graph_frozen, '/path/to/pb/model.pb', as_text=False)
logging.info("save pb successfully!")
```
|
62,655,911
|
I'm looking at using Pandas UDF's in PySpark (v3). For a number of reasons, I understand iterating and UDF's in general are bad, and I understand that the simple examples I show here can be done in PySpark using SQL functions - all of that is beside the point!
I've been following this guide: <https://databricks.com/blog/2020/05/20/new-pandas-udfs-and-python-type-hints-in-the-upcoming-release-of-apache-spark-3-0.html>
I have a simple example working from the docs:
```
import pandas as pd
from typing import Iterator, Tuple
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, pandas_udf
spark = SparkSession.builder.getOrCreate()
pdf = pd.DataFrame(([1, 2, 3], [4, 5, 6], [8, 9, 0]), columns=["x", "y", "z"])
df = spark.createDataFrame(pdf)
@pandas_udf('long')
def test1(x: pd.Series, y: pd.Series) -> pd.Series:
return x + y
df.select(test1(col("x"), col("y"))).show()
```
And this works well for performing basic arithmetic: if I want to add, multiply, etc., this is straightforward (but it is also straightforward in PySpark without UDFs).
I want to do a comparison between the values for example:
```
@pandas_udf('long')
def test2(x: pd.Series, y: pd.Series) -> pd.Series:
return x if x > y else y
df.select(test2(col("x"), col("y"))).show()
```
This will error with `ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().`. I understand that it is evaluating the series rather than the row value.
So there is an iterator example. Again this works fine for the basic arithmetic example they provide. But if I try to apply logic:
```
@pandas_udf("long")
def test3(batch_iter: Iterator[Tuple[pd.Series, pd.Series]]) -> Iterator[pd.Series]:
for x, y in batch_iter:
yield x if x > y else y
df.select(test3(col("x"), col("y"))).show()
```
I get the same ValueError as before.
So my question is how should I perform row by row comparisons like this? Is it possible in a vectorised function? And if not then what are the use cases for them?
|
2020/06/30
|
[
"https://Stackoverflow.com/questions/62655911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802510/"
] |
I figured this out. So simple after you write it down and publish the problem to the world.
All that needs to happen is to return an array and then convert to a Pandas Series:
```
@pandas_udf('long')
def test4(x: pd.Series, y: pd.Series) -> pd.Series:
return pd.Series([a if a > b else b for a, b in zip(x, y)])
df.select(test4(col("x"),col("y"))).show()
```
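The list comprehension above works, but it loops at Python level inside the UDF. The same comparison can stay fully vectorised with `numpy.maximum` or `Series.where`; a minimal sketch on plain pandas Series (outside Spark, so the idea is easy to test on its own):

```python
import numpy as np
import pandas as pd

x = pd.Series([1, 4, 8])
y = pd.Series([2, 5, 0])

# Element-wise maximum without a Python-level loop
vectorised = pd.Series(np.maximum(x, y))

# Equivalent using Series.where: keep x where x > y, otherwise take y
alternative = x.where(x > y, y)

print(vectorised.tolist())   # [2, 5, 8]
print(alternative.tolist())  # [2, 5, 8]
```

Inside a `pandas_udf`, either expression could be returned directly in place of the comprehension.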
|
I've spent the last two days looking for this answer, thank you simon\_dmorias!
I needed a slightly modified example here. I'm breaking out the single pandas\_udf into multiple components for easier management. Here is an example of what I'm using for others to reference:
```
xdf = pd.DataFrame(([1, 2, 3,'Fixed'], [4, 5, 6,'Variable'], [8, 9, 0,'Adjustable']), columns=["x", "y", "z", "Description"])
df = spark.createDataFrame(xdf)
def fnRate(x):
    return pd.Series(['Fixed' if 'Fixed' in str(v) else 'Variable' if 'Variable' in str(v) else 'Other' for v in x])
@pandas_udf('string')
def fnRateRecommended(Description: pd.Series) -> pd.Series:
varProduct = fnRate(Description)
return varProduct
# call function
df.withColumn("Recommendation", fnRateRecommended(sf.col("Description"))).show()
```
|
62,374,607
|
I have a list 2,3,4,3,5,9,4,5,6
I want to iterate over the list until I get the first highest number that is followed by a lower number, then iterate over the rest of the numbers until I get the lowest number followed by a higher number, then the next highest number that is followed by a lower number, and so on. The result I want is 2,4,3,9,4,6.
Here is my last attempt. I seem to be going round in circles.
```
#!/usr/bin/env python
value = []
high_hold = [0]
low_hold = [20]
num = [4,5,20,9,8,6,2,3,5,10,2,]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
def low():
for i in num:
if i < low_hold[-1]:
low_hold.append(i)
high()
a = high_hold[-1]
value.append(a)
high_hold = high_hold[1:]
b = len(high_hold) -1
num = num[b:]
low()
c = len(low_hold) -1
num = num[c:]
value.append(b)
print('5: ', value, '(this is what we want)')
print(num)
high_hold = [0]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
high()
a = high_hold[-1]
print(a)
print('1: ', high_hold, 'high hold')
```
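For reference, the turning points described above (the first value, each local maximum/minimum, then the last value) can be collected in a single pass. A minimal sketch, independent of the attempt above and assuming strict rises and falls (no equal neighbours):

```python
def turning_points(nums):
    # Keep the first value, every local max/min, and the last value.
    result = [nums[0]]
    for prev, cur, nxt in zip(nums, nums[1:], nums[2:]):
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            result.append(cur)
    result.append(nums[-1])
    return result

print(turning_points([2, 3, 4, 3, 5, 9, 4, 5, 6]))  # [2, 4, 3, 9, 4, 6]
```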
|
2020/06/14
|
[
"https://Stackoverflow.com/questions/62374607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13744765/"
] |
You need to add the new user ONLY after you have checked all of them. Instead you have it in the middle of your for loop, so it's going to add it over and over.
Try this:
```
var doesExistFlag = false;
for (let i = 0; i < this.users.length; i++) {
if (this.users[i].user == this.adminId) {
doesExistFlag = true;
}
}
if(!doesExistFlag)
this.users.push({user: this.adminId, permissions: this.g.admin_rights.value});
```
An even better solution would be to generate a random id based on a timestamp.
|
`id` is the name of the field ...which is never being compared to.
`adminId` probably should be `userId`, for the sake of readability.
While frankly speaking, just sort it on the server-side already.
|
62,374,607
|
I have a list 2,3,4,3,5,9,4,5,6
I want to iterate over the list until I get the first highest number that is followed by a lower number, then iterate over the rest of the numbers until I get the lowest number followed by a higher number, then the next highest number that is followed by a lower number, and so on. The result I want is 2,4,3,9,4,6.
Here is my last attempt. I seem to be going round in circles.
```
#!/usr/bin/env python
value = []
high_hold = [0]
low_hold = [20]
num = [4,5,20,9,8,6,2,3,5,10,2,]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
def low():
for i in num:
if i < low_hold[-1]:
low_hold.append(i)
high()
a = high_hold[-1]
value.append(a)
high_hold = high_hold[1:]
b = len(high_hold) -1
num = num[b:]
low()
c = len(low_hold) -1
num = num[c:]
value.append(b)
print('5: ', value, '(this is what we want)')
print(num)
high_hold = [0]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
high()
a = high_hold[-1]
print(a)
print('1: ', high_hold, 'high hold')
```
|
2020/06/14
|
[
"https://Stackoverflow.com/questions/62374607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13744765/"
] |
You could use array `some()` in the following way
```js
var input = [
{ id: 1, user: 32, permissions: 'sample', confirmed: false },
{ id: 2, user: 41, permissions: 'sample', confirmed: false },
{ id: 3, user: 12, permissions: 'sample', confirmed: false },
{ id: 4, user: 5, permissions: 'sample', confirmed: false },
{ id: 5, user: 78, permissions: 'sample', confirmed: false }
];
var found = (input, adminId) => input.some(user => user.user === adminId);
if (!found(input, 41)) { // don't push
input.push({ id: 5, user: 41, permissions: 'sample', confirmed: false })
}
if (!found(input, 62)) { // push
input.push({ id: 6, user: 62, permissions: 'sample', confirmed: false })
}
if (!found(input, 62)) { // don't push
input.push({ id: 2, user: 62, permissions: 'sample', confirmed: false })
}
if (!found(input, 17)) { // push
input.push({ id: 6, user: 17, permissions: 'sample', confirmed: false })
}
console.log(input);
```
For your use case it might be used as
```js
if(!this.users.some(user => user.user === this.adminId)) {
this.users.push({user: this.adminId, permissions: this.g.admin_rights.value});
}
```
|
You need to add the new user ONLY after you have checked all of them. Instead you have it in the middle of your for loop, so it's going to add it over and over.
Try this:
```
var doesExistFlag = false;
for (let i = 0; i < this.users.length; i++) {
if (this.users[i].user == this.adminId) {
doesExistFlag = true;
}
}
if(!doesExistFlag)
this.users.push({user: this.adminId, permissions: this.g.admin_rights.value});
```
An even better solution would be to generate a random id based on a timestamp.
|
62,374,607
|
I have a list 2,3,4,3,5,9,4,5,6
I want to iterate over the list until I get the first highest number that is followed by a lower number, then iterate over the rest of the numbers until I get the lowest number followed by a higher number, then the next highest number that is followed by a lower number, and so on. The result I want is 2,4,3,9,4,6.
Here is my last attempt. I seem to be going round in circles.
```
#!/usr/bin/env python
value = []
high_hold = [0]
low_hold = [20]
num = [4,5,20,9,8,6,2,3,5,10,2,]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
def low():
for i in num:
if i < low_hold[-1]:
low_hold.append(i)
high()
a = high_hold[-1]
value.append(a)
high_hold = high_hold[1:]
b = len(high_hold) -1
num = num[b:]
low()
c = len(low_hold) -1
num = num[c:]
value.append(b)
print('5: ', value, '(this is what we want)')
print(num)
high_hold = [0]
def high():
for i in num:
if i > high_hold[-1]:
high_hold.append(i)
high()
a = high_hold[-1]
print(a)
print('1: ', high_hold, 'high hold')
```
|
2020/06/14
|
[
"https://Stackoverflow.com/questions/62374607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13744765/"
] |
You could use array `some()` in the following way
```js
var input = [
{ id: 1, user: 32, permissions: 'sample', confirmed: false },
{ id: 2, user: 41, permissions: 'sample', confirmed: false },
{ id: 3, user: 12, permissions: 'sample', confirmed: false },
{ id: 4, user: 5, permissions: 'sample', confirmed: false },
{ id: 5, user: 78, permissions: 'sample', confirmed: false }
];
var found = (input, adminId) => input.some(user => user.user === adminId);
if (!found(input, 41)) { // don't push
input.push({ id: 5, user: 41, permissions: 'sample', confirmed: false })
}
if (!found(input, 62)) { // push
input.push({ id: 6, user: 62, permissions: 'sample', confirmed: false })
}
if (!found(input, 62)) { // don't push
input.push({ id: 2, user: 62, permissions: 'sample', confirmed: false })
}
if (!found(input, 17)) { // push
input.push({ id: 6, user: 17, permissions: 'sample', confirmed: false })
}
console.log(input);
```
For your use case it might be used as
```js
if(!this.users.some(user => user.user === this.adminId)) {
this.users.push({user: this.adminId, permissions: this.g.admin_rights.value});
}
```
|
`id` is the name of the field ...which is never being compared to.
`adminId` probably should be `userId`, for the sake of readability.
While frankly speaking, just sort it on the server-side already.
|
58,031,373
|
I have a queue of 500 processes that I want to run through a python script, I want to run every N processes in parallel.
What my python script does so far:
It runs N processes in parallel, waits for all of them to terminate, then runs the next N processes.
What I need to do:
When one of the N processes is finished, another process from the queue is automatically started, without waiting for the rest of the processes to terminate.
Note: I do not know how much time each process will take, so I can't schedule a process to run at a particular time.
Following is the code that I have.
I am currently using subprocess.Popen, but I'm not limited to its use.
```
for i in range(0, len(queue), N):
    batch = []
    for _ in range(N):
        if queue:
            batch.append(queue.pop(0))
    ps = []
    for process in batch:
        p = subprocess.Popen([process])
        ps.append(p)
    for p in ps:
        p.communicate()
```
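A common pattern for the behaviour described above (start the next process as soon as any one finishes, rather than waiting for a whole batch) is a worker pool. A minimal sketch using the standard library's `concurrent.futures`, where the placeholder commands stand in for the real 500-entry queue:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

N = 4  # maximum number of subprocesses alive at once
# Placeholder commands standing in for the real queue of 500 entries
queue = [[sys.executable, "-c", "pass"] for _ in range(10)]

def run_one(cmd):
    # Each worker thread blocks on one subprocess; the pool keeps at most
    # N subprocesses running and starts the next as soon as a slot frees.
    return subprocess.run(cmd, capture_output=True).returncode

with ThreadPoolExecutor(max_workers=N) as pool:
    return_codes = list(pool.map(run_one, queue))

print(return_codes)  # one exit code per queued command
```

Threads are fine here because each worker spends its time waiting on an external process, not doing Python work.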
|
2019/09/20
|
[
"https://Stackoverflow.com/questions/58031373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11597586/"
] |
Advanced PDF Template is not yet supported in SuiteBundle.
|
Update:
I noticed that when I create a new advanced template by customizing a standard one, I can see the new template in the bundle creation process.
If I start from a "saved search", I don't...
It is weird, isn't it?
|
60,740,554
|
I try to implement Apache Airflow with the CeleryExecutor. For the database I use Postgres, for the celery message queue I use Redis. When using LocalExecutor everything works fine, but when I set the CeleryExecutor in the airflow.cfg and want to set the Postgres database as the result\_backend
```
result_backend = postgresql+psycopg2://airflow_user:*******@localhost/airflow
```
I get this error when running the Airflow scheduler no matter which DAG I trigger:
```
[2020-03-18 14:14:13,341] {scheduler_job.py:1382} ERROR - Exception when executing execute_helper
Traceback (most recent call last):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'backend'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1380, in _execute
self._execute_helper()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1441, in _execute_helper
if not self._validate_and_run_task_instances(simple_dag_bag=simple_dag_bag):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1503, in _validate_and_run_task_instances
self.executor.heartbeat()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/base_executor.py", line 130, in heartbeat
self.trigger_tasks(open_slots)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 205, in trigger_tasks
cached_celery_backend = tasks[0].backend
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/local.py", line 146, in __getattr__
return getattr(self._get_current_object(), name)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/task.py", line 1037, in backend
return self.app.backend
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 1227, in backend
return self._get_backend()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 944, in _get_backend
self.loader)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 74, in by_url
return by_name(backend, loader), url
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 60, in by_name
backend, 'is a Python module, not a backend class.'))
celery.exceptions.ImproperlyConfigured: Unknown result backend: 'postgresql'. Did you spell that correctly? ('is a Python module, not a backend class.')
```
The exact same parameter to direct to the database works
```
sql_alchemy_conn = postgresql+psycopg2://airflow_user:*******@localhost/airflow
```
Setting Redis as the celery result\_backend works, but I read it is not the recommended way.
```
result_backend = redis://localhost:6379/0
```
Does anyone see what I am doing wrong?
|
2020/03/18
|
[
"https://Stackoverflow.com/questions/60740554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4296244/"
] |
You need to add the `db+` prefix to the database connection string:
```py
f"db+postgresql+psycopg2://{user}:{password}@{host}/{database}"
```
This is also mentioned in the docs: <https://docs.celeryproject.org/en/stable/userguide/configuration.html#database-url-examples>
|
You need to add the `db+` prefix to the database connection string:
```
result_backend = db+postgresql://airflow_user:*******@localhost/airflow
```
|
5,556,360
|
I'm having a problem getting matplotlib to work in Ubuntu 10.10.
First I installed matplotlib using apt-get, and later found that the version is 0.99 and some examples on the official site just won't work. Then I downloaded the 1.0.1 version and installed it without uninstalling the 0.99 version. To make the situation more specific, here is the configuration:
```
BUILDING MATPLOTLIB
matplotlib: 1.0.1
python: 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC
4.4.5]
platform: linux2
REQUIRED DEPENDENCIES
numpy: 1.6.0b1
freetype2: 12.2.6
OPTIONAL BACKEND DEPENDENCIES
libpng: 1.2.44
Tkinter: no
* Using default library and include directories for
* Tcl and Tk because a Tk window failed to open.
* You may need to define DISPLAY for Tk to work so
* that setup can determine where your libraries are
* located. Tkinter present, but header files are not
* found. You may need to install development
* packages.
wxPython: no
* wxPython not found
pkg-config: looking for pygtk-2.0 gtk+-2.0
* Package pygtk-2.0 was not found in the pkg-config
* search path. Perhaps you should add the directory
* containing `pygtk-2.0.pc' to the PKG_CONFIG_PATH
* environment variable No package 'pygtk-2.0' found
* Package gtk+-2.0 was not found in the pkg-config
* search path. Perhaps you should add the directory
* containing `gtk+-2.0.pc' to the PKG_CONFIG_PATH
* environment variable No package 'gtk+-2.0' found
* You may need to install 'dev' package(s) to
* provide header files.
Gtk+: no
* Could not find Gtk+ headers in any of
* '/usr/local/include', '/usr/include', '.'
Mac OS X native: no
Qt: no
Qt4: no
Cairo: 1.8.8
OPTIONAL DATE/TIMEZONE DEPENDENCIES
datetime: present, version unknown
dateutil: 1.4.1
pytz: 2010b
OPTIONAL USETEX DEPENDENCIES
dvipng: no
ghostscript: 8.71
latex: no
pdftops: 0.14.3
[Edit setup.cfg to suppress the above messages]
```
Now I can import matplotlib, but once I run the example code it just terminates and I get no results. I tried to 'clean install' several times, which means I deleted all the files, including .matplotlib and the matplotlib directory under dist-packages, but I still can't get things done.
What makes it weirder is that after I reinstall the 0.99 version, it works pretty well.
Any ideas?
|
2011/04/05
|
[
"https://Stackoverflow.com/questions/5556360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/693500/"
] |
Ben Gamari has [packaged](https://launchpad.net/~bgamari/+archive/matplotlib-unofficial) matplotlib 1.0 for Ubuntu.
|
Try installing it with `pip`:
```
sudo apt-get install python-pip
sudo pip install matplotlib
```
I just tested this and it should install matplotlib 1.0.1.
|
5,556,360
|
I'm having a problem getting matplotlib to work in Ubuntu 10.10.
First I installed matplotlib using apt-get, and later found that the version is 0.99 and some examples on the official site just won't work. Then I downloaded the 1.0.1 version and installed it without uninstalling the 0.99 version. To make the situation more specific, here is the configuration:
```
BUILDING MATPLOTLIB
matplotlib: 1.0.1
python: 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC
4.4.5]
platform: linux2
REQUIRED DEPENDENCIES
numpy: 1.6.0b1
freetype2: 12.2.6
OPTIONAL BACKEND DEPENDENCIES
libpng: 1.2.44
Tkinter: no
* Using default library and include directories for
* Tcl and Tk because a Tk window failed to open.
* You may need to define DISPLAY for Tk to work so
* that setup can determine where your libraries are
* located. Tkinter present, but header files are not
* found. You may need to install development
* packages.
wxPython: no
* wxPython not found
pkg-config: looking for pygtk-2.0 gtk+-2.0
* Package pygtk-2.0 was not found in the pkg-config
* search path. Perhaps you should add the directory
* containing `pygtk-2.0.pc' to the PKG_CONFIG_PATH
* environment variable No package 'pygtk-2.0' found
* Package gtk+-2.0 was not found in the pkg-config
* search path. Perhaps you should add the directory
* containing `gtk+-2.0.pc' to the PKG_CONFIG_PATH
* environment variable No package 'gtk+-2.0' found
* You may need to install 'dev' package(s) to
* provide header files.
Gtk+: no
* Could not find Gtk+ headers in any of
* '/usr/local/include', '/usr/include', '.'
Mac OS X native: no
Qt: no
Qt4: no
Cairo: 1.8.8
OPTIONAL DATE/TIMEZONE DEPENDENCIES
datetime: present, version unknown
dateutil: 1.4.1
pytz: 2010b
OPTIONAL USETEX DEPENDENCIES
dvipng: no
ghostscript: 8.71
latex: no
pdftops: 0.14.3
[Edit setup.cfg to suppress the above messages]
```
Now I can import matplotlib, but once I run the example code it just terminates and I get no results. I tried to 'clean install' several times, which means I deleted all the files, including .matplotlib and the matplotlib directory under dist-packages, but I still can't get things done.
What makes it weirder is that after I reinstall the 0.99 version, it works pretty well.
Any ideas?
|
2011/04/05
|
[
"https://Stackoverflow.com/questions/5556360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/693500/"
] |
Ben Gamari has [packaged](https://launchpad.net/~bgamari/+archive/matplotlib-unofficial) matplotlib 1.0 for Ubuntu.
|
I was having the same issue on Ubuntu 12.04. I solved it by installing **python-gtk2-dev** and re-installing **matplotlib**:
```
sudo apt-get install python-gtk2-dev
sudo pip install --upgrade matplotlib
```
The message about dependencies changed to:
```
Gtk+: gtk+: 2.24.10, glib: 2.32.3, pygtk: 2.24.0,
pygobject: 2.28.6
```
|
45,803,713
|
Presently we have a big-data cluster built using Cloudera-Virtual machines. By default the Python version on the VM is 2.7.
For one of my programs I need Python 3.6. My team is very skeptical about two installations and afraid of breaking the existing cluster/VM. I was planning to follow this article and install 2 versions <https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4>
Is there a way I can package Python 3.6 in my project and set the Python home path to my project folder, so that no installation needs to be done on the existing virtual machine?
Since we have to download Python and build from source for the Unix version, I want to skip this part on the VM and instead ship a folder which has Python 3.6.
|
2017/08/21
|
[
"https://Stackoverflow.com/questions/45803713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864598/"
] |
It seems that [`miniconda`](https://conda.io/miniconda.html) is what you need.
Using it you can manage multiple Python environments with different versions of Python.
**to install miniconda3 just run:**
-----------------------------------
```
# this will download & install miniconda3 on your home dir
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
```
**then, create new python3.6 env:**
-----------------------------------
```
conda create -y -n myproject 'python>3.6'
```
**now, enter the new python3.6 env**
------------------------------------
```
source activate myproject
python3
```
`miniconda` can also install Python packages, including pip packages and compiled packages. You can also copy envs from one machine to another. I encourage you to take a deeper look into it.
|
ShmulikA's suggestion is pretty good.
Here I'd like to add another one - I use Python 2.7.x, but for a few prototypes I had to go with Python 3.x. For this I used the **`pyenv`** utility.
Once installed, all you have to do is:
```
pyenv install 3.x.x
```
You can list all the available Python versions:
```
pyenv versions
```
To use the specific version, while at the project root, execute the following:
```
pyenv local 3.x.x
```
It'll create a file .python-version at the project root, having the version as its content:
```
[nahmed@localhost ~]$ cat some-project/.python-version
3.5.2
```
Example:
```
[nahmed@localhost ~]$ pyenv versions
* system (set by /home/nahmed/.pyenv/version)
3.5.2
3.5.2/envs/venv_scrapy
venv_scrapy
[nahmed@localhost ~]$ pyenv local 3.5.2
[nahmed@localhost ~]$ pyenv versions
system
* 3.5.2 (set by /home/nahmed/.python-version)
3.5.2/envs/venv_scrapy
venv_scrapy
```
---
I found it very simple to use.
Here's a [post](http://devopspy.com/python/pyenv-setup/) regarding the installation and basic usage (blog post by me).
---
For the part:
>
> Since we have to download python and build source for the Unix
> version, I want to skip this part on VM, and instead ship the folder
> which has Python 3.6
>
>
>
You might look into ways to embed Python interpreter with your Python application:
>
> And for both Windows and Linux, there's [**bbfreeze**](https://pypi.python.org/pypi/bbfreeze/) or also [**pyinstaller**](http://www.pyinstaller.org/)
>
>
>
from - [SOAnswer](https://stackoverflow.com/questions/2441172/embed-python-interpreter-in-a-python-application/2441184#2441184).
|
66,588,659
|
I have a variable that saves the user's input. If the user inputs a list, e.g. `["oranges","apples","pears"]`, Python seems to take this as a string and prints every character instead of every word, which is what the code would print if `fruit` were actually a list. How do I get the code to do this? Here is what I've tried...
```
fruit = input("What is your favourite fruit?")
fruit = list(fruit)
for i in fruit:
print(i)
```
|
2021/03/11
|
[
"https://Stackoverflow.com/questions/66588659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Python takes input as one big string, so instead of being a list it is just a string that looks like this:
```py
'["oranges","apples","pears"]'
```
Turning this into a list with `list()` will just look like:
```py
['[', '"', 'o', 'r', 'a', 'n', 'g', 'e', 's', '"', ',', '"', 'a', 'p', 'p', 'l', 'e', 's', '"', ',', '"', 'p', 'e', 'a', 'r', 's', '"', ']']
```
Instead, try something like this, which asks for favourite fruits until you enter an empty line:
```
Fruits = []
while True:
temp = input()
if temp == "":
break
else:
Fruits.append(temp)
```
and then output the values
```
for x in Fruits:
print(x)
```
|
You will have to split the input string on commas:
```
fruit = input("What is your favourite fruit?")
fruit = fruit.split(",")
for i in fruit:
print(i)
```
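If the input really is a Python-style list literal such as `["oranges","apples","pears"]` (quotes and brackets included), `ast.literal_eval` from the standard library parses it safely into a real list; a sketch, noting that it raises `ValueError` on anything that is not a literal:

```python
import ast

raw = '["oranges","apples","pears"]'  # what input() would return
fruit = ast.literal_eval(raw)         # -> a real Python list

for f in fruit:
    print(f)
```

`split(",")` is simpler for plain comma-separated words; `literal_eval` is for input that uses Python literal syntax.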
|
62,616,736
|
I've been using the Fermipy conda environment on Python 2.7.14 64-bit on macOS Catalina 10.15.5 and overnight received the error "r.start is not a function" when trying to connect to the Jupyter server through VS Code (if I try in Jupyter Notebook/Lab, the server instantly dies). I had a bunch of clutter on my system, so I ended up formatting it and reinstalling all the needed dependencies (such as Conda through Homebrew, Fermitools through Conda, and Fermipy through the install script on their site), but I still get the same error, although I was previously running Python scripts just fine. It gives me no other error or output; if it did, I would attach it here. [This is the error I get.](https://i.stack.imgur.com/mW02y.png)
Edit: I get the same error using any version of Python 2.7.xx, but not with Python 3.7.xx.
|
2020/06/27
|
[
"https://Stackoverflow.com/questions/62616736",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13820618/"
] |
As answered here, <https://github.com/microsoft/vscode-python/issues/12355#issuecomment-652515770>
VSCode changed how it launches jupyter kernels, and the new method is incompatible with python 2.7.
Add this line to your VSCode settings.json file and restart.
```
"python.experiments.optOutFrom": ["LocalZMQKernel - experiment"]
```
|
I got the same message (r.start is not a function). I had an old, uninstalled version of Anaconda on the computer which had left behind a folder containing its Python version. Jupyter was supposed to be running from a new venv after setting both the Python and Jupyter paths in VS Code. I fully deleted the remaining files from the old Anaconda install; the message went away and the notebook ran fine. Maybe try getting rid of all the Conda stuff and pip-installing Jupyter and anything else you need.
|
72,285,267
|
I have the below dictionary defined with the IP addresses of each application for the respective regions. Region is a user-input variable; based on the input I need to process the IPs in the rest of my script.
```
app_list=["puppet","dns","ntp"]
dns={'apac':["172.118.162.93","172.118.144.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]}
ntp={'asia':["172.118.162.93","172.118.148.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]}
puppet={'asia':["172.118.162.2251","1932.1625.254.2493"],'euro':["172.118.76.21","1932.1625.254.2493"],'cana':["172.118.76.21","193.1625.254.249"]}
```
***Code Tried***
```
region=raw_input("entee the region:")
for appl in app_list:
for ip in appl[region]:
<<<rest of operations with IP in script>>
```
When I tried the above code I got the error mentioned below. I tried to convert the str object to a dict using the json and ast modules but still did not succeed.
**Error received**
```
TypeError: string indices must be integers, not str
```
As I am new to Python, I am unsure how to get the IP list based on the region.
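For reference, the `TypeError` comes from indexing the string `appl` itself (e.g. `"dns"[region]`) rather than the dictionary that name refers to. One way around this is to map the application names to their dictionaries; a minimal sketch with shortened placeholder data:

```python
# Shortened placeholder versions of the dicts in the question
dns = {"euro": ["172.118.76.93"], "cana": ["172.118.48.93"]}
ntp = {"euro": ["172.118.204.93"], "cana": ["172.118.172.93"]}

# Map each application name to its dictionary, so the name can be
# looked up instead of indexing the string itself.
apps = {"dns": dns, "ntp": ntp}

region = "euro"  # stands in for raw_input()/input()
for name, table in apps.items():
    for ip in table.get(region, []):  # .get avoids KeyError for unknown regions
        print(name, ip)
```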
|
2022/05/18
|
[
"https://Stackoverflow.com/questions/72285267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3619226/"
] |
```
# use a nested dict comprehension
# use enumerate for the index of the list items and add it to the key using f-string
{key: {f"{k}.{i}": e for k, v in val.items() for i, e in enumerate(v)} for key, val in my_dict.items()}
{'Ka': {'Ka.0': '0.80', 'Ka.1': '0.1',
'Ba.0': '0.50', 'Ba.1': '1.1',
'FC.0': '0.78', 'FC.1': '0.0',
'AA.0': '0.66', 'AA.1': '8.1'},
'AL': {'AR.0': '2.71', 'AR.1': '7.3',
'KK.0': '10.00', 'KK.1': '90.0'}}
```
|
```
from collections import defaultdict
new = defaultdict(dict)
for k, values in d.items():
for sub_key, values in values.items():
for value in values:
existing_key_count = sum(1 for existing_key in new[k].keys() if existing_key.startswith(sub_key))
new_key = f"{sub_key}.{existing_key_count}"
new[k][new_key] = value
```
>
> {'Ka': {'Ka.0': '0.80',
> 'Ka.1': '0.1',
> 'Ba.0': '0.50',
> 'Ba.1': '1.1',
> 'FC.0': '0.78',
> 'FC.1': '0.0',
> 'AA.0': '0.66',
> 'AA.1': '8.1'},
> 'AL': {'AR.0': '2.71', 'AR.1': '7.3', 'KK.0': '10.00', 'KK.1': '90.0'}}
>
>
>
|
72,285,267
|
I have the below dictionary defined with the IP addresses of each application for the respective regions. Region is a user-input variable; based on the input I need to process the IPs in the rest of my script.
```
app_list=["puppet","dns","ntp"]
dns={'apac':["172.118.162.93","172.118.144.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]}
ntp={'asia':["172.118.162.93","172.118.148.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]}
puppet={'asia':["172.118.162.2251","1932.1625.254.2493"],'euro':["172.118.76.21","1932.1625.254.2493"],'cana':["172.118.76.21","193.1625.254.249"]}
```
***Code Tried***
```
region=raw_input("entee the region:")
for appl in app_list:
for ip in appl[region]:
<<<rest of operations with IP in script>>
```
When I tried the above code I got the error mentioned below. I tried to convert the str object to a dict using the json and ast modules but still did not succeed.
**Error received**
```
TypeError: string indices must be integers, not str
```
As I am new to Python, I am unsure how to get the IP list based on the region.
|
2022/05/18
|
[
"https://Stackoverflow.com/questions/72285267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3619226/"
] |
```
sample_dict = {'Ka': {'Ka': ['0.80', '0.1'],
'Ba': ['0.50', '1.1'],
'FC': ['0.78', '0.0'],
'AA': ['0.66', '8.1']},
'AL': {'AR': ['2.71', '7.3'], 'KK': ['10.00', '90.0']}}
for item, value in sample_dict.items():
print(item, value, type(value))
if type(value)==dict:
for item2 in list(value):
if type(value[item2])==list:
value[item2 + ".0"] = value[item2][0]
value[item2 + ".1"] = value[item2][1]
del value[item2]
print(sample_dict)
```
and the result is:
```
{'Ka': {'Ka.0': '0.80', 'Ka.1': '0.1',
'Ba.0': '0.50', 'Ba.1': '1.1',
'FC.0': '0.78', 'FC.1': '0.0',
'AA.0': '0.66', 'AA.1': '8.1'},
'AL': {'AR.0': '2.71', 'AR.1': '7.3',
'KK.0': '10.00', 'KK.1': '90.0'}}
```
|
```
from collections import defaultdict
new = defaultdict(dict)
for k, values in d.items():
for sub_key, values in values.items():
for value in values:
existing_key_count = sum(1 for existing_key in new[k].keys() if existing_key.startswith(sub_key))
new_key = f"{sub_key}.{existing_key_count}"
new[k][new_key] = value
```
>
> {'Ka': {'Ka.0': '0.80',
> 'Ka.1': '0.1',
> 'Ba.0': '0.50',
> 'Ba.1': '1.1',
> 'FC.0': '0.78',
> 'FC.1': '0.0',
> 'AA.0': '0.66',
> 'AA.1': '8.1'},
> 'AL': {'AR.0': '2.71', 'AR.1': '7.3', 'KK.0': '10.00', 'KK.1': '90.0'}}
>
>
>
|
45,732,286
|
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation.
The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors.
The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance.
I built a non-convolutional DNN which showed good initial learning behavior which is very similar to the code I will post here. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger.
Now that I am converting to a convolutional model, I can't seem to get the shapes to match. Here are the working portions of my code:
```
import tensorflow as tf
import tflearn as tfl
from protein import ProteinDatabase # don't worry about its details
def backbone_angle_distance(predict, actual):
with tf.name_scope("BackboneAngleDistance"):
actual = tfl.reshape(actual, [-1,3,2])
# Supply the -1 argument for axis that TFLearn can't pass
loss = tf.losses.cosine_distance(predict, actual, -1,
reduction=tf.losses.Reduction.MEAN)
return loss
# Training data
database = ProteinDatabase("./data")
inp, tgt = database.training_arrays()
# DNN model, convolution only in topmost layer for now
net = tfl.input_data(shape=[None, None, 4])
net = tfl.conv_1d(net, 24, 3)
net = tfl.conv_1d(net, 12, 1)
net = tfl.conv_1d(net, 6, 1)
net = tfl.reshape(net, [-1,3,2])
net = tf.nn.l2_normalize(net, dim=2)
net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \
loss=backbone_angle_distance)
model = tfl.DNN(net)
# Generate a prediction. Compare shapes for compatibility.
out = model.predict(inp)
print("\ninp : {}, shape = {}".format(type(inp), inp.shape))
print("out : {}, shape = {}".format(type(out), out.shape))
print("tgt : {}, shape = {}".format(type(tgt), tgt.shape))
print("tgt shape, if flattened by one dimension = {}\n".\
format(tgt.reshape([-1,3,2]).shape))
```
The output at this point is:
```
inp : <class 'numpy.ndarray'>, shape = (25, 543, 4)
out : <class 'numpy.ndarray'>, shape = (13575, 3, 2)
tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2)
tgt shape, if flattened by one dimension = (13575, 3, 2)
```
So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function.
If I add a few lines to attempt model fitting as follows:
```
e, b = 1, 5
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
```
I get the following extra output and error:
```
---------------------------------
Run id: EEG6JW
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 20
Validation samples: 5
--
--
Traceback (most recent call last):
File "exp54.py", line 252, in <module>
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)'
```
Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance().
Any advice is appreciated, thanks!
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45732286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9376487/"
] |
There is a newer plugin (available for about a year now),
called [chartjs-plugin-piechart-outlabels](https://www.npmjs.com/package/chartjs-plugin-piechart-outlabels)
Just import the source
`<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-piechart-outlabels"></script>`
and use it with the `outlabeledPie` chart type:
```
var randomScalingFactor = function() {
return Math.round(Math.random() * 100);
};
var ctx = document.getElementById("chart-area").getContext("2d");
var myDoughnut = new Chart(ctx, {
type: 'outlabeledPie',
data: {
labels: ["January", "February", "March", "April", "May"],
...
plugins: {
legend: false,
outlabels: {
text: '%l %p',
color: 'white',
stretch: 45,
font: {
resizable: true,
minSize: 12,
maxSize: 18
}
}
}
})
```
|
The real problem is that the labels overlap when the slices are small. You can use [PieceLabel.js](https://emn178.github.io/Chart.PieceLabel.js/samples/demo/), which solves the overlap issue by hiding labels. Since you mentioned that you **cannot hide labels**, use legends instead, which will display the names of all slices.
Or, if you want the exact behavior, you can go with [Highcharts](https://www.highcharts.com/demo/pie-basic), but it requires a licence for commercial use.
```js
var randomScalingFactor = function() {
return Math.round(Math.random() * 100);
};
var ctx = document.getElementById("chart-area").getContext("2d");
var myDoughnut = new Chart(ctx, {
type: 'pie',
data: {
labels: ["January", "February", "March", "April", "May"],
datasets: [{
data: [
250,
30,
5,
4,
2,
],
backgroundColor: ['#ff3d67', '#ff9f40', '#ffcd56', '#4bc0c0', '#999999'],
borderColor: 'white',
borderWidth: 5,
}]
},
showDatapoints: true,
options: {
tooltips: {
enabled: false
},
pieceLabel: {
render: 'label',
arc: true,
fontColor: '#000',
position: 'outside'
},
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Testing',
fontSize: 20
},
animation: {
animateScale: true,
animateRotate: true
}
}
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart.min.js"></script>
<script src="https://cdn.rawgit.com/emn178/Chart.PieceLabel.js/master/build/Chart.PieceLabel.min.js"></script>
<canvas id="chart-area"></canvas>
```
[Fiddle](https://jsfiddle.net/deep3015/zadn9j1j/1/) demo
|
45,732,286
|
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation.
The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors.
The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance.
I built a non-convolutional DNN, very similar to the code I will post here, which showed good initial learning behavior. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger.
Now that I am converting to a convolutional model, I can't seem to get the shapes to match. Here are the working portions of my code:
```
import tensorflow as tf
import tflearn as tfl
from protein import ProteinDatabase # don't worry about its details
def backbone_angle_distance(predict, actual):
with tf.name_scope("BackboneAngleDistance"):
actual = tfl.reshape(actual, [-1,3,2])
# Supply the -1 argument for axis that TFLearn can't pass
loss = tf.losses.cosine_distance(predict, actual, -1,
reduction=tf.losses.Reduction.MEAN)
return loss
# Training data
database = ProteinDatabase("./data")
inp, tgt = database.training_arrays()
# DNN model, convolution only in topmost layer for now
net = tfl.input_data(shape=[None, None, 4])
net = tfl.conv_1d(net, 24, 3)
net = tfl.conv_1d(net, 12, 1)
net = tfl.conv_1d(net, 6, 1)
net = tfl.reshape(net, [-1,3,2])
net = tf.nn.l2_normalize(net, dim=2)
net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \
loss=backbone_angle_distance)
model = tfl.DNN(net)
# Generate a prediction. Compare shapes for compatibility.
out = model.predict(inp)
print("\ninp : {}, shape = {}".format(type(inp), inp.shape))
print("out : {}, shape = {}".format(type(out), out.shape))
print("tgt : {}, shape = {}".format(type(tgt), tgt.shape))
print("tgt shape, if flattened by one dimension = {}\n".\
format(tgt.reshape([-1,3,2]).shape))
```
The output at this point is:
```
inp : <class 'numpy.ndarray'>, shape = (25, 543, 4)
out : <class 'numpy.ndarray'>, shape = (13575, 3, 2)
tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2)
tgt shape, if flattened by one dimension = (13575, 3, 2)
```
So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function.
If I add a few lines to attempt model fitting as follows:
```
e, b = 1, 5
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
```
I get the following extra output and error:
```
---------------------------------
Run id: EEG6JW
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 20
Validation samples: 5
--
--
Traceback (most recent call last):
File "exp54.py", line 252, in <module>
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)'
```
Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance().
Any advice is appreciated, thanks!
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45732286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9376487/"
] |
The real problem is that the labels overlap when the slices are small. You can use [PieceLabel.js](https://emn178.github.io/Chart.PieceLabel.js/samples/demo/), which solves the overlap issue by hiding labels. Since you mentioned that you **cannot hide labels**, use legends instead, which will display the names of all slices.
Or, if you want the exact behavior, you can go with [Highcharts](https://www.highcharts.com/demo/pie-basic), but it requires a licence for commercial use.
```js
var randomScalingFactor = function() {
return Math.round(Math.random() * 100);
};
var ctx = document.getElementById("chart-area").getContext("2d");
var myDoughnut = new Chart(ctx, {
type: 'pie',
data: {
labels: ["January", "February", "March", "April", "May"],
datasets: [{
data: [
250,
30,
5,
4,
2,
],
backgroundColor: ['#ff3d67', '#ff9f40', '#ffcd56', '#4bc0c0', '#999999'],
borderColor: 'white',
borderWidth: 5,
}]
},
showDatapoints: true,
options: {
tooltips: {
enabled: false
},
pieceLabel: {
render: 'label',
arc: true,
fontColor: '#000',
position: 'outside'
},
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Testing',
fontSize: 20
},
animation: {
animateScale: true,
animateRotate: true
}
}
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart.min.js"></script>
<script src="https://cdn.rawgit.com/emn178/Chart.PieceLabel.js/master/build/Chart.PieceLabel.min.js"></script>
<canvas id="chart-area"></canvas>
```
[Fiddle](https://jsfiddle.net/deep3015/zadn9j1j/1/) demo
|
I resolved it by adding this script to a global file:
```
if(window.Chartist && Chartist.Pie && !Chartist.Pie.prototype.resolveOverlap) {
Chartist.Pie.prototype.resolveOverlap = function() {
this.on('draw', function(ctx) {
if(ctx.type == 'label') {
let gText = $(ctx.group._node).find('text');
let ctxHeight = ctx.element.height();
gText.each(function(index, ele){
let item = $(ele);
let diff = ctx.element.attr('dy') - item.attr('dy');
if(diff == 0) {
return false;
}
if(Math.abs(diff) < ctxHeight) {
ctx.element.attr({dy: ctx.element.attr('dy') - ctxHeight});
}
});
}
});
};
}
```
and then:
```
new Chartist.Pie(element, data, options).resolveOverlap();
```
|
45,732,286
|
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation.
The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors.
The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance.
I built a non-convolutional DNN, very similar to the code I will post here, which showed good initial learning behavior. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger.
Now that I am converting to a convolutional model, I can't seem to get the shapes to match. Here are the working portions of my code:
```
import tensorflow as tf
import tflearn as tfl
from protein import ProteinDatabase # don't worry about its details
def backbone_angle_distance(predict, actual):
with tf.name_scope("BackboneAngleDistance"):
actual = tfl.reshape(actual, [-1,3,2])
# Supply the -1 argument for axis that TFLearn can't pass
loss = tf.losses.cosine_distance(predict, actual, -1,
reduction=tf.losses.Reduction.MEAN)
return loss
# Training data
database = ProteinDatabase("./data")
inp, tgt = database.training_arrays()
# DNN model, convolution only in topmost layer for now
net = tfl.input_data(shape=[None, None, 4])
net = tfl.conv_1d(net, 24, 3)
net = tfl.conv_1d(net, 12, 1)
net = tfl.conv_1d(net, 6, 1)
net = tfl.reshape(net, [-1,3,2])
net = tf.nn.l2_normalize(net, dim=2)
net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \
loss=backbone_angle_distance)
model = tfl.DNN(net)
# Generate a prediction. Compare shapes for compatibility.
out = model.predict(inp)
print("\ninp : {}, shape = {}".format(type(inp), inp.shape))
print("out : {}, shape = {}".format(type(out), out.shape))
print("tgt : {}, shape = {}".format(type(tgt), tgt.shape))
print("tgt shape, if flattened by one dimension = {}\n".\
format(tgt.reshape([-1,3,2]).shape))
```
The output at this point is:
```
inp : <class 'numpy.ndarray'>, shape = (25, 543, 4)
out : <class 'numpy.ndarray'>, shape = (13575, 3, 2)
tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2)
tgt shape, if flattened by one dimension = (13575, 3, 2)
```
So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function.
If I add a few lines to attempt model fitting as follows:
```
e, b = 1, 5
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
```
I get the following extra output and error:
```
---------------------------------
Run id: EEG6JW
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 20
Validation samples: 5
--
--
Traceback (most recent call last):
File "exp54.py", line 252, in <module>
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)'
```
Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance().
Any advice is appreciated, thanks!
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45732286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9376487/"
] |
There is a newer plugin (available for about a year now),
called [chartjs-plugin-piechart-outlabels](https://www.npmjs.com/package/chartjs-plugin-piechart-outlabels)
Just import the source
`<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-piechart-outlabels"></script>`
and use it with the `outlabeledPie` chart type:
```
var randomScalingFactor = function() {
return Math.round(Math.random() * 100);
};
var ctx = document.getElementById("chart-area").getContext("2d");
var myDoughnut = new Chart(ctx, {
type: 'outlabeledPie',
data: {
labels: ["January", "February", "March", "April", "May"],
...
plugins: {
legend: false,
outlabels: {
text: '%l %p',
color: 'white',
stretch: 45,
font: {
resizable: true,
minSize: 12,
maxSize: 18
}
}
}
})
```
|
I resolved it by adding this script to a global file:
```
if(window.Chartist && Chartist.Pie && !Chartist.Pie.prototype.resolveOverlap) {
Chartist.Pie.prototype.resolveOverlap = function() {
this.on('draw', function(ctx) {
if(ctx.type == 'label') {
let gText = $(ctx.group._node).find('text');
let ctxHeight = ctx.element.height();
gText.each(function(index, ele){
let item = $(ele);
let diff = ctx.element.attr('dy') - item.attr('dy');
if(diff == 0) {
return false;
}
if(Math.abs(diff) < ctxHeight) {
ctx.element.attr({dy: ctx.element.attr('dy') - ctxHeight});
}
});
}
});
};
}
```
and then:
```
new Chartist.Pie(element, data, options).resolveOverlap();
```
|
45,732,286
|
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation.
The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors.
The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance.
I built a non-convolutional DNN, very similar to the code I will post here, which showed good initial learning behavior. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger.
Now that I am converting to a convolutional model, I can't seem to get the shapes to match. Here are the working portions of my code:
```
import tensorflow as tf
import tflearn as tfl
from protein import ProteinDatabase # don't worry about its details
def backbone_angle_distance(predict, actual):
with tf.name_scope("BackboneAngleDistance"):
actual = tfl.reshape(actual, [-1,3,2])
# Supply the -1 argument for axis that TFLearn can't pass
loss = tf.losses.cosine_distance(predict, actual, -1,
reduction=tf.losses.Reduction.MEAN)
return loss
# Training data
database = ProteinDatabase("./data")
inp, tgt = database.training_arrays()
# DNN model, convolution only in topmost layer for now
net = tfl.input_data(shape=[None, None, 4])
net = tfl.conv_1d(net, 24, 3)
net = tfl.conv_1d(net, 12, 1)
net = tfl.conv_1d(net, 6, 1)
net = tfl.reshape(net, [-1,3,2])
net = tf.nn.l2_normalize(net, dim=2)
net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \
loss=backbone_angle_distance)
model = tfl.DNN(net)
# Generate a prediction. Compare shapes for compatibility.
out = model.predict(inp)
print("\ninp : {}, shape = {}".format(type(inp), inp.shape))
print("out : {}, shape = {}".format(type(out), out.shape))
print("tgt : {}, shape = {}".format(type(tgt), tgt.shape))
print("tgt shape, if flattened by one dimension = {}\n".\
format(tgt.reshape([-1,3,2]).shape))
```
The output at this point is:
```
inp : <class 'numpy.ndarray'>, shape = (25, 543, 4)
out : <class 'numpy.ndarray'>, shape = (13575, 3, 2)
tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2)
tgt shape, if flattened by one dimension = (13575, 3, 2)
```
So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function.
If I add a few lines to attempt model fitting as follows:
```
e, b = 1, 5
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
```
I get the following extra output and error:
```
---------------------------------
Run id: EEG6JW
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 20
Validation samples: 5
--
--
Traceback (most recent call last):
File "exp54.py", line 252, in <module>
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)'
```
Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance().
Any advice is appreciated, thanks!
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45732286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9376487/"
] |
There is a newer plugin (available for about a year now),
called [chartjs-plugin-piechart-outlabels](https://www.npmjs.com/package/chartjs-plugin-piechart-outlabels)
Just import the source
`<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-piechart-outlabels"></script>`
and use it with the `outlabeledPie` chart type:
```
var randomScalingFactor = function() {
return Math.round(Math.random() * 100);
};
var ctx = document.getElementById("chart-area").getContext("2d");
var myDoughnut = new Chart(ctx, {
type: 'outlabeledPie',
data: {
labels: ["January", "February", "March", "April", "May"],
...
plugins: {
legend: false,
outlabels: {
text: '%l %p',
color: 'white',
stretch: 45,
font: {
resizable: true,
minSize: 12,
maxSize: 18
}
}
}
})
```
|
I couldn't find an exact plugin, so I made one.
```
const data = {
labels: ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"],
datasets: [
{
data: [1, 2, 3, 4, 5, 6],
backgroundColor: [
"#316065",
"#1A7F89",
"#2D9CA7",
"#2D86A7",
"#1167A7",
"#142440",
],
borderColor: [
"#316065",
"#1A7F89",
"#2D9CA7",
"#2D86A7",
"#1167A7",
"#142440",
],
},
],
};
// pieLabelsLine plugin
const pieLabelsLine = {
id: "pieLabelsLine",
afterDraw(chart) {
const {
ctx,
chartArea: { width, height },
} = chart;
const cx = chart._metasets[0].data[0].x;
const cy = chart._metasets[0].data[0].y;
const sum = chart.data.datasets[0].data.reduce((a, b) => a + b, 0);
chart.data.datasets.forEach((dataset, i) => {
chart.getDatasetMeta(i).data.forEach((datapoint, index) => {
const { x: a, y: b } = datapoint.tooltipPosition();
const x = 2 * a - cx;
const y = 2 * b - cy;
// draw line
const halfwidth = width / 2;
const halfheight = height / 2;
const xLine = x >= halfwidth ? x + 20 : x - 20;
const yLine = y >= halfheight ? y + 20 : y - 20;
const extraLine = x >= halfwidth ? 10 : -10;
ctx.beginPath();
ctx.moveTo(x, y);
ctx.arc(x, y, 2, 0, 2 * Math.PI, true);
ctx.fill();
ctx.moveTo(x, y);
ctx.lineTo(xLine, yLine);
ctx.lineTo(xLine + extraLine, yLine);
// ctx.strokeStyle = dataset.backgroundColor[index];
ctx.strokeStyle = "black";
ctx.stroke();
// text
const textWidth = ctx.measureText(chart.data.labels[index]).width;
ctx.font = "12px Arial";
// control the position
const textXPosition = x >= halfwidth ? "left" : "right";
const plusFivePx = x >= halfwidth ? 5 : -5;
ctx.textAlign = textXPosition;
ctx.textBaseline = "middle";
// ctx.fillStyle = dataset.backgroundColor[index];
ctx.fillStyle = "black";
ctx.fillText(
((chart.data.datasets[0].data[index] * 100) / sum).toFixed(2) +
"%",
xLine + extraLine + plusFivePx,
yLine
);
});
});
},
};
// config
const config = {
type: "pie",
data,
options: {
maintainAspectRatio: false,
layout: {
padding: 30,
},
scales: {
y: {
display: false,
beginAtZero: true,
ticks: {
display: false,
},
grid: {
display: false,
},
},
x: {
display: false,
ticks: {
display: false,
},
grid: {
display: false,
},
},
},
plugins: {
legend: {
display: false,
},
},
},
plugins: [pieLabelsLine],
};
// render init block
const myChart = new Chart(document.getElementById("myChart"), config);
```
<https://codepen.io/BillDou/pen/oNoGBXb>
|
45,732,286
|
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation.
The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors.
The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance.
I built a non-convolutional DNN, very similar to the code I will post here, which showed good initial learning behavior. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger.
Now that I am converting to a convolutional model, I can't seem to get the shapes to match. Here are the working portions of my code:
```
import tensorflow as tf
import tflearn as tfl
from protein import ProteinDatabase # don't worry about its details
def backbone_angle_distance(predict, actual):
with tf.name_scope("BackboneAngleDistance"):
actual = tfl.reshape(actual, [-1,3,2])
# Supply the -1 argument for axis that TFLearn can't pass
loss = tf.losses.cosine_distance(predict, actual, -1,
reduction=tf.losses.Reduction.MEAN)
return loss
# Training data
database = ProteinDatabase("./data")
inp, tgt = database.training_arrays()
# DNN model, convolution only in topmost layer for now
net = tfl.input_data(shape=[None, None, 4])
net = tfl.conv_1d(net, 24, 3)
net = tfl.conv_1d(net, 12, 1)
net = tfl.conv_1d(net, 6, 1)
net = tfl.reshape(net, [-1,3,2])
net = tf.nn.l2_normalize(net, dim=2)
net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \
loss=backbone_angle_distance)
model = tfl.DNN(net)
# Generate a prediction. Compare shapes for compatibility.
out = model.predict(inp)
print("\ninp : {}, shape = {}".format(type(inp), inp.shape))
print("out : {}, shape = {}".format(type(out), out.shape))
print("tgt : {}, shape = {}".format(type(tgt), tgt.shape))
print("tgt shape, if flattened by one dimension = {}\n".\
format(tgt.reshape([-1,3,2]).shape))
```
The output at this point is:
```
inp : <class 'numpy.ndarray'>, shape = (25, 543, 4)
out : <class 'numpy.ndarray'>, shape = (13575, 3, 2)
tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2)
tgt shape, if flattened by one dimension = (13575, 3, 2)
```
So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function.
If I add a few lines to attempt model fitting as follows:
```
e, b = 1, 5
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
```
I get the following extra output and error:
```
---------------------------------
Run id: EEG6JW
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 20
Validation samples: 5
--
--
Traceback (most recent call last):
File "exp54.py", line 252, in <module>
model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)'
```
Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance().
Any advice is appreciated, thanks!
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45732286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9376487/"
] |
I couldn't find an exact plugin, so I made one.
```
const data = {
labels: ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"],
datasets: [
{
data: [1, 2, 3, 4, 5, 6],
backgroundColor: [
"#316065",
"#1A7F89",
"#2D9CA7",
"#2D86A7",
"#1167A7",
"#142440",
],
borderColor: [
"#316065",
"#1A7F89",
"#2D9CA7",
"#2D86A7",
"#1167A7",
"#142440",
],
},
],
};
// pieLabelsLine plugin
const pieLabelsLine = {
id: "pieLabelsLine",
afterDraw(chart) {
const {
ctx,
chartArea: { width, height },
} = chart;
const cx = chart._metasets[0].data[0].x;
const cy = chart._metasets[0].data[0].y;
const sum = chart.data.datasets[0].data.reduce((a, b) => a + b, 0);
chart.data.datasets.forEach((dataset, i) => {
chart.getDatasetMeta(i).data.forEach((datapoint, index) => {
const { x: a, y: b } = datapoint.tooltipPosition();
const x = 2 * a - cx;
const y = 2 * b - cy;
// draw line
const halfwidth = width / 2;
const halfheight = height / 2;
const xLine = x >= halfwidth ? x + 20 : x - 20;
const yLine = y >= halfheight ? y + 20 : y - 20;
const extraLine = x >= halfwidth ? 10 : -10;
ctx.beginPath();
ctx.moveTo(x, y);
ctx.arc(x, y, 2, 0, 2 * Math.PI, true);
ctx.fill();
ctx.moveTo(x, y);
ctx.lineTo(xLine, yLine);
ctx.lineTo(xLine + extraLine, yLine);
// ctx.strokeStyle = dataset.backgroundColor[index];
ctx.strokeStyle = "black";
ctx.stroke();
// text
const textWidth = ctx.measureText(chart.data.labels[index]).width;
ctx.font = "12px Arial";
// control the position
const textXPosition = x >= halfwidth ? "left" : "right";
const plusFivePx = x >= halfwidth ? 5 : -5;
ctx.textAlign = textXPosition;
ctx.textBaseline = "middle";
// ctx.fillStyle = dataset.backgroundColor[index];
ctx.fillStyle = "black";
ctx.fillText(
((chart.data.datasets[0].data[index] * 100) / sum).toFixed(2) +
"%",
xLine + extraLine + plusFivePx,
yLine
);
});
});
},
};
// config
const config = {
type: "pie",
data,
options: {
maintainAspectRatio: false,
layout: {
padding: 30,
},
scales: {
y: {
display: false,
beginAtZero: true,
ticks: {
display: false,
},
grid: {
display: false,
},
},
x: {
display: false,
ticks: {
display: false,
},
grid: {
display: false,
},
},
},
plugins: {
legend: {
display: false,
},
},
},
plugins: [pieLabelsLine],
};
// render init block
const myChart = new Chart(document.getElementById("myChart"), config);
```
<https://codepen.io/BillDou/pen/oNoGBXb>
|
I resolved it:
Add this script to a global file:
```
if(window.Chartist && Chartist.Pie && !Chartist.Pie.prototype.resolveOverlap) {
Chartist.Pie.prototype.resolveOverlap = function() {
this.on('draw', function(ctx) {
if(ctx.type == 'label') {
let gText = $(ctx.group._node).find('text');
let ctxHeight = ctx.element.height();
gText.each(function(index, ele){
let item = $(ele);
let diff = ctx.element.attr('dy') - item.attr('dy');
if(diff == 0) {
return false;
}
if(Math.abs(diff) < ctxHeight) {
ctx.element.attr({dy: ctx.element.attr('dy') - ctxHeight});
}
});
}
});
};
}
```
and then:
```
new Chartist.Pie(element, data, options).resolveOverlap();
```
|
58,523,431
|
`driver.getWindowHandles()` returns a Set,
so if we want to choose a window by index, we have to wrap the Set in an ArrayList:
```
var tabsList = new ArrayList<>(driver.getWindowHandles());
var nextTab = tabsList.get(1);
driver.switchTo().window(nextTab);
```
in Python we can access windows by index directly:
```
next_window = browser.window_handles[1]
driver.switch_to.window(next_window)
```
What is the purpose of choosing Set here?
|
2019/10/23
|
[
"https://Stackoverflow.com/questions/58523431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11705114/"
] |
Window Handles
--------------
In a discussion, regarding [window-handles](/questions/tagged/window-handles "show questions tagged 'window-handles'") Simon (creator of WebDriver) clearly mentioned that:
>
> While the datatype used for storing the list of handles may be ordered by insertion, the order in which the WebDriver implementation iterates over the window handles to insert them has no requirement to be stable. The ordering is arbitrary.
>
>
>
---
Background
----------
In the discussion [What is the difference between Set and List?](https://stackoverflow.com/questions/1035008/what-is-the-difference-between-set-and-list) @AndrewHare explained:
[`List<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/List.html)
>
> An ordered collection (also known as a sequence). The user of this interface has precise control over where in the list each element is inserted. The user can access elements by their integer index (position in the list) and search for elements in the list.
>
>
>
[`Set<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/Set.html)
>
> A collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction.
>
>
>
---
Conclusion
----------
So considering the above definition, in presence of multiple window handles, the best possible approach would be to use a **`Set<>`**
---
References
----------
You can find a couple of working examples in:
* [Best way to keep track and iterate through tabs and windows using WindowHandles using Selenium](https://stackoverflow.com/questions/46251494/best-way-to-keep-track-and-iterate-through-tabs-and-windows-using-windowhandles/46346324#46346324)
* [Open web in new tab Selenium + Python](https://stackoverflow.com/questions/28431765/open-web-in-new-tab-selenium-python/51893230#51893230)
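Because the Set's iteration order is arbitrary, a pattern more robust than indexing is to diff the handle sets taken before and after the new window opens. A minimal sketch, with plain strings standing in for real Selenium handles:

```python
# Hypothetical handles; in real code these would come from driver.window_handles
before = {"handle-1", "handle-2"}             # snapshot taken before the click
after = {"handle-1", "handle-2", "handle-3"}  # snapshot taken after the click

# The set difference isolates the newly opened window, regardless of ordering
new_handle = (after - before).pop()
print(new_handle)  # handle-3

# In Selenium you would then call: driver.switch_to.window(new_handle)
```

The same idea works in Java (`Set.removeAll`) and does not depend on any particular iteration order of the handles.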
|
One comment: keep in mind that the order of a Set is not fixed, so the usage above may return an arbitrary window.
|
58,523,431
|
`driver.getWindowHandles()` returns a Set,
so if we want to choose a window by index, we have to wrap the Set in an ArrayList:
```
var tabsList = new ArrayList<>(driver.getWindowHandles());
var nextTab = tabsList.get(1);
driver.switchTo().window(nextTab);
```
in Python we can access windows by index directly:
```
next_window = browser.window_handles[1]
driver.switch_to.window(next_window)
```
What is the purpose of choosing Set here?
|
2019/10/23
|
[
"https://Stackoverflow.com/questions/58523431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11705114/"
] |
Because Sets do not impose an order,\* which is important since there is no guaranteed order of the window handles returned. This is because the window handles represent not only tabs but also tabs in other browser windows. There is no reliable definition of their overall order which would work across platforms and browsers, so a List (which imposes order) wouldn’t make much sense.
\* Technically, SortedSet is a subtype of Set which does impose an order, but the general contract of Set does not require any order.
|
One comment: keep in mind that the order of a Set is not fixed, so the usage above may return an arbitrary window.
|
58,523,431
|
`driver.getWindowHandles()` returns a Set,
so if we want to choose a window by index, we have to wrap the Set in an ArrayList:
```
var tabsList = new ArrayList<>(driver.getWindowHandles());
var nextTab = tabsList.get(1);
driver.switchTo().window(nextTab);
```
in Python we can access windows by index directly:
```
next_window = browser.window_handles[1]
driver.switch_to.window(next_window)
```
What is the purpose of choosing Set here?
|
2019/10/23
|
[
"https://Stackoverflow.com/questions/58523431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11705114/"
] |
Because Sets do not impose an order,\* which is important since there is no guaranteed order of the window handles returned. This is because the window handles represent not only tabs but also tabs in other browser windows. There is no reliable definition of their overall order which would work across platforms and browsers, so a List (which imposes order) wouldn’t make much sense.
\* Technically, SortedSet is a subtype of Set which does impose an order, but the general contract of Set does not require any order.
|
Window Handles
--------------
In a discussion, regarding [window-handles](/questions/tagged/window-handles "show questions tagged 'window-handles'") Simon (creator of WebDriver) clearly mentioned that:
>
> While the datatype used for storing the list of handles may be ordered by insertion, the order in which the WebDriver implementation iterates over the window handles to insert them has no requirement to be stable. The ordering is arbitrary.
>
>
>
---
Background
----------
In the discussion [What is the difference between Set and List?](https://stackoverflow.com/questions/1035008/what-is-the-difference-between-set-and-list) @AndrewHare explained:
[`List<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/List.html)
>
> An ordered collection (also known as a sequence). The user of this interface has precise control over where in the list each element is inserted. The user can access elements by their integer index (position in the list) and search for elements in the list.
>
>
>
[`Set<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/Set.html)
>
> A collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction.
>
>
>
---
Conclusion
----------
So considering the above definition, in presence of multiple window handles, the best possible approach would be to use a **`Set<>`**
---
References
----------
You can find a couple of working examples in:
* [Best way to keep track and iterate through tabs and windows using WindowHandles using Selenium](https://stackoverflow.com/questions/46251494/best-way-to-keep-track-and-iterate-through-tabs-and-windows-using-windowhandles/46346324#46346324)
* [Open web in new tab Selenium + Python](https://stackoverflow.com/questions/28431765/open-web-in-new-tab-selenium-python/51893230#51893230)
|
53,372,966
|
While executing the following python script using cloud-composer, I get `*** Task instance did not exist in the DB` under the `gcs2bq` task Log in Airflow
Code:
```
import datetime
import os
import csv
import pandas as pd
import pip
from airflow import models
#from airflow.contrib.operators import dataproc_operator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.utils import trigger_rule
from airflow.contrib.operators import gcs_to_bq
from airflow.contrib.operators import bigquery_operator
print('''/-------/--------/------/
-------/--------/------/''')
yesterday = datetime.datetime.combine(
datetime.datetime.today() - datetime.timedelta(1),
datetime.datetime.min.time())
default_dag_args = {
# Setting start date as yesterday starts the DAG immediately when it is
# detected in the Cloud Storage bucket.
'start_date': yesterday,
# To email on failure or retry set 'email' arg to your email and enable
# emailing here.
'email_on_failure': False,
'email_on_retry': False,
# If a task fails, retry it once after waiting at least 5 minutes
'retries': 1,
'retry_delay': datetime.timedelta(minutes=5),
'project_id': 'data-rubrics'
#models.Variable.get('gcp_project')
}
try:
# [START composer_quickstart_schedule]
with models.DAG(
'composer_agg_quickstart',
# Continue to run DAG once per day
schedule_interval=datetime.timedelta(days=1),
default_args=default_dag_args) as dag:
# [END composer_quickstart_schedule]
op_start = BashOperator(task_id='Initializing', bash_command='echo Initialized')
#op_readwrite = PythonOperator(task_id = 'ReadAggWriteFile', python_callable=read_data)
op_load = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( \
task_id='gcs2bq',\
bucket='dr-mockup-data',\
source_objects=['sample.csv'],\
destination_project_dataset_table='data-rubrics.sample_bqtable',\
schema_fields = [{'name':'a', 'type':'STRING', 'mode':'NULLABLE'},{'name':'b', 'type':'FLOAT', 'mode':'NULLABLE'}],\
write_disposition='WRITE_TRUNCATE',\
dag=dag)
#op_write = PythonOperator(task_id = 'AggregateAndWriteFile', python_callable=write_data)
op_start >> op_load
```
|
2018/11/19
|
[
"https://Stackoverflow.com/questions/53372966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6039925/"
] |
I stumbled on this issue too. What helped for me was adding this line:
```
spark.sql("SET spark.sql.hive.manageFilesourcePartitions=False")
```
and then use `spark.sql(query)` instead of using dataframe.
I do not know what happens under the hood, but this solved my problem.
Although it might be too late for you (since this question was asked 8 months ago), this might help other people.
|
I know the topic is quite old but:
1. I received the same error, but the actual source of the problem was hidden much deeper in the logs. If you are facing the same problem as me, go to the end of your stack trace and you might find the actual reason for the job failing. In my case:
a. `org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:865)\n\t... 142 more\nCaused by: MetaException(message:Rate exceeded (Service: AWSGlue; Status Code: 400; Error Code: ThrottlingException ...` which basically means I've exceeded AWS Glue Data Catalog quota **OR**:
b. `MetaException(message:1 validation error detected: Value '(<my filtering condition goes here>' at 'expression' failed to satisfy constraint: Member must have length less than or equal to 2048` which means that filtering condition I've put in my dataframe definition is too long
Long story short, dig deep into your logs, because the reason for your error might be really simple; the top message is just far from clear.
2. If you are working with tables that have a huge number of partitions (in my case hundreds of thousands), I would strongly recommend against setting `spark.sql.hive.manageFilesourcePartitions=False`. Yes, it will resolve the issue, but the performance degradation is enormous.
|
70,339,321
|
The decision variable of my optimization problem (which I am aiming at keeping linear) is a placement binary vector, where the value in each position is either 0 or 1 (two different possible locations of item i).
One component of the objective function is this:
[](https://i.stack.imgur.com/SFvgz.png)
C\_T is the cost of transferring N items.
k is the iteration in which I am currently solving the problem and k-1 is the current displacement of items (result of solving the last iteration of the problem k-1). I have an initial condition (k=0).
N is "how many positions of x are different between the current displacement (k-1) and the outcome of the optimization problem (future optimal displacement x^k)".
How can I keep this component of the objective function linear? In other words, how can I replace the XOR operator?
I thought about using the absolute difference as an alternative but I'm not sure it will help.
Is there a linear way to do this?
I will implement this problem with PuLP in python, maybe there is something that can help over there as well...
|
2021/12/13
|
[
"https://Stackoverflow.com/questions/70339321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11422437/"
] |
My notation is: `xprev[i]` is the previous solution and `x[i]` is the current one. I assume `xprev[i]` is a binary constant and `x[i]` is a binary variable. Then we can write
```
sum(i, |xprev[i]-x[i]|)
=sum(i|xprev[i]=0, x[i]) + sum(i|xprev[i]=1, 1-x[i])
=sum(i, x[i]*(1-xprev[i]) + (1-x[i])*xprev[i])
```
Both the second and third lines can be implemented directly in Pulp. Note that | in the second line is 'such that'.
---
Below we have a comment that claims this is wrong. So let's write my expression as `B*(1-A)+(1-B)*A`. The following truth table can be constructed:
```
A B A xor B B*(1-A)+(1-B)*A
0 0 0 0 + 0
0 1 1 1 + 0
1 0 1 0 + 1
1 1 0 0 + 0
```
Note that `A xor B = A*not(B) + not(A)*B` is a well-known identity.
---
Note. Here I used the assumption that `xprev[i]` (or `A`) is a constant so things are linear. If both are (boolean) variables (let's call them x and y), then we need to do something differently. We can linearize the construct `z = x xor y` using four inequalities:
```
z <= x + y
z >= x - y
z >= y - x
z <= 2 - x - y
```
This is now linear and can be used inside a MIP solver.
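As a sanity check (plain Python, not solver code), you can enumerate all binary combinations and confirm that the four inequalities leave `z = x xor y` as the only feasible value:

```python
def xor_feasible(x, y, z):
    # the four linearization inequalities from above
    return z <= x + y and z >= x - y and z >= y - x and z <= 2 - x - y

# for every binary (x, y), collect the feasible binary values of z
table = {(x, y): [z for z in (0, 1) if xor_feasible(x, y, z)]
         for x in (0, 1) for y in (0, 1)}
print(table)  # each entry is [x xor y]
```

In PuLP these same four inequalities would simply be added as constraints on binary variables.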
|
**UPDATE**:
If what you need to replace is a XOR gate, then you could use a combination of other gates, which are linear, to replace it. Here are some of them <https://en.wikipedia.org/wiki/XOR_gate#Alternatives>.
Example: `A XOR B = (A OR B) AND (NOT A OR NOT B)`. When A and B are binary, with `NOT A = 1 - A`, that should translate mathematically to:
```
(A + B - A * B) * ((1 - A) + (1 - B) - (1 - A) * (1 - B))
```
---
Why not use multiplication?
```
AND table
0 0 = 0
0 1 = 0
1 0 = 0
1 1 = 1
```
```
Multiplication table
0*0 = 0
0*1 = 0
1*0 = 0
1*1 = 1
```
I think that does it. If it doesn't, then more details are needed I suppose.
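Putting the pieces together, here is a quick verification sketch (names are illustrative) checking two arithmetic encodings of XOR on binary inputs: the gate-based form using `OR = a + b - a*b` and `NOT a = 1 - a`, and the common shortcut `A + B - 2*A*B`. Note that both are nonlinear unless one operand is a constant:

```python
def xor_via_gates(A, B):
    # (A OR B) AND (NOT A OR NOT B), with OR as a+b-ab and NOT a as 1-a
    return (A + B - A * B) * ((1 - A) + (1 - B) - (1 - A) * (1 - B))

def xor_arithmetic(A, B):
    # the common shortcut A + B - 2AB
    return A + B - 2 * A * B

for A in (0, 1):
    for B in (0, 1):
        print(A, B, xor_via_gates(A, B), xor_arithmetic(A, B))
```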
|
65,493,246
|
I am using Python through a secure shell. When I use the pydot and graphviz packages, it shows the error `[Errno 2] dot not found in path`. I searched through many solutions. People suggest 'sudo apt install graphviz' or 'sudo apt-get install graphviz', but when I use 'sudo', it shows 'username is not in the sudoers file. This incident will be reported'. I also tried to add the graphviz folder location to the PATH variable using 'export PATH=$PATH:/..../lib/python3.8/site-packages/graphviz' (exact path is shown in the picture), but it doesn't work. Could anyone help please? Thank you very much.
I added a screenshot. I understand that I need to add a path including 'bin' [enter image description here](https://i.stack.imgur.com/WXtx8.png), but I didn't find the bin folder. I know what that folder looks like on Windows, but when I use FileZilla to check this graphviz folder, it doesn't have a 'bin' folder. I installed Graphviz using 'pip3 install graphviz'. When I search "How do I install Graphviz on Linux?", they all say 'sudo .....', which apparently doesn't work for me. Could anyone help please?
|
2020/12/29
|
[
"https://Stackoverflow.com/questions/65493246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14644632/"
] |
You can use `_source` to limit what's retrieved:
```
POST indexname/_search
{
"_source": "scores.a/*"
}
```
Alternatively, you could employ `script_fields`, which do exactly the same but also offer leeway for value modification:
```
POST indexname/_search
{
"script_fields": {
"scores_prefixed_with_a": {
"script": {
"source": """params._source.scores.entrySet()
.stream()
.filter(e->e.getKey().startsWith(params.prefix))
.collect(Collectors.toMap(e->e.getKey(),e->e.getValue()))""",
"params": {
"prefix": "a/"
}
}
}
}
}
```
|
Use [`.filter()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) - [`.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) on [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys)
```js
const data = [{"scores": {"a/b": 1.231, "a/c": 23.11, "x/a": 1232.1}}, {"scores": {"a/d": 3.1}}];
const allowed = 'a/';
const res = data.map((d) =>
Object.keys(d.scores)
.filter(key => key.startsWith(allowed)))
.reduce((prev, curr) => prev.concat(curr));
console.log(res)
```
```
[
"a/b",
"a/c",
"a/d"
]
```
---
---
Original, with objects instead of keys:
```js
const data = [{"scores": {"a/b": 1.231, "a/c": 23.11, "x/a": 1232.1}}, {"scores": {"a/d": 3.1}}];
const allowed = 'a/';
const res = data.map((d) =>
Object.keys(d.scores)
.filter(key => key.startsWith(allowed))
.reduce((obj, key) => { obj[key] = d.scores[key]; return obj; }, {}));
console.log(res);
```
|
10,234,575
|
I first installed pymongo using easy\_install, that didn't work so I tried with pip and it is still failing.
This is fine in the terminal:
```
Macintosh:etc me$ python
Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 14:13:39)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
>>>
```
But on line 10 of my script
```
import pymongo
```
throws the following error:
>
> File "test.py", line 10, in <module>
> import pymongo
> ImportError: No module named pymongo
>
>
>
I'm using the standard Lion builds of Apache and Python. Is there anybody else who has experienced this?
Thanks
EDIT: I should also mention that during install it throws the following error
```
Downloading/unpacking pymongo
Downloading pymongo-2.1.1.tar.gz (199Kb): 199Kb downloaded
Running setup.py egg_info for package pymongo
Installing collected packages: pymongo
Running setup.py install for pymongo
building 'bson._cbson' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -Ibson -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bson/_cbsonmodule.c -o build/temp.macosx-10.3-fat-2.7/bson/_cbsonmodule.o
unable to execute gcc-4.0: No such file or directory
command 'gcc-4.0' failed with exit status 1
**************************************************************
WARNING: The bson._cbson extension module could not
be compiled. No C extensions are essential for PyMongo to run,
although they do result in significant speed improvements.
If you are seeing this message on Linux you probably need to
install GCC and/or the Python development package for your
version of Python. Python development package names for popular
Linux distributions include:
RHEL/CentOS: python-devel
Debian/Ubuntu: python-dev
Above is the ouput showing how the compilation failed.
**************************************************************
building 'pymongo._cmessage' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -Ibson -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pymongo/_cmessagemodule.c -o build/temp.macosx-10.3-fat-2.7/pymongo/_cmessagemodule.o
unable to execute gcc-4.0: No such file or directory
command 'gcc-4.0' failed with exit status 1
**************************************************************
WARNING: The pymongo._cmessage extension module could not
be compiled. No C extensions are essential for PyMongo to run,
although they do result in significant speed improvements.
If you are seeing this message on Linux you probably need to
install GCC and/or the Python development package for your
version of Python. Python development package names for popular
Linux distributions include:
RHEL/CentOS: python-devel
Debian/Ubuntu: python-dev
Above is the ouput showing how the compilation failed.
**************************************************************
```
And then goes on to say
```
Successfully installed pymongo
Cleaning up...
Macintosh:etc me$
```
Very odd.
My sys.path in script is returned as:
['/Library/WebServer/Documents/', '/Library/Python/2.7/site-packages/tweepy-1.7.1-py2.7.egg', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']
And in interpreter:
['', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/SQLObject-1.2.1-py2.7.egg', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/FormEncode-1.2.4-py2.7.egg', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c12dev\_r88846-py2.7.egg', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.1-py2.7.egg', '/Library/Python/2.7/site-packages/tweepy-1.7.1-py2.7.egg', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages', '/Library/Python/2.7/site-packages']
|
2012/04/19
|
[
"https://Stackoverflow.com/questions/10234575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4533572/"
] |
Found it! It required a path append before importing the pymongo module:
```
sys.path.append('/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages')
import pymongo
```
Would ideally like to find a way to append this to the pythonpath permanently, but this works for now!
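One way to make such a path permanent without editing shell profiles is a `.pth` file: any text file in a site directory whose lines are directory paths gets appended to `sys.path` at startup. A self-contained sketch using temporary directories (in real usage you would write the `.pth` file into your site-packages directory):

```python
import os
import site
import sys
import tempfile

extra_dir = tempfile.mkdtemp()   # stands in for the directory you want on sys.path
pth_dir = tempfile.mkdtemp()     # stands in for a site directory Python scans

# a .pth file is just a text file listing one directory per line
with open(os.path.join(pth_dir, "extra.pth"), "w") as f:
    f.write(extra_dir + "\n")

site.addsitedir(pth_dir)         # processes the .pth files found in pth_dir
print(extra_dir in sys.path)     # True
```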
|
I'm not sure why the last message says "Successfully installed pymongo" but it obviously failed due to the fact that you don't have gcc installed on your system. You need to do the following:
RHEL/Centos: sudo yum install gcc python-devel
Debian/Ubuntu: sudo apt-get install gcc python-dev
Then try and install pymongo again.
|
1,941,894
|
I'm trying to get virtualenv to work on my machine. I'm using python2.6, and after installing pip, and using pip to install virtualenv, running "virtualenv --no-site-packages cyclesg" results in the following:
```
New python executable in cyclesg/bin/python
Installing setuptools....
Complete output from command /home/nubela/Workspace/cyclesg...ython -c "#!python
\"\"\"Bootstrap setuptoo...
" /usr/lib/python2.6/site-packag...6.egg:
error: invalid Python installation: unable to open /home/nubela/Workspace/cyclesg_dep/cyclesg/include/multiarch-i386-linux/python2.6/pyconfig.h (No such file or directory)
----------------------------------------
...Installing setuptools...done.
New python executable in cyclesg/bin/python
Installing setuptools....
Complete output from command /home/nubela/Workspace/cyclesg...ython -c "#!python
\"\"\"Bootstrap setuptoo...
" /usr/lib/python2.6/site-packag...6.egg:
error: invalid Python installation: unable to open /home/nubela/Workspace/cyclesg_dep/cyclesg/include/multiarch-i386-linux/python2.6/pyconfig.h (No such file or directory)
----------------------------------------
...Installing setuptools...done.
```
Any idea how I can remedy this? Thanks!
|
2009/12/21
|
[
"https://Stackoverflow.com/questions/1941894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/236267/"
] |
Are you on mandriva?
In order to support multilib (mixing x86/x86\_64) Mandriva messes up your python installation. They patched python, which breaks virtualenv; instead of fixing python, they then proceeded to patch virtualenv. This is useless if you are using your own virtualenv installed from pip.
Here is the bug: <https://qa.mandriva.com/show_bug.cgi?id=42808>
|
Are you on a linux based system? It looks like virtualenv is trying to build a new python exectable but can't find the files to do that. Try installing the `python-dev` package.
|
65,465,114
|
I am new to python programming. Following the AWS learning path:
<https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/?trk=el_a134p000003yWILAA2&trkCampaign=DS_SageMaker_Tutorial&sc_channel=el&sc_campaign=Data_Scientist_Hands-on_Tutorial&sc_outcome=Product_Marketing&sc_geo=mult>
I am getting an error when executing the following block (in conda\_python3):
```
test_data_array = test_data.drop(['y_no', 'y_yes'], axis=1).values #load the data into an array
xgb_predictor.content_type = 'text/csv' # set the data type for an inference
xgb_predictor.serializer = csv_serializer # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array
print(predictions_array.shape)
```
>
> AttributeError Traceback (most recent call last)
> in
> 1 test\_data\_array = test\_data.drop(['y\_no', 'y\_yes'], axis=1).values #load the data into an array
> ----> 2 xgb\_predictor.content\_type = 'text/csv' # set the data type for an inference
> 3 xgb\_predictor.serializer = csv\_serializer # set the serializer type
> 4 predictions = xgb\_predictor.predict(test\_data\_array).decode('utf-8') # predict!
> 5 predictions\_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array
>
>
>
>
> AttributeError: can't set attribute
>
>
>
I have looked at several prior questions but couldn't find much information related to this error when it comes to creating data types.
Thanks in advance for any help.
|
2020/12/27
|
[
"https://Stackoverflow.com/questions/65465114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2601359/"
] |
If you just remove it, the prediction will work. Therefore, I recommend removing this line of code:
xgb\_predictor.content\_type = 'text/csv'
|
Removing xgb\_predictor.content\_type = 'text/csv' will work.
But the best way is to first check the attributes of the object:
```
xgb_predictor.__dict__.keys()
```
This way, you will know which attributes can be set.
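The likely underlying reason for `can't set attribute` is that `content_type` is exposed as a read-only property in newer SDK versions. A minimal stand-in class (not the real SageMaker predictor) reproduces the behavior:

```python
class FakePredictor:
    """Minimal stand-in for illustration, not the real SageMaker class."""

    def __init__(self):
        self.serializer = None          # a plain, settable instance attribute

    @property
    def content_type(self):             # a read-only property: no setter defined
        return "text/csv"

p = FakePredictor()
print(sorted(p.__dict__.keys()))        # only real instance attributes appear

try:
    p.content_type = "text/csv"         # same failure mode as xgb_predictor
except AttributeError:
    print("can't set attribute: content_type has no setter")
```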
|
63,767,925
|
I'm really new to programming (two days old), so excuse my python dumbness. I've recently run into a problem with adding up to numbers from a list. I've managed to come up with this program:
```
list_nums = ["17", "3"]
num1 = list_nums[0]
num2 = list_nums[1]
sum = (num1) + (num2)
print(sum)
```
the problem is that instead of adding up num1 with num2 (17+3=20), Python combines both numbers (i.e. "173"). what can I do in order to add up the numbers, instead of combining them?
|
2020/09/06
|
[
"https://Stackoverflow.com/questions/63767925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14231446/"
] |
`"17"` and `"3"`These are string, if you remove double-quotes from them, they become integers `17` and `3`.
So if you want to add 2 numbers, they have to be `integer` or `float` in Python.
Just remove double-quotes in list:
`list_nums = [17, 3]`
|
Your `num1` and `num2` variables contain string values `'17'` and `'3'`. Operator `+` for strings works as a concatenation, e.g. `'17' + '3' == '173'`. If you need to get 20 out of it, you need to work with numeric types, like integers. For that, you either need to remove quotes from your 17 and 3 literals:
```
list_nums = [17, 3]
num1 = list_nums[0]
num2 = list_nums[1]
acc = num1 + num2
print(acc)
```
...or convert strings to integers on the fly:
```
list_nums = ["17", "3"]
num1 = list_nums[0]
num2 = list_nums[1]
acc = int(num1) + int(num2)
print(acc)
```
**P.S.** `sum` is the name of a built-in function in python. It's in general a good idea to avoid overriding such names. Other common names to avoid: `id`, `type`, `min`, `max`, etc.
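For completeness, converting and adding can be done in one step with the built-in `sum()` — which is exactly why shadowing that name is best avoided:

```python
list_nums = ["17", "3"]
total = sum(int(n) for n in list_nums)  # convert each string, then add
print(total)  # 20
```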
|
40,851,872
|
Can I get python to print the source code for `__builtins__` directly?
OR (more preferably):
What is the pathname of the source code for `__builtins__`?
---
I at least know the following things:
* `__builtins__` is a module, by typing `type(__builtins__)`.
* I have tried the best-answer-suggestions to a more general case of this SO question: ["Finding the source code for built-in Python functions?"](https://stackoverflow.com/questions/8608587/finding-the-source-code-for-built-in-python-functions). But no luck:
+ `print inspect.getdoc(__builtins__)` just gives me a description.
+ `inspect.getfile(__builtins__)` just gives me an error: `TypeError: <module '__builtin__' (built-in)> is a built-in module`
+ <https://hg.python.org/cpython/file/c6880edaf6f3/#> does not seem to contain an entry for `__builtins__`. I've tried "site:" search and browsed several of the directories but gave up after a few.
|
2016/11/28
|
[
"https://Stackoverflow.com/questions/40851872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The `__builtin__` module is implemented in [`Python/bltinmodule.c`](https://github.com/python/cpython/blob/2.7/Python/bltinmodule.c), a rather unusual location for a rather unusual module.
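A quick way to see the difference yourself (shown here with Python 3 names, where the module is called `builtins`): `inspect` can locate a source file for pure-Python stdlib modules, but raises `TypeError` for modules compiled into the interpreter:

```python
import builtins
import inspect
import json

# pure-Python stdlib modules report a source file on disk
print(inspect.getsourcefile(json))

# builtins is compiled into the interpreter, so there is no file to show
try:
    inspect.getfile(builtins)
except TypeError as exc:
    print("TypeError:", exc)
```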
|
I can't try it right now, but Python's default IDE (IDLE) is able to open core modules easily (I tried with math and some more):
<https://docs.python.org/2/library/idle.html>
In the menus: File > Open Module.
|
8,275,793
|
I have managed to write some simple scripts in python for android using sl4a. I can also create shortcuts on my home screen for them. But the icon chosen for this is always the python sl4a icon. Can I change this so different scripts have different icons?
|
2011/11/26
|
[
"https://Stackoverflow.com/questions/8275793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1024495/"
] |
You can change it if you build the .APK file from your computer and pick the icon there. You develop the Python SL4A application and you pick the logo in the /res/drawable folder.
|
I guess it depends on your launcher.
With ADW launcher, you can do a long press on your shortcut from your home screen and then select the icon you want to use by pressing the icon button.
For other launchers I've no idea.
|
8,275,793
|
I have managed to write some simple scripts in python for android using sl4a. I can also create shortcuts on my home screen for them. But the icon chosen for this is always the python sl4a icon. Can I change this so different scripts have different icons?
|
2011/11/26
|
[
"https://Stackoverflow.com/questions/8275793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1024495/"
] |
You can change it if you build the .APK file from your computer and pick the icon there. You develop the Python SL4A application and you pick the logo in the /res/drawable folder.
|
From what I know raduq's answer is correct: when you run a script it is not an actual application, and therefore it does not have its own icon. When you build the script into its own APK you are able to define your own icons for your project, as it is installed as an actual application within your Android system.
Once you are set up to compile your script into an APK, you can use raduq's information to locate the default icon in /res/drawable and replace it with your own image. For further information on how to compile a Python script into a standalone application, please see my answer to a related post from earlier: [Python for Android APK without dependencies](https://stackoverflow.com/questions/15994251/python-for-android-apk-without-dependencies/16123145#16123145)
|
51,952,761
|
In tkinter, when a button has the focus, you can press the space bar to execute the command associated with that button. I'm trying to make pressing the Enter key do the same thing. I'm certain I've done this in the past, but I can't find the code, and what I'm doing now isn't working. I'm using python 3.6.1 on a Mac.
Here is what I've tried
```
self.startButton.bind('<Return>', self.startButton.invoke)
```
Pressing the Enter key has no effect, but pressing the space bar activates the command bound to `self.startButton`. I've tried binding to `<KeyPress-KP_Enter>` with the same result.
I also tried just binding to the command I want to execute:
```
self.startButton.bind('<Return>', self.start)
```
but the result was the same.
**EDIT**
Here is a little script that exhibits the behavior I'm talking about.
```
import tkinter as tk
root = tk.Tk()
def start():
print('started')
startButton.configure(state=tk.DISABLED)
clearButton.configure(state=tk.NORMAL)
def clear():
print('cleared')
clearButton.configure(state=tk.DISABLED)
startButton.configure(state=tk.NORMAL)
frame = tk.Frame(root)
startButton = tk.Button(frame, text = 'Start', command = start, state=tk.NORMAL)
clearButton = tk.Button(frame, text = 'Clear', command = clear, state = tk.DISABLED)
startButton.bind('<Return>', start)
startButton.pack()
clearButton.pack()
startButton.focus_set()
frame.pack()
root.mainloop()
```
In this case, it works when I press the space bar and fails when I press Enter. I get an error message when I press Enter, saying that an argument was passed, but none is required. When I change the definition of `start` to take a dummy argument, pressing Enter works, but pressing the space bar fails because of a missing argument.
I'm having trouble understanding how wizzwizz4's answer gets both to work. Also, I wasn't seeing the error message when I pressed Enter in my actual script, but that's way too long to post.
\*\* EDIT AGAIN \*\*
I was just overlooking the default value of None in Mike-SMT's script. That makes things plain.
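For readers landing here, a tkinter-free sketch of that `event=None` default (the callback name `start` and the `calls` list are just illustrative): `command=` invokes the function with no arguments, a `bind` callback passes an event object, and the default argument satisfies both call shapes:

```python
calls = []

def start(event=None):
    # event is a tkinter Event when invoked via bind(), None via command=.
    calls.append("bound" if event is not None else "command")

start()          # as a Button command= callback
start(object())  # as a <Return> binding (stand-in for a tkinter Event)
print(calls)     # → ['command', 'bound']
```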
|
2018/08/21
|
[
"https://Stackoverflow.com/questions/51952761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/908293/"
] |
The only thing your Ajax is sending is `this.refs.search.value` - not the name "search", not URL encoded, not multipart encoded. Indeed, you seem to have invented your own encoding system.
Try:
```
xhr.open('get','//localhost:80/ReactStudy/travelReduxApp/public/server/search.php?search=' + value,true);
```
in Ajax.js
|
```
<?php
header('Access-Control-Allow-Origin:* ');
/*shows warning without isset*/
/*$form = $_GET["search"];
echo $form;*/
/*with isset shows not found*/
if(isset($_POST["search"])){
$form = $_GET["search"];
echo $form;
}else{
echo "not found";`ghd`
}
?>
```
|
37,357,896
|
I am using Sublime to automatically word-wrap Python code lines that go beyond 79 characters, as PEP 8 defines. Initially I was pressing return to not go beyond the limit.
The only downside with that is that anyone else not having word wrap active wouldn't have the limitation. So should I keep actually word-wrapping, or is the visual word wrap OK?
[](https://i.stack.imgur.com/Wz4ko.png)
|
2016/05/21
|
[
"https://Stackoverflow.com/questions/37357896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1767754/"
] |
PEP8 wants you to perform an actual word wrap. The point of PEP8’s stylistic rules is that the file looks the same in every editor, so you cannot rely on editor visualizations to satisfy PEP8.
This also makes you choose the point where to break deliberately. For example, Sublime will do a pretty basic job in wrapping that line; but you could do it in a more readable way, e.g.:
```
x = os.path.split(os.path.split(os.path.split(
os.path.split(os.path.split(path)[0])[0]
)[0])[0])
```
Of course that’s not necessarily pretty (I blame that mostly on this example code though), but it makes clear what belongs to what.
That being said, a good strategy is to simply avoid having to wrap lines. For example, you are using `os.path.split` over and over; so you could change your import:
```
from os.path import split
x = split(split(split(split(split(path)[0])[0])[0])[0])
```
And of course, if you find yourself doing something over and over, maybe there’s a better way to do this, for example using Python 3.4’s `pathlib`:
```
import pathlib
p = pathlib.Path(path).parents[2]
print(p.parent.absolute(), p.name)
```
|
In-file word wrapping would let your code conform to Pep-8 most consistently, even if other programmers are looking at your code using different coding environments. That seems to me to be the best solution to keeping to the standard, particularly if you are expecting that others will, at some point, be looking at your code.
If you are working with a set group of people on a project, or even in a company, it may be possible to coordinate with the other programmers to find what solution you are all most satisfied with.
For personal projects that you really aren't expecting anyone else to ever look at, I'm sure it's fine to use the visual word wrapping, but enforcing it yourself would certainly help to build on a good habit.
|
43,630,195
|
`A = [[[1,2,3],[4]],[[1,4],[2,3]]]`
Here I want to find the lists in A where the sum of every sublist is not greater than 5.
The result should be `[[1,4],[2,3]]`
I tried for a long time to solve this problem in Python, but I still can't figure out the right solution; I'm stuck on breaking out of multiple loops. My code is as follows, but it's obviously wrong. How can I correct it?
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
z = []
for l in A:
for list in l:
sum = 0
while sum < 5:
for i in list:
sum+=i
else:
break
else:
z.append(l)
print z
```
Asking for help~
|
2017/04/26
|
[
"https://Stackoverflow.com/questions/43630195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5702561/"
] |
A simple solution you might think of would be like this:
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
r = [] # this will be our result
for list in A: # Iterate through each item in A
f = True # This is a flag we set for a favorable sublist
for item in list: # Here we iterate through each list in the sublist
if sum(item) > 5: # If the sum is greater than 5 in any of them, set flag to false
f = False
if f: # If the flag is set, it means this is a favorable sublist
r.append(list)
print r
```
But I'm assuming the nesting level would be the same. <http://ideone.com/hhr9uq>
|
This should work for your problem:
```
>>> for alist in A:
... if max(sum(sublist) for sublist in alist) <= 5:
... print(alist)
...
[[1, 4], [2, 3]]
```
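If you prefer collecting the matches instead of printing them, the same test works as a list comprehension (using the question's sample `A`):

```python
A = [[[1, 2, 3], [4]], [[1, 4], [2, 3]]]

# Keep a group only if its largest sublist sum is at most 5.
result = [alist for alist in A
          if max(sum(sublist) for sublist in alist) <= 5]
print(result)  # → [[[1, 4], [2, 3]]]
```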
|
43,630,195
|
`A = [[[1,2,3],[4]],[[1,4],[2,3]]]`
Here I want to find the lists in A where the sum of every sublist is not greater than 5.
The result should be `[[1,4],[2,3]]`
I tried for a long time to solve this problem in Python, but I still can't figure out the right solution; I'm stuck on breaking out of multiple loops. My code is as follows, but it's obviously wrong. How can I correct it?
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
z = []
for l in A:
for list in l:
sum = 0
while sum < 5:
for i in list:
sum+=i
else:
break
else:
z.append(l)
print z
```
Asking for help~
|
2017/04/26
|
[
"https://Stackoverflow.com/questions/43630195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5702561/"
] |
Simplification of @KindStranger's method in a one-liner:
```
>> [sub for x in A for sub in x if max(sum(sub) for sub in x) <= 5]
[[1, 4], [2, 3]]
```
|
This should work for your problem:
```
>>> for alist in A:
... if max(sum(sublist) for sublist in alist) <= 5:
... print(alist)
...
[[1, 4], [2, 3]]
```
|
43,630,195
|
`A = [[[1,2,3],[4]],[[1,4],[2,3]]]`
Here I want to find the lists in A where the sum of every sublist is not greater than 5.
The result should be `[[1,4],[2,3]]`
I tried for a long time to solve this problem in Python, but I still can't figure out the right solution; I'm stuck on breaking out of multiple loops. My code is as follows, but it's obviously wrong. How can I correct it?
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
z = []
for l in A:
for list in l:
sum = 0
while sum < 5:
for i in list:
sum+=i
else:
break
else:
z.append(l)
print z
```
Asking for help~
|
2017/04/26
|
[
"https://Stackoverflow.com/questions/43630195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5702561/"
] |
Simplification of @KindStranger's method in a one-liner:
```
>> [sub for x in A for sub in x if max(sum(sub) for sub in x) <= 5]
[[1, 4], [2, 3]]
```
|
A simple solution you might think of would be like this:
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
r = [] # this will be our result
for list in A: # Iterate through each item in A
f = True # This is a flag we set for a favorable sublist
for item in list: # Here we iterate through each list in the sublist
if sum(item) > 5: # If the sum is greater than 5 in any of them, set flag to false
f = False
if f: # If the flag is set, it means this is a favorable sublist
r.append(list)
print r
```
But I'm assuming the nesting level would be the same. <http://ideone.com/hhr9uq>
|
43,630,195
|
`A = [[[1,2,3],[4]],[[1,4],[2,3]]]`
Here I want to find the lists in A where the sum of every sublist is not greater than 5.
The result should be `[[1,4],[2,3]]`
I tried for a long time to solve this problem in Python, but I still can't figure out the right solution; I'm stuck on breaking out of multiple loops. My code is as follows, but it's obviously wrong. How can I correct it?
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
z = []
for l in A:
for list in l:
sum = 0
while sum < 5:
for i in list:
sum+=i
else:
break
else:
z.append(l)
print z
```
Asking for help~
|
2017/04/26
|
[
"https://Stackoverflow.com/questions/43630195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5702561/"
] |
A simple solution you might think of would be like this:
```
A = [[[1,2,3],[4]],[[1,4],[2,3]]]
r = [] # this will be our result
for list in A: # Iterate through each item in A
f = True # This is a flag we set for a favorable sublist
for item in list: # Here we iterate through each list in the sublist
if sum(item) > 5: # If the sum is greater than 5 in any of them, set flag to false
f = False
if f: # If the flag is set, it means this is a favorable sublist
r.append(list)
print r
```
But I'm assuming the nesting level would be the same. <http://ideone.com/hhr9uq>
|
The one with `all()`
```
[t for item in A for t in item if all(sum(t)<=5 for t in item)]
```
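Checked against the question's sample data (note the `t` inside `all(...)` lives in the generator expression's own scope, so it doesn't clobber the outer `t`):

```python
A = [[[1, 2, 3], [4]], [[1, 4], [2, 3]]]

# The inner t is scoped to the generator expression passed to all().
result = [t for item in A for t in item if all(sum(t) <= 5 for t in item)]
print(result)  # → [[1, 4], [2, 3]]
```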
|