| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
47,261,255
|
I'm trying to execute a dag which needs to be run only once. So I placed the dag execution interval as '@once'. However, I'm getting the error as mentioned in this link -
<https://issues.apache.org/jira/browse/AIRFLOW-1400>
Now I'm trying to pass the exact date of execution, as below:
```
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2017,11,13),
'email': ['airflow@airflow.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(seconds=5)
}
dag = DAG(
dag_id='dagNameTest', default_args=default_args, schedule_interval='12 09 13 11 2017',concurrency=1)
```
This is throwing error as:
```
File "/usr/lib/python2.7/site-packages/croniter/croniter.py", line 543, in expand
expr_format))
CroniterBadCronError: [12 09 13 11 2017] is not acceptable, out of range
```
Can someone help resolve this?
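**Edit:** for reference, a standard cron expression has only five fields (minute, hour, day-of-month, month, day-of-week), so croniter reads the trailing `2017` as a day-of-week value and rejects it. A sketch of the same DAG with a five-field expression (illustrative only; cron has no year field, so the year has to come from `start_date`):
```
default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2017, 11, 13),
    'email': ['airflow@airflow.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(seconds=5)
}
# '12 09 13 11 *' = 09:12 on 13 November, any day of the week
dag = DAG(
    dag_id='dagNameTest', default_args=default_args,
    schedule_interval='12 09 13 11 *', concurrency=1)
```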
Thanks,
Arjun
|
2017/11/13
|
[
"https://Stackoverflow.com/questions/47261255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229291/"
] |
Grouping by either `Transactions.TransactionCategory` or `TransactionCategory.TranCatID` will give you the desired result, as follows:
```
SELECT TransactionCategory.TransCatName, SUM(`Value`) AS Value
FROM Transactions
JOIN TransactionCategory ON Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY Transactions.TransactionCategory;

-- or

SELECT TransactionCategory.TransCatName, SUM(`Value`) AS Value
FROM Transactions
JOIN TransactionCategory ON Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY TransactionCategory.TranCatID;
```
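Both `GROUP BY` variants can be checked quickly against an in-memory SQLite database; the rows below are invented for illustration, only the schema follows the answer:

```python
import sqlite3

# Hypothetical rows; table/column names follow the answer's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TransactionCategory (TranCatID INTEGER PRIMARY KEY, TransCatName TEXT);
    CREATE TABLE Transactions (TransactionCategory INTEGER, Value INTEGER);
    INSERT INTO TransactionCategory VALUES (1, 'Petrol'), (2, 'Transportation');
    INSERT INTO Transactions VALUES (1, 20), (1, 18), (2, 30), (2, 38);
""")

query = """
    SELECT tc.TransCatName, SUM(t.Value) AS Value
    FROM Transactions t
    JOIN TransactionCategory tc ON t.TransactionCategory = tc.TranCatID
    GROUP BY {}
    ORDER BY tc.TransCatName
"""
by_fk = conn.execute(query.format("t.TransactionCategory")).fetchall()
by_pk = conn.execute(query.format("tc.TranCatID")).fetchall()
print(by_fk)  # both groupings agree
```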
|
You can use this:
```
SELECT tc.TransCatName Category, SUM(t.Value) as Value
FROM TransactionCategory tc
LEFT JOIN Transactions t ON tc.TranCatID = t.TransactionCategory
group by tc.TransCatName
```
[SQL HERE](http://sqlfiddle.com/#!9/136aa3/1)
**OUTPUT**
```
Category | Value
-----------------------
Petrol | 38
Transportation | 68
```
Please notice the `SUM` for **Petrol**: it should be `38`, as above, not the `30` written in your question description!
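A note on the design choice: the `LEFT JOIN` also keeps categories that have no transactions at all. A sketch with an in-memory SQLite database (the rows, and the empty `Food` category, are invented for illustration):

```python
import sqlite3

# Invented rows; 'Food' is a hypothetical category with no transactions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TransactionCategory (TranCatID INTEGER PRIMARY KEY, TransCatName TEXT);
    CREATE TABLE Transactions (TransactionCategory INTEGER, Value INTEGER);
    INSERT INTO TransactionCategory VALUES (1, 'Petrol'), (2, 'Transportation'), (3, 'Food');
    INSERT INTO Transactions VALUES (1, 20), (1, 18), (2, 30), (2, 38);
""")

rows = conn.execute("""
    SELECT tc.TransCatName AS Category, SUM(t.Value) AS Value
    FROM TransactionCategory tc
    LEFT JOIN Transactions t ON tc.TranCatID = t.TransactionCategory
    GROUP BY tc.TransCatName
    ORDER BY tc.TransCatName
""").fetchall()
print(rows)  # Food still appears, with a NULL (None) sum
```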
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
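For the record, these questions can be answered directly with a quick check (the arrays are exactly the ones given above):

```python
import numpy as np

w = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
x = np.array([7, 2, 3])

# "Sum product over the last axis": each entry of y is the dot product of a
# row of w with x, i.e. ordinary matrix-vector multiplication. No broadcasting
# is involved, and the result is a 1-D array (neither a row nor a column).
y = np.dot(w, x)
print(y, y.shape)  # [20 56 92] (3,)

# With a (3, 1) column instead, the result is a (3, 1) column as well:
x_col = np.array([[7], [2], [3]])
print(np.dot(w, x_col).shape)  # (3, 1)
```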
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
Since you mentioned you can use **lodash** you can use [`merge`](https://lodash.com/docs/4.17.11#merge) like so:
`_.merge(obj1, obj2)`
to get your desired result.
See working example below:
```js
const a = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3 },
},
b = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
4: {foo: 1}
},
res = _.merge(a, b);
console.log(res);
```
```html
<script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script>
```
|
You can use `Object.assign` and assign the object properties to an empty object.
```
var a = {books: 2};
var b = {notebooks: 1};
var c = Object.assign( {}, a, b );
console.log(c);
```
or
You could use the `merge` method from the Lodash library. You can check it here:
<https://www.npmjs.com/package/lodash>
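One caveat, since the question's values are themselves objects: `Object.assign` merges only one level deep, so nested objects under the same key are replaced rather than combined. A small sketch (the object shapes are assumed from the question):

```javascript
const a = { 1: { foo: 1 } };
const b = { 1: { bar: 2 } };

// Shallow merge: b's value for key '1' replaces a's entirely, so foo is lost.
const shallow = Object.assign({}, a, b);
console.log(shallow); // { '1': { bar: 2 } }

// A per-key spread keeps both nested properties.
const deep = { 1: { ...a[1], ...b[1] } };
console.log(deep); // { '1': { foo: 1, bar: 2 } }
```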
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
**I have exactly what you want**.
This function traverses each nested object and combines it with the other. I've only tested it five levels down the tree, but in theory it should work for any nesting depth, since it is recursive.
```
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
Here is a working example of a much more complex object merging
```js
var obj1 = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3, boor:{foob: 1, foof: 8} },
4: {continent: {
asia: {country: {india: {capital: "delhi"}, china: {capital: "beijing"}}},
europe:{country:{germany: {capital: "berlin"},france: {capital: "paris"}}}
},
vegetables: {cucumber: 2, carrot: 3, radish: 7}
}
};
var obj2 = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
3: {fooBar: 3, boor:{foob: 1, boof: 6}, boob: 9 },
4: {continent: {
northamerica: {country: {mexico: {capital: "mexico city"}, canada: {capital: "ottawa"}},},
asia: {country: {Afghanistan : {capital: "Kabul"}}}
}
},
5: {barf: 42}
};
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
|
You can use `Object.assign` and assign the object properties to an empty object.
```
var a = {books: 2};
var b = {notebooks: 1};
var c = Object.assign( {}, a, b );
console.log(c);
```
or
You could use the `merge` method from the Lodash library. You can check it here:
<https://www.npmjs.com/package/lodash>
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
Since you mentioned you can use **lodash** you can use [`merge`](https://lodash.com/docs/4.17.11#merge) like so:
`_.merge(obj1, obj2)`
to get your desired result.
See working example below:
```js
const a = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3 },
},
b = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
4: {foo: 1}
},
res = _.merge(a, b);
console.log(res);
```
```html
<script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script>
```
|
This can be accomplished with a combination of JavaScript spread syntax, Array and Object prototype methods, and destructuring patterns.
```
[obj1,obj2].flatMap(Object.entries).reduce((o,[k,v])=>({...o,[k]:{...o[k],...v}}),{})
```
**As simple as this!**
---
For a very detailed explanation of how this works, refer to [this extended answer](https://stackoverflow.com/a/73420289/2938526) to a similar question.
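As a quick check, here is the one-liner run against sample objects shaped like the question's (`obj1`/`obj2` below are assumed, not taken verbatim from the post):

```javascript
const obj1 = { 1: { foo: 1 }, 2: { bar: 2, fooBar: 3 }, 3: { fooBar: 3 } };
const obj2 = { 1: { foo: 1, bar: 2 }, 2: { bar: 2 } };

const merged = [obj1, obj2]
  .flatMap(Object.entries) // [[key, inner], ...] from both objects in order
  .reduce((o, [k, v]) => ({ ...o, [k]: { ...o[k], ...v } }), {}); // merge per key; later entries win

console.log(merged);
// { '1': { foo: 1, bar: 2 }, '2': { bar: 2, fooBar: 3 }, '3': { fooBar: 3 } }
```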
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
**I have exactly what you want**.
This function traverses each nested object and combines it with the other. I've only tested it five levels down the tree, but in theory it should work for any nesting depth, since it is recursive.
```
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
Here is a working example of a much more complex object merging
```js
var obj1 = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3, boor:{foob: 1, foof: 8} },
4: {continent: {
asia: {country: {india: {capital: "delhi"}, china: {capital: "beijing"}}},
europe:{country:{germany: {capital: "berlin"},france: {capital: "paris"}}}
},
vegetables: {cucumber: 2, carrot: 3, radish: 7}
}
};
var obj2 = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
3: {fooBar: 3, boor:{foob: 1, boof: 6}, boob: 9 },
4: {continent: {
northamerica: {country: {mexico: {capital: "mexico city"}, canada: {capital: "ottawa"}},},
asia: {country: {Afghanistan : {capital: "Kabul"}}}
}
},
5: {barf: 42}
};
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
|
Are you looking for something like this?
We can merge two objects this way:
```
const person = { name: 'David Walsh', gender: 'Male' };
const attributes = { handsomeness: 'Extreme', hair: 'Brown', eyes: 'Blue' };
const summary = {...person, ...attributes};
```
The result is:
```
Object {
  "eyes": "Blue",
  "gender": "Male",
  "hair": "Brown",
  "handsomeness": "Extreme",
  "name": "David Walsh",
}
```
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
You can use the spread operator.
Update:
>
> if obj2 has some properties that obj1 does not have?
>
>
>
Initially I wrote this answer assuming the keys are indexed like `0, 1, and so on`, but as you mentioned in a comment this is not the case, so you can build an array of keys and then iterate over it,
as very concisely suggested in a comment by @Nick: `[...Object.keys(obj1), ...Object.keys(obj2)]`
```js
let obj1 = {1: { foo: 1 },2: { bar: 2, fooBar: 3 },3: { fooBar: 3 },};
let obj2 = {1: { foo: 1, bar: 2 },2: { bar: 2 },};
let keys = [...new Set([...Object.keys(obj1),...Object.keys(obj2)])]
let op = {}
keys.forEach(key=>{
op[key] = {
...obj1[key],
...obj2[key]
}
})
console.log(op)
```
|
You can use `Object.assign` and assign the object properties to an empty object.
```
var a = {books: 2};
var b = {notebooks: 1};
var c = Object.assign( {}, a, b );
console.log(c);
```
or
You could use the `merge` method from the Lodash library. You can check it here:
<https://www.npmjs.com/package/lodash>
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
You can use the spread operator.
Update:
>
> if obj2 has some properties that obj1 does not have?
>
>
>
Initially I wrote this answer assuming the keys are indexed like `0, 1, and so on`, but as you mentioned in a comment this is not the case, so you can build an array of keys and then iterate over it,
as very concisely suggested in a comment by @Nick: `[...Object.keys(obj1), ...Object.keys(obj2)]`
```js
let obj1 = {1: { foo: 1 },2: { bar: 2, fooBar: 3 },3: { fooBar: 3 },};
let obj2 = {1: { foo: 1, bar: 2 },2: { bar: 2 },};
let keys = [...new Set([...Object.keys(obj1),...Object.keys(obj2)])]
let op = {}
keys.forEach(key=>{
op[key] = {
...obj1[key],
...obj2[key]
}
})
console.log(op)
```
|
**I have exactly what you want**.
This function traverses each nested object and combines it with the other. I've only tested it five levels down the tree, but in theory it should work for any nesting depth, since it is recursive.
```
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
Here is a working example of a much more complex object merging
```js
var obj1 = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3, boor:{foob: 1, foof: 8} },
4: {continent: {
asia: {country: {india: {capital: "delhi"}, china: {capital: "beijing"}}},
europe:{country:{germany: {capital: "berlin"},france: {capital: "paris"}}}
},
vegetables: {cucumber: 2, carrot: 3, radish: 7}
}
};
var obj2 = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
3: {fooBar: 3, boor:{foob: 1, boof: 6}, boob: 9 },
4: {continent: {
northamerica: {country: {mexico: {capital: "mexico city"}, canada: {capital: "ottawa"}},},
asia: {country: {Afghanistan : {capital: "Kabul"}}}
}
},
5: {barf: 42}
};
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
**I have exactly what you want**.
This function traverses each nested object and combines it with the other. I've only tested it five levels down the tree, but in theory it should work for any nesting depth, since it is recursive.
```
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
Here is a working example of a much more complex object merging
```js
var obj1 = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3, boor:{foob: 1, foof: 8} },
4: {continent: {
asia: {country: {india: {capital: "delhi"}, china: {capital: "beijing"}}},
europe:{country:{germany: {capital: "berlin"},france: {capital: "paris"}}}
},
vegetables: {cucumber: 2, carrot: 3, radish: 7}
}
};
var obj2 = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
3: {fooBar: 3, boor:{foob: 1, boof: 6}, boob: 9 },
4: {continent: {
northamerica: {country: {mexico: {capital: "mexico city"}, canada: {capital: "ottawa"}},},
asia: {country: {Afghanistan : {capital: "Kabul"}}}
}
},
5: {barf: 42}
};
//this function is similar to Object.assign but
//leaves the keys which are common to both objects untouched
function merge(obj1, obj2)
{
function combine(p, q)
{
for(i in q)
if(!p.hasOwnProperty(i))
p[i]= q[i];
return p;
}
obj1= Object.assign(combine(obj1, obj2), obj1);//for the first level
obj1= Object.assign(traverse(obj1, obj2), obj1);//for subsequent levels down the object tree
//this function traverses each nested object and combines it with the other object
function traverse(a, b)
{
if(typeof(a) === "object" && typeof(b) === "object")
for(i in b)
if(typeof(a[i]) === "object" && typeof(b[i]) === "object")
a[i]= Object.assign(traverse(a[i], b[i]), a[i]);
else
Object.assign(combine(a, b), a);
return a;
}
return obj1;
}
console.log(merge(obj1, obj2));
```
|
This can be accomplished with a combination of JavaScript spread syntax, Array and Object prototype methods, and destructuring patterns.
```
[obj1,obj2].flatMap(Object.entries).reduce((o,[k,v])=>({...o,[k]:{...o[k],...v}}),{})
```
**As simple as this!**
---
For a very detailed explanation of how this works, refer to [this extended answer](https://stackoverflow.com/a/73420289/2938526) to a similar question.
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
Since you mentioned you can use **lodash** you can use [`merge`](https://lodash.com/docs/4.17.11#merge) like so:
`_.merge(obj1, obj2)`
to get your desired result.
See working example below:
```js
const a = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3 },
},
b = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
4: {foo: 1}
},
res = _.merge(a, b);
console.log(res);
```
```html
<script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script>
```
|
Are you looking for something like this?
We can merge two objects this way:
```
const person = { name: 'David Walsh', gender: 'Male' };
const attributes = { handsomeness: 'Extreme', hair: 'Brown', eyes: 'Blue' };
const summary = {...person, ...attributes};
```
The result is:
```
Object {
  "eyes": "Blue",
  "gender": "Male",
  "hair": "Brown",
  "handsomeness": "Extreme",
  "name": "David Walsh",
}
```
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
You can use the spread operator.
Update:
>
> if obj2 has some properties that obj1 does not have?
>
>
>
Initially I wrote this answer assuming the keys are indexed like `0, 1, and so on`, but as you mentioned in a comment this is not the case, so you can build an array of keys and then iterate over it,
as very concisely suggested in a comment by @Nick: `[...Object.keys(obj1), ...Object.keys(obj2)]`
```js
let obj1 = {1: { foo: 1 },2: { bar: 2, fooBar: 3 },3: { fooBar: 3 },};
let obj2 = {1: { foo: 1, bar: 2 },2: { bar: 2 },};
let keys = [...new Set([...Object.keys(obj1),...Object.keys(obj2)])]
let op = {}
keys.forEach(key=>{
op[key] = {
...obj1[key],
...obj2[key]
}
})
console.log(op)
```
|
This can be accomplished with a combination of JavaScript spread syntax, Array and Object prototype methods, and destructuring patterns.
```
[obj1,obj2].flatMap(Object.entries).reduce((o,[k,v])=>({...o,[k]:{...o[k],...v}}),{})
```
**As simple as this!**
---
For a very detailed explanation of how this works, refer to [this extended answer](https://stackoverflow.com/a/73420289/2938526) to a similar question.
|
54,376,661
|
**To those who voted to close because it is unclear what I'm asking, here are the questions in my post:**
1. Can anyone tell me what's the result of `y`?
2. Is there anything called sum product in Mathematics?
3. Is `x` subject to broadcasting?
4. Why is `y` a column/row vector?
5. What if `x=np.array([[7],[2],[3]])`?
```
w=np.array([[1,2,3],[4,5,6],[7,8,9]])
x=np.array([7,2,3])
y=np.dot(w,x)
```
Can anyone tell me what's the result of `y`?
[](https://i.stack.imgur.com/KzZnm.png)
I deliberately mosaicked the screenshot so that you can pretend you are in a test and cannot run Python to get the result.
<https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.dot.html#numpy.dot> says
> If a is an N-D array and b is a 1-D array, it is a sum product over
> the last axis of a and b.
Is there anything called **sum product** in Mathematics?
Is `x` subject to broadcasting?
Why is `y` a column/row vector?
What if `x=np.array([[7],[2],[3]])`?
|
2019/01/26
|
[
"https://Stackoverflow.com/questions/54376661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/746461/"
] |
You can use the spread operator.
Update:
>
> if obj2 has some properties that obj1 does not have?
>
>
>
Initially I wrote this answer assuming the keys are indexed like `0, 1, and so on`, but as you mentioned in a comment this is not the case, so you can build an array of keys and then iterate over it,
as very concisely suggested in a comment by @Nick: `[...Object.keys(obj1), ...Object.keys(obj2)]`
```js
let obj1 = {1: { foo: 1 },2: { bar: 2, fooBar: 3 },3: { fooBar: 3 },};
let obj2 = {1: { foo: 1, bar: 2 },2: { bar: 2 },};
let keys = [...new Set([...Object.keys(obj1),...Object.keys(obj2)])]
let op = {}
keys.forEach(key=>{
op[key] = {
...obj1[key],
...obj2[key]
}
})
console.log(op)
```
|
Since you mentioned you can use **lodash** you can use [`merge`](https://lodash.com/docs/4.17.11#merge) like so:
`_.merge(obj1, obj2)`
to get your desired result.
See working example below:
```js
const a = {
1: { foo: 1 },
2: { bar: 2, fooBar: 3 },
3: { fooBar: 3 },
},
b = {
1: { foo: 1, bar: 2 },
2: { bar: 2 },
4: {foo: 1}
},
res = _.merge(a, b);
console.log(res);
```
```html
<script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script>
```
|
37,297,472
|
I use Linux Mint 17 'Qiana' and I want to install Watchman so I can later use Ember.js. Here were my steps:
```
$ git clone https://github.com/facebook/watchman.git
```
then
```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```
and, when I ran `make` to compile files, it returned the following error:
```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```
I tried to run
```
$ sudo apt-get install python3-dev
```
but it appears to be already in my system. What have I done wrong?
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5846366/"
] |
Usually it's the `python-dev` libs that are missing. Are you sure the configure step uses Python 3 instead of Python 2? Because if that's the case, you should install `python-dev` instead of `python3-dev`.
|
The same problem occurs if you build watchman under Raspbian on a Raspberry Pi. Install `python-dev`, then:
```
git clone https://github.com/facebook/watchman.git
cd watchman
./autogen.sh
./configure
make
sudo make install
```
|
37,297,472
|
I use Linux Mint 17 'Qiana' and I want to install Watchman in order to use Ember.js later. Here were my steps:
```
$ git clone https://github.com/facebook/watchman.git
```
then
```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```
and, when I ran `make` to compile files, it returned the following error:
```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```
I tried to run
```
$ sudo apt-get install python3-dev
```
but it appears to be already in my system. What have I done wrong?
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5846366/"
] |
Usually it's the `python-dev` libs that are missing. Are you sure configure uses Python 3 and not Python 2? If it is actually using Python 2, you should install `python-dev` instead of `python3-dev`.
|
I also ran
```
sudo apt-get install python3-dev
```
It was still giving me the error, so I then ran this command:
```
sudo apt-get install python-dev
```
After that:
```
make
sudo make install
```
|
37,297,472
|
I use Linux Mint 17 'Qiana' and I want to install Watchman in order to use Ember.js later. Here were my steps:
```
$ git clone https://github.com/facebook/watchman.git
```
then
```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```
and, when I ran `make` to compile files, it returned the following error:
```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```
I tried to run
```
$ sudo apt-get install python3-dev
```
but it appears to be already in my system. What have I done wrong?
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5846366/"
] |
Usually it's the `python-dev` libs that are missing. Are you sure configure uses Python 3 and not Python 2? If it is actually using Python 2, you should install `python-dev` instead of `python3-dev`.
|
on Fedora 32 run: `sudo dnf install python-devel`
|
37,297,472
|
I use Linux Mint 17 'Qiana' and I want to install Watchman in order to use Ember.js later. Here were my steps:
```
$ git clone https://github.com/facebook/watchman.git
```
then
```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```
and, when I ran `make` to compile files, it returned the following error:
```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```
I tried to run
```
$ sudo apt-get install python3-dev
```
but it appears to be already in my system. What have I done wrong?
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5846366/"
] |
The same problem occurs if you build Watchman under Raspbian on a Raspberry Pi. Install `python-dev`, then:
--
```
git clone https://github.com/facebook/watchman.git
cd watchman
./autogen.sh
./configure
make
sudo make install
```
|
on Fedora 32 run: `sudo dnf install python-devel`
|
37,297,472
|
I use Linux Mint 17 'Qiana' and I want to install Watchman in order to use Ember.js later. Here were my steps:
```
$ git clone https://github.com/facebook/watchman.git
```
then
```
$ cd watchman
$ ./autogen.sh
$ ./configure.sh
```
and, when I ran `make` to compile files, it returned the following error:
```
pywatchman/bser.c:31:20: fatal error: Python.h: no such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'i686-linux-gnu-gcc' failed with exit status 1
make[1]: *** [py-build] Error 1
make[1]: Leaving the directory `/home/alex/watchman'
make: *** [all] Error 2
```
I tried to run
```
$ sudo apt-get install python3-dev
```
but it appears to be already in my system. What have I done wrong?
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5846366/"
] |
I also ran
```
sudo apt-get install python3-dev
```
It was still giving me the error, so I then ran this command:
```
sudo apt-get install python-dev
```
After that:
```
make
sudo make install
```
|
on Fedora 32 run: `sudo dnf install python-devel`
|
51,811,662
|
in a python program I have a list that I would like to modify:
```
a = [1,2,3,4,5,1,2,3,1,4,5]
```
Say every time I see 1 in the list, I would like to replace it with 10, 9, 8. My goal is to get:
```
a = [10,9,8,2,3,4,5,10,9,8,2,3,10,9,8,4,5]
```
What's a good way to program this? Currently I have to do a 'replace' and two 'inserts' every time I see a 1 in the list.
Thank you!
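For reference, the expansion described above can be done in one pass with a nested list comprehension instead of repeated replace/insert calls. A minimal sketch (the `replacement` mapping name is just illustrative):

```python
a = [1, 2, 3, 4, 5, 1, 2, 3, 1, 4, 5]

# Map each value to its expansion; values not in the map stay as themselves.
replacement = {1: [10, 9, 8]}
result = [x for item in a for x in replacement.get(item, [item])]

print(result)  # [10, 9, 8, 2, 3, 4, 5, 10, 9, 8, 2, 3, 10, 9, 8, 4, 5]
```

This also generalizes to replacing several different values at once by adding more entries to the mapping.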
|
2018/08/12
|
[
"https://Stackoverflow.com/questions/51811662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
You **cannot** modify your state object or any of the objects it contains directly; you must instead use `setState`. And when you're setting state based on existing state, you must use the callback version of it; [details](https://reactjs.org/docs/state-and-lifecycle.html#state-updates-may-be-asynchronous).
So in your case, if you want to **add to** the existing `this.state.errors` array, then:
```
let errors = [];
if(this.state.daysOfWeek.length < 1) {
errors.push('Must select at least 1 day to perform workout');
}
if(this.state.workoutId === '') {
errors.push('Please select a workout to edit');
}
if (errors.length) {
this.setState(prevState => ({errors: [...prevState.errors, ...errors]}));
}
```
If you want to **replace** the `this.state.errors` array, you don't need the callback form:
```
let errors = [];
if(this.state.daysOfWeek.length < 1) {
errors.push('Must select at least 1 day to perform workout');
}
if(this.state.workoutId === '') {
errors.push('Please select a workout to edit');
}
if (errors.length) { // Or maybe you don't want this check and want to
// set it anyway, to clear existing errors
this.setState({errors});
}
```
|
In React, you must never assign to `this.state` directly. Use `this.setState()` instead. The reason is that otherwise React would not know you had changed the state.
The only exception to this rule where you assign directly to `this.state` is in your component's constructor.
|
19,882,594
|
I am trying to pull company information from the following website:
<http://www.theglobeandmail.com/globe-investor/markets/stocks/summary/?q=T-T>
I see from their page source that there are nested span statements like:
```
<li class="clearfix">
<span class="label">Low</span>
<span class="giw-a-t-sc-data">36.39</span>
</li>
<li class="clearfix">
<span class="label">Bid<span class="giw-a-t-sc-bidSize smallsize">x0</span></span>
<span class="giw-a-t-sc-data">36.88</span>
</li>
```
The code I wrote will grab (Low, 36.39) without problem. I have spent hours reading this forum and others trying to get bs4 to also break out (Bid, 36.88). The problem is, Bid comes out as "None" because of the nested span tags.
I am an old "c" programmer (GNU Cygwin) and this python, Beautifulsoup stuff is new to me. I love it though, awesome potential for interesting and time saving scripts.
Can anyone help with this question, I hope I have posed it well enough.
Please keep it simple because I am definitely a newbie.
thanks in advance.
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19882594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2974790/"
] |
I have faced this problem.
The solution is very simple (after a lot of trial and error):
you must add an `id` attribute to your tag,
for instance:
```
<p:calendar id="date_selector" value="#{dpnl.fechaHasta}" pattern="dd/MM/yyyy" />
```
|
At the facet named `output`, use the tag `h:outputText` instead of `p:inputText`.
|
19,882,594
|
I am trying to pull company information from the following website:
<http://www.theglobeandmail.com/globe-investor/markets/stocks/summary/?q=T-T>
I see from their page source that there are nested span statements like:
```
<li class="clearfix">
<span class="label">Low</span>
<span class="giw-a-t-sc-data">36.39</span>
</li>
<li class="clearfix">
<span class="label">Bid<span class="giw-a-t-sc-bidSize smallsize">x0</span></span>
<span class="giw-a-t-sc-data">36.88</span>
</li>
```
The code I wrote will grab (Low, 36.39) without problem. I have spent hours reading this forum and others trying to get bs4 to also break out (Bid, 36.88). The problem is, Bid comes out as "None" because of the nested span tags.
I am an old "c" programmer (GNU Cygwin) and this python, Beautifulsoup stuff is new to me. I love it though, awesome potential for interesting and time saving scripts.
Can anyone help with this question, I hope I have posed it well enough.
Please keep it simple because I am definitely a newbie.
thanks in advance.
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19882594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2974790/"
] |
I have faced this problem.
The solution is very simple (after a lot of trial and error):
you must add an `id` attribute to your tag,
for instance:
```
<p:calendar id="date_selector" value="#{dpnl.fechaHasta}" pattern="dd/MM/yyyy" />
```
|
Make sure you use `java.util.Date` for your date fields (i.e. `fechaDesde`, etc.) in your bean.
|
53,474,065
|
I am trying to `upgrade` `matplotlib`. I'm doing this via `!pip` and it seems to work. When I check the list in the `IPython console`:
```
!pip list
```
It returns the latest version of `matplotlib`
```
matplotlib 3.0.2
```
But when I check the version in the editor it returns
```
2.2.2
```
The very first line in the text editor shows
```
#!/usr/bin/env python3
```
When inserting `!which pip` and `!which python` into the `IPython` `console` it returns the following:
```
!which python = /Users/XXXX/anaconda/bin/python
!which pip = /Users/XXXX/anaconda/bin/pip
```
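One way to diagnose this kind of mismatch is to ask the editor's interpreter directly which executable it is and where it would import the package from. A minimal sketch (the `locate` helper is just illustrative; the printed paths will vary per machine):

```python
import importlib.util
import sys

def locate(package: str) -> str:
    # Where would *this* interpreter import the package from?
    spec = importlib.util.find_spec(package)
    return spec.origin if spec else "not installed for this interpreter"

# If these paths differ from the `!which python` / `!pip` output,
# the editor is running a different Python environment.
print("interpreter:", sys.executable)
print("matplotlib:", locate("matplotlib"))
```

If the two interpreters differ, installing with `python -m pip install --upgrade matplotlib` using the editor's interpreter is usually the fix.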
|
2018/11/26
|
[
"https://Stackoverflow.com/questions/53474065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Using `str.len`
```
df[df.iloc[:,0].astype(str).str.len()!=7]
A
1 1.222222
2 1.222200
```
Input data:
```
df=pd.DataFrame({'A':[1.22222,1.222222,1.2222]})
```
|
See if this works
`df1 = df['ZipCode'].astype(str).map(len)==5`
|
53,474,065
|
I am trying to `upgrade` `matplotlib`. I'm doing this via `!pip` and it seems to work. When I check the list in the `IPython console`:
```
!pip list
```
It returns the latest version of `matplotlib`
```
matplotlib 3.0.2
```
But when I check the version in the editor it returns
```
2.2.2
```
The very first line in the text editor shows
```
#!/usr/bin/env python3
```
When inserting `!which pip` and `!which python` into the `IPython` `console` it returns the following:
```
!which python = /Users/XXXX/anaconda/bin/python
!which pip = /Users/XXXX/anaconda/bin/pip
```
|
2018/11/26
|
[
"https://Stackoverflow.com/questions/53474065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can write this more concisely using Pandas filtering rather than loops and ifs.
Here is an example:
```
valid_zips = mydata[mydata.astype(str).str.len() == 7]
```
or
```
zip_code_upper_bound = 100000
valid_zips = mydata[mydata < zip_code_upper_bound]
```
assuming fractional numbers are not included in your set. Note that the first example will remove shorter zips, while the second will leave them in, which you might want as they could have had leading zeros.
Sample output:
With `df` defined as (from your example):
```
Zip Item1 Item2 Item3
0 78264.0 pan elephant blue
1 73909.0 steamer panda yellow
2 2602.0 pot rhino orange
3 59661.0 fork zebra green
4 861893.0 sink ocelot red
5 77892.0 spatula doggie brown
```
Using the following code:
```
df[df.Zip.astype(str).str.len() == 7]
```
The result is:
```
Zip Item1 Item2 Item3
0 78264.0 pan elephant blue
1 73909.0 steamer panda yellow
3 59661.0 fork zebra green
5 77892.0 spatula doggie brown
```
|
See if this works
`df1 = df['ZipCode'].astype(str).map(len)==5`
|
53,474,065
|
I am trying to `upgrade` `matplotlib`. I'm doing this via `!pip` and it seems to work. When I check the list in the `IPython console`:
```
!pip list
```
It returns the latest version of `matplotlib`
```
matplotlib 3.0.2
```
But when I check the version in the editor it returns
```
2.2.2
```
The very first line in the text editor shows
```
#!/usr/bin/env python3
```
When inserting `!which pip` and `!which python` into the `IPython` `console` it returns the following:
```
!which python = /Users/XXXX/anaconda/bin/python
!which pip = /Users/XXXX/anaconda/bin/pip
```
|
2018/11/26
|
[
"https://Stackoverflow.com/questions/53474065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can write this more concisely using Pandas filtering rather than loops and ifs.
Here is an example:
```
valid_zips = mydata[mydata.astype(str).str.len() == 7]
```
or
```
zip_code_upper_bound = 100000
valid_zips = mydata[mydata < zip_code_upper_bound]
```
assuming fractional numbers are not included in your set. Note that the first example will remove shorter zips, while the second will leave them in, which you might want as they could have had leading zeros.
Sample output:
With `df` defined as (from your example):
```
Zip Item1 Item2 Item3
0 78264.0 pan elephant blue
1 73909.0 steamer panda yellow
2 2602.0 pot rhino orange
3 59661.0 fork zebra green
4 861893.0 sink ocelot red
5 77892.0 spatula doggie brown
```
Using the following code:
```
df[df.Zip.astype(str).str.len() == 7]
```
The result is:
```
Zip Item1 Item2 Item3
0 78264.0 pan elephant blue
1 73909.0 steamer panda yellow
3 59661.0 fork zebra green
5 77892.0 spatula doggie brown
```
|
Using `str.len`
```
df[df.iloc[:,0].astype(str).str.len()!=7]
A
1 1.222222
2 1.222200
```
Input data:
```
df=pd.DataFrame({'A':[1.22222,1.222222,1.2222]})
```
|
62,823,948
|
I have a dataframe with two levels of columns index.
Reproducible Dataset.
---------------------
```
df = pd.DataFrame(
[ ['Gaz','Gaz','Gaz','Gaz'],
['X','X','X','X'],
['Y','Y','Y','Y'],
['Z','Z','Z','Z']],
columns=pd.MultiIndex.from_arrays([['A','A','C','D'],
['Name','Name','Company','Company']])
```
I want to rename the duplicated MultiIndex columns, only when level-0 and level-1 combined are duplicated, and then add a suffix number to the end.
Below is a solution I found, but it only works for single level column index.
```
class renamer():
def __init__(self):
self.d = dict()
def __call__(self, x):
if x not in self.d:
self.d[x] = 0
return x
else:
self.d[x] += 1
return "%s_%d" % (x, self.d[x])
df = df.rename(columns=renamer())
```
I think the above method can be modified to support the multi level situation, but I am too new to pandas/python.
Thanks in advance.
@Datanovice
This is to clarify the output I need.
I have the snippet below.
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
[ ['Gaz','Gaz','Gaz','Gaz'],
['X','X','X','X'],
['Y','Y','Y','Y'],
['Z','Z','Z','Z']],
columns=pd.MultiIndex.from_arrays([
['A','A','C','A'],
['A','A','C','A'],
['Company','Company','Company','Name']]))
s = pd.DataFrame(df.columns.tolist())
cond = s.groupby(0).cumcount()
s = [np.where(cond.gt(0),s[i] + '_' + cond.astype(str),s[i]) for i in
range(df.columns.nlevels)]
s = pd.DataFrame(s)
#print(s)
df.columns = pd.MultiIndex.from_arrays(s.values.tolist())
print(df)
```
The current result is-
[](https://i.stack.imgur.com/ZAz9c.png)
What I need is for the last column tuple not to be counted as duplicated, as "A-A-Name" is not the same as the first two.
Thank you again.
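The single-level `renamer` idea from the question carries over if each full column tuple (all levels combined) is used as the dictionary key. A minimal sketch without pandas (the `dedupe_columns` name is just illustrative); the resulting list of tuples could then be passed to `pd.MultiIndex.from_tuples`:

```python
def dedupe_columns(columns):
    """Suffix a column only when the full tuple (all levels) repeats."""
    seen = {}
    out = []
    for col in columns:
        n = seen.get(col, 0)
        seen[col] = n + 1
        if n == 0:
            out.append(col)
        else:
            # Append the counter to the last level only, e.g. 'Name' -> 'Name_1'.
            out.append(col[:-1] + ("%s_%d" % (col[-1], n),))
    return out

cols = [("A", "Name"), ("A", "Name"), ("C", "Company"), ("D", "Company")]
print(dedupe_columns(cols))
# [('A', 'Name'), ('A', 'Name_1'), ('C', 'Company'), ('D', 'Company')]
```

Because the whole tuple is the key, `("A", "A", "Name")` is never suffixed just because `("A", "A", "Company")` repeats.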
|
2020/07/09
|
[
"https://Stackoverflow.com/questions/62823948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13875213/"
] |
If I understand correctly, you are looking for a mechanism that allows you to display a terminal on a web server.
Then you want to run an interactive Python script in that terminal, right?
So in the end the solution to share a terminal does not necessarily have to be written in Python, right? (Though I must admit that I prefer Python solutions if I find them, but sometimes being pragmatic isn't a bad idea.)
You might google for http and terminal emulators.
Perhaps ttyd fits the bill. <https://github.com/tsl0922/ttyd>
Building on linux could be done with
```
sudo apt-get install build-essential cmake git libjson-c-dev libwebsockets-dev
git clone https://github.com/tsl0922/ttyd.git
cd ttyd && mkdir build && cd build
cmake ..
make && make install
```
Usage would be something like:
`ttyd -p 8888 yourpythonscript.py`
and then you could connect with a web browser with `http://hostip:8888`
you might of course 'hide' this url behind a reverse proxy and add authentification to it
or add options like `--credential username:password` to password protect the url.
**Addendum:**
If you want to share multiple scripts with different people and the sharing is more of an on-the-fly thing, then you might look at tty-share ( <https://github.com/elisescu/tty-share> ) and tty-server ( <https://github.com/elisescu/tty-server> ).
tty-server can be run in a docker container.
tty-share can be used to run a script on your machine on one of your terminals. It will output a url, that you can give to the person you want to share the specific session with)
If you think that's interesting I might elaborate on this one
|
*>> Insert security disclaimer here <<*
The easiest, most hacktastic way to do it is to create a `div` element where you'll store your output and an `input` element to enter commands. Then you can ajax `POST` the command to a back-end controller.
The controller would take the command, run it while capturing its output, and send the output back to the web page to render in the `div`.
In python I use this to capture command output:
```py
from subprocess import Popen, STDOUT, PIPE
proc = Popen(['ls', '-l'], stdout=PIPE, stderr=STDOUT, cwd='/working/directory')
# Note: wait() before read() can deadlock if the output fills the pipe buffer;
# proc.communicate() is the safer pattern for large outputs.
proc.wait()
return proc.stdout.read()
```
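On Python 3.5+, the same capture can be done with `subprocess.run`, which reads the pipe while waiting so large outputs cannot deadlock. A minimal sketch (the `run_command` helper is just illustrative):

```python
import subprocess
import sys

def run_command(args):
    # run() handles reading stdout while the process executes.
    proc = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return proc.stdout.decode()

# Example: run a short Python one-liner and capture its output.
print(run_command([sys.executable, "-c", "print('hello')"]))
```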
|
64,267,498
|
I try to upload a big file (4GB) with a PUT on a DRF viewset.
During the upload my memory is stable. At 100%, the python runserver process takes more and more RAM and is killed by the kernel. I have a logging line in the `put` method of this `APIView` but the process is killed before this method call.
I use this setting to force file usage `FILE_UPLOAD_HANDLERS = ["django.core.files.uploadhandler.TemporaryFileUploadHandler"]`
Where does this memory peak comes from? I guess it try to load the file content in memory but why (and where)?
More information:
* I tried DEBUG true and false
* The runserver is in a docker behind a traefik but there is no limitation in traefik AFAIK and the upload reaches 100%
* I do not know yet if I would get the same behavior with `daphne` instead of runserver
* EDIT: front use a `Content-Type multipart/form-data`
* EDIT: I have tried `FileUploadParser` and `(FormParser, MultiPartParser)` for parser\_classes in my `APIView`
|
2020/10/08
|
[
"https://Stackoverflow.com/questions/64267498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5877122/"
] |
TL;DR:
------
Neither a DRF nor a Django issue, it's a [2.5 years known Daphne issue](https://github.com/django/daphne/issues/126). The solution is to use uvicorn, hypercorn, or something else for the time being.
Explanations
------------
What you're seeing here is not coming from Django Rest Framework as:
* The FileUploadParser is meant to handle large file uploads, as [it reads the file chunk by chunk](https://github.com/encode/django-rest-framework/blob/335054a5d36b352a58286b303b608b6bf48152f8/rest_framework/parsers.py#L177-L183);
* Your view not being executed rules out the parsers [which aren't executed until you access the `request.FILES`](https://github.com/encode/django-rest-framework/blob/5828d8f7ca167b11296733a2b54f9d6fca29b7b0/rest_framework/request.py#L436-L443) property
The fact that you're mentioning Daphne reminds me of this [SO answer](https://stackoverflow.com/a/55237320/2441358) which mentions a similar problem and points to a code that Daphne doesn't handle large file uploads as **it loads the whole body** in RAM before passing it to the view. (The code is still present in their master branch at the time of writing)
You're seeing the same behavior with `runserver` because when installed, Daphne replaces the initial runserver command with itself to provide WebSockets support for dev purposes.
To make sure that it's the real culprit, try to disable Channels/run the default Django runserver and see for yourself if your app is killed by the OOM Killer.
|
I don't know if it works with Django REST, but you can try to chunk the file.
```
[...]
anexo_files = request.FILES.getlist('anexo_file_'+str(k))
index = 0
for file in anexo_files:
index = index + 1
extension = os.path.splitext(str(file))[1]
nome_arquivo_anexo = 'media/uploads/' + os.path.splitext(str(file))[0] + "_" + str(index) + datetime.datetime.now().strftime("%m%d%Y%H%M%S") + extension
handle_uploaded_file(file, nome_arquivo_anexo)
AnexoProjeto.objects.create(
projeto=projeto,
arquivo_anexo = nome_arquivo_anexo
)
[...]
```
Where handle\_uploaded\_file is
```
def handle_uploaded_file(f, nome_arquivo):
with open(nome_arquivo, 'wb+') as destination:
for chunk in f.chunks():
destination.write(chunk)
```
|
57,420,008
|
Recently I came across logging in python.
I have the following code in test.py file
```
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
logger.debug("test Message")
```
Now, is there any way I can print the resulting `Logrecord` object generated by `logger.debug("test Message")` because it's stated in the documentation that
>
> LogRecord instances are created automatically by the Logger every time something is logged
>
>
>
<https://docs.python.org/3/library/logging.html#logrecord-objects>
I tried saving the return value of `debug` into a variable and printing it:
```
test = logger.debug("test Message")
print(test)
```
the output is `None`
My goal is to check/view the final Logrecord object generated by `logging.debug(test.py)` in the same test.py by using `print()` This is for my own understanding.
```
print(LogrecordObject.__dict__)
```
So how to get hold of the `Logrecord` object generated by `logger.debug("test Message")`
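For understanding what a `LogRecord` looks like, it can also be constructed by hand, which is essentially what the `Logger` does internally on every `debug()` call. A minimal sketch (the `name`, `pathname`, and `lineno` values here are just placeholders):

```python
import logging

# Build a LogRecord directly to inspect its attributes; positional signature:
# LogRecord(name, level, pathname, lineno, msg, args, exc_info)
record = logging.LogRecord(
    name="test", level=logging.DEBUG, pathname="test.py",
    lineno=1, msg="test Message", args=(), exc_info=None,
)
print(record.__dict__)
```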
|
2019/08/08
|
[
"https://Stackoverflow.com/questions/57420008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2897115/"
] |
There is no return in `debug()`
```
# Here is the snippet for the source code
def debug(self, msg, *args, **kwargs):
if self.isEnabledFor(DEBUG):
self._log(DEBUG, msg, args, **kwargs)
```
If you want `debug()` to return the LogRecord, you need to redefine it; you can override it like this:
```
import logging
DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "MY_DEBUG")
def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
sinfo = None
fn, lno, func = "(unknown file)", 0, "(unknown function)"
if exc_info:
if isinstance(exc_info, BaseException):
exc_info = (type(exc_info), exc_info, exc_info.__traceback__)
elif not isinstance(exc_info, tuple):
exc_info = sys.exc_info()
record = self.makeRecord(self.name, level, fn, lno, msg, args,
exc_info, func, extra, sinfo)
self.handle(record)
return record
def my_debug(self, message, *args, **kws):
if self.isEnabledFor(DEBUG_LEVELV_NUM):
# Yes, logger takes its '*args' as 'args'.
record = self._log(DEBUG_LEVELV_NUM, message, args, **kws)
return record
logger = logging.getLogger(__name__)
logging.Logger.my_debug = my_debug
logging.Logger._log = _log
logger.setLevel(DEBUG_LEVELV_NUM)
logger.addHandler(logging.StreamHandler())
test = logger.my_debug('test custom debug')
print(test)
```
Reference:
[How to add a custom loglevel to Python's logging facility](https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility)
|
You can create a handler that instead of formatting the LogRecord instance to a string, just save it in a list to be viewed and inspected later:
```
import logging
import sys
# A new handler to store "raw" LogRecords instances
class RecordsListHandler(logging.Handler):
"""
A handler class which stores LogRecord entries in a list
"""
def __init__(self, records_list):
"""
Initiate the handler
:param records_list: a list to store the LogRecords entries
"""
self.records_list = records_list
super().__init__()
def emit(self, record):
self.records_list.append(record)
# A list to store the "raw" LogRecord instances
logs_list = []
# Your logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# Add the regular stream handler to print logs to the console, if you like
logger.addHandler(logging.StreamHandler(sys.stdout))
# Add the RecordsListHandler to store the log records objects
logger.addHandler(RecordsListHandler(logs_list))
if __name__ == '__main__':
logger.debug("test Message")
print(logs_list)
```
Output:
```
test Message
[<LogRecord: __main__, 10, C:/Automation/Exercises/222.py, 36, "test Message">]
```
|
69,497,348
|
I'm new to Python and I have a question that might be easy, but I can't get it.
I wanted to make a program where the user gives an email as a username and a password as a password. The program should check whether the email is in the correct format and, if it's not, it **should print something and get the email again**, so I used regex. (I'm giving these inputs to a database and I thought about using a LIKE query, but I don't think that would help.)
So what's the problem with my code?! It keeps accepting wrong emails:
```
import re
regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
def check(email):
if (re.fullmatch(regex, email)):
return
else:
print("corect format is like amireza@gmail.com")
return
while __name__ == '__main__':
username = input()
check(username)
password = input()
```
|
2021/10/08
|
[
"https://Stackoverflow.com/questions/69497348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16260312/"
] |
here is a working code for you:
```py
import re
regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
def check(email):
if (re.fullmatch(regex, email)):
return True
else:
print("invalid input! correct format is like amireza@gmail.com")
return False
while __name__ == '__main__':
while True:
username = input("please enter email\n")
if check(username) is True:
break
password = input("please enter password\n")
break
print("username: %s, password: %s" % (username, password))
```
Key correction:
* your helper should return a boolean which lets you know whether the input is legit or not; thus I return that boolean to the outer scope
one more thing: since you run it as a standalone script, the outermost while condition (`while __name__ == '__main__'`) will always be `True`, which means you have to `break` out of it when you want to end your program execution. For simplicity I'd suggest using `if __name__ == '__main__'` instead
|
You could change your function `check` to return a boolean output that tells you whether the check was successful, as in
```
def check(email):
if (re.fullmatch(regex, email)):
return True
else:
print("corect format is like amireza@gmail.com")
return False
```
And then add a loop to your main code:
```
if __name__ == '__main__':
username = input()
while not check(username):
username = input()
```
Note that I did not actually run your code, but this should work.
EDIT: Heh, right, as the other answer explains, you should change your `while` into an `if`; I edited my code accordingly.
|
58,752,089
|
I was writing in Visual Studio Code, but I keep getting the same error message.
>
> selenium.common.exceptions.WebDriverException: Message: 'chromedriver.exe' executable needs to be in PATH.
>
>
>
Is it simply because you just can't run webdriver on vscode studio?
I've already tried
```
from selenium import webdriver
driver=webdriver.Chrome(executable_path=r"C:/Users/.../chromedriver.exe")
```
```
driver=webdriver.Chrome("C:/Users/.../chromedriver.exe")
```
and basically every solution you can find online regarding this problem.
I've download chromedriver from here: <https://chromedriver.chromium.org/>.
I've also added the file in PATH by clicking "system">>"environment variables", and added the downloaded file containing chromedriver.exe in both user variables and system variables of PATH.
I've also tried copying the chromedriver.exe file into the python3.7/scripts folder, then added the folder manually to PATH, then restarted my computer.
Can someone please help me with this matter? Or just recommend some place I can successfully run the webdriver?
|
2019/11/07
|
[
"https://Stackoverflow.com/questions/58752089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8836876/"
] |
I had the same problem and there are two ways to solve this issue. The main reason for this "*unreachable*" path is that Visual Studio Code doesn't have permission to run from the PATH environment, unlike other system-installed programs.
So installing VS Code with the **System Installer** version would be enough. The other way is to put the chromedriver.exe file into the */Scripts* folder of your Python installation folder (*i.e. ...\AppData\Local\Programs\Python\Python37\Scripts*).
---
I did both things and it worked for me
|
If you're on Windows, go into CMD (Command Prompt) and type in "chromedriver.exe". If chromedriver is executable from PATH, the system will print out "Starting ChromeDriver [version]...". Otherwise, you need to add chromedriver to PATH. Then again, it could just be a fault of the IDE; try using Python's built-in IDLE...
|
3,778,486
|
I have visited the Vim website, script section, and found several syntax checkers for Python. But which one to choose? I would prefer something that supports Python 3 as well, even though I code in Python 2.6 currently.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
These two websites really boosted my Vim productivity with all languages:
<http://nvie.com/posts/how-i-boosted-my-vim/>
<http://stevelosh.com/blog/2010/09/coming-home-to-vim/>
|
Whether or not wavy red lines are displayed is related to the theme you're using, not the syntax checker or language. So long as your syntax file (try <http://www.vim.org/scripts/script.php?script_id=790> ) checks for errors, you can show the errors with something like:
```
:hi Error guifg=#ff0000 gui=undercurl
```
|
3,778,486
|
I have visited the Vim website, script section, and found several syntax checkers for Python. But which one to choose? I would prefer something that supports Python 3 as well, even though I code in Python 2.6 currently.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I use the [`PyFlakes` vim script](http://github.com/kevinw/pyflakes-vim), and I'm pretty satisfied with it. Also, if you'd like PEP8 checking, try [this script](http://www.vim.org/scripts/script.php?script_id=2914).
|
Whether or not wavy red lines are displayed is related to the theme you're using, not the syntax checker or language. So long as your syntax file (try <http://www.vim.org/scripts/script.php?script_id=790> ) checks for errors, you can show the errors with something like:
```
:hi Error guifg=#ff0000 gui=undercurl
```
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I'm using the [Syntastic](http://www.vim.org/scripts/script.php?script_id=2736) plugin. It's working great so far. I use it instead of just pyflakes (Syntastic uses pyflakes) because when doing Python development I also develop for the web, so I need to edit JavaScript as well, and having on-the-fly validation for various languages is a plus.
|
Whether or not wavy red lines are displayed is related to the theme you're using, not the syntax checker or language. So long as your syntax file (try <http://www.vim.org/scripts/script.php?script_id=790> ) checks for errors, you can show the errors with something like:
```
:hi Error guifg=#ff0000 gui=undercurl
```
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
This question was asked in 2010, but as of now, you have a simple solution.
After the release of Vim 8 in September 2016, which added asynchronous I/O support, you can use the [Asynchronous Lint Engine](https://github.com/w0rp/ale).
It supports most major languages, and of course you have to install the linter yourself or it won't work.
Be aware that it conflicts with [syntastic](https://github.com/vim-syntastic/syntastic), so you'd better disable or remove it if you have it installed.
|
Whether or not wavy red lines are displayed is related to the theme you're using, not the syntax checker or language. So long as your syntax file (try <http://www.vim.org/scripts/script.php?script_id=790> ) checks for errors, you can show the errors with something like:
```
:hi Error guifg=#ff0000 gui=undercurl
```
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I use the [`PyFlakes` vim script](http://github.com/kevinw/pyflakes-vim), and I'm pretty satisfied with it. Also, if you'd like PEP8 checking, try [this script](http://www.vim.org/scripts/script.php?script_id=2914).
|
These two websites really boosted my Vim productivity with all languages:
<http://nvie.com/posts/how-i-boosted-my-vim/>
<http://stevelosh.com/blog/2010/09/coming-home-to-vim/>
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
These two websites really boosted my Vim productivity with all languages:
<http://nvie.com/posts/how-i-boosted-my-vim/>
<http://stevelosh.com/blog/2010/09/coming-home-to-vim/>
|
This question was asked in 2010, but as of now, you have a simple solution.
After the release of Vim 8 in September 2016, which added asynchronous I/O support, you can use the [Asynchronous Lint Engine](https://github.com/w0rp/ale).
It supports most major languages, and of course you have to install the linter yourself or it won't work.
Be aware that it conflicts with [syntastic](https://github.com/vim-syntastic/syntastic), so you'd better disable or remove it if you have it installed.
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I use the [`PyFlakes` vim script](http://github.com/kevinw/pyflakes-vim), and I'm pretty satisfied with it. Also, if you'd like PEP8 checking, try [this script](http://www.vim.org/scripts/script.php?script_id=2914).
|
I'm using the [Syntastic](http://www.vim.org/scripts/script.php?script_id=2736) plugin. It's working great so far. I use it instead of just pyflakes (Syntastic uses pyflakes) because when doing Python development I also develop for the web, so I need to edit JavaScript as well, and having on-the-fly validation for various languages is a plus.
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I use the [`PyFlakes` vim script](http://github.com/kevinw/pyflakes-vim), and I'm pretty satisfied with it. Also, if you'd like PEP8 checking, try [this script](http://www.vim.org/scripts/script.php?script_id=2914).
|
This question was asked in 2010, but as of now, you have a simple solution.
After the release of Vim 8 in September 2016, which added asynchronous I/O support, you can use the [Asynchronous Lint Engine](https://github.com/w0rp/ale).
It supports most major languages, and of course you have to install the linter yourself or it won't work.
Be aware that it conflicts with [syntastic](https://github.com/vim-syntastic/syntastic), so you'd better disable or remove it if you have it installed.
|
3,778,486
|
I have visited the Vim website's script section and found several syntax checkers for Python. But which one should I choose? I would prefer something that supports Python 3 as well, even though I currently code in Python 2.6.
Do all these checkers need a module like pychecker or pyflakes?
I could install the most popular one from the scripts database, but I thought I'd get some recommendations here first on what you consider the best and why. The script will have to work on macOS, Windows and Ubuntu, with macOS being my highest priority.
In case you are wondering, I am looking for syntax checking like the one used by PyDev in the Eclipse IDE, which underlines all errors with a red wavy line as you type.
|
2010/09/23
|
[
"https://Stackoverflow.com/questions/3778486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/453642/"
] |
I'm using the [Syntastic](http://www.vim.org/scripts/script.php?script_id=2736) plugin. It's working great so far. I use it instead of just pyflakes (Syntastic uses pyflakes) because when doing Python development I also develop for the web, so I need to edit JavaScript as well, and having on-the-fly validation for various languages is a plus.
|
This question was asked in 2010, but as of now, you have a simple solution.
After the release of Vim 8 in September 2016, which added asynchronous I/O support, you can use the [Asynchronous Lint Engine](https://github.com/w0rp/ale).
It supports most major languages, and of course you have to install the linter yourself or it won't work.
Be aware that it conflicts with [syntastic](https://github.com/vim-syntastic/syntastic), so you'd better disable or remove it if you have it installed.
|
35,224,675
|
I'm preparing a toy `spark.ml` example. `Spark version 1.6.0`, running on top of `Oracle JDK version 1.8.0_65`, pyspark, ipython notebook.
First, it hardly has anything to do with [Spark, ML, StringIndexer: handling unseen labels](https://stackoverflow.com/questions/34681534/spark-ml-stringindexer-handling-unseen-labels). The exception is thrown while fitting a pipeline to a dataset, not while transforming it. And suppressing the exception might not be a solution here, since I'm afraid the dataset gets messed up pretty badly in this case.
My dataset is about 800Mb uncompressed, so it might be hard to reproduce (smaller subsets seem to dodge this issue).
The dataset looks like this:
```
+--------------------+-----------+-----+-------+-----+--------------------+
| url| ip| rs| lang|label| txt|
+--------------------+-----------+-----+-------+-----+--------------------+
|http://3d-detmold...|217.160.215|378.0| de| 0.0|homwillkommskip c...|
| http://3davto.ru/| 188.225.16|891.0| id| 1.0|оформить заказ пе...|
| http://404.szm.com/| 85.248.42| 58.0| cs| 0.0|kliknite tu alebo...|
| http://404.xls.hu/| 212.52.166|168.0| hu| 0.0|honlapkészítés404...|
|http://a--m--a--t...| 66.6.43|462.0| en| 0.0|back top archiv r...|
|http://a-wrf.ru/c...| 78.108.80|126.0|unknown| 1.0| |
|http://a-wrf.ru/s...| 78.108.80|214.0| ru| 1.0|установк фаркопна...|
+--------------------+-----------+-----+-------+-----+--------------------+
```
The value being predicted is `label`. The whole pipeline applied to it:
```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer, OneHotEncoder, Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression
train, test = munge(src_dataframe).randomSplit([70., 30.], seed=12345)
pipe_stages = [
StringIndexer(inputCol='lang', outputCol='lang_idx'),
OneHotEncoder(inputCol='lang_idx', outputCol='lang_onehot'),
Tokenizer(inputCol='ip', outputCol='ip_tokens'),
HashingTF(numFeatures=2**10, inputCol='ip_tokens', outputCol='ip_vector'),
Tokenizer(inputCol='txt', outputCol='txt_tokens'),
HashingTF(numFeatures=2**18, inputCol='txt_tokens', outputCol='txt_vector'),
VectorAssembler(inputCols=['lang_onehot', 'ip_vector', 'txt_vector'], outputCol='features'),
LogisticRegression(labelCol='label', featuresCol='features')
]
pipe = Pipeline(stages=pipe_stages)
pipemodel = pipe.fit(train)
```
And here is the stacktrace:
```
Py4JJavaError: An error occurred while calling o10793.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 627.0 failed 1 times, most recent failure: Lost task 18.0 in stage 627.0 (TID 23259, localhost): org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:271)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:159)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
```
The most interesting line is:
```
org.apache.spark.SparkException: Unseen label: pl-PL.
```
No idea how `pl-PL`, which is a value from the `lang` column, could have gotten mixed up in the `label` column, which is a `float`, not a `string`. Edit: some hasty conclusions, corrected thanks to @zero323.
I've looked further into it and found, that `pl-PL` is a value from the testing part of the dataset, not training. So now I don't even know where to look for the culprit: it might easily be the `randomSplit` code, not `StringIndexer`, and who knows what else.
How do I investigate this?
|
2016/02/05
|
[
"https://Stackoverflow.com/questions/35224675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3868574/"
] |
Okay, I think I got this. At least I got it working.
Caching the dataframe (including the train/test parts) solves the problem. That's what I found in this JIRA issue: <https://issues.apache.org/jira/browse/SPARK-12590>.
So it's not a bug, just the fact that `randomSplit` might yield a different result on the same, but differently partitioned, dataset. And apparently some of my munging functions (or the `Pipeline`) involve a repartition, therefore results of recomputing the train set from its definition might diverge.
What still interests me is the reproducibility: it's always the 'pl-PL' row that gets mixed into the wrong part of the dataset, i.e. it's not a random repartition. It's deterministic, just inconsistent. I wonder how exactly that happens.
|
`Unseen label` [is a generic message which doesn't correspond to a specific column](https://github.com/apache/spark/blob/branch-1.6/mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala#L157). Most likely problem is with a following stage:
```
StringIndexer(inputCol='lang', outputCol='lang_idx')
```
with `pl-PL` present in `train("lang")` and not present in `test("lang")`.
You can correct it using `setHandleInvalid` with `skip`:
```py
from pyspark.ml.feature import StringIndexer
train = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
test = sc.parallelize([(3, "foo"), (4, "foobar")]).toDF(["k", "v"])
indexer = StringIndexer(inputCol="v", outputCol="vi")
indexer.fit(train).transform(test).show()
## Py4JJavaError: An error occurred while calling o112.showString.
## : org.apache.spark.SparkException: Job aborted due to stage failure:
## ...
## org.apache.spark.SparkException: Unseen label: foobar.
indexer.setHandleInvalid("skip").fit(train).transform(test).show()
## +---+---+---+
## | k| v| vi|
## +---+---+---+
## | 3|foo|1.0|
## +---+---+---+
```
or, in the latest versions, `keep`:
```py
indexer.setHandleInvalid("keep").fit(train).transform(test).show()
## +---+------+---+
## | k| v| vi|
## +---+------+---+
## | 3| foo|0.0|
## | 4|foobar|2.0|
## +---+------+---+
```
|
38,385,983
|
As a beginner creating a simple Python text editor, I have encountered a confusing bug: I am able to print out the text file with the read_file() function when I first open it, but after I amend the text file using write_file(), reading the file again simply returns whitespace.
Additionally, any critique of my code would be appreciated. Thank you.
```
import os
def main():
file = open_file()
quit = False
while quit == False:
print('Current file open is {}'.format(file.name))
print('(\'read\', \'write\', \'rename\', \'change file\', \'quit\',)')
action = raw_input('> ')
if str(action) == 'read':
read_file(file)
elif str(action) == 'write':
file = write_file(file)
elif str(action) == 'rename':
file = rename(file)
elif str(action) == 'change file':
file.close()
open_file()
elif str(action) == 'quit':
break
else:
print('Incorrect action.')
def open_file():
print('Create/open a file')
filename = raw_input('Filename: ')
try:
file = open(str(filename), 'r+')
return file
except:
print('An error occured')
return open_file()
def read_file(file):
try:
print('{}, {}'.format(file.name, file))
print(file.read())
except:
print('An error occured')
return None
def write_file(file):
print('Type to start writing to your file.')
#read_file(file)
add_text = raw_input('> ')
file.write(str(add_text))
return file
def rename(file):
new_name = raw_input('New file name: ')
os.rename(file.name, str(new_name))
return file
main()
```
|
2016/07/15
|
[
"https://Stackoverflow.com/questions/38385983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6335429/"
] |
First, **file** is a built-in name in Python 2; please don't use it as a variable name, or you will shadow it and may have trouble getting at some of its facilities. Try **my_file** or just the C-style **fp** (for "file pointer").
After you write new information to the file, your position pointer (bookmark) is likely at the end of the file. Reading more will get you nowhere. You need to either close and reopen the file, or call fp.seek() to get to the desired location. For instance, **fp.seek(0)** will reset the pointer to the start of the file.
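A minimal sketch of the behavior described above (the file name is illustrative):

```python
import os
import tempfile

# create a scratch file to play with (name is illustrative)
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with open(path, "w") as f:
    f.write("hello")

fp = open(path, "r+")
print(fp.read())        # 'hello' -- and the pointer is now at the end
fp.write(" world")      # appends; the pointer is still at the end
print(repr(fp.read()))  # '' -- reading from the end returns nothing
fp.seek(0)              # rewind to the start of the file
print(fp.read())        # 'hello world'
fp.close()
```

Closing and reopening the file would have the same effect as the `seek(0)` here, at the cost of an extra open.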
|
When it comes to reading and writing files in Python, if you do not call `close()` on the file object after making a change, the change may never be saved: the written data can sit in an in-memory buffer until the file is closed (or `flush()` is called), so other readers won't see it yet.
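To make the point concrete, here is a small sketch (the file name is made up) showing that a write only reliably reaches the file once the buffer is flushed, which `close()` does implicitly:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

f = open(path, "w")
f.write("saved?")       # may still sit in an in-memory buffer
f.flush()               # push the buffer out to the OS now...
f.close()               # ...though close() flushes anyway

with open(path) as g:   # reopen and confirm the write landed
    print(g.read())     # saved?
```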
Hope this helps!
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
I think, if I understand you correctly, you can look [here, "Templating in Python"](http://wiki.python.org/moin/Templating).
|
Use a templating engine such as [Genshi](http://genshi.edgewall.org/) or [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/).
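Genshi and Jinja2 are third-party packages; as a rough sketch of the underlying idea using only the standard library, `string.Template` already separates the markup from the data (a real engine adds loops, filters and template inheritance on top; the file names and data below are made up):

```python
from string import Template

# the "templates": markup lives here, data is substituted in later
row = Template('<tr><td><img src="$src" alt="$alt"></td></tr>')
page = Template('<table>\n$rows\n</table>')

pictures = [("cat.jpg", "a cat"), ("dog.jpg", "a dog")]  # illustrative data
rows = "\n".join(row.substitute(src=s, alt=a) for s, a in pictures)
print(page.substitute(rows=rows))
```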
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
Use a templating engine such as [Genshi](http://genshi.edgewall.org/) or [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/).
|
Templating, as suggested in other answers, is probably the best answer (I wrote an early, quirky templating module called [yaptu](http://code.activestate.com/recipes/52305/), but modern mature ones as suggested in other answers will probably make you happier;-).
However, though it's been a long time since I last used it, I fondly recall the [Quixote](http://www.mems-exchange.org/software/quixote/) approach, which is roughly "reverse templating" (embedding HTML generation within Python, rather than vice versa as normal templating does). Maybe you should take a look and see if you like it better;-).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
[Dominate](https://github.com/Knio/dominate) is a Python library for creating HTML documents and fragments directly in code without using templating. You could create a simple image gallery with something like this:
```py
import glob
from dominate import document
from dominate.tags import *
photos = glob.glob('photos/*.jpg')
with document(title='Photos') as doc:
h1('Photos')
for path in photos:
div(img(src=path), _class='photo')
with open('gallery.html', 'w') as f:
f.write(doc.render())
```
Output:
```
<!DOCTYPE html>
<html>
<head>
<title>Photos</title>
</head>
<body>
<h1>Photos</h1>
<div class="photo">
<img src="photos/IMG_5115.jpg">
</div>
<div class="photo">
<img src="photos/IMG_5117.jpg">
</div>
</body>
</html>
```
Disclaimer: I am the author of dominate
|
Use a templating engine such as [Genshi](http://genshi.edgewall.org/) or [Jinja2](https://jinja.palletsprojects.com/en/2.11.x/).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
I think, if I understand you correctly, you can look [here, "Templating in Python"](http://wiki.python.org/moin/Templating).
|
Templating, as suggested in other answers, is probably the best answer (I wrote an early, quirky templating module called [yaptu](http://code.activestate.com/recipes/52305/), but modern mature ones as suggested in other answers will probably make you happier;-).
However, though it's been a long time since I last used it, I fondly recall the [Quixote](http://www.mems-exchange.org/software/quixote/) approach, which is roughly "reverse templating" (embedding HTML generation within Python, rather than vice versa as normal templating does). Maybe you should take a look and see if you like it better;-).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
[Dominate](https://github.com/Knio/dominate) is a Python library for creating HTML documents and fragments directly in code without using templating. You could create a simple image gallery with something like this:
```py
import glob
from dominate import document
from dominate.tags import *
photos = glob.glob('photos/*.jpg')
with document(title='Photos') as doc:
h1('Photos')
for path in photos:
div(img(src=path), _class='photo')
with open('gallery.html', 'w') as f:
f.write(doc.render())
```
Output:
```
<!DOCTYPE html>
<html>
<head>
<title>Photos</title>
</head>
<body>
<h1>Photos</h1>
<div class="photo">
<img src="photos/IMG_5115.jpg">
</div>
<div class="photo">
<img src="photos/IMG_5117.jpg">
</div>
</body>
</html>
```
Disclaimer: I am the author of dominate
|
I think, if I understand you correctly, you should take a look [here, at "Templating in Python"](http://wiki.python.org/moin/Templating).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
I think, if I understand you correctly, you should take a look [here, at "Templating in Python"](http://wiki.python.org/moin/Templating).
|
Python is a batteries-included language, so why not use `xml.dom.minidom`?
```py
from typing import List
from xml.dom.minidom import getDOMImplementation, Document
def getDOM() -> Document:
impl = getDOMImplementation()
dt = impl.createDocumentType(
"html",
"-//W3C//DTD XHTML 1.0 Strict//EN",
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd",
)
return impl.createDocument("http://www.w3.org/1999/xhtml", "html", dt)
def ul(items: List[str]) -> str:
dom = getDOM()
html = dom.documentElement
ul = dom.createElement("ul")
for item in items:
li = dom.createElement("li")
li.appendChild(dom.createTextNode(item))
ul.appendChild(li)
html.appendChild(ul)
return dom.toxml()
if __name__ == "__main__":
print(ul(["first item", "second item", "third item"]))
```
outputs:
```html
<?xml version="1.0" ?>
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'>
<html>
<ul>
<li>first item</li>
<li>second item</li>
<li>third item</li>
</ul>
</html>
```
The interface does not look very Pythonic, but if you have been a frontend developer and used JavaScript DOM manipulation it will feel familiar, and it frees you from adding a needless dependency.
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
[Dominate](https://github.com/Knio/dominate) is a Python library for creating HTML documents and fragments directly in code without using templating. You could create a simple image gallery with something like this:
```py
import glob
from dominate import document
from dominate.tags import *
photos = glob.glob('photos/*.jpg')
with document(title='Photos') as doc:
h1('Photos')
for path in photos:
div(img(src=path), _class='photo')
with open('gallery.html', 'w') as f:
f.write(doc.render())
```
Output:
```
<!DOCTYPE html>
<html>
<head>
<title>Photos</title>
</head>
<body>
<h1>Photos</h1>
<div class="photo">
<img src="photos/IMG_5115.jpg">
</div>
<div class="photo">
<img src="photos/IMG_5117.jpg">
</div>
</body>
</html>
```
Disclaimer: I am the author of dominate
|
Templating, as suggested in other answers, is probably the best answer (I wrote an early, quirky templating module called [yaptu](http://code.activestate.com/recipes/52305/), but modern mature ones as suggested in other answers will probably make you happier;-).
However, though it's been a long time since I last used it, I fondly recall the [Quixote](http://www.mems-exchange.org/software/quixote/) approach, which is roughly a "reverse templating" (embedding HTML generation within Python, rather than vice versa as normal templating does). Maybe you should take a look and see if you like it better;-).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
Python is a batteries-included language, so why not use `xml.dom.minidom`?
```py
from typing import List
from xml.dom.minidom import getDOMImplementation, Document
def getDOM() -> Document:
impl = getDOMImplementation()
dt = impl.createDocumentType(
"html",
"-//W3C//DTD XHTML 1.0 Strict//EN",
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd",
)
return impl.createDocument("http://www.w3.org/1999/xhtml", "html", dt)
def ul(items: List[str]) -> str:
dom = getDOM()
html = dom.documentElement
ul = dom.createElement("ul")
for item in items:
li = dom.createElement("li")
li.appendChild(dom.createTextNode(item))
ul.appendChild(li)
html.appendChild(ul)
return dom.toxml()
if __name__ == "__main__":
print(ul(["first item", "second item", "third item"]))
```
outputs:
```html
<?xml version="1.0" ?>
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'>
<html>
<ul>
<li>first item</li>
<li>second item</li>
<li>third item</li>
</ul>
</html>
```
The interface does not look very Pythonic, but if you have been a frontend developer and used JavaScript DOM manipulation it will feel familiar, and it frees you from adding a needless dependency.
|
Templating, as suggested in other answers, is probably the best answer (I wrote an early, quirky templating module called [yaptu](http://code.activestate.com/recipes/52305/), but modern mature ones as suggested in other answers will probably make you happier;-).
However, though it's been a long time since I last used it, I fondly recall the [Quixote](http://www.mems-exchange.org/software/quixote/) approach, which is roughly a "reverse templating" (embedding HTML generation within Python, rather than vice versa as normal templating does). Maybe you should take a look and see if you like it better;-).
|
2,301,163
|
I am looking for a way to create html files dynamically in python. I am writing a gallery script, which iterates over directories, collecting file meta data. I intended to then use this data to automatically create a picture gallery, based on html. Something very simple, just a table of pictures.
I really don't think writing to a file manually is the best method, and the code may be very long. So is there a better way to do this, possibly html specific?
|
2010/02/20
|
[
"https://Stackoverflow.com/questions/2301163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/106534/"
] |
[Dominate](https://github.com/Knio/dominate) is a Python library for creating HTML documents and fragments directly in code without using templating. You could create a simple image gallery with something like this:
```py
import glob
from dominate import document
from dominate.tags import *
photos = glob.glob('photos/*.jpg')
with document(title='Photos') as doc:
h1('Photos')
for path in photos:
div(img(src=path), _class='photo')
with open('gallery.html', 'w') as f:
f.write(doc.render())
```
Output:
```
<!DOCTYPE html>
<html>
<head>
<title>Photos</title>
</head>
<body>
<h1>Photos</h1>
<div class="photo">
<img src="photos/IMG_5115.jpg">
</div>
<div class="photo">
<img src="photos/IMG_5117.jpg">
</div>
</body>
</html>
```
Disclaimer: I am the author of dominate
|
Python is a batteries-included language, so why not use `xml.dom.minidom`?
```py
from typing import List
from xml.dom.minidom import getDOMImplementation, Document
def getDOM() -> Document:
impl = getDOMImplementation()
dt = impl.createDocumentType(
"html",
"-//W3C//DTD XHTML 1.0 Strict//EN",
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd",
)
return impl.createDocument("http://www.w3.org/1999/xhtml", "html", dt)
def ul(items: List[str]) -> str:
dom = getDOM()
html = dom.documentElement
ul = dom.createElement("ul")
for item in items:
li = dom.createElement("li")
li.appendChild(dom.createTextNode(item))
ul.appendChild(li)
html.appendChild(ul)
return dom.toxml()
if __name__ == "__main__":
print(ul(["first item", "second item", "third item"]))
```
outputs:
```html
<?xml version="1.0" ?>
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'>
<html>
<ul>
<li>first item</li>
<li>second item</li>
<li>third item</li>
</ul>
</html>
```
The interface does not look very Pythonic, but if you have been a frontend developer and used JavaScript DOM manipulation it will feel familiar, and it frees you from adding a needless dependency.
|
37,536,868
|
Not a maths major or a cs major, I just fool around with python (usually making scripts for simulations/theorycrafting on video games) and I discovered just how bad random.randint is performance-wise. It's got me wondering why random.randint and random.randrange are made the way they are. I made a function that produces (for all intents and purposes) identical results to random.randint:
```
big_bleeping_float= (2**64 - 2)/(2**64 - 2)
def fastrandint(start, stop):
return start + int(random.random() * (stop - start + big_bleeping_float))
```
There is a massive 180% speed boost using that to generate an integer in the range (inclusive) 0-65 compared to random.randrange(0, 66), the next fastest method.
```
>>> timeit.timeit('random.randint(0, 66)', setup='from numpy import random', number=10000)
0.03165552873121058
>>> timeit.timeit('random.randint(0, 65)', setup='import random', number=10000)
0.022374771118336412
>>> timeit.timeit('random.randrange(0, 66)', setup='import random', number=10000)
0.01937231027605435
>>> timeit.timeit('fastrandint(0, 65)', setup='import random; from fasterthanrandomrandom import fastrandint', number=10000)
0.0067909916844523755
```
Furthermore, the adaptation of this function as an alternative to random.choice is 75% faster, and I'm sure adding larger-than-one stepped ranges would be faster (although I didn't test that). For almost double the speed boost as using the fastrandint function you can simply write it inline:
```
>>> timeit.timeit('int(random.random() * (65 + big_bleeping_float))', setup='import random; big_bleeping_float= (2**64 - 2)/(2**64 - 2)', number=10000)
0.0037642723021917845
```
So in summary: why am I wrong in thinking my function is better, why is it faster if it is, and is there an even faster way to do what I'm doing?
|
2016/05/31
|
[
"https://Stackoverflow.com/questions/37536868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5511209/"
] |
`random.randint()` and the other integer helpers call into `random.getrandbits()`, which may be less efficient than direct calls to `random()`, but for good reason.
It is actually more correct to use a `randint` that calls into `random.getrandbits()`, as it can be done in an unbiased manner.
You can see that using `random.random` to generate values in a range ends up being biased, since there are only M floating-point values between 0 and 1 (for M pretty large). Take an N that doesn't divide M, and write M = kN + r for `0 < r < N`. At best, using `random.random() * (N+1)`,
we'll get `r` numbers coming out with probability `(k+1)/M` and `N-r` numbers coming out with probability `k/M`. (This is *at best*, by the pigeonhole principle; in practice I'd expect the bias to be even worse.)
Note that this bias is only noticeable when:
* you draw a large number of samples, and
* N is a large fraction of M, the number of floats in (0,1].
So it probably won't matter to you, unless you know you need unbiased values - such as for scientific computing etc.
In contrast, a value from `randint(0,N)` can be unbiased by using rejection sampling from repeated calls to `random.getrandbits()`. Of course managing this can introduce additional overhead.
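The rejection-sampling idea can be sketched in a few lines; this mirrors what CPython's `Random._randbelow` does internally (the helper names here are illustrative):

```python
import random

def unbiased_randbelow(n):
    """Draw an unbiased integer in [0, n) by rejection sampling on getrandbits.

    k is the number of bits needed to represent n-1; draws >= n are
    rejected and retried, so every residue 0..n-1 is equally likely.
    """
    k = n.bit_length()
    r = random.getrandbits(k)
    while r >= n:
        r = random.getrandbits(k)
    return r

def unbiased_randint(a, b):
    """Inclusive-range randint built on the unbiased sampler above."""
    return a + unbiased_randbelow(b - a + 1)

samples = [unbiased_randint(0, 65) for _ in range(1000)]
```

Since fewer than half the `getrandbits(k)` draws are rejected on average, the expected number of retries is bounded, but it is still extra work compared to one bare call to `random()`.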
**Aside**
If you end up using a custom random implementation then
From the [python 3 docs](https://docs.python.org/3/library/random.html)
>
> Almost all module functions depend on the basic function random(), which
> generates a random float uniformly in the semi-open range [0.0, 1.0).
>
>
>
This suggests that `randint` and others may be implemented using `random.random`. If this is the case I would expect them to be slower,
incurring at least one additional function call of overhead per call.
Looking at the code referenced in <https://stackoverflow.com/a/37540577/221955> you can see that this will happen if the random implementation doesn't provide a `getrandbits()` function.
|
This is probably rarely a problem, but `randint(0,10**1000)` works while `fastrandint(0,10**1000)` crashes. The slower time is probably the price you need to pay to have a function that works for all possible cases...
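The crash comes from the int-to-float conversion, not from the random number generation itself: `randint` works on integers throughout, while the float-based shortcut must convert the range bound to a float first. A quick check (seeded only so the example is repeatable):

```python
import random

random.seed(0)

# randint handles arbitrarily large ranges because it works on integers:
big = random.randint(0, 10**1000)  # fine

# The float-based shortcut converts the bound to a float first, which
# overflows for any integer above the float max (~1.8e308):
try:
    random.random() * 10**1000
except OverflowError as e:
    err = e
```

Even for bounds below the overflow threshold, a 64-bit float only has 53 bits of mantissa, so very large ranges silently lose precision long before they crash.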
|
37,536,868
|
Not a maths major or a cs major, I just fool around with python (usually making scripts for simulations/theorycrafting on video games) and I discovered just how bad random.randint is performance-wise. It's got me wondering why random.randint and random.randrange are made the way they are. I made a function that produces (for all intents and purposes) identical results to random.randint:
```
big_bleeping_float= (2**64 - 2)/(2**64 - 2)
def fastrandint(start, stop):
return start + int(random.random() * (stop - start + big_bleeping_float))
```
There is a massive 180% speed boost using that to generate an integer in the range (inclusive) 0-65 compared to random.randrange(0, 66), the next fastest method.
```
>>> timeit.timeit('random.randint(0, 66)', setup='from numpy import random', number=10000)
0.03165552873121058
>>> timeit.timeit('random.randint(0, 65)', setup='import random', number=10000)
0.022374771118336412
>>> timeit.timeit('random.randrange(0, 66)', setup='import random', number=10000)
0.01937231027605435
>>> timeit.timeit('fastrandint(0, 65)', setup='import random; from fasterthanrandomrandom import fastrandint', number=10000)
0.0067909916844523755
```
Furthermore, the adaptation of this function as an alternative to random.choice is 75% faster, and I'm sure adding larger-than-one stepped ranges would be faster (although I didn't test that). For almost double the speed boost as using the fastrandint function you can simply write it inline:
```
>>> timeit.timeit('int(random.random() * (65 + big_bleeping_float))', setup='import random; big_bleeping_float= (2**64 - 2)/(2**64 - 2)', number=10000)
0.0037642723021917845
```
So in summary: why am I wrong in thinking my function is better, why is it faster if it is, and is there an even faster way to do what I'm doing?
|
2016/05/31
|
[
"https://Stackoverflow.com/questions/37536868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5511209/"
] |
[`randint`](https://hg.python.org/cpython/file/3.5/Lib/random.py#l214) calls [`randrange`](https://hg.python.org/cpython/file/3.5/Lib/random.py#l170) which does a bunch of range/type checks and conversions and then uses [`_randbelow`](https://hg.python.org/cpython/file/3.5/Lib/random.py#l220) to generate a random int. `_randbelow` again does some range checks and finally uses [`random`](https://hg.python.org/cpython/file/3.5/Lib/random.py#l647).
So if you remove all the checks for edge cases and some function call overhead, it's no surprise your `fastrandint` is quicker.
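To see that the gap is mostly validation and call overhead rather than the underlying RNG, you can time a stripped-down version side by side with the real thing (absolute numbers vary by machine, and the inline float version carries the bias discussed in other answers):

```python
import random
import timeit

def fastrandint(start, stop):
    # No type/range checks and one fewer layer of function calls per draw;
    # slightly biased for large ranges, as the other answers explain.
    return start + int(random.random() * (stop - start + 1))

# Time the checked stdlib path against the unchecked shortcut.
checked = timeit.timeit(lambda: random.randint(0, 65), number=100_000)
unchecked = timeit.timeit(lambda: fastrandint(0, 65), number=100_000)
print(f"randint: {checked:.4f}s  fastrandint: {unchecked:.4f}s")
```

The shortcut is typically faster, which is exactly what you'd expect after peeling off the `randint` → `randrange` → `_randbelow` call chain and its argument checks.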
|
This is probably rarely a problem, but `randint(0,10**1000)` works while `fastrandint(0,10**1000)` crashes. The slower time is probably the price you need to pay to have a function that works for all possible cases...
|
38,101,112
|
I'm trying to create an iOS Titanium Module using a pre-compiled CommonJS module. As the README file says:
>
> All JavaScript files in the assets directory are IGNORED except if you create a
> file named "com.moduletest.js" in this directory in which case it will be
> wrapped by native code, compiled, and used as your module. This allows you to
> run pure JavaScript modules that are pre-compiled.
>
>
>
I've created the file like this:
```
function ModuleTest(url){
if(url){
return url;
}
}
exports.ModuleTest = ModuleTest;
```
I'm using the 5.1.2.GA SDK (also tried with 5.3.0.GA) and I can build the module successfully either with `python build.py` or `titanium build --platform iOS --build-only`.
Then, in my test app doing:
```
var test = require('com.moduletest');
var url = new test.ModuleTest('http://url');
```
Gives me this error:
[](https://i.stack.imgur.com/D2sQ2.png)
undefined is not a constructor.
I've been trying a lot of alternatives but nothing seems to work and I didn't find any help on documentation about pre-compiled JS modules for iOS. Actually, the same process works great for Android!
Do you have some idea why?
My environment:
XCode 7.3.1
Operating System
Name - Mac OS X
Version - 10.11.5
Architecture - 64bit
# CPUs - 8
Memory - 16.0GB
Node.js
Node.js Version - 0.12.7
npm Version - 2.11.3
Appcelerator CLI
Installer - 4.2.6
Core Package - 5.3.0
Titanium CLI
CLI Version - 5.0.9
node-appc Version - 0.2.31
Maybe this is something related to my Node version or appc CLI, not sure =/
Thank you!
|
2016/06/29
|
[
"https://Stackoverflow.com/questions/38101112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1272263/"
] |
There are 2 solutions.
1) Don't put it in assets, but in the `/app/lib` folder as others have mentioned.
2) wrap it as an actual commonjs module, like the [module I wrote](http://github.com/Topener/To.ImageCache)
In both cases, you can just use `require('modulename')`. In case 2 you will need to add it to the `tiapp.xml` file just like any other module.
The path of your file will come in `/modules/commonjs/modulename/version/module.js` or something similar. My linked module will show you the requirements and paths needed.
|
I use a slightly different pattern that works very well:
First a small snippet from my "module":
```
Stopwatch = function(listener) {
this.totalElapsed = 0; // * elapsed number of ms in total
this.listener = (listener != undefined ? listener : null); // * function to receive onTick events
};
Stopwatch.prototype.getElapsed = function() {
return this.totalElapsed;
};
module.exports = Stopwatch;
```
And then this is the way I use it:
```
var StopWatch = require('utils/StopWatch');
var stopWatch = new StopWatch(listenerFunction);
console.log('elapsed: ' + stopWatch.getElapsed());
```
|
71,821,635
|
I have installed two builds of Python 3.10. There is a `wxPython310` wheel for 64-bit Python, but there isn't any `wxPython` build for 32-bit Python.
I tried to install `wxPython` from `https://wxpython.org/Phoenix/snapshot-builds/wxPython-4.1.2a1.dev5259+d3bdb143.tar.gz`, but it fails with the error below.
```
Running setup.py install for wxPython ... error
error: subprocess-exited-with-error
× Running setup.py install for wxPython did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\lib\site-packages\setuptools\dist.py:717: UserWarning: Usage of dash-separated 'license-file' will not be supported in future versions. Please use the underscore name 'license_file' instead
warnings.warn(
C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\lib\site-packages\setuptools\dist.py:294: DistDeprecationWarning: use_2to3 is ignored.
warnings.warn(f"{attr} is ignored.", DistDeprecationWarning)
running install
running build
C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\build.py:42: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.dep_util import newer, newer_group
Traceback (most recent call last):
File "C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\build.py", line 49, in <module>
from buildtools.wxpysip import sip_runner
File "C:\Users\tiger\AppData\Local\Temp\pip-req-build-b6xigzyz\buildtools\wxpysip.py", line 20, in <module>
from sipbuild.code_generator import (set_globals, parse, generateCode,
ModuleNotFoundError: No module named 'sipbuild'
WARNING: Building this way assumes that all generated files have been
generated already. If that is not the case then use build.py directly
to generate the source and perform the build stage. You can use
--skip-build with the bdist_* or install commands to avoid this
message and the wxWidgets and Phoenix build steps in the future.
"C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\python.exe" -u build.py build
Command '"C:\Users\tiger\AppData\Local\Programs\Python\Python310-32\python.exe" -u build.py build' failed with exit code 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> wxPython
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
|
2022/04/11
|
[
"https://Stackoverflow.com/questions/71821635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15512931/"
] |
There are some compatibility issues with Python 3.10. The easiest way to deal with this situation is to downgrade your Python version to 3.9.13.
The last wxPython release came out before Python 3.10, if I am not mistaken.
I went through the same situation and tried a couple of solutions because I did not want to downgrade my Python version, but I was wasting time I did not have.
So for now, just downgrade and wait for new releases, or use Tkinter.
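Before downgrading, it is worth confirming which interpreter pip is actually building against, since the question shows a 32-bit install path and wxPython310 is only published for 64-bit Python. A minimal check:

```python
import struct
import sys

# 32- vs 64-bit is a property of the interpreter, not the OS:
# calcsize("P") is the size of a pointer in bytes for this build.
bits = struct.calcsize("P") * 8
print(f"Python {sys.version_info.major}.{sys.version_info.minor}, {bits}-bit")
print(sys.executable)  # which interpreter pip would build against
```

If this reports 32-bit, installing the 64-bit interpreter (and running `pip` through it) avoids the source build entirely.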
|
A common problem when installing packages with several Python versions present is which interpreter is used for the installation.
Make sure you use a compatible version of Python to install wxPython310.
Which IDE do you use?
In any case, I would recommend making sure the installation is done with the right Python version; if you are not sure, reinstall the required versions and confirm that the version you installed supports the package.
Installation tutorial: [Link](https://www.tutorialspoint.com/wxpython/wxpython_environment.htm)
|
13,827,543
|
I'd like to know the names of the local variables that are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10. But I'd like to print "a = 10". Is this possible in python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
I like the [answer to this question](http://docs.python.org/2/faq/programming.html#how-can-my-code-discover-the-name-of-an-object) that's found in the Python programming FAQ, quoting Fredrik Lundh:
>
> The same way as you get the name of that cat you found on your porch:
> the cat (object) itself cannot tell you its name, and it
> doesn’t really care – so the only way to find out what it’s called is
> to ask all your neighbours (namespaces) if it’s their cat (object)...
>
>
> ....and don’t be surprised if you’ll find that it’s known by many names, or no name at all!
>
>
>
|
Here's an answer that only became possible as of Python 3.6 with f-strings:
```
x = 10
print(f'{x=}') # Outputs x=10
```
|
13,827,543
|
I'd like to know the names of the local variables that are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10. But I'd like to print "a = 10". Is this possible in python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
I like the [answer to this question](http://docs.python.org/2/faq/programming.html#how-can-my-code-discover-the-name-of-an-object) that's found in the Python programming FAQ, quoting Fredrik Lundh:
>
> The same way as you get the name of that cat you found on your porch:
> the cat (object) itself cannot tell you its name, and it
> doesn’t really care – so the only way to find out what it’s called is
> to ask all your neighbours (namespaces) if it’s their cat (object)...
>
>
> ....and don’t be surprised if you’ll find that it’s known by many names, or no name at all!
>
>
>
|
**New Solution Using `readline`**
If you're in an interactive session, here's an extremely naive solution that will usually work:
```
def show(x):
from readline import get_current_history_length, get_history_item
print(get_history_item(get_current_history_length()).strip()[5:-1] + ' = ' + str(x))
```
All it does is read the last line input in the interactive session buffer, remove any leading or trailing whitespace, then give you everything but the first five characters (hopefully `show(`) and the last character (hopefully `)`), thus leaving you with whatever was passed in.
Example:
```
>>> a = 10
>>> show(a)
a = 10
>>> b = 10
>>> show(b)
b = 10
>>> show(10)
10 = 10
>>> show([10]*10)
[10]*10 = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
>>> show('Hello' + 'World'.rjust(10))
'Hello' + 'World'.rjust(10) = Hello World
```
If you're on OS X using the version of Python that comes with it, you don't have `readline` installed by default, but you can install it via `pip`. If you're on Windows, `readline` doesn't exist for you... you might be able to use `pyreadline` from `pip` but I've never tried it so I can't say if it's an acceptable substitute or not.
I leave making the above code more bullet-proof as an exercise for the reader. Things to consider would be how to make it handle things like this:
```
show(show(show(10)))
show(
10
)
```
If you want this kind of thing to show variables names from a script, you can look into using inspect and getting the source code of the calling frame. But given I can't think of why you would ever want to use `show()` in a script or why you would complicate the function just to handle people intentionally screwing with it as I did above, I'm not going to waste my time right now figuring it out.
**Original Solution Using `inspect`**
Here's my original solution, which is more complicated and has a more glaring set of caveats, but is more portable since it only uses `inspect`, not `readline`, so runs on all platforms and whether you're in an interactive session or in a script:
```
def show(x):
from inspect import currentframe
# Using inspect, figure out what the calling environment looked like by merging
# what was available from builtin, globals, and locals.
# Do it in this order to emulate shadowing variables
# (locals shadow globals shadow builtins).
callingFrame = currentframe().f_back
callingEnv = callingFrame.f_builtins.copy()
callingEnv.update(callingFrame.f_globals)
callingEnv.update(callingFrame.f_locals)
# Get the variables in the calling environment equal to what was passed in.
possibleRoots = [item[0] for item in callingEnv.items() if item[1] == x]
# If there are none, whatever you were given was more than just an identifier.
if not possibleRoots:
root = '<unnamed>'
else:
# If there is exactly one identifier equal to it,
# that's probably the one you want.
# This assumption could be wrong - you may have been given
# something more than just an identifier.
if len(possibleRoots) == 1:
root = str(possibleRoots[0])
else:
# More than one possibility? List them all.
# Again, though, it could actually be unnamed.
root = '<'
for possibleRoot in possibleRoots[:-1]:
root += str(possibleRoot) + ', '
root += 'or ' + str(possibleRoots[-1]) + '>'
print(root + ' = ' + str(x))
```
Here's a case where it works perfectly (the one from the question):
```
>>> a = 10
>>> show(a)
a = 10
```
Here's another fun case:
```
>>> show(quit)
quit = Use quit() or Ctrl-Z plus Return to exit
```
Now you know how that functionality was implemented in the Python interpreter - `quit` is a built-in identifier for a `str` that says how to properly quit.
Here's a few cases where it's less than you might want, but... acceptable?
```
>>> b = 10
>>> show(b)
<a, or b> = 10
>>> show(11)
<unnamed> = 11
>>> show([a])
<unnamed> = [10]
```
And here's a case where it prints out a true statement, but definitely not what you were looking for:
```
>>> show(10)
<a, or b> = 10
```
|
13,827,543
|
I'd like to know the names of the local variables that are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10. But I'd like to print "a = 10". Is this possible in python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
I foresee that the following solution will attract several criticisms
```
def show(*x):
for el in x:
fl = None
for gname,gobj in globals().iteritems():
if el==gobj:
print '%s == %r' % (gname,el)
fl = True
if not fl:
print 'There is no identifier assigned to %r in the global namespace' % el
un = 1
y = 'a'
a = 12
b = c = 45
arguments = ('a', 1, 10)
lolo = [45,'a',a,'heat']
print '============================================'
show(12)
show(a)
print '============================================'
show(45)
print
show(b)
print '============================================'
show(arguments)
print
show(('a', 1, 10))
print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
show(*arguments)
print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
show(*(arguments[1:3] + (b,)))
```
result
```
============================================
a == 12
a == 12
============================================
c == 45
b == 45
c == 45
b == 45
============================================
arguments == ('a', 1, 10)
arguments == ('a', 1, 10)
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
y == 'a'
un == 1
There is no identifier assigned to 10 in the global namespace
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
un == 1
There is no identifier assigned to 10 in the global namespace
c == 45
b == 45
```
|
**New Solution Using `readline`**
If you're in an interactive session, here's an extremely naive solution that will usually work:
```
def show(x):
from readline import get_current_history_length, get_history_item
print(get_history_item(get_current_history_length()).strip()[5:-1] + ' = ' + str(x))
```
All it does is read the last line input in the interactive session buffer, remove any leading or trailing whitespace, then give you everything but the first five characters (hopefully `show(`) and the last character (hopefully `)`), thus leaving you with whatever was passed in.
Example:
```
>>> a = 10
>>> show(a)
a = 10
>>> b = 10
>>> show(b)
b = 10
>>> show(10)
10 = 10
>>> show([10]*10)
[10]*10 = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
>>> show('Hello' + 'World'.rjust(10))
'Hello' + 'World'.rjust(10) = Hello World
```
If you're on OS X using the version of Python that comes with it, you don't have `readline` installed by default, but you can install it via `pip`. If you're on Windows, `readline` doesn't exist for you... you might be able to use `pyreadline` from `pip` but I've never tried it so I can't say if it's an acceptable substitute or not.
I leave making the above code more bullet-proof as an exercise for the reader. Things to consider would be how to make it handle things like this:
```
show(show(show(10)))
show(
10
)
```
If you want this kind of thing to show variables names from a script, you can look into using inspect and getting the source code of the calling frame. But given I can't think of why you would ever want to use `show()` in a script or why you would complicate the function just to handle people intentionally screwing with it as I did above, I'm not going to waste my time right now figuring it out.
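A minimal sketch of that source-reading route for scripts, assuming the whole call fits on one source line (the regex and the `<unknown>` fallback are my additions, not from the original answer):

```python
import inspect
import re

def show(x):
    # Read the caller's source line and pull out the argument text.
    frame = inspect.currentframe().f_back
    context = inspect.getframeinfo(frame).code_context
    expr = '<unknown>'
    if context:  # source may be unavailable (e.g. inside a plain REPL or exec)
        match = re.search(r'show\((.+)\)', context[0])
        if match:
            expr = match.group(1)
    line = expr + ' = ' + str(x)
    print(line)
    return line

a = 10
show(a)  # prints "a = 10" when run from a file
```

This is separate from the original solution below, which matches values against namespaces instead of reading source.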
**Original Solution Using `inspect`**
Here's my original solution, which is more complicated and has a more glaring set of caveats, but it is more portable since it only uses `inspect`, not `readline`, so it runs on all platforms, whether you're in an interactive session or in a script:
```
def show(x):
from inspect import currentframe
# Using inspect, figure out what the calling environment looked like by merging
# what was available from builtin, globals, and locals.
# Do it in this order to emulate shadowing variables
# (locals shadow globals shadow builtins).
callingFrame = currentframe().f_back
callingEnv = callingFrame.f_builtins.copy()
callingEnv.update(callingFrame.f_globals)
callingEnv.update(callingFrame.f_locals)
# Get the variables in the calling environment equal to what was passed in.
possibleRoots = [item[0] for item in callingEnv.items() if item[1] == x]
# If there are none, whatever you were given was more than just an identifier.
if not possibleRoots:
root = '<unnamed>'
else:
# If there is exactly one identifier equal to it,
# that's probably the one you want.
# This assumption could be wrong - you may have been given
# something more than just an identifier.
if len(possibleRoots) == 1:
root = str(possibleRoots[0])
else:
# More than one possibility? List them all.
# Again, though, it could actually be unnamed.
root = '<'
for possibleRoot in possibleRoots[:-1]:
root += str(possibleRoot) + ', '
root += 'or ' + str(possibleRoots[-1]) + '>'
print(root + ' = ' + str(x))
```
Here's a case where it works perfectly (the one from the question):
```
>>> a = 10
>>> show(a)
a = 10
```
Here's another fun case:
```
>>> show(quit)
quit = Use quit() or Ctrl-Z plus Return to exit
```
Now you know how that functionality was implemented in the Python interpreter - `quit` is a built-in object whose string representation tells you how to properly quit.
Here are a few cases where it's less than you might want, but... acceptable?
```
>>> b = 10
>>> show(b)
<a, or b> = 10
>>> show(11)
<unnamed> = 11
>>> show([a])
<unnamed> = [10]
```
And here's a case where it prints out a true statement, but definitely not what you were looking for:
```
>>> show(10)
<a, or b> = 10
```
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
No, you cannot know the name of the local variable used to pass a value to your function.
This is an impossible task in any case. What would be the variable name in the following example?
```
arguments = ('a', 1, 10)
somefunction(*(arguments[:2] + (10,)))
```
Here we pass in 3 arguments, two taken from a tuple we defined earlier, and one literal value, and all three are passed in using the variable argument list syntax.
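A quick sketch of why: after the splat, the callee only receives values, so some of them never had a single identifier at all (`somefunction` here is a stand-in, not from the original answer):

```python
def somefunction(*args):
    # The callee sees only positional values, never the caller's names.
    return args

arguments = ('a', 1, 10)
result = somefunction(*(arguments[:2] + (10,)))
print(result)  # -> ('a', 1, 10)
```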
|
Here's an answer that only became possible as of Python 3.8, using the f-string `=` debugging specifier:
```
x = 10
print(f'{x=}') # Outputs x=10
```
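The specifier also keeps the exact expression text, so it works for more than bare names:

```python
x = 10
s = f'{x + 1=}'  # the expression text is preserved verbatim
print(s)  # Outputs x + 1=11
```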
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
Not exactly like this. However, you can achieve something similar:
```
def show(**kwargs):
print(', '.join('%s=%s' % kv for kv in kwargs.items()))
show(a=20)
```
|
It seems that it's impossible in Python but it's actually possible in C++.
```
#define show(x) std::cout << #x << " = " << x << std::endl
```
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
It seems that it's impossible in Python but it's actually possible in C++.
```
#define show(x) std::cout << #x << " = " << x << std::endl
```
|
Here's an answer that only became possible as of Python 3.8, using the f-string `=` debugging specifier:
```
x = 10
print(f'{x=}') # Outputs x=10
```
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
No, you cannot know the name of the local variable used to pass a value to your function.
This is an impossible task in any case. What would be the variable name in the following example?
```
arguments = ('a', 1, 10)
somefunction(*(arguments[:2] + (10,)))
```
Here we pass in 3 arguments, two taken from a tuple we defined earlier, and one literal value, and all three are passed in using the variable argument list syntax.
|
**New Solution Using `readline`**
If you're in an interactive session, here's an extremely naive solution that will usually work:
```
def show(x):
from readline import get_current_history_length, get_history_item
print(get_history_item(get_current_history_length()).strip()[5:-1] + ' = ' + str(x))
```
All it does is read the last line input in the interactive session buffer, remove any leading or trailing whitespace, then give you everything but the first five characters (hopefully `show(`) and the last character (hopefully `)`), thus leaving you with whatever was passed in.
Example:
```
>>> a = 10
>>> show(a)
a = 10
>>> b = 10
>>> show(b)
b = 10
>>> show(10)
10 = 10
>>> show([10]*10)
[10]*10 = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
>>> show('Hello' + 'World'.rjust(10))
'Hello' + 'World'.rjust(10) = Hello World
```
If you're on OS X using the version of Python that comes with it, you don't have `readline` installed by default, but you can install it via `pip`. If you're on Windows, `readline` doesn't exist for you... you might be able to use `pyreadline` from `pip` but I've never tried it so I can't say if it's an acceptable substitute or not.
I leave making the above code more bullet-proof as an exercise for the reader. Things to consider would be how to make it handle things like this:
```
show(show(show(10)))
show(
10
)
```
If you want this kind of thing to show variables names from a script, you can look into using inspect and getting the source code of the calling frame. But given I can't think of why you would ever want to use `show()` in a script or why you would complicate the function just to handle people intentionally screwing with it as I did above, I'm not going to waste my time right now figuring it out.
**Original Solution Using `inspect`**
Here's my original solution, which is more complicated and has a more glaring set of caveats, but it is more portable since it only uses `inspect`, not `readline`, so it runs on all platforms, whether you're in an interactive session or in a script:
```
def show(x):
from inspect import currentframe
# Using inspect, figure out what the calling environment looked like by merging
# what was available from builtin, globals, and locals.
# Do it in this order to emulate shadowing variables
# (locals shadow globals shadow builtins).
callingFrame = currentframe().f_back
callingEnv = callingFrame.f_builtins.copy()
callingEnv.update(callingFrame.f_globals)
callingEnv.update(callingFrame.f_locals)
# Get the variables in the calling environment equal to what was passed in.
possibleRoots = [item[0] for item in callingEnv.items() if item[1] == x]
# If there are none, whatever you were given was more than just an identifier.
if not possibleRoots:
root = '<unnamed>'
else:
# If there is exactly one identifier equal to it,
# that's probably the one you want.
# This assumption could be wrong - you may have been given
# something more than just an identifier.
if len(possibleRoots) == 1:
root = str(possibleRoots[0])
else:
# More than one possibility? List them all.
# Again, though, it could actually be unnamed.
root = '<'
for possibleRoot in possibleRoots[:-1]:
root += str(possibleRoot) + ', '
root += 'or ' + str(possibleRoots[-1]) + '>'
print(root + ' = ' + str(x))
```
Here's a case where it works perfectly (the one from the question):
```
>>> a = 10
>>> show(a)
a = 10
```
Here's another fun case:
```
>>> show(quit)
quit = Use quit() or Ctrl-Z plus Return to exit
```
Now you know how that functionality was implemented in the Python interpreter - `quit` is a built-in object whose string representation tells you how to properly quit.
Here are a few cases where it's less than you might want, but... acceptable?
```
>>> b = 10
>>> show(b)
<a, or b> = 10
>>> show(11)
<unnamed> = 11
>>> show([a])
<unnamed> = [10]
```
And here's a case where it prints out a true statement, but definitely not what you were looking for:
```
>>> show(10)
<a, or b> = 10
```
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
I expect that the following solution will draw some criticism
```
def show(*x):
for el in x:
fl = None
for gname,gobj in globals().iteritems():
if el==gobj:
print '%s == %r' % (gname,el)
fl = True
if not fl:
print 'There is no identifier assigned to %r in the global namespace' % el
un = 1
y = 'a'
a = 12
b = c = 45
arguments = ('a', 1, 10)
lolo = [45,'a',a,'heat']
print '============================================'
show(12)
show(a)
print '============================================'
show(45)
print
show(b)
print '============================================'
show(arguments)
print
show(('a', 1, 10))
print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
show(*arguments)
print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
show(*(arguments[1:3] + (b,)))
```
result
```
============================================
a == 12
a == 12
============================================
c == 45
b == 45
c == 45
b == 45
============================================
arguments == ('a', 1, 10)
arguments == ('a', 1, 10)
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
y == 'a'
un == 1
There is no identifier assigned to 10 in the global namespace
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
un == 1
There is no identifier assigned to 10 in the global namespace
c == 45
b == 45
```
|
Here's an answer that only became possible as of Python 3.8, using the f-string `=` debugging specifier:
```
x = 10
print(f'{x=}') # Outputs x=10
```
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
No, you cannot know the name of the local variable used to pass a value to your function.
This is an impossible task in any case. What would be the variable name in the following example?
```
arguments = ('a', 1, 10)
somefunction(*(arguments[:2] + (10,)))
```
Here we pass in 3 arguments, two taken from a tuple we defined earlier, and one literal value, and all three are passed in using the variable argument list syntax.
|
I like the [answer to this question](http://docs.python.org/2/faq/programming.html#how-can-my-code-discover-the-name-of-an-object) that's found in the Python programming FAQ, quoting Fredrik Lundh:
>
> The same way as you get the name of that cat you found on your porch:
> the cat (object) itself cannot tell you its name, and it
> doesn’t really care – so the only way to find out what it’s called is
> to ask all your neighbours (namespaces) if it’s their cat (object)...
>
>
> ....and don’t be surprised if you’ll find that it’s known by many names, or no name at all!
>
>
>
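That neighbour-asking can be sketched directly: scan a namespace dict for entries bound to the very same object (the names and dict below are illustrative, not from the FAQ):

```python
def names_of(obj, namespace):
    # Ask a namespace which identifiers are bound to this exact object.
    return sorted(name for name, value in namespace.items() if value is obj)

cat = object()
feline = cat                      # a second name for the same object
ns = {'cat': cat, 'feline': feline, 'dog': object()}
print(names_of(cat, ns))          # -> ['cat', 'feline']
```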
|
13,827,543
|
I'd like to know the names of local variables when they are passed to a function. I'm not sure whether this is possible at all. Let's consider this example:
function definition:
```
def show(x):
print(x)
```
usage:
```
a = 10
show(a)
```
this prints 10, but I'd like to print "a = 10". Is this possible in Python?
|
2012/12/11
|
[
"https://Stackoverflow.com/questions/13827543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/624074/"
] |
No, you cannot know the name of the local variable used to pass a value to your function.
This is an impossible task in any case. What would be the variable name in the following example?
```
arguments = ('a', 1, 10)
somefunction(*(arguments[:2] + (10,)))
```
Here we pass in 3 arguments, two taken from a tuple we defined earlier, and one literal value, and all three are passed in using the variable argument list syntax.
|
Not exactly like this. However, you can achieve something similar:
```
def show(**kwargs):
print(', '.join('%s=%s' % kv for kv in kwargs.items()))
show(a=20)
```
|
60,877,741
|
I'm trying to write a script with python/numpy/scipy for data manipulation, fitting and plotting of angle-dependent magnetoresistance measurements. I'm new to Python, got the frame code from my PhD advisor, and managed to add a few hundred lines of code to the frame. After a while I noticed that some measurements had multiple blunders, and since the script should do all the manipulation automatically, I tried to mask those points and fit the curve to the unmasked points (the curve is a sine squared superposed on a linear function, so numpy.ma.polyfit isn't really a choice).
However, after masking both x and y coordinates of the problematic points, the fitting would still take them into consideration, even though they wouldn't be shown in the plot. The example is simplified, but the same is happening;
```
import numpy.ma as ma
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def Funk(x, k, y0):
return k*x + y0
fig,ax= plt.subplots()
x=ma.masked_array([1,2,3,4,5,6,7,8,9,10],mask=[0,0,0,0,0,0,1,1,1,1])
y=ma.masked_array([1,2,3,4,5,30,35,40,45,50], mask=[0,0,0,0,0,1,1,1,1,1])
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x, y)
ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1]))
ax.errorbar(x, y, yerr = None, ms=3, fmt='-o')
plt.show()
```
[The second half of the points is masked and not shown in the plot, but still taken into consideration.](https://i.stack.imgur.com/GwYSs.jpg)
While writing the post I figured out that I can do this:
```
def Funk(x, k, y0):
return k*x + y0
fig,ax= plt.subplots()
x=np.array([1,2,3,4,5,6,7,8,9,10])
y=np.array([1,2,3,4,5,30,35,40,45,50])
mask=np.array([0,0,0,0,0,1,1,1,1,1])
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x[mask], y[mask])
ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1]))
ax.errorbar(x, y, yerr = None, ms=3, fmt='-o')
plt.show()
```
[What I actually wanted](https://i.stack.imgur.com/eAeWf.jpg)
I guess that scipy curve\_fit isn't meant to deal with masked arrays, but I still would like to know whether there is any workaround for this (I need to work with masked arrays because the number of data points is >10e6, but I'm only plotting 100 at once, so I would need to take the mask of the part of the array that I want to plot and assign it to another array, while copying the values of the array to another or setting the original mask to False)? Thanks for any suggestions
|
2020/03/26
|
[
"https://Stackoverflow.com/questions/60877741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13131921/"
] |
If you only want to consider the valid entries, you can use the inverse of the mask as an index:
```
x = ma.masked_array([1,2,3,4,5,6,7,8,9,10], mask=[0,0,0,0,0,1,1,1,1,1]) # changed mask
y = ma.masked_array([1,2,3,4,5,30,35,40,45,50], mask=[0,0,0,0,0,1,1,1,1,1])
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x[~x.mask], y[~y.mask])
```
PS: Note that both arrays need to have the same number of valid entries.
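One way to guarantee that, not shown in the original answer, is to combine both masks before indexing (sketched here with `np.polyfit` so the snippet stands alone without scipy):

```python
import numpy as np
import numpy.ma as ma

x = ma.masked_array([1, 2, 3, 4, 5, 6], mask=[0, 0, 0, 1, 0, 0])
y = ma.masked_array([2, 4, 6, 8, 10, 30], mask=[0, 0, 0, 0, 0, 1])

# A point is valid only if it is unmasked in *both* arrays.
good = ~(ma.getmaskarray(x) | ma.getmaskarray(y))
xv = np.asarray(x)[good]
yv = np.asarray(y)[good]

k, y0 = np.polyfit(xv, yv, 1)  # fit y = k*x + y0 on the valid points only
```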
|
The use of mask in numerical calculus is equivalent to the use of the Heaviside step function in analytical calculus. For example this becomes very simple by application for piecewise linear regression:
[](https://i.stack.imgur.com/Zg2Z3.gif)
There are several examples of piecewise linear regression in the paper: <https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf>
Using the method shown in this paper, the very simple calculus below leads to the expected form of result :
[](https://i.stack.imgur.com/q7AL2.gif)
Note: In the case of a large number of points, if there were several points with slightly different abscissae in the transition area, it would be more accurate to apply the case considered on pages 29-31 of the paper referenced above.
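As a sketch of that idea in this thread's language, the piecewise-linear model can be written with `np.heaviside`, which switches on an extra slope past the breakpoint (the coefficients below are made up for illustration):

```python
import numpy as np

def piecewise_linear(x, a, b, c, x0):
    # Heaviside step: 0 before x0, 1 after, so the extra slope c
    # only contributes to the right of the breakpoint.
    return a * x + b + c * (x - x0) * np.heaviside(x - x0, 1.0)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = piecewise_linear(x, 1.0, 0.0, 2.0, 1.5)
print(y)  # slope 1 before x0=1.5, slope 3 after -> [0. 1. 3. 6.]
```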
|
60,877,741
|
I'm trying to write a script with python/numpy/scipy for data manipulation, fitting and plotting of angle-dependent magnetoresistance measurements. I'm new to Python, got the frame code from my PhD advisor, and managed to add a few hundred lines of code to the frame. After a while I noticed that some measurements had multiple blunders, and since the script should do all the manipulation automatically, I tried to mask those points and fit the curve to the unmasked points (the curve is a sine squared superposed on a linear function, so numpy.ma.polyfit isn't really a choice).
However, after masking both x and y coordinates of the problematic points, the fitting would still take them into consideration, even though they wouldn't be shown in the plot. The example is simplified, but the same is happening;
```
import numpy.ma as ma
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def Funk(x, k, y0):
return k*x + y0
fig,ax= plt.subplots()
x=ma.masked_array([1,2,3,4,5,6,7,8,9,10],mask=[0,0,0,0,0,0,1,1,1,1])
y=ma.masked_array([1,2,3,4,5,30,35,40,45,50], mask=[0,0,0,0,0,1,1,1,1,1])
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x, y)
ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1]))
ax.errorbar(x, y, yerr = None, ms=3, fmt='-o')
plt.show()
```
[The second half of the points is masked and not shown in the plot, but still taken into consideration.](https://i.stack.imgur.com/GwYSs.jpg)
While writing the post I figured out that I can do this:
```
def Funk(x, k, y0):
return k*x + y0
fig,ax= plt.subplots()
x=np.array([1,2,3,4,5,6,7,8,9,10])
y=np.array([1,2,3,4,5,30,35,40,45,50])
mask=np.array([0,0,0,0,0,1,1,1,1,1])
fitParamsFunk, fitCovariancesFunk = curve_fit(Funk, x[mask], y[mask])
ax.plot(x, Funk(x, fitParamsFunk[0], fitParamsFunk[1]))
ax.errorbar(x, y, yerr = None, ms=3, fmt='-o')
plt.show()
```
[What I actually wanted](https://i.stack.imgur.com/eAeWf.jpg)
I guess that scipy curve\_fit isn't meant to deal with masked arrays, but I still would like to know whether there is any workaround for this (I need to work with masked arrays because the number of data points is >10e6, but I'm only plotting 100 at once, so I would need to take the mask of the part of the array that I want to plot and assign it to another array, while copying the values of the array to another or setting the original mask to False)? Thanks for any suggestions
|
2020/03/26
|
[
"https://Stackoverflow.com/questions/60877741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13131921/"
] |
I think that what you want to do is to define a mask that lists the indices of the "good data points" and then use that as the points to fit (and/or to plot).
As a lead author of lmfit, I would recommend using that library for curve-fitting: it has many useful features over `curve_fit`. With this, your example might look like this:
```
import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model
def Funk(x, k, y0, good_points=None): # note: add keyword argument
f = k*x + y0
if good_points is not None:
f = f[good_points] # apply mask of good data points
return f
x = np.array([1,2,3,4,5, 6,7,8.,9,10])
y = np.array([1,2,3,4,5,30,35.,40,45,50])
y += np.random.normal(size=len(x), scale=0.19) # add some noise to make it fun
# make an array of the indices of the "good data points"
# does not need to be contiguous.
good_points=np.array([0,1,2,3,4])
# turn your model function Funk into an lmfit Model
mymodel = Model(Funk)
# create parameters, giving initial values. Note that parameters are
# named using the names of your function's argument and that keyword
# arguments with non-numeric defaults like 'good_points' are seen to
# *not* be parameters. Like the independent variable `x`, you'll
# need to pass that in when you do the fit.
# also: parameters can be fixed, or given `min` and `max` attributes
params = mymodel.make_params(k=1.4, y0=0.2)
params['k'].min = 0
# do the fit to the 'good data', passing in the parameters, the
# independent variable `x` and the `good_points` mask.
result = mymodel.fit(y[good_points], params, x=x, good_points=good_points)
# print out a report of best fit values, uncertainties, correlations, etc.
print(result.fit_report())
# plot the results, again using the good_points array as needed.
plt.plot(x, y, 'o', label='all data')
plt.plot(x[good_points], result.best_fit[good_points], label='fit to good data')
plt.legend()
plt.show()
```
This will print out
```
[[Model]]
Model(Funk)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 7
# data points = 5
# variables = 2
chi-square = 0.02302999
reduced chi-square = 0.00767666
Akaike info crit = -22.9019787
Bayesian info crit = -23.6831029
[[Variables]]
k: 1.02460577 +/- 0.02770680 (2.70%) (init = 1.4)
y0: -0.04135096 +/- 0.09189305 (222.23%) (init = 0.2)
[[Correlations]] (unreported correlations are < 0.100)
C(k, y0) = -0.905
```
and produce a plot of
[](https://i.stack.imgur.com/3m8Le.png)
hope that helps get you started.
|
The use of mask in numerical calculus is equivalent to the use of the Heaviside step function in analytical calculus. For example this becomes very simple by application for piecewise linear regression:
[](https://i.stack.imgur.com/Zg2Z3.gif)
There are several examples of piecewise linear regression in the paper: <https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf>
Using the method shown in this paper, the very simple calculus below leads to the expected form of result :
[](https://i.stack.imgur.com/q7AL2.gif)
Note: In the case of a large number of points, if there were several points with slightly different abscissae in the transition area, it would be more accurate to apply the case considered on pages 29-31 of the paper referenced above.
|
46,497,838
|
We have a Python client that connects to Amazon S3 via a VPC endpoint. Our code uses boto and we are able to connect and download from S3.
After migration from boto to boto3, we noticed that the VPC endpoint connection no longer works. Below is a code snippet that reproduces the problem.
```sh
python -c "import boto3;
s3 = boto3.resource('s3',
aws_access_key_id='foo',
aws_secret_access_key='bar');
s3.Bucket('some-bucket').download_file('hello-remote.txt',
'hello-local.txt')"
```
got the below error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\inject.py",
line 163, in bucket_download_file
ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\inject.py",
line 125, in download_file
extra_args=ExtraArgs, callback=Callback)
File "C:\Python27\lib\site-packages\boto3-1.4.0-py2.7.egg\boto3\s3\transfer.py
", line 269, in download_file
future.result()
File "build\bdist.win32\egg\s3transfer\futures.py", line 73, in result
File "build\bdist.win32\egg\s3transfer\futures.py", line 233, in result
botocore.vendored.requests.exceptions.ConnectionError: ('Connection aborted.', e
rror(10060, 'A connection attempt failed because the connected party did not pro
perly respond after a period of time, or established connection failed because c
onnected host has failed to respond'))
```
Does anyone know if boto3 supports connecting to S3 via a VPC endpoint and/or was able to get it to work? We are using boto3-1.4.0.
|
2017/09/29
|
[
"https://Stackoverflow.com/questions/46497838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/456481/"
] |
This is most likely a configuration error in your VPC endpoint policies. If your policies are correct, then Boto3 never knows exactly how it's able to reach the S3 location, it really is up to the policies to allow/forbid this type of traffic.
Here's a quick walkthrough of what you can do for troubleshooting: <https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/>
Other relevant docs:
* <https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html>
* <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html#vpc-endpoint-policies>
* <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3>
|
It depends on the AWS policies and roles you have defined.
The quickest way to make your code run is to make the S3 bucket public (not recommended);
otherwise, add your IP to the security policies and then re-run the code.
Details can be found here:
<https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html>
Use IP whitelisting to secure your AWS Transfer for SFTP servers
<https://aws.amazon.com/blogs/storage/use-ip-whitelisting-to-secure-your-aws-transfer-for-sftp-servers/>
|
50,648,152
|
I want to install an RPM package (e.g. Python 3) and all of its dependencies on a Linux server that does not have an internet connection.
How can I do that?
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50648152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2128078/"
] |
Assuming you already downloaded the package from another machine that has internet access and transferred the files to your server via FTP, you can use the following command to install an RPM:
```
rpm -ivh package_name_x86_64.rpm
```
options:
* i = This installs a new package.
* v = Print verbose information
* h = Print 50 hash marks as the package archive is unpacked.
You can also check the rpm manual for more options and details
|
There is a way, but it is quite tricky and might mess up your servers, so be **very careful**.
Nomenclature:
* **online** : your system that is connected to the repositories
* **offline**: your system that is not connected
Steps:
Compress your rpm database from the **offline** system and transfer it to the **online** system:
```
cd /var/lib/rpm/
tar -cvzf /tmp/rpmdb.tgz *
scp /tmp/rpmdb.tgz root@online:/tmp
```
on your **online** system; replace your rpm db with the one from the **offline** system:
```
cp -r /var/lib/rpm{,.bak} # back up your rpmdb from your online system. Make sure not to lose this!!
rm -rf /var/lib/rpm/*
cd /var/lib/rpm
tar -xvf /tmp/rpmdb.tgz # now your online system pretends to have the rpm database from the offline system. Don't start really installing / uninstalling rpms or you'll break everything
```
now simulate your update with download-only (I didn't run this with yum but with zypper, but it should be similar):
```
zypper up --download-only
```
Now you can fetch all the downloaded packages and they should suffice for updating your offline system
And now restore your **online** machine:
```
rm -rf /var/lib/rpm
cp -r /var/lib/rpm{.bak,}
```
|
50,648,152
|
I want to install an RPM package (e.g. Python 3) and all of its dependencies on a Linux server that does not have an internet connection.
How can I do that?
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50648152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2128078/"
] |
In CentOS/RedHat you can use `yumdownloader` for specific packages; this downloads all required RPMs. Then compress the directory, upload it to the server without internet access, and install the RPMs.
[Here](https://adrianescutia.github.io/adrianes/snippets/kubernetes/install-kubernetes-docker-offline#download-docker-online-machinedocker-host) you can find an example of installing Kubernetes without internet access.
```sh
yumdownloader --assumeyes --destdir=/var/rpm_dir/docker-ce --resolve docker-ce
tar -czvf d4r-k8s.tar.gz /var/rpm_dir
# Upload files
scp d4r-k8s.tar.gz root@YOUR-IP:/root
# Connect to your server
ssh root@YOUR-IP
tar -xzvf /root/d4r-k8s.tar.gz -C /
# install Docker:
yum install -y --cacheonly --disablerepo=* /var/rpm_dir/docker-ce/*.rpm
```
|
Assuming you already downloaded the package from another machine that has internet access and transferred the files to your server via FTP, you can use the following command to install an RPM:
```
rpm -ivh package_name_x86_64.rpm
```
options:
* i = This installs a new package.
* v = Print verbose information
* h = Print 50 hash marks as the package archive is unpacked.
You can also check the rpm manual for more options and details
|
50,648,152
|
I want to install an RPM package (e.g. Python 3) and all of its dependencies on a Linux server that does not have an internet connection.
How can I do that?
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50648152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2128078/"
] |
In CentOS/RedHat you can use `yumdownloader` for specific packages; this downloads all required RPMs. Then compress the directory, upload it to the server without internet access, and install the RPMs.
[Here](https://adrianescutia.github.io/adrianes/snippets/kubernetes/install-kubernetes-docker-offline#download-docker-online-machinedocker-host) you can find an example of installing Kubernetes without internet access.
```sh
yumdownloader --assumeyes --destdir=/var/rpm_dir/docker-ce --resolve docker-ce
tar -czvf d4r-k8s.tar.gz /var/rpm_dir
# Upload files
scp d4r-k8s.tar.gz root@YOUR-IP:/root
# Connect to your server
ssh root@YOUR-IP
tar -xzvf /root/d4r-k8s.tar.gz -C /
# install Docker:
yum install -y --cacheonly --disablerepo=* /var/rpm_dir/docker-ce/*.rpm
```
|
There is a way, but it is quite tricky and might mess up your servers, so be **very careful**.
Nomenclature:
* **online** : your system that is connected to the repositories
* **offline**: your system that is not connected
Steps:
Compress your rpm database from the **offline** system and transfer it to the **online** system:
```
cd /var/lib/rpm/
tar -cvzf /tmp/rpmdb.tgz *
scp /tmp/rpmdb.tgz root@online:/tmp
```
on your **online** system; replace your rpm db with the one from the **offline** system:
```
cp -r /var/lib/rpm{,.bak} # back up your rpmdb from your online system. Make sure not to lose this!!
rm -rf /var/lib/rpm/*
cd /var/lib/rpm
tar -xvf /tmp/rpmdb.tgz # now your online system pretends to have the rpm database from the offline system. Don't start really installing / uninstalling rpms or you'll break everything
```
Now simulate your update in download-only mode (I ran this with zypper rather than yum, but it should be similar):
```
zypper up --download-only
```
Now you can fetch all the downloaded packages; they should suffice for updating your **offline** system.
And now restore your **online** machine:
```
rm -rf /var/lib/rpm
cp -r /var/lib/rpm{.bak,}
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
With `bash` version >= 3.0 and a regex:
```
[[ "$string" =~ _(.+)\. ]] && echo "${BASH_REMATCH[1]}"
```
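A hedged Python sketch of the same pattern, for comparison; note that the greedy `.+` extends the match to the *last* dot, so a multi-part extension such as `.txt.2` keeps its first component:

```python
import re

def extract(name):
    # Mirror the bash regex _(.+)\. : text between the first "_"
    # and the last "." (greedy match)
    m = re.search(r'_(.+)\.', name)
    return m.group(1) if m else None

print(extract('file2_CCC.txt'))    # CCC
print(extract('file2_KK_45.txt'))  # KK_45
```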
|
This is easy, except that it includes the initial underscore:
```
ls | grep -o "_[^.]*"
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
With `bash` version >= 3.0 and a regex:
```
[[ "$string" =~ _(.+)\. ]] && echo "${BASH_REMATCH[1]}"
```
|
Using `sed`
```
$ sed 's/[^_]*_//;s/\..*//' input_file
AAA_123_k
CCC
KK_45
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
If you need to process the file names one at a time (eg, within a `while read` loop) you can perform two parameter expansions, eg:
```
$ string='/my/directory/file1_AAA_123_k.txt.2'
$ tmp="${string#*_}"
$ tmp="${tmp%%.*}"
$ echo "${tmp}"
AAA_123_k
```
One idea to parse a list of file names at the same time:
```
$ cat file.list
/my/directory/file1_AAA_123_k.txt.2
/my/directory/file2_CCC.txt
/my/directory/file2_KK_45.txt
$ sed -En 's/[^_]*_([^.]+).*/\1/p' file.list
AAA_123_k
CCC
KK_45
```
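The same two-step trim can be sketched in Python; `trim` is a hypothetical helper mirroring `${string#*_}` (drop through the first `_`) and `${tmp%%.*}` (keep everything before the first `.`):

```python
def trim(name):
    # ${string#*_} : drop everything up to and including the first "_"
    tmp = name.split('_', 1)[1]
    # ${tmp%%.*}   : keep everything before the first "."
    return tmp.split('.', 1)[0]

print(trim('/my/directory/file1_AAA_123_k.txt.2'))  # AAA_123_k
```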
|
This is easy, except that it includes the initial underscore:
```
ls | grep -o "_[^.]*"
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
With your shown samples, with GNU `grep` you could try following code.
```
grep -oP '.*?_\K([^.]*)' Input_file
```
Explanation: GNU `grep`'s `-o` and `-P` options print only the matched text and enable PCRE regexes, respectively. The regex `.*?_\K([^.]*)` captures the value between the first `_` and the first `.`:
***Explanation of regex:***
```
.*?_ ##Matching from starting of line to till first occurrence of _ by using lazy match .*?
\K ##\K will forget all previous matched values by regex to make sure only needed values are printed.
([^.]*) ##Matching everything till first occurrence of dot as per need.
```
|
If you need to process the file names one at a time (eg, within a `while read` loop) you can perform two parameter expansions, eg:
```
$ string='/my/directory/file1_AAA_123_k.txt.2'
$ tmp="${string#*_}"
$ tmp="${tmp%%.*}"
$ echo "${tmp}"
AAA_123_k
```
One idea to parse a list of file names at the same time:
```
$ cat file.list
/my/directory/file1_AAA_123_k.txt.2
/my/directory/file2_CCC.txt
/my/directory/file2_KK_45.txt
$ sed -En 's/[^_]*_([^.]+).*/\1/p' file.list
AAA_123_k
CCC
KK_45
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
A simpler `sed` solution without any capturing group:
```sh
sed -E 's/^[^_]*_|\.[^.]*$//g' file
AAA_123_k
CCC
KK_45
```
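A rough Python equivalent of the same one-pass idea; here `\..*$` is used for the extension part (an assumption that everything from the first dot should go, which also drops multi-part extensions like `.txt.2`):

```python
import re

names = ['file1_AAA_123_k.txt.2', 'file2_CCC.txt', 'file2_KK_45.txt']
for name in names:
    # Remove the prefix up to the first "_" and everything from the first "."
    print(re.sub(r'^[^_]*_|\..*$', '', name))
# prints AAA_123_k, CCC, KK_45
```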
|
Using `sed`
```
$ sed 's/[^_]*_//;s/\..*//' input_file
AAA_123_k
CCC
KK_45
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
A simpler `sed` solution without any capturing group:
```sh
sed -E 's/^[^_]*_|\.[^.]*$//g' file
AAA_123_k
CCC
KK_45
```
|
This is easy, except that it includes the initial underscore:
```
ls | grep -o "_[^.]*"
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
You can use a single `sed` command like
```sh
sed -n 's~^.*/[^_/]*_\([^/]*\)\.[^./]*$~\1~p' <<< "$string"
sed -nE 's~^.*/[^_/]*_([^/]*)\.[^./]*$~\1~p' <<< "$string"
```
See the [online demo](https://ideone.com/1f8XLQ). *Details*:
* `^` - start of string
* `.*` - any text
* `/` - a `/` char
* `[^_/]*` - zero or more chars other than `/` and `_`
* `_` - a `_` char
* `\([^/]*\)` (POSIX BRE) / `([^/]*)` (POSIX ERE, enabled with `E` option) - Group 1: any zero or more chars other than `/`
* `\.` - a dot
* `[^./]*` - zero or more chars other than `.` and `/`
* `$` - end of string.
With `-n`, default line output is suppressed and `p` only prints the result of successful substitution.
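If the input is a full path, the directory part can also be stripped without regexes; this is just a sketch using Python's `pathlib`, assuming the underscore of interest is the first one in the file name itself:

```python
from pathlib import PurePosixPath

def tag_of(path):
    name = PurePosixPath(path).name  # strip the directory part
    after = name.split('_', 1)[1]    # drop through the first "_"
    return after.split('.', 1)[0]    # keep everything before the first "."

print(tag_of('/my/directory/file1_AAA_123_k.txt.2'))  # AAA_123_k
```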
|
Using `sed`
```
$ sed 's/[^_]*_//;s/\..*//' input_file
AAA_123_k
CCC
KK_45
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
With `bash` version >= 3.0 and a regex:
```
[[ "$string" =~ _(.+)\. ]] && echo "${BASH_REMATCH[1]}"
```
|
If you need to process the file names one at a time (eg, within a `while read` loop) you can perform two parameter expansions, eg:
```
$ string='/my/directory/file1_AAA_123_k.txt.2'
$ tmp="${string#*_}"
$ tmp="${tmp%%.*}"
$ echo "${tmp}"
AAA_123_k
```
One idea to parse a list of file names at the same time:
```
$ cat file.list
/my/directory/file1_AAA_123_k.txt.2
/my/directory/file2_CCC.txt
/my/directory/file2_KK_45.txt
$ sed -En 's/[^_]*_([^.]+).*/\1/p' file.list
AAA_123_k
CCC
KK_45
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
You can use a single `sed` command like
```sh
sed -n 's~^.*/[^_/]*_\([^/]*\)\.[^./]*$~\1~p' <<< "$string"
sed -nE 's~^.*/[^_/]*_([^/]*)\.[^./]*$~\1~p' <<< "$string"
```
See the [online demo](https://ideone.com/1f8XLQ). *Details*:
* `^` - start of string
* `.*` - any text
* `/` - a `/` char
* `[^_/]*` - zero or more chars other than `/` and `_`
* `_` - a `_` char
* `\([^/]*\)` (POSIX BRE) / `([^/]*)` (POSIX ERE, enabled with `E` option) - Group 1: any zero or more chars other than `/`
* `\.` - a dot
* `[^./]*` - zero or more chars other than `.` and `/`
* `$` - end of string.
With `-n`, default line output is suppressed and `p` only prints the result of successful substitution.
|
This is easy, except that it includes the initial underscore:
```
ls | grep -o "_[^.]*"
```
|
71,300,876
|
Using python elasticsearch-dsl:
```
class Record(Document):
tags = Keyword()
tags_suggest = Completion(preserve_position_increments=False)
def clean(self):
self.tags_suggest = {
"input": self.tags
}
class Index:
name = 'my-index'
settings = {
"number_of_shards": 2,
}
```
When I index
```
r1 = Record(tags=['my favourite tag', 'my hated tag'])
r2 = Record(tags=['my good tag', 'my bad tag'])
```
And when I try to use autocomplete with the word in the middle:
```
dsl = Record.search()
dsl = dsl.suggest("auto_complete", "favo", completion={"field": "tags_suggest"})
search_response = dsl.execute()
for option in search_response.suggest.auto_complete[0].options:
print(option.to_dict())
```
It won't return anything, but it will when I search "my favo". Any good practices to fix that (make it return 'my favourite tag' when I request suggestions for "favo")?
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71300876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9016861/"
] |
With your shown samples, with GNU `grep` you could try following code.
```
grep -oP '.*?_\K([^.]*)' Input_file
```
Explanation: GNU `grep`'s `-o` and `-P` options print only the matched text and enable PCRE regexes, respectively. The regex `.*?_\K([^.]*)` captures the value between the first `_` and the first `.`:
***Explanation of regex:***
```
.*?_ ##Matching from starting of line to till first occurrence of _ by using lazy match .*?
\K ##\K will forget all previous matched values by regex to make sure only needed values are printed.
([^.]*) ##Matching everything till first occurrence of dot as per need.
```
|
Using `sed`
```
$ sed 's/[^_]*_//;s/\..*//' input_file
AAA_123_k
CCC
KK_45
```
|
49,889,323
|
I have a script named `patchWidth.py` and it parses command line arguments with `argparse`:
```
# read command line arguments -- the code is able to process multiple files
parser = argparse.ArgumentParser(description='angle simulation trajectories')
parser.add_argument('filenames', metavar='filename', type=str, nargs='+')
parser.add_argument('-vec', metavar='v', type=float, nargs=3)
```
Suppose this script is run with the following:
```
>>> python patchWidth.py file.dat -vec 0. 0. 1.
```
Is there a way to get this entire thing as a string in python? I would like to be able to print to the output file what command was run with what arguments.
|
2018/04/18
|
[
"https://Stackoverflow.com/questions/49889323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112406/"
] |
Yes, you can use the sys module:
```
import sys
str(sys.argv) # arguments as string
```
Note that `argv[0]` is the script name. For more information, take a look at the [sys module documentation](https://docs.python.org/3/library/sys.html#sys.argv).
|
I do not know if it would be the best option, but...
```
import sys
" ".join(sys.argv)
```
Will return a string like `/the/path/of/file/my_file.py arg1 arg2 arg3`
|
49,889,323
|
I have a script named `patchWidth.py` and it parses command line arguments with `argparse`:
```
# read command line arguments -- the code is able to process multiple files
parser = argparse.ArgumentParser(description='angle simulation trajectories')
parser.add_argument('filenames', metavar='filename', type=str, nargs='+')
parser.add_argument('-vec', metavar='v', type=float, nargs=3)
```
Suppose this script is run with the following:
```
>>> python patchWidth.py file.dat -vec 0. 0. 1.
```
Is there a way to get this entire thing as a string in python? I would like to be able to print to the output file what command was run with what arguments.
|
2018/04/18
|
[
"https://Stackoverflow.com/questions/49889323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112406/"
] |
Yes, you can use the sys module:
```
import sys
str(sys.argv) # arguments as string
```
Note that `argv[0]` is the script name. For more information, take a look at the [sys module documentation](https://docs.python.org/3/library/sys.html#sys.argv).
|
This will also work when arguments themselves contain spaces.
```
import sys
" ".join("\""+arg+"\"" if " " in arg else arg for arg in sys.argv)
```
Sample output:
```
$ python3 /tmp/derp.py "my arg" 1 2 3
/tmp/derp.py "my arg" 1 2 3
```
This won't work if there's a string argument with a quotation mark in it; to get around that you'd have to escape the quotes, e.g. `arg.replace("\"", "\\\"")`. I left it out for brevity.
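The standard library can take care of this quoting: `shlex.quote` (Python 3.3+) escapes embedded spaces *and* quotation marks safely. A sketch with a simulated `argv` list (a real script would pass `sys.argv`):

```python
import shlex

def command_line(argv):
    # Quote each argument only when it needs quoting
    return " ".join(shlex.quote(arg) for arg in argv)

print(command_line(['/tmp/derp.py', 'my arg', '1', '2', '3']))
# /tmp/derp.py 'my arg' 1 2 3
```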
|
49,889,323
|
I have a script named `patchWidth.py` and it parses command line arguments with `argparse`:
```
# read command line arguments -- the code is able to process multiple files
parser = argparse.ArgumentParser(description='angle simulation trajectories')
parser.add_argument('filenames', metavar='filename', type=str, nargs='+')
parser.add_argument('-vec', metavar='v', type=float, nargs=3)
```
Suppose this script is run with the following:
```
>>> python patchWidth.py file.dat -vec 0. 0. 1.
```
Is there a way to get this entire thing as a string in python? I would like to be able to print to the output file what command was run with what arguments.
|
2018/04/18
|
[
"https://Stackoverflow.com/questions/49889323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2112406/"
] |
I do not know if it would be the best option, but...
```
import sys
" ".join(sys.argv)
```
Will return a string like `/the/path/of/file/my_file.py arg1 arg2 arg3`
|
This will work with commands that have space-separated strings in them.
```
import sys
" ".join("\""+arg+"\"" if " " in arg else arg for arg in sys.argv)
```
Sample output:
```
$ python3 /tmp/derp.py "my arg" 1 2 3
python3 /tmp/derp.py "my arg" 1 2 3
```
This won't work if there's a string argument with a quotation mark in it, to get around that you'd have to delimit the quotes like: `arg.replace("\"", "\\\"")`. I left it out for brevity.
|
70,899,538
|
Right now I have an Arraylist in java. When I call
```
myarraylist.get(0)
myarraylist.get(1)
myarraylist.get(2)
[0, 5, 10, 16]
[24, 29, 30, 35, 41, 45, 50]
[0, 6, 41, 45, 58]
```
are all different lists. What I need to do is get the first and second element of each of these lists, and put it in a list, like so:
```
[0,5]
[24,29]
[0,6]
```
I have tried different for loops and it seems like there is an easy way to do this in python but not in java.
|
2022/01/28
|
[
"https://Stackoverflow.com/questions/70899538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17215849/"
] |
`List<Integer> sublist = myarraylist.subList(0, 2);`
For `List#subList(int fromIndex, int toIndex)` the `toIndex` is exclusive. Therefore, to get the first two elements (indexes 0 and 1), the `toIndex` value has to be 2.
|
Try reading about Java 8 Stream API, specifically:
* [map method](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#map-java.util.function.Function-)
* [collect method](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#collect-java.util.stream.Collector-)
This should help you achieve what you need.
|
70,899,538
|
Right now I have an Arraylist in java. When I call
```
myarraylist.get(0)
myarraylist.get(1)
myarraylist.get(2)
[0, 5, 10, 16]
[24, 29, 30, 35, 41, 45, 50]
[0, 6, 41, 45, 58]
```
are all different lists. What I need to do is get the first and second element of each of these lists, and put it in a list, like so:
```
[0,5]
[24,29]
[0,6]
```
I have tried different for loops and it seems like there is an easy way to do this in python but not in java.
|
2022/01/28
|
[
"https://Stackoverflow.com/questions/70899538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17215849/"
] |
`List<Integer> sublist = myarraylist.subList(0, 2);`
For `List#subList(int fromIndex, int toIndex)` the `toIndex` is exclusive. Therefore, to get the first two elements (indexes 0 and 1), the `toIndex` value has to be 2.
|
1. Map through nested lists.
2. Create a sub-list of each nested list.
3. Collect the stream back into a new list.
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
class Scratch {
public static void main(String[] args) {
List<List<Integer>> myArrayList = new ArrayList<>();
myArrayList.add(Arrays.asList(0, 5, 10, 16));
myArrayList.add(Arrays.asList(24, 29, 30, 35, 41, 45, 50));
myArrayList.add(Arrays.asList(0, 6, 41, 45, 58));
System.out.println(myArrayList.stream().map(l -> l.subList(0, 2)).collect(Collectors.toList()));
// [[0, 5], [24, 29], [0, 6]]
}
}
```
|
24,090,225
|
Best way to remove all characters of a string until new line character is met python?
```
str = 'fakeline\nfirstline\nsecondline\nthirdline'
into
str = 'firstline\nsecondline\nthirdline'
```
|
2014/06/06
|
[
"https://Stackoverflow.com/questions/24090225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3388884/"
] |
Get the index of the newline and use it to [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) the string:
```
>>> s = 'fakeline\nfirstline\nsecondline\nthirdline'
>>> s[s.index('\n')+1:] # add 1 to get the character after the newline
'firstline\nsecondline\nthirdline'
```
Also, don't name your string `str` as it shadows the built-in `str` type.
**Edit:**
Another way (from Valentin Lorentz's comment):
```
s.split('\n', 1)[1]
```
I like this better than my answer. It splits the string just once and grabs the latter half of the split.
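`str.partition` is another single-scan option; unlike `split('\n', 1)[1]` it does not raise an `IndexError` when the string contains no newline (it just returns an empty tail):

```python
s = 'fakeline\nfirstline\nsecondline\nthirdline'
print(s.partition('\n')[2])  # text after the first newline
```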
|
`str.split("\n")` gives a list of the newline-delimited segments. You can then slice the list and rejoin it with `"\n"` (joining with an empty string would lose the newlines):
```
newstr = "\n".join(str.split("\n")[1:])
```
|
24,090,225
|
Best way to remove all characters of a string until new line character is met python?
```
str = 'fakeline\nfirstline\nsecondline\nthirdline'
into
str = 'firstline\nsecondline\nthirdline'
```
|
2014/06/06
|
[
"https://Stackoverflow.com/questions/24090225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3388884/"
] |
Get the index of the newline and use it to [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) the string:
```
>>> s = 'fakeline\nfirstline\nsecondline\nthirdline'
>>> s[s.index('\n')+1:] # add 1 to get the character after the newline
'firstline\nsecondline\nthirdline'
```
Also, don't name your string `str` as it shadows the built in `str` function.
**Edit:**
Another way (from Valentin Lorentz's comment):
```
s.split('\n', 1)[1]
```
I like this better than my answer. It's splits the string just once and grabs the latter half of the split.
|
You can use [`re.sub()`](https://docs.python.org/2/library/re.html#re.sub) ( *regular expression* replacement ) as well.
```
>>> import re
>>> s = 'fakeline\nfirstline\nsecondline\nthirdline'
>>> re.sub(r'^.*\n', '', s)
'firstline\nsecondline\nthirdline'
```
|
42,136,707
|
Hello I'm trying to make an live info screen to a school project,
I'm reading through a file which does a lot of different thing which depending of what line it's reading.
```
dclist = []
interface = ""
vrfmem = ""
db = sqlite3.connect('data/main.db')
cursor = db.cursor()
cursor.execute('''SELECT r1 FROM routers''')
all_rows = cursor.fetchall()
for row in all_rows:
dclist.append(row[0])
for items in dclist:
f = open('data/'+ items + '.txt', 'r+')
for line in f:
if 'interface Vlan' in line:
interface = re.search(r'(?<=\interface Vlan).*', line).group(0)
if 'vrf member' in line.next():
vrfmem = interface = re.search(r'(?<=\vrf member).*', line).group(0)
else:
vrfmem = "default"
if 'ip address' in line:
print(items + interface + vrfmem + "ip her" )
db.commit()
db.close()
```
As seen in the code, every line in my document i want to check the next line because if it matches a certain string, i set a variable.
From what I could read, Python has a built-in function `next()` that is supposed to be able to do the job for me. But when I run my code I'm presented with `AttributeError: 'str' object has no attribute 'next'`
|
2017/02/09
|
[
"https://Stackoverflow.com/questions/42136707",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4448852/"
] |
You install `gulp` globally so you can use the plain `gulp` command in your terminal, and you also install it locally (as a `package.json` dependency) so the dependency is not lost: you can copy your project to any computer, run `npm i`, and access gulp via `./node_modules/.bin/gulp` without any additional installation.
|
You don't even need to have `gulp` installed globally. Just have it locally and put gulp commands in `package.json` scripts like this:
```
"scripts": {
"start": "gulp",
"speed-test": "gulp speed-test -v",
"build-prod": "gulp build-prod",
"test": "NODE_ENV=test jasmine JASMINE_CONFIG_PATH=spec/support/jasmine.json"
},
```
Then everyone working on the same project can just `npm install` and start running commands without even having gulp globally installed.
* `npm start` will run `gulp`
* `npm run speed-test` will run `gulp speed-test -v`
* `npm run build-prod` will run `gulp build-prod`
And of course add as many commands as you want there. And if someone on the team has (or wants) `gulp` globally, they can run `gulp` commands directly from the terminal.
|
60,445,740
|
I have an excel file which generates chart based on the data available, the chart name is `thisChart`.
I want to copy `thisChart` from excel file to the ppt file. Now I know the 2 ways to do that ie VBA and python(using win32com.client). The problem with VBA is that its really time consuming and it randomly crashes this needing constant supervision thus I planned to do the same using python.
After researching I found out about `win32com.client` in python which allowed me to do the same.
I used the following script to do so.
```
# Grab the Active Instance of Excel.
ExcelApp = win32com.client.GetActiveObject("Excel.Application")
ExcelApp.Visible = True
# Grab the workbook with the charts.
xlWorkbook = ExcelApp.Workbooks.Open(r'C:\Users\prashant.kumar\Desktop\testxl.xlsx')
# Create a new instance of PowerPoint and make sure it's visible.
PPTApp = win32com.client.gencache.EnsureDispatch("PowerPoint.Application")
PPTApp.Visible = True
# Add a presentation to the PowerPoint Application, returns a Presentation Object.
PPTPresentation = PPTApp.Presentations.Add()
# Loop through each Worksheet.
for xlWorksheet in xlWorkbook.Worksheets:
# Grab the ChartObjects Collection for each sheet.
xlCharts = xlWorksheet.ChartObjects()
# Loop through each Chart in the ChartObjects Collection.
for index, xlChart in enumerate(xlCharts):
# Each chart needs to be on it's own slide, so at this point create a new slide.
PPTSlide = PPTPresentation.Slides.Add(Index=index + 1, Layout=12) # 12 is a blank layout
# Display something to the user.
print('Exporting Chart {} from Worksheet {}'.format(xlChart.Name, xlWorksheet.Name))
# Copy the chart.
xlChart.Copy()
# Paste the Object to the Slide
PPTSlide.Shapes.PasteSpecial(DataType=1)
# Save the presentation.
PPTPresentation.SaveAs(r"C:\Users\prashant.kumar\Desktop\outppt")
```
but it pastes the chart as an image whereas I want the chart to be pasted as interactive chart (just how it is when in the excel file)
The reason is that the quality deteriorates and it does not give me much flexibility to add minor modifications to chart in the ppt in future when needed.
Here is a comparison of the 2 outputs
[](https://i.stack.imgur.com/6ljWE.png)
The quality difference can be seen here and it gets worse when I zoom in.
Now my question is, Is there any way to paste the chart from excel to ppt in the chart format using python or any other way which is faster than VBA?
PS. I don't want to read the excel file data and generate chart in python and then paste to PPT since the actual charts are really complicated and would probably be very hard to make
|
2020/02/28
|
[
"https://Stackoverflow.com/questions/60445740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6372189/"
] |
You can use a `CASE` clause to decide which unit needs to be displayed.
For example:
```
SELECT (CASE
          WHEN price_col >= 1000000000 THEN CONCAT(price_col/1000000000,'B')
          WHEN price_col >= 1000000 THEN CONCAT(price_col/1000000,'M')
          WHEN price_col >= 1000 THEN CONCAT(price_col/1000,'K')
          ELSE price_col END) as new_price_col
FROM Table
```
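As a sanity check on the breakpoints, here is a hedged Python sketch of the same K/M/B idea (`humanize` is a hypothetical helper, not part of the SQL): values of a billion or more get `B`, a million or more get `M`, a thousand or more get `K`:

```python
def humanize(price):
    # Breakpoints: billions, millions, thousands
    if price >= 1_000_000_000:
        return f"{price / 1_000_000_000:g}B"
    if price >= 1_000_000:
        return f"{price / 1_000_000:g}M"
    if price >= 1_000:
        return f"{price / 1_000:g}K"
    return str(price)

print(humanize(2_500_000))  # 2.5M
```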
|
```
SELECT (CASE WHEN LENGTH(price) = 9 THEN CONCAT(price/1000000,'M')
             WHEN LENGTH(price) = 10 THEN CONCAT(price/1000000000,'B')
             ELSE price END) AS price
```
|
49,059,461
|
I have the following Python dict:
```
{
'parameter_010': False,
'parameter_009': False,
'parameter_008': False,
'parameter_005': 'C<sub>MAX</sub>',
'parameter_004': 'L',
'parameter_007': False,
'parameter_006': 'R',
'parameter_001': 'Foo',
'id': 7542,
'parameter_003': 'D',
'parameter_002': 'M'
}
```
As seen there are a number of fields named `parameter_nnn` where `nnn` is a sequential number. Some are `False` and others have values populated.
I would like to generate a list with just the `parameter_nnn` field values which, but just the ones which contains a given value, sorted by number from `001` upwards.
So in this specific case the desired output is:
```
["Foo", "M", "D", "L", "CMAX", "R"]
```
Which would be the pythonic way of doing this? I obviously can start iterating but wondering if there is something better than that.
Python 2.7
|
2018/03/01
|
[
"https://Stackoverflow.com/questions/49059461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5328289/"
] |
So, assuming you know that you are working with a JSON and how to deserialize:
```
>>> import json
>>> s = """{
... "parameter_010": false,
... "parameter_009": false,
... "parameter_008": false,
... "parameter_005": "CMAX",
... "parameter_004": "L",
... "parameter_007": false,
... "parameter_006": "R",
... "parameter_001": "Foo",
... "id": 7542,
... "parameter_003": "D",
... "parameter_002": "M"
... }"""
>>> d = json.loads(s)
```
If your `parameter_nnn` always and strictly follow this format, you can simply sort the items filtered by your requirements (since lexicographical sorting is what you want!):
```
>>> sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')])
[('parameter_001', 'Foo'), ('parameter_002', 'M'), ('parameter_003', 'D'), ('parameter_004', 'L'), ('parameter_005', 'CMAX'), ('parameter_006', 'R')]
```
If you just want the values, just do another pass:
```
>>> [v for _,v in sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')])]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
>>>
```
Note, you are going to have to loop somehow...
A more readable version:
```
>>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')]
>>> [v for _,v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
```
### EDIT: Major Caveat
Note, if the values can be `0` or any other falsy value that you actually want, then this won't work, so for example:
```
>>> pprint(d)
{'id': 7542,
'parameter_001': 'Foo',
'parameter_002': 'M',
'parameter_003': 'D',
'parameter_004': 'L',
'parameter_005': 'CMAX',
'parameter_006': 'R',
'parameter_007': False,
'parameter_008': False,
'parameter_009': False,
'parameter_010': False,
'parameter_011': 0}
>>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')]
>>> [v for _, v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
```
So if you want to filter instances of *`False`* specifically (and not `0`) then you have to use *`is`*:
```
>>> selection = [(k,v) for k, v in d.items() if v is not False and k.startswith('parameter')]
>>> [v for _, v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R', 0]
```
|
Here is one solution:
```
list(zip(*sorted(i for i in d.items() if i[0].startswith('parameter') and i[1])))[1]
# ('Foo', 'M', 'D', 'L', 'C<sub>MAX</sub>', 'R')
```
**Explanation**
* We filter for 2 conditions: key starts with 'parameter' and value is Truthy.
* `sorted` on `d.items()` returns a list of tuples sorted by dictionary key.
* `list(zip(*..))[1]` returns a tuple of values after the previous filtering and sorting.
* I haven't dealt with `<sub></sub>` as I have no idea where this is from and what logic should be applied to remove this (and other?) tagging.
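For completeness, a runnable check of the one-liner on the sample dict from the question:

```python
d = {
    'parameter_010': False, 'parameter_009': False, 'parameter_008': False,
    'parameter_005': 'C<sub>MAX</sub>', 'parameter_004': 'L',
    'parameter_007': False, 'parameter_006': 'R', 'parameter_001': 'Foo',
    'id': 7542, 'parameter_003': 'D', 'parameter_002': 'M',
}

# filter, sort by key, then transpose (key_tuple, value_tuple) and keep the values
values = list(zip(*sorted(i for i in d.items()
                          if i[0].startswith('parameter') and i[1])))[1]
print(values)  # ('Foo', 'M', 'D', 'L', 'C<sub>MAX</sub>', 'R')
```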
|
49,059,461
|
I have the following Python dict:
```
{
'parameter_010': False,
'parameter_009': False,
'parameter_008': False,
'parameter_005': 'C<sub>MAX</sub>',
'parameter_004': 'L',
'parameter_007': False,
'parameter_006': 'R',
'parameter_001': 'Foo',
'id': 7542,
'parameter_003': 'D',
'parameter_002': 'M'
}
```
As seen there are a number of fields named `parameter_nnn` where `nnn` is a sequential number. Some are `False` and others have values populated.
I would like to generate a list with just the `parameter_nnn` field values, but only the ones which contain a value, sorted by number from `001` upwards.
So in this specific case the desired output is:
```
["Foo", "M", "D", "L", "CMAX", "R"]
```
What would be the pythonic way of doing this? I obviously can start iterating, but I wonder if there is something better than that.
Python 2.7
|
2018/03/01
|
[
"https://Stackoverflow.com/questions/49059461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5328289/"
] |
So, assuming you know that you are working with a JSON and how to deserialize:
```
>>> import json
>>> s = """{
... "parameter_010": false,
... "parameter_009": false,
... "parameter_008": false,
... "parameter_005": "CMAX",
... "parameter_004": "L",
... "parameter_007": false,
... "parameter_006": "R",
... "parameter_001": "Foo",
... "id": 7542,
... "parameter_003": "D",
... "parameter_002": "M"
... }"""
>>> d = json.loads(s)
```
If your `parameter_nnn` always and strictly follow this format, you can simply sort the items filtered by your requirements (since lexicographical sorting is what you want!):
```
>>> sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')])
[('parameter_001', 'Foo'), ('parameter_002', 'M'), ('parameter_003', 'D'), ('parameter_004', 'L'), ('parameter_005', 'CMAX'), ('parameter_006', 'R')]
```
If you just want the values, just do another pass:
```
>>> [v for _,v in sorted([(k,v) for k, v in d.items() if v and k.startswith('parameter')])]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
>>>
```
Note, you are going to have to loop somehow...
A more readable version:
```
>>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')]
>>> [v for _,v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
```
### EDIT: Major Caveat
Note, if the values can be `0` or any other falsy value that you actually want, then this won't work, so for example:
```
>>> pprint(d)
{'id': 7542,
'parameter_001': 'Foo',
'parameter_002': 'M',
'parameter_003': 'D',
'parameter_004': 'L',
'parameter_005': 'CMAX',
'parameter_006': 'R',
'parameter_007': False,
'parameter_008': False,
'parameter_009': False,
'parameter_010': False,
'parameter_011': 0}
>>> selection = [(k,v) for k, v in d.items() if v and k.startswith('parameter')]
>>> [v for _, v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R']
```
So if you want to filter instances of *`False`* specifically (and not `0`) then you have to use *`is`*:
```
>>> selection = [(k,v) for k, v in d.items() if v is not False and k.startswith('parameter')]
>>> [v for _, v in sorted(selection)]
['Foo', 'M', 'D', 'L', 'CMAX', 'R', 0]
```
|
```
import collections
dicty = {
"parameter_010": False,
"parameter_009": False,
"parameter_008": False,
"parameter_005": "CMAX",
"parameter_004": "L",
"parameter_007": False,
"parameter_006": "R",
"parameter_001": "Foo",
"id": 7542,
"parameter_003": "D",
"parameter_002": "M"
}
result = []
od = collections.OrderedDict(sorted(dicty.items()))
for k, v in od.iteritems():
if v != False and "parameter" in k:
result.append(v)
print(result)
```
|
13,637,150
|
I am trying to call an .exe file that's not in my local Python directory using `subprocess.call()`. The command (as I type it into cmd.exe) is exactly as follows: `"C:\Program Files\R\R-2.15.2\bin\Rscript.exe" --vanilla C:\python\buyback_parse_guide.r`
The script runs, does what I need to do, and I have confirmed the output is correct.
Here's my python code, which I thought would do the exact same thing:
```
## Set Rcmd
Rcmd = r'"C:\Program Files\R\R-2.15.2\bin\Rscript.exe"'
## Set Rargs
Rargs = r'--vanilla C:\python\buyback_parse_guide.r'
retval = subprocess.call([Rcmd,Rargs],shell=True)
```
When I call `retval` in my Python console, it returns `1` and the .R script doesn't run, but I get no errors. I'm pretty sure this is a really simple syntax error... help? Much thanks!
|
2012/11/30
|
[
"https://Stackoverflow.com/questions/13637150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/489426/"
] |
To quote [the docs](http://docs.python.org/2/library/subprocess.html#popen-constructor):
>
> If shell is True, it is recommended to pass args as a string rather than as a sequence.
>
>
>
Splitting it up (either manually, or via `shlex`) just so `subprocess` can recombine them so the shell can split them again is silly.
I'm not sure why you think you need `shell=True` here. (If you don't have a good reason, you generally don't want it…) But even without `shell=True`:
>
> On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.
>
>
>
So, just give the shell the command line:
```
Rcmd = r'"C:\Program Files\R\R-2.15.2\bin\Rscript.exe" --vanilla C:\python\buyback_parse_guide.r'
retval = subprocess.call(Rcmd, shell=True)
```
|
According to [the docs](http://stat.ethz.ch/R-manual/R-patched/library/utils/html/Rscript.html), Rscript:
>
> … is an alternative front end for use in #! scripts and other scripting applications.
>
>
> … is convenient for writing #! scripts… (The standard Windows command line has no concept of #! scripts, but Cygwin shells do.)
>
>
> … is only supported on systems with the execv system call.
>
>
>
So, it is not the way to run R scripts from another program under Windows.
[This answer](https://stackoverflow.com/questions/3412911/difference-between-r-exe-rcmd-exe-rscript-exe-and-rterm-exe) says:
>
> Rscript.exe is your friend for batch scripts… For everything else, there's R.exe
>
>
>
So, unless you have some good reason to be using Rscript outside of a batch script, you should switch to R.exe.
You may wonder why it works under cmd.exe, but not from Python. I don't know the answer to that, and I don't think it's worth digging through code or experimenting to find out, but I can make some guesses.
One possibility is that when you're running from the command line, that's a `cmd.exe` that controls a terminal, while when you're running from `subprocess.call(shell=True)` or `os.system`, that's a headless `cmd.exe`. Running a .bat/.cmd batch file gets you a non-headless `cmd`, but running `cmd` directly from another app does not. R has historically had all kinds of complexities dealing with the Windows terminal, which is why they used to have separate Rterm.exe and Rcmd.exe tools. Nowadays, those are both merged into R.exe, and it should work just fine either way. But if you try doing things the docs say not to do, that may not be tested, it's perfectly reasonable that it may not work.
At any rate, it doesn't really matter why it works in some situations even though it's not documented to. That certainly doesn't mean it should work in other situations it's not documented to work in, or that you should try to force it to do so. Just do the right thing and run `R.exe` instead of `Rscript.exe`.
Unless you have some information that contradicts everything I've found in the documentation and everywhere else I can find, I'm placing my money on Rscript.exe itself being the problem.
You'll have to read the documentation on the invocation differences between `Rscript.exe` and `R.exe`, but they're not identical. According to [the intro docs](http://cran.r-project.org/doc/manuals/R-intro.html#Scripting-with-R),:
>
> If you just want to run a file foo.R of R commands, the recommended way is to use R CMD BATCH foo.R
>
>
>
According to your comment above:
>
> When I type "C:\R\R-2.15.2\bin\i386\R.exe" CMD BATCH C:\python\buyback\_parse\_guide.r into cmd.exe, the .R script runs successfully. What's the proper syntax for passing this into python?
>
>
>
That depends on the platform. On Windows, a list of arguments gets turned into a string, so you're better off just using a string so you don't have to debug the joining; on Unix, a string gets split into a list of arguments, so you're better off using a list so you don't have to debug the joining.
Since there are no spaces in the path, I'd take the quotes out.
So:
```
rcmd = r'C:\R\R-2.15.2\bin\i386\R.exe CMD BATCH C:\python\buyback_parse_guide.r'
retval = subprocess.call(rcmd)
```
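As a platform-neutral sketch of the same pattern (using `sys.executable` as a stand-in for `R.exe`, since R may not be installed):

```python
import subprocess
import sys

# On Windows a list of args is joined into a command string for CreateProcess;
# passing a list is the portable form on all platforms.
retval = subprocess.call([sys.executable, "-c", "print('hello from child')"])
print(retval)  # 0 on success
```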
|
63,106,413
|
I'm trying to find a python solution to extract the length of a specific sequence within a fasta file using the full header of the sequence as the query. The full header is stored as a variable earlier in the pipeline (i.e. "CONTIG"). I would like to save the output of this script as a variable to then use later on in the same pipeline.
Below is an updated version of the script using code provided by Lucía Balestrazzi.
Additional information: The following with-statement is nested inside a larger for-loop that cycles through subsamples of an original genome. The first subsample fasta in my directory has a single sequence ">chr1:0-40129801" with a length of 40129801. I'm trying to write out a text file "OUTPUT" that has some basic information about each subsample fasta. This text file will be used as an input for another program downstream.
Header names in the original fasta file are chr1, chr2, etc... while the header names in the subsample fastas are something along the lines of:
batch1.fa >chr1:0-40k
batch2.fa >chr1:40k-80k
...etc...
```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse(ORIGINAL_GENOME, "fasta")) #not the subsample
with open(GENOME_SUBSAMPLE, 'r') as FIN:
for LINE in FIN:
if LINE.startswith('>'):
#Example of "LINE"... >chr1:0-40129801
HEADER = re.sub('>','',LINE)
#HEADER = chr1:0-40129801
HEADER2 = re.sub('\n','',HEADER)
#HEADER2 = chr1:0-40129801 (no return character on the end)
CONTIG = HEADER2.split(":")[0]
#CONTIG = chr1
PART2_HEADER = HEADER2.split(":")[1]
#PART2_HEADER = 0-40129801
START = int(PART2_HEADER.split("-")[0])
#START = 0
END = int(PART2_HEADER.split("-")[1])
#END = 40129801
LENGTH = END-START
#LENGTH = 40129801 minus 0 = 40129801
#This is where I'm stuck...
ORIGINAL_CONTIG_LENGTH = len(record_dict[CONTIG]) #This returns "KeyError: 1"
#ORIGINAL_CONTIG_LENGTH = 223705999 (this is from the full genome, not the subsample).
OUTPUT.write(str(START) + '\t' + str(HEADER2) + '\t' + str(LENGTH) + '\t' + str(CONTIG) + '\t' + str(ORIGINAL_CONTIG_LENGTH) + '\n')
#OUTPUT = 0 chr1:0-40129801 40129801 chr1 223705999
OUTPUT.close()
```
I'm relatively new to bioinformatics. I know I'm messing up on how I'm using the dictionary, but I'm not quite sure how to fix it.
Any advice would be greatly appreciated. Thanks!
|
2020/07/26
|
[
"https://Stackoverflow.com/questions/63106413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7614836/"
] |
You can do it this way:
```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse("genome.fa", "fasta"))
print(len(record_dict["chr1"]))
```
or
```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse("genome.fa", "fasta"))
seq = record_dict["chr1"]
print(len(seq))
```
EDIT: Alternative code
```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse("genome.fa", "fasta"))
names = record_dict.keys()
for HEADER in names:
#HEADER = chr1:0-40129801
ORIGINAL_CONTIG_LENGTH = len(record_dict[HEADER])
CONTIG = HEADER.split(":")[0]
#CONTIG = chr1
PART2_HEADER = HEADER.split(":")[1]
#PART2_HEADER = 0-40129801
START = int(PART2_HEADER.split("-")[0])
END = int(PART2_HEADER.split("-")[1])
LENGTH = END-START
```
The idea is that you define the dict once, get its keys (all the contig headers) and store them as a variable, and then loop through the headers extracting the info you need. No need to loop through the file.
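The header-splitting step alone can be sketched and checked without Biopython (`parse_region` is an illustrative name, not from the answer):

```python
def parse_region(header):
    """Split a 'chr1:0-40129801' style header into (contig, start, end, length)."""
    contig, span = header.split(":")
    start, end = (int(x) for x in span.split("-"))
    return contig, start, end, end - start

print(parse_region("chr1:0-40129801"))  # ('chr1', 0, 40129801, 40129801)
```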
Cheers
|
This works, just changed the "CONTIG" variable to a string. Thanks Lucía for all your help the last couple of days!
```
import Bio.SeqIO as IO
record_dict = IO.to_dict(IO.parse(ORIGINAL_GENOME, "fasta")) #not the subsample
with open(GENOME_SUBSAMPLE, 'r') as FIN:
for LINE in FIN:
if LINE.startswith('>'):
#Example of "LINE"... >chr1:0-40129801
HEADER = re.sub('>','',LINE)
#HEADER = chr1:0-40129801
HEADER2 = re.sub('\n','',HEADER)
#HEADER2 = chr1:0-40129801 (no return character on the end)
CONTIG = HEADER2.split(":")[0]
#CONTIG = chr1
PART2_HEADER = HEADER2.split(":")[1]
#PART2_HEADER = 0-40129801
START = int(PART2_HEADER.split("-")[0])
#START = 0
END = int(PART2_HEADER.split("-")[1])
#END = 40129801
LENGTH = END-START
#LENGTH = 40129801 minus 0 = 40129801
                ORIGINAL_CONTIG_LENGTH = len(record_dict[str(CONTIG)]) #cast the key to str to avoid the KeyError
#ORIGINAL_CONTIG_LENGTH = 223705999 (this is from the full genome, not the subsample).
OUTPUT.write(str(START) + '\t' + str(HEADER2) + '\t' + str(LENGTH) + '\t' + str(CONTIG) + '\t' + str(ORIGINAL_CONTIG_LENGTH) + '\n')
#OUTPUT = 0 chr1:0-40129801 40129801 chr1 223705999
OUTPUT.close()
```
|
35,846,943
|
I was creating a function to compute a trimmed mean. To do this I removed the highest and lowest percent of the data and then computed the mean as usual. What I have so far is:
```
def trimmed_mean(data, percent):
from numpy import percentile
if percent < 50:
data_trimmed = [i for i in data
if i > percentile(data, percent)
and i < percentile(data, 100-percent)]
else:
data_trimmed = [i for i in data
if i < percentile(data, percent)
and i > percentile(data, 100-percent)]
return sum(data_trimmed) / float(len(data_trimmed))
```
But I get the wrong result: for `[37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]` trimmed by 10% the mean should be `20.16`, while I get `20.0`.
Is there any other way to remove the top and bottom of the data in Python?
Or is there anything else that I have done wrong?
|
2016/03/07
|
[
"https://Stackoverflow.com/questions/35846943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4408820/"
] |
You can take a look at this related question: [Trimmed Mean with Percentage Limit in Python?](https://stackoverflow.com/questions/19441730/trimmed-mean-with-percentage-limit-in-python)
In short, for scipy version > 0.14.0 the following does the job:
```
from scipy import stats
m = stats.trim_mean(X, percentage)
```
If you do not want to have a dependency on an external library then you can of course revert to an approach as shown in Chip Grandits' answer.
|
I would suggest sorting the array first and then just take a "slice in the the middle."
```
#some "fancy" numpy sort or even just plain old sorted()
#sorted_data = sorted(data) #uncomment to use plain python sorted
n = len(sorted_data)
outliers = n*percent/100 #may want some rounding logic if n is small
trimmed_data = sorted_data[outliers: n-outliers]
```
|
35,846,943
|
I was creating a function to compute a trimmed mean. To do this I removed the highest and lowest percent of the data and then computed the mean as usual. What I have so far is:
```
def trimmed_mean(data, percent):
from numpy import percentile
if percent < 50:
data_trimmed = [i for i in data
if i > percentile(data, percent)
and i < percentile(data, 100-percent)]
else:
data_trimmed = [i for i in data
if i < percentile(data, percent)
and i > percentile(data, 100-percent)]
return sum(data_trimmed) / float(len(data_trimmed))
```
But I get the wrong result: for `[37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]` trimmed by 10% the mean should be `20.16`, while I get `20.0`.
Is there any other way to remove the top and bottom of the data in Python?
Or is there anything else that I have done wrong?
|
2016/03/07
|
[
"https://Stackoverflow.com/questions/35846943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4408820/"
] |
You can take a look at this related question: [Trimmed Mean with Percentage Limit in Python?](https://stackoverflow.com/questions/19441730/trimmed-mean-with-percentage-limit-in-python)
In short, for scipy version > 0.14.0 the following does the job:
```
from scipy import stats
m = stats.trim_mean(X, percentage)
```
If you do not want to have a dependency on an external library then you can of course revert to an approach as shown in Chip Grandits' answer.
|
Here:
```
import numpy as np
def trimmed_mean(data, percent):
data = np.array(sorted(data))
trim = int(percent*data.size/100.0)
return data[trim:-trim].mean()
```
|
35,846,943
|
I was creating a function to compute a trimmed mean. To do this I removed the highest and lowest percent of the data and then computed the mean as usual. What I have so far is:
```
def trimmed_mean(data, percent):
from numpy import percentile
if percent < 50:
data_trimmed = [i for i in data
if i > percentile(data, percent)
and i < percentile(data, 100-percent)]
else:
data_trimmed = [i for i in data
if i < percentile(data, percent)
and i > percentile(data, 100-percent)]
return sum(data_trimmed) / float(len(data_trimmed))
```
But I get the wrong result: for `[37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]` trimmed by 10% the mean should be `20.16`, while I get `20.0`.
Is there any other way to remove the top and bottom of the data in Python?
Or is there anything else that I have done wrong?
|
2016/03/07
|
[
"https://Stackoverflow.com/questions/35846943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4408820/"
] |
You can take a look at this related question: [Trimmed Mean with Percentage Limit in Python?](https://stackoverflow.com/questions/19441730/trimmed-mean-with-percentage-limit-in-python)
In short, for scipy version > 0.14.0 the following does the job:
```
from scipy import stats
m = stats.trim_mean(X, percentage)
```
If you do not want to have a dependency on an external library then you can of course revert to an approach as shown in Chip Grandits' answer.
|
Maybe this'll work:
```
data = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]
percent = .1 # == 10%
def trimmed_mean(data, percent):
# sort list
data = sorted(data)
# number of elements to remove from both ends of list
g = int(percent * len(data))
# remove elements
data = data[g:-g]
# cast sum to float to avoid implicit casting to int
return float(sum(data)) / len(data)
print trimmed_mean(data, percent)
```
Output:
```
$ python trimmed_mean.py
20.16
```
|
73,603,035
|
We know we can use `sep.join()` or `+=` to concatenate strings. For example,
```
a = ["123f", "asd", "y]
print("".join(a))
# output: 1234asdy
```
In Java, naive `+` concatenation creates a new string each time, copying both operands, so building a string that way costs O(n^2) (which is why `StringBuilder` exists). But in Python, what does the `join` method do for a multiway merge?
A similar question is [How python implements concatenation?](https://stackoverflow.com/questions/56522126/how-python-implements-concatenation) It explains `+=` for a two-way merge.
|
2022/09/04
|
[
"https://Stackoverflow.com/questions/73603035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16395591/"
] |
for cpython version 3.X you can see the source code [here](https://github.com/python/cpython/blob/main/Objects/stringlib/join.h) and it does indeed calculate the total length beforehand and only does 1 allocation.
On a side note, if your application is limited by the speed of joining strings to the point that you have to think about the `join` implementation, then you probably shouldn't be using Python; consider C++ instead.
|
The operation is O(n). `join` takes an iterable. If it's not already a sequence, `join` will create one. Then, using the size of the separator and the size of each string in the list, a new string object is allocated. A series of `memcpy` calls then fills it. Creating the list, getting the sizes and doing the `memcpy` are all linear.
`+=` is faster for a single concatenation and still O(n). A new object the size of the two strings is created, and two `memcpy` calls do the work. Of course it only concatenates two strings; if you want to do more, `join` soon becomes the better option.
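A small check that the two approaches build the same string (timings omitted; this only verifies the result):

```python
parts = ["123f", "asd", "y"]

# join: sizes are summed up front, one allocation, then a copy per part
joined = "".join(parts)

# +=: conceptually a new object per step, copying both operands each time
# (CPython may resize in place as an optimization)
acc = ""
for p in parts:
    acc += p

print(joined)  # 123fasdy
```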
|
20,169,509
|
Can I import a Word document into a Python program so that its content can be read and questions can be answered using the data in the document? What would be the procedure for using the data in the file?
```
with open('animal data.txt', 'r')
```
I used this but it is not working.
|
2013/11/24
|
[
"https://Stackoverflow.com/questions/20169509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2994135/"
] |
XE16 supports OpenGL in live cards. Use the class GlRenderer: <https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/GlRenderer>
|
I would look at your app and determine if you want to have more user input or not and whether you want it to live in a specific part of your Timeline or just have it be launched when the user wants it.
Specifically, since Live Cards live in the Timeline, they will not be able to capture the swipe backward or swipe forwards gestures since those navigate the Timeline. See the "When to use Live Cards" section of: <https://developers.google.com/glass/develop/gdk/ui/index>
If you use an Immersion, however you will be able to use those swipe backwards and forwards gestures as well as these others: <https://developers.google.com/glass/develop/gdk/input/touch> This will give you complete control over the UI and touchpad, with the exception that swipe down should exit your Immersion.
The downside is that once the user exits your Immersion, they will need to start it again likely with a Voice Trigger, whereas a Live Card can live on in part of your Timeline.
You should be able to do your rendering in both a Surface, which a LiveCard can use or in whatever View you choose to put in your Activity which is what an Immersion is. GLSurfaceView for example may be what you need and that internally uses a Surface: <http://developer.android.com/guide/topics/graphics/opengl.html> Note that you'll want to avoid RemoteViews but I think you already figured that out.
|
10,443,295
|
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image.
I tried 'Image' to do the job but it requires 'mode' to be attributed.
I tried to do a trick. I would use Image.fromarray() to take the array to image but it attains 'F' mode by default when Image.merge requires 'L' mode images to merge. If I would declare the attribute of array in fromarray() to 'L' at first place, all the R G B images become distorted.
But, if I save the images and then open them and then merge, it works fine. Image reads the image with 'L' mode.
Now I have two issues.
First, I don't think this is an elegant way of doing the work, so if anyone knows a better way of doing it, please tell me.
Secondly, Image.SAVE is not working properly. Following are the errors I face:
```
In [7]: Image.SAVE(imagefile, 'JPEG')
----------------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>()
TypeError: 'dict' object is not callable
```
Please suggest solutions.
And please mind that the image is around 4000x4000 size array.
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10443295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372149/"
] |
Your distortion, I believe, is caused by the way you are splitting your original image into its individual bands and then resizing it again before putting it into merge:
```
image=Image.open("your image")
print(image.size) #size is inverted i.e columns first rows second eg: 500,250
#convert to array
li_r=list(image.getdata(band=0))
arr_r=np.array(li_r,dtype="uint8")
li_g=list(image.getdata(band=1))
arr_g=np.array(li_g,dtype="uint8")
li_b=list(image.getdata(band=2))
arr_b=np.array(li_b,dtype="uint8")
# reshape
reshaper=arr_r.reshape(250,500) #size flipped so it reshapes correctly
reshapeb=arr_b.reshape(250,500)
reshapeg=arr_g.reshape(250,500)
imr=Image.fromarray(reshaper,mode=None) # mode I
imb=Image.fromarray(reshapeb,mode=None)
img=Image.fromarray(reshapeg,mode=None)
#merge
merged=Image.merge("RGB",(imr,img,imb))
merged.show()
```
This works well!
|
If using a PIL Image, convert it to an array first and then proceed as below; if you read the file with matplotlib or cv2, you can work on the array directly.
```
import cv2
import matplotlib.pyplot as plt

image = cv2.imread('')[:,:,::-1]  # BGR -> RGB
image_2 = image[10:150,10:100]
print(image_2.shape)
img_r = image_2[:,:,0]
img_g = image_2[:,:,1]
img_b = image_2[:,:,2]
image_2 = img_r*0.2989 + 0.587*img_g + 0.114*img_b
image[10:150,10:100,0] = image_2
image[10:150,10:100,1] = image_2
image[10:150,10:100,2] = image_2
plt.imshow(image,cmap='gray')
```
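The per-channel weighting itself can be checked on a synthetic array (NumPy only, no image file needed):

```python
import numpy as np

img = np.zeros((2, 2, 3), dtype=np.float64)
img[..., 0] = 100  # R
img[..., 1] = 50   # G
img[..., 2] = 10   # B

# same luminance weights as in the snippet above
gray = img[..., 0] * 0.2989 + img[..., 1] * 0.587 + img[..., 2] * 0.114
print(gray[0, 0])  # ~60.38
```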
|
10,443,295
|
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image.
I tried 'Image' to do the job but it requires 'mode' to be attributed.
I tried to do a trick. I would use Image.fromarray() to take the array to image but it attains 'F' mode by default when Image.merge requires 'L' mode images to merge. If I would declare the attribute of array in fromarray() to 'L' at first place, all the R G B images become distorted.
But, if I save the images and then open them and then merge, it works fine. Image reads the image with 'L' mode.
Now I have two issues.
First, I don't think this is an elegant way of doing the work, so if anyone knows a better way of doing it, please tell me.
Secondly, Image.SAVE is not working properly. Following are the errors I face:
```
In [7]: Image.SAVE(imagefile, 'JPEG')
----------------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>()
TypeError: 'dict' object is not callable
```
Please suggest solutions.
And please mind that the image is around 4000x4000 size array.
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10443295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372149/"
] |
I don't really understand your question but here is an example of something similar I've done recently that seems like it might help:
```
# r, g, and b are 512x512 float arrays with values >= 0 and < 1.
from PIL import Image
import numpy as np
rgbArray = np.zeros((512,512,3), 'uint8')
rgbArray[..., 0] = r*256
rgbArray[..., 1] = g*256
rgbArray[..., 2] = b*256
img = Image.fromarray(rgbArray)
img.save('myimg.jpeg')
```
I hope that helps
|
```
rgb = np.dstack((r,g,b)) # stacks 3 h x w arrays -> h x w x 3
```
This code doesn't create a 3-D array if you pass 3 channels; 2 channels remain.
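For reference, the shape produced by `np.dstack` with three 2-D arrays can be checked directly:

```python
import numpy as np

r = np.full((2, 3), 255, dtype=np.uint8)
g = np.zeros((2, 3), dtype=np.uint8)
b = np.zeros((2, 3), dtype=np.uint8)

rgb = np.dstack((r, g, b))
print(rgb.shape)  # (2, 3, 3)
```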
|
10,443,295
|
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image.
I tried 'Image' to do the job but it requires 'mode' to be attributed.
I tried to do a trick. I would use Image.fromarray() to take the array to image but it attains 'F' mode by default when Image.merge requires 'L' mode images to merge. If I would declare the attribute of array in fromarray() to 'L' at first place, all the R G B images become distorted.
But, if I save the images and then open them and then merge, it works fine. Image reads the image with 'L' mode.
Now I have two issues.
First, I don't think this is an elegant way of doing the work, so if anyone knows a better way of doing it, please tell me.
Secondly, Image.SAVE is not working properly. Following are the errors I face:
```
In [7]: Image.SAVE(imagefile, 'JPEG')
----------------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>()
TypeError: 'dict' object is not callable
```
Please suggest solutions.
And please mind that the image is around 4000x4000 size array.
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10443295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372149/"
] |
Convert the numpy arrays to `uint8` before passing them to `Image.fromarray`
Eg. if you have floats in the range [0..1]:
```
r = Image.fromarray(numpy.uint8(r_array*255.999))
```
|
Your distortion, I believe, is caused by the way you are splitting your original image into its individual bands and then reshaping it before putting it into merge:
```
from PIL import Image
import numpy as np

image = Image.open("your image")
print(image.size)  # size is (columns, rows), e.g. (500, 250)

# convert each band to a flat uint8 array
li_r = list(image.getdata(band=0))
arr_r = np.array(li_r, dtype="uint8")
li_g = list(image.getdata(band=1))
arr_g = np.array(li_g, dtype="uint8")
li_b = list(image.getdata(band=2))
arr_b = np.array(li_b, dtype="uint8")

# reshape to (rows, columns) -- note the size tuple is flipped
reshaper = arr_r.reshape(250, 500)
reshapeb = arr_b.reshape(250, 500)
reshapeg = arr_g.reshape(250, 500)

imr = Image.fromarray(reshaper, mode=None)  # mode inferred as 'L' for uint8
imb = Image.fromarray(reshapeb, mode=None)
img = Image.fromarray(reshapeg, mode=None)

# merge back into a single RGB image
merged = Image.merge("RGB", (imr, img, imb))
merged.show()
```
This works well!
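The same split-and-merge can be done more compactly with `np.asarray`, avoiding the getdata/list/reshape round trip. A sketch, using a synthetic 250x500 RGB image in place of `Image.open(...)`:

```python
import numpy as np
from PIL import Image

# Synthetic 250 x 500 RGB image standing in for Image.open("your image").
arr = np.random.randint(0, 256, (250, 500, 3), dtype=np.uint8)
image = Image.fromarray(arr)

arr = np.asarray(image)              # shape (rows, cols, 3), dtype uint8
imr = Image.fromarray(arr[:, :, 0])  # each 2-D uint8 slice becomes an 'L' image
img = Image.fromarray(arr[:, :, 1])
imb = Image.fromarray(arr[:, :, 2])

merged = Image.merge("RGB", (imr, img, imb))
print(merged.size)  # (500, 250) -- PIL reports (width, height)
```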
|
10,443,295
|
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image.
I tried 'Image' to do the job but it requires 'mode' to be attributed.
I tried a trick. I would use Image.fromarray() to convert each array to an image, but it comes out in 'F' mode by default, while Image.merge requires 'L'-mode images to merge. If I declare the mode as 'L' in fromarray() in the first place, all the R, G, B images become distorted.
But if I save the images, then open them and merge, it works fine; Image reads them back in 'L' mode.
Now I have two issues.
First, I don't think this is an elegant way of doing the work, so if anyone knows a better way, please tell me.
Secondly, Image.SAVE is not working properly. Following are the errors I face:
```
In [7]: Image.SAVE(imagefile, 'JPEG')
----------------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>()
TypeError: 'dict' object is not callable
```
Please suggest solutions.
And please bear in mind that the image is around a 4000x4000 array.
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10443295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372149/"
] |
```
import numpy as np

rgb = np.dstack((r, g, b))  # stacks three h x w arrays -> h x w x 3
```
Note that this only produces an h x w x 3 array when r, g and b are single-channel 2-D arrays; inputs that already carry a channel axis will not stack into the expected shape.
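For reference, a quick check of what `dstack` returns for three single-channel 2-D inputs (synthetic zero arrays):

```python
import numpy as np

# Three 2-D single-channel bands of the same shape.
r = np.zeros((4, 4), dtype=np.uint8)
g = np.zeros((4, 4), dtype=np.uint8)
b = np.zeros((4, 4), dtype=np.uint8)

rgb = np.dstack((r, g, b))
print(rgb.shape)  # (4, 4, 3) when the inputs are 2-D h x w arrays
```

If the inputs already have a trailing channel axis, dstack concatenates along that axis instead, so the result will not be h x w x 3.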
|
Your distortion i believe is caused by the way you are splitting your original image into its individual bands and then resizing it again before putting it into merge;
```
`
image=Image.open("your image")
print(image.size) #size is inverted i.e columns first rows second eg: 500,250
#convert to array
li_r=list(image.getdata(band=0))
arr_r=np.array(li_r,dtype="uint8")
li_g=list(image.getdata(band=1))
arr_g=np.array(li_g,dtype="uint8")
li_b=list(image.getdata(band=2))
arr_b=np.array(li_b,dtype="uint8")
# reshape
reshaper=arr_r.reshape(250,500) #size flipped so it reshapes correctly
reshapeb=arr_b.reshape(250,500)
reshapeg=arr_g.reshape(250,500)
imr=Image.fromarray(reshaper,mode=None) # mode I
imb=Image.fromarray(reshapeb,mode=None)
img=Image.fromarray(reshapeg,mode=None)
#merge
merged=Image.merge("RGB",(imr,img,imb))
merged.show()
`
```
this works well !
|
10,443,295
|
So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image.
I tried 'Image' to do the job but it requires 'mode' to be attributed.
I tried a trick. I would use Image.fromarray() to convert each array to an image, but it comes out in 'F' mode by default, while Image.merge requires 'L'-mode images to merge. If I declare the mode as 'L' in fromarray() in the first place, all the R, G, B images become distorted.
But if I save the images, then open them and merge, it works fine; Image reads them back in 'L' mode.
Now I have two issues.
First, I don't think this is an elegant way of doing the work, so if anyone knows a better way, please tell me.
Secondly, Image.SAVE is not working properly. Following are the errors I face:
```
In [7]: Image.SAVE(imagefile, 'JPEG')
----------------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/media/New Volume/Documents/My own works/ISAC/SAMPLES/<ipython console> in <module>()
TypeError: 'dict' object is not callable
```
Please suggest solutions.
And please bear in mind that the image is around a 4000x4000 array.
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10443295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1372149/"
] |
I don't really understand your question but here is an example of something similar I've done recently that seems like it might help:
```
# r, g, and b are 512x512 float arrays with values >= 0 and < 1.
from PIL import Image
import numpy as np
rgbArray = np.zeros((512,512,3), 'uint8')
rgbArray[..., 0] = r*256
rgbArray[..., 1] = g*256
rgbArray[..., 2] = b*256
img = Image.fromarray(rgbArray)
img.save('myimg.jpeg')
```
I hope that helps
|
If you are using a PIL Image, convert it to an array first and then proceed as below; with matplotlib or cv2 you can work on the array directly.
```
import cv2
import matplotlib.pyplot as plt

image = cv2.imread('')[:, :, ::-1]  # reverse cv2's BGR channel order to RGB
image_2 = image[10:150, 10:100]
print(image_2.shape)
img_r = image_2[:, :, 0]
img_g = image_2[:, :, 1]
img_b = image_2[:, :, 2]
# weighted sum of the channels gives the grayscale value for the region
image_2 = img_r * 0.2989 + 0.587 * img_g + 0.114 * img_b
image[10:150, 10:100, 0] = image_2
image[10:150, 10:100, 1] = image_2
image[10:150, 10:100, 2] = image_2
plt.imshow(image, cmap='gray')
```
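The weighted sum above uses the standard ITU-R BT.601 luminance coefficients. A self-contained sketch of the same region-to-grayscale step on synthetic data (no image file or cv2 needed):

```python
import numpy as np

# Synthetic RGB image; real code would load one with cv2.imread or PIL.
image = np.random.randint(0, 256, (200, 120, 3)).astype(np.float64)

# Convert one rectangular region to grayscale with BT.601 weights.
region = image[10:150, 10:100]
gray = 0.2989 * region[:, :, 0] + 0.587 * region[:, :, 1] + 0.114 * region[:, :, 2]

# Write the single gray channel back into all three channels of the region.
image[10:150, 10:100, 0] = gray
image[10:150, 10:100, 1] = gray
image[10:150, 10:100, 2] = gray
print(gray.shape)  # (140, 90)
```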
|