| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
70,026,043
|
I'm trying to figure out why I'm getting this error message.
I'm running the webdriver.Chrome() with selenium in a Windows environment.
When I run:
```
driver.get("http://www.google.com")
```
Or any URL for that matter, Python returns an HTML doc to the terminal saying:
```
Access denied, your system policy has denied access to the requested URL. For assistance contact your network support team
```
I'm not sure what the issue is, so I need to figure out the error so I can send it to my IT. This is on a company-managed device.
|
2021/11/18
|
[
"https://Stackoverflow.com/questions/70026043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16909626/"
] |
An alternative to using `upvar` is to use `set <var_name>` (with a single argument) to retrieve the value of var\_name. When <var\_name> is `${mod}_sig`, you can use `set` to read the value of the variable without any possibility of altering the original variable (a risk that exists with `upvar`).
```
set modules {moduleA moduleB moduleC}
set moduleA_sig {1 2 3 4}
set moduleB_sig {11 22 33 44}
set moduleC_sig {111 222 333 444}
foreach mod $modules {
# Get the value of the variable named ${mod}_sig.
puts "$mod: [set ${mod}_sig]"
}
```
|
This is where you want the [`upvar`](https://www.tcl-lang.org/man/tcl8.6/TclCmd/upvar.htm) command
to "alias" one variable to another.
```
set modules {moduleA moduleB moduleC}
set moduleA_sig {1 2 3 4}
set moduleB_sig {11 22 33 44}
set moduleC_sig {111 222 333 444}
foreach mod $modules {
upvar 0 ${mod}_sig this
puts [list $mod $this]
}
```
outputs
```none
moduleA {1 2 3 4}
moduleB {11 22 33 44}
moduleC {111 222 333 444}
```
Here, level 0 is used to indicate the current stack frame.
Any modifications you make to `$this` will be reflected in `$moduleX_sig`.
```
set x 42
upvar 0 x y
incr y
puts $x ;# => 43
```
|
70,026,043
|
I'm trying to figure out why I'm getting this error message.
I'm running the webdriver.Chrome() with selenium in a Windows environment.
When I run:
```
driver.get("http://www.google.com")
```
Or any URL for that matter, Python returns an HTML doc to the terminal saying:
```
Access denied, your system policy has denied access to the requested URL. For assistance contact your network support team
```
I'm not sure what the issue is, so I need to figure out the error so I can send it to my IT. This is on a company-managed device.
|
2021/11/18
|
[
"https://Stackoverflow.com/questions/70026043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16909626/"
] |
Another approach is to use an array (or a [dict](https://www.tcl.tk/man/tcl8.6/TclCmd/dict.html)) to hold the values instead of trying to compose variable names on the fly at runtime.
Array:
```
#!/usr/bin/env tclsh
set modules {moduleA moduleB moduleC}
set sig(moduleA) {1 2 3 4}
set sig(moduleB) {11 22 33 44}
set sig(moduleC) {111 222 333 444}
foreach mod $modules {
puts "$mod: $sig($mod)"
}
```
Or with a dict:
```
#!/usr/bin/env tclsh
set modules {moduleA moduleB moduleC}
dict set sig moduleA {1 2 3 4}
dict set sig moduleB {11 22 33 44}
dict set sig moduleC {111 222 333 444}
foreach mod $modules {
puts "$mod: [dict get $sig $mod]"
}
```
|
This is where you want the [`upvar`](https://www.tcl-lang.org/man/tcl8.6/TclCmd/upvar.htm) command
to "alias" one variable to another.
```
set modules {moduleA moduleB moduleC}
set moduleA_sig {1 2 3 4}
set moduleB_sig {11 22 33 44}
set moduleC_sig {111 222 333 444}
foreach mod $modules {
upvar 0 ${mod}_sig this
puts [list $mod $this]
}
```
outputs
```none
moduleA {1 2 3 4}
moduleB {11 22 33 44}
moduleC {111 222 333 444}
```
Here, level 0 is used to indicate the current stack frame.
Any modifications you make to `$this` will be reflected in `$moduleX_sig`.
```
set x 42
upvar 0 x y
incr y
puts $x ;# => 43
```
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
I've stumbled upon the same question; here's how I understand it. A minimalistic LSTM example:
```
import tensorflow as tf
sample_input = tf.constant([[1,2,3]],dtype=tf.float32)
LSTM_CELL_SIZE = 2
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)
print(sess.run(output))
```
Notice that `state_is_tuple=True`, so when passing `state` to this `cell`, it needs to be in `tuple` form. `c_state` and `m_state` are probably "Memory State" and "Cell State", though I honestly am NOT sure, as these terms are only mentioned in the docs. In the code and in papers about `LSTM`s, the letters `h` and `c` are commonly used to denote the "output value" and the "cell state".
<http://colah.github.io/posts/2015-08-Understanding-LSTMs/>
Those tensors represent the combined internal state of the cell and should be passed together. The old way to do it was to simply concatenate them; the new way is to use tuples.
OLD WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=False)
state = tf.zeros([1,LSTM_CELL_SIZE*2])
output, state_new = lstm_cell(sample_input, state)
```
NEW WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
```
So basically all we did was change `state` from being one tensor of length `4` into two tensors of length `2`. The content remains the same: `[0,0,0,0]` becomes `([0,0],[0,0])`. (This is supposed to make it faster.)
|
Maybe this excerpt from the code will help
```
def __call__(self, inputs, state, scope=None):
"""Long short-term memory cell (LSTM)."""
with vs.variable_scope(scope or type(self).__name__): # "BasicLSTMCell"
# Parameters of gates are concatenated into one multiply for efficiency.
if self._state_is_tuple:
c, h = state
else:
c, h = array_ops.split(1, 2, state)
concat = _linear([inputs, h], 4 * self._num_units, True)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f, o = array_ops.split(1, 4, concat)
new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
self._activation(j))
new_h = self._activation(new_c) * sigmoid(o)
if self._state_is_tuple:
new_state = LSTMStateTuple(new_c, new_h)
else:
new_state = array_ops.concat(1, [new_c, new_h])
return new_h, new_state
```
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
I agree that the documentation is unclear. Looking at [`tf.nn.rnn_cell.LSTMCell.__call__`](https://github.com/tensorflow/tensorflow/blob/66d5d1fa0c192ca4c9b75cde216866805eb160f2/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py#L291-L382) clarifies (I took the code from TensorFlow 1.0.0):
```
def __call__(self, inputs, state, scope=None):
"""Run one step of LSTM.
Args:
inputs: input Tensor, 2D, batch x num_units.
state: if `state_is_tuple` is False, this must be a state Tensor,
`2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
tuple of state Tensors, both `2-D`, with column sizes `c_state` and
`m_state`.
scope: VariableScope for the created subgraph; defaults to "lstm_cell".
Returns:
A tuple containing:
- A `2-D, [batch x output_dim]`, Tensor representing the output of the
LSTM after reading `inputs` when previous state was `state`.
Here output_dim is:
num_proj if num_proj was set,
num_units otherwise.
- Tensor(s) representing the new state of LSTM after reading `inputs` when
the previous state was `state`. Same type and shape(s) as `state`.
Raises:
ValueError: If input size cannot be inferred from inputs via
static shape inference.
"""
num_proj = self._num_units if self._num_proj is None else self._num_proj
if self._state_is_tuple:
(c_prev, m_prev) = state
else:
c_prev = array_ops.slice(state, [0, 0], [-1, self._num_units])
m_prev = array_ops.slice(state, [0, self._num_units], [-1, num_proj])
dtype = inputs.dtype
input_size = inputs.get_shape().with_rank(2)[1]
if input_size.value is None:
raise ValueError("Could not infer input size from inputs.get_shape()[-1]")
with vs.variable_scope(scope or "lstm_cell",
initializer=self._initializer) as unit_scope:
if self._num_unit_shards is not None:
unit_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_unit_shards))
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=True,
scope=scope)
i, j, f, o = array_ops.split(
value=lstm_matrix, num_or_size_splits=4, axis=1)
# Diagonal connections
if self._use_peepholes:
with vs.variable_scope(unit_scope) as projection_scope:
if self._num_unit_shards is not None:
projection_scope.set_partitioner(None)
w_f_diag = vs.get_variable(
"w_f_diag", shape=[self._num_units], dtype=dtype)
w_i_diag = vs.get_variable(
"w_i_diag", shape=[self._num_units], dtype=dtype)
w_o_diag = vs.get_variable(
"w_o_diag", shape=[self._num_units], dtype=dtype)
if self._use_peepholes:
c = (sigmoid(f + self._forget_bias + w_f_diag * c_prev) * c_prev +
sigmoid(i + w_i_diag * c_prev) * self._activation(j))
else:
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
if self._cell_clip is not None:
# pylint: disable=invalid-unary-operand-type
c = clip_ops.clip_by_value(c, -self._cell_clip, self._cell_clip)
# pylint: enable=invalid-unary-operand-type
if self._use_peepholes:
m = sigmoid(o + w_o_diag * c) * self._activation(c)
else:
m = sigmoid(o) * self._activation(c)
if self._num_proj is not None:
with vs.variable_scope("projection") as proj_scope:
if self._num_proj_shards is not None:
proj_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_proj_shards))
m = _linear(m, self._num_proj, bias=False, scope=scope)
if self._proj_clip is not None:
# pylint: disable=invalid-unary-operand-type
m = clip_ops.clip_by_value(m, -self._proj_clip, self._proj_clip)
# pylint: enable=invalid-unary-operand-type
new_state = (LSTMStateTuple(c, m) if self._state_is_tuple else
array_ops.concat([c, m], 1))
return m, new_state
```
The key lines are:
```
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
```
and
```
m = sigmoid(o) * self._activation(c)
```
and
```
new_state = (LSTMStateTuple(c, m)
```
If you compare the code to compute `c` and `m` with the LSTM equations (see below), you can see it corresponds to the cell state (typically denoted with `c`) and hidden state (typically denoted with `h`), respectively:
[](https://i.stack.imgur.com/fFh9H.png)
`new_state = (LSTMStateTuple(c, m)` indicates that the first element of the returned state tuple is `c` (cell state a.k.a. `c_state`), and the second element of the returned state tuple is `m` (hidden state a.k.a. `m_state`).
|
Maybe this excerpt from the code will help
```
def __call__(self, inputs, state, scope=None):
"""Long short-term memory cell (LSTM)."""
with vs.variable_scope(scope or type(self).__name__): # "BasicLSTMCell"
# Parameters of gates are concatenated into one multiply for efficiency.
if self._state_is_tuple:
c, h = state
else:
c, h = array_ops.split(1, 2, state)
concat = _linear([inputs, h], 4 * self._num_units, True)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f, o = array_ops.split(1, 4, concat)
new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
self._activation(j))
new_h = self._activation(new_c) * sigmoid(o)
if self._state_is_tuple:
new_state = LSTMStateTuple(new_c, new_h)
else:
new_state = array_ops.concat(1, [new_c, new_h])
return new_h, new_state
```
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
Maybe this excerpt from the code will help
```
def __call__(self, inputs, state, scope=None):
"""Long short-term memory cell (LSTM)."""
with vs.variable_scope(scope or type(self).__name__): # "BasicLSTMCell"
# Parameters of gates are concatenated into one multiply for efficiency.
if self._state_is_tuple:
c, h = state
else:
c, h = array_ops.split(1, 2, state)
concat = _linear([inputs, h], 4 * self._num_units, True)
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
i, j, f, o = array_ops.split(1, 4, concat)
new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
self._activation(j))
new_h = self._activation(new_c) * sigmoid(o)
if self._state_is_tuple:
new_state = LSTMStateTuple(new_c, new_h)
else:
new_state = array_ops.concat(1, [new_c, new_h])
return new_h, new_state
```
|
<https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/rnn_cell_impl.py>
Lines 308-314:
class LSTMStateTuple(_LSTMStateTuple):
  """Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
  Stores two elements: `(c, h)`, in that order.
  Only used when `state_is_tuple=True`.
  """
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
I've stumbled upon the same question; here's how I understand it. A minimalistic LSTM example:
```
import tensorflow as tf
sample_input = tf.constant([[1,2,3]],dtype=tf.float32)
LSTM_CELL_SIZE = 2
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)
print(sess.run(output))
```
Notice that `state_is_tuple=True`, so when passing `state` to this `cell`, it needs to be in `tuple` form. `c_state` and `m_state` are probably "Memory State" and "Cell State", though I honestly am NOT sure, as these terms are only mentioned in the docs. In the code and in papers about `LSTM`s, the letters `h` and `c` are commonly used to denote the "output value" and the "cell state".
<http://colah.github.io/posts/2015-08-Understanding-LSTMs/>
Those tensors represent the combined internal state of the cell and should be passed together. The old way to do it was to simply concatenate them; the new way is to use tuples.
OLD WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=False)
state = tf.zeros([1,LSTM_CELL_SIZE*2])
output, state_new = lstm_cell(sample_input, state)
```
NEW WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
```
So basically all we did was change `state` from being one tensor of length `4` into two tensors of length `2`. The content remains the same: `[0,0,0,0]` becomes `([0,0],[0,0])`. (This is supposed to make it faster.)
|
I agree that the documentation is unclear. Looking at [`tf.nn.rnn_cell.LSTMCell.__call__`](https://github.com/tensorflow/tensorflow/blob/66d5d1fa0c192ca4c9b75cde216866805eb160f2/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py#L291-L382) clarifies (I took the code from TensorFlow 1.0.0):
```
def __call__(self, inputs, state, scope=None):
"""Run one step of LSTM.
Args:
inputs: input Tensor, 2D, batch x num_units.
state: if `state_is_tuple` is False, this must be a state Tensor,
`2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
tuple of state Tensors, both `2-D`, with column sizes `c_state` and
`m_state`.
scope: VariableScope for the created subgraph; defaults to "lstm_cell".
Returns:
A tuple containing:
- A `2-D, [batch x output_dim]`, Tensor representing the output of the
LSTM after reading `inputs` when previous state was `state`.
Here output_dim is:
num_proj if num_proj was set,
num_units otherwise.
- Tensor(s) representing the new state of LSTM after reading `inputs` when
the previous state was `state`. Same type and shape(s) as `state`.
Raises:
ValueError: If input size cannot be inferred from inputs via
static shape inference.
"""
num_proj = self._num_units if self._num_proj is None else self._num_proj
if self._state_is_tuple:
(c_prev, m_prev) = state
else:
c_prev = array_ops.slice(state, [0, 0], [-1, self._num_units])
m_prev = array_ops.slice(state, [0, self._num_units], [-1, num_proj])
dtype = inputs.dtype
input_size = inputs.get_shape().with_rank(2)[1]
if input_size.value is None:
raise ValueError("Could not infer input size from inputs.get_shape()[-1]")
with vs.variable_scope(scope or "lstm_cell",
initializer=self._initializer) as unit_scope:
if self._num_unit_shards is not None:
unit_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_unit_shards))
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=True,
scope=scope)
i, j, f, o = array_ops.split(
value=lstm_matrix, num_or_size_splits=4, axis=1)
# Diagonal connections
if self._use_peepholes:
with vs.variable_scope(unit_scope) as projection_scope:
if self._num_unit_shards is not None:
projection_scope.set_partitioner(None)
w_f_diag = vs.get_variable(
"w_f_diag", shape=[self._num_units], dtype=dtype)
w_i_diag = vs.get_variable(
"w_i_diag", shape=[self._num_units], dtype=dtype)
w_o_diag = vs.get_variable(
"w_o_diag", shape=[self._num_units], dtype=dtype)
if self._use_peepholes:
c = (sigmoid(f + self._forget_bias + w_f_diag * c_prev) * c_prev +
sigmoid(i + w_i_diag * c_prev) * self._activation(j))
else:
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
if self._cell_clip is not None:
# pylint: disable=invalid-unary-operand-type
c = clip_ops.clip_by_value(c, -self._cell_clip, self._cell_clip)
# pylint: enable=invalid-unary-operand-type
if self._use_peepholes:
m = sigmoid(o + w_o_diag * c) * self._activation(c)
else:
m = sigmoid(o) * self._activation(c)
if self._num_proj is not None:
with vs.variable_scope("projection") as proj_scope:
if self._num_proj_shards is not None:
proj_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_proj_shards))
m = _linear(m, self._num_proj, bias=False, scope=scope)
if self._proj_clip is not None:
# pylint: disable=invalid-unary-operand-type
m = clip_ops.clip_by_value(m, -self._proj_clip, self._proj_clip)
# pylint: enable=invalid-unary-operand-type
new_state = (LSTMStateTuple(c, m) if self._state_is_tuple else
array_ops.concat([c, m], 1))
return m, new_state
```
The key lines are:
```
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
```
and
```
m = sigmoid(o) * self._activation(c)
```
and
```
new_state = (LSTMStateTuple(c, m)
```
If you compare the code to compute `c` and `m` with the LSTM equations (see below), you can see it corresponds to the cell state (typically denoted with `c`) and hidden state (typically denoted with `h`), respectively:
[](https://i.stack.imgur.com/fFh9H.png)
`new_state = (LSTMStateTuple(c, m)` indicates that the first element of the returned state tuple is `c` (cell state a.k.a. `c_state`), and the second element of the returned state tuple is `m` (hidden state a.k.a. `m_state`).
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
I've stumbled upon the same question; here's how I understand it. A minimalistic LSTM example:
```
import tensorflow as tf
sample_input = tf.constant([[1,2,3]],dtype=tf.float32)
LSTM_CELL_SIZE = 2
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)
print(sess.run(output))
```
Notice that `state_is_tuple=True`, so when passing `state` to this `cell`, it needs to be in `tuple` form. `c_state` and `m_state` are probably "Memory State" and "Cell State", though I honestly am NOT sure, as these terms are only mentioned in the docs. In the code and in papers about `LSTM`s, the letters `h` and `c` are commonly used to denote the "output value" and the "cell state".
<http://colah.github.io/posts/2015-08-Understanding-LSTMs/>
Those tensors represent the combined internal state of the cell and should be passed together. The old way to do it was to simply concatenate them; the new way is to use tuples.
OLD WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=False)
state = tf.zeros([1,LSTM_CELL_SIZE*2])
output, state_new = lstm_cell(sample_input, state)
```
NEW WAY:
```
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_CELL_SIZE, state_is_tuple=True)
state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2
output, state_new = lstm_cell(sample_input, state)
```
So basically all we did was change `state` from being one tensor of length `4` into two tensors of length `2`. The content remains the same: `[0,0,0,0]` becomes `([0,0],[0,0])`. (This is supposed to make it faster.)
|
<https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/rnn_cell_impl.py>
Lines 308-314:
class LSTMStateTuple(_LSTMStateTuple):
  """Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
  Stores two elements: `(c, h)`, in that order.
  Only used when `state_is_tuple=True`.
  """
|
41,789,133
|
TensorFlow r0.12's documentation for tf.nn.rnn\_cell.LSTMCell describes `__call__` as:
```
tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)
```
where `state` is as follows:
>
> state: if state\_is\_tuple is False, this must be a state Tensor, 2-D, batch x state\_size. If state\_is\_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c\_state and m\_state.
>
>
>
What are `c_state` and `m_state`, and how do they fit into LSTMs? I cannot find any reference to them in the documentation.
[Here is a link to that page in the documentation.](https://web.archive.org/web/20170223030652/https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods)
|
2017/01/22
|
[
"https://Stackoverflow.com/questions/41789133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5299052/"
] |
I agree that the documentation is unclear. Looking at [`tf.nn.rnn_cell.LSTMCell.__call__`](https://github.com/tensorflow/tensorflow/blob/66d5d1fa0c192ca4c9b75cde216866805eb160f2/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py#L291-L382) clarifies (I took the code from TensorFlow 1.0.0):
```
def __call__(self, inputs, state, scope=None):
"""Run one step of LSTM.
Args:
inputs: input Tensor, 2D, batch x num_units.
state: if `state_is_tuple` is False, this must be a state Tensor,
`2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
tuple of state Tensors, both `2-D`, with column sizes `c_state` and
`m_state`.
scope: VariableScope for the created subgraph; defaults to "lstm_cell".
Returns:
A tuple containing:
- A `2-D, [batch x output_dim]`, Tensor representing the output of the
LSTM after reading `inputs` when previous state was `state`.
Here output_dim is:
num_proj if num_proj was set,
num_units otherwise.
- Tensor(s) representing the new state of LSTM after reading `inputs` when
the previous state was `state`. Same type and shape(s) as `state`.
Raises:
ValueError: If input size cannot be inferred from inputs via
static shape inference.
"""
num_proj = self._num_units if self._num_proj is None else self._num_proj
if self._state_is_tuple:
(c_prev, m_prev) = state
else:
c_prev = array_ops.slice(state, [0, 0], [-1, self._num_units])
m_prev = array_ops.slice(state, [0, self._num_units], [-1, num_proj])
dtype = inputs.dtype
input_size = inputs.get_shape().with_rank(2)[1]
if input_size.value is None:
raise ValueError("Could not infer input size from inputs.get_shape()[-1]")
with vs.variable_scope(scope or "lstm_cell",
initializer=self._initializer) as unit_scope:
if self._num_unit_shards is not None:
unit_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_unit_shards))
# i = input_gate, j = new_input, f = forget_gate, o = output_gate
lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=True,
scope=scope)
i, j, f, o = array_ops.split(
value=lstm_matrix, num_or_size_splits=4, axis=1)
# Diagonal connections
if self._use_peepholes:
with vs.variable_scope(unit_scope) as projection_scope:
if self._num_unit_shards is not None:
projection_scope.set_partitioner(None)
w_f_diag = vs.get_variable(
"w_f_diag", shape=[self._num_units], dtype=dtype)
w_i_diag = vs.get_variable(
"w_i_diag", shape=[self._num_units], dtype=dtype)
w_o_diag = vs.get_variable(
"w_o_diag", shape=[self._num_units], dtype=dtype)
if self._use_peepholes:
c = (sigmoid(f + self._forget_bias + w_f_diag * c_prev) * c_prev +
sigmoid(i + w_i_diag * c_prev) * self._activation(j))
else:
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
if self._cell_clip is not None:
# pylint: disable=invalid-unary-operand-type
c = clip_ops.clip_by_value(c, -self._cell_clip, self._cell_clip)
# pylint: enable=invalid-unary-operand-type
if self._use_peepholes:
m = sigmoid(o + w_o_diag * c) * self._activation(c)
else:
m = sigmoid(o) * self._activation(c)
if self._num_proj is not None:
with vs.variable_scope("projection") as proj_scope:
if self._num_proj_shards is not None:
proj_scope.set_partitioner(
partitioned_variables.fixed_size_partitioner(
self._num_proj_shards))
m = _linear(m, self._num_proj, bias=False, scope=scope)
if self._proj_clip is not None:
# pylint: disable=invalid-unary-operand-type
m = clip_ops.clip_by_value(m, -self._proj_clip, self._proj_clip)
# pylint: enable=invalid-unary-operand-type
new_state = (LSTMStateTuple(c, m) if self._state_is_tuple else
array_ops.concat([c, m], 1))
return m, new_state
```
The key lines are:
```
c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) *
self._activation(j))
```
and
```
m = sigmoid(o) * self._activation(c)
```
and
```
new_state = (LSTMStateTuple(c, m)
```
If you compare the code to compute `c` and `m` with the LSTM equations (see below), you can see it corresponds to the cell state (typically denoted with `c`) and hidden state (typically denoted with `h`), respectively:
[](https://i.stack.imgur.com/fFh9H.png)
`new_state = (LSTMStateTuple(c, m)` indicates that the first element of the returned state tuple is `c` (cell state a.k.a. `c_state`), and the second element of the returned state tuple is `m` (hidden state a.k.a. `m_state`).
|
<https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/rnn_cell_impl.py>
Lines 308-314:
class LSTMStateTuple(_LSTMStateTuple):
  """Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
  Stores two elements: `(c, h)`, in that order.
  Only used when `state_is_tuple=True`.
  """
|
71,524,462
|
I'm coding a little tool that displays the key presses on the screen with Tkinter, useful for screen recording.
**Is there a way to get a listener for all key presses of the system *globally* with Tkinter?** (for every keystroke including `F1`, `CTRL`, ..., even when the Tkinter window does not have the focus)
I currently know a solution with `pyHook.HookManager()`, `pythoncom.PumpMessages()`, and also solutions from [Listen for a shortcut (like WIN+A) even if the Python script does not have the focus](https://stackoverflow.com/questions/59191177/listen-for-a-shortcut-like-wina-even-if-the-python-script-does-not-have-the-f) but is there a 100% `tkinter` solution?
Indeed, [`pyhook`](https://pypi.org/project/pyHook/) is only for Python 2, and [`pyhook3`](https://github.com/gggfreak2003/PyHook3) seems to be abandoned, so I would prefer a **built-in Python3 / Tkinter solution** for Windows.
|
2022/03/18
|
[
"https://Stackoverflow.com/questions/71524462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1422096/"
] |
**Solution 1**: if you need to catch keyboard events in your current window, you can use:
```
from tkinter import *
def key_press(event):
key = event.char
print(f"'{key}' is pressed")
root = Tk()
root.geometry('640x480')
root.bind('<Key>', key_press)
mainloop()
```
**Solution 2**: if you want to capture keys regardless of which window has focus, you can use [`keyboard`](https://github.com/boppreh/keyboard)
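A minimal sketch of Solution 2, assuming the third-party [`keyboard`](https://github.com/boppreh/keyboard) package is installed (`pip install keyboard`); it hooks the keyboard globally, independently of Tkinter, and may require root privileges on Linux:
```py
import keyboard

def on_press(event):
    # event.name is the key name, e.g. 'a', 'ctrl', 'f1'
    print(f"'{event.name}' is pressed")

keyboard.on_press(on_press)  # global hook, fires even when the window has no focus
keyboard.wait()              # block so the script keeps running
```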
|
As suggested in [tkinter using two keys at the same time](https://stackoverflow.com/questions/39606019/tkinter-using-two-keys-at-the-same-time), you can detect all keys pressed at the same time with the following:
```py
from tkinter import *

history = []

def keyup(e):
    print(e.keycode)
    # remove the released key from the list of currently pressed keys
    if e.keycode in history:
        history.pop(history.index(e.keycode))
    var.set(str(history))

def keydown(e):
    # record the key if it is not already pressed
    if e.keycode not in history:
        history.append(e.keycode)
    var.set(str(history))

root = Tk()
var = StringVar()                      # `var` was not defined in the original snippet; a StringVar shown in a Label is one way to display the history
Label(root, textvariable=var).pack()
root.bind("<KeyPress>", keydown)
root.bind("<KeyRelease>", keyup)
root.mainloop()
```
|
56,600,918
|
When you call DataFrame.to\_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. But how do I perform the reverse operation?
I have a 'numpy.ndarray' object 'pred'. It looks like this:
>
> [[0.00599913 0.00506044 0.00508315 ... 0.00540191 0.00542058 0.00542058]]
>
>
>
I am trying to do it like this:
```
pred = np.uint8(pred)
print("Model predict:\n", pred.T)
```
But I get:
>
> [[0 0 0 ... 0 0 0]]
>
>
>
Why, after the conversion, do I not get something like this:
>
> 0 0 0 0 0 0 ... 0 0 0 0 0 0
>
>
>
And how do I write pred to a file?
```
pred.to_csv('pred.csv', header=None, index=False)
pred = pd.read_csv('pred.csv', sep=',', header=None)
```
Gives an error message:
```
AttributeError Traceback (most recent call last)
<ipython-input-68-b223b39b5db1> in <module>()
----> 1 pred.to_csv('pred.csv', header=None, index=False)
2 pred = pd.read_csv('pred.csv', sep=',', header=None)
AttributeError: 'numpy.ndarray' object has no attribute 'to_csv'
```
Please help me figure this out.
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56600918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11376406/"
] |
`pred` is an `ndarray`. It does not have a `to_csv` method. That's something a `pandas` `DataFrame` has.
But let's look at the first part.
Copying your array display, adding commas, lets me make a list:
```
In [1]: alist = [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.
...: 00542058]]
In [2]: alist
Out[2]: [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.00542058]]
```
and make an array from that:
```
In [3]: arr = np.array(alist)
In [8]: print(arr)
[[0.00599913 0.00506044 0.00508315 0.00540191 0.00542058 0.00542058]]
```
or the `repr` display that `ipython` gives as the default:
```
In [4]: arr
Out[4]:
array([[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058,
0.00542058]])
```
Because of the double brackets, this is a 2d array. Its transpose will have shape (6,1).
```
In [5]: arr.shape
Out[5]: (1, 6)
```
Conversion to `uint8` works as expected (I prefer the `astype` version). But
```
In [6]: np.uint8(arr)
Out[6]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8)
In [7]: arr.astype('uint8')
Out[7]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8)
```
The converted shape is (1,6), as before.
The conversion is nearly meaningless: the values are all small, between 0 and 1, so converting them to small (1-byte) unsigned integers predictably produces all 0s.
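If the remaining goal is just to write `pred` to a file without pandas, a minimal sketch using NumPy alone might look like this; the example array and file name are placeholders:
```py
import numpy as np

pred = np.array([[0.00599913, 0.00506044, 0.00508315]])  # stand-in for the real predictions
np.savetxt('pred.csv', pred.T, delimiter=',')            # one value per row
pred_back = np.loadtxt('pred.csv', delimiter=',')        # read it back as floats
print(pred_back)
```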
|
```py
import numpy as np
import pandas as pd
x = [1,2,3,4,5,6,7]
x = np.array(x)
y = pd.Series(x)
print(y)
y.to_csv('a.csv')
```
|
56,600,918
|
When you call DataFrame.to\_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. But how do I perform the reverse operation?
I have a 'numpy.ndarray' object 'pred'. It looks like this:
>
> [[0.00599913 0.00506044 0.00508315 ... 0.00540191 0.00542058 0.00542058]]
>
>
>
I am trying to do it like this:
```
pred = np.uint8(pred)
print("Model predict:\n", pred.T)
```
But I get:
>
> [[0 0 0 ... 0 0 0]]
>
>
>
Why, after the conversion, do I not get something like this:
>
> 0 0 0 0 0 0 ... 0 0 0 0 0 0
>
>
>
And how do I write pred to a file?
```
pred.to_csv('pred.csv', header=None, index=False)
pred = pd.read_csv('pred.csv', sep=',', header=None)
```
Gives an error message:
```
AttributeError Traceback (most recent call last)
<ipython-input-68-b223b39b5db1> in <module>()
----> 1 pred.to_csv('pred.csv', header=None, index=False)
2 pred = pd.read_csv('pred.csv', sep=',', header=None)
AttributeError: 'numpy.ndarray' object has no attribute 'to_csv'
```
Please help me figure this out.
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56600918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11376406/"
] |
You can solve the issue with one line of code that converts the ndarray to a pandas DataFrame and then writes it to a CSV file:
```
pd.DataFrame(X_train_res).to_csv("x_train_smote_oversample.csv")
```
|
`pred` is an `ndarray`. It does not have a `to_csv` method. That's something a `pandas` `DataFrame` has.
But let's look at the first part.
Copying your array display, adding commas, lets me make a list:
```
In [1]: alist = [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.
...: 00542058]]
In [2]: alist
Out[2]: [[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058, 0.00542058]]
```
and make an array from that:
```
In [3]: arr = np.array(alist)
In [8]: print(arr)
[[0.00599913 0.00506044 0.00508315 0.00540191 0.00542058 0.00542058]]
```
or the `repr` display that `ipython` gives as the default:
```
In [4]: arr
Out[4]:
array([[0.00599913, 0.00506044, 0.00508315, 0.00540191, 0.00542058,
0.00542058]])
```
Because of the double brackets, this is a 2d array. Its transpose will have shape (6,1).
```
In [5]: arr.shape
Out[5]: (1, 6)
```
Conversion to `uint8` works as expected (I prefer the `astype` version). But
```
In [6]: np.uint8(arr)
Out[6]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8)
In [7]: arr.astype('uint8')
Out[7]: array([[0, 0, 0, 0, 0, 0]], dtype=uint8)
```
The converted shape is (1,6), as before.
The conversion is nearly meaningless: the values are all small, between 0 and 1, so converting them to small (1-byte) unsigned integers predictably produces all 0s.
|
56,600,918
|
When you call DataFrame.to\_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. But how do I perform the reverse operation?
I have a 'numpy.ndarray' object 'pred'. It looks like this:
>
> [[0.00599913 0.00506044 0.00508315 ... 0.00540191 0.00542058 0.00542058]]
>
>
>
I am trying to do it like this:
```
pred = np.uint8(pred)
print("Model predict:\n", pred.T)
```
But I get:
>
> [[0 0 0 ... 0 0 0]]
>
>
>
Why, after the conversion, do I not get something like this:
>
> 0 0 0 0 0 0 ... 0 0 0 0 0 0
>
>
>
And how do I write pred to a file?
```
pred.to_csv('pred.csv', header=None, index=False)
pred = pd.read_csv('pred.csv', sep=',', header=None)
```
Gives an error message:
```
AttributeError Traceback (most recent call last)
<ipython-input-68-b223b39b5db1> in <module>()
----> 1 pred.to_csv('pred.csv', header=None, index=False)
2 pred = pd.read_csv('pred.csv', sep=',', header=None)
AttributeError: 'numpy.ndarray' object has no attribute 'to_csv'
```
Please help me figure this out.
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56600918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11376406/"
] |
You can solve the issue with one line of code that converts the ndarray to a pandas DataFrame and then writes it to a CSV file:
```
pd.DataFrame(X_train_res).to_csv("x_train_smote_oversample.csv")
```
|
```py
import numpy as np
import pandas as pd
x = [1,2,3,4,5,6,7]
x = np.array(x)
y = pd.Series(x)
print(y)
y.to_csv('a.csv')
```
|
56,651,258
|
I am trying to install the **owlready2** lib on Ubuntu by following the method below, but I face a problem.
* I updated the system and applications
* Installed Python 3 and made it the working version (default)
* Installed pip3
* Used pip and pip3 to install the owlready2 lib
But I ran into the problem below, which seems to be an issue with the library package:
>
> error: can't copy './hermit/org/semanticweb/hermiT/hierarchy':
> doesn't exist or not a regular file"
>
>
> Command /usr/bin/python3 -c "import setuptools,
> tokenize;**file**='/tmp/pip\_buil
> d\_root/owlready2/setup.py';exec(compile(getattr(tokenize, 'open',
> open)(**file** ).read().replace('\r\n', '\n'), **file**, 'exec'))"
> install --record /tmp/pip-lq v533ik-record/install-record.txt
> --single-version-externally-managed --compile f ailed with error code 1 in /tmp/pip\_build\_root/owlready2 Storing debug log for failure in
> /home/ubuntu/.pip/pip.log
>
>
>
Does anyone have any ideas how to resolve this?
|
2019/06/18
|
[
"https://Stackoverflow.com/questions/56651258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5851759/"
] |
Try to install your package with the following command:
```
python3 -m pip install -I owlready2
```
If pip3 does not work, you can also install Owlready2 manually: download the sources, then run in a terminal:
```
cd /path/to/Owlready2
python setup.py build
python setup.py install # as root
```
Also, it would be a good idea to install pip3 and try to install your package with `pip3`; commands below:
```
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python3 get-pip.py --user
```
|
I encountered the same problem.
It seems that the issue might lie in something that was added in version 0.14 (at the time of writing, the newest version is 0.19). If the owlready2 version is newer than 0.13, you will encounter the problem.
I have tested these Python versions: 3.7.3 (works), 3.6.8 (works), 3.5.2 (works until v0.13), 3.4.3 (works until v0.13).
To install version v0.13 of owlready2:
```
pip install owlready2==0.13
```
|
12,199,819
|
I'm a beginner programmer, and I've been trying to use the Python markdown library in my web app. Everything works fine except the nl2br extension.
When I try to convert a text file to HTML using md.convert(text), it doesn't seem to convert newlines to `<br>`.
For example, before I convert, the text is:
```
Puerto Rico
===========
------------------------------
### Game Rules
hello world!
```
after I convert, I get:
```
<h1>Puerto Rico</h1>
<hr />
<h3>Game Rules</h3>
<p>hello world!</p>
```
My understanding is that the blank lines are represented by '\n' and should be converted to `<br>`, but I'm not getting that result. Here's my code:
```
import markdown
md = markdown.Markdown(safe_mode='escape',extensions=['nl2br'])
html = md.convert(text)
```
Please let me know if you have any idea or can point me in the right direction. Thank you.
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12199819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623886/"
] |
Try adding two or more spaces at the end of a line to insert `<br/>` tags.
Example:
```
hello
world
```
results in
```
<p>hello <br>
world</p>
```
Notice that there are two spaces after the word hello. This only works if you have some text before the two spaces at the end of a line. But this has nothing to do with your nl2br extension; this is standard Markdown.
My advice is, if you don't explicitly have to do this conversion, just don't do it. Using paragraphs, i.e. `<p>` tags, is the cleaner way to separate text regions.
If you simply want to have more space after your `<h3>` headlines, then define some CSS for it:
```
h3 { margin-bottom: 4em; }
```
Imagine you do spacing with `<br>` tags after your headlines in all of your 500 wiki pages, and later you decide that it's 20px too much space. Then you have to edit all your pages by hand and remove two `<br>` tags on every page. With CSS, you just edit one line in a CSS file.
|
Found this question looking for a clarification myself. Hence adding an update despite being 7 years late.
---
Reference thread on the Python Markdown Project: <https://github.com/Python-Markdown/markdown/issues/707>
It turns out that this is indeed the expected behaviour: the `nl2br` extension converts only *single* newlines occurring *within* a block, **not** around it. Which means,
```
This is
a block
This is a different
block
```
gets converted to
```html
<p>This is<br/>block</p>\n<p>This is a different<br/>block</p>
```
but when you have **distinct, separate blocks**,
```
Puerto Rico
===========
------------------------------
### Game Rules
hello world!
```
all surrounding newlines are collapsed, and no `<br/>` tags are injected.
```html
<h1>Puerto Rico</h1>
<hr />
<h3>Game Rules</h3>
<p>hello world!</p>
```
|
44,548,111
|
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes.
As part of that, I want to create a new schema in my Postgres database for every new application that I provision. The application connects using the following connection string (i.e., instance1, instance2, instance3, and so on):
```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1&currentSchema=instance1
```
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.
I am creating the schemas with the following SQL:
```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
To reproduce this problem, I quickly wrote a Python script that connects to two different schemas of the same database and creates the same table, and it works fine.
```
import psycopg2
import sys
import random
_user = raw_input("user: ")
con = None
try:
con = psycopg2.connect(database='gerrit', user=_user,
password=_user, host='localhost')
cur = con.cursor()
cur.execute('SELECT version()')
ver = cur.fetchone()
print ver
table_name = 'tbl_%d' %(1)#random.randint(1,100))
cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
con.commit()
cur.execute('SELECT * from %s' %(table_name))
ver = cur.fetchone()
print ver
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
```
The output is as follows:
```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```
Since I am able to verify this workflow from Python, is this a limitation of **JDBC** or of the applications (gerrit & sonarqube)? Has anyone come across this problem with Postgres?
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44548111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4189388/"
] |
For this kind of operation it would be better to use collections;
the method [removeAll()](https://docs.oracle.com/javase/8/docs/api/java/util/List.html#removeAll-java.util.Collection-) will filter the data containers. From the doc:
>
> Removes from this list all of its elements that are contained in the
> specified collection (optional operation).
>
>
>
```
List<Integer> myVarListA = Arrays.asList(26, 13, 88, 9);
List<Integer> myVarListB = Arrays.asList(26, 1, 8, 12);
List<Integer> myVarListAcomplementB = new ArrayList<>(myVarListA);
List<Integer> myVarListBcomplementA = new ArrayList<>(myVarListB);
myVarListAcomplementB.removeAll(myVarListB);
myVarListBcomplementA.removeAll(myVarListA);
System.out.println("elements in A but no in B: " + myVarListAcomplementB);
System.out.println("elements in B but no in A: " + myVarListBcomplementA);
myVarListBcomplementA.addAll(myVarListAcomplementB);
System.out.println("both together: " + myVarListBcomplementA);
```
|
You may try this:
```
a.removeAll(b);
```
|
44,548,111
|
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes.
As part of that, I want to create a new schema in my Postgres database for every new application that I provision. The application connects using the following connection string (i.e., instance1, instance2, instance3, and so on):
```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1&currentSchema=instance1
```
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.
I am creating the schemas with the following SQL:
```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
To reproduce this problem, I quickly wrote a Python script that connects to two different schemas of the same database and creates the same table, and it works fine.
```
import psycopg2
import sys
import random
_user = raw_input("user: ")
con = None
try:
con = psycopg2.connect(database='gerrit', user=_user,
password=_user, host='localhost')
cur = con.cursor()
cur.execute('SELECT version()')
ver = cur.fetchone()
print ver
table_name = 'tbl_%d' %(1)#random.randint(1,100))
cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
con.commit()
cur.execute('SELECT * from %s' %(table_name))
ver = cur.fetchone()
print ver
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
```
The output is as follows:
```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```
Since I am able to verify this workflow from Python, is this a limitation of **JDBC** or of the applications (gerrit & sonarqube)? Has anyone come across this problem with Postgres?
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44548111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4189388/"
] |
For this kind of operation it would be better to use collections;
the method [removeAll()](https://docs.oracle.com/javase/8/docs/api/java/util/List.html#removeAll-java.util.Collection-) will filter the data containers. From the doc:
>
> Removes from this list all of its elements that are contained in the
> specified collection (optional operation).
>
>
>
```
List<Integer> myVarListA = Arrays.asList(26, 13, 88, 9);
List<Integer> myVarListB = Arrays.asList(26, 1, 8, 12);
List<Integer> myVarListAcomplementB = new ArrayList<>(myVarListA);
List<Integer> myVarListBcomplementA = new ArrayList<>(myVarListB);
myVarListAcomplementB.removeAll(myVarListB);
myVarListBcomplementA.removeAll(myVarListA);
System.out.println("elements in A but no in B: " + myVarListAcomplementB);
System.out.println("elements in B but no in A: " + myVarListBcomplementA);
myVarListBcomplementA.addAll(myVarListAcomplementB);
System.out.println("both together: " + myVarListBcomplementA);
```
|
Simply iterate over the second list: add the elements that are not in the first list and remove the ones that are.
```
for (Integer elem : secondList)
if (firstList.contains(elem))
firstList.remove(elem);
else
firstList.add(elem);
```
In `firstList` you will have the values present only in one of the lists.
|
44,548,111
|
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes.
As part of that, I want to create a new schema in my Postgres database for every new application that I provision. The application connects using the following connection string (i.e., instance1, instance2, instance3, and so on):
```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1&currentSchema=instance1
```
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.
I am creating the schemas with the following SQL:
```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
To reproduce this problem, I quickly wrote a Python script that connects to two different schemas of the same database and creates the same table, and it works fine.
```
import psycopg2
import sys
import random
_user = raw_input("user: ")
con = None
try:
con = psycopg2.connect(database='gerrit', user=_user,
password=_user, host='localhost')
cur = con.cursor()
cur.execute('SELECT version()')
ver = cur.fetchone()
print ver
table_name = 'tbl_%d' %(1)#random.randint(1,100))
cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
con.commit()
cur.execute('SELECT * from %s' %(table_name))
ver = cur.fetchone()
print ver
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
```
The output is as follows:
```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```
Since I am able to verify this workflow from Python, is this a limitation of **JDBC** or of the applications (gerrit & sonarqube)? Has anyone come across this problem with Postgres?
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44548111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4189388/"
] |
For this kind of operation it would be better to use collections;
the method [removeAll()](https://docs.oracle.com/javase/8/docs/api/java/util/List.html#removeAll-java.util.Collection-) will filter the data containers. From the doc:
>
> Removes from this list all of its elements that are contained in the
> specified collection (optional operation).
>
>
>
```
List<Integer> myVarListA = Arrays.asList(26, 13, 88, 9);
List<Integer> myVarListB = Arrays.asList(26, 1, 8, 12);
List<Integer> myVarListAcomplementB = new ArrayList<>(myVarListA);
List<Integer> myVarListBcomplementA = new ArrayList<>(myVarListB);
myVarListAcomplementB.removeAll(myVarListB);
myVarListBcomplementA.removeAll(myVarListA);
System.out.println("elements in A but no in B: " + myVarListAcomplementB);
System.out.println("elements in B but no in A: " + myVarListBcomplementA);
myVarListBcomplementA.addAll(myVarListAcomplementB);
System.out.println("both together: " + myVarListBcomplementA);
```
|
A simple solution is to calculate the intersection and remove it from the desired list. If you only want what is missing, you can optimize this a little by going with a solution like ΦXocę 웃 Пepeúpa ツ's.
The great thing about this solution is that you can easily expand it to 3+ sets/lists. Instead of A - B = diff or A - I(A,B) = diff, you can also do A - I(A, I(B,C)) to find what A is missing from the common set of A, B and C.
```
public static <T> HashSet<T> intersection(Collection<T> a, Collection<T> b) {
HashSet<T> aMinusB = new HashSet<>(a);
aMinusB.removeAll(b);
HashSet<T> common = new HashSet<>(a);
common.removeAll(aMinusB);
return common;
}
```
Let's call the intersection set `Collection I = intersection(a,b);`.
Now, if you want to find the elements of list A that are not in B:
```
new LinkedList(A).removeAll(I);//ordered and possibly containing duplicates
OR
new ArrayList(A).removeAll(I);//ordered and possibly containing duplicates. Faster copy time, but slower to remove elements. Experiment with this and LinkedList for speed.
OR
new LinkedHashSet<T>(a).removeAll(I);//ordered and unique
OR
new HashSet<T>(a).removeAll(I);//unique and no order
```
Also, this question is effectively a duplicate of [How to do union, intersect, difference and reverse data in java](https://stackoverflow.com/questions/3590677/how-to-do-union-intersect-difference-and-reverse-data-in-java)
|
44,548,111
|
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes.
As part of that, I want to create a new schema in my Postgres database for every new application that I provision. The application connects using the following connection string (i.e., instance1, instance2, instance3, and so on):
```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1¤tSchema=instance1
```
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.
I am creating the schemas with the following SQL.
```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
In order to reproduce this problem I quickly wrote a python script that connects to two different schemas of the same database and creates the same table, and it works fine.
```
import psycopg2
import sys
import random
_user = raw_input("user: ")
con = None
try:
con = psycopg2.connect(database='gerrit', user=_user,
password=_user, host='localhost')
cur = con.cursor()
cur.execute('SELECT version()')
ver = cur.fetchone()
print ver
table_name = 'tbl_%d' %(1)#random.randint(1,100))
cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
con.commit()
cur.execute('SELECT * from %s' %(table_name))
ver = cur.fetchone()
print ver
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
```
Output as follows
```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```
Since I am able to verify this workflow from python, is this a limitation of **JDBC** or of the applications (gerrit & sonarqube)? Has anyone come across this problem with postgres?
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44548111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4189388/"
] |
A simple solution is to calculate the intersection and remove it from the desired list. If you only want what is missing, you can optimize this a little by going with a solution like ΦXocę 웃 Пepeúpa ツ's.
The great thing about this solution is that you can easily expand this to use 3+ sets/lists. Instead of A-B = diff or A-I(A,B)=diff, you can also do A-I(A,I(B,C)) to find what A is missing from the common set between A, B and C.
```
public static <T> HashSet<T> intersection(Collection<T> a, Collection<T> b) {
HashSet<T> aMinusB = new HashSet<>(a);
aMinusB.removeAll(b);
HashSet<T> common = new HashSet<>(a);
common.removeAll(aMinusB);
return common;
}
```
Let's call the intersection set `Collection I = intersection(a,b);`.
Now if you want to find what is missing from list A that was in B:
```
new LinkedList(A).removeAll(I);//ordered and possibly containing duplicates
OR
new ArrayList(A).removeAll(I);//ordered and possibly containing duplicates. Faster copy time, but slower to remove elements. Experiment with this and LinkedList for speed.
OR
new LinkedHashSet<T>(a).removeAll(I);//ordered and unique
OR
new HashSet<T>(a).removeAll(I);//unique and no order
```
Also, this question is effectively duplicate of [How to do union, intersect, difference and reverse data in java](https://stackoverflow.com/questions/3590677/how-to-do-union-intersect-difference-and-reverse-data-in-java)
|
You may try this:
```
a.removeAll(b);
```
|
44,548,111
|
I am working on a **SaaS** solution currently provisioning **sonarqube and gerrit** applications on kubernetes.
As part of that I want to create a new schema in my postgres database for every new application that I provision. The application connects using the following connection string (i.e., instance1, instance2, instance3... and so on)
```
jdbc:postgresql://localhost/gerrit?user=instance1&password=instance1¤tSchema=instance1
```
The solution works fine the first time gerrit and sonarqube are provisioned, creating the associated tables in the new schema. However, it fails the second time with another new schema in the same database; these failures are most likely caused by the application trying to create the associated tables when they **already exist**.
I am creating the schemas with the following SQL.
```
create user instance1 with login password 'instance1';
CREATE SCHEMA instance1 AUTHORIZATION instance1;
ALTER ROLE instance1 SET search_path=instance1;
create user instance2 with login password 'instance2';
CREATE SCHEMA instance2 AUTHORIZATION instance2;
ALTER ROLE instance2 SET search_path=instance2;
```
I am having difficulty understanding this behavior: how can two separate applications configured against two different schemas of the same database see each other's tables?
In order to reproduce this problem I quickly wrote a python script that connects to two different schemas of the same database and creates the same table, and it works fine.
```
import psycopg2
import sys
import random
_user = raw_input("user: ")
con = None
try:
con = psycopg2.connect(database='gerrit', user=_user,
password=_user, host='localhost')
cur = con.cursor()
cur.execute('SELECT version()')
ver = cur.fetchone()
print ver
table_name = 'tbl_%d' %(1)#random.randint(1,100))
cur.execute('CREATE TABLE %s (id serial, name varchar(32));' %(table_name))
cur.execute('INSERT INTO %s values (1, \'%s\');' %(table_name, table_name+_user))
con.commit()
cur.execute('SELECT * from %s' %(table_name))
ver = cur.fetchone()
print ver
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
```
Output as follows
```
$ python pg_test_connect.py
user: instance1
(1, 'tbl_1instance1')
$ python pg_test_connect.py
user: instance2
(1, 'tbl_1instance2')
```
Since I am able to verify this workflow from python, is this a limitation of **JDBC** or of the applications (gerrit & sonarqube)? Has anyone come across this problem with postgres?
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44548111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4189388/"
] |
A simple solution is to calculate the intersection and remove it from the desired list. If you only want what is missing, you can optimize this a little by going with a solution like ΦXocę 웃 Пepeúpa ツ's.
The great thing about this solution is that you can easily expand this to use 3+ sets/lists. Instead of A-B = diff or A-I(A,B)=diff, you can also do A-I(A,I(B,C)) to find what A is missing from the common set between A, B and C.
```
public static <T> HashSet<T> intersection(Collection<T> a, Collection<T> b) {
HashSet<T> aMinusB = new HashSet<>(a);
aMinusB.removeAll(b);
HashSet<T> common = new HashSet<>(a);
common.removeAll(aMinusB);
return common;
}
```
Let's call the intersection set `Collection I = intersection(a,b);`.
Now if you want to find what is missing from list A that was in B:
```
new LinkedList(A).removeAll(I);//ordered and possibly containing duplicates
OR
new ArrayList(A).removeAll(I);//ordered and possibly containing duplicates. Faster copy time, but slower to remove elements. Experiment with this and LinkedList for speed.
OR
new LinkedHashSet<T>(a).removeAll(I);//ordered and unique
OR
new HashSet<T>(a).removeAll(I);//unique and no order
```
Also, this question is effectively duplicate of [How to do union, intersect, difference and reverse data in java](https://stackoverflow.com/questions/3590677/how-to-do-union-intersect-difference-and-reverse-data-in-java)
|
Simply iterate over the second list: add the elements that are not in the first list, and remove the ones that are.
```
for (Integer elem : secondList)
if (firstList.contains(elem))
firstList.remove(elem);
else
firstList.add(elem);
```
In `firstList` you will have the values present only in one of the lists.
|
70,545,797
|
I have this image for a treeline crop. I need to find the general direction in which the crop is aligned. I'm trying to get the Hough lines of the image, and then find the mode of distribution of angles.[](https://i.stack.imgur.com/8GWAX.jpg)
I've been following [this tutorial](https://medium.com/@jamesthesken/crop-row-detection-using-python-and-opencv-93e456ddd974) on crop lines; however, in that one the crop lines are sparse. Here they are densely packed, and after grayscaling, blurring, and using canny edge detection, this is what I get
```py
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('drive/MyDrive/tree/sample.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gauss = cv2.GaussianBlur(gray, (3,3), 3)
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.imshow(gauss)
gscale = cv2.Canny(gauss, 80, 140)
plt.subplot(1,2,2)
plt.imshow(gscale)
plt.show()
```
(Left: the blurred image without canny; right: the one preprocessed with canny)
[](https://i.stack.imgur.com/NvjOJ.png)
After that, I followed the tutorial and "skeletonized" the preprocessed image
```py
size = np.size(gscale)
skel = np.zeros(gscale.shape, np.uint8)
ret, gscale = cv2.threshold(gscale, 128, 255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3))
done = False
while not done:
eroded = cv2.erode(gscale, element)
temp = cv2.dilate(eroded, element)
temp = cv2.subtract(gscale, temp)
skel = cv2.bitwise_or(skel, temp)
gscale = eroded.copy()
zeros = size - cv2.countNonZero(gscale)
if zeros==size:
done = True
```
Giving me
[](https://i.stack.imgur.com/uahSo.png)
As you can see, there are a bunch of curvy lines still. When using the HoughLines algorithm on it, there are 11k lines scattered everywhere
```py
lines = cv2.HoughLinesP(skel,1,np.pi/180,130)
a,b,c = lines.shape
for i in range(a):
rho = lines[i][0][0]
theta = lines[i][0][1]
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 1000*(-b))
y1 = int(y0 + 1000*(a))
x2 = int(x0 - 1000*(-b))
y2 = int(y0 - 1000*(a))
cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2, cv2.LINE_AA)#showing the results:
plt.figure(figsize=(15,15))
plt.subplot(121)#OpenCV reads images as BGR, this corrects so it is displayed as RGB
plt.plot()
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title('Row Detection')
plt.xticks([])
plt.yticks([])
plt.subplot(122)
plt.plot()
plt.imshow(skel,cmap='gray')
plt.title('Skeletal Image')
plt.xticks([])
plt.yticks([])
plt.show()
```
[](https://i.stack.imgur.com/qawm9.png)
I am a newbie when it comes to cv2, so I have no clue what to do. I've searched and tried a bunch of things, but none of them work. How can I remove the mildly big dots and the squiggly lines?
|
2021/12/31
|
[
"https://Stackoverflow.com/questions/70545797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15230180/"
] |
You can use a **2D FFT** to find the general direction in which the crop is aligned (as proposed by mozway in the comments). The idea is that the general direction can be easily extracted from *centred beaming rays appearing in the magnitude spectrum* when the input contains many lines in the same direction. You can find more information about how it works in [this previous post](https://stackoverflow.com/questions/37382491/how-to-find-orientation-of-image-using-fourier-transform). It works directly with the input image, but it is better to apply the Gaussian + Canny filters.
Here is the interesting part of the magnitude spectrum of the filtered gray image:
[](https://i.stack.imgur.com/hmBZ0.png)
The main beaming ray can be easily seen. You can extract its angle by iterating over many lines with an increasing angle and sum the magnitude values on each line as in the following figure:
[](https://i.stack.imgur.com/1azP7.png)
Here is the magnitude sum of each line plotted against the angle (in radian) of the line:
[](https://i.stack.imgur.com/L6u96.png)
Based on that, you just need to find the angle that maximizes the computed sum.
Here is the resulting code:
```py
def computeAngle(arr):
# Naive inefficient algorithm
n, m = arr.shape
yCenter, xCenter = (n-1, m//2-1)
lineLen = m//2-2
sMax = 0.0
bestAngle = np.nan
for angle in np.arange(0, math.pi, math.pi/300):
i = np.arange(lineLen)
y, x = (np.sin(angle) * i + 0.5).astype(np.int_), (np.cos(angle) * i + 0.5).astype(np.int_)
s = np.sum(arr[yCenter-y, xCenter+x])
if s > sMax:
bestAngle = angle
sMax = s
return bestAngle
# Load the image in gray
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Apply some filters
gauss = cv2.GaussianBlur(gray, (3,3), 3)
gscale = cv2.Canny(gauss, 80, 140)
# Compute the 2D FFT of real values
freqs = np.fft.rfft2(gscale)
# Shift the frequencies (centering) and select the low frequencies
upperPart = freqs[:freqs.shape[0]//4,:freqs.shape[1]//2]
lowerPart = freqs[-freqs.shape[0]//4:,:freqs.shape[1]//2]
filteredFreqs = np.vstack((lowerPart, upperPart))
# Compute the magnitude spectrum
magnitude = np.log(np.abs(filteredFreqs))
# Correct the angle
magnitude = np.rot90(magnitude).copy()
# Find the major angle
bestAngle = computeAngle(magnitude)
```
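A minimal usage sketch of the result (assuming the code above has already run and `bestAngle` is defined; `math` must be imported):
```py
import math

# Report the dominant crop-row direction in degrees rather than radians
print("Estimated crop-row direction: %.1f degrees" % math.degrees(bestAngle))
```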
|
Just for completeness I would like to post the Sobel Gradient Angle way as well.
General idea:
1. for every pixel, compute the X and Y gradients (e.g. with Sobel)
2. compute the gradient angle from the X and Y gradients
3. find the modes of the angle distribution, e.g. with a histogram, and select the highest one
Written in C++ but probably easily convertable to python:
```
int main()
{
try
{
cv::Mat img = cv::imread("C:/data/StackOverflow/cropLines/lines.jpg", cv::IMREAD_GRAYSCALE);
// tests with artificial lines:
//img = cv::Mat::zeros(img.size(), CV_8UC1);
//img = cv::Mat(img.size(), CV_8UC1, cv::Scalar::all(255));
//cv::line(img, cv::Point(0, img.rows), cv::Point(img.cols, 0), cv::Scalar::all(255), 10); // sample to check angles
//cv::line(img, cv::Point(img.cols, img.rows), cv::Point(0, 0), cv::Scalar::all(255), 10); // sample to check angles
//cv::line(img, cv::Point(img.cols, img.rows/2), cv::Point(0, img.rows/2), cv::Scalar::all(255), 10); // sample to check angles
//cv::line(img, cv::Point(img.cols/2, img.rows), cv::Point(img.cols/2, 0), cv::Scalar::all(255), 10); // sample to check angles
//cv::line(img, cv::Point(img.cols / 2, img.rows), cv::Point(img.cols / 2, 0), cv::Scalar::all(255), 10); // sample to check angles
//cv::line(img, cv::Point(img.cols / 2, img.rows), cv::Point(img.cols / 2, 0), cv::Scalar::all(0), 10); // sample to check angles
cv::imshow("img", img);
cv::Mat gradX, gradY, mag, angle;
cv::Sobel(img, gradX, CV_32F, 1, 0, 3);
cv::Sobel(img, gradY, CV_32F, 0, 1, 3);
cv::cartToPolar(gradX, gradY, mag, angle, true);
cv::Mat magMask = mag > 0; // dont use pixels where angle is 0 just because there is no gradient.
float scaleX = 3;
float scaleY = 0.15;
float maxValueY = 3000;
cv::Mat chart = cv::Mat(maxValueY * scaleY, 360 * scaleX, CV_8UC3);
cv::Point prev(-1, -1);
double window = 5.0; // window size 1 is much more noisy but still works
// this loop can probably be optimized with an optimized histogram compuation from any library
for (int i = 0; i <= 360; i = i + 1)
{
double amount = cv::countNonZero((angle >= i-window/2 & angle < i + window/2) & (magMask));
std::cout << i << "°: " << amount << std::endl;
cv::Point current(i*scaleX, chart.rows - amount*scaleY/window);
if (prev.x >= 0) cv::line(chart, prev, current, cv::Scalar(0, 0, 255), 1);
prev = current;
}
cv::line(chart, cv::Point(45 * scaleX, 0), cv::Point(45 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(90 * scaleX, 0), cv::Point(90 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(135 * scaleX, 0), cv::Point(135 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(180 * scaleX, 0), cv::Point(180 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(225 * scaleX, 0), cv::Point(225 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(270 * scaleX, 0), cv::Point(270 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(315 * scaleX, 0), cv::Point(315 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::line(chart, cv::Point(360 * scaleX, 0), cv::Point(360 * scaleX, 100 * scaleY), cv::Scalar(255, 0, 0), 1);
cv::imshow("chart", chart);
cv::imwrite("C:/data/StackOverflow/cropLines/chart.png", chart);
cv::imwrite("C:/data/StackOverflow/cropLines/input.png", img);
cv::waitKey(0);
}
catch (std::exception& e)
{
std::cout << e.what() << std::endl;
}
}
```
Giving this result, where the blue lines at the top of the image mark 45° steps. The maximum is at 52° (and its multiples of 180°), where the gradient is rotated by 90° compared to the line direction. So the line direction in that image is 142° clockwise from the x axis, or 38° counter-clockwise.
[](https://i.stack.imgur.com/ZAHoA.png)
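For reference, here is a rough Python sketch of the same idea (not a line-by-line port of the C++ above; the file name `lines.jpg` and the 1° histogram bins are assumptions):
```py
import cv2
import numpy as np

img = cv2.imread('lines.jpg', cv2.IMREAD_GRAYSCALE)

# Per-pixel X/Y gradients, then magnitude and angle (in degrees)
grad_x = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
grad_y = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
mag, ang = cv2.cartToPolar(grad_x, grad_y, angleInDegrees=True)

# Histogram of gradient angles (folded to 0-180), ignoring pixels with no gradient
hist, _ = np.histogram(ang[mag > 0] % 180, bins=180, range=(0, 180))
dominant_gradient = int(hist.argmax())

# Lines run perpendicular to the gradient direction
line_direction = (dominant_gradient + 90) % 180
print("Dominant line direction: %d degrees" % line_direction)
```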
|
10,308,340
|
I have a list of rules for a given input file for my function. If any of them are violated in the file given, I want my program to return an error message and quit.
* Every gene in the file should be on the same chromosome
Thus for a lines such as:
NM\_001003443 chr11 + 5997152 5927598 5921052 5926098 1 5928752,5925972, 5927204,5396098,
NM\_001003444 chr11 + 5925152 5926098 5925152 5926098 2 5925152,5925652, 5925404,5926098,
NM\_001003489 chr11 + 5925145 5926093 5925115 5926045 4 5925151,5925762, 5987404,5908098,
etc.
Each line in the file will be variations of this line
Thus, I want to make sure every line in the file is on chr11
Yet I may be given a file with a different chr value (chr followed by any number). Thus I want to write a function that will make sure whatever number follows chr is the same on every line.
Should I use a regular expression for this, or what should I do? This is in python by the way.
Such as: chr\d+ ?
I am unsure how to make sure that whatever is matched is the same in every line though...
I currently have:
```
from re import *
for line in file:
r = 'chr\d+'
i = search(r, line)
if i in line:
```
but I don't know how to make sure it is the same in every line...
**In reference to sajattack's answer**
```
fp = open(infile, 'r')
for line in fp:
filestring = ''
filestring +=line
chrlist = search('chr\d+', filestring)
chrlist = chrlist.group()
for chr in chrlist:
if chr != chrlist[0]:
print('Every gene in file not on same chromosome')
```
|
2012/04/25
|
[
"https://Stackoverflow.com/questions/10308340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1348509/"
] |
Just read the file and have a loop check each line to make sure it contains `chr11` (or whatever chr value the file uses). There are string functions to search for substrings in a string. As soon as you find a line that does not contain it, break out of the loop and flag the file as invalid.
```
import re
fp = open(infile, 'r')
# take the chr value from the first line as the reference
first_line = fp.readline()
tar = re.findall(r'chr\d+', first_line)[0]
for line in fp:
if (line.find(tar) == -1):
print("Not valid")
break
```
This should search for a number in the line and check for validity.
|
Is it safe to assume that the first chr is the correct one? If so, use this:
```
import re
chrlist = re.findall("chr[0-9]+", open('file').read())
# ^ this is a list with all chr(whatever numbers)
for chr in chrlist:
    if chr != chrlist[0]:
print("Chr does not match")
break
```
|
10,308,340
|
I have a list of rules for a given input file for my function. If any of them are violated in the file given, I want my program to return an error message and quit.
* Every gene in the file should be on the same chromosome
Thus for a lines such as:
NM\_001003443 chr11 + 5997152 5927598 5921052 5926098 1 5928752,5925972, 5927204,5396098,
NM\_001003444 chr11 + 5925152 5926098 5925152 5926098 2 5925152,5925652, 5925404,5926098,
NM\_001003489 chr11 + 5925145 5926093 5925115 5926045 4 5925151,5925762, 5987404,5908098,
etc.
Each line in the file will be variations of this line
Thus, I want to make sure every line in the file is on chr11
Yet I may be given a file with a different chr value (chr followed by any number). Thus I want to write a function that will make sure whatever number follows chr is the same on every line.
Should I use a regular expression for this, or what should I do? This is in python by the way.
Such as: chr\d+ ?
I am unsure how to make sure that whatever is matched is the same in every line though...
I currently have:
```
from re import *
for line in file:
r = 'chr\d+'
i = search(r, line)
if i in line:
```
but I don't know how to make sure it is the same in every line...
**In reference to sajattack's answer**
```
fp = open(infile, 'r')
for line in fp:
filestring = ''
filestring +=line
chrlist = search('chr\d+', filestring)
chrlist = chrlist.group()
for chr in chrlist:
if chr != chrlist[0]:
print('Every gene in file not on same chromosome')
```
|
2012/04/25
|
[
"https://Stackoverflow.com/questions/10308340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1348509/"
] |
Just read the file and have a loop check each line to make sure it contains `chr11` (or whatever chr value the file uses). There are string functions to search for substrings in a string. As soon as you find a line that does not contain it, break out of the loop and flag the file as invalid.
```
import re
fp = open(infile, 'r')
# take the chr value from the first line as the reference
first_line = fp.readline()
tar = re.findall(r'chr\d+', first_line)[0]
for line in fp:
if (line.find(tar) == -1):
print("Not valid")
break
```
This should search for a number in the line and check for validity.
|
My solution uses a "match group" to collect the matched numbers from the "chr" string.
```
import re
pat = re.compile(r'\schr(\d+)\s')
def chr_val(line):
m = re.search(pat, line)
if m is not None:
return m.group(1)
else:
return ''
def is_valid(f):
line = f.readline()
v = chr_val(line)
if not v:
return False
return all(chr_val(line) == v for line in f)
with open("test.txt", "r") as f:
print("The file is {0}".format("valid" if is_valid(f) else "NOT valid"))
```
Notes:
* Pre-compiles the regular expression for speed.
* Uses a raw string (`r''`) to specify the regular expression.
* The pattern requires white space (`\s`) on either side of the `chr` string.
* `is_valid()` returns `False` if the first line doesn't have a good `chr` value. Then it returns a Boolean value that is true if all of the following lines match the `chr` value of the first line.
* Your sample code just prints something like `The file is True` so I made it a bit friendlier.
|
10,308,340
|
I have a list of rules for a given input file for my function. If any of them are violated in the file given, I want my program to return an error message and quit.
* Every gene in the file should be on the same chromosome
Thus for a lines such as:
NM\_001003443 chr11 + 5997152 5927598 5921052 5926098 1 5928752,5925972, 5927204,5396098,
NM\_001003444 chr11 + 5925152 5926098 5925152 5926098 2 5925152,5925652, 5925404,5926098,
NM\_001003489 chr11 + 5925145 5926093 5925115 5926045 4 5925151,5925762, 5987404,5908098,
etc.
Each line in the file will be variations of this line
Thus, I want to make sure every line in the file is on chr11
Yet I may be given a file with a different chr value (chr followed by any number). Thus I want to write a function that will make sure whatever number follows chr is the same on every line.
Should I use a regular expression for this, or what should I do? This is in python by the way.
Such as: chr\d+ ?
I am unsure how to make sure that whatever is matched is the same in every line though...
I currently have:
```
from re import *
for line in file:
r = 'chr\d+'
i = search(r, line)
if i in line:
```
but I don't know how to make sure it is the same in every line...
**In reference to sajattack's answer**
```
fp = open(infile, 'r')
for line in fp:
filestring = ''
filestring +=line
chrlist = search('chr\d+', filestring)
chrlist = chrlist.group()
for chr in chrlist:
if chr != chrlist[0]:
print('Every gene in file not on same chromosome')
```
|
2012/04/25
|
[
"https://Stackoverflow.com/questions/10308340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1348509/"
] |
Is it safe to assume that the first chr is the correct one? If so, use this:
```
import re
chrlist = re.findall("chr[0-9]+", open('file').read())
# ^ this is a list with all chr(whatever numbers)
for chr in chrlist:
    if chr != chrlist[0]:
print("Chr does not match")
break
```
|
My solution uses a "match group" to collect the matched numbers from the "chr" string.
```
import re
pat = re.compile(r'\schr(\d+)\s')
def chr_val(line):
m = re.search(pat, line)
if m is not None:
return m.group(1)
else:
return ''
def is_valid(f):
line = f.readline()
v = chr_val(line)
if not v:
return False
return all(chr_val(line) == v for line in f)
with open("test.txt", "r") as f:
print("The file is {0}".format("valid" if is_valid(f) else "NOT valid"))
```
Notes:
* Pre-compiles the regular expression for speed.
* Uses a raw string (`r''`) to specify the regular expression.
* The pattern requires white space (`\s`) on either side of the `chr` string.
* `is_valid()` returns `False` if the first line doesn't have a good `chr` value. Then it returns a Boolean value that is true if all of the following lines match the `chr` value of the first line.
* Your sample code just prints something like `The file is True` so I made it a bit friendlier.
|
71,186,021
|
I'm new to python, so I appreciate any help.
I'm trying to develop code that searches for a specific word in a csv file, but I don't know why it doesn't recognize a word that I know is in the file. I'm always getting "Não encontrei".
My code:
```
#Definir perfis
def pilar():
pilar = input("Perfil do pilar:")
csv_file=csv.reader(open(r"C:\Users\tomas\Documents\ISEP\5º Ano\TESE\PROGRAMA\PERFIS.csv"))
for row in csv_file:
if pilar in csv_file:
print("Pilar: ", pilar)
else:
print("Não encontrei")
pilar()
```
|
2022/02/19
|
[
"https://Stackoverflow.com/questions/71186021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
An enum without associated values conforms implicitly to Equatable **and** Hashable.
From the documentation:
[Hashable](https://developer.apple.com/documentation/swift/hashable/)
>
> When you define an enumeration without associated values, it gains Hashable conformance automatically, and you can add Hashable conformance to your other custom types by implementing the `hash(into:)` method.
>
>
>
[Equatable](https://developer.apple.com/documentation/swift/equatable)
>
> An enum without associated values has Equatable conformance even without the declaration.
>
>
>
If you don't trust the documentation prove it:
```
enum TestEnum {
case one, two, three, four, five
}
```
Equatable:
```
TestEnum.one == TestEnum.one // true
TestEnum.one == TestEnum.three // false
```
Hashable:
```
let dict : [TestEnum:Int] = [.one:1, .two:2]
// compiles, a dictionary key must be hashable
```
And `hashValue` is discouraged in Swift anyway.
|
To chime in:
If you *did* want to implement `==` yourself, the way to do it would be with pattern matching. Surprisingly (to me), the code below seems to call neither `~=` nor `==`.
```
enum TestEnum: Equatable {
case one, two, three, four, five
static func ==(lhs: TestEnum, rhs: TestEnum) -> Bool {
switch (lhs, rhs) {
case (.one, .one),(.two, .two), (.three, .three), (.four, .four), (.five, .five): return true
default: return false
}
}
}
```
That does kind of make sense. At bottom, you need the system to provide a way to compare these values. If one didn't exist at all, it would be impossible for you to build one. By analogy, imagine how you might implement `==` for `Int`, if it wasn't already built-in. It would be impossible.
I'll also add: it doesn't make any sense to implement `==` based off `hashValue`. `hashValue` speeds up search algorithms by allowing them to rapidly cull a massive part of their search space. For example, say you're looking for milk in the grocery store. A good hashValue for a grocery product might be the section it's in. If you saw that your milk's hashValue was "dairy", then you know right away that there's no point searching the deli, meat, produce and frozen food aisles. But once you're in front of the dairy aisle, you start linearly searching through all the items there. This is where `==` comes in. It needs a more granular notion of equality than the heuristic that `hashValue` provides. Butter and milk are both dairy, but you need to be able to tell them apart.
|
66,396,659
|
I am using an imbalanced dataset (54:38:7%) with RFECV for feature selection like this:
```
# making a multi logloss metric
from sklearn.metrics import log_loss, make_scorer
log_loss_rfe = make_scorer(score_func=log_loss, greater_is_better=False)
# initiating Light GBM classifier
lgb_rfe = LGBMClassifier(objective='multiclass', learning_rate=0.01, verbose=0, force_col_wise=True,
random_state=100, n_estimators=5_000, n_jobs=7)
# initiating RFECV
rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, verbose=3, n_jobs=2, cv=3, scoring=log_loss_rfe)
# fitting it
rfe.fit(X=X_train, y=y_train)
```
And I got an error, presumably because the subsamples sklearn's RFECV has made don't have all of the classes from my data. I had no issues fitting the very same data outside of RFECV.
Here's the complete error:
```
---------------------------------------------------------------------------
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/utils/fixes.py", line 222, in __call__
return self.function(*args, **kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 37, in _rfe_single_fit
return rfe._fit(
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 259, in _fit
self.scores_.append(step_score(estimator, features))
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py", line 39, in <lambda>
lambda estimator, features: _score(
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/model_selection/_validation.py", line 674, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_scorer.py", line 199, in __call__
return self._score(partial(_cached_call, None), estimator, X, y_true,
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_scorer.py", line 242, in _score
return self._sign * self._score_func(y_true, y_pred,
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "/home/ubuntu/ds_jup_venv/lib/python3.8/site-packages/sklearn/metrics/_classification.py", line 2265, in log_loss
raise ValueError("y_true and y_pred contain different number of "
ValueError: y_true and y_pred contain different number of classes 3, 2. Please provide the true labels explicitly through the labels argument. Classes found in y_true: [0 1 2]
"""
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
<ipython-input-9-5feb62a6f457> in <module>
1 rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, verbose=3, n_jobs=2, cv=3, scoring=log_loss_rfe)
----> 2 rfe.fit(X=X_train, y=y_train)
~/ds_jup_venv/lib/python3.8/site-packages/sklearn/feature_selection/_rfe.py in fit(self, X, y, groups)
603 func = delayed(_rfe_single_fit)
604
--> 605 scores = parallel(
606 func(rfe, self.estimator, X, y, train, test, scorer)
607 for train, test in cv.split(X, y, groups))
~/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, iterable)
1052
1053 with self._backend.retrieval_context():
-> 1054 self.retrieve()
1055 # Make sure that we get a last message telling us we are done
1056 elapsed_time = time.time() - self._start_time
~/ds_jup_venv/lib/python3.8/site-packages/joblib/parallel.py in retrieve(self)
931 try:
932 if getattr(self._backend, 'supports_timeout', False):
--> 933 self._output.extend(job.get(timeout=self.timeout))
934 else:
935 self._output.extend(job.get())
~/ds_jup_venv/lib/python3.8/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
540 AsyncResults.get from multiprocessing."""
541 try:
--> 542 return future.result(timeout=timeout)
543 except CfTimeoutError as e:
544 raise TimeoutError from e
1 frames
/usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
386 def __get_result(self):
387 if self._exception:
--> 388 raise self._exception
389 else:
390 return self._result
ValueError: y_true and y_pred contain different number of classes 3, 2. Please provide the true labels explicitly through the labels argument. Classes found in y_true: [0 1 2]
```
How to fix this to be able to select features recursively?
|
2021/02/27
|
[
"https://Stackoverflow.com/questions/66396659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11814996/"
] |
Log-loss needs the probability predictions, not the class predictions, so you should add
```py
log_loss_rfe = make_scorer(score_func=log_loss, needs_proba=True, greater_is_better=False)
```
The error is because without that, the passed `y_pred` is one-dimensional (classes 0,1,2) and `sklearn` [assumes it's a binary classification](https://github.com/scikit-learn/scikit-learn/blob/95119c13af77c76e150b753485c662b7c52a41a2/sklearn/metrics/_classification.py#L2254) problem and those predictions are probability of the positive class. To deal with that, it adds on the probability of the negative class, but then there are only two columns compared to your three classes.
|
Consider applying *stratified* cross-validation, which will try to preserve the fraction of samples for each class. Experiment with one of these scikit-learn cross-validators:
[`sklearn.model_selection.StratifiedKFold`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html?highlight=stratified#sklearn.model_selection.StratifiedKFold),
[`StratifiedShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html?highlight=stratified#sklearn.model_selection.StratifiedShuffleSplit),
[`RepeatedStratifiedKFold`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RepeatedStratifiedKFold.html?highlight=stratified#sklearn.model_selection.RepeatedStratifiedKFold), replacing `cv=3` in your `RFECV` with the chosen cross-validator.
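For example, a minimal sketch of that replacement (reusing `lgb_rfe`, `log_loss_rfe`, `X_train` and `y_train` from the question; the `shuffle`/`random_state` values are assumptions):
```py
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Pass an explicit stratified splitter instead of the plain cv=3
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=100)
rfe = RFECV(estimator=lgb_rfe, min_features_to_select=2, cv=skf,
            scoring=log_loss_rfe, n_jobs=2, verbose=3)
rfe.fit(X=X_train, y=y_train)
```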
**Edit**
I have missed the fact that `StratifiedKFold` is the default cross-validator in `RFECV`. Actually, the error is related to `log_loss_rfe`, which was defined with `needs_proba=False`. Credit to @BenReiniger!
|
73,251,418
|
I'd like to filter a `df` by date: I want all the rows whose date is before today's date (python).
For example from the table below, I'd like the rows that have a date before today's date (i.e. row 1 to row 3).
| ID | date |
| --- | --- |
| 1 | 2022-03-25 06:00:00 |
| 2 | 2022-04-25 06:00:00 |
| 3 | 2022-05-25 06:00:00 |
| 4 | 2022-08-25 06:00:00 |
Thanks
|
2022/08/05
|
[
"https://Stackoverflow.com/questions/73251418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18028924/"
] |
You can try this:
```
from datetime import datetime
df[df['date'] < datetime.now()]
```
Output:
```
ID date
0 1 2022-03-25 06:00:00
1 2 2022-04-25 06:00:00
2 3 2022-05-25 06:00:00
```
|
This will work
```
import pandas as pd
from datetime import datetime

# convert the column to datetime objects
df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True, errors='coerce')
# keep only the rows dated before now
df.loc[df['date'] < datetime.now()]
```
|
40,780,004
|
What's the result of returning `NotImplemented` from `__eq__` special method in python 3 (well 3.5 if it matters)?
The documentation isn't clear; the [only relevant text I found](https://docs.python.org/3/library/constants.html#NotImplemented) only vaguely refers to "some other fallback":
>
> When `NotImplemented` is returned, the interpreter will then try the reflected operation on the other type, or some other fallback, depending on the operator. If all attempted operations return `NotImplemented`, the interpreter will raise an appropriate exception. See [Implementing the arithmetic operations](https://docs.python.org/3/library/numbers.html#implementing-the-arithmetic-operations) for more details.
>
>
>
Unfortunately, the "more details" link doesn't mention `__eq__` at all.
My reading of this excerpt suggests that the code below should raise an "appropriate exception", but it does not:
```
class A:
def __eq__(self, other):
return NotImplemented
class B:
def __eq__(self, other):
return NotImplemented
# docs seems to say these lines should raise "an appropriate exception"
# but no exception is raised
a = A()
b = B()
a == b # evaluates as unequal
a == a # evaluates as equal
```
From experimenting, I think that when `NotImplemented` is returned from `__eq__`, the interpreter behaves as if `__eq__` wasn't defined in the first place (specifically, it first swaps the arguments, and if that doesn't resolve the issue, it compares using the default `__eq__` that evaluates "equal" if the two objects have the same identity). If that's the case, where in the documentation can I find the confirmation of this behavior?
Edit: see [Python issue 28785](http://bugs.python.org/issue28785)
|
2016/11/24
|
[
"https://Stackoverflow.com/questions/40780004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
Actually the `==` and `!=` checks work identically to the ordering comparison operators (`<` and similar) except that they don't raise the **appropriate exception** but fall back to identity comparison. That's the only difference.
This can be easily seen in the [CPython source code (version 3.5.10)](https://github.com/python/cpython/blob/v3.5.10/Objects/object.c#L649-L699). I will include a Python version of that source code (at least as far as it's possible):
```
_mirrored_op = {'__eq__': '__eq__', # a == b => b == a
'__ne__': '__ne__', # a != b => b != a
'__lt__': '__gt__', # a < b => b > a
'__le__': '__ge__', # a <= b => b >= a
'__ge__': '__le__', # a >= b => b <= a
'__gt__': '__lt__' # a > b => b < a
}
def richcmp(v, w, op):
checked_reverse = 0
# If the second operand is a true subclass of the first one start with
# a reversed operation.
if type(v) != type(w) and issubclass(type(w), type(v)) and hasattr(w, op):
checked_reverse = 1
res = getattr(w, _mirrored_op[op])(v) # reversed
if res is not NotImplemented:
return res
# Always try the not-reversed operation
if hasattr(v, op):
res = getattr(v, op)(w) # normal
if res is not NotImplemented:
return res
# If we haven't already tried the reversed operation try it now!
if not checked_reverse and hasattr(w, op):
res = getattr(w, _mirrored_op[op])(v) # reversed
if res is not NotImplemented:
return res
# Raise exception for ordering comparisons but use object identity in
# case we compare for equality or inequality
if op == '__eq__':
res = v is w
elif op == '__ne__':
res = v is not w
else:
raise TypeError('some error message')
return res
```
and calling `a == b` then evaluates as `richcmp(a, b, '__eq__')`. The `if op == '__eq__'` is the special case that makes your `a == b` return `False` (because they aren't identical objects) and your `a == a` return `True` (because they are).
However the behavior in Python 2.x was completely different. You could have up to 4 (or even 6, I don't remember exactly) comparisons before falling back to identity comparison!
|
Not sure where (or if) it is in the docs, but the basic behavior is:
* try the operation: `__eq__(lhs, rhs)`
* if result is not `NotImplemented` return it
* else try the reflected operation: `__eq__(rhs, lhs)`
* if result is not `NotImplemented` return it
* otherwise use appropriate fall back:
  + eq -> same objects? -> True, else False
  + ne -> different objects? -> True, else False
  + many others -> raise exception
The reason that `eq` and `ne` do *not* raise exceptions is:
* they can always be determined (apple == orange? no)
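A small sketch illustrating that fallback difference (the class name is made up):
```
class Widget:
    def __eq__(self, other):
        return NotImplemented
    def __lt__(self, other):
        return NotImplemented

a, b = Widget(), Widget()
print(a == b)  # False -- equality falls back to identity comparison
print(a == a)  # True  -- same object
try:
    a < b      # ordering has no identity fallback
except TypeError as exc:
    print(exc)  # e.g. "unorderable types" or "'<' not supported", depending on the Python 3 version
```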
|
16,480,625
|
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py`
How do I access this variable?
|
2013/05/10
|
[
"https://Stackoverflow.com/questions/16480625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351602/"
] |
You're looking for [modules!](http://docs.python.org/2/tutorial/modules.html)
In `aaa.py`:
```
AA = 'Foo'
```
In `bbb.py`:
```
import aaa
print aaa.AA # Or print(aaa.AA) for Python 3
# Prints Foo
```
Or this works as well:
```
from aaa import AA
print AA
# Prints Foo
```
|
You can import it; this will execute the whole script though.
```
from aaa import AA
```
|
16,480,625
|
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py`
How do I access this variable?
|
2013/05/10
|
[
"https://Stackoverflow.com/questions/16480625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351602/"
] |
You can import it; this will execute the whole script though.
```
from aaa import AA
```
|
Although importing is the best approach (like poke or Haidro wrote), here is another workaround if you're generating data with one script and want to access it in another without executing "`bbb.py`". If you're dealing with large lists/dicts, this approach works well, although it's overkill if you're simply trying to exchange a string or a number…
Besides that, you should describe your problem in more detail, since importing variables from another script is kind of hacky and may not be the way to go.
So here are two functions: the first one (`dumpobj`) dumps a variable (string, number, tuple, list, dict, set, whatever) to a file, and the second one (`loadobj`) reads it back in. I decided to use `json`, since the data files are human-readable.
You may want to try that, if you need it often:
```
import json
def dumpobj(the_object, filename) :
"""
Dumps the_object to filename via json. Use loadobj(filename)
to get it back.
>>> from tgio import dumpobj, loadobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print(filename + " exists, I'll will overwrite it.")
except IOError:
print(filename + ' does not exist. Creating it...')
f = open(filename, 'w')
json.dump(the_object, f)
f.close()
def loadobj(filename) :
"""
Retrieves dumped data (via json - see dumpobj) from filename.
>>> from tgio import loadobj, dumpobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print("Reading object from file " + filename)
except IOError:
print(filename + ' does not exist. Returning None.')
return None
f = open(filename, 'r')
the_object = json.load(f)
    f.close()
return the_object
```
See the usage examples in the docstrings!
|
16,480,625
|
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py`
How do I access this variable?
|
2013/05/10
|
[
"https://Stackoverflow.com/questions/16480625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351602/"
] |
You're looking for [modules!](http://docs.python.org/2/tutorial/modules.html)
In `aaa.py`:
```
AA = 'Foo'
```
In `bbb.py`:
```
import aaa
print aaa.AA # Or print(aaa.AA) for Python 3
# Prints Foo
```
Or this works as well:
```
from aaa import AA
print AA
# Prints Foo
```
|
In your file `bbb.py`, add the following:
```
import sys
sys.path.append("/path/to/aaa.py/folder/")
from aaa import AA
```
Also would suggest reading more about Python modules and how `import` works. [Official Documentation on Modules.](http://docs.python.org/2/tutorial/modules.html)
|
16,480,625
|
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py`
How do I access this variable?
|
2013/05/10
|
[
"https://Stackoverflow.com/questions/16480625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351602/"
] |
In your file `bbb.py`, add the following:
```
import sys
sys.path.append("/path/to/aaa.py/folder/")
from aaa import AA
```
Also would suggest reading more about Python modules and how `import` works. [Official Documentation on Modules.](http://docs.python.org/2/tutorial/modules.html)
|
Although importing is the best approach (like poke or Haidro wrote), here is another workaround if you're generating data with one script and want to access it in another without executing "`bbb.py`". If you're dealing with large lists/dicts, this approach works well, although it's overkill if you're simply trying to exchange a string or a number…
Besides that, you should describe your problem in more detail, since importing variables from another script is kind of hacky and may not be the way to go.
So here are two functions: the first one (`dumpobj`) dumps a variable (string, number, tuple, list, dict, set, whatever) to a file, and the second one (`loadobj`) reads it back in. I decided to use `json`, since the data files are human-readable.
You may want to try that, if you need it often:
```
import json
def dumpobj(the_object, filename) :
"""
Dumps the_object to filename via json. Use loadobj(filename)
to get it back.
>>> from tgio import dumpobj, loadobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print(filename + " exists, I'll will overwrite it.")
except IOError:
print(filename + ' does not exist. Creating it...')
f = open(filename, 'w')
json.dump(the_object, f)
f.close()
def loadobj(filename) :
"""
Retrieves dumped data (via json - see dumpobj) from filename.
>>> from tgio import loadobj, dumpobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print("Reading object from file " + filename)
except IOError:
print(filename + ' does not exist. Returning None.')
return None
f = open(filename, 'r')
the_object = json.load(f)
    f.close()
return the_object
```
See the usage examples in the docstrings!
|
16,480,625
|
A variable `AA` is in `aaa.py`. I want to use this variable in my other python file `bbb.py`
How do I access this variable?
|
2013/05/10
|
[
"https://Stackoverflow.com/questions/16480625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2351602/"
] |
You're looking for [modules!](http://docs.python.org/2/tutorial/modules.html)
In `aaa.py`:
```
AA = 'Foo'
```
In `bbb.py`:
```
import aaa
print aaa.AA # Or print(aaa.AA) for Python 3
# Prints Foo
```
Or this works as well:
```
from aaa import AA
print AA
# Prints Foo
```
|
Although importing is the best approach (like poke or Haidro wrote), here is another workaround if you're generating data with one script and want to access it in another without executing "`bbb.py`". If you're dealing with large lists/dicts, this approach works well, although it's overkill if you're simply trying to exchange a string or a number…
Besides that, you should describe your problem in more detail, since importing variables from another script is kind of hacky and may not be the way to go.
So here are two functions: the first one (`dumpobj`) dumps a variable (string, number, tuple, list, dict, set, whatever) to a file, and the second one (`loadobj`) reads it back in. I decided to use `json`, since the data files are human-readable.
You may want to try that, if you need it often:
```
import json
def dumpobj(the_object, filename) :
"""
Dumps the_object to filename via json. Use loadobj(filename)
to get it back.
>>> from tgio import dumpobj, loadobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print(filename + " exists, I'll will overwrite it.")
except IOError:
print(filename + ' does not exist. Creating it...')
f = open(filename, 'w')
json.dump(the_object, f)
f.close()
def loadobj(filename) :
"""
Retrieves dumped data (via json - see dumpobj) from filename.
>>> from tgio import loadobj, dumpobj
>>> foo = {'bar':42, 'narf':'fjoord', 23:[1,2,3,4,5]}
>>> dumpobj(foo, 'foo.var')
>>> bar = loadobj('foo.var')
>>> bar
{u'narf': u'fjoord', u'bar': 42, u'23': [1, 2, 3, 4, 5]}
"""
try:
with open(filename):
print("Reading object from file " + filename)
except IOError:
print(filename + ' does not exist. Returning None.')
return None
f = open(filename, 'r')
the_object = json.load(f)
    f.close()
return the_object
```
See the usage examples in the docstrings!
|
54,849,211
|
I have two lists:
```
providers = ["a", "b", "c", "d", "e"]
ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"]
```
I want the output to look like:
```
[{'provider_name':'a', 'server':'100.12.23.34'},.....]
```
How do I do this in python using a for loop?
|
2019/02/24
|
[
"https://Stackoverflow.com/questions/54849211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11082580/"
] |
Here is an easy to follow solution. For more reading on the zip method if necessary [click here](https://www.programiz.com/python-programming/methods/built-in/zip).
```
new = []
for i, j in zip(providers, ips):
new.append({"provider_name": i, "server": j})
```
|
Use:
```
>>> providers = ["a", "b", "c", "d", "e"]
>>> ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"]
>>> [{'provider_name':x, 'server':y} for x,y in zip(providers,ips)]
[{'provider_name': 'a', 'server': '100.12.23.34'}, {'provider_name': 'b', 'server': '199.134.3.01'}, {'provider_name': 'c', 'server': '123.143.2.34'}, {'provider_name': 'd', 'server': '154.234.4.66'}]
>>>
```
|
54,849,211
|
I have two lists:
```
providers = ["a", "b", "c", "d", "e"]
ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"]
```
I want the output to look like:
```
[{'provider_name':'a', 'server':'100.12.23.34'},.....]
```
How do I do this in python using a for loop?
|
2019/02/24
|
[
"https://Stackoverflow.com/questions/54849211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11082580/"
] |
Here is an easy to follow solution. For more reading on the zip method if necessary [click here](https://www.programiz.com/python-programming/methods/built-in/zip).
```
new = []
for i, j in zip(providers, ips):
new.append({"provider_name": i, "server": j})
```
|
Option without any zip:
```
providers = ["a", "b", "c", "d", "e"]
ips = ["100.12.23.34", "199.134.3.01", "123.143.2.34", "154.234.4.66"]
[ {'provider_name': providers[i], 'server': ips[i] } for i in range(len(ips)) ]
```
|
56,022,332
|
I have been trying to upload a Pandas dataframe to a JSON object in Cloud Storage using a Cloud Function. Following is my code -
```
def upload_blob(bucket_name, source_file_name, destination_blob_name):
"""Uploads a file to the bucket."""
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_file(source_file_name)
print('File {} uploaded to {}.'.format(
source_file_name,
destination_blob_name))
final_file = pd.concat([df, df_second], axis=0)
final_file.to_json('/tmp/abc.json')
with open('/tmp/abc.json', 'r') as file_obj:
upload_blob('test-bucket',file_obj,'abc.json')
```
I am getting the following error in line - blob.upload\_from\_file(source\_file\_name)
```
Deployment failure:
Function failed on loading user code. Error message: Code in file main.py
can't be loaded.
Detailed stack trace: Traceback (most recent call last):
File "/env/local/lib/python3.7/site-
packages/google/cloud/functions/worker.py", line 305, in
check_or_load_user_function
_function_handler.load_user_function()
File "/env/local/lib/python3.7/site-
packages/google/cloud/functions/worker.py", line 184, in load_user_function
spec.loader.exec_module(main)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/user_code/main.py", line 6, in <module>
import datalab.storage as gcs
File "/env/local/lib/python3.7/site-packages/datalab/storage/__init__.py",
line 16, in <module>
from ._bucket import Bucket, Buckets
File "/env/local/lib/python3.7/site-packages/datalab/storage/_bucket.py",
line 21, in <module>
import datalab.context
File "/env/local/lib/python3.7/site-packages/datalab/context/__init__.py",
line 15, in <module>
from ._context import Context
File "/env/local/lib/python3.7/site-packages/datalab/context/_context.py",
line 20, in <module>
from . import _project
File "/env/local/lib/python3.7/site-packages/datalab/context/_project.py",
line 18, in <module>
import datalab.utils
File "/env/local/lib/python3.7/site-packages/datalab/utils/__init__.py",
line 15
from ._async import async, async_function, async_method
^
SyntaxError: invalid syntax
```
What possibly is the error?
|
2019/05/07
|
[
"https://Stackoverflow.com/questions/56022332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4853331/"
] |
You are passing a string to [blob.upload\_from\_file()](https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_file), but this method requires a file object. You probably want to use [blob.upload\_from\_filename()](https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_filename) instead. Check the sample in [the GCP docs](https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python).
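For example, a minimal sketch of that variant (assuming `from google.cloud import storage` as in your function; `upload_blob_from_filename` is just a made-up helper name):
```py
def upload_blob_from_filename(bucket_name, source_file_name, destination_blob_name):
    """Uploads a local file to the bucket by path instead of by file object."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)

upload_blob_from_filename('test-bucket', '/tmp/abc.json', 'abc.json')
```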
Alternatively, you could get the file object, and keep using blob.upload\_from\_file(), but it's unnecessary extra lines.
```py
with open('/tmp/abc.json', 'r') as file_obj:
upload_blob('test-bucket', file_obj, 'abc.json')
```
|
Use a bucket object instead of a string,
something like `upload_blob(conn.get_bucket(mybucket), '/tmp/abc.json', 'abc.json')`
|
39,040,250
|
I have scraped a website with scrapy and stored the data in a json file.
Link to the json file: <https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing>
But the json isn't standard json and gives errors:
```
>>> import json
>>> with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file:
... data = json.load(file)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/root/anaconda2/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/root/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/root/anaconda2/lib/python2.7/json/decoder.py", line 367, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 3 column 2 - line 3697 column 2 (char 45 - 3661517)
```
Then I tried this:
```
with open('/root/code/itjuzi/itjuzi/investorinfo.json','rb') as f:
data = f.readlines()
data = map(lambda x: x.decode('unicode_escape'), data)
>>> df = pd.DataFrame(data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pd' is not defined
>>> import pandas as pd
>>> df = pd.DataFrame(data)
>>> print pd
<module 'pandas' from '/root/anaconda2/lib/python2.7/site-packages/pandas/__init__.pyc'>
>>> print df
[3697 rows x 1 columns]
```
Why does this only return 1 column?
How can I standardize the json file and read it with pandas correctly?
|
2016/08/19
|
[
"https://Stackoverflow.com/questions/39040250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2672481/"
] |
try this:
```
import json
with open('data.json') as data_file:
data = json.load(data_file)
```
This has the advantage of dealing well with large JSON files that do not fit in memory
EDIT:
Your data is not valid JSON.
Delete the following in the first 3 lines and it will validate:
```
[{
"website": ["\u5341\u65b9\u521b\u6295"]
}]
```
EDIT2[Since you need to access nested values from json]:
You can now also access single values like this:
```
data["one"][0]["id"] # will return 'value'
data["two"]["id"] # will return 'value'
data["three"] # will return 'value'
```
|
Try the following code (you are missing something):
```
>>> import json
>>> with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file:
... data = json.loads(file.read())
```
|
5,852,199
|
I'm writing a web application (<http://www.checkio.org/>) which allows users to write python code. As one feedback metric among many, I'd like to enable profiling while running checks on this code. This is to allow users to get a very rough idea of the relative efficiency of various solutions.
I need the profile to be (reasonably) deterministic. I don't want other load on the web server to give a bad efficiency reading. Also, I'm worried that some profilers won't give a good measurement because these short scripts run so quickly. The timeit module shows a function being run thousands of time, but I'd like to not waste server reasources on this small features if possible.
It's not clear which (if any) of the standard profilers meet this need. Ideally the profiler would give units of "interpreter bytecode ticks" which would increment one per bytecode instruction. This would be a *very* rough measure, but meets the requirements of determinism and high-precision.
Which profiling system should I use?
|
2011/05/01
|
[
"https://Stackoverflow.com/questions/5852199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] |
Python's standard profiler module provides [deterministic profiling](http://docs.python.org/library/profile.html#what-is-deterministic-profiling).
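For illustration (this sketch is not part of the original answer; `user_solution` is a hypothetical stand-in for the submitted code), deterministic profiling with the standard `cProfile` and `pstats` modules could look like this:
```
import cProfile
import io
import pstats

def user_solution():
    # Hypothetical user-submitted code to be profiled.
    return sum(i * i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
user_solution()
profiler.disable()

# Report the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```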
|
I also suggest giving yappi a try (http://code.google.com/p/yappi/). As of v0.62, it supports CPU time profiling and you can stop the profiler at any time you want...
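A rough usage sketch, assuming a reasonably recent yappi release (the exact API may differ between versions), might look like this:
```
import yappi

def work():
    return sum(i * i for i in range(1000))

yappi.set_clock_type("cpu")  # measure CPU time rather than wall time
yappi.start()
work()
yappi.stop()  # the profiler can be stopped at any point
yappi.get_func_stats().print_all()
```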
|
45,363,629
|
Let's say I have a list l1 = [a,b,c,d,e] and I want to map it to a dictionary that would contain the following {a:1, b:2, c:3, d:4, e:5}
I know how to do it in a very naive way, but I would like something more 'pythonic'
The naive way:
```
dic = {}
j = 1
for i in list1:
dic[i] = j
j += 1
```
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45363629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4369996/"
] |
How about using a dictionary comprehension:
```
>>> {v: k for k, v in enumerate(l1, 1)}
{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
```
|
Just to make up for the flub earlier... You can use the `dict` type constructor with `itertools.count` and `zip`:
```
>>> L1 = ['a','b','c','d']
>>> from itertools import count
>>> dict(zip(L1, count(1)))
{'c': 3, 'b': 2, 'a': 1, 'd': 4}
```
|
65,056,382
|
I created a script that runs perfectly fine in visual studio code but I'm now trying to automate the script, which is proving to be a little tricky. I've turned the file into a Unix executable file for the automation but when I click on my script, the code that I’ve implemented doesn’t do what I want it to.
I’ve got a line of code that changes the name of all of the .csv files within a directory.
This is the particular line of code that I was talking about…
```
spath='/Users/emmanuel/Documents/Selenium/'
sourcefiles = os.listdir(spath)
for file in sourcefiles:
if file.endswith('.csv'):
os.rename(os.path.join(spath,file), "CC.csv")
```
The name that I want to change the .csv file to is CC.csv, and I use this coding method because the .csv file has a different name every time it's downloaded.
The issue I'm having now is that, when the script reaches this part of the code, the .csv file just disappears from the source directory, but I want to prevent that from happening. The reason is that I need the file to stay in there in order to be further manipulated by the pandas module.
Is there a way to rename all of the .csv files in a directory and keep them in the source directory after being renamed?
I’ve run this code in VSC and it runs perfectly fine, but when I run it as a Unix executable file I get this error…
```
Traceback (most recent call last):
File "/Users/emmanuel/Documents/Selenium/OptionsBotCode.py", line 120, in <module>
CC = pd.read_csv('/Users/emmanuel/Documents/Selenium/CC.csv')
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 688, in read_csv
return _read(filepath_or_buffer, kwds)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 454, in _read
parser = TextFileReader(fp_or_buf, **kwds)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 948, in __init__
self._make_engine(self.engine)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 1180, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/io/parsers.py", line 2010, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 382, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 674, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: [Errno 2] No such file or directory: '/Users/emmanuel/Documents/Selenium/CC.csv'
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
```
I’ve already tried using
```
print(os.getcwd())
```
And I’ve been able to find where the file supposedly travels to but when I look in the directory the file doesn’t appear to be there.
To put things into context, I need the ‘pandas’ module to access the file (in order to manipulate it), but I need to either change the file name according to the file type (.csv) or have pandas access the file type as opposed to the file name.
This is because, every-time this .csv file is downloaded, it has a different name, and in order to use the
```
pandas.read_csv
```
function, the bot would need the file to have the same name every time so it could repeat the process.
I won’t have any issue with changing multiple file names because, at the start of my script, there is a piece of code that clears out all existing .csv files in the directory so it will be only one file that is targeted. I now understand that rename does more than just rename. If the source and destination are both in the same folder, it will move the renamed file into a new directory. However, I want to keep the file in the same directory
|
2020/11/29
|
[
"https://Stackoverflow.com/questions/65056382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You just need to escape the backslash `\`, so it turns into two backslashes `\\`
```js
console.log(JSON.parse('{"x":"Hello \\" test"}'))
```
|
```js
let mydata = `{"x":"Hello \" test "}`
let escapeJsonFunc = function(str) {
return str.replace(/\\/g,'\\');
};
console.log( escapeJsonFunc(mydata) )
```
|
57,728,801
|
I have a large (around 200Mb) single-line json file and I want to convert this to a more readable multi-line json (or txt) file.
I tried to open the file with text editors like sublime text and it takes forever to open. So, I would like to make the conversion without opening the file.
Therefore, I cannot use the interface suggested in [this](https://stackoverflow.com/questions/17003470/how-do-i-convert-a-single-line-json-file-into-a-human-readable-multi-line-file) SO question.
I tried to `pretty-print` the json file as suggested in [this](https://stackoverflow.com/a/46301431/8889727) answer by doing the following.
```
cat myjsonfile.json | python -m json.tool > pretty.json
```
But the terminal prints the following message and I get an empty `pretty.json` file.
```
Extra data: line 1 column 34255 - line 1 column 173769197 (char 34254 - 173769196)
```
I'm thinking of installing visual basic, just to convert the file. But is there a better and efficient way to do the conversion?
|
2019/08/30
|
[
"https://Stackoverflow.com/questions/57728801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8889727/"
] |
The simplest method would be using `jq` to pretty print the json:
```
jq . myjsonfile.json > pretty.json
```
But from the python output, I suspect the json file may be ill-formed.
|
if you can identify a character sequence that ends a line (e.g. a curly bracket followed by a semicolon) you can use sed for it
```
$ sed 's/};/\n/g' <<< "my};test};string"
my
test
string
```
|
57,728,801
|
I have a large (around 200Mb) single-line json file and I want to convert this to a more readable multi-line json (or txt) file.
I tried to open the file with text editors like sublime text and it takes forever to open. So, I would like to make the conversion without opening the file.
Therefore, I cannot use the interface suggested in [this](https://stackoverflow.com/questions/17003470/how-do-i-convert-a-single-line-json-file-into-a-human-readable-multi-line-file) SO question.
I tried to `pretty-print` the json file as suggested in [this](https://stackoverflow.com/a/46301431/8889727) answer by doing the following.
```
cat myjsonfile.json | python -m json.tool > pretty.json
```
But the terminal prints the following message and I get an empty `pretty.json` file.
```
Extra data: line 1 column 34255 - line 1 column 173769197 (char 34254 - 173769196)
```
I'm thinking of installing visual basic, just to convert the file. But is there a better and efficient way to do the conversion?
|
2019/08/30
|
[
"https://Stackoverflow.com/questions/57728801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8889727/"
] |
The simplest method would be using `jq` to pretty print the json:
```
jq . myjsonfile.json > pretty.json
```
But from the python output, I suspect the json file may be ill-formed.
|
alternatively, you can use [**`jtc`**](https://github.com/ldn-softdev/jtc) unix utility to pretty-print your one-liner json:
```
jtc myjsonfile.json
```
you can use there `-t` option to control indentation.
If you like *to convert* `myjsonfile.json` from one-liner into pretty-printed, then use option `-f`:
```
jtc -f myjsonfile.json
```
btw, to *convert it back* to one-liner again: `jtc -fr myjsonfile.json`
PS> Disclosure: I'm the creator of the [**`jtc`**](https://github.com/ldn-softdev/jtc) - shell cli tool for JSON operations
|
57,728,801
|
I have a large (around 200Mb) single-line json file and I want to convert this to a more readable multi-line json (or txt) file.
I tried to open the file with text editors like sublime text and it takes forever to open. So, I would like to make the conversion without opening the file.
Therefore, I cannot use the interface suggested in [this](https://stackoverflow.com/questions/17003470/how-do-i-convert-a-single-line-json-file-into-a-human-readable-multi-line-file) SO question.
I tried to `pretty-print` the json file as suggested in [this](https://stackoverflow.com/a/46301431/8889727) answer by doing the following.
```
cat myjsonfile.json | python -m json.tool > pretty.json
```
But the terminal prints the following message and I get an empty `pretty.json` file.
```
Extra data: line 1 column 34255 - line 1 column 173769197 (char 34254 - 173769196)
```
I'm thinking of installing visual basic, just to convert the file. But is there a better and efficient way to do the conversion?
|
2019/08/30
|
[
"https://Stackoverflow.com/questions/57728801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8889727/"
] |
alternatively, you can use [**`jtc`**](https://github.com/ldn-softdev/jtc) unix utility to pretty-print your one-liner json:
```
jtc myjsonfile.json
```
you can use there `-t` option to control indentation.
If you like *to convert* `myjsonfile.json` from one-liner into pretty-printed, then use option `-f`:
```
jtc -f myjsonfile.json
```
btw, to *convert it back* to one-liner again: `jtc -fr myjsonfile.json`
PS> Disclosure: I'm the creator of the [**`jtc`**](https://github.com/ldn-softdev/jtc) - shell cli tool for JSON operations
|
if you can identify a character sequence that ends a line (e.g. a curly bracket followed by a semicolon) you can use sed for it
```
$ sed 's/};/\n/g' <<< "my};test};string"
my
test
string
```
|
17,507,799
|
I've built a little app engine app that lets users upload short recordings. Some of the recordings are done in-browser with <https://github.com/mattdiamond/Recorderjs>, which creates wav files. To save space, I'd like to convert those to ogg before writing them to the app engine datastore, so that I use less of my outgoing bandwidth when I play the audio recordings back to users.
How can I do this? I googled around, and apparently there's a command line tool called oggenc that encodes to ogg -- but I'm pretty sure I can't install that (or, even if I could install it, make calls to it) on app engine.
I found a similar question at [Encode audio from getUserMedia() to a .OGG in JavaScript](https://stackoverflow.com/questions/16821919/encode-audio-from-getusermedia-to-a-ogg-in-javascript) -- this links to <https://github.com/jpemartins/speex.js>, a project that looks like it might eventually be able to convert from wav to ogg in javascript (which would be great), but, as far as I can tell, does not do so at the moment. At <https://github.com/jpemartins/speex.js/issues/4> the authors mentions that WAV -> ... -> OGG is not yet possible.
What else should I try?
Edit: My app engine code is written in Python, so another possibility would be to do the conversion there, with a python module that can convert wav to ogg. I think <http://pymedia.org/> can do this, but I'd have to somehow install it on app engine -- is that possible?
|
2013/07/06
|
[
"https://Stackoverflow.com/questions/17507799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/610668/"
] |
Pymedia isn't pure python so you won't be able to use it on app engine.
You probably want to build something on Compute Engine to do this.
|
Provided it's possible to replace Matt Diamond's recorderjs with its fork, [chris-rudmin/Recorderjs](https://github.com/chris-rudmin/Recorderjs) ([demo page](https://rawgit.com/chris-rudmin/Recorderjs/master/example.html)) in AppEngine, this should be feasible. Or first encode to WAV and use [opusenc.js](https://github.com/Rillke/opusenc.js) ([demo page](https://blog.rillke.com/opusenc.js/)), which is an Emscripten port of the Opusenc tool, to convert a temporary WAV file to Ogg-Opus client side.
|
20,423,599
|
If I write the following in python, I get a syntax error, why so?
```
a = 1
b = (a+=1)
```
I am using python version 2.7
what I get when I run it, the following:
```
>>> a = 1
>>> b = (a +=1)
File "<stdin>", line 1
b = (a +=1)
^
SyntaxError: invalid syntax
>>>
```
|
2013/12/06
|
[
"https://Stackoverflow.com/questions/20423599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1932405/"
] |
Unlike in some other languages, assignment (including augmented assignment, like `+=`) in Python is *not* an expression. This also affects things like this:
```
(a=1) > 2
```
which is legal in C, and several other languages.
The reason generally given for this is because it helps to prevent a class of bugs like this:
```
if a = 1: # instead of ==
pass
else:
pass
```
since assignment isn't an expression, this is a SyntaxError in Python. In the equivalent C code, it is a subtle bug where the variable will be *modified* rather than checked, the check will *always* be true (in C, like in Python, a non-zero integer is always truthy), and the else block can never fire.
You *can* still do chained assignment in Python, so this works:
```
>>> a = 1
>>> a = b = a+1
>>> a
2
>>> b
2
```
|
`a += 1` is a statement in Python, and you can't assign a statement to a variable. Though it is valid syntax in languages like C, PHP, etc., it is not in Python.
```
b = (a+=1)
```
An equivalent version will be:
```
>>> a = 1
>>> a += 1
>>> b = a
```
|
20,423,599
|
If I write the following in python, I get a syntax error, why so?
```
a = 1
b = (a+=1)
```
I am using python version 2.7
what I get when I run it, the following:
```
>>> a = 1
>>> b = (a +=1)
File "<stdin>", line 1
b = (a +=1)
^
SyntaxError: invalid syntax
>>>
```
|
2013/12/06
|
[
"https://Stackoverflow.com/questions/20423599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1932405/"
] |
`a += 1` is a statement in Python, and you can't assign a statement to a variable. Though it is valid syntax in languages like C, PHP, etc., it is not in Python.
```
b = (a+=1)
```
An equivalent version will be:
```
>>> a = 1
>>> a += 1
>>> b = a
```
|
All the answers provided here are good, I just want to add that you can achieve what you want in a one-line expression, but written in a different manner:
```
b, a = a+1, a+1
```
Here you're doing *almost* the same thing: incrementing `a` by 1, and assigning the value of `a+1` to `b` - I say 'almost' because here we have two summations instead of one.
|
20,423,599
|
If I write the following in python, I get a syntax error, why so?
```
a = 1
b = (a+=1)
```
I am using python version 2.7
what I get when I run it, the following:
```
>>> a = 1
>>> b = (a +=1)
File "<stdin>", line 1
b = (a +=1)
^
SyntaxError: invalid syntax
>>>
```
|
2013/12/06
|
[
"https://Stackoverflow.com/questions/20423599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1932405/"
] |
Unlike in some other languages, assignment (including augmented assignment, like `+=`) in Python is *not* an expression. This also affects things like this:
```
(a=1) > 2
```
which is legal in C, and several other languages.
The reason generally given for this is because it helps to prevent a class of bugs like this:
```
if a = 1: # instead of ==
pass
else:
pass
```
since assignment isn't an expression, this is a SyntaxError in Python. In the equivalent C code, it is a subtle bug where the variable will be *modified* rather than checked, the check will *always* be true (in C, like in Python, a non-zero integer is always truthy), and the else block can never fire.
You *can* still do chained assignment in Python, so this works:
```
>>> a = 1
>>> a = b = a+1
>>> a
2
>>> b
2
```
|
As @Ashwini stated, `a+=1` is an assignment, not a value. You can't assign it to `b`, or any variable. What you probably want is:
```
b = a+1
```
|
20,423,599
|
If I write the following in python, I get a syntax error, why so?
```
a = 1
b = (a+=1)
```
I am using python version 2.7
what I get when I run it, the following:
```
>>> a = 1
>>> b = (a +=1)
File "<stdin>", line 1
b = (a +=1)
^
SyntaxError: invalid syntax
>>>
```
|
2013/12/06
|
[
"https://Stackoverflow.com/questions/20423599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1932405/"
] |
As @Ashwini stated, `a+=1` is an assignment, not a value. You can't assign it to `b`, or any variable. What you probably want is:
```
b = a+1
```
|
All the answers provided here are good, I just want to add that you can achieve what you want in a one-line expression, but written in a different manner:
```
b, a = a+1, a+1
```
Here you're doing *almost* the same thing: incrementing `a` by 1, and assigning the value of `a+1` to `b` - I say 'almost' because here we have two summations instead of one.
|
20,423,599
|
If I write the following in python, I get a syntax error, why so?
```
a = 1
b = (a+=1)
```
I am using python version 2.7
what I get when I run it, the following:
```
>>> a = 1
>>> b = (a +=1)
File "<stdin>", line 1
b = (a +=1)
^
SyntaxError: invalid syntax
>>>
```
|
2013/12/06
|
[
"https://Stackoverflow.com/questions/20423599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1932405/"
] |
Unlike in some other languages, assignment (including augmented assignment, like `+=`) in Python is *not* an expression. This also affects things like this:
```
(a=1) > 2
```
which is legal in C, and several other languages.
The reason generally given for this is because it helps to prevent a class of bugs like this:
```
if a = 1: # instead of ==
pass
else:
pass
```
since assignment isn't an expression, this is a SyntaxError in Python. In the equivalent C code, it is a subtle bug where the variable will be *modified* rather than checked, the check will *always* be true (in C, like in Python, a non-zero integer is always truthy), and the else block can never fire.
You *can* still do chained assignment in Python, so this works:
```
>>> a = 1
>>> a = b = a+1
>>> a
2
>>> b
2
```
|
All the answers provided here are good, I just want to add that you can achieve what you want in a one-line expression, but written in a different manner:
```
b, a = a+1, a+1
```
Here you're doing *almost* the same thing: incrementing `a` by 1, and assigning the value of `a+1` to `b` - I say 'almost' because here we have two summations instead of one.
|
1,423,214
|
I have this python code for opening a .cfg file, writing to it and saving it:
```
import ConfigParser
def get_lock_file():
cf = ConfigParser.ConfigParser()
cf.read("svn.lock")
return cf
def save_lock_file(configurationParser):
cf = configurationParser
config_file = open('svn.lock', 'w')
cf.write(config_file)
config_file.close()
```
Does this seem normal or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files?
I ask because I have two methods that seem to do the same thing, they get the config file handle ('cf') call cf.set('blah', 'foo' bar) then use the save\_lock\_file(cf) call above. For one method it works and for the other method the write never takes place, unsure why at this point.
```
def used_like_this():
cf = get_lock_file()
cf.set('some_prop_section', 'some_prop', 'some_value')
save_lock_file(cf)
```
|
2009/09/14
|
[
"https://Stackoverflow.com/questions/1423214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/128508/"
] |
Just to note that configuration file handling is simpler with ConfigObj.
To read and then write a config file:
```
from configobj import ConfigObj
config = ConfigObj(filename)
value = config['entry']
config['entry'] = newvalue
config.write()
```
|
Looks good to me.
If both places call `get_lock_file`, then `cf.set(...)`, and then `save_lock_file`, and no exceptions are raised, this should work.
If you have different threads or processes accessing the same file you could have a race condition:
1. thread/process A reads the file
2. thread/process B reads the file
3. thread/process A updates the file
4. thread/process B updates the file
Now the file only contains B's updates, not A's.
Also, for safe file writing, don't forget the `with` statement (Python 2.5 and up), it'll save you a try/finally (which you should be using if you're not using `with`). From `ConfigParser`'s docs:
```
with open('example.cfg', 'wb') as configfile:
config.write(configfile)
```
|
1,423,214
|
I have this python code for opening a .cfg file, writing to it and saving it:
```
import ConfigParser
def get_lock_file():
cf = ConfigParser.ConfigParser()
cf.read("svn.lock")
return cf
def save_lock_file(configurationParser):
cf = configurationParser
config_file = open('svn.lock', 'w')
cf.write(config_file)
config_file.close()
```
Does this seem normal or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files?
I ask because I have two methods that seem to do the same thing, they get the config file handle ('cf') call cf.set('blah', 'foo' bar) then use the save\_lock\_file(cf) call above. For one method it works and for the other method the write never takes place, unsure why at this point.
```
def used_like_this():
cf = get_lock_file()
cf.set('some_prop_section', 'some_prop', 'some_value')
save_lock_file(cf)
```
|
2009/09/14
|
[
"https://Stackoverflow.com/questions/1423214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/128508/"
] |
Just to note that configuration file handling is simpler with ConfigObj.
To read and then write a config file:
```
from configobj import ConfigObj
config = ConfigObj(filename)
value = config['entry']
config['entry'] = newvalue
config.write()
```
|
Works for me.
```
C:\temp>type svn.lock
[some_prop_section]
Hello=World
C:\temp>python
ActivePython 2.6.2.2 (ActiveState Software Inc.) based on
Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigParser
>>> def get_lock_file():
... cf = ConfigParser.ConfigParser()
... cf.read("svn.lock")
... return cf
...
>>> def save_lock_file(configurationParser):
... cf = configurationParser
... config_file = open('svn.lock', 'w')
... cf.write(config_file)
... config_file.close()
...
>>> def used_like_this():
... cf = get_lock_file()
... cf.set('some_prop_section', 'some_prop', 'some_value')
... save_lock_file(cf)
...
>>> used_like_this()
>>> ^Z
C:\temp>type svn.lock
[some_prop_section]
hello = World
some_prop = some_value
C:\temp>
```
|
64,436,875
|
I have a python package project 'webapi' and I want to set up in a way so that other people can "pip install webapi". If I want to put it on a private server with a specific ip: xx.xx.xx.xx.
So other people with the access right don't need to git clone the project and install it locally into their virtual environment. Instead, they can simply do:
```
pip install webapi
```
And they can start to use it, just as use other public python libraries. In order to do this, how can I start with this? Is there some tutorial to help with this? I tried a few keywords to search for instruction, but haven't found something useful.
|
2020/10/20
|
[
"https://Stackoverflow.com/questions/64436875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3943868/"
] |
I guess the question is a bit unclear. If you want to upload your webapi package to PyPI, you can read [this article](https://medium.com/@joel.barmettler/how-to-upload-your-python-package-to-pypi-65edc5fe9c56), but that will make your package public, and I'm not quite sure that's what you want.
If what you want is a private PyPI server, then check out the package [private-pypi](https://pypi.org/project/private-pypi/). There's a whole description of how to set up your server and use it in the documentation.
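As an illustration (the index URL, port, and path below are hypothetical and depend on how the private server is configured), clients would then point pip at the private index with something like:
```
pip install webapi --index-url http://xx.xx.xx.xx:8080/simple/ --trusted-host xx.xx.xx.xx
```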
|
It seems like you need a software repository management application such as [Pulp](https://pulpproject.org/). Take a look at [their plugin section](https://pulpproject.org/content-plugins/); their documentation is [here](https://pulp-python.readthedocs.io/en/latest/). I use it as a private Python repository for systems that are isolated from the internet for security reasons, but it can also host your own custom Python packages.
|
64,436,875
|
I have a python package project 'webapi' and I want to set up in a way so that other people can "pip install webapi". If I want to put it on a private server with a specific ip: xx.xx.xx.xx.
So other people with the access right don't need to git clone the project and install it locally into their virtual environment. Instead, they can simply do:
```
pip install webapi
```
And they can start to use it, just as use other public python libraries. In order to do this, how can I start with this? Is there some tutorial to help with this? I tried a few keywords to search for instruction, but haven't found something useful.
|
2020/10/20
|
[
"https://Stackoverflow.com/questions/64436875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3943868/"
] |
I guess the question is a bit unclear. If you want to upload your webapi package to PyPI, you can read [this article](https://medium.com/@joel.barmettler/how-to-upload-your-python-package-to-pypi-65edc5fe9c56), but that will make your package public, and I'm not quite sure that's what you want.
If what you want is a private PyPI server, then check out the package [private-pypi](https://pypi.org/project/private-pypi/). There's a whole description of how to set up your server and use it in the documentation.
|
Linode provides another way of setting up a private pip repo. Most important is to work with your team to decide which package dependencies need to be maintained.
[guides/how-to-create-a-private-python-package-repository/](https://www.linode.com/docs/guides/how-to-create-a-private-python-package-repository/)
|
2,726,171
|
I am trying to change the font size using python's ImageDraw library.
You can do something like [this](http://infohost.nmt.edu/tcc/help/pubs/pil/image-font.html):
```
fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"
sans16 = ImageFont.truetype ( fontPath, 16 )
im = Image.new ( "RGB", (200,50), "#ddd" )
draw = ImageDraw.Draw ( im )
draw.text ( (10,10), "Run awayyyy!", font=sans16, fill="red" )
```
The problem is that I don't want to specify a font. I want to use the default font and just change the size of the font. This seems to me that it *should* be simple, but I can't find documentation on how to do this.
|
2010/04/28
|
[
"https://Stackoverflow.com/questions/2726171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889/"
] |
Per [PIL's docs](http://www.pythonware.com/library/pil/handbook/imagedraw.htm), `ImageDraw`'s default font is a bitmap font, and therefore it cannot be scaled. For scaling, you need to select a true-type font. I hope it's not difficult to find a nice truetype font that "looks kinda like" the default font in your desired font-size!
|
Just do this
```
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import sys

def image_char(char, image_size, font_size):
    # Draw a single character on a white square image at the given font size.
    img = Image.new("RGB", (image_size, image_size), (255, 255, 255))
    print img.getpixel((0, 0))
    draw = ImageDraw.Draw(img)
    font_path = "/Users/admin/Library/Fonts/InputSans-Regular.ttf"
    font = ImageFont.truetype(font_path, font_size)
    draw.text((5, 5), char, (0, 0, 0), font=font)
    img.show()

def main():
    image_char("A", 36, 16)

if __name__ == '__main__':
    sys.exit(main())
```
|
2,726,171
|
I am trying to change the font size using python's ImageDraw library.
You can do something like [this](http://infohost.nmt.edu/tcc/help/pubs/pil/image-font.html):
```
fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"
sans16 = ImageFont.truetype ( fontPath, 16 )
im = Image.new ( "RGB", (200,50), "#ddd" )
draw = ImageDraw.Draw ( im )
draw.text ( (10,10), "Run awayyyy!", font=sans16, fill="red" )
```
The problem is that I don't want to specify a font. I want to use the default font and just change the size of the font. This seems to me that it *should* be simple, but I can't find documentation on how to do this.
|
2010/04/28
|
[
"https://Stackoverflow.com/questions/2726171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889/"
] |
Per [PIL's docs](http://www.pythonware.com/library/pil/handbook/imagedraw.htm), `ImageDraw`'s default font is a bitmap font, and therefore it cannot be scaled. For scaling, you need to select a true-type font. I hope it's not difficult to find a nice truetype font that "looks kinda like" the default font in your desired font-size!
|
You could scale the image down, draw the text and then scale the image up again:
```
im = ImageOps.scale(im, 0.5)
draw = ImageDraw.Draw(im)
draw.text((0, 0),"MyText",(0,0,0))
im = ImageOps.scale(im, 2)
```
This effectively increases the font size by a factor of 2.
|
2,726,171
|
I am trying to change the font size using python's ImageDraw library.
You can do something like [this](http://infohost.nmt.edu/tcc/help/pubs/pil/image-font.html):
```
fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"
sans16 = ImageFont.truetype ( fontPath, 16 )
im = Image.new ( "RGB", (200,50), "#ddd" )
draw = ImageDraw.Draw ( im )
draw.text ( (10,10), "Run awayyyy!", font=sans16, fill="red" )
```
The problem is that I don't want to specify a font. I want to use the default font and just change the size of the font. This seems to me that it *should* be simple, but I can't find documentation on how to do this.
|
2010/04/28
|
[
"https://Stackoverflow.com/questions/2726171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889/"
] |
Just do this
```
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import sys

def image_char(char, image_size, font_size):
    # Draw a single character on a white square image at the given font size.
    img = Image.new("RGB", (image_size, image_size), (255, 255, 255))
    print img.getpixel((0, 0))
    draw = ImageDraw.Draw(img)
    font_path = "/Users/admin/Library/Fonts/InputSans-Regular.ttf"
    font = ImageFont.truetype(font_path, font_size)
    draw.text((5, 5), char, (0, 0, 0), font=font)
    img.show()

def main():
    image_char("A", 36, 16)

if __name__ == '__main__':
    sys.exit(main())
```
|
You could scale the image down, draw the text and then scale the image up again:
```
im = ImageOps.scale(im, 0.5)
draw = ImageDraw.Draw(im)
draw.text((0, 0),"MyText",(0,0,0))
im = ImageOps.scale(im, 2)
```
This effectively increases the font size by a factor of 2.
|
24,701,171
|
I want to stream an "infinite" (i.e. continuous) amount of data using HTTP Post. Basically, I want to send the POST request header and then stream the content (where the content length is unknown). I looked through <http://docs.python-requests.org/en/latest/user/advanced/> and it seems to have the facility. The one question I have is it says in the document " To stream and upload, simply provide a file-like object for your body". What does "file-like" object mean? The data I wish to stream comes from a sensor. How do I implement a "file-like" object which will read data from the sensor and pass it to the caller?
Sorry about my ignorance here but I am feeling my way through python (i.e. learning as I go along. hmm.. looks like a snake. It feels slithery. Trying to avoid the business end of the critter... :-) ).
Thank you in advance for your help.
Ranga.
|
2014/07/11
|
[
"https://Stackoverflow.com/questions/24701171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2932861/"
] |
Just wanted to give you an answer so you could close your question:
It sounds like what you're really looking for is python websockets. Internally, you make a HTTP request to upgrade the connection to a websocket, and after the handshake you are free to stream data both ways. Python makes this easy, [for example](http://www.kennethreitz.org/essays/introducing-flask-sockets):
```
from flask import Flask
from flask_sockets import Sockets
app = Flask(__name__)
sockets = Sockets(app)
@sockets.route('/echo')
def echo_socket(ws):
while True:
message = ws.receive()
ws.send(message)
@app.route('/')
def hello():
return 'Hello World!'
```
Websockets do support full duplex communication, but you seem only interested in the server-to-client part. In that case you can just stream data using `ws.send()`. I'm not sure if this is what you're looking for, but it should provide a solution.
|
A file-like object is an object with a "read" method that accepts a size and returns a binary data buffer for the next chunk of data.
One example that looks like that is indeed, the file object, if you want to read from the filesystem.
Another common case is the [StringIO](https://docs.python.org/2/library/stringio.html) class, which reads and writes to a buffer.
In your case, you would need to implement a "file-like object" by yourself, which would simply read from the sensor.
```
class Sensor(object):
    def __init__(self, sensor_thing):
        self.sensor_thing = sensor_thing

    def read(self, size):
        # Pull the next reading and hand it back as binary data.
        return self.convert_to_binary(self.sensor_thing.read_from_sensor())

    def convert_to_binary(self, sensor_data):
        ....
```
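To show how such an object would be used, here is a hedged, self-contained sketch (the sensor class, sample count, and URL are made up for illustration); the file-like wrapper is simply passed as the request body:
```
import random
import requests

class FakeSensor(object):
    """Hypothetical stand-in for a real sensor driver."""
    def read_from_sensor(self):
        return random.random()

class SensorStream(object):
    """File-like wrapper: requests only needs a read() method."""
    def __init__(self, sensor, samples=100):
        self.sensor = sensor
        self.samples = samples

    def read(self, size=-1):
        if self.samples <= 0:
            return b""  # empty bytes signals the end of the stream
        self.samples -= 1
        return ("%f\n" % self.sensor.read_from_sensor()).encode()

# The body length is unknown, so requests streams it with chunked encoding.
requests.post("http://example.com/ingest", data=SensorStream(FakeSensor()))
```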
|
36,270,161
|
I want to get every value of 'Lemma' in this json:
```
{'sentences':
[{'indexeddependencies': [], 'words':
[
['Cinnamomum', {'CharacterOffsetBegin': '0', 'CharacterOffsetEnd': '10', 'Lemma': 'Cinnamomum', 'PartOfSpeech': 'NNP', 'NamedEntityTag': 'O'}],
['.', {'CharacterOffsetBegin': '14', 'CharacterOffsetEnd': '15', 'Lemma': '.', 'PartOfSpeech': '.', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'Cinnamomum.', 'dependencies': []
},
{'indexeddependencies': [], 'words':
[
['specific', {'CharacterOffsetBegin': '16', 'CharacterOffsetEnd': '24', 'Lemma': 'specific', 'PartOfSpeech': 'JJ', 'NamedEntityTag': 'O'}],
['immunoglobulin', {'CharacterOffsetBegin': '25', 'CharacterOffsetEnd': '39', 'Lemma': 'immunoglobulin', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}],
['measurement', {'CharacterOffsetBegin': '51', 'CharacterOffsetEnd': '62', 'Lemma': 'measurement', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'specific immunoglobulin measurement', 'dependencies': []
}]
}
```
How can I get every value using python? There are five Lemma keys but I can't get all of them.
I've tried this, but it doesn't work:
```
for i in range(len(words)): #in this case the range of i would be 5
lemma = result["sentences"][0]["words"][i][1]["Lemma"]
```
|
2016/03/28
|
[
"https://Stackoverflow.com/questions/36270161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6125957/"
] |
I'm not sure why you have this data structure - assuming you cannot change/reshape it to better suit your queries and use cases and that the `Lemma` key would always be present:
```
>>> [word[1]['Lemma']
for sentence in data['sentences']
for word in sentence['words']]
['Cinnamomum', '.', 'specific', 'immunoglobulin', 'measurement']
```
|
This simple code traverses everything and finds all Lemma values (btw, your JSON should have " instead of ' as string quotes, I guess):
```
import json
with open('lemma.json') as f:
data = json.load(f)
def traverse(node):
for key in node:
if isinstance(node, list):
traverse(key)
elif isinstance(node, dict):
if key == 'Lemma':
print key, node[key]
continue
traverse(node[key])
traverse(data)
```
|
36,270,161
|
I want to get every value of 'Lemma' in this json:
```
{'sentences':
[{'indexeddependencies': [], 'words':
[
['Cinnamomum', {'CharacterOffsetBegin': '0', 'CharacterOffsetEnd': '10', 'Lemma': 'Cinnamomum', 'PartOfSpeech': 'NNP', 'NamedEntityTag': 'O'}],
['.', {'CharacterOffsetBegin': '14', 'CharacterOffsetEnd': '15', 'Lemma': '.', 'PartOfSpeech': '.', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'Cinnamomum.', 'dependencies': []
},
{'indexeddependencies': [], 'words':
[
['specific', {'CharacterOffsetBegin': '16', 'CharacterOffsetEnd': '24', 'Lemma': 'specific', 'PartOfSpeech': 'JJ', 'NamedEntityTag': 'O'}],
['immunoglobulin', {'CharacterOffsetBegin': '25', 'CharacterOffsetEnd': '39', 'Lemma': 'immunoglobulin', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}],
['measurement', {'CharacterOffsetBegin': '51', 'CharacterOffsetEnd': '62', 'Lemma': 'measurement', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'specific immunoglobulin measurement', 'dependencies': []
}]
}
```
How can I get every value using python? There are five Lemma keys but I can't get all of them.
I've tried this, but it doesn't work:
```
for i in range(len(words)): #in this case the range of i would be 5
lemma = result["sentences"][0]["words"][i][1]["Lemma"]
```
|
2016/03/28
|
[
"https://Stackoverflow.com/questions/36270161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6125957/"
] |
I'm not sure why you have this data structure - assuming you cannot change/reshape it to better suit your queries and use cases and that the `Lemma` key would always be present:
```
>>> [word[1]['Lemma']
for sentence in data['sentences']
for word in sentence['words']]
['Cinnamomum', '.', 'specific', 'immunoglobulin', 'measurement']
```
|
You can use the [JSON encoder and decoder library](https://docs.python.org/2/library/json.html)
If you use that library you write:
`import json
json.loads(result)`
Anyway, I tried putting your JSON in a validator and I got an error.
|
36,270,161
|
I want to get every value of 'Lemma' in this json:
```
{'sentences':
[{'indexeddependencies': [], 'words':
[
['Cinnamomum', {'CharacterOffsetBegin': '0', 'CharacterOffsetEnd': '10', 'Lemma': 'Cinnamomum', 'PartOfSpeech': 'NNP', 'NamedEntityTag': 'O'}],
['.', {'CharacterOffsetBegin': '14', 'CharacterOffsetEnd': '15', 'Lemma': '.', 'PartOfSpeech': '.', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'Cinnamomum.', 'dependencies': []
},
{'indexeddependencies': [], 'words':
[
['specific', {'CharacterOffsetBegin': '16', 'CharacterOffsetEnd': '24', 'Lemma': 'specific', 'PartOfSpeech': 'JJ', 'NamedEntityTag': 'O'}],
['immunoglobulin', {'CharacterOffsetBegin': '25', 'CharacterOffsetEnd': '39', 'Lemma': 'immunoglobulin', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}],
['measurement', {'CharacterOffsetBegin': '51', 'CharacterOffsetEnd': '62', 'Lemma': 'measurement', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'specific immunoglobulin measurement', 'dependencies': []
}]
}
```
How can I get every value using python? There are five Lemma keys but I can't get all of them.
I've tried this, but it doesn't work:
```
for i in range(len(words)): #in this case the range of i would be 5
lemma = result["sentences"][0]["words"][i][1]["Lemma"]
```
|
2016/03/28
|
[
"https://Stackoverflow.com/questions/36270161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6125957/"
] |
1. change single quotes to double quotes by `sed -i 's/\'/\"/g' sample.json`
2. convert to json object and parse it by module `json`
`import json
with open('sample.json', encoding='utf-8') as data_file:
    data = json.loads(data_file.read())

for sentence in data['sentences']:
    for word in sentence['words']:
        print(word[1]['Lemma'])`
Result:
`Cinnamomum
.
specific
immunoglobulin
measurement`
|
This simple code traverses everything and finds all Lemma values (btw, your JSON should have " instead of ' as string quotes, I guess):
```
import json
with open('lemma.json') as f:
data = json.load(f)
def traverse(node):
for key in node:
if isinstance(node, list):
traverse(key)
elif isinstance(node, dict):
if key == 'Lemma':
print key, node[key]
continue
traverse(node[key])
traverse(data)
```
|
36,270,161
|
I want to get every value of 'Lemma' in this json:
```
{'sentences':
[{'indexeddependencies': [], 'words':
[
['Cinnamomum', {'CharacterOffsetBegin': '0', 'CharacterOffsetEnd': '10', 'Lemma': 'Cinnamomum', 'PartOfSpeech': 'NNP', 'NamedEntityTag': 'O'}],
['.', {'CharacterOffsetBegin': '14', 'CharacterOffsetEnd': '15', 'Lemma': '.', 'PartOfSpeech': '.', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'Cinnamomum.', 'dependencies': []
},
{'indexeddependencies': [], 'words':
[
['specific', {'CharacterOffsetBegin': '16', 'CharacterOffsetEnd': '24', 'Lemma': 'specific', 'PartOfSpeech': 'JJ', 'NamedEntityTag': 'O'}],
['immunoglobulin', {'CharacterOffsetBegin': '25', 'CharacterOffsetEnd': '39', 'Lemma': 'immunoglobulin', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}],
['measurement', {'CharacterOffsetBegin': '51', 'CharacterOffsetEnd': '62', 'Lemma': 'measurement', 'PartOfSpeech': 'NN', 'NamedEntityTag': 'O'}]
], 'parsetree': [], 'text': 'specific immunoglobulin measurement', 'dependencies': []
}]
}
```
How can I get every value using python? There are five Lemma keys but I can't get all of them.
I've tried this, but it doesn't work:
```
for i in range(len(words)): #in this case the range of i would be 5
lemma = result["sentences"][0]["words"][i][1]["Lemma"]
```
|
2016/03/28
|
[
"https://Stackoverflow.com/questions/36270161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6125957/"
] |
1. change single quotes to double quotes by `sed -i 's/\'/\"/g' sample.json`
2. convert to json object and parse it by module `json`
`import json
with open('sample.json', encoding='utf-8') as data_file:
    data = json.loads(data_file.read())

for sentence in data['sentences']:
    for word in sentence['words']:
        print(word[1]['Lemma'])`
Result:
`Cinnamomum
.
specific
immunoglobulin
measurement`
|
You can use the [JSON encoder and decoder library](https://docs.python.org/2/library/json.html)
If you use that library you write:
`import json
json.loads(result)`
Anyway, I tried putting your JSON in a validator and I got an error.
|
48,409,243
|
I tried to install google cloud module on Ubuntu 16.04 for python 3 but it shows `permission error 13`
this error is shown many times during installations for my python environment `PermissionError: [Errno 13] Permission denied: /usr/lib/python3/dist-packages/httplib2-0.9.1.egg-info`
|
2018/01/23
|
[
"https://Stackoverflow.com/questions/48409243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9177827/"
] |
These are documented well enough in the common literature:
[location](https://en.wikipedia.org/wiki/Location_parameter), [mu](https://en.wikipedia.org/wiki/Poisson_distribution), and the page you cited -- "well enough" is assuming that you're familiar enough with the field's vocabulary to work your way through the technical docs.
* `loc` is the N-dimensional reference point of the distribution, that centroid being chosen appropriately to the function. For this application, it's simply the left end of the desired distribution (scalar). This defaults to 0, and is only changed if your application starts at something other than 0.
* `mu` is the mean of the function.
* `size` is the sample size.
The Poisson distribution has only the one shape parameter: mu. The variance, mean, and frequency are lock-stepped to each other.
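To make that concrete, here is a small sketch (added for illustration, assuming `scipy.stats.poisson` as on the page cited in the question):
```
from scipy.stats import poisson

# 5 samples from a Poisson distribution with mean mu=3,
# shifted so the support starts at loc=10 instead of 0.
samples = poisson.rvs(mu=3, loc=10, size=5, random_state=0)
print(samples)  # five integers, each >= 10
```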
|
UHXW is asking what these arguments mean in simple terms; Prune's answer could be simplified.
The loc is like the lowest x value of your distribution, and the mu is like the middle of your distribution. Look at
<https://www.datacamp.com/community/tutorials/probability-distributions-python>
The uniform function generates a uniform continuous variable over the specified interval via its loc and scale arguments. This distribution is constant between loc and loc + scale. The size argument describes the number of random variates. If you want to maintain reproducibility, include a random_state argument assigned to a number.
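As an illustrative sketch of that description (added here, not part of the original answer):
```
from scipy.stats import uniform

# Uniform on the interval [2, 2 + 3) = [2, 5); random_state makes it reproducible.
samples = uniform.rvs(loc=2, scale=3, size=5, random_state=42)
print(samples)
```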
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
I suggest putting the hack into the class definition. This way, once the class is defined, it supports JSON. Example:
```
import json
class MyClass( object ):
def _jsonSupport( *args ):
def default( self, xObject ):
return { 'type': 'MyClass', 'name': xObject.name() }
def objectHook( obj ):
if 'type' not in obj:
return obj
if obj[ 'type' ] != 'MyClass':
return obj
return MyClass( obj[ 'name' ] )
json.JSONEncoder.default = default
json._default_decoder = json.JSONDecoder( object_hook = objectHook )
_jsonSupport()
def __init__( self, name ):
self._name = name
def name( self ):
return self._name
def __repr__( self ):
return '<MyClass(name=%s)>' % self._name
myObject = MyClass( 'Magneto' )
jsonString = json.dumps( [ myObject, 'some', { 'other': 'objects' } ] )
print "json representation:", jsonString
decoded = json.loads( jsonString )
print "after decoding, our object is the first in the list", decoded[ 0 ]
```
|
For a production environment, prepare your own module wrapping `json` with your own custom encoder, to make it clear that you are overriding something.
Monkey-patching is not recommended, but you can do it in your test env.
For example,
```
class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (datetime.date, datetime.datetime)):
return obj.date().isoformat()
elif isinstance(obj, basestring):
try:
number = phonenumbers.parse(obj)
except phonenumbers.NumberParseException:
return json.JSONEncoder.default(self, obj)
else:
return phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.NATIONAL)
else:
return json.JSONEncoder.default(self, obj)
```
You want:
`payload = json.dumps(your_data, cls=JSONDatetimeAndPhonesEncoder)`
or:
`payload = your_dumps(your_data)`
or:
`payload = your_json.dumps(your_data)`
However, in a testing environment, go ahead:
```
@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
json._default_encoder = JSONDatetimeAndPhonesEncoder()
```
which will apply your encoder to all `json.dumps` occurrences.
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
As I said in a comment to your question, after looking at the `json` module's source code, it does not appear to lend itself to doing what you want. However the goal could be achieved by what is known as [*monkey-patching*](https://en.wikipedia.org/wiki/Monkey-patch)
(see question [*What is a monkey patch?*](https://stackoverflow.com/questions/5626193/what-is-a-monkey-patch)).
This could be done in your package's `__init__.py` initialization script and would affect all subsequent `json` module serialization since modules are generally only loaded once and the result is cached in `sys.modules`.
The patch changes the default json encoder's `default` method—the default `default()`.
Here's an example implemented as a standalone module for simplicity's sake:
Module: `make_json_serializable.py`
```
""" Module that monkey-patches json module when it's imported so
JSONEncoder.default() automatically checks for a special "to_json()"
method and uses it to encode the object if found.
"""
from json import JSONEncoder
def _default(self, obj):
return getattr(obj.__class__, "to_json", _default.default)(obj)
_default.default = JSONEncoder.default # Save unmodified default.
JSONEncoder.default = _default # Replace it.
```
Using it is trivial since the patch is applied by simply importing the module.
Sample client script:
```
import json
import make_json_serializable # apply monkey-patch
class Foo(object):
def __init__(self, name):
self.name = name
def to_json(self): # New special method.
""" Convert to JSON format string representation. """
return '{"name": "%s"}' % self.name
foo = Foo('sazpaz')
print(json.dumps(foo)) # -> "{\"name\": \"sazpaz\"}"
```
To retain the object type information, the special method can also include it in the string returned:
```
return ('{"type": "%s", "name": "%s"}' %
(self.__class__.__name__, self.name))
```
Which produces the following JSON that now includes the class name:
```none
"{\"type\": \"Foo\", \"name\": \"sazpaz\"}"
```
***Magick Lies Here***
Even better than having the replacement `default()` look for a specially named method, would be for it to be able to serialize most Python objects *automatically*, including user-defined class instances, without needing to add a special method. After researching a number of alternatives, the following — based on an [answer](https://stackoverflow.com/a/8230373/355230) by @Raymond Hettinger to another question — which uses the `pickle` module, seemed closest to that ideal to me:
Module: `make_json_serializable2.py`
```
""" Module that imports the json module and monkey-patches it so
JSONEncoder.default() automatically pickles any Python objects
encountered that aren't standard JSON data types.
"""
from json import JSONEncoder
import pickle
def _default(self, obj):
return {'_python_object': pickle.dumps(obj)}
JSONEncoder.default = _default # Replace with the above.
```
Of course everything can't be pickled—extension types for example. However there are ways defined to handle them via the pickle protocol by writing special methods—similar to what you suggested and I described earlier—but doing that would likely be necessary for a far fewer number of cases.
**Deserializing**
Regardless, using the pickle protocol also means it would be fairly easy to reconstruct the original Python object by providing a custom `object_hook` function argument on any `json.loads()` calls that used any `'_python_object'` key in the dictionary passed in, whenever it has one. Something like:
```
def as_python_object(dct):
try:
return pickle.loads(str(dct['_python_object']))
except KeyError:
return dct
pyobj = json.loads(json_str, object_hook=as_python_object)
```
If this has to be done in many places, it might be worthwhile to define a wrapper function that automatically supplied the extra keyword argument:
```
json_pkloads = functools.partial(json.loads, object_hook=as_python_object)
pyobj = json_pkloads(json_str)
```
Naturally, this could be monkey-patched into the `json` module as well, making the function the default `object_hook` (instead of `None`).
I got the idea for using `pickle` from an [answer](https://stackoverflow.com/a/8230373/355230) by [Raymond Hettinger](https://stackoverflow.com/users/1001643/raymond-hettinger) to another JSON serialization question, whom I consider exceptionally credible as well as an official source (as in Python core developer).
### Portability to Python 3
The code above does not work as shown in Python 3 because `json.dumps()` returns a `bytes` object which the `JSONEncoder` can't handle. However the approach is still valid. A simple way to workaround the issue is to `latin1` "decode" the value returned from `pickle.dumps()` and then "encode" it from `latin1` before passing it on to `pickle.loads()` in the `as_python_object()` function. This works because arbitrary binary strings are valid `latin1` which can always be decoded to Unicode and then encoded back to the original string again (as pointed out in [this answer](https://stackoverflow.com/a/22621777/355230) by [Sven Marnach](https://stackoverflow.com/users/279627/sven-marnach)).
(Although the following works fine in Python 2, the `latin1` decoding and encoding it does is superfluous.)
```
from decimal import Decimal
class PythonObjectEncoder(json.JSONEncoder):
def default(self, obj):
return {'_python_object': pickle.dumps(obj).decode('latin1')}
def as_python_object(dct):
try:
return pickle.loads(dct['_python_object'].encode('latin1'))
except KeyError:
return dct
class Foo(object): # Some user-defined class.
def __init__(self, name):
self.name = name
def __eq__(self, other):
if type(other) is type(self): # Instances of same class?
return self.name == other.name
return NotImplemented
__hash__ = None
data = [1,2,3, set(['knights', 'who', 'say', 'ni']), {'key':'value'},
Foo('Bar'), Decimal('3.141592653589793238462643383279502884197169')]
j = json.dumps(data, cls=PythonObjectEncoder, indent=4)
data2 = json.loads(j, object_hook=as_python_object)
assert data == data2 # both should be same
```
|
For a production environment, prepare your own module wrapping `json` with your own custom encoder, to make it clear that you are overriding something.
Monkey-patching is not recommended, but you can do it in your test env.
For example,
```
class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (datetime.date, datetime.datetime)):
return obj.date().isoformat()
elif isinstance(obj, basestring):
try:
number = phonenumbers.parse(obj)
except phonenumbers.NumberParseException:
return json.JSONEncoder.default(self, obj)
else:
return phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.NATIONAL)
else:
return json.JSONEncoder.default(self, obj)
```
You want:
`payload = json.dumps(your_data, cls=JSONDatetimeAndPhonesEncoder)`
or:
`payload = your_dumps(your_data)`
or:
`payload = your_json.dumps(your_data)`
However, in a testing environment, go ahead:
```
@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
json._default_encoder = JSONDatetimeAndPhonesEncoder()
```
which will apply your encoder to all `json.dumps` occurrences.
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
You can extend the dict class like so:
```
#!/usr/local/bin/python3
import json
class Serializable(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# hack to fix _json.so make_encoder serialize properly
self.__setitem__('dummy', 1)
def _myattrs(self):
return [
(x, self._repr(getattr(self, x)))
for x in self.__dir__()
if x not in Serializable().__dir__()
]
def _repr(self, value):
if isinstance(value, (str, int, float, list, tuple, dict)):
return value
else:
return repr(value)
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)
def keys(self):
return iter([x[0] for x in self._myattrs()])
def values(self):
return iter([x[1] for x in self._myattrs()])
def items(self):
return iter(self._myattrs())
```
Now to make your classes serializable with the regular encoder, extend 'Serializable':
```
class MySerializableClass(Serializable):
attr_1 = 'first attribute'
attr_2 = 23
def my_function(self):
print('do something here')
obj = MySerializableClass()
```
`print(obj)` will print something like:
```
<__main__.MySerializableClass object at 0x1073525e8>
```
`print(json.dumps(obj, indent=4))` will print something like:
```
{
"attr_1": "first attribute",
"attr_2": 23,
"my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
```
|
I suggest putting the hack into the class definition. This way, once the class is defined, it supports JSON. Example:
```
import json
class MyClass( object ):
def _jsonSupport( *args ):
def default( self, xObject ):
return { 'type': 'MyClass', 'name': xObject.name() }
def objectHook( obj ):
if 'type' not in obj:
return obj
if obj[ 'type' ] != 'MyClass':
return obj
return MyClass( obj[ 'name' ] )
json.JSONEncoder.default = default
json._default_decoder = json.JSONDecoder( object_hook = objectHook )
_jsonSupport()
def __init__( self, name ):
self._name = name
def name( self ):
return self._name
def __repr__( self ):
return '<MyClass(name=%s)>' % self._name
myObject = MyClass( 'Magneto' )
jsonString = json.dumps( [ myObject, 'some', { 'other': 'objects' } ] )
print "json representation:", jsonString
decoded = json.loads( jsonString )
print "after decoding, our object is the first in the list", decoded[ 0 ]
```
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
You can extend the dict class like so:
```
#!/usr/local/bin/python3
import json
class Serializable(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# hack to fix _json.so make_encoder serialize properly
self.__setitem__('dummy', 1)
def _myattrs(self):
return [
(x, self._repr(getattr(self, x)))
for x in self.__dir__()
if x not in Serializable().__dir__()
]
def _repr(self, value):
if isinstance(value, (str, int, float, list, tuple, dict)):
return value
else:
return repr(value)
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)
def keys(self):
return iter([x[0] for x in self._myattrs()])
def values(self):
return iter([x[1] for x in self._myattrs()])
def items(self):
return iter(self._myattrs())
```
Now to make your classes serializable with the regular encoder, extend 'Serializable':
```
class MySerializableClass(Serializable):
attr_1 = 'first attribute'
attr_2 = 23
def my_function(self):
print('do something here')
obj = MySerializableClass()
```
`print(obj)` will print something like:
```
<__main__.MySerializableClass object at 0x1073525e8>
```
`print(json.dumps(obj, indent=4))` will print something like:
```
{
"attr_1": "first attribute",
"attr_2": 23,
"my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
```
|
The problem with overriding `JSONEncoder().default` is that you can do it only once, so you are stuck if you stumble upon a special data type that does not work with that pattern (for example, one that needs a strange encoding). With the pattern below, you can always make your class JSON serializable, provided that the class field you want to serialize is serializable itself (and can be added to a Python list, which excludes barely anything). Otherwise, you have to apply the same pattern recursively to your JSON field (or extract the serializable data from it):
```
import json

# base class that will make all derivatives JSON serializable:
class JSONSerializable(list):  # need to derive from a serializable class.
    def __init__(self, value=None):
        super(JSONSerializable, self).__init__([value])  # store the value inside the list itself
    def setJSONSerializableValue(self, value):
        self[:] = [value]  # replace the stored value in place (rebinding 'self' would have no effect)
    def getJSONSerializableValue(self):
        return self[0] if len(self) else None
# derive your classes from JSONSerializable:
class MyJSONSerializableObject(JSONSerializable):
def __init__(self): # or any other function
# ....
# suppose your__json__field is the class member to be serialized.
# it has to be serializable itself.
# Every time you want to set it, call this function:
self.setJSONSerializableValue(your__json__field)
# ...
# ... and when you need access to it, get this way:
do_something_with_your__json__field(self.getJSONSerializableValue())
# now you have a JSON default-serializable class:
a = MyJSONSerializableObject()
print(json.dumps(a))
```
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
As I said in a comment to your question, after looking at the `json` module's source code, it does not appear to lend itself to doing what you want. However the goal could be achieved by what is known as [*monkey-patching*](https://en.wikipedia.org/wiki/Monkey-patch)
(see question [*What is a monkey patch?*](https://stackoverflow.com/questions/5626193/what-is-a-monkey-patch)).
This could be done in your package's `__init__.py` initialization script and would affect all subsequent `json` module serialization since modules are generally only loaded once and the result is cached in `sys.modules`.
The patch changes the default json encoder's `default` method—the default `default()`.
Here's an example implemented as a standalone module for simplicity's sake:
Module: `make_json_serializable.py`
```
""" Module that monkey-patches json module when it's imported so
JSONEncoder.default() automatically checks for a special "to_json()"
method and uses it to encode the object if found.
"""
from json import JSONEncoder
def _default(self, obj):
return getattr(obj.__class__, "to_json", _default.default)(obj)
_default.default = JSONEncoder.default # Save unmodified default.
JSONEncoder.default = _default # Replace it.
```
Using it is trivial since the patch is applied by simply importing the module.
Sample client script:
```
import json
import make_json_serializable # apply monkey-patch
class Foo(object):
def __init__(self, name):
self.name = name
def to_json(self): # New special method.
""" Convert to JSON format string representation. """
return '{"name": "%s"}' % self.name
foo = Foo('sazpaz')
print(json.dumps(foo)) # -> "{\"name\": \"sazpaz\"}"
```
To retain the object type information, the special method can also include it in the string returned:
```
return ('{"type": "%s", "name": "%s"}' %
(self.__class__.__name__, self.name))
```
Which produces the following JSON that now includes the class name:
```none
"{\"type\": \"Foo\", \"name\": \"sazpaz\"}"
```
***Magick Lies Here***
Even better than having the replacement `default()` look for a specially named method, would be for it to be able to serialize most Python objects *automatically*, including user-defined class instances, without needing to add a special method. After researching a number of alternatives, the following — based on an [answer](https://stackoverflow.com/a/8230373/355230) by @Raymond Hettinger to another question — which uses the `pickle` module, seemed closest to that ideal to me:
Module: `make_json_serializable2.py`
```
""" Module that imports the json module and monkey-patches it so
JSONEncoder.default() automatically pickles any Python objects
encountered that aren't standard JSON data types.
"""
from json import JSONEncoder
import pickle
def _default(self, obj):
return {'_python_object': pickle.dumps(obj)}
JSONEncoder.default = _default # Replace with the above.
```
Of course not everything can be pickled—extension types, for example. However there are ways defined to handle them via the pickle protocol by writing special methods—similar to what you suggested and I described earlier—but doing that would likely be necessary in far fewer cases.
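One of those pickle-protocol hooks is `__reduce__()`; as a purely hypothetical sketch (the class and its attributes are made up for illustration), a wrapper around an unpicklable resource could tell `pickle` how to rebuild it:
```
import pickle

class LogFile(object):
    """Wraps an open file handle, which pickle cannot serialize directly."""
    def __init__(self, path):
        self.path = path
        self._fh = open(path, 'a')  # the unpicklable part

    def __reduce__(self):
        # Rebuild the object from its path instead of trying (and failing)
        # to pickle the open file handle.
        return (LogFile, (self.path,))

round_tripped = pickle.loads(pickle.dumps(LogFile('example.log')))
```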
**Deserializing**
Regardless, using the pickle protocol also means it would be fairly easy to reconstruct the original Python object by providing a custom `object_hook` function argument to any `json.loads()` calls; the hook simply unpickles the value whenever the dictionary passed in contains a `'_python_object'` key. Something like:
```
def as_python_object(dct):
try:
return pickle.loads(str(dct['_python_object']))
except KeyError:
return dct
pyobj = json.loads(json_str, object_hook=as_python_object)
```
If this has to be done in many places, it might be worthwhile to define a wrapper function that automatically supplied the extra keyword argument:
```
json_pkloads = functools.partial(json.loads, object_hook=as_python_object)
pyobj = json_pkloads(json_str)
```
Naturally, this could be monkey-patched into the `json` module as well, making the function the default `object_hook` (instead of `None`).
I got the idea for using `pickle` from an [answer](https://stackoverflow.com/a/8230373/355230) by [Raymond Hettinger](https://stackoverflow.com/users/1001643/raymond-hettinger) to another JSON serialization question, whom I consider exceptionally credible as well as an official source (as in Python core developer).
### Portability to Python 3
The code above does not work as shown in Python 3 because `pickle.dumps()` returns a `bytes` object which the `JSONEncoder` can't handle. However the approach is still valid. A simple way to work around the issue is to "decode" the value returned from `pickle.dumps()` as `latin1` and then "encode" it back to `latin1` before passing it on to `pickle.loads()` in the `as_python_object()` function. This works because arbitrary binary strings are valid `latin1`, which can always be decoded to Unicode and then encoded back to the original string again (as pointed out in [this answer](https://stackoverflow.com/a/22621777/355230) by [Sven Marnach](https://stackoverflow.com/users/279627/sven-marnach)).
(Although the following works fine in Python 2, the `latin1` decoding and encoding it does is superfluous.)
```
from decimal import Decimal
class PythonObjectEncoder(json.JSONEncoder):
def default(self, obj):
return {'_python_object': pickle.dumps(obj).decode('latin1')}
def as_python_object(dct):
try:
return pickle.loads(dct['_python_object'].encode('latin1'))
except KeyError:
return dct
class Foo(object): # Some user-defined class.
def __init__(self, name):
self.name = name
def __eq__(self, other):
if type(other) is type(self): # Instances of same class?
return self.name == other.name
return NotImplemented
__hash__ = None
data = [1,2,3, set(['knights', 'who', 'say', 'ni']), {'key':'value'},
Foo('Bar'), Decimal('3.141592653589793238462643383279502884197169')]
j = json.dumps(data, cls=PythonObjectEncoder, indent=4)
data2 = json.loads(j, object_hook=as_python_object)
assert data == data2 # both should be same
```
|
You can extend the dict class like so:
```
#!/usr/local/bin/python3
import json
class Serializable(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# hack to fix _json.so make_encoder serialize properly
self.__setitem__('dummy', 1)
def _myattrs(self):
return [
(x, self._repr(getattr(self, x)))
for x in self.__dir__()
if x not in Serializable().__dir__()
]
def _repr(self, value):
if isinstance(value, (str, int, float, list, tuple, dict)):
return value
else:
return repr(value)
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)
def keys(self):
return iter([x[0] for x in self._myattrs()])
def values(self):
return iter([x[1] for x in self._myattrs()])
def items(self):
return iter(self._myattrs())
```
Now to make your classes serializable with the regular encoder, extend 'Serializable':
```
class MySerializableClass(Serializable):
attr_1 = 'first attribute'
attr_2 = 23
def my_function(self):
print('do something here')
obj = MySerializableClass()
```
`print(obj)` will print something like:
```
<__main__.MySerializableClass object at 0x1073525e8>
```
`print(json.dumps(obj, indent=4))` will print something like:
```
{
"attr_1": "first attribute",
"attr_2": 23,
"my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
```
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
You can extend the dict class like so:
```
#!/usr/local/bin/python3
import json
class Serializable(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# hack to fix _json.so make_encoder serialize properly
self.__setitem__('dummy', 1)
def _myattrs(self):
return [
(x, self._repr(getattr(self, x)))
for x in self.__dir__()
if x not in Serializable().__dir__()
]
def _repr(self, value):
if isinstance(value, (str, int, float, list, tuple, dict)):
return value
else:
return repr(value)
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)
def keys(self):
return iter([x[0] for x in self._myattrs()])
def values(self):
return iter([x[1] for x in self._myattrs()])
def items(self):
return iter(self._myattrs())
```
Now to make your classes serializable with the regular encoder, extend 'Serializable':
```
class MySerializableClass(Serializable):
attr_1 = 'first attribute'
attr_2 = 23
def my_function(self):
print('do something here')
obj = MySerializableClass()
```
`print(obj)` will print something like:
```
<__main__.MySerializableClass object at 0x1073525e8>
```
`print(json.dumps(obj, indent=4))` will print something like:
```
{
"attr_1": "first attribute",
"attr_2": 23,
"my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
```
|
For a production environment, prepare your own module wrapping `json` with your own custom encoder, to make it clear that you are overriding something.
Monkey-patching is not recommended, but you can monkey-patch in your test environment.
For example,
```
class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (datetime.date, datetime.datetime)):
return obj.date().isoformat()
elif isinstance(obj, basestring):
try:
number = phonenumbers.parse(obj)
except phonenumbers.NumberParseException:
return json.JSONEncoder.default(self, obj)
else:
return phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.NATIONAL)
else:
return json.JSONEncoder.default(self, obj)
```
You want:
`payload = json.dumps(your_data, cls=JSONDatetimeAndPhonesEncoder)`
or:
`payload = your_dumps(your_data)`
or:
`payload = your_json.dumps(your_data)`
However, in the testing environment, go ahead:
```
@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
json._default_encoder = JSONDatetimeAndPhonesEncoder()
```
which will apply your encoder to all `json.dumps` occurrences.
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
You can extend the dict class like so:
```
#!/usr/local/bin/python3
import json
class Serializable(dict):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# hack to fix _json.so make_encoder serialize properly
self.__setitem__('dummy', 1)
def _myattrs(self):
return [
(x, self._repr(getattr(self, x)))
for x in self.__dir__()
if x not in Serializable().__dir__()
]
def _repr(self, value):
if isinstance(value, (str, int, float, list, tuple, dict)):
return value
else:
return repr(value)
def __repr__(self):
return '<%s.%s object at %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self))
)
def keys(self):
return iter([x[0] for x in self._myattrs()])
def values(self):
return iter([x[1] for x in self._myattrs()])
def items(self):
return iter(self._myattrs())
```
Now to make your classes serializable with the regular encoder, extend 'Serializable':
```
class MySerializableClass(Serializable):
attr_1 = 'first attribute'
attr_2 = 23
def my_function(self):
print('do something here')
obj = MySerializableClass()
```
`print(obj)` will print something like:
```
<__main__.MySerializableClass object at 0x1073525e8>
```
`print(json.dumps(obj, indent=4))` will print something like:
```
{
"attr_1": "first attribute",
"attr_2": 23,
"my_function": "<bound method MySerializableClass.my_function of <__main__.MySerializableClass object at 0x1073525e8>>"
}
```
|
I don't understand why you can't write a `serialize` function for your own class. You implement the custom encoding inside the class itself and allow "people" to call the serialize function, which will essentially return `self.__dict__` with functions stripped out.
edit:
[This question](https://stackoverflow.com/questions/3768895/python-how-to-make-a-class-json-serializable) agrees with me that the simplest way is to write your own method and return the JSON-serialized data you want. They also recommend trying jsonpickle, but then you're adding an additional dependency for convenience when a perfectly workable solution comes built in.
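For what it's worth, a minimal sketch of that idea (class and method names chosen only for illustration) could be:
```
import json

class Foo(object):
    def __init__(self, name, count):
        self.name = name
        self.count = count

    def serialize(self):
        # vars(self) is the instance __dict__: plain data attributes only,
        # so methods are already "stripped out" for a simple class like this.
        return json.dumps(vars(self))

print(Foo('bar', 3).serialize())  # -> {"name": "bar", "count": 3}
```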
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
I suggest putting the hack into the class definition. This way, once the class is defined, it supports JSON. Example:
```
import json
class MyClass( object ):
def _jsonSupport( *args ):
def default( self, xObject ):
return { 'type': 'MyClass', 'name': xObject.name() }
def objectHook( obj ):
if 'type' not in obj:
return obj
if obj[ 'type' ] != 'MyClass':
return obj
return MyClass( obj[ 'name' ] )
json.JSONEncoder.default = default
json._default_decoder = json.JSONDecoder( object_hook = objectHook )
_jsonSupport()
def __init__( self, name ):
self._name = name
def name( self ):
return self._name
def __repr__( self ):
return '<MyClass(name=%s)>' % self._name
myObject = MyClass( 'Magneto' )
jsonString = json.dumps( [ myObject, 'some', { 'other': 'objects' } ] )
print "json representation:", jsonString
decoded = json.loads( jsonString )
print "after decoding, our object is the first in the list", decoded[ 0 ]
```
|
I don't understand why you can't write a `serialize` function for your own class. You implement the custom encoding inside the class itself and allow "people" to call the serialize function, which will essentially return `self.__dict__` with functions stripped out.
edit:
[This question](https://stackoverflow.com/questions/3768895/python-how-to-make-a-class-json-serializable) agrees with me that the simplest way is to write your own method and return the JSON-serialized data you want. They also recommend trying jsonpickle, but then you're adding an additional dependency for convenience when a perfectly workable solution comes built in.
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
The problem with overriding `JSONEncoder().default` is that you can do it only once, so you are stuck if you stumble upon a special data type that does not work with that pattern (for example, one that needs a strange encoding). With the pattern below, you can always make your class JSON serializable, provided that the class field you want to serialize is serializable itself (and can be added to a Python list, which excludes barely anything). Otherwise, you have to apply the same pattern recursively to your JSON field (or extract the serializable data from it):
```
import json

# base class that will make all derivatives JSON serializable:
class JSONSerializable(list):  # need to derive from a serializable class.
    def __init__(self, value=None):
        super(JSONSerializable, self).__init__([value])  # store the value inside the list itself
    def setJSONSerializableValue(self, value):
        self[:] = [value]  # replace the stored value in place (rebinding 'self' would have no effect)
    def getJSONSerializableValue(self):
        return self[0] if len(self) else None
# derive your classes from JSONSerializable:
class MyJSONSerializableObject(JSONSerializable):
def __init__(self): # or any other function
# ....
# suppose your__json__field is the class member to be serialized.
# it has to be serializable itself.
# Every time you want to set it, call this function:
self.setJSONSerializableValue(your__json__field)
# ...
# ... and when you need access to it, get this way:
do_something_with_your__json__field(self.getJSONSerializableValue())
# now you have a JSON default-serializable class:
a = MyJSONSerializableObject()
print(json.dumps(a))
```
|
For a production environment, prepare your own module wrapping `json` with your own custom encoder, to make it clear that you are overriding something.
Monkey-patching is not recommended, but you can monkey-patch in your test environment.
For example,
```
class JSONDatetimeAndPhonesEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (datetime.date, datetime.datetime)):
return obj.date().isoformat()
elif isinstance(obj, basestring):
try:
number = phonenumbers.parse(obj)
except phonenumbers.NumberParseException:
return json.JSONEncoder.default(self, obj)
else:
return phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.NATIONAL)
else:
return json.JSONEncoder.default(self, obj)
```
You want:
`payload = json.dumps(your_data, cls=JSONDatetimeAndPhonesEncoder)`
or:
`payload = your_dumps(your_data)`
or:
`payload = your_json.dumps(your_data)`
However, in the testing environment, go ahead:
```
@pytest.fixture(scope='session', autouse=True)
def testenv_monkey_patching():
json._default_encoder = JSONDatetimeAndPhonesEncoder()
```
which will apply your encoder to all `json.dumps` occurrences.
|
18,478,287
|
The regular way of JSON-serializing custom non-serializable objects is to subclass `json.JSONEncoder` and then pass a custom encoder to `json.dumps()`.
It usually looks like this:
```
class CustomEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Foo):
return obj.to_json()
return json.JSONEncoder.default(self, obj)
print(json.dumps(obj, cls=CustomEncoder))
```
What I'm trying to do, is to make something serializable with the default encoder. I looked around but couldn't find anything.
My thought is that there would be some field in which the encoder looks at to determine the json encoding. Something similar to `__str__`. Perhaps a `__json__` field.
Is there something like this in python?
I want to make one class of a module I'm making to be JSON serializable to everyone that uses the package without them worrying about implementing their own [trivial] custom encoders.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18478287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
As I said in a comment to your question, after looking at the `json` module's source code, it does not appear to lend itself to doing what you want. However the goal could be achieved by what is known as [*monkey-patching*](https://en.wikipedia.org/wiki/Monkey-patch)
(see question [*What is a monkey patch?*](https://stackoverflow.com/questions/5626193/what-is-a-monkey-patch)).
This could be done in your package's `__init__.py` initialization script and would affect all subsequent `json` module serialization since modules are generally only loaded once and the result is cached in `sys.modules`.
The patch changes the default json encoder's `default` method—the default `default()`.
Here's an example implemented as a standalone module for simplicity's sake:
Module: `make_json_serializable.py`
```
""" Module that monkey-patches json module when it's imported so
JSONEncoder.default() automatically checks for a special "to_json()"
method and uses it to encode the object if found.
"""
from json import JSONEncoder
def _default(self, obj):
return getattr(obj.__class__, "to_json", _default.default)(obj)
_default.default = JSONEncoder.default # Save unmodified default.
JSONEncoder.default = _default # Replace it.
```
Using it is trivial since the patch is applied by simply importing the module.
Sample client script:
```
import json
import make_json_serializable # apply monkey-patch
class Foo(object):
def __init__(self, name):
self.name = name
def to_json(self): # New special method.
""" Convert to JSON format string representation. """
return '{"name": "%s"}' % self.name
foo = Foo('sazpaz')
print(json.dumps(foo)) # -> "{\"name\": \"sazpaz\"}"
```
To retain the object type information, the special method can also include it in the string returned:
```
return ('{"type": "%s", "name": "%s"}' %
(self.__class__.__name__, self.name))
```
Which produces the following JSON that now includes the class name:
```none
"{\"type\": \"Foo\", \"name\": \"sazpaz\"}"
```
***Magick Lies Here***
Even better than having the replacement `default()` look for a specially named method, would be for it to be able to serialize most Python objects *automatically*, including user-defined class instances, without needing to add a special method. After researching a number of alternatives, the following — based on an [answer](https://stackoverflow.com/a/8230373/355230) by @Raymond Hettinger to another question — which uses the `pickle` module, seemed closest to that ideal to me:
Module: `make_json_serializable2.py`
```
""" Module that imports the json module and monkey-patches it so
JSONEncoder.default() automatically pickles any Python objects
encountered that aren't standard JSON data types.
"""
from json import JSONEncoder
import pickle
def _default(self, obj):
return {'_python_object': pickle.dumps(obj)}
JSONEncoder.default = _default # Replace with the above.
```
Of course not everything can be pickled—extension types, for example. However there are ways defined to handle them via the pickle protocol by writing special methods—similar to what you suggested and I described earlier—but doing that would likely be necessary in far fewer cases.
**Deserializing**
Regardless, using the pickle protocol also means it would be fairly easy to reconstruct the original Python object by providing a custom `object_hook` function argument to any `json.loads()` calls; the hook simply unpickles the value whenever the dictionary passed in contains a `'_python_object'` key. Something like:
```
def as_python_object(dct):
try:
return pickle.loads(str(dct['_python_object']))
except KeyError:
return dct
pyobj = json.loads(json_str, object_hook=as_python_object)
```
If this has to be done in many places, it might be worthwhile to define a wrapper function that automatically supplied the extra keyword argument:
```
json_pkloads = functools.partial(json.loads, object_hook=as_python_object)
pyobj = json_pkloads(json_str)
```
Naturally, this could be monkey-patched into the `json` module as well, making the function the default `object_hook` (instead of `None`).
I got the idea for using `pickle` from an [answer](https://stackoverflow.com/a/8230373/355230) by [Raymond Hettinger](https://stackoverflow.com/users/1001643/raymond-hettinger) to another JSON serialization question, whom I consider exceptionally credible as well as an official source (as in Python core developer).
### Portability to Python 3
The code above does not work as shown in Python 3 because `pickle.dumps()` returns a `bytes` object which the `JSONEncoder` can't handle. However the approach is still valid. A simple way to work around the issue is to "decode" the value returned from `pickle.dumps()` as `latin1` and then "encode" it back to `latin1` before passing it on to `pickle.loads()` in the `as_python_object()` function. This works because arbitrary binary strings are valid `latin1`, which can always be decoded to Unicode and then encoded back to the original string again (as pointed out in [this answer](https://stackoverflow.com/a/22621777/355230) by [Sven Marnach](https://stackoverflow.com/users/279627/sven-marnach)).
(Although the following works fine in Python 2, the `latin1` decoding and encoding it does is superfluous.)
```
from decimal import Decimal
class PythonObjectEncoder(json.JSONEncoder):
def default(self, obj):
return {'_python_object': pickle.dumps(obj).decode('latin1')}
def as_python_object(dct):
try:
return pickle.loads(dct['_python_object'].encode('latin1'))
except KeyError:
return dct
class Foo(object): # Some user-defined class.
def __init__(self, name):
self.name = name
def __eq__(self, other):
if type(other) is type(self): # Instances of same class?
return self.name == other.name
return NotImplemented
__hash__ = None
data = [1,2,3, set(['knights', 'who', 'say', 'ni']), {'key':'value'},
Foo('Bar'), Decimal('3.141592653589793238462643383279502884197169')]
j = json.dumps(data, cls=PythonObjectEncoder, indent=4)
data2 = json.loads(j, object_hook=as_python_object)
assert data == data2 # both should be same
```
|
The problem with overriding `JSONEncoder().default` is that you can do it only once, so you are stuck if you stumble upon a special data type that does not work with that pattern (for example, one that needs a strange encoding). With the pattern below, you can always make your class JSON serializable, provided that the class field you want to serialize is serializable itself (and can be added to a Python list, which excludes barely anything). Otherwise, you have to apply the same pattern recursively to your JSON field (or extract the serializable data from it):
```
import json

# base class that will make all derivatives JSON serializable:
class JSONSerializable(list):  # need to derive from a serializable class.
    def __init__(self, value=None):
        super(JSONSerializable, self).__init__([value])  # store the value inside the list itself
    def setJSONSerializableValue(self, value):
        self[:] = [value]  # replace the stored value in place (rebinding 'self' would have no effect)
    def getJSONSerializableValue(self):
        return self[0] if len(self) else None
# derive your classes from JSONSerializable:
class MyJSONSerializableObject(JSONSerializable):
def __init__(self): # or any other function
# ....
# suppose your__json__field is the class member to be serialized.
# it has to be serializable itself.
# Every time you want to set it, call this function:
self.setJSONSerializableValue(your__json__field)
# ...
# ... and when you need access to it, get this way:
do_something_with_your__json__field(self.getJSONSerializableValue())
# now you have a JSON default-serializable class:
a = MyJSONSerializableObject()
print(json.dumps(a))
```
|
55,200,708
|
I am getting started on using Zappa. However, I already had installed python 3.7 on my computer while Zappa uses 3.6. I installed python 3.6.8, but when I try to use zappa in the cmd (zappa init) it uses python 3.7 by default. How can I direct zappa to use 3.6 instead?
|
2019/03/16
|
[
"https://Stackoverflow.com/questions/55200708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11116762/"
] |
As mentioned in Zappa [README](https://github.com/Miserlou/Zappa#installation-and-configuration):
>
> Please note that Zappa must be installed into your project's virtual environment.
>
>
>
You should use something like `virtualenv` to create a virtual environment, which makes it easy to switch Python version.
If you use virtualenv, you can create an environment with:
```
$ virtualenv -p /usr/bin/python3.6 venv
$ source venv/bin/activate
```
Then `pip install zappa` in this virtual environment.
|
I don't know about Zappa, but if you want to use a specific version of Python you can do:
```
python3.6 my_program.py
```
and if you want the *python* command to use a specific version permanently, on **Linux** modify the file */home/[user\_name]/.bashrc* and add the following line:
```
alias python=python3.6
```
|
55,200,708
|
I am getting started on using Zappa. However, I already had installed python 3.7 on my computer while Zappa uses 3.6. I installed python 3.6.8, but when I try to use zappa in the cmd (zappa init) it uses python 3.7 by default. How can I direct zappa to use 3.6 instead?
|
2019/03/16
|
[
"https://Stackoverflow.com/questions/55200708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11116762/"
] |
As mentioned in Zappa [README](https://github.com/Miserlou/Zappa#installation-and-configuration):
>
> Please note that Zappa must be installed into your project's virtual environment.
>
>
>
You should use something like `virtualenv` to create a virtual environment, which makes it easy to switch Python version.
If you use virtualenv, you can create an environment with:
```
$ virtualenv -p /usr/bin/python3.6 venv
$ source venv/bin/activate
```
Then `pip install zappa` in this virtual environment.
|
You can use `virtualenv` to setup an environment using a specific Python version by:
```
% pip install virtualenv
% virtualenv -p python3.6 .venv
```
You can also use an absolute path to the Python executable if it's named the same but located in different folders.
Then switch to use the environment:
```
% source .venv/bin/activate
```
This environment uses Python 3.6, so install Zappa with pip like normal and you're good to go.
You can read more about usage of virtualenv [here](https://help.dreamhost.com/hc/en-us/articles/115000695551-Installing-and-using-virtualenv-with-Python-3).
|
69,654,700
|
So I'm trying to achieve something like this
```
from enum import Enum
tabulate_formats = ['fancy_grid', 'fancy_outline', 'github', 'grid']
class TableFormat(str, Enum):
for item in tabulate_formats:
exec(f"{item} = '{item}'")
```
Though i get this error
```
Traceback (most recent call last):
File "/app/src/main.py", line 25, in <module>
class TableFormat(str, Enum):
File "/app/src/main.py", line 26, in TableFormat
for item in tabulate_formats:
File "/usr/local/lib/python3.6/enum.py", line 92, in __setitem__
raise TypeError('Attempted to reuse key: %r' % key)
TypeError: Attempted to reuse key: 'item'
```
How do I properly assign them into the class
|
2021/10/21
|
[
"https://Stackoverflow.com/questions/69654700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3709060/"
] |
What you want to do doesn't involve editing an array, only editing the property of that array. [Array.prototype.push](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push) and [Array.prototype.splice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice) are used for adding and removing elements to and from an array, and aren't what you want to use here. Instead, you just want to set the property on the object at the first index of your array, which you can do like this:
```
this.houseList[1].parking = this.ParkingIsTrue;
```
Of course, you might need to tweak that slightly depending on how you need to determine which element in the array needs to be edited, and what its value needs to be. But I hope that example is at least a useful guide to the syntax that you will need to use.
---
The reason why you were getting the error "ERROR TypeError: this.houseList[i].push is not a function" is that `this.houseList[i]` resolves to an object like `{houseCode: '5678', street: 'Pike Street'}`. This value is not an array, and doesn't have a `push` method, so `this.houseList[i].push` resolves to `undefined`.
When you then tried to call that like a function, you get an error complaining that it's not a function because, well, it wasn't a function. It was `undefined`.
|
You can append the `parking` property to the object as below:
```js
this.houseList.splice(i, 0, {
...this.houseList[i],
parking: this.ParkingIsTrue,
});
```
[Sample Solution on StackBlitz](https://stackblitz.com/edit/angular-ivy-zggwz7?file=src/app/app.component.html)
---
References
----------
[JavaScript Ellipsis: Three dots ( … ) in JavaScript](https://dev.to/sagar/three-dots---in-javascript-26ci)
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
If you're expecting a socket to stay open for minutes at a time, you're in for a world of hurt. That might work on Wi-Fi, but on cellular, there's a high probability of the connection glitching because of tower switching or some other random event outside your control. When that happens, the connection drops, and there's really nothing your app can do about it.
This really needs to be fixed by changing the way the client requests data so that the responses can be more asynchronous. Specifically:
* Make your request.
* On the server side, immediately provide the client with a unique identifier for that request and close the connection.
* Next, on the client side, periodically ask the server for its status.
+ If the connection times out, ask again.
+ If the server says that the results are not ready, wait a few seconds and ask again.
* On the server side, when processing is completed, store the results along with the identifier in a persistent fashion (e.g. in a file or database)
* When the client requests the results for that identifier, return the results if they are ready, or return a "not ready" error of some sort.
* Have a periodic cron job or similar on the server side to clean up old data that has not yet been collected.
With that model, it doesn't matter if the connection to the server closes, because a subsequent request will get the data successfully.
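To make the flow concrete, here is a very rough server-side sketch of the submit-then-poll idea in Python (every name is hypothetical, and the in-memory dict stands in for the persistent file or database mentioned above):
```
import threading
import time
import uuid

RESULTS = {}  # stands in for the persistent store (file or database)

def _process(job_id, request_data):
    """Placeholder for the long-running work."""
    time.sleep(5)  # pretend this takes minutes
    RESULTS[job_id] = {'echo': request_data}  # store the finished result

def submit(request_data):
    """Accept the work and hand back an identifier immediately."""
    job_id = str(uuid.uuid4())
    RESULTS[job_id] = None  # marked "not ready"
    threading.Thread(target=_process, args=(job_id, request_data)).start()
    return job_id

def poll(job_id):
    """Called repeatedly by the client until the result is ready."""
    result = RESULTS.get(job_id)
    return {'ready': result is not None, 'result': result}
```
The client simply calls the equivalent of `submit()` once, remembers the identifier, and then calls `poll()` on a timer until `ready` is true.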
|
I faced this issue and spent more than a week trying to fix it, and I finally solved it by changing the Wi-Fi connection.
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
If you're expecting a socket to stay open for minutes at a time, you're in for a world of hurt. That might work on Wi-Fi, but on cellular, there's a high probability of the connection glitching because of tower switching or some other random event outside your control. When that happens, the connection drops, and there's really nothing your app can do about it.
This really needs to be fixed by changing the way the client requests data so that the responses can be more asynchronous. Specifically:
* Make your request.
* On the server side, immediately provide the client with a unique identifier for that request and close the connection.
* Next, on the client side, periodically ask the server for its status.
+ If the connection times out, ask again.
+ If the server says that the results are not ready, wait a few seconds and ask again.
* On the server side, when processing is completed, store the results along with the identifier in a persistent fashion (e.g. in a file or database)
* When the client requests the results for that identifier, return the results if they are ready, or return a "not ready" error of some sort.
* Have a periodic cron job or similar on the server side to clean up old data that has not yet been collected.
With that model, it doesn't matter if the connection to the server closes, because a subsequent request will get the data successfully.
|
I faced the same issue and am attaching a screenshot of the resolution to show how I resolved it.
In my case, the API requests were blocked by the server's Sucuri/CloudProxy (in other words, a firewall service). Disabling the firewall resolved the issue.
[](https://i.stack.imgur.com/GuupV.png)
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
If you're expecting a socket to stay open for minutes at a time, you're in for a world of hurt. That might work on Wi-Fi, but on cellular, there's a high probability of the connection glitching because of tower switching or some other random event outside your control. When that happens, the connection drops, and there's really nothing your app can do about it.
This really needs to be fixed by changing the way the client requests data so that the responses can be more asynchronous. Specifically:
* Make your request.
* On the server side, immediately provide the client with a unique identifier for that request and close the connection.
* Next, on the client side, periodically ask the server for its status.
+ If the connection times out, ask again.
+ If the server says that the results are not ready, wait a few seconds and ask again.
* On the server side, when processing is completed, store the results along with the identifier in a persistent fashion (e.g. in a file or database)
* When the client requests the results for that identifier, return the results if they are ready, or return a "not ready" error of some sort.
* Have a periodic cron job or similar on the server side to clean up old data that has not yet been collected.
With that model, it doesn't matter if the connection to the server closes, because a subsequent request will get the data successfully.
|
I don't know why, but it works when I add a sleep before my request:
```
sleep(10000)
AF.request(ViewController.URL_SYSTEM+"/rest,get_profile", method: .post, parameters: params, encoding: JSONEncoding.default , headers: headers).responseJSON { (response) in
}
```
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
I faced the same issue and am attaching a screenshot of the resolution to show how I resolved it.
In my case, the API requests were blocked by the server's Sucuri/CloudProxy (in other words, a firewall service). Disabling the firewall resolved the issue.
[](https://i.stack.imgur.com/GuupV.png)
|
I faced this issue and spent more than a week trying to fix it, and I finally solved it by changing the Wi-Fi connection.
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
I don't know why, but it works when I add a sleep before my request:
```
sleep(10000)
AF.request(ViewController.URL_SYSTEM+"/rest,get_profile", method: .post, parameters: params, encoding: JSONEncoding.default , headers: headers).responseJSON { (response) in
}
```
|
I faced this issue and spent more than a week trying to fix it, and I finally solved it by changing the Wi-Fi connection.
|
39,517,921
|
So I'm using tkinter python and I have an entry widget with Name text in it. I want to delete the text only when the widget is clicked on. This is what I have so far:
```
#Import tkinter to make gui
from tkinter import *
from tkinter import ttk#Sets title and creates gui
root = Tk()
root.title("Identity Form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
name=StringVar()
age=StringVar()
gender=StringVar()
#Widgets to put in items, quantity, and shipping days
name_entry = ttk.Entry(mainframe, width=20, textvariable=name)
name_entry.grid(column=2, row=2, sticky=(W, E))
age_entry2 = ttk.Entry(mainframe, width=20, textvariable=age)
age_entry2.grid(column=2, row=3, sticky=(W, E))
male = ttk.Radiobutton(mainframe, text='Male', variable=gender, value='male')
female = ttk.Radiobutton(mainframe, text='Female', variable=gender, value='female')
other = ttk.Radiobutton(mainframe, text='Other', variable=gender, value='other')
name_entry.insert(0,'Name')
```
This creates the entry widget and has the text "Name" inside it. When it is clicked upon, I want the text to disappear.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39517921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6570334/"
] |
I faced the same issue and am attaching a screenshot of the resolution to show how I resolved it.
In my case, the API requests were blocked by the server's Sucuri/CloudProxy (in other words, a firewall service). Disabling the firewall resolved the issue.
[](https://i.stack.imgur.com/GuupV.png)
|
I don't know why, but it works when I add a sleep before my request:
```
sleep(10000)
AF.request(ViewController.URL_SYSTEM+"/rest,get_profile", method: .post, parameters: params, encoding: JSONEncoding.default , headers: headers).responseJSON { (response) in
}
```
|
40,009,358
|
I'm a first-year CS student and I've been struggling over the past few days with this lab task for Python (2.7):
---
Write a Python script named higher-lower.py which:
first reads exactly one integer from standard input (10, in the example below),
then reads exactly five more integers from standard input and
for each of those five values outputs the message higher, lower or equal depending upon whether the new value is higher than, lower than or equal to the previous value.
(Must have good use of a while loop)
(Must not use a list)
Example standard input:
```
10
20
10
8
8
12
```
Example standard output:
```
higher
lower
lower
equal
higher
```
(1 string per line)
---
I have a working solution, but when I upload it for checking I am told it's incorrect. This is my solution:
```
prev = input()
output = ""
s = 1
while s <= 5:
    curr = input()
    if prev < curr:
        output = output + "higher\n"
    elif curr < prev:
        output = output + "lower\n"
    else:
        output = output + "equal\n"
    s = s + 1
    prev = curr
print output
```
I think it's marked incorrect because the higher/lower/equal results are printed as one string spanning five lines, whereas the task wants each higher/lower/equal printed as an individual string on its own line.
Could anyone give me any hints in the right direction? I searched stackoverflow as well as google for a similar problem and couldn't find anything related to this. Any help would be appreciated!
|
2016/10/12
|
[
"https://Stackoverflow.com/questions/40009358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7010470/"
] |
Given your description, I suspect that the validation program wants to see a single result after each additional input. Have you tried that?
```
while s <= 5:
    curr = input()
    if prev < curr:
        print "higher"
    elif curr < prev:
        print "lower"
    else:
        print "equal"
    s = s + 1
    prev = curr
```
|
```
magic_number = 3
# Your code here...
while True:
    guess = int(input("Guess my number: "))
    if guess == magic_number:
        print "Correct!"
        break
    elif guess > magic_number:
        print "Too high!"
    else:
        print "Too low!"
print "Great job guessing my number!"
```
|
53,881,731
|
How can I define an XPath expression in Python (Scrapy) that accepts any number at the place indicated in the code? I have already tried putting an `*` or `any()` at that position.
```
table = response.xpath('//*[@id="olnof_**here I want to accept any value**_altlinesodd"]/tr[1]/TD[1]/A[1]')
```
|
2018/12/21
|
[
"https://Stackoverflow.com/questions/53881731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10819357/"
] |
You can do this using [regular expressions](https://doc.scrapy.org/en/latest/topics/selectors.html#regular-expressions):
```
table = response.xpath('//*[re:test(@id, "^olnof_.+_altlinesodd$")]/tr[1]/TD[1]/A[1]')
```
|
You can try below workaround:
```
'//*[starts-with(@id, "olnof_") and contains(@id, "_altlinesodd")]/tr[1]/TD[1]/A[1]'
```
`ends-with(@id, "_altlinesodd")` suits better in this case, but Scrapy doesn't support the `ends-with` syntax, so `contains` is used instead
|
53,881,731
|
How can I define an XPath expression in Python (Scrapy) that accepts any number at the place indicated in the code? I have already tried putting an `*` or `any()` at that position.
```
table = response.xpath('//*[@id="olnof_**here I want to accept any value**_altlinesodd"]/tr[1]/TD[1]/A[1]')
```
|
2018/12/21
|
[
"https://Stackoverflow.com/questions/53881731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10819357/"
] |
Assuming "any" really means any value, you can try this:
`x_path = '//*[@id="olnof_{}_altlinesodd"]/tr[1]/TD[1]/A[1]'`
`x_path = x_path.format("put your value here, e.g. from a rand function or extracted from somewhere else")`
Then,
`table = response.xpath(x_path)` will do the work.
|
You can try below workaround:
```
'//*[starts-with(@id, "olnof_") and contains(@id, "_altlinesodd")]/tr[1]/TD[1]/A[1]'
```
`ends-with(@id, "_altlinesodd")` suits better in this case, but Scrapy doesn't support the `ends-with` syntax, so `contains` is used instead
|
53,881,731
|
How can I define an XPath expression in Python (Scrapy) that accepts any number at the place indicated in the code? I have already tried putting an `*` or `any()` at that position.
```
table = response.xpath('//*[@id="olnof_**here I want to accept any value**_altlinesodd"]/tr[1]/TD[1]/A[1]')
```
|
2018/12/21
|
[
"https://Stackoverflow.com/questions/53881731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10819357/"
] |
You can do this using [regular expressions](https://doc.scrapy.org/en/latest/topics/selectors.html#regular-expressions):
```
table = response.xpath('//*[re:test(@id, "^olnof_.+_altlinesodd$")]/tr[1]/TD[1]/A[1]')
```
|
Assuming "any" really means any value, you can try this:
`x_path = '//*[@id="olnof_{}_altlinesodd"]/tr[1]/TD[1]/A[1]'`
`x_path = x_path.format("put your value here, e.g. from a rand function or extracted from somewhere else")`
Then,
`table = response.xpath(x_path)` will do the work.
|
22,291,337
|
I know this one has been covered before, and perhaps isn't the most Pythonic way of constructing a class, but I have a lot of different Maya node classes with a lot of @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/maintenance.
I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes the value is get/set on an outside object.
I have seen examples of re-implementing \_\_setattr\_\_ on Stack Overflow, but I seem to be missing something.
I don't think I am maintaining the default functionality of \_\_setattr\_\_.
Here is an example:
```
externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0}
attrKeys = ['translateX','translateY','translateZ']
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
myInstance = Transform()
myInstance.translateX
# Result: 1.0 #
myInstance.translateX = 9999
myInstance.translateX
# Result: 9999 #
myInstance.name = 'myName'
myInstance.name
# AttributeError: No attribute named [name] #
```
!
|
2014/03/10
|
[
"https://Stackoverflow.com/questions/22291337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3395409/"
] |
This worked for me:
```
class Transform(object):
    def __getattribute__(self, name):
        if name in attrKeys:
            return externalData[name]
        return super(Transform, self).__getattribute__(name)

    def __setattr__(self, name, value):
        if name in attrKeys:
            externalData[name] = value
        else:
            super(Transform, self).__setattr__(name, value)
```
However, I'm not sure this is a good route to go.
If the external operations are time consuming (say, you're using this to disguise access to a database or a config file) you may give users of the code the wrong impression about the cost. In a case like that you should use a method so users understand that they are initiating an action, not just looking at data.
OTOH if the access is quick, be careful that the encapsulation of your classes isn't broken. If you're doing this to get at maya scene data (pymel-style, or as in [this example](http://techartsurvival.blogspot.com/2014/02/rescuing-maya-gui-from-itself.html)) it's not a big deal since the time costs and stability of the data are more or less guaranteed. However you'd want to avoid the scenario in the example code you posted: it would be very easy to assume that having set 'translateX' to a given value it would stay put, where in fact there are lots of ways that the contents of the outside variables could get messed with, preventing you from being able to know your invariants while using the class. If the class is intended for throwaway use (say, it's syntax sugar for a lot of fast repetitive processing inside a loop where no other operations are running) you could get away with it - but if not, internalize the data to your instances.
One last issue: If you have 'a lot of classes' you will also have to do a lot of boilerplate to make this work. If you are trying to wrap Maya scene data, read up on descriptors ([here's a great 5-minute video](http://nedbatchelder.com/blog/201306/explaining_descriptors.html)). You can wrap typical transform properties, for example, like this:
```
import maya.cmds as cmds

class MayaProperty(object):
    '''
    in a real implementation you'd want to support different value types,
    etc by storing flags appropriate to different commands....
    '''
    def __init__(self, cmd, flag):
        self.Command = cmd
        self.Flag = flag

    def __get__(self, obj, objtype):
        return self.Command(obj, **{'q':True, self.Flag:True} )

    def __set__(self, obj, value):
        self.Command(obj, **{ self.Flag:value})

class XformWrapper(object):
    def __init__(self, obj):
        self.Object = obj

    def __repr__(self):
        return self.Object # so that the command will work on the string name of the object

    translation = MayaProperty(cmds.xform, 'translation')
    rotation = MayaProperty(cmds.xform, 'rotation')
    scale = MayaProperty(cmds.xform, 'scale')
```
In real code you'd need error handling and cleaner configuration but you see the idea.
The example linked above talks about using metaclasses to populate classes when you have lots of property descriptors to configure, that is a good route to go if you don't want to worry about all the boilerplate (though it does have a minor startup time penalty - I think that's one of the reasons for the notorious Pymel startup crawl...)
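For illustration, a rough sketch of that metaclass idea, reusing the `MayaProperty` descriptor above (the flag mapping here is a hypothetical example, not a complete one):
```
class MayaPropertyMeta(type):
    # hypothetical attribute name -> xform flag mapping; extend as needed
    FLAGS = {'translation': 'translation', 'rotation': 'rotation', 'scale': 'scale'}

    def __new__(mcs, name, bases, namespace):
        cls = super(MayaPropertyMeta, mcs).__new__(mcs, name, bases, namespace)
        # attach one MayaProperty descriptor per configured flag
        for attr, flag in mcs.FLAGS.items():
            setattr(cls, attr, MayaProperty(cmds.xform, flag))
        return cls

class AutoXformWrapper(object):
    __metaclass__ = MayaPropertyMeta  # Python 2 metaclass syntax, as in the rest of the post

    def __init__(self, obj):
        self.Object = obj

    def __repr__(self):
        return self.Object
```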
|
Why not also do the same thing in `__getattribute__`?
```
def __getattribute__(self, name):
    print 'Getting --->', name
    if name in attrKeys:
        return externalData[name]
    else:
        # raise AttributeError("No attribute named [%s]" %name)
        return super(Transform, self).__getattribute__(name)
```
Test code
```
myInstance = Transform()
myInstance.translateX
print(externalData['translateX'])
myInstance.translateX = 9999
myInstance.translateX
print(externalData['translateX'])
myInstance.name = 'myName'
print myInstance.name
print myInstance.__dict__['name']
```
Output:
```
Getting ---> translateX
1.0
Setting ---> translateX
Getting ---> translateX
9999
Setting ---> name
Getting ---> name
myName
Getting ---> __dict__
myName
```
|
22,291,337
|
I know this one has been covered before, and perhaps isn't the most Pythonic way of constructing a class, but I have a lot of different Maya node classes with a lot of @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/maintenance.
I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes the value is get/set on an outside object.
I have seen examples of re-implementing \_\_setattr\_\_ on Stack Overflow, but I seem to be missing something.
I don't think I am maintaining the default functionality of \_\_setattr\_\_.
Here is an example:
```
externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0}
attrKeys = ['translateX','translateY','translateZ']
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
myInstance = Transform()
myInstance.translateX
# Result: 1.0 #
myInstance.translateX = 9999
myInstance.translateX
# Result: 9999 #
myInstance.name = 'myName'
myInstance.name
# AttributeError: No attribute named [name] #
```
!
|
2014/03/10
|
[
"https://Stackoverflow.com/questions/22291337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3395409/"
] |
I have decided to go with @theodox s and use descriptors
This seems to work nicely:
```
class Transform(object):
    def __init__(self, name):
        self.name = name
        for key in ['translateX','translateY','translateZ']:
            buildNodeAttr(self.__class__, '%s.%s' % (self.name, key))

def buildNodeAttr(cls, plug):
    setattr(cls, plug.split('.')[-1], AttrDescriptor(plug))

class AttrDescriptor(object):
    def __init__(self, plug):
        self.plug = plug

    def __get__(self, obj, objtype):
        return mc.getAttr(self.plug)

    def __set__(self, obj, val):
        mc.setAttr(self.plug, val)

myTransform = Transform(mc.createNode('transform', name = 'transformA'))
myTransform.translateX = 999
```
As a side note, it turns out my original code would have worked just by switching \_\_getattribute\_\_ with \_\_getattr\_\_; no super call needed.
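For completeness, a minimal sketch of that `__getattr__` variant, using the module-level `externalData`/`attrKeys` from the question (just an illustration, not what I ended up using):
```
class Transform(object):
    def __getattr__(self, name):
        # __getattr__ is only called when normal attribute lookup fails,
        # so regular instance attributes keep their default behaviour
        if name in attrKeys:
            return externalData[name]
        raise AttributeError("No attribute named [%s]" % name)

    def __setattr__(self, name, value):
        if name in attrKeys:
            externalData[name] = value
        else:
            super(Transform, self).__setattr__(name, value)
```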
|
Why not also do the same thing in `__getattribute__`?
```
def __getattribute__(self, name):
    print 'Getting --->', name
    if name in attrKeys:
        return externalData[name]
    else:
        # raise AttributeError("No attribute named [%s]" %name)
        return super(Transform, self).__getattribute__(name)
```
Test code
```
myInstance = Transform()
myInstance.translateX
print(externalData['translateX'])
myInstance.translateX = 9999
myInstance.translateX
print(externalData['translateX'])
myInstance.name = 'myName'
print myInstance.name
print myInstance.__dict__['name']
```
Output:
```
Getting ---> translateX
1.0
Setting ---> translateX
Getting ---> translateX
9999
Setting ---> name
Getting ---> name
myName
Getting ---> __dict__
myName
```
|
22,291,337
|
I know this one has been covered before, and perhaps isn't the most Pythonic way of constructing a class, but I have a lot of different Maya node classes with a lot of @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/maintenance.
I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes the value is get/set on an outside object.
I have seen examples of re-implementing \_\_setattr\_\_ on Stack Overflow, but I seem to be missing something.
I don't think I am maintaining the default functionality of \_\_setattr\_\_.
Here is an example:
```
externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0}
attrKeys = ['translateX','translateY','translateZ']
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
myInstance = Transform()
myInstance.translateX
# Result: 1.0 #
myInstance.translateX = 9999
myInstance.translateX
# Result: 9999 #
myInstance.name = 'myName'
myInstance.name
# AttributeError: No attribute named [name] #
```
!
|
2014/03/10
|
[
"https://Stackoverflow.com/questions/22291337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3395409/"
] |
This worked for me:
```
class Transform(object):
    def __getattribute__(self, name):
        if name in attrKeys:
            return externalData[name]
        return super(Transform, self).__getattribute__(name)

    def __setattr__(self, name, value):
        if name in attrKeys:
            externalData[name] = value
        else:
            super(Transform, self).__setattr__(name, value)
```
However, I'm not sure this is a good route to go.
If the external operations are time consuming (say, you're using this to disguise access to a database or a config file) you may give users of the code the wrong impression about the cost. In a case like that you should use a method so users understand that they are initiating an action, not just looking at data.
OTOH if the access is quick, be careful that the encapsulation of your classes isn't broken. If you're doing this to get at maya scene data (pymel-style, or as in [this example](http://techartsurvival.blogspot.com/2014/02/rescuing-maya-gui-from-itself.html)) it's not a big deal since the time costs and stability of the data are more or less guaranteed. However you'd want to avoid the scenario in the example code you posted: it would be very easy to assume that having set 'translateX' to a given value it would stay put, where in fact there are lots of ways that the contents of the outside variables could get messed with, preventing you from being able to know your invariants while using the class. If the class is intended for throwaway use (say, it's syntax sugar for a lot of fast repetitive processing inside a loop where no other operations are running) you could get away with it - but if not, internalize the data to your instances.
One last issue: If you have 'a lot of classes' you will also have to do a lot of boilerplate to make this work. If you are trying to wrap Maya scene data, read up on descriptors ([here's a great 5-minute video](http://nedbatchelder.com/blog/201306/explaining_descriptors.html)). You can wrap typical transform properties, for example, like this:
```
import maya.cmds as cmds

class MayaProperty(object):
    '''
    in a real implementation you'd want to support different value types,
    etc by storing flags appropriate to different commands....
    '''
    def __init__(self, cmd, flag):
        self.Command = cmd
        self.Flag = flag

    def __get__(self, obj, objtype):
        return self.Command(obj, **{'q':True, self.Flag:True} )

    def __set__(self, obj, value):
        self.Command(obj, **{ self.Flag:value})

class XformWrapper(object):
    def __init__(self, obj):
        self.Object = obj

    def __repr__(self):
        return self.Object # so that the command will work on the string name of the object

    translation = MayaProperty(cmds.xform, 'translation')
    rotation = MayaProperty(cmds.xform, 'rotation')
    scale = MayaProperty(cmds.xform, 'scale')
```
In real code you'd need error handling and cleaner configuration but you see the idea.
The example linked above talks about using metaclasses to populate classes when you have lots of property descriptors to configure, that is a good route to go if you don't want to worry about all the boilerplate (though it does have a minor startup time penalty - I think that's one of the reasons for the notorious Pymel startup crawl...)
|
Here in your snippet:
```
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
```
See, in your `__setattr__()`, when you call `myInstance.name = 'myName'`, `name` is not in `attrKeys`, so it doesn't get inserted into the externalData dictionary; instead it is stored via `self.__dict__['name'] = value`.
So when you try to look up that particular name, it isn't in your `externalData` dictionary, and your `__getattribute__` raises an exception.
You can fix that by changing `__getattribute__` so that, instead of raising an exception, it falls back to the default lookup, as below:
```
def __getattribute__(self, name):
    print 'Getting --->', name
    if name in attrKeys:
        return externalData[name]
    else:
        return object.__getattribute__(self, name)
```
|
22,291,337
|
I know this one has been covered before, and perhaps isn't the most Pythonic way of constructing a class, but I have a lot of different Maya node classes with a lot of @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/maintenance.
I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes the value is get/set on an outside object.
I have seen examples of re-implementing \_\_setattr\_\_ on Stack Overflow, but I seem to be missing something.
I don't think I am maintaining the default functionality of \_\_setattr\_\_.
Here is an example:
```
externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0}
attrKeys = ['translateX','translateY','translateZ']
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
myInstance = Transform()
myInstance.translateX
# Result: 1.0 #
myInstance.translateX = 9999
myInstance.translateX
# Result: 9999 #
myInstance.name = 'myName'
myInstance.name
# AttributeError: No attribute named [name] #
```
!
|
2014/03/10
|
[
"https://Stackoverflow.com/questions/22291337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3395409/"
] |
I have decided to go with @theodox s and use descriptors
This seems to work nicely:
```
class Transform(object):
    def __init__(self, name):
        self.name = name
        for key in ['translateX','translateY','translateZ']:
            buildNodeAttr(self.__class__, '%s.%s' % (self.name, key))

def buildNodeAttr(cls, plug):
    setattr(cls, plug.split('.')[-1], AttrDescriptor(plug))

class AttrDescriptor(object):
    def __init__(self, plug):
        self.plug = plug

    def __get__(self, obj, objtype):
        return mc.getAttr(self.plug)

    def __set__(self, obj, val):
        mc.setAttr(self.plug, val)

myTransform = Transform(mc.createNode('transform', name = 'transformA'))
myTransform.translateX = 999
```
As a side note, it turns out my original code would have worked just by switching \_\_getattribute\_\_ with \_\_getattr\_\_; no super call needed.
|
Here in your snippet:
```
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
```
See, in your `__setattr__()`, when you call `myInstance.name = 'myName'`, `name` is not in `attrKeys`, so it doesn't get inserted into the externalData dictionary; instead it is stored via `self.__dict__['name'] = value`.
So when you try to look up that particular name, it isn't in your `externalData` dictionary, and your `__getattribute__` raises an exception.
You can fix that by changing `__getattribute__` so that, instead of raising an exception, it falls back to the default lookup, as below:
```
def __getattribute__(self, name):
    print 'Getting --->', name
    if name in attrKeys:
        return externalData[name]
    else:
        return object.__getattribute__(self, name)
```
|
22,291,337
|
I know this one has been covered before, and perhaps isn't the most Pythonic way of constructing a class, but I have a lot of different Maya node classes with a lot of @properties for retrieving/setting node data, and I want to see if procedurally building the attributes cuts down on overhead/maintenance.
I need to re-implement \_\_setattr\_\_ so that the standard behavior is maintained, but for certain special attributes the value is get/set on an outside object.
I have seen examples of re-implementing \_\_setattr\_\_ on Stack Overflow, but I seem to be missing something.
I don't think I am maintaining the default functionality of \_\_setattr\_\_.
Here is an example:
```
externalData = {'translateX':1.0,'translateY':1.0,'translateZ':1.0}
attrKeys = ['translateX','translateY','translateZ']
class Transform(object):
    def __getattribute__(self, name):
        print 'Getting --->', name
        if name in attrKeys:
            return externalData[name]
        else:
            raise AttributeError("No attribute named [%s]" %name)

    def __setattr__(self, name, value):
        print 'Setting --->', name
        super(Transform, self).__setattr__(name, value)
        if name in attrKeys:
            externalData[name] = value
myInstance = Transform()
myInstance.translateX
# Result: 1.0 #
myInstance.translateX = 9999
myInstance.translateX
# Result: 9999 #
myInstance.name = 'myName'
myInstance.name
# AttributeError: No attribute named [name] #
```
!
|
2014/03/10
|
[
"https://Stackoverflow.com/questions/22291337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3395409/"
] |
This worked for me:
```
class Transform(object):
    def __getattribute__(self, name):
        if name in attrKeys:
            return externalData[name]
        return super(Transform, self).__getattribute__(name)

    def __setattr__(self, name, value):
        if name in attrKeys:
            externalData[name] = value
        else:
            super(Transform, self).__setattr__(name, value)
```
However, I'm not sure this is a good route to go.
If the external operations are time consuming (say, you're using this to disguise access to a database or a config file) you may give users of the code the wrong impression about the cost. In a case like that you should use a method so users understand that they are initiating an action, not just looking at data.
OTOH if the access is quick, be careful that the encapsulation of your classes isn't broken. If you're doing this to get at maya scene data (pymel-style, or as in [this example](http://techartsurvival.blogspot.com/2014/02/rescuing-maya-gui-from-itself.html)) it's not a big deal since the time costs and stability of the data are more or less guaranteed. However you'd want to avoid the scenario in the example code you posted: it would be very easy to assume that having set 'translateX' to a given value it would stay put, where in fact there are lots of ways that the contents of the outside variables could get messed with, preventing you from being able to know your invariants while using the class. If the class is intended for throwaway use (say, it's syntax sugar for a lot of fast repetitive processing inside a loop where no other operations are running) you could get away with it - but if not, internalize the data to your instances.
One last issue: If you have 'a lot of classes' you will also have to do a lot of boilerplate to make this work. If you are trying to wrap Maya scene data, read up on descriptors ([here's a great 5-minute video](http://nedbatchelder.com/blog/201306/explaining_descriptors.html)). You can wrap typical transform properties, for example, like this:
```
import maya.cmds as cmds

class MayaProperty(object):
    '''
    in a real implementation you'd want to support different value types,
    etc by storing flags appropriate to different commands....
    '''
    def __init__(self, cmd, flag):
        self.Command = cmd
        self.Flag = flag

    def __get__(self, obj, objtype):
        return self.Command(obj, **{'q':True, self.Flag:True} )

    def __set__(self, obj, value):
        self.Command(obj, **{ self.Flag:value})

class XformWrapper(object):
    def __init__(self, obj):
        self.Object = obj

    def __repr__(self):
        return self.Object # so that the command will work on the string name of the object

    translation = MayaProperty(cmds.xform, 'translation')
    rotation = MayaProperty(cmds.xform, 'rotation')
    scale = MayaProperty(cmds.xform, 'scale')
```
In real code you'd need error handling and cleaner configuration but you see the idea.
The example linked above talks about using metaclasses to populate classes when you have lots of property descriptors to configure, that is a good route to go if you don't want to worry about all the boilerplate (though it does have a minor startup time penalty - I think that's one of the reasons for the notorious Pymel startup crawl...)
|
I have decided to go with @theodox s and use descriptors
This seems to work nicely:
```
class Transform(object):
    def __init__(self, name):
        self.name = name
        for key in ['translateX','translateY','translateZ']:
            buildNodeAttr(self.__class__, '%s.%s' % (self.name, key))

def buildNodeAttr(cls, plug):
    setattr(cls, plug.split('.')[-1], AttrDescriptor(plug))

class AttrDescriptor(object):
    def __init__(self, plug):
        self.plug = plug

    def __get__(self, obj, objtype):
        return mc.getAttr(self.plug)

    def __set__(self, obj, val):
        mc.setAttr(self.plug, val)

myTransform = Transform(mc.createNode('transform', name = 'transformA'))
myTransform.translateX = 999
```
As a side note, it turns out my original code would have worked just by switching \_\_getattribute\_\_ with \_\_getattr\_\_; no super call needed.
|
52,458,158
|
I am dealing with the dataset **titanic** from *seaborn*.
```
titanic = seaborn.load_dataset('titanic')
```
I cut the age column into categorical bins.
```
age = pd.cut(titanic['age'], [0, 18, 80])
```
Then the problem comes: groupby and pivot\_table give totally different results:
```
titanic.groupby(['sex', age, 'class'])['survived'].mean().unstack(-1)
titanic.pivot_table('survived', ['sex', age], 'class')
```
[groupby and pivot\_table results](https://i.stack.imgur.com/1Ecd9.png)
At first I guessed it was because of the NaN values in **age**, so I redid it with a dataset processed by dropna.
```
titanic = titanic.dropna()
age = pd.cut(titanic['age'], [0, 18, 80], right = True)
titanic.groupby(['sex', age, 'class'])['survived'].mean().unstack(-1)
titanic.pivot_table('survived', ['sex', age], 'class')
```
This time I even got a totally different result.
[groupby and pivot\_table results after dropna](https://i.stack.imgur.com/6757L.png)
My Python version is: Python 3.6.5 :: Anaconda, Inc.
pandas: 0.23.0
My operating system is macOS High Sierra 10.13.6.
I tried again with Python 3.7.0 and pandas 0.23.4, and no error occurred.
[result under python 3.7.0](https://i.stack.imgur.com/maLu2.png)
So I am wondering whether this is a bug in Anaconda?
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52458158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10401026/"
] |
It seems to work best when the object's key is a single class.
You can instead do something like this:
```
class="fa" [ngClass]="{'fa-paper-plane': true, 'fa-spinner': false, 'fa-spin': false }"
```
Because the `fa` class should always apply, it's being done in a normal `class` attribute
|
When an expression evaluates to true, the classes passed to ngClass are added to the element's classList; when it evaluates to false, those classes are removed from the element's classList. Example:
```
<div>
<p>
<i [ngClass]="{'fa fa-spinner fa-spin': false, 'fa fa-telegram-plane': false}"> </i>
Random Text
</p>
<!-- DOM will have <i> </i> -->
<p>
<i [ngClass]="{'fa fa-spinner fa-spin test test1234': true, 'fa fa-telegram test test1234' : false}"> </i>
Random Text
</p>
<!-- DOM will have <i class="fa-spinner fa-spin"> </i> -->
<p>
<i [ngClass]="{'fa fa-spinner fa-spin': false, 'fa fa-telegram': true}"> </i>
Random Text
</p>
<!-- DOM will have <i class="fa fa-telegram"> </i> -->
<p>
<i [ngClass]="{'fa fa-telegram test test1234': false, 'fa fa-spinner fa-spin test test1234': true}"> </i>
Random Text
</p>
<!-- DOM will have <i class="fa fa-spinner fa-spin test test1234"> </i> -->
</div>
```
Example Code : <https://stackblitz.com/edit/angular-ngclass-kmherp?file=app%2Fapp.component.html>
|
45,619,018
|
I'm trying to use OpenCV to segment a bent rod from its background, then find the bends in it and calculate the angle between each bend.
The first part is luckily trivial, since there is enough contrast between the foreground and background.
A bit of erosion/dilation takes care of reflections/highlights when segmenting.
The second part is where I'm not sure how to approach it.
I can easily retrieve a contour (top and bottom are very similar so either would do),
but what I can't seem to figure out is how to split the contour into the straight parts and the bent parts to calculate the angles.
So far I've tried simplifying the contours, but either I get too many or too few points and it feels difficult to pinpoint the right
settings to keep the straight parts straight and the bent parts simplified.
Here is my input image(bend.png)
[](https://i.stack.imgur.com/h0iZp.png)
And here's what I've tried so far:
```
#!/usr/bin/env python
import numpy as np
import cv2
threshold = 229
# erosion/dilation kernel
kernel = np.ones((5,5),np.uint8)
# contour simplification
epsilon = 0
# slider callbacks
def onThreshold(x):
    global threshold
    print "threshold = ",x
    threshold = x

def onEpsilon(x):
    global epsilon
    epsilon = x * 0.01
    print "epsilon = ",epsilon
# make a window to add sliders/preview to
cv2.namedWindow('processed')
#make some sliders
cv2.createTrackbar('threshold','processed',60,255,onThreshold)
cv2.createTrackbar('epsilon','processed',1,1000,onEpsilon)
# load image
img = cv2.imread('bend.png',0)
# continuously process for quick feedback
while 1:
    # exit on ESC key
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break
    # Threshold
    ret,processed = cv2.threshold(img,threshold,255,0)
    # Invert
    processed = (255-processed)
    # Dilate
    processed = cv2.dilate(processed,kernel)
    processed = cv2.erode(processed,kernel)
    # Canny
    processed = cv2.Canny(processed,100,200)
    contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) > 0:
        approx = cv2.approxPolyDP(contours[0],epsilon,True)
        # print len(approx)
        cv2.drawContours(processed, [approx], -1, (255,255,255), 3)
        demo = img.copy()
        cv2.drawContours(demo, [approx], -1, (192,0,0), 3)
    # show result
    cv2.imshow('processed ',processed)
    cv2.imshow('demo ',demo)
# exit
cv2.destroyAllWindows()
```
Here's what I've got so far, but I'm not convinced this is the best approach:
[](https://i.stack.imgur.com/6rllq.jpg)
[](https://i.stack.imgur.com/Knsj4.jpg)
I've tried to figure this out visually and what I've aimed for is something along these lines:
[](https://i.stack.imgur.com/787iX.png)
Because the end goal is to calculate the angle between bent parts, something like this feels simpler:
[](https://i.stack.imgur.com/2ytKv.png)
My assumption is that fitting lines and computing the angles between pairs of intersecting lines could work:
[](https://i.stack.imgur.com/wK15o.png)
I did a quick test using the [HoughLines OpenCV Python tutorial](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html), but regardless of the parameters passed I didn't get great results:
```
#!/usr/bin/env python
import numpy as np
import cv2
threshold = 229
minLineLength = 30
maxLineGap = 10
houghThresh = 15
# erosion/dilation kernel
kernel = np.ones((5,5),np.uint8)
# slider callbacks
def onMinLineLength(x):
    global minLineLength
    minLineLength = x
    print "minLineLength = ",x

def onMaxLineGap(x):
    global maxLineGap
    maxLineGap = x
    print "maxLineGap = ",x

def onHoughThresh(x):
    global houghThresh
    houghThresh = x
    print "houghThresh = ",x
# make a window to add sliders/preview to
cv2.namedWindow('processed')
#make some sliders
cv2.createTrackbar('minLineLength','processed',1,50,onMinLineLength)
cv2.createTrackbar('maxLineGap','processed',5,30,onMaxLineGap)
cv2.createTrackbar('houghThresh','processed',15,50,onHoughThresh)
# load image
img = cv2.imread('bend.png',0)
# continuously process for quick feedback
while 1:
    # exit on ESC key
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break
    # Threshold
    ret,processed = cv2.threshold(img,threshold,255,0)
    # Invert
    processed = (255-processed)
    # Dilate
    processed = cv2.dilate(processed,kernel)
    processed = cv2.erode(processed,kernel)
    # Canny
    processed = cv2.Canny(processed,100,200)
    lineBottom = np.zeros(img.shape,np.uint8)
    contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) > 0:
        cv2.drawContours(lineBottom, contours, 0, (255,255,255), 1)
    # HoughLinesP
    houghResult = img.copy()
    lines = cv2.HoughLinesP(lineBottom,1,np.pi/180,houghThresh,minLineLength,maxLineGap)
    try:
        for x in range(0, len(lines)):
            for x1,y1,x2,y2 in lines[x]:
                cv2.line(houghResult,(x1,y1),(x2,y2),(0,255,0),2)
    except Exception as e:
        print e
    # show result
    cv2.imshow('lineBottom',lineBottom)
    cv2.imshow('houghResult ',houghResult)
# exit
cv2.destroyAllWindows()
```
[](https://i.stack.imgur.com/Nf4Ax.jpg)
Is this a feasible approach? If so, what's the correct way of doing line fitting in OpenCV Python?
Otherwise, what's the best way to tackle this problem?
**Update**: Following Miki's advice I've tried OpenCV 3's LSD and got nicer results than with `HoughLinesP`, but it looks like there's still some tweaking needed, although other than `cv2.createLineSegmentDetector` there don't seem to be many options to play with:
[](https://i.stack.imgur.com/jBtj2.jpg)
|
2017/08/10
|
[
"https://Stackoverflow.com/questions/45619018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/89766/"
] |
It may be convenient to use curvature to find line segments. Here [example](http://www.morethantechnical.com/2012/12/07/resampling-smoothing-and-interest-points-of-curves-via-css-in-opencv-w-code/) of splitting contour by minimal curvature points, it may be better to use maximal curvature points in your case. B You can split your curve to parts, then each part approximate with line segment using RANSAC method.
|
Once you have the contour, you can analyze it using a method like the one proposed in this paper: <https://link.springer.com/article/10.1007/s10032-011-0175-3>
Basically, the contour is tracked calculating the curvature at each point.
Then you can use a curvature threshold to segment the contour into straight and curved sections.
|
45,619,018
|
I'm trying to use OpenCV to segment a bent rod from it's background then find the bends in it and calculate the angle between each bend.
The first part luckily is trivial with a enough contrast between the foreground and background.
A bit of erosion/dilation takes care of reflections/highlights when segmenting.
The second part is where I'm not sure how to approach it.
I can easily retrieve a contour (top and bottom are very similar so either would do),
but I can't seem to figure out is how to get split the contour into the straight parts and the bend rods to calculate the angles.
So far I've tried simplyfying the contours, but either I get too many or too few points and it feels difficult to pin point the right
settings to keep the straight parts straight and the bent parts simplified.
Here is my input image(bend.png)
[](https://i.stack.imgur.com/h0iZp.png)
And here's what I've tried so far:
```
#!/usr/bin/env python
import numpy as np
import cv2
threshold = 229
# erosion/dilation kernel
kernel = np.ones((5,5),np.uint8)
# contour simplification
epsilon = 0
# slider callbacks
def onThreshold(x):
global threshold
print "threshold = ",x
threshold = x
def onEpsilon(x):
global epsilon
epsilon = x * 0.01
print "epsilon = ",epsilon
# make a window to add sliders/preview to
cv2.namedWindow('processed')
#make some sliders
cv2.createTrackbar('threshold','processed',60,255,onThreshold)
cv2.createTrackbar('epsilon','processed',1,1000,onEpsilon)
# load image
img = cv2.imread('bend.png',0)
# continuously process for quick feedback
while 1:
# exit on ESC key
k = cv2.waitKey(1) & 0xFF
if k == 27:
break
# Threshold
ret,processed = cv2.threshold(img,threshold,255,0)
# Invert
processed = (255-processed)
# Dilate
processed = cv2.dilate(processed,kernel)
processed = cv2.erode(processed,kernel)
# Canny
processed = cv2.Canny(processed,100,200)
contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
if len(contours) > 0:
approx = cv2.approxPolyDP(contours[0],epsilon,True)
# print len(approx)
cv2.drawContours(processed, [approx], -1, (255,255,255), 3)
demo = img.copy()
cv2.drawContours(demo, [approx], -1, (192,0,0), 3)
# show result
cv2.imshow('processed ',processed)
cv2.imshow('demo ',demo)
# exit
cv2.destroyAllWindows()
```
Here's what I've got so far, but I'm not convinced this is the best approach:
[](https://i.stack.imgur.com/6rllq.jpg)
[](https://i.stack.imgur.com/Knsj4.jpg)
I've tried to figure this out visually and what I've aimed for is something along these lines:
[](https://i.stack.imgur.com/787iX.png)
Because the end goal is to calculate the angle between bent parts something like this feels simpler:
[](https://i.stack.imgur.com/2ytKv.png)
My assumption that fitting lines and compute the angles between pairs of intersecting lines could work:
[](https://i.stack.imgur.com/wK15o.png)
I did a quick test using the [HoughLines OpenCV Python tutorial](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html), but regardless of the parameters passed I didn't get great results:
```
#!/usr/bin/env python
import numpy as np
import cv2
threshold = 229
minLineLength = 30
maxLineGap = 10
houghThresh = 15
# erosion/dilation kernel
kernel = np.ones((5,5),np.uint8)
# slider callbacks
def onMinLineLength(x):
global minLineLength
minLineLength = x
print "minLineLength = ",x
def onMaxLineGap(x):
global maxLineGap
maxLineGap = x
print "maxLineGap = ",x
def onHoughThresh(x):
global houghThresh
houghThresh = x
print "houghThresh = ",x
# make a window to add sliders/preview to
cv2.namedWindow('processed')
#make some sliders
cv2.createTrackbar('minLineLength','processed',1,50,onMinLineLength)
cv2.createTrackbar('maxLineGap','processed',5,30,onMaxLineGap)
cv2.createTrackbar('houghThresh','processed',15,50,onHoughThresh)
# load image
img = cv2.imread('bend.png',0)
# continuously process for quick feedback
while 1:
# exit on ESC key
k = cv2.waitKey(1) & 0xFF
if k == 27:
break
# Threshold
ret,processed = cv2.threshold(img,threshold,255,0)
# Invert
processed = (255-processed)
# Dilate
processed = cv2.dilate(processed,kernel)
processed = cv2.erode(processed,kernel)
# Canny
processed = cv2.Canny(processed,100,200)
lineBottom = np.zeros(img.shape,np.uint8)
contours, hierarchy = cv2.findContours(processed,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
if len(contours) > 0:
cv2.drawContours(lineBottom, contours, 0, (255,255,255), 1)
# HoughLinesP
houghResult = img.copy()
lines = cv2.HoughLinesP(lineBottom,1,np.pi/180,houghThresh,minLineLength,maxLineGap)
try:
for x in range(0, len(lines)):
for x1,y1,x2,y2 in lines[x]:
cv2.line(houghResult,(x1,y1),(x2,y2),(0,255,0),2)
except Exception as e:
print e
# show result
cv2.imshow('lineBottom',lineBottom)
cv2.imshow('houghResult ',houghResult)
# exit
cv2.destroyAllWindows()
```
[](https://i.stack.imgur.com/Nf4Ax.jpg)
Is this a feasible approach? If so, what's the correct way of doing line fitting in OpenCV Python?
Otherwise, what's the best way to tackle this problem?
**Update**: Following Miki's advice I've tried OpenCV 3's LSD and got nicer results than with `HoughLinesP`, but it looks like there's still some tweaking needed, although other than `cv2.createLineSegmentDetector` there don't seem to be many options to play with:
[](https://i.stack.imgur.com/jBtj2.jpg)
|
2017/08/10
|
[
"https://Stackoverflow.com/questions/45619018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/89766/"
] |
I know this is old but I found this after having a similar problem
The method I used (after finding binary image) was along the lines of:
1. Find ends (points with fewest neighbors)
2. [skeletonize](https://scikit-image.org/docs/stable/auto_examples/edges/plot_skeleton.html) (optional)
3. Starting at one end find a few closest points using skimage [cdist](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html)
4. Perform a linear regression with these points and find all points in the image within a few pixels error of the line of best fit. I used [query\_ball\_point](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query_ball_point.html)
5. This gives additional points within the same straight line. Order them by distance from the last fiducial point. Some of these might be projections of the line onto distant parts of the object and should be deleted.
6. Repeat steps 4 and 5 until no more points are added.
7. Once no more points are added to the line you find the start of the next valid line by looking at R-squared for the fit. The line should have very high R squared eg. > 0.95 (depending on the image - I was getting > 0.99). Keep changing starting point until high R squared is achieved.
8. This gives a bunch of line segments from where it should be easy to find the angles between them. One potential problem occurs when the segment is vertical (or horizontal) and the slope becomes infinite. When this occurred I just flipped the axes around. You can also get around this by defining end points of a line and finding all points within a [threshold distance from that line](https://stackoverflow.com/questions/39840030/distance-between-point-and-a-line-from-two-points) rather than doing the regression.
This involves a lot more coding than using the other methods suggested but execution time is fast and it gives much greater control over what is happening.
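A greatly simplified sketch of the core of steps 4-7, assuming the skeleton points have already been ordered along the rod (the end finding, `query_ball_point` neighbourhood search and vertical-line handling from the list above are left out):
```
import numpy as np

def grow_segments(points, seed_count=5, r2_min=0.95):
    """Greedily extend a straight segment until the linear fit degrades."""
    segments = []
    current = list(points[:seed_count])
    for pt in points[seed_count:]:
        candidate = current + [pt]
        xs = np.array([p[0] for p in candidate], dtype=float)
        ys = np.array([p[1] for p in candidate], dtype=float)
        # fit y = a*x + b and check how well the candidate stays on one line
        a, b = np.polyfit(xs, ys, 1)
        resid = ys - (a * xs + b)
        ss_tot = np.sum((ys - ys.mean()) ** 2)
        r2 = 1.0 - np.sum(resid ** 2) / ss_tot if ss_tot > 0 else 1.0
        if r2 >= r2_min:
            current = candidate
        else:
            segments.append(np.array(current))
            current = [pt]          # start the next segment at the break point
    segments.append(np.array(current))
    return segments
```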
|
Once you have the contour, you can analyze it using a method like the one proposed in this paper: <https://link.springer.com/article/10.1007/s10032-011-0175-3>
Basically, the contour is tracked calculating the curvature at each point.
Then you can use a curvature threshold to segment the contour into straight and curved sections.
|
21,889,795
|
I couldn't find the right search terms for this question, so please excuse me if it has already been asked before.
Basically, I want to create a python function that allows you to name the columns (as a function parameter) that you will do certain kinds of analyses on.
For instance see below. Obviously this code doesn't work because 'yearattribute' is taken literally after the df. I'd appreciate your help!
```
def networkpairs2(df, Year):
    """
    An effort to generalize the networkpairs function by allowing you to choose the
    organization and actor parameter column names
    """
    totaldf = df
    yearattribute = '%s' %Year
    print yearattribute
    yearlist = list(np.unique(df.yearattribute))
    print yearlist
    return
```
|
2014/02/19
|
[
"https://Stackoverflow.com/questions/21889795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3314418/"
] |
If I understand you, `df[yearattribute].unique()` should work. You can index into DataFrame columns like a dictionary.
Aside #1: `totaldf = df` only makes `totaldf` a new name for `df`, it doesn't make a copy, and you don't use it anyway.
Aside #2: you're not returning anything.
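For instance, a minimal sketch of the function using dictionary-style column access (the column name is just whatever string you pass in):
```
def networkpairs2(df, year_col):
    # index the column by its name instead of using attribute access
    yearlist = df[year_col].unique().tolist()
    print yearlist
    return yearlist
```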
|
You can use [`getattr`](http://docs.python.org/3/library/functions.html#getattr) here:
```
yearlist = list(np.unique(getattr(df, yearattribute)))
```
`getattr` allows you to access an attribute via a string representation of its name.
Below is a demonstration:
```
>>> class Foo:
... def __init__(self):
... self.attr = 'value'
...
>>> foo = Foo()
>>> getattr(foo, 'attr')
'value'
>>>
```
|
2,492,508
|
Is there a Python library that supports OpenType features? Where can I get it?
Please do not point me to FontForge; I live in Iran, so I cannot download anything from them.
|
2010/03/22
|
[
"https://Stackoverflow.com/questions/2492508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/275221/"
] |
The [Python Imaging Library (PIL)](http://www.pythonware.com/products/pil/) supports OpenType.
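A minimal sketch of loading an OpenType font with PIL (the font path is just an example; note this covers basic text drawing, not advanced OpenType layout features):
```
from PIL import Image, ImageDraw, ImageFont

# ImageFont.truetype() loads .otf as well as .ttf files via FreeType
font = ImageFont.truetype("/path/to/SomeFont.otf", 32)  # example path

img = Image.new("RGB", (400, 100), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 30), "Hello", font=font, fill="black")
img.save("sample.png")
```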
|
If you mean OpenType fonts, there is a Python library for FontForge:
<http://fontforge.sourceforge.net/python.html>
|
2,492,508
|
Is there a Python library that supports OpenType features? Where can I get it?
Please do not point me to FontForge; I live in Iran, so I cannot download anything from them.
|
2010/03/22
|
[
"https://Stackoverflow.com/questions/2492508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/275221/"
] |
If you mean OpenType fonts, there is a Python library for FontForge:
<http://fontforge.sourceforge.net/python.html>
|
You can use [Pango](http://www.pango.org) and [Cairo](http://cairographics.org), together with Cairo's Python binding 'pycairo', which is available from the latter site as well.
You can access some (but not all) OpenType features with these libraries.
|
2,492,508
|
Is there a Python library that supports OpenType features? Where can I get it?
Please do not point me to FontForge; I live in Iran, so I cannot download anything from them.
|
2010/03/22
|
[
"https://Stackoverflow.com/questions/2492508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/275221/"
] |
The [Python Imaging Library (PIL)](http://www.pythonware.com/products/pil/) supports OpenType.
|
You can use [Pango](http://www.pango.org) and [Cairo](http://cairographics.org), together with Cairo's Python binding 'pycairo', which is available from the latter site as well.
You can access some (but not all) OpenType features with these libraries.
|
70,422,733
|
Suppose you have a string:
```
text = "coding in python is a lot of fun"
```
And character positions:
```
positions = [(0,6),(10,16),(29,32)]
```
These are intervals, which cover certain words within text, i.e. coding, python and fun, respectively.
Using the character positions, how could you split the text on those words, to get this output:
```
['coding','in','python','is a lot of','fun']
```
This is just an example, but it should work for any string and any list of character positions.
I'm not looking for this:
`[text[i:j] for i,j in positions]`
|
2021/12/20
|
[
"https://Stackoverflow.com/questions/70422733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6515530/"
] |
I'd flatten `positions` to be `[0,6,10,16,29,32]` and then do something like
```py
positions.append(-1)
prev_positions = [0] + positions
words = []
for begin, end in zip(prev_positions, positions):
    words.append(text[begin:end])
```
This exact code produces `['', 'coding', ' in ', 'python', ' is a lot of ', 'fun', '']`, so it needs some additional work to strip the whitespace
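A slightly more complete sketch along the same lines, which also drops the empty pieces and the surrounding spaces (assuming the original tuple list as input):
```
text = "coding in python is a lot of fun"
positions = [(0, 6), (10, 16), (29, 32)]

# flatten the intervals into cut points, including the start and end of the text
cuts = [0] + [i for pair in positions for i in pair] + [len(text)]

pieces = [text[a:b].strip() for a, b in zip(cuts, cuts[1:])]
pieces = [p for p in pieces if p]  # drop the empty strings left by adjacent cuts
print(pieces)  # ['coding', 'in', 'python', 'is a lot of', 'fun']
```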
|
Below code works as expected
```
text = "coding in python is a lot of fun"
positions = [(0,6),(10,16),(29,32)]
textList = []
lastIndex = 0
for indexes in positions:
    s = slice(indexes[0], indexes[1])
    if positions.index(indexes) > 0:
        print(lastIndex)
        textList.append(text[lastIndex: indexes[0]])
    textList.append(text[indexes[0]: indexes[1]])
    lastIndex = indexes[1] + 1
print(textList)
```
Output: ['coding', 'in ', 'python', 'is a lot of ', 'fun']
Note: if the spaces are not needed you can trim them.
|
63,842,868
|
```
import pyautogui as py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import keyboard
driver = webdriver.Chrome()
driver.get('https://www.youtube.com/watch?v=pcel9QTPx_g&list=RDpcel9QTPx_g&start_radio=1&t=11&ab_channel=%E5%BE%AE%E7%B3%96%E9%80%A2')
elem = driver.find_elements_by_id('video-title')
while True:
    if keyboard.is_pressed('`'):
        driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click()
    for x in elem:
        elem.click()
        keyboard.wait('f4')
```
So I am trying to automate iterating through a playlist of songs in Selenium using Python. I am trying to make the code so that when I press `, the video pauses, and when I press f4 the code skips to the next iteration. The f4 part works just fine, but the code inside the while loop isn't working. Is it because once the code reaches the for loop it never gets back to the while loop? If you have any ideas or want to simplify my code, feel free to answer!
|
2020/09/11
|
[
"https://Stackoverflow.com/questions/63842868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13781869/"
] |
```
import pyautogui as py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import keyboard
driver = webdriver.Chrome()
driver.get('https://www.youtube.com/watch?v=pcel9QTPx_g&list=RDpcel9QTPx_g&start_radio=1&t=11&ab_channel=%E5%BE%AE%E7%B3%96%E9%80%A2')
elem = driver.find_elements_by_id('video-title')
for x in elem:
    x.click()
    while True:
        if keyboard.is_pressed('f12'):
            driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click()
            keyboard.wait('f12')
            driver.find_element_by_xpath('//*[@id="movie_player"]/div[29]/div[2]/div[1]/button').click()
        if keyboard.is_pressed('f4'):
            break
    keyboard.wait('f4')
```
|
I would also suggest you use another key instead of f12, because it will disrupt your code when you are running it from Chrome; it will open developer mode!
|
39,845,636
|
I have the latest version of pip 8.1.1 on my ubuntu 16.
But I am not able to install any modules via pip as I get this error all the time.
```
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
ws.require(__requires__)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [link](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but it was not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
I repaired mine with this command:
>
> easy\_install pip
>
>
>
|
Delete all of the pip/pip3 stuff under .local including the packages.
```
sudo apt-get purge python-pip python3-pip
```
Now remove all pip3 files from local
```
sudo rm -rf /usr/local/bin/pip3
```
You can check which pip is installed; otherwise execute the command below to remove them all (no worries):
```
sudo rm -rf /usr/local/bin/pip3.*
```
Using pip and/or pip3, reinstall needed Python packages.
```
sudo apt-get install python-pip python3-pip
```
|
39,845,636
|
I have the latest version of pip 8.1.1 on my ubuntu 16.
But I am not able to install any modules via pip as I get this error all the time.
```
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
ws.require(__requires__)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
```
I found a similar [link](https://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found), but not helpful.
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39845636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5474316/"
] |
Just relink to resolve it. Find which python : `ls -l /usr/local/bin/python`
```
ln -sf /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/bin/pip /usr/local/bin/pip
```
Or reinstall pip: <https://pip.pypa.io/en/stable/installing/>
|
After a large upgrade in Mint -> 19, my system was a bit weird and I came across this problem too.
I checked the answer from @amangpt777, which may be the one to try:
```
/usr/local/bin/pip # -> actually had a shebang calling python3
~/.local/bin/pip* # files were duplicated with the "sudo installed" /usr/local/bin/pip*
```
Running
```
sudo python get-pip.py # with script https://bootstrap.pypa.io/get-pip.py
sudo -H pip install setuptools
```
seems to solve the problem. I understand this as a problem with the root/user installation of Python. Not sure if anaconda3 is also messing around with those scripts.
|