29,692,140
If we make a pathological potato like this: ``` >>> class Potato: ... def __eq__(self, other): ... return False ... def __hash__(self): ... return random.randint(1, 10000) ... >>> p = Potato() >>> p == p False ``` We can break sets and dicts this way (*note:* it's the same even if `__eq__` returns `True`; it's mucking with the hash that breaks them): ``` >>> p in {p} False >>> p in {p: 0} False ``` Also `len({p: 0, p: 0}) == 2`, and `{p: 0}[p]` raises KeyError; basically all mapping-related stuff goes out the window, as expected. But what I didn't expect is that we *can't* break lists: ``` >>> p in [p] True ``` Why is that? It seems that `list.__contains__` iterates, but it first [checks identity](https://docs.python.org/3/c-api/object.html#c.PyObject_RichCompareBool) before checking equality. Since identity does not imply equality (see for example the NaN object), what is the reason for lists short-circuiting on identity comparisons?
2015/04/17
[ "https://Stackoverflow.com/questions/29692140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/674039/" ]
`list`, `tuple`, etc., do indeed do an identity check before an equality check, and this behavior is motivated by [these invariants](http://bugs.python.org/issue4296#msg75735): ``` assert a in [a] assert a in (a,) assert [a].count(a) == 1 for a in container: assert a in container # this should ALWAYS be true ``` Unfortunately, `dict`s, `set`s, and friends operate by hashes, so if you mess with those you can indeed effectively break them. See [this issue](http://bugs.python.org/issue4296) and [this issue](http://bugs.python.org/issue11945) for some history.
In general, breaking the assumption that identity implies equality can break a variety of things in Python. It is true that NaN breaks this assumption, and thus NaN breaks some things in Python. Discussion can be found in [this Python bug](http://bugs.python.org/issue4296). In a pre-release version of Python 3.0, reliance on this assumption was removed, but the resolution of the bug was to put it back in (i.e., make Python 3 give the same behavior as Python 2, in which the identity check shortcut is done). The [documentation](https://docs.python.org/3/reference/expressions.html#comparisons) for Python 3 correctly says: > > For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression `x in y` is equivalent to `any(x is e or x == e for e in y)`. > > > However, it appears the documentation for Python 2 is incorrect, since it says: > > For the list and tuple types, x in y is true if and only if there exists an index i such that x == y[i] is true. > > > You could raise a documentation bug about this if you want, although it is a pretty esoteric issue so I doubt it will be high on anyone's priority list.
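To see the NaN case concretely (a small illustration of standard CPython behavior, not part of the original discussion):

```
>>> nan = float('nan')
>>> nan == nan        # NaN is not equal to itself
False
>>> nan in [nan]      # yet the identity shortcut makes containment hold
True
>>> [nan].count(nan)  # count() relies on the same shortcut
1
```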
65,869,272
I've created a TensorFlow model that uses RaggedTensors. Model works fine and when calling `model.predict` and I get the expected results. ``` input = tf.ragged.constant([[[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963], [-0.9984272718429565, -0.942145586013794, 2.48842453956604, -1.6836735010147095, -0.6360141634941101, -1.5785763263702393, -1.900200605392456, -0.49169546365737915, -0.6656315326690674, -0.6583622694015503, -0.5273991227149963], ]]) model.predict(input) >> array([[0.5138151 , 0.3277698 , 0.26122513]], dtype=float32) ``` I've deployed the model to a TensorFlow serving server and using the following code to invoke: ``` import json import requests headers = {"content-type": "application/json"} data = json.dumps({"instances":[ [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data , [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data ], [-1.3523329846758267, ... more data }) json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers) predictions = json.loads(json_response.text) ``` But then I get the following error: ``` "instances is a plain list, but expecting list of objects as multiple input tensors required as per tensorinfo_map" ``` My model description: ``` MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['__saved_model_init_op']: The given SavedModel SignatureDef contains the following input(s): The given SavedModel SignatureDef contains the following output(s): outputs['__saved_model_init_op'] tensor_info: dtype: DT_INVALID shape: unknown_rank name: NoOp Method name is: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['args_0'] tensor_info: dtype: DT_FLOAT shape: (-1, 11) name: serving_default_args_0:0 inputs['args_0_1'] tensor_info: dtype: DT_INT64 shape: (-1) name: serving_default_args_0_1:0 The given SavedModel SignatureDef contains the following output(s): outputs['dense_2'] tensor_info: dtype: DT_FLOAT shape: (-1, 3) name: StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict WARNING: Logging before flag parsing goes to stderr. W0124 09:33:16.365564 140189730998144 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling __init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 
Defined Functions: Function Name: '__call__' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64) Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64) Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Function Name: '_default_save_signature' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64) Function Name: 'call_and_return_all_conditional_losses' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64) Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None, 11]), tf.float32, 1, tf.int64) Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None ``` What am I missing? **Update:** After inspecting `saved_model_cli` output, I suspect I should send the request as an object like below, but I'm not sure about the inputs... ``` { "instances": [ { "args_0": nested-list ?, "args_0_1": ??? } ] } ``` **Update2** A [Colab](https://colab.research.google.com/drive/1DQT616SpJQNY8cMAgXdymANReC6sDfYW?usp=sharing) to test this scenario, a link to download the model is included in the Colab. **Update 3:** As suggested by @Niteya Shah I've called the API with: ``` data = json.dumps({ "inputs": { "args_0": [[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963]], "args_0_1": [1, 2] #Please Check what inputs come here ? } }) ``` And got the results (Finally!): ``` {'outputs': [[0.466771603, 0.455221593, 0.581544757]]} ``` Then called the model with the same data like so: ``` import numpy as np input = tf.ragged.constant([[ [-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963] ]]) model.predict(input) ``` And got different results: ``` array([[0.4817084 , 0.3649785 , 0.01603118]], dtype=float32) ``` I guess I'm still not there.
2021/01/24
[ "https://Stackoverflow.com/questions/65869272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1115237/" ]
<https://www.tensorflow.org/tfx/serving/api_rest#predict_api> I think that you need to use the columnar format recommended in the REST API instead of the row format, because the dimensions of your 0th input do not match. This means that instead of `instances` you will have to use `inputs`. Since you also have multiple inputs, you will also have to mention each one as a named input. A sample data request could look like this: ``` data = json.dumps({ "inputs": { "args_0": [[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0.6360141634941101, -1.5917999744415283, 0.6147914528846741, -0.49169546365737915, -0.6673409938812256, -0.6583622694015503, -0.5273991227149963]], "args_0_1": [10, 11] #Substitute this with the correct row partition values. } }) ``` Edit: I read about ragged tensors [here](https://www.tensorflow.org/guide/ragged_tensor#raggedtensor_encoding), and it seems that the second input may be the row partitions. I couldn't find in the documentation what row partition style is normally used, so I am using the row-lengths method. Luckily, TensorFlow ragged provides [methods](https://www.tensorflow.org/guide/ragged_tensor#evaluating_ragged_tensors) that do this for us. Use the `values` and `row_splits` properties to access them. That should work.
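As a minimal sketch of that last point (my own illustration, assuming the signature shown in the question; whether `args_0_1` expects `row_splits` or row lengths depends on how the model was exported):

```
import json
import tensorflow as tf

# Build a ragged batch shaped like the question's input: (1, None, 11)
rt = tf.ragged.constant([[[0.1] * 11, [0.2] * 11]])

payload = {
    "inputs": {
        "args_0": rt.values.numpy().tolist(),        # flat (total_rows, 11) float values
        "args_0_1": rt.row_splits.numpy().tolist(),  # row partition, here [0, 2]
    }
}
data = json.dumps(payload)  # ready to POST to the :predict endpoint
```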
Others may benefit from this, as it took me a while to stitch together: 1. Training a toy LSTM model on ragged tensors. 2. Loading it into TensorFlow Serving. 3. Making a prediction request with a serielized ragged tensor. If anyone knows how to rename "args\_0" and "args\_0\_1", please add. Relevant Git Issue: <https://github.com/tensorflow/tensorflow/issues/37226> Build & Save Model ================== TensorFlow version: 2.9.1 Python version: 3.8.12 ``` # Task: predict whether each sentence is a question or not. sentences = tf.constant( ['What makes you think she is a witch?', 'She turned me into a newt.', 'A newt?', 'Well, I got better.']) is_question = tf.constant([True, False, True, False]) # Preprocess the input strings. hash_buckets = 1000 words = tf.strings.split(sentences, ' ') hashed_words = tf.strings.to_hash_bucket_fast(words, hash_buckets) # Build the Keras model. keras_model = tf.keras.Sequential([ tf.keras.layers.Input(shape=[None], dtype=tf.int64, ragged=True), tf.keras.layers.Embedding(hash_buckets, 16), tf.keras.layers.LSTM(32, use_bias=False), tf.keras.layers.Dense(32), tf.keras.layers.Activation(tf.nn.relu), tf.keras.layers.Dense(1) ]) keras_model.compile(loss='binary_crossentropy', optimizer='rmsprop') keras_model.fit(hashed_words, is_question, epochs=5) print(keras_model.predict(hashed_words)) keras_module_path = "/home/ec2-user/SageMaker/keras-toy-lstm/1" tf.keras.Model.save(keras_model, keras_module_path) ``` Load & Infer from Model ======================= Load model into TensorFlow serving container ``` docker run -t --rm -p 8501:8501 -v "/home/ec2-user/SageMaker/keras-toy-lstm/:/models/keras-model" -e MODEL_NAME=keras-model tensorflow/serving ``` ``` import requests import json payload = {"args_0": [940, 203, 668, 638], "args_0_1": [0, 4]} headers = {"content-type": "application/json"} data = json.dumps({"inputs":payload}) r = requests.post('http://localhost:8501/v1/models/keras-model:predict', data=data, headers=headers) r.text ``` SavedModelCLI Output ==================== ``` (tensorflow2_p38) sh-4.2$ saved_model_cli show --dir /tmp/tmpgp0loz1v/ --all MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['__saved_model_init_op']: The given SavedModel SignatureDef contains the following input(s): The given SavedModel SignatureDef contains the following output(s): outputs['__saved_model_init_op'] tensor_info: dtype: DT_INVALID shape: unknown_rank name: NoOp Method name is: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['args_0'] tensor_info: dtype: DT_INT64 shape: (-1) name: serving_default_args_0:0 inputs['args_0_1'] tensor_info: dtype: DT_INT64 shape: (-1) name: serving_default_args_0_1:0 The given SavedModel SignatureDef contains the following output(s): outputs['dense_1'] tensor_info: dtype: DT_FLOAT shape: (-1, 1) name: StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict Concrete Functions: Function Name: '__call__' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64) Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64) Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Function Name: '_default_save_signature' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: 
RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64) Function Name: 'call_and_return_all_conditional_losses' Option #1 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64) Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 DType: RaggedTensorSpec Value: RaggedTensorSpec(TensorShape([None, None]), tf.int64, 1, tf.int64) Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None ```
69,929,986
I tried many ways but none worked. I have to convert a string like `assdggg` to `a2sd3g` in Python. If letters are next to each other, we keep only one of them and write before it how many of them were next to each other. Any idea how this can be done?
2021/11/11
[ "https://Stackoverflow.com/questions/69929986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14868201/" ]
I'd suggest `itertools.groupby`, then format as you need: ``` from itertools import groupby # groupby("assdggg") yields (key, group) pairs, roughly: # ('a', ['a']), ('s', ['s', 's']), ('d', ['d']), ('g', ['g', 'g', 'g']) result = "" for k, v in groupby("assdggg"): count = len(list(v)) # v is an iterator, so materialize it to count result += (str(count) if count > 1 else "") + k print(result) # a2sd3g ```
Try using `.groupby()` (the `:=` walrus operator requires Python 3.8+): ``` from itertools import groupby txt = "assdggg" print(''.join(str(l) + k if (l := len(list(g))) != 1 else k for k, g in groupby(txt))) ``` Output: ``` a2sd3g ```
69,929,986
I tried many ways but none worked. I have to convert a string like `assdggg` to `a2sd3g` in Python. If letters are next to each other, we keep only one of them and write before it how many of them were next to each other. Any idea how this can be done?
2021/11/11
[ "https://Stackoverflow.com/questions/69929986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14868201/" ]
I'd suggest `itertools.groupby`, then format as you need: ``` from itertools import groupby # groupby("assdggg") yields (key, group) pairs, roughly: # ('a', ['a']), ('s', ['s', 's']), ('d', ['d']), ('g', ['g', 'g', 'g']) result = "" for k, v in groupby("assdggg"): count = len(list(v)) # v is an iterator, so materialize it to count result += (str(count) if count > 1 else "") + k print(result) # a2sd3g ```
You can try this: ``` string = 'assdggg' compression = '' for char in string: if char not in compression: if string.count(char) != 1: compression += str(string.count(char)) compression += char print(compression) # 'a2sd3g' ``` Note that `string.count(char)` counts *all* occurrences, not just consecutive ones, so this only works when each letter occurs in a single run (e.g. `'aabaa'` would be mis-compressed).
69,929,986
I tried many ways but none worked. I have to convert a string like `assdggg` to `a2sd3g` in Python. If letters are next to each other, we keep only one of them and write before it how many of them were next to each other. Any idea how this can be done?
2021/11/11
[ "https://Stackoverflow.com/questions/69929986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14868201/" ]
Try using `.groupby()` (the `:=` walrus operator requires Python 3.8+): ``` from itertools import groupby txt = "assdggg" print(''.join(str(l) + k if (l := len(list(g))) != 1 else k for k, g in groupby(txt))) ``` Output: ``` a2sd3g ```
You can try this: ``` string = 'assdggg' compression = '' for char in string: if char not in compression: if string.count(char) != 1: compression += str(string.count(char)) compression += char print(compression) # 'a2sd3g' ``` Note that `string.count(char)` counts *all* occurrences, not just consecutive ones, so this only works when each letter occurs in a single run (e.g. `'aabaa'` would be mis-compressed).
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
This might not be the right thing to do, but: ``` $ open -a Google\ Chrome http://localhost:8888 $ open -a Firefox http://localhost:8888 ``` works for me (macOS only) to open any URL in one of the two browsers. Use the `--no-browser` option and write a shell function that does that. Or even have a bookmark in Chrome.
For future reference, this looks like the most elegant way to edit `jupyter_notebook_config.py` for me on macOS: ``` c.NotebookApp.browser = u'open -a "Google Chrome" %s' ``` > > You can obviously replace `"Google Chrome"` with any other browser. > > > Full procedure: 1. `jupyter notebook --generate-config` 2. `open ~/.jupyter/jupyter_notebook_config.py` 3. Find the line `#c.NotebookApp.browser` and edit it as above
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
Based on [this answer](https://stackoverflow.com/a/6042407/1225068) (running Python 2.7.3 and IPython 0.13.1 on Linux), all I had to set in my `ipython_notebook_config.py` was ``` c.NotebookApp.browser = u'/usr/bin/google-chrome %s' ``` I'm guessing that setting `c.NotebookApp.browser` to `/Applications/Browsers/Chrome.app/Contents/MacOS/Google Chrome %s` should work for you.
This might not be the right thing to do, but: ``` $ open -a Google\ Chrome http://localhost:8888 $ open -a Firefox http://localhost:8888 ``` works for me (macOS only) to open any URL in one of the two browsers. Use the `--no-browser` option and write a shell function that does that. Or even have a bookmark in Chrome.
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
For people who want to make Firefox their default for IPython notebooks (where it is not necessarily the system default), adding the following line to `ipython_notebook_config.py` should be sufficient: `c.NotebookApp.browser = 'Firefox'` For me, this was better than linking to the application file directly because it avoids the error: `A copy of Firefox is already open. Only one copy of Firefox can be open at a time.`
For Mac users, the best way is to change the default browser in System Preferences / General, and enjoy your new browser for Jupyter Notebook.
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
Since the great switch to Jupyter, and with recent versions of OS X (e.g., Yosemite), Jupyter/iPython (e.g., 4.0.1), and Chrome (e.g., 47), things have changed a bit. Jupyter/iPython no longer puts the notebook config file in `~/.ipython`; it's now in `~/.jupyter`, and the default file is generated with ``` jupyter notebook --generate-config ``` If you have an existing `ipython_notebook_config.py` you can migrate it with `jupyter migrate` ([H/T](https://stackoverflow.com/questions/31962862/ipython-ipython-notebook-config-py-missing)). After generating or migrating your config file, add the following line to `jupyter_notebook_config.py`: ``` c.NotebookApp.browser = u'/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome %s' ```
For future reference, this looks like the most elegant way to edit `jupyter_notebook_config.py` for me on macOS: ``` c.NotebookApp.browser = u'open -a "Google Chrome" %s' ``` > > You can obviously replace `"Google Chrome"` with any other browser. > > > Full procedure: 1. `jupyter notebook --generate-config` 2. `open ~/.jupyter/jupyter_notebook_config.py` 3. Find the line `#c.NotebookApp.browser` and edit it as above
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
Since the great switch to Jupyter, and with recent versions of OS X (e.g., Yosemite), Jupyter/iPython (e.g., 4.0.1), and Chrome (e.g., 47), things have changed a bit. Jupyter/iPython no longer puts the notebook config file in `~/.ipython`; it's now in `~/.jupyter`, and the default file is generated with ``` jupyter notebook --generate-config ``` If you have an existing `ipython_notebook_config.py` you can migrate it with `jupyter migrate` ([H/T](https://stackoverflow.com/questions/31962862/ipython-ipython-notebook-config-py-missing)). After generating or migrating your config file, add the following line to `jupyter_notebook_config.py`: ``` c.NotebookApp.browser = u'/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome %s' ```
For people who want to make Firefox their default for IPython notebooks (where it is not necessarily the system default), adding the following line to `ipython_notebook_config.py` should be sufficient: `c.NotebookApp.browser = 'Firefox'` For me, this was better than linking to the application file directly because it avoids the error: `A copy of Firefox is already open. Only one copy of Firefox can be open at a time.`
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
For people who want to make Firefox their default for IPython notebooks (where it is not necessarily the system default), adding the following line to `ipython_notebook_config.py` should be sufficient: `c.NotebookApp.browser = 'Firefox'` For me, this was better than linking to the application file directly because it avoids the error: `A copy of Firefox is already open. Only one copy of Firefox can be open at a time.`
If you don't want to open the browser at all, you can launch it with `ipython notebook --no-browser`.
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
On OS X, you can put the following in ipython\_notebook\_config.py to open Chrome: ``` c.NotebookApp.browser = u'/usr/bin/open -a Google\\ Chrome %s' ``` The executable in '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome' fails for me with 'unable to obtain profile lock', so going through 'open' is the only simple alternative I see.
This worked for me on OSX Mavericks: ``` c.NotebookApp.browser = u'/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome %s' ```
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
On OS X, you can put the following in ipython\_notebook\_config.py to open Chrome: ``` c.NotebookApp.browser = u'/usr/bin/open -a Google\\ Chrome %s' ``` The executable in '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome' fails for me with 'unable to obtain profile lock', so going through 'open' is the only simple alternative I see.
For people who want to make Firefox their default for IPython notebooks (where it is not necessarily the system default), adding the following line to `ipython_notebook_config.py` should be sufficient: `c.NotebookApp.browser = 'Firefox'` For me, this was better than linking to the application file directly because it avoids the error: `A copy of Firefox is already open. Only one copy of Firefox can be open at a time.`
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
If you don't want to open the browser at all, you can launch it with `ipython notebook --no-browser`.
For Mac users, the best way is to change the default browser in System Preferences / General, and enjoy your new browser for Jupyter Notebook.
16,704,588
I would like to keep Firefox as my system default browser on my Mac, but launch IPython Notebook in Chrome[1]. [This answer](https://stackoverflow.com/a/15748692/1730674) led me to my `ipython_notebook_config.py` file, but I can't get an instance of Chrome running. After `c = get_config()` and `import webbrowser`, I've tried: 1. `webbrowser.register(u'chrome', None, webbrowser.Chrome())` 2. `webbrowser.register(u'chrome', webbrowser.Chrome)` 3. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app'))` 4. `webbrowser.register(u'chrome', None, webbrowser.GenericBrowser('/Applications/Browsers/Chrome.app/Contents/MacOS/Google\ Chrome'))` All followed by `c.NotebookApp.browser = u'chrome'`. I've fiddled with `webbrowser` in the interpreter, and couldn't figure out how to create an instance of Chrome. --- [1]: PS Why is IPython Notebook *so slow* in Firefox, especially for pylab with the inline backend? It's orders of magnitude faster (for rendering, scrolling, etc.) in Chrome.
2013/05/23
[ "https://Stackoverflow.com/questions/16704588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1730674/" ]
Based on [this answer](https://stackoverflow.com/a/6042407/1225068) (running Python 2.7.3 and IPython 0.13.1 on Linux), all I had to set in my `ipython_notebook_config.py` was ``` c.NotebookApp.browser = u'/usr/bin/google-chrome %s' ``` I'm guessing that setting `c.NotebookApp.browser` to `/Applications/Browsers/Chrome.app/Contents/MacOS/Google Chrome %s` should work for you.
For people who want to make Firefox their default for IPython notebooks (where it is not necessarily the system default), adding the following line to `ipython_notebook_config.py` should be sufficient: `c.NotebookApp.browser = 'Firefox'` For me, this was better than linking to the application file directly because it avoids the error: `A copy of Firefox is already open. Only one copy of Firefox can be open at a time.`
7,022,148
In the Python code below, the message **RSU is not supported on single node machine** is not getting printed. Can anyone help, please? ``` #! /usr/bin/env python import sys class SWMException(Exception): def __init__(self, arg): print "inside exception" Exception.__init__(self, arg) class RSUNotSupported(SWMException): def __init__(self): SWMException.__init__(self, "**RSU is not supported on single node machine**") def isPrepActionNeeded(): if 1==1: raise RSUNotSupported() try: isPrepActionNeeded() except: sys.exit(1) ```
2011/08/11
[ "https://Stackoverflow.com/questions/7022148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/889384/" ]
It is not printed because you're not even trying to print it :) Here: ``` try: isPrepActionNeeded() except RSUNotSupported as e: print str(e) sys.exit(1) ```
Because you catch the exception with your bare `try`/`except` clause, and the handler exits without printing anything.
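To see why nothing appears, here is a minimal sketch (my own illustration, not from the question's code):

```
try:
    raise RuntimeError("some message")
except:   # a bare except catches everything silently
    pass  # nothing is printed unless you print it here
```

Printing the caught exception inside the handler, as shown in the other answer, is the usual fix.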
7,022,148
In the Python code below, the message **RSU is not supported on single node machine** is not getting printed. Can anyone help, please? ``` #! /usr/bin/env python import sys class SWMException(Exception): def __init__(self, arg): print "inside exception" Exception.__init__(self, arg) class RSUNotSupported(SWMException): def __init__(self): SWMException.__init__(self, "**RSU is not supported on single node machine**") def isPrepActionNeeded(): if 1==1: raise RSUNotSupported() try: isPrepActionNeeded() except: sys.exit(1) ```
2011/08/11
[ "https://Stackoverflow.com/questions/7022148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/889384/" ]
It is not printed because you're not even trying to print it :) Here: ``` try: isPrepActionNeeded() except RSUNotSupported as e: print str(e) sys.exit(1) ```
Change the last two lines to: ``` except Exception as e: print e sys.exit(1) ``` I use just `Exception` here to keep this the equivalent of a bare `except:`. You really should use `RSUNotSupported` so you don't hide other types of errors.
68,435,024
My code: ```py import pyttsx3 #sapi5 is default windows voice api engine = pyttsx3.init('sapi5') voices = engine.getProperty('voices') print(voices[1].id) engine.setProperty('voice', voices[0].id) def speak(audio): pass ``` On running the code instead of getting that voice ID printed I am getting this error: ``` Traceback (most recent call last): File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\__init__.py", line 20, in init eng = _activeEngines[driverName] File "C:\Program Files\Python39\lib\weakref.py", line 134, in __getitem__ o = self.data[key]() KeyError: 'sapi5' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Users\subha\Documents\Personal_Space\Code_Samples\Mavis\jarvis.py", line 4, in <module> engine = pyttsx3.init('sapi5') File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\__init__.py", line 22, in init eng = Engine(driverName, debug) File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\engine.py", line 30, in __init__ self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug) File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\driver.py", line 50, in __init__ self._module = importlib.import_module(name) File "C:\Program Files\Python39\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pyttsx3\drivers\sapi5.py", line 10, in <module> import pythoncom File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\pythoncom.py", line 2, in <module> import pywintypes File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\win32\lib\pywintypes.py", line 105, in <module> __import_pywin32_system_module__("pywintypes", globals()) File "C:\Users\subha\AppData\Roaming\Python\Python39\site-packages\win32\lib\pywintypes.py", line 87, in __import_pywin32_system_module__ raise ImportError("No system module '%s' (%s)" % (modname, filename)) ImportError: No system module 'pywintypes' (pywintypes39.dll) ``` On the other side I had a project based on cv2 that was working fine earlier but now that also stopped working. The only change that happened recently is that I upgraded to windows11 yesterday. So is that the problem? How to fix it?
2021/07/19
[ "https://Stackoverflow.com/questions/68435024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15030983/" ]
No, they don't conflict with Windows 11! For me they are still running fine. Try uninstalling Python and pip, including all modules, and then reinstalling them. If that doesn't work, you could try switching to an older version of Python that supports these libraries, or download builds of these libraries optimized for Windows 11. Or, if none of these help, you can consider switching back to Windows 10! Note: Windows 11 is still relatively new, and different users are facing bugs that may be uncommon for others. I am myself facing some uncommon issues, and the methods I prescribed may not work for you!
OK, I have tried the code and it works fine for me, so as far as I can tell there is no problem in your code. It's probably Windows 11, or the installation has a defect/glitch, since Windows 11 is not yet the smoothest and may be causing your code to not run properly. I would also suggest checking whether the relevant permissions are allowed; maybe they are blocking access. I hope these words have been helpful; I have tried my best to understand. Hope it goes well.
16,627,533
I'm very new to Python. How can I convert a unit in Python? I mean without using a conversion function to do this, just as built-in syntax in Python, like how complex numbers work. E.g., when I type `1mm` at the Python command line, I expect the result to be 0.001: ``` >>> 1mm 0.001 #Just like the built-in complex numbers or scientific expressions >>> 1j 1j >>> 1e3 1000 ``` I totally have no idea. Does anybody know how complex numbers or scientific expressions work in Python? Or any idea on how to do it? Thanks.
2013/05/18
[ "https://Stackoverflow.com/questions/16627533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2397416/" ]
How about: ``` mm = 0.001 1*mm ``` Not sure if that is what you are asking for... If you have ever messed with ReportLab, they do similar stuff (although they use it to convert pixels to actual border sizes and whatnot), e.g.: ``` inch = DPI*some_thing margin = 2*inch ```
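For what it's worth, here is a small sketch of that pattern with real ReportLab units (assuming `reportlab` is installed; its `units` module defines conversion constants in points):

```
from reportlab.lib.units import inch, mm  # inch == 72.0 points, mm == inch / 25.4

margin = 2 * inch   # 144.0 points
width = 10 * mm     # about 28.35 points
print(margin, width)
```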
If you are doing scientific work with physical units, it is a good idea to use a units library (not built-in) like [quantities](http://pythonhosted.org/quantities/user/tutorial.html) which also supports scientific packages like numpy. For example: ``` >>> from quantities import meter >>> q = 1 * meter >>> q.units = 'ft' # or 'foot' or 'feet' >>> print q 3.280839895013123 ft ```
16,627,533
I'm very new to Python. How can I convert a unit in Python? I mean without using a conversion function to do this, just as built-in syntax in Python, like how complex numbers work. E.g., when I type `1mm` at the Python command line, I expect the result to be 0.001: ``` >>> 1mm 0.001 #Just like the built-in complex numbers or scientific expressions >>> 1j 1j >>> 1e3 1000 ``` I totally have no idea. Does anybody know how complex numbers or scientific expressions work in Python? Or any idea on how to do it? Thanks.
2013/05/18
[ "https://Stackoverflow.com/questions/16627533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2397416/" ]
How about: ``` mm = 0.001 1*mm ``` Not sure if that is what you are asking for... If you have ever messed with ReportLab, they do similar stuff (although they use it to convert pixels to actual border sizes and whatnot), e.g.: ``` inch = DPI*some_thing margin = 2*inch ```
Python doesn't have built-in units, so you'll need to install a package specifically for that. [Axiompy](https://github.com/ArztKlein/Axiompy) is a package that can do this. Install it with `pip install axiompy`. If you want to convert from millimetres to metres, like in the question, you'd use: ```py from axiompy import Units units = Units() print(units.unit_convert(3 * units.millimetre, units.metre)) ```
43,897,628
I updated to pandas 0.20.1 recently and I tried to use the new feature of to\_json(orient='table') ``` import pandas as pd pd.__version__ # '0.20.1' a = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6]}) a.to_json('a.json', orient='table') ``` But how can I read this JSON file to DataFrame? I tried `pd.read_json('a.json', orient='table')` but it raised `ValueError` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-22-7527b25107ef> in <module>() ----> 1 pd.read_json('a.json', orient='table') C:\Anaconda3\lib\site-packages\pandas\io\json\json.py in read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates, keep_default_dates, numpy, precise_float, date_unit, encoding, lines) 352 obj = FrameParser(json, orient, dtype, convert_axes, convert_dates, 353 keep_default_dates, numpy, precise_float, --> 354 date_unit).parse() 355 356 if typ == 'series' or obj is None: C:\Anaconda3\lib\site-packages\pandas\io\json\json.py in parse(self) 420 421 else: --> 422 self._parse_no_numpy() 423 424 if self.obj is None: C:\Anaconda3\lib\site-packages\pandas\io\json\json.py in _parse_no_numpy(self) 650 else: 651 self.obj = DataFrame( --> 652 loads(json, precise_float=self.precise_float), dtype=None) 653 654 def _process_converter(self, f, filt=None): C:\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy) 273 dtype=dtype, copy=copy) 274 elif isinstance(data, dict): --> 275 mgr = self._init_dict(data, index, columns, dtype=dtype) 276 elif isinstance(data, ma.MaskedArray): 277 import numpy.ma.mrecords as mrecords C:\Anaconda3\lib\site-packages\pandas\core\frame.py in _init_dict(self, data, index, columns, dtype) 409 arrays = [data[k] for k in keys] 410 --> 411 return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) 412 413 def _init_ndarray(self, values, index, columns, dtype=None, copy=False): C:\Anaconda3\lib\site-packages\pandas\core\frame.py in _arrays_to_mgr(arrays, arr_names, index, columns, dtype) 5592 # figure out the index, if necessary 5593 if index is None: -> 5594 index = extract_index(arrays) 5595 else: 5596 index = _ensure_index(index) C:\Anaconda3\lib\site-packages\pandas\core\frame.py in extract_index(data) 5643 5644 if have_dicts: -> 5645 raise ValueError('Mixing dicts with non-Series may lead to ' 5646 'ambiguous ordering.') 5647 ValueError: Mixing dicts with non-Series may lead to ambiguous ordering. ``` So is there a way i can read that JSON file? Thanks in advance. PS: the JSON file looks like this: ``` {"schema": {"pandas_version":"0.20.0","fields":[{"type":"integer","name":"index"},{"type":"integer","name":"a"},{"type":"integer","name":"b"}],"primaryKey":["index"]}, "data": [{"index":0,"a":1,"b":4},{"index":1,"a":2,"b":5},{"index":2,"a":3,"b":6}]} ```
2017/05/10
[ "https://Stackoverflow.com/questions/43897628", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4956987/" ]
Apparently the new method outputs some metadata with the dataset into the JSON, such as the pandas version. Hence, consider using the built-in `json` module to read in this nested object and extract the value at the *data* key: ``` import json ... with open('a.json', 'r') as f: json_obj = json.loads(f.read()) df = pd.DataFrame(json_obj['data']).set_index('index') df.index.name = None print(df) # a b # 0 1 4 # 1 2 5 # 2 3 6 ``` --- Should you intend to use *type* and *name*, run dictionary and list comprehensions on those parts of the nested JSON. Though here, *integer* has to be sliced to *int*. The *dtype* argument cannot be used since names are not saved until after this step: ``` with open('a.json', 'r') as f: json_obj = json.loads(f.read()) df = pd.DataFrame(json_obj['data'], columns=[t['name'] for t in json_obj['schema']['fields']]) df = df.astype(dtype={t['name']: t['type'][:3] for t in json_obj['schema']['fields']}).set_index('index') df.index.name = None print(df) # a b # 0 1 4 # 1 2 5 # 2 3 6 ```
Here is a function I have developed from Parfait's answer: ``` def table_to_df(table): df = pd.DataFrame(table['data'], columns=[t['name'] for t in table['schema']['fields']]) for t in table['schema']['fields']: if t['type'] == "datetime": df[t['name']] = pd.to_datetime(df[t['name']], infer_datetime_format=True) df.set_index(table['schema']['primaryKey'], inplace=True) return df ```
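A minimal usage sketch (my own addition, assuming `a.json` was written with `orient='table'` as in the question):

```
import json
import pandas as pd

with open('a.json') as f:
    table = json.load(f)  # the dict with 'schema' and 'data' keys

df = table_to_df(table)
print(df)
```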
66,129,496
I am using Featuretools library to try to generate custom features involving customer transactions. I tested the function and it returns the answer so I am not sure why I am getting this error. I tried using the following link: <https://featuretools.alteryx.com/en/stable/getting_started/primitives.html> Thank you! ``` from featuretools.primitives import make_agg_primitive from featuretools.variable_types import DatetimeTimeIndex, Numeric, Categorical def test_fun(categorical, datetimeindex): x = pd.DataFrame({'store_name': categorical, 'session_start_time': datetimeindex}) x_mode = list(x['store_name'].mode())[0] x = x[x['store_name'] == x_mode] y = x.session_start_time.diff().fillna(pd.Timedelta(seconds=0))/np.timedelta64(1, 's') return y.median() Test_Fun = make_agg_primitive(function = test_fun, input_types = [Categorical, DatetimeTimeIndex], return_type = [Numeric]) fm, fd = ft.dfs( entityset = es, target_entity = 'customers', agg_primitives = [Test_Fun], cutoff_time = lt, cutoff_time_in_index = True, include_cutoff_time = False, verbose = True, ) ``` Results in the following error ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-492-358f980bb6b0> in <module> 20 return_type = [Numeric]) 21 ---> 22 fm, fd = ft.dfs( 23 entityset = es, 24 target_entity = 'customers', ~\Anaconda3\lib\site-packages\featuretools\utils\entry_point.py in function_wrapper(*args, **kwargs) 38 ep.on_error(error=e, 39 runtime=runtime) ---> 40 raise e 41 42 # send return value ~\Anaconda3\lib\site-packages\featuretools\utils\entry_point.py in function_wrapper(*args, **kwargs) 30 # call function 31 start = time.time() ---> 32 return_value = func(*args, **kwargs) 33 runtime = time.time() - start 34 except Exception as e: ~\Anaconda3\lib\site-packages\featuretools\synthesis\dfs.py in dfs(entities, relationships, entityset, target_entity, cutoff_time, instance_ids, agg_primitives, trans_primitives, groupby_trans_primitives, allowed_paths, max_depth, ignore_entities, ignore_variables, primitive_options, seed_features, drop_contains, drop_exact, where_primitives, max_features, cutoff_time_in_index, save_progress, features_only, training_window, approximate, chunk_size, n_jobs, dask_kwargs, verbose, return_variable_types, progress_callback, include_cutoff_time) 259 seed_features=seed_features) 260 --> 261 features = dfs_object.build_features( 262 verbose=verbose, return_variable_types=return_variable_types) 263 ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in build_features(self, return_variable_types, verbose) 287 assert isinstance(return_variable_types, list), msg 288 --> 289 self._run_dfs(self.es[self.target_entity_id], RelationshipPath([]), 290 all_features, max_depth=self.max_depth) 291 ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in _run_dfs(self, entity, relationship_path, all_features, max_depth) 412 """ 413 --> 414 self._build_transform_features(all_features, entity, max_depth=max_depth) 415 416 """ ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in _build_transform_features(self, all_features, entity, max_depth, require_direct_input) 576 input_types = input_types[0] 577 --> 578 matching_inputs = self._get_matching_inputs(all_features, 579 entity, 580 new_max_depth, ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in _get_matching_inputs(self, all_features, entity, max_depth, input_types, primitive, 
primitive_options, require_direct_input, feature_filter) 793 primitive, primitive_options, require_direct_input=False, 794 feature_filter=None): --> 795 features = self._features_by_type(all_features=all_features, 796 entity=entity, 797 max_depth=max_depth, ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in _features_by_type(self, all_features, entity, max_depth, variable_type) 768 if (variable_type == variable_types.PandasTypes._all or 769 f.variable_type == variable_type or --> 770 any(issubclass(f.variable_type, vt) for vt in variable_type)): 771 if max_depth is None or f.get_depth(stop_at=self.seed_features) <= max_depth: 772 selected_features.append(f) ~\Anaconda3\lib\site-packages\featuretools\synthesis\deep_feature_synthesis.py in <genexpr>(.0) 768 if (variable_type == variable_types.PandasTypes._all or 769 f.variable_type == variable_type or --> 770 any(issubclass(f.variable_type, vt) for vt in variable_type)): 771 if max_depth is None or f.get_depth(stop_at=self.seed_features) <= max_depth: 772 selected_features.append(f) TypeError: issubclass() arg 1 must be a class ```
2021/02/10
[ "https://Stackoverflow.com/questions/66129496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15179950/" ]
The default CSS should be overridden by your CSS, so you need to use `!important` in your CSS. Here is the CSS: ``` <style> .toasting { color: yellow !important; background-color: pink !important; } </style> ``` Working [demo](https://codesandbox.io/s/vue-toasted-example-forked-hd4n3?file=/App.vue)
If someone is facing the same issue: the solution above works, but keep in mind it only works **without** style scoping!
15,777,992
First, note that I understand that `==` is used for comparing two expressions, while `=` is used for assigning a value to a variable. However, Python is such a clean language with minimal syntax requirements that this seems like an easy operator to axe. Also, I am not trying to start a debate or discussion, but rather to learn if there is something that I'm missing to improve my knowledge of programming. Just as (in Python) we don't need to declare a variable to be an int or a string, and the language determines this based on the value assigned, why doesn't the `if` statement simply determine that the `=` is a comparison, not an assignment? Another example is that Python got rid of many of the {} and [] in other languages and simply used the indentation, because indenting and using braces is redundant. It seems to me that `if foo == goo:` is also redundant. But perhaps there's something I'm not yet aware of. Hence the question!
2013/04/03
[ "https://Stackoverflow.com/questions/15777992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2218093/" ]
One very simple reason is that Python allows boolean expressions: ``` a = b == c ``` and also multiple assignment: ``` a = b = c ``` In the first case, `a` gets assigned a boolean value\* (`True` or `False`) depending on whether `b` and `c` are equal. In the second case, `a` and `b` end up referencing the same object (`c`). Clearly you can't support both with only a single operator. I suppose that you could (in principle) overload `=` only within `if` statements (since assignment isn't allowed there), but that would get confusing -- especially for people coming from `C`, where [an assignment *is* allowed in an `if` statement.](https://stackoverflow.com/q/151850/748858) The zen wins again ("Explicit is better than implicit"). --- * It doesn't actually have to be a boolean value. It is actually whatever is returned by `b`'s `__eq__` method (or `c`'s `__eq__` if the former returns `NotImplemented`) -- *most* objects return a boolean, but a few don't (`numpy.ndarray` is one common object whose `__eq__` returns another `ndarray`, for instance).
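As a quick illustration of that footnote (my own sketch, assuming numpy is installed):

```
import numpy as np

a = np.array([1, 2, 3]) == np.array([1, 0, 3])
print(a)  # [ True False  True] -- an ndarray, not a single bool
```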
The two operators can overlap. For instance, consider

```
a = b = c
```

which sets `a` and `b` both to `c`, and

```
a = b == c
```

which sets `a` to either `True` or `False` based on whether `b` and `c` are equal.

---

More generally, Python attempts to avoid syntax that is even possibly ambiguous to allow the parser to be simpler. Even if the ambiguity above could be resolved, it would involve adding a number of special cases and generally adding complexity to the parser. Keeping the two operators separate neatly avoids the issue.
72,089,771
I have a working python package that's a CLI tool, and I wanted to convert it into a single `.exe` file to upload it to other package managers, so I used Pyinstaller. After building the `.exe` file with this command:

```
pyinstaller -c --log-level=DEBUG main.py 2> build.txt --onefile --exclude-module=pytest --add-data "src;src"
```

I double-clicked the .exe file, but it closed immediately; in that split second I saw the expected output, which is supposed to be the command-line interface, so the .exe does work, but not entirely.

**main.py**

```
from src.Categorize_CLI.__main__ import main

if __name__ == "__main__":
    main()
```

**.spec file**

```
# -*- mode: python ; coding: utf-8 -*-

block_cipher = None

a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[('src', 'src')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=['pytest'],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.zipfiles,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
```

**Update**

I got it working by dragging the main.exe file onto an open command prompt and then pressing enter; it ran, and then I got an error:

```
RuntimeError: 'package' is not installed. Try passing 'package_name' instead.
[15592] Failed to execute script 'main' due to unhandled exception!
```
2022/05/02
[ "https://Stackoverflow.com/questions/72089771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15747757/" ]
It means that the parameter is optional and has a dynamic default value. Usually, optional parameters have default values that are static, like this: ``` foo (string $bar = null): bool ``` Or this: ``` foo (string $bar = 0): bool ``` But in some cases, the default value changes depending on environment. These are shown by a question mark: ``` assert(mixed $assertion, string $description = ?): bool ``` And then the description of the parameter will tell you more details about what the exact value is: > > From PHP 7, if no description is provided, a default description equal to the source code for the invocation of assert() is provided. > > >
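For comparison, a rough Python analogue of such a dynamic default; the function and its default message are hypothetical, purely to illustrate the idea:

```
def check(assertion, description=None):
    # None is a sentinel meaning "no description given"; the real
    # default is computed at call time rather than declared statically
    if description is None:
        description = "assertion failed"  # stand-in for a source-derived message
    if not assertion:
        raise AssertionError(description)

check(1 + 1 == 2)   # passes; the dynamic default is never used
# check(0 == 1)     # would raise AssertionError("assertion failed")
```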
The `$description` argument is optional, but its default value is not a constant. The manual explains: > > From PHP 7, if no description is provided, a default description equal to the source code for the invocation of `assert()` is provided. > > > This can't be easily expressed in the syntax summary, so they used `?` as a placeholder.
72,089,771
I have a working python package that's a CLI tool, and I wanted to convert it into a single `.exe` file to upload it to other package managers, so I used Pyinstaller. After building the `.exe` file with this command:

```
pyinstaller -c --log-level=DEBUG main.py 2> build.txt --onefile --exclude-module=pytest --add-data "src;src"
```

I double-clicked the .exe file, but it closed immediately; in that split second I saw the expected output, which is supposed to be the command-line interface, so the .exe does work, but not entirely.

**main.py**

```
from src.Categorize_CLI.__main__ import main

if __name__ == "__main__":
    main()
```

**.spec file**

```
# -*- mode: python ; coding: utf-8 -*-

block_cipher = None

a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[('src', 'src')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=['pytest'],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.zipfiles,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
```

**Update**

I got it working by dragging the main.exe file onto an open command prompt and then pressing enter; it ran, and then I got an error:

```
RuntimeError: 'package' is not installed. Try passing 'package_name' instead.
[15592] Failed to execute script 'main' due to unhandled exception!
```
2022/05/02
[ "https://Stackoverflow.com/questions/72089771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15747757/" ]
The `$description` argument is optional, but its default value is not a constant. The manual explains: > > From PHP 7, if no description is provided, a default description equal to the source code for the invocation of `assert()` is provided. > > > This can't be easily expressed in the syntax summary, so they used `?` as a placeholder.
According to the documentation <https://www.php.net/manual/en/function.assert.php>

> assert() is a language construct in PHP 7

So it's not a function syntax, and you can't use such a syntax in your own functions. The other answers are right about the meaning of the question mark in the description of assert(), but the essence of the answer is **"assert() is not a function"**, so my question was wrong in itself, as it referred to assert() as a function. Thanks for the hints!
72,089,771
I have a working python package that's a CLI tool, and I wanted to convert it into a single `.exe` file to upload it to other package managers, so I used Pyinstaller. After building the `.exe` file with this command:

```
pyinstaller -c --log-level=DEBUG main.py 2> build.txt --onefile --exclude-module=pytest --add-data "src;src"
```

I double-clicked the .exe file, but it closed immediately; in that split second I saw the expected output, which is supposed to be the command-line interface, so the .exe does work, but not entirely.

**main.py**

```
from src.Categorize_CLI.__main__ import main

if __name__ == "__main__":
    main()
```

**.spec file**

```
# -*- mode: python ; coding: utf-8 -*-

block_cipher = None

a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[('src', 'src')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=['pytest'],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.zipfiles,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
```

**Update**

I got it working by dragging the main.exe file onto an open command prompt and then pressing enter; it ran, and then I got an error:

```
RuntimeError: 'package' is not installed. Try passing 'package_name' instead.
[15592] Failed to execute script 'main' due to unhandled exception!
```
2022/05/02
[ "https://Stackoverflow.com/questions/72089771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15747757/" ]
It means that the parameter is optional and has a dynamic default value. Usually, optional parameters have default values that are static, like this: ``` foo (string $bar = null): bool ``` Or this: ``` foo (string $bar = 0): bool ``` But in some cases, the default value changes depending on environment. These are shown by a question mark: ``` assert(mixed $assertion, string $description = ?): bool ``` And then the description of the parameter will tell you more details about what the exact value is: > > From PHP 7, if no description is provided, a default description equal to the source code for the invocation of assert() is provided. > > >
According to the documentation <https://www.php.net/manual/en/function.assert.php>

> assert() is a language construct in PHP 7

So it's not a function syntax, and you can't use such a syntax in your own functions. The other answers are right about the meaning of the question mark in the description of assert(), but the essence of the answer is **"assert() is not a function"**, so my question was wrong in itself, as it referred to assert() as a function. Thanks for the hints!
41,982,238
Is there a way to add a header row to a CSV without loading the CSV into memory in python? I have an 18GB CSV I want to add a header to, and all the methods I've seen require loading the CSV into memory, which is obviously unfeasible.
2017/02/01
[ "https://Stackoverflow.com/questions/41982238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6637269/" ]
You will need to rewrite the whole file. The simplest approach is not to use Python:

```
echo 'col1, col2, col3,...' > out.csv
cat in.csv >> out.csv
```

Python-based solutions work at a much higher level and will be a lot slower; 18 GB is a lot of data, after all. It is better to work with operating system functionality, which will be the fastest.
Here is a comparison of the three suggested solutions for a ~200 MB CSV file with 10^6 rows and 10 columns (n=50). The ratio stays approximately the same for larger and smaller files (10 MB to 8 GB).

> cp:shutil:csv\_reader 1:10:55

i.e. the shell-based approach (labelled `cp` above, implemented with `echo`/`cat`) is approximately 55 times faster than using Python's `csv` module.

Computer:

* regular HDD
* Python 3.5.2 64-bit
* Ubuntu 16.04
* i7-3770

[![enter image description here](https://i.stack.imgur.com/9Kfil.png)](https://i.stack.imgur.com/9Kfil.png)

---

```
import csv
import random
import shutil
import time
import subprocess

rows = 10**6
cols = 10
repeats = 50

shell_script = '/tmp/csv.sh'
input_csv = '/tmp/temp.csv'
output_csv = '/tmp/huge_output.csv'
col_titles = ['titles_' + str(i) for i in range(cols)]

with open(shell_script, 'w') as f:
    f.write("#!/bin/bash\necho '{0}' > {1}\ncat {2} >> {1}".format(','.join(col_titles), output_csv, input_csv))

subprocess.call(['chmod', '+x', shell_script])

run_times = dict([
    ('csv_writer', list()),
    ('external', list()),
    ('shutil', list())
])

def random_csv():
    with open(input_csv, 'w') as csvfile:
        csv_writer = csv.writer(csvfile, delimiter=',')
        for i in range(rows):
            csv_writer.writerow([str(random.random()) for i in range(cols)])
    with open(output_csv, 'w'):
        pass

for r in range(repeats):
    random_csv()
    #http://stackoverflow.com/a/41982368/2776376
    start_time = time.time()
    with open(input_csv) as fr, open(output_csv, "w", newline='') as fw:
        cr = csv.reader(fr)
        cw = csv.writer(fw)
        cw.writerow(col_titles)
        cw.writerows(cr)
    run_times['csv_writer'].append(time.time() - start_time)

    random_csv()
    #http://stackoverflow.com/a/41982383/2776376
    start_time = time.time()
    subprocess.call(['bash', shell_script])
    run_times['external'].append(time.time() - start_time)

    random_csv()
    #http://stackoverflow.com/a/41982383/2776376
    start_time = time.time()
    with open('header.txt', 'w') as header_file:
        header_file.write(','.join(col_titles))
    with open(output_csv, 'w') as new_file:
        with open('header.txt', 'r') as header_file, open(input_csv, 'r') as main_file:
            shutil.copyfileobj(header_file, new_file)
            shutil.copyfileobj(main_file, new_file)
    run_times['shutil'].append(time.time() - start_time)

    print('#'*20)
    for key in run_times:
        print('{0}: {1:.2f} seconds'.format(key, run_times[key][-1]))

print('#'*20)
print('Averages')
for key in run_times:
    print('{0}: {1:.2f} seconds'.format(key, sum(run_times[key])/len(run_times[key])))
```

---

If you really want to do it in Python, you could create the header file first and then merge it with your 2nd file via [`shutil.copyfileobj`](https://docs.python.org/3/library/shutil.html).

```
import shutil

with open('header.txt', 'w') as header_file:
    header_file.write('col1;col2;col3')

with open('new_file.csv', 'w') as new_file:
    with open('header.txt', 'r') as header_file, open('main.csv', 'r') as main_file:
        shutil.copyfileobj(header_file, new_file)
        shutil.copyfileobj(main_file, new_file)
```
41,982,238
Is there a way to add a header row to a CSV without loading the CSV into memory in python? I have an 18GB CSV I want to add a header to, and all the methods I've seen require loading the CSV into memory, which is obviously unfeasible.
2017/02/01
[ "https://Stackoverflow.com/questions/41982238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6637269/" ]
Just use the fact that the `csv` module iterates over the rows, so it never loads the whole file in memory:

```
import csv
with open("huge_csv.csv") as fr, open("huge_output.csv","w",newline='') as fw:
    cr = csv.reader(fr)
    cw = csv.writer(fw)
    cw.writerow(["title1","title2","title3"])
    cw.writerows(cr)
```

Using `writerows` ensures very good speed. Memory is spared here; everything is done line-by-line. Since the data is properly parsed and re-written, you could even change the separator and/or the quoting in the output file, as sketched below.
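For instance, a small sketch of that last point, switching to a semicolon separator and full quoting while copying (the filenames are the ones from above; the delimiter and quoting choices are arbitrary):

```
import csv

with open("huge_csv.csv") as fr, open("huge_output.csv", "w", newline='') as fw:
    cr = csv.reader(fr)
    # write semicolon-separated, fully quoted output instead of the defaults
    cw = csv.writer(fw, delimiter=';', quoting=csv.QUOTE_ALL)
    cw.writerow(["title1", "title2", "title3"])
    cw.writerows(cr)
```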
Here is a comparison of the three suggested solutions for a ~200 MB CSV file with 10^6 rows and 10 columns (n=50). The ratio stays approximately the same for larger and smaller files (10 MB to 8 GB).

> cp:shutil:csv\_reader 1:10:55

i.e. the shell-based approach (labelled `cp` above, implemented with `echo`/`cat`) is approximately 55 times faster than using Python's `csv` module.

Computer:

* regular HDD
* Python 3.5.2 64-bit
* Ubuntu 16.04
* i7-3770

[![enter image description here](https://i.stack.imgur.com/9Kfil.png)](https://i.stack.imgur.com/9Kfil.png)

---

```
import csv
import random
import shutil
import time
import subprocess

rows = 10**6
cols = 10
repeats = 50

shell_script = '/tmp/csv.sh'
input_csv = '/tmp/temp.csv'
output_csv = '/tmp/huge_output.csv'
col_titles = ['titles_' + str(i) for i in range(cols)]

with open(shell_script, 'w') as f:
    f.write("#!/bin/bash\necho '{0}' > {1}\ncat {2} >> {1}".format(','.join(col_titles), output_csv, input_csv))

subprocess.call(['chmod', '+x', shell_script])

run_times = dict([
    ('csv_writer', list()),
    ('external', list()),
    ('shutil', list())
])

def random_csv():
    with open(input_csv, 'w') as csvfile:
        csv_writer = csv.writer(csvfile, delimiter=',')
        for i in range(rows):
            csv_writer.writerow([str(random.random()) for i in range(cols)])
    with open(output_csv, 'w'):
        pass

for r in range(repeats):
    random_csv()
    #http://stackoverflow.com/a/41982368/2776376
    start_time = time.time()
    with open(input_csv) as fr, open(output_csv, "w", newline='') as fw:
        cr = csv.reader(fr)
        cw = csv.writer(fw)
        cw.writerow(col_titles)
        cw.writerows(cr)
    run_times['csv_writer'].append(time.time() - start_time)

    random_csv()
    #http://stackoverflow.com/a/41982383/2776376
    start_time = time.time()
    subprocess.call(['bash', shell_script])
    run_times['external'].append(time.time() - start_time)

    random_csv()
    #http://stackoverflow.com/a/41982383/2776376
    start_time = time.time()
    with open('header.txt', 'w') as header_file:
        header_file.write(','.join(col_titles))
    with open(output_csv, 'w') as new_file:
        with open('header.txt', 'r') as header_file, open(input_csv, 'r') as main_file:
            shutil.copyfileobj(header_file, new_file)
            shutil.copyfileobj(main_file, new_file)
    run_times['shutil'].append(time.time() - start_time)

    print('#'*20)
    for key in run_times:
        print('{0}: {1:.2f} seconds'.format(key, run_times[key][-1]))

print('#'*20)
print('Averages')
for key in run_times:
    print('{0}: {1:.2f} seconds'.format(key, sum(run_times[key])/len(run_times[key])))
```

---

If you really want to do it in Python, you could create the header file first and then merge it with your 2nd file via [`shutil.copyfileobj`](https://docs.python.org/3/library/shutil.html).

```
import shutil

with open('header.txt', 'w') as header_file:
    header_file.write('col1;col2;col3')

with open('new_file.csv', 'w') as new_file:
    with open('header.txt', 'r') as header_file, open('main.csv', 'r') as main_file:
        shutil.copyfileobj(header_file, new_file)
        shutil.copyfileobj(main_file, new_file)
```
54,195,111
I have a JSON data set that looks like this: ``` {"sequence":109428985,"bids":[["0.1243","53",5],["0.12429","24",2],["0.12428","6",1],["0.12427","6",2],["0.12426","6",1],["0.12425","6",1],["0.12424","6",1],["0.12423","6",1],["0.12422","6",1],["0.12421","6",1],["0.124206","6496",2],["0.124205","36032",1],["0.124201","20",1],["0.1242","191",2],["0.124193","400",1],["0.12419","6",1],["0.124189","1214",1],["0.12418","6",1],["0.12417","6",1],["0.12416","6",1],["0.12415","6",1],["0.12414","6",1],["0.12413","6",1],["0.12412","6",1],["0.12411","6",1],["0.1241","6",1],["0.12409","6",1],["0.12408","6",1],["0.12407","6",1],["0.12406","6",1],["0.12405","6",1],["0.124044","14",1],["0.12404","6",1],["0.12403","6",1],["0.12402","6",1],["0.12401","6",1],["0.124","64576",5],["0.12399","6",1],["0.12398","6",1],["0.12397","6",1],["0.12396","6",1],["0.12395","6",1],["0.12394","6",1],["0.12393","6",1],["0.12392","6",1],["0.12391","6",1],["0.1239","6",1],["0.12389","6",1],["0.12388","6",1],["0.12387","6",1]],"asks":[["0.124304","20",1],["0.12434","6",1],["0.12435","6",1],["0.12436","6",1],["0.12437","6",2],["0.12438","6",2],["0.12439","7",2],["0.124453","20",1],["0.124481","1",1],["0.124559","535",1],["0.12456","1210",1],["0.124566","10058",1],["0.124601","7480",1],["0.12462","19",2],["0.124621","1",1],["0.12463","6",1],["0.12464","6",1],["0.12465","6",1],["0.12466","6",1],["0.124668","5306",1],["0.124669","1",1],["0.12467","6",1],["0.124671","20",1],["0.124674","691",1],["0.12468","6",1],["0.124683","20",1],["0.12469","6",1],["0.124697","20",1],["0.1247","6",1],["0.12471","6",1],["0.12472","6",1],["0.12473","6",1],["0.12474","6",1],["0.12475","6",1],["0.12476","6",1],["0.12477","7",2],["0.124779","20",1],["0.12478","7",2],["0.124784","12",1],["0.12479","6",1],["0.124796","1",1],["0.1248","6",1],["0.12481","6",1],["0.12483","6",1],["0.12484","6",1],["0.12485","6",1],["0.124855","1",1],["0.12487","6",1],["0.124889","16500",1],["0.12489","6",1]]} ``` I am trying to pull the data from the first two elements in the lists, and change them so I can put them in as numbers into a database. 
For example, this: ``` ["0.1243","53",5],["0.12429","24",2],["0.12428","6",1] ``` Should change to this: ``` 0.1243 53 0.12429 24 0.12428 6 ``` Here is my current code: ``` import requests import json import re url = 'https://api.pro.coinbase.com/products/BAT-USDC/book?level=2' trade_data = requests.get(url).json() bid_data = trade_data['bids'] print(bid_data) ``` This is what is returned: ``` [[u'0.124456', u'1158', 3], [u'0.12445', u'6', 1], [u'0.12442', u'6', 1], [u'0.12441', u'6', 1], [u'0.1244', u'6', 1], [u'0.12439', u'6', 1], [u'0.12438', u'6', 1], [u'0.12437', u'6', 2], [u'0.12436', u'6', 2], [u'0.12435', u'6', 1], [u'0.12434', u'6', 1], [u'0.12433', u'6', 1], [u'0.124315', u'6991', 1], [u'0.124314', u'1212', 1], [u'0.1243', u'6', 3], [u'0.12429', u'6', 2], [u'0.12428', u'6', 1], [u'0.12427', u'6', 2], [u'0.124261', u'2419', 1], [u'0.12426', u'6', 1], [u'0.124251', u'20', 1], [u'0.12425', u'6', 1], [u'0.12424', u'6', 1], [u'0.12423', u'6', 1], [u'0.12422', u'6', 1], [u'0.12421', u'6', 1], [u'0.124206', u'20', 1], [u'0.124205', u'36032', 1], [u'0.124201', u'20', 1], [u'0.1242', u'391', 3], [u'0.124193', u'400', 1], [u'0.12419', u'6', 1], [u'0.12418', u'6', 1], [u'0.12417', u'6', 1], [u'0.12416', u'6', 1], [u'0.12415', u'6', 1], [u'0.12414', u'6', 1], [u'0.12413', u'6', 1], [u'0.12412', u'6', 1], [u'0.12411', u'6', 1], [u'0.1241', u'6', 1], [u'0.12409', u'6', 1], [u'0.12408', u'6', 1], [u'0.12407', u'6', 1], [u'0.12406', u'6', 1], [u'0.12405', u'6', 1], [u'0.124044', u'14', 1], [u'0.12404', u'6', 1], [u'0.12403', u'6', 1], [u'0.12402', u'6', 1]] ``` I do not know where the u's came from, and I do not know how to isolate the 2 numbers of importance to my calculations from this "list in list" format. I am a programming noob, and I apologize for that. Any help is greatly appreciated, and thanks in advance! Closest previous question I have found: [how to extract string inside single quotes using python script](https://stackoverflow.com/questions/19449709/how-to-extract-string-inside-single-quotes-using-python-script)
2019/01/15
[ "https://Stackoverflow.com/questions/54195111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10915583/" ]
What I've done: I've recalculated the `viewBox` of your svg element and then calculated the center of your svg. I've added a blue circle centered on the svg element's center. To get the size of your svg, I first deleted the transform and used the `getBBox()` method, then used the properties of the bounding box as the new `viewBox` value.

```css
svg{width:90vh;}
```

```html
<svg viewBox="64 32 1506 1506">
<circle cx="817" cy="785" r="680" fill="skyblue" />
<g fill="red" stroke="none">
<path d="M715 1534 c-191 -31 -370 -134 -489 -280 -55 -67 -118 -192 -143 -284 -27 -101 -25 -293 5 -390 43 -136 97 -226 197 -326 80 -80 104 -98 195 -142 135 -66 235 -87 375 -79 510 28 846 542 664 1018 -87 228 -278 400 -514 463 -61 17 -236 28 -290 20z m285 -191 c176 -58 318 -202 372 -378 31 -97 31 -243 0 -340 -75 -243 -304 -415 -552 -415 -258 0 -486 168 -562 415 -31 97 -31 243 0 340 38 124 120 235 230 309 152 104 333 128 512 69z"/>
<path d="M746 1155 c-48 -17 -94 -59 -113 -105 -11 -24 -17 -86 -21 -199 l-5 -164 -52 58 c-50 56 -52 58 -58 34 -4 -13 -7 -51 -7 -84 l1 -60 87 -93 87 -93 97 88 98 88 0 84 0 84 -22 -19 c-13 -10 -40 -34 -61 -54 l-39 -35 4 152 c3 144 4 153 27 177 46 49 143 42 179 -12 15 -23 17 -54 13 -294 l-3 -268 65 0 65 0 7 284 c7 281 7 285 -15 330 -24 51 -58 79 -121 101 -52 19 -161 19 -213 0z"/>
</g>
</svg>
```

**Observation:** In my code the svg element has a width of 90vh. You may set it to whatever you need.
Just to expand on enxanetas answer to orient the arrow correctly, here is the code, with all transforms reduced and your viewbox/sizes intact (plus I tidied up the circles): ```html <?xml version="1.0" encoding="utf-8"?> <svg version="1.0" width="160pt" height="157pt" viewBox="0 0 160 157" preserveAspectRatio="xMidYMid meet" xmlns="http://www.w3.org/2000/svg"> <circle fill="skyblue" cx="80" cy="78.5" r="58.2"/> <path fill="red" stroke="none" d="M 73 41.4 C 68.2 43.1 63.6 47.3 61.7 51.9 C 60.6 54.3 60 60.5 59.6 71.8 L 59.1 88.2 L 53.9 82.4 C 48.9 76.8 48.7 76.6 48.1 79 C 47.7 80.3 47.4 84.1 47.4 87.4 L 47.5 93.4 L 56.2 102.7 L 64.9 112 L 74.6 103.2 L 84.4 94.4 L 84.4 86 L 84.4 77.6 L 82.2 79.5 C 80.9 80.5 78.2 82.9 76.1 84.9 L 72.2 88.4 L 72.6 73.2 C 72.9 58.8 73 57.9 75.3 55.5 C 79.9 50.6 89.6 51.3 93.2 56.7 C 94.7 59 94.9 62.1 94.5 86.1 L 94.2 112.9 L 100.7 112.9 L 107.2 112.9 L 107.9 84.5 C 108.6 56.4 108.6 56 106.4 51.5 C 104 46.4 100.6 43.6 94.3 41.4 C 89.1 39.5 78.2 39.5 73 41.4 Z M 138.2 78.5 C 138.2 46.4 112.1 20.3 80 20.3 C 47.9 20.3 21.8 46.4 21.8 78.5 C 21.8 110.6 47.9 136.7 80 136.7 C 112.1 136.7 138.2 110.6 138.2 78.5 Z M 155.5 78.5 C 155.5 120.2 121.7 154 80 154 C 38.3 154 4.5 120.2 4.5 78.5 C 4.5 36.8 38.3 3 80 3 C 121.7 3 155.5 36.8 155.5 78.5 Z" /> </svg> ```
54,195,111
I have a JSON data set that looks like this: ``` {"sequence":109428985,"bids":[["0.1243","53",5],["0.12429","24",2],["0.12428","6",1],["0.12427","6",2],["0.12426","6",1],["0.12425","6",1],["0.12424","6",1],["0.12423","6",1],["0.12422","6",1],["0.12421","6",1],["0.124206","6496",2],["0.124205","36032",1],["0.124201","20",1],["0.1242","191",2],["0.124193","400",1],["0.12419","6",1],["0.124189","1214",1],["0.12418","6",1],["0.12417","6",1],["0.12416","6",1],["0.12415","6",1],["0.12414","6",1],["0.12413","6",1],["0.12412","6",1],["0.12411","6",1],["0.1241","6",1],["0.12409","6",1],["0.12408","6",1],["0.12407","6",1],["0.12406","6",1],["0.12405","6",1],["0.124044","14",1],["0.12404","6",1],["0.12403","6",1],["0.12402","6",1],["0.12401","6",1],["0.124","64576",5],["0.12399","6",1],["0.12398","6",1],["0.12397","6",1],["0.12396","6",1],["0.12395","6",1],["0.12394","6",1],["0.12393","6",1],["0.12392","6",1],["0.12391","6",1],["0.1239","6",1],["0.12389","6",1],["0.12388","6",1],["0.12387","6",1]],"asks":[["0.124304","20",1],["0.12434","6",1],["0.12435","6",1],["0.12436","6",1],["0.12437","6",2],["0.12438","6",2],["0.12439","7",2],["0.124453","20",1],["0.124481","1",1],["0.124559","535",1],["0.12456","1210",1],["0.124566","10058",1],["0.124601","7480",1],["0.12462","19",2],["0.124621","1",1],["0.12463","6",1],["0.12464","6",1],["0.12465","6",1],["0.12466","6",1],["0.124668","5306",1],["0.124669","1",1],["0.12467","6",1],["0.124671","20",1],["0.124674","691",1],["0.12468","6",1],["0.124683","20",1],["0.12469","6",1],["0.124697","20",1],["0.1247","6",1],["0.12471","6",1],["0.12472","6",1],["0.12473","6",1],["0.12474","6",1],["0.12475","6",1],["0.12476","6",1],["0.12477","7",2],["0.124779","20",1],["0.12478","7",2],["0.124784","12",1],["0.12479","6",1],["0.124796","1",1],["0.1248","6",1],["0.12481","6",1],["0.12483","6",1],["0.12484","6",1],["0.12485","6",1],["0.124855","1",1],["0.12487","6",1],["0.124889","16500",1],["0.12489","6",1]]} ``` I am trying to pull the data from the first two elements in the lists, and change them so I can put them in as numbers into a database. 
For example, this: ``` ["0.1243","53",5],["0.12429","24",2],["0.12428","6",1] ``` Should change to this: ``` 0.1243 53 0.12429 24 0.12428 6 ``` Here is my current code: ``` import requests import json import re url = 'https://api.pro.coinbase.com/products/BAT-USDC/book?level=2' trade_data = requests.get(url).json() bid_data = trade_data['bids'] print(bid_data) ``` This is what is returned: ``` [[u'0.124456', u'1158', 3], [u'0.12445', u'6', 1], [u'0.12442', u'6', 1], [u'0.12441', u'6', 1], [u'0.1244', u'6', 1], [u'0.12439', u'6', 1], [u'0.12438', u'6', 1], [u'0.12437', u'6', 2], [u'0.12436', u'6', 2], [u'0.12435', u'6', 1], [u'0.12434', u'6', 1], [u'0.12433', u'6', 1], [u'0.124315', u'6991', 1], [u'0.124314', u'1212', 1], [u'0.1243', u'6', 3], [u'0.12429', u'6', 2], [u'0.12428', u'6', 1], [u'0.12427', u'6', 2], [u'0.124261', u'2419', 1], [u'0.12426', u'6', 1], [u'0.124251', u'20', 1], [u'0.12425', u'6', 1], [u'0.12424', u'6', 1], [u'0.12423', u'6', 1], [u'0.12422', u'6', 1], [u'0.12421', u'6', 1], [u'0.124206', u'20', 1], [u'0.124205', u'36032', 1], [u'0.124201', u'20', 1], [u'0.1242', u'391', 3], [u'0.124193', u'400', 1], [u'0.12419', u'6', 1], [u'0.12418', u'6', 1], [u'0.12417', u'6', 1], [u'0.12416', u'6', 1], [u'0.12415', u'6', 1], [u'0.12414', u'6', 1], [u'0.12413', u'6', 1], [u'0.12412', u'6', 1], [u'0.12411', u'6', 1], [u'0.1241', u'6', 1], [u'0.12409', u'6', 1], [u'0.12408', u'6', 1], [u'0.12407', u'6', 1], [u'0.12406', u'6', 1], [u'0.12405', u'6', 1], [u'0.124044', u'14', 1], [u'0.12404', u'6', 1], [u'0.12403', u'6', 1], [u'0.12402', u'6', 1]] ``` I do not know where the u's came from, and I do not know how to isolate the 2 numbers of importance to my calculations from this "list in list" format. I am a programming noob, and I apologize for that. Any help is greatly appreciated, and thanks in advance! Closest previous question I have found: [how to extract string inside single quotes using python script](https://stackoverflow.com/questions/19449709/how-to-extract-string-inside-single-quotes-using-python-script)
2019/01/15
[ "https://Stackoverflow.com/questions/54195111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10915583/" ]
You can simplify it massively by using simpler shapes. The following needs a little tweaking but is a good starting point:

```html
<svg width="160" height="160" viewBox="0 0 160 160">
  <circle cx="80" cy="80" r="65" fill="skyblue" stroke="red" stroke-width="15" />
  <path stroke="red" stroke-width="12" fill="none" d="M65 103 v-38 a17 17 0 1 1 34 0 v48" />
  <path fill="red" d="M83 83 v15 l-18 16-18-16v-15l18 16 18-16" />
</svg>
```

Explanation
-----------

Curved line of arrow:

```
<path stroke="red" stroke-width="12" fill="none" d="M65 103   # Move to 65,103
 v-38                # Vertical line up 38
 a17 17 0 1 1 34 0   # semicircular arc (see below)
                     # 17 radius
                     # flags
                     # finish 34 to right (34 = 17*2)
 v48                 # vertical line down 48
" />
```

Arrow Head:

```
<path fill="red" d="
 M83 83               # Move to top right corner of arrow head
 v15                  # down 15
 l -18 16 -18-16      # \/ shape left (bottom of arrow head)
 v-15                 # up 15
 l 18 16 18-16        # \/ shape right (arrow top) to starting point
                      # line above could be written 'l18 16 z' for same result
 " />
```

Note you can get all fancy by creating a [marker](https://developer.mozilla.org/en-US/docs/Web/SVG/Element/marker) element and adding it to the end of the line instead of drawing the arrow head like this ... but that is a more complex solution.
Just to expand on enxanetas answer to orient the arrow correctly, here is the code, with all transforms reduced and your viewbox/sizes intact (plus I tidied up the circles): ```html <?xml version="1.0" encoding="utf-8"?> <svg version="1.0" width="160pt" height="157pt" viewBox="0 0 160 157" preserveAspectRatio="xMidYMid meet" xmlns="http://www.w3.org/2000/svg"> <circle fill="skyblue" cx="80" cy="78.5" r="58.2"/> <path fill="red" stroke="none" d="M 73 41.4 C 68.2 43.1 63.6 47.3 61.7 51.9 C 60.6 54.3 60 60.5 59.6 71.8 L 59.1 88.2 L 53.9 82.4 C 48.9 76.8 48.7 76.6 48.1 79 C 47.7 80.3 47.4 84.1 47.4 87.4 L 47.5 93.4 L 56.2 102.7 L 64.9 112 L 74.6 103.2 L 84.4 94.4 L 84.4 86 L 84.4 77.6 L 82.2 79.5 C 80.9 80.5 78.2 82.9 76.1 84.9 L 72.2 88.4 L 72.6 73.2 C 72.9 58.8 73 57.9 75.3 55.5 C 79.9 50.6 89.6 51.3 93.2 56.7 C 94.7 59 94.9 62.1 94.5 86.1 L 94.2 112.9 L 100.7 112.9 L 107.2 112.9 L 107.9 84.5 C 108.6 56.4 108.6 56 106.4 51.5 C 104 46.4 100.6 43.6 94.3 41.4 C 89.1 39.5 78.2 39.5 73 41.4 Z M 138.2 78.5 C 138.2 46.4 112.1 20.3 80 20.3 C 47.9 20.3 21.8 46.4 21.8 78.5 C 21.8 110.6 47.9 136.7 80 136.7 C 112.1 136.7 138.2 110.6 138.2 78.5 Z M 155.5 78.5 C 155.5 120.2 121.7 154 80 154 C 38.3 154 4.5 120.2 4.5 78.5 C 4.5 36.8 38.3 3 80 3 C 121.7 3 155.5 36.8 155.5 78.5 Z" /> </svg> ```
57,823,327
I'm new to python and I'm trying to make a user registration that extends the user creation model. I created a profile model with the fields I want, and I save the profile object with a signal when the user is saved. The profile is created successfully and linked with the user, but the profile data is not saved: in this example, jobTitle, company, and phone are empty.

In models.py:

```
class profile (models.Model):
    user = models.OneToOneField(User, on_delete = models.CASCADE)
    company = models.CharField(max_length=100)
    jobTitle = models.CharField(max_length = 100,default ='')
    phonenumber = PhoneNumberField(null=False, blank=False, unique=True, help_text='Contact phone number', default ='')

@receiver(post_save, sender = User)
def create_user_profile( sender, instance, created, **kwargs):
    if created:
        profile.objects.create(user = instance)
        instance.profile.save()

@receiver(post_save, sender=User)
def save_profile(sender, instance, **kwargs):
    instance.profile.save()
```

In forms.py:

```
class UserRegisterForm(UserCreationForm):
    email = forms.EmailField(required= True)
    company = forms.CharField(required = True)
    phonenumber = PhoneNumberField(widget=forms.TextInput(attrs={'placeholder': ('Phone')}), label=("Phone number"), required=True)
    first_name = forms.CharField(required = True)
    jobTitle = forms.CharField(required = True)

    class Meta:
        model = User
        fields = [
            'username',
            'first_name',
            'last_name',
            'email',
            'password1',
            'password2',
            'company',
            'jobTitle'
            ]

    def save(self, commit=True):
        user = super(UserRegisterForm, self).save(commit=False)
        user.first_name = self.cleaned_data['first_name']
        user.last_name = self.cleaned_data['last_name']
        user.email = self.cleaned_data['email']

        if commit:
            user.save()
        return user
```

In views.py:

```
#User register
def signup(request):
    if request.method == 'POST':
        form = UserRegisterForm(request.POST)
        if form.is_valid():
            user = form.save()
            UserProfile = user.profile
            username = form.cleaned_data.get('username')
            company = form.cleaned_data.get('company')
            jobTitle = form.cleaned_data.get('jobTitle')
            phonenumber = form.cleaned_data.get('phonenumber')
            UserProfile.save()
            messages.success(request, f'Account is created for {username}!')
            return redirect('login')
    else:
        form = UserRegisterForm()
    return render(request, 'users/signup.html', {'form': form})
```

So can you help me understand what is causing this issue? I expect the values entered during registration to be saved into the profile model. Thank you.
2019/09/06
[ "https://Stackoverflow.com/questions/57823327", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2930966/" ]
Remove the second `@receiver(post_save, sender=User)`: it's redundant, since the first receiver already saves the profile.

When you do `user = form.save()`, the signal is raised, creating an *empty* `profile`. So, just after `user = form.save()`, get the profile that has been created through the signal and copy the form data onto it before saving, like:

```
profile = Profile.objects.get(user=user)
profile.company = form.cleaned_data.get('company')
profile.jobTitle = form.cleaned_data.get('jobTitle')
profile.phonenumber = form.cleaned_data.get('phonenumber')
profile.save()
```

And my advice: read PEP 8 of Python: class names always CamelCase (hence `Profile` above), variables always lower\_case.
You also need to import the signals module in your app config, to let Django know about your implementation. You can do that in the following manner:

```
from django.apps import AppConfig

class UsersConfig(AppConfig):
    name = 'users'

    def ready(self):
        import users.signals
```
38,134,900
I'm following the `rangeslider` example on the Plotly website: <https://plot.ly/python/range-slider/> Is there a way to automatically (or even manually) rescale the y axis as the x range changes? For example, if the date range in the example above is set between Nov 2008 - April 2009, how can we automatically rescale the y axis to a more appropriate range to see the features in the data better?
2016/06/30
[ "https://Stackoverflow.com/questions/38134900", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4830338/" ]
There is no need to use a triple pointer `***`; passing the two-dimensional array as `int **` works as is. Here is the code:

```
#include <stdio.h>
#include <stdlib.h>

// create zero initialized matrix
int** callocMatrix(int rmax, int colmax) {
    int **mat = calloc(rmax, sizeof(int*));
    for(int i = 0; i < rmax; i++) mat[i] = calloc(colmax, sizeof(int));
    return mat;
}

// fill matrix
void setMatrix(int **mat, int r, int c){
    printf("Insert the elements of your matrix:\n");
    for (int i = 0; i < r; i++) {
        for (int j = 0; j < c; j++) {
            printf("Insert element [%d][%d]: ", i, j);
            scanf("%d", &mat[i][j]); // no problem here
            printf("matrix[%d][%d]: %d\n", i, j, mat[i][j]);
        }
    }
}

// print matrix
void printMatrix(int **mat, int r, int c){
    for (int i=0; i<r;i++){
        for (int j=0; j<c;j++) {
            printf("%d ", mat[i][j]);
        }
        printf("\n");
    }
}

int main(int argc, char *argv[]) {
    int r = 3, c = 3;
    int **mat = callocMatrix(r, c);
    setMatrix(mat, r, c);
    printMatrix(mat, r, c);
}
```
Should be:

```
scanf("%d", &(*mat)[i][j]);
```

You're passing a pointer to your matrix object, so you need to dereference it (with `*`) just as you do with `printf`. `scanf` then needs the address of the element to write into, so you need the `&`.
34,989,032
```
#code like this
import dns
import dns.resolver
import dns.name
import dns.message
import dns.query

request = dns.message.make_query("google.com",dns.rdatatype.NS)
response = dns.query.udp(request,"216.239.32.10")
print response.authority
```

but it's null.

And then when I use `nslookup google.com 216.239.32.10` I get:

Server: 216.239.32.10
Address: 216.239.32.10#53
Name: google.com
Address: 216.58.221.238

Obviously, it's an authoritative answer, so why can't I get the authority section when using dnspython?
2016/01/25
[ "https://Stackoverflow.com/questions/34989032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5836175/" ]
Are you sure? ``` c:\srv>nslookup google.com 216.239.32.10 Server: ns1.google.com Address: 216.239.32.10 Name: google.com Addresses: 2a00:1450:400f:805::200e 178.74.30.16 178.74.30.49 178.74.30.37 178.74.30.24 178.74.30.26 178.74.30.59 178.74.30.57 178.74.30.48 178.74.30.46 178.74.30.27 178.74.30.53 178.74.30.35 178.74.30.20 178.74.30.42 178.74.30.31 178.74.30.38 c:\srv>python Python 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import dns >>> import dns.resolver, dns.name >>> import dns.message, dns.query >>> r = dns.message.make_query('google.com', dns.rdatatype.NS) >>> resp = dns.query.udp(r, '216.239.32.10') >>> print resp.authority [] >>> print resp id 63079 opcode QUERY rcode NOERROR flags QR AA RD ;QUESTION google.com. IN NS ;ANSWER google.com. 345600 IN NS ns3.google.com. google.com. 345600 IN NS ns4.google.com. google.com. 345600 IN NS ns2.google.com. google.com. 345600 IN NS ns1.google.com. ;AUTHORITY ;ADDITIONAL ns3.google.com. 345600 IN A 216.239.36.10 ns4.google.com. 345600 IN A 216.239.38.10 ns2.google.com. 345600 IN A 216.239.34.10 ns1.google.com. 345600 IN A 216.239.32.10 >>> ```
I also encountered the same problem. It turned out that some DNS resolvers do not reply with authority or additional sections (check the packets using Wireshark). Change the IP address to `127.0.1.1` in your Python code, and make sure the DNS resolver you have configured is not pointing to your original resolver (check `cat /etc/resolv.conf`). You should then see the authority section in your result, as in the sketch below.
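A minimal sketch of that check (assuming your local resolver listens on `127.0.1.1`, as is common on Ubuntu-style setups):

```
import dns.message
import dns.query
import dns.rdatatype

request = dns.message.make_query("google.com", dns.rdatatype.NS)
response = dns.query.udp(request, "127.0.1.1", timeout=5)
print response.authority
```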
57,438,262
I have a python function that reads random snippets from a large file and does some processing on it. I want the processing to happen in multiple processes and so make use of multiprocessing. I open the file (in binary mode) in the parent process and pass the file descriptor to each child process then use a multiprocessing.Lock() to synchronize access to the file. With a single worker things work as expected, but with more workers, even with the lock, the file reads will randomly return bad data (usually a bit from one part of the file and a bit from the another part of the file). In addition, the position within the file (as returned by file.tell()) will often get messed up. This all suggests a basic race condition accessing the descriptor, but my understanding is the multiprocessing.Lock() should prevent concurrent access to it. Does file.seek() and/or file.read() have some kind of asynchronous operations that don't get contained within the lock/unlock barriers? What is going here? An easy workaround is to have each process open the file individually and get its own file descriptor (I've confirmed this does work), but I'd like to understand what I'm missing. Opening the file in text mode also prevents the issue from occurring, but doesn't work for my use case and doesn't explain what is happening in the binary case. I've run the following reproducer on a number of Linux systems and OS X and on various local and remote file systems. I always get quite a few bad file positions and at least a couple of checksum errors. I know the read isn't guaranteed to read the full amount of data requested, but I've confirmed that is not what is happening here and omitted that code in an effort to keep things concise. ``` import argparse import multiprocessing import random import string def worker(worker, args): rng = random.Random(1234 + worker) for i in range(args.count): block = rng.randrange(args.blockcount) start = block * args.blocksize with args.lock: args.fd.seek(start) data = args.fd.read(args.blocksize) pos = args.fd.tell() if pos != start + args.blocksize: print(i, "bad file position", start, start + args.blocksize, pos) cksm = sum(data) if cksm != args.cksms[block]: print(i, "bad checksum", cksm, args.cksms[block]) args = argparse.Namespace() args.file = '/tmp/text' args.count = 1000 args.blocksize = 1000 args.blockcount = args.count args.filesize = args.blocksize * args.blockcount args.num_workers = 4 args.cksms = multiprocessing.Array('i', [0] * args.blockcount) with open(args.file, 'w') as f: for i in range(args.blockcount): data = ''.join(random.choice(string.ascii_lowercase) for x in range(args.blocksize)) args.cksms[i] = sum(data.encode()) f.write(data) args.fd = open(args.file, 'rb') args.lock = multiprocessing.Lock() procs = [] for i in range(args.num_workers): p = multiprocessing.Process(target=worker, args=(i, args)) procs.append(p) p.start() ``` Example output: ``` $ python test.py 158 bad file position 969000 970000 741000 223 bad file position 908000 909000 13000 232 bad file position 679000 680000 960000 263 bad file position 959000 960000 205000 390 bad file position 771000 772000 36000 410 bad file position 148000 149000 42000 441 bad file position 677000 678000 21000 459 bad file position 143000 144000 636000 505 bad file position 579000 580000 731000 505 bad checksum 109372 109889 532 bad file position 962000 963000 243000 494 bad file position 418000 419000 2000 569 bad file position 266000 267000 991000 752 bad file position 732000 733000 264000 840 bad file position 
801000 802000 933000 799 bad file position 332000 333000 989000 866 bad file position 150000 151000 248000 866 bad checksum 109116 109375 887 bad file position 39000 40000 974000 937 bad file position 18000 19000 938000 969 bad file position 20000 21000 24000 953 bad file position 542000 543000 767000 977 bad file position 694000 695000 782000 ```
2019/08/09
[ "https://Stackoverflow.com/questions/57438262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/601004/" ]
This seems to be caused by buffering: using `open(args.file, 'rb', buffering=0)` I can't reproduce anymore. <https://docs.python.org/3/library/functions.html#open> > > buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off [...] When no buffering argument is given, the default buffering policy works as follows: [...] Binary files are buffered in fixed-size chunks; the size of the buffer [...] will typically be 4096 or 8192 bytes long. [...] > > >
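Applied to the reproducer in the question, the change is just the `open` call (a sketch; everything else stays the same):

```
# before: args.fd = open(args.file, 'rb')
args.fd = open(args.file, 'rb', buffering=0)  # switch buffering off entirely
```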
I've checked: using only `multiprocessing.Lock` (without `buffering=0`), I still got the bad data. With both `multiprocessing.Lock` and `buffering=0`, everything works correctly.
11,836,748
I am using

```
conn = httplib.HTTPConnection("http://ipaddr:port")
conn.request("GET", "", params, headers)
```

I am able to do PUT/GET using ipaddr:port from my Firefox client. But I am seeing this error on execution of the script:

```
File "post_python.py", line 5, in <module>
    conn.request("GET", "", params, headers)
File "/usr/lib64/python2.6/httplib.py", line 914, in request
    self._send_request(method, url, body, headers)
File "/usr/lib64/python2.6/httplib.py", line 951, in _send_request
    self.endheaders()
File "/usr/lib64/python2.6/httplib.py", line 908, in endheaders
    self._send_output()
File "/usr/lib64/python2.6/httplib.py", line 780, in _send_output
    self.send(msg)
File "/usr/lib64/python2.6/httplib.py", line 739, in send
    self.connect()
File "/usr/lib64/python2.6/httplib.py", line 720, in connect
    self.timeout)
File "/usr/lib64/python2.6/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
```

Please can someone help me?
2012/08/06
[ "https://Stackoverflow.com/questions/11836748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524625/" ]
Try this instead (without "http://" before the IP address): ``` conn = httplib.HTTPConnection("x.x.x.x", port) conn.request("GET", "", params, headers) ```
You might have a proxy in between that the browser already knows about. If you're under Linux, try setting the `http_proxy` environment variable.
11,836,748
I am using

```
conn = httplib.HTTPConnection("http://ipaddr:port")
conn.request("GET", "", params, headers)
```

I am able to do PUT/GET using ipaddr:port from my Firefox client. But I am seeing this error on execution of the script:

```
File "post_python.py", line 5, in <module>
    conn.request("GET", "", params, headers)
File "/usr/lib64/python2.6/httplib.py", line 914, in request
    self._send_request(method, url, body, headers)
File "/usr/lib64/python2.6/httplib.py", line 951, in _send_request
    self.endheaders()
File "/usr/lib64/python2.6/httplib.py", line 908, in endheaders
    self._send_output()
File "/usr/lib64/python2.6/httplib.py", line 780, in _send_output
    self.send(msg)
File "/usr/lib64/python2.6/httplib.py", line 739, in send
    self.connect()
File "/usr/lib64/python2.6/httplib.py", line 720, in connect
    self.timeout)
File "/usr/lib64/python2.6/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
```

Please can someone help me?
2012/08/06
[ "https://Stackoverflow.com/questions/11836748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524625/" ]
You might have a proxy in between that the browser already knows about. If you're under Linux, try setting the `http_proxy` environment variable.
If it's an IPv6 address, you need to surround it with brackets as per [RFC 2732](http://www.ietf.org/rfc/rfc2732.txt). If I recall correctly, that's the error message you get if you don't use brackets.

```
conn = httplib.HTTPConnection("[::1]:8080")
conn.request("GET", "", params, headers)
```
11,836,748
I am using

```
conn = httplib.HTTPConnection("http://ipaddr:port")
conn.request("GET", "", params, headers)
```

I am able to do PUT/GET using ipaddr:port from my Firefox client. But I am seeing this error on execution of the script:

```
File "post_python.py", line 5, in <module>
    conn.request("GET", "", params, headers)
File "/usr/lib64/python2.6/httplib.py", line 914, in request
    self._send_request(method, url, body, headers)
File "/usr/lib64/python2.6/httplib.py", line 951, in _send_request
    self.endheaders()
File "/usr/lib64/python2.6/httplib.py", line 908, in endheaders
    self._send_output()
File "/usr/lib64/python2.6/httplib.py", line 780, in _send_output
    self.send(msg)
File "/usr/lib64/python2.6/httplib.py", line 739, in send
    self.connect()
File "/usr/lib64/python2.6/httplib.py", line 720, in connect
    self.timeout)
File "/usr/lib64/python2.6/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
```

Please can someone help me?
2012/08/06
[ "https://Stackoverflow.com/questions/11836748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524625/" ]
Try this instead (without "http://" before the IP address): ``` conn = httplib.HTTPConnection("x.x.x.x", port) conn.request("GET", "", params, headers) ```
If it's an IPv6 address, you need to surround it with brackets as per [RFC 2732](http://www.ietf.org/rfc/rfc2732.txt). If I recall correctly, that's the error message you get if you don't use brackets.

```
conn = httplib.HTTPConnection("[::1]:8080")
conn.request("GET", "", params, headers)
```
54,496,251
I'm trying to create a PDF file using Python and FPDF. I've read the project's page about unicode and I've tried to follow their instructions, but every time I run my program, I receive the error:

> File "eventsmanager.py", line 8 SyntaxError: Non-ASCII character
> '\xc3' in file eventsmanager.py on line 8, but no encoding declared;
> see <http://python.org/dev/peps/pep-0263/> for details

This is my program:

```
from fpdf import FPDF

pdf = FPDF()
pdf.add_page()
pdf.add_font('gargi', '', 'gargi.ttf', uni=True)
pdf.set_font('gargi', '', 14)
pdf.write(8, 'Olá!!')
pdf.ln(20)
pdf.output('tuto3.pdf', 'F')
```

Can you help me understand what I'm doing wrong?
2019/02/02
[ "https://Stackoverflow.com/questions/54496251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10735382/" ]
You need to declare that the file encoding is UTF-8, as Python 2 defaults to ASCII for source files; UTF-8 became the default in Python 3. The linked PEP contains the required line that you have to add at the beginning of the file:

```
# coding: utf8
```

This must be the first line, or the first line after the `#!` line. Emacs and Vim style declarations are also supported.

It is a pity that the error message doesn't include the solution.
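Applied to the script from the question, the top of the file could look like this (a sketch; everything after the declaration is unchanged):

```
# coding: utf8
from fpdf import FPDF

pdf = FPDF()
pdf.add_page()
pdf.add_font('gargi', '', 'gargi.ttf', uni=True)
pdf.set_font('gargi', '', 14)
pdf.write(8, 'Olá!!')
pdf.ln(20)
pdf.output('tuto3.pdf', 'F')
```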
If you are using Python 3.x you have to use: ``` pdf.output(dest='S').encode('latin-1','ignore') ``` or ``` text=text.encode('latin-1', 'ignore').decode('latin-1') ``` in order to get the output.
52,641,587
I made a class in python, and when I tried to use it in another python file (after importing it), it doesn't recognize it as a class and tells me that my class is not callable. Here is my class:

```
class Cell:
    def __init__(self,value=9,isVisible=False):
        self.value=value
        self.isVisible=isVisible

    def setValue(self,value):
        self.value=value

    def setVisible(self):
        self.isVisible=True
```

And here is where I tried to call it:

```
import Cell,random

class Board:
    def __init__(self):
        self.board = []
        for i in range(12):
            a = []
            for j in range(12):
                x = Cell()   <=== right here it's an error
.
.
.(the rest of my program)
```

And finally, here is the error:

```
x=Cell()
TypeError: 'module' object is not callable
```

Can anyone help me fix this? Even my teacher didn't understand my mistake.
2018/10/04
[ "https://Stackoverflow.com/questions/52641587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10455256/" ]
The name `Cell` has been used for both your imported module and your class. According to the error, Python has bound it to the module name. So, when you write `Cell()`, it tries to call the module as if it were a function, instead of calling the class constructor.

If the class `Cell` is inside the `Cell` module, use `Cell.Cell()` instead, or change your import to `from Cell import Cell` (sketched below). Otherwise, rename either the module or the class.
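A minimal sketch of the second option, assuming the class lives in a file named `Cell.py`:

```
from Cell import Cell   # bind the name to the class, not the module

x = Cell()               # now calls the class constructor
```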
Your import statement is wrong: you're importing a module called `Cell` instead of your `Cell` class. You should use lower case for your filenames and import it like so:

```
from cell import Cell

test = Cell()
```
26,979,711
I am limited to python2.5, and I thought that threading.Thread was asynchronous. I run: python t.py and the script does not return to the shell until 3 seconds have gone by, which means it's blocking. Why is it blocking? My code:

```
#!/usr/bin/python
import threading,time

def doit():
    time.sleep(3)
    print "DONE"

thr = threading.Thread(target=doit, args=(), kwargs={})
thr.start() # will run "doit"
```
2014/11/17
[ "https://Stackoverflow.com/questions/26979711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/420259/" ]
By default, threads in Python are non-daemonic. A Python application [will not exit](https://docs.python.org/2.7/library/threading.html#thread-objects) until the all non-daemon threads have completed, so in your case it won't exit until `doit` has finished. If you want to script to exit immediately upon reaching the end of the main thread, you need to make the thread a daemon, by setting the [`daemon` attribute](https://docs.python.org/2.7/library/threading.html#threading.Thread.daemon) prior to starting the thread: ``` thr = threading.Thread(target=doit, args=(), kwargs={}) thr.daemon = True thr.start() ```
Threading in Python is "kind-of" asynchronous. What does this mean? * Only one thread can be running Python code at one time * threads that are Python code and CPU intensive will not benefit Your issue seems to be that you think a Python thread should keep running after Python itself quits -- that's not how it works. If you do make a thread a `daemon` then when Python quits those threads just die, instantly -- no cleanup, no error recovery, just dead. If you want to actually make a daemon process, something that keeps running in the background after the main application exits, you want to look at `os.fork()`. If you want to do it the easier way, you can try my daemon library, [pandaemonium](https://pypi.python.org/pypi/pandaemonium)
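A minimal sketch of the `os.fork()` idea (POSIX only, and real daemons also detach from the controlling terminal, redirect file descriptors, etc.):

```
import os
import sys
import time

pid = os.fork()
if pid > 0:
    sys.exit(0)        # parent exits immediately, returning to the shell

# only the child reaches this point; it keeps running in the background
time.sleep(3)
print "DONE"
```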
45,248,279
I tried to run a script using the turtle module on pythonanywhere.com, but got stuck at an error saying it cannot find the module named "**tkinter**" and that I need to install the python3-tk package. I followed this tutorial [installing new modules on pythonanywhere](https://help.pythonanywhere.com/pages/InstallingNewModules/) in an attempt to install the package in a Bash console on pythonanywhere.com by running `pip3.5 install --user package3-tk`. However, I got the error "could not find the version that satisfies the requirement package3-tk". I also tried to replace the package name with `tkinter`, with no luck either.
2017/07/21
[ "https://Stackoverflow.com/questions/45248279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5186966/" ]
You can't use the tkinter module from a server such as pythonanywhere. It needs to have a connection to a display and not just a browser window. <https://www.pythonanywhere.com/forums/topic/360/>
Just learned the lesson that you can find the answer by asking the question in a different way. My purpose was to use the turtle module on pythonanywhere, which is not possible, as explained in the answer above. However, I just found out that pythonanywhere has an affiliated website that is free and supports turtle ([www.trinker.io](http://www.trinker.io)).
49,631,966
I want to access a list which is filled from a different Python function. Please refer to the code below for more details:

```
abc = []
name = "apple,orange"

def foo(name):
    abc = name.split(',')
    print abc

foo(name)
print abc
print name
```

The output is as below.

> ['apple', 'orange']
>
> []
>
> apple,orange

Since I am new to Python, can anybody explain why the 7th line (`print abc`) doesn't give the result as ['apple', 'orange']? If I need to have the filled list at line 7 (['apple', 'orange']), what should I do?
2018/04/03
[ "https://Stackoverflow.com/questions/49631966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5993900/" ]
``` abc = [] name = "apple,orange" def foo(name): global abc # do this abc = name.split(',') print abc foo(name) print abc print name ```
`abc` inside the function is not the same as the `abc` from the first line, because `abc` in `def foo` is a local variable. If you want to refer to the `abc` declared above, you have to use `global abc`, as the sketch below shows.
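A small sketch of that shadowing behaviour:

```
abc = []

def foo():
    abc = ['local']    # creates a brand-new local name; the outer abc is untouched
    print abc

foo()                  # prints ['local']
print abc              # prints [] -- the outer list was never modified
```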
49,631,966
I want to access a list which is filled from a different Python function. Please refer to the code below for more details:

```
abc = []
name = "apple,orange"

def foo(name):
    abc = name.split(',')
    print abc

foo(name)
print abc
print name
```

The output is as below.

> ['apple', 'orange']
>
> []
>
> apple,orange

Since I am new to Python, can anybody explain why the 7th line (`print abc`) doesn't give the result as ['apple', 'orange']? If I need to have the filled list at line 7 (['apple', 'orange']), what should I do?
2018/04/03
[ "https://Stackoverflow.com/questions/49631966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5993900/" ]
While other answers suggest using `global abc`, it is considered (in general) to be bad practice to use global variables. See [why are global variables evil?](https://stackoverflow.com/questions/19158339/why-are-global-variables-evil) A better way would be to `return` the variable:

```
name = "apple,orange"

def foo(name):
    abc = name.split(',')
    return abc

abc = foo(name)
print abc
print name
```
`abc` inside the function is not the same as the `abc` from the first line, because `abc` in `def foo` is a local variable. If you want to refer to the `abc` declared above, you have to use `global abc`.
22,796,547
I have an issue where I am trying to log some additional attributes (the user ID and connecting host IP) on a Python CGI script. This is running under python 2.6.8 on a RHEL 5 system. I am following the documentation for extending the attributes in the basic logging dictionary as follows:

```
from __future__ import print_function

import logging
import os
import re
import sys

LOG_DIR = '/apps/log'
LOG_PAGE = re.sub(r'\/(.*)\/.*$', r'\1', os.environ['REQUEST_URI'])
#
# The log format should be
# <date stamp> <level> <remote user> <remote IP>: <message>
#
LOG_FORMAT = '%(asctime)-19s %(levelname)-9s %(user)-7s %(clientip)-15s - %(message)s'
LOG_DATE = '%Y-%m-%d %H:%M:%S'

orig_user = os.environ['REMOTE_USER']
orig_ip = os.environ['REMOTE_ADDR']
xtras = { 'user': orig_user, 'clientip': orig_ip }

#
# Set up logging
#
LOG_FILE = LOG_DIR + '/' + LOG_PAGE
logging.basicConfig(format=LOG_FORMAT, datefmt=LOG_DATE, level=logging.DEBUG, filename=LOG_FILE)
logit = logging.getLogger('groups')
```

and I am calling it with:

```
logit.debug('user visited page',extra=xtras)
logit.warn('user has no account under domain %s', myDOM, extra=xtras)
```

Every time a message is logged, 3 KeyError exceptions are logged in the web server's error logs:

```
s = self._fmt % record.__dict__, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
KeyError: 'user', referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
Traceback (most recent call last):, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 File "/usr/lib64/python2.6/logging/__init__.py", line 776, in emit, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 msg = self.format(record), referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 return fmt.format(record), referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 File "/usr/lib64/python2.6/logging/__init__.py", line 439, in format, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
 s = self._fmt % record.__dict__, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
KeyError: 'user', referer: https://its-serv-dev.case.edu/googlegroups/index.cgi
```

What is odd is that the KeyError exceptions are only generated for one of the "extra" dictionary items, and that the information IS being logged into the file correctly. I've tried various combinations of removing the two extra components (the error just shifts to whatever is left) and nothing seems to stop the exception from being thrown unless I completely remove the extra information.

I suppose I could just include the information in a format string as part of the message, but it seems like reinventing the wheel.
2014/04/01
[ "https://Stackoverflow.com/questions/22796547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1980968/" ]
I figured out what was happening here: I am also importing Google's oauth2client.client module, which is using the logging module as well. Since the oauth2client.client module is considered a "child" of my page, its logging was being passed up to my logging object, and since the Google module does not include the extra logging dictionary in its calls, the Google module was the item generating the KeyError exceptions, not my own code. I've worked around the issue for the moment by including the extra items as part of the message, and will need to dig a little more into the logging module to see if there is a better way to keep the Google oauth2client.client module's logging from conflicting with the page logging.
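For anyone hitting the same thing: one way to keep third-party loggers away from the page's custom format is to skip `basicConfig` and give only your own logger a handler with that format. This is a minimal sketch, not the author's final fix; the log path here is a hypothetical stand-in for `LOG_FILE`:

```
import logging

LOG_FORMAT = '%(asctime)-19s %(levelname)-9s %(user)-7s %(clientip)-15s - %(message)s'

# Attach the custom format only to the 'groups' logger; the root logger is
# left alone, so modules like oauth2client.client can log without having
# to supply 'user'/'clientip'.
handler = logging.FileHandler('/apps/log/groups')   # hypothetical path
handler.setFormatter(logging.Formatter(LOG_FORMAT, '%Y-%m-%d %H:%M:%S'))

logit = logging.getLogger('groups')
logit.setLevel(logging.DEBUG)
logit.addHandler(handler)
logit.propagate = False   # stop records from bubbling up to the root handlers
```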
Faced a similar problem. I guess the error occurs when there are loggers in use that do not have filters attached (to supply the extra attributes when records are created) but whose records still reach formatters whose format strings reference those attributes. [Link to documentation example using filters](https://docs.python.org/2/howto/logging-cookbook.html#adding-contextual-information-to-your-logging-output)
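Along the lines of that cookbook example, a minimal sketch of such a filter: it injects placeholder defaults so records from loggers that never pass `extra` still satisfy the format string (the default values are arbitrary):

```
import logging

class ContextFilter(logging.Filter):
    """Supply defaults so every record satisfies the custom format string."""
    def filter(self, record):
        if not hasattr(record, 'user'):
            record.user = '-'
        if not hasattr(record, 'clientip'):
            record.clientip = '-'
        return True

# Attach the filter at the handler level so records from *any* logger,
# including third-party modules, get the defaults before formatting.
for handler in logging.getLogger().handlers:
    handler.addFilter(ContextFilter())
```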
22,796,547
I have an issue where I am trying to log some additional attributes (the user ID and connecting host IP) on a Python CGI script. This is running under Python 2.6.8 on a RHEL 5 system. I am following the documentation for extending the attributes in the basic logging dictionary as follows: ``` from __future__ import print_function import logging import os import re import sys LOG_DIR = '/apps/log' LOG_PAGE = re.sub(r'\/(.*)\/.*$', r'\1', os.environ['REQUEST_URI']) # # The log format should be # <date stamp> <level> <remote user> <remote IP>: <message> # LOG_FORMAT = '%(asctime)-19s %(levelname)-9s %(user)-7s %(clientip)-15s - %(message)s' LOG_DATE = '%Y-%m-%d %H:%M:%S' orig_user = os.environ['REMOTE_USER'] orig_ip = os.environ['REMOTE_ADDR'] xtras = { 'user': orig_user, 'clientip': orig_ip } # # Set up logging # LOG_FILE = LOG_DIR + '/' + LOG_PAGE logging.basicConfig(format=LOG_FORMAT, datefmt=LOG_DATE, level=logging.DEBUG, filename=LOG_FILE) logit = logging.getLogger('groups') ``` and I am calling it with: ``` logit.debug('user visited page',extra=xtras) logit.warn('user has no account under domain %s', myDOM, extra=xtras) ``` Every time a message is logged, 3 KeyError exceptions are logged in the web server's error logs: ``` s = self._fmt % record.__dict__, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi KeyError: 'user', referer: https://its-serv-dev.case.edu/googlegroups/index.cgi Traceback (most recent call last):, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi File "/usr/lib64/python2.6/logging/__init__.py", line 776, in emit, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi msg = self.format(record), referer: https://its-serv-dev.case.edu/googlegroups/index.cgi File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi return fmt.format(record), referer: https://its-serv-dev.case.edu/googlegroups/index.cgi File "/usr/lib64/python2.6/logging/__init__.py", line 439, in format, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi s = self._fmt % record.__dict__, referer: https://its-serv-dev.case.edu/googlegroups/index.cgi KeyError: 'user', referer: https://its-serv-dev.case.edu/googlegroups/index.cgi ``` What is odd is that the KeyError exceptions are only generated for one of the "extra" dictionary items, and that the information IS being logged into the file correctly. I've tried various combinations of removing the two extra components (the error just shifts to whatever is left) and nothing seems to stop the exception from being thrown unless I completely remove the extra information. I suppose I could just include the information in a format string as part of the message, but it seems like reinventing the wheel.
2014/04/01
[ "https://Stackoverflow.com/questions/22796547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1980968/" ]
I figured out what was happening here: I am also importing Google's oauth2client.client module, which is using the logging module as well. Since the oauth2client.client module is considered a "child" of my page, its logging was being passed up to my logging object, and since the Google module does not include the extra logging dictionary in its calls, the Google module was the item generating the KeyError exceptions, not my own code. I've worked around the issue for the moment by including the extra items as part of the message, and will need to dig a little more into the logging module to see if there is a better way to keep the Google oauth2client.client module's logging from conflicting with the page logging.
By calling `logging.basicConfig`, you actually change the `formatter` of the `RootLogger`, which is the ancestor of all loggers, so it affects every other logger too. You can get more detail from this [github issue](https://github.com/urllib3/urllib3/issues/1417).
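A minimal reproduction of that effect (hypothetical logger names, for illustration only):

```
import logging

logging.basicConfig(format='%(asctime)s %(user)s - %(message)s')

app_log = logging.getLogger('myapp')
app_log.warning('ok', extra={'user': 'alice'})   # fine: 'user' is supplied

lib_log = logging.getLogger('some.library')
lib_log.warning('oops')   # the root handler's formatter hits KeyError: 'user'
                          # and logging prints the traceback to stderr
```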
50,760,826
I have the date in the following format: ``` data = """*Date:* May 31, 2018 at 1:49:05 PM EDT""" ``` I need to extract the date and month into 2 different variables: ``` date = 31 month = "May" ``` How can I do that using regex in Python 3? I tried using the regex below to get the date and month: ``` month , date = re.findall("^*Date:* (\w+) (\d+)", message_data, re.MULTILINE).groups() ``` Can anyone help with this regex?
2018/06/08
[ "https://Stackoverflow.com/questions/50760826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9422965/" ]
**Update**: [@Markonius' answer](https://stackoverflow.com/a/54548429/288201) is the proper way to do it. Here is a script that does this, based on experimenting with an LFS repository. I didn't look at the LFS protocol in detail, so there might be quirks unaccounted for, but it worked for my simple case. [git-lfs-cat-file](https://github.com/koterpillar/desktop/blob/02c01fee06944ab3e211da253c201e1dc40c0595/bin/git-lfs-cat-file) The relevant details are: * LFS files are stored in the index with the following structure: ``` version https://git-lfs.github.com/spec/v1 oid sha256:abcdeffffffffffffff size nnnnnnnn ``` * The actual LFS object will then be under `.git/lfs/objects/ab/cd/abcdeffffffffffffff`.
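Based on that layout, here is a minimal sketch (not the linked script; the helper name is made up) that parses a pointer blob and builds the local object path:

```
import os

def lfs_object_path(pointer_text, git_dir='.git'):
    # Pointer lines look like "oid sha256:<hash>"; split each into key/value.
    fields = dict(line.split(' ', 1) for line in pointer_text.splitlines())
    oid = fields['oid'].split(':', 1)[1]   # drop the "sha256:" prefix
    # Objects live under .git/lfs/objects/<first 2>/<next 2>/<full hash>.
    return os.path.join(git_dir, 'lfs', 'objects', oid[:2], oid[2:4], oid)
```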
When I was last working with LFS, there were conversations on the project page about better integration - such as by writing diff and/or merge tools that could be plugged in via `.gitattributes`. These didn't seem to be considered high priority, since the main intended use case of LFS is to protect large *binary* files (but certainly I've come across multiple cases where either I had a large text file, or the only reasonable way to set up LFS tracking rules cast a wide enough net to catch some smaller text files). I'm not sure if there's been any progress on those tools, as I haven't looked at the project page in a while. Absent those things, there's not a particularly "slick" way to do what you're asking. You can set up two work trees and check out different versions. You can check out one version, rename it, and then check out the other.
50,760,826
I have the date in the following format: ``` data = """*Date:* May 31, 2018 at 1:49:05 PM EDT""" ``` I need to extract the date and month into 2 different variables: ``` date = 31 month = "May" ``` How can I do that using regex in Python 3? I tried using the regex below to get the date and month: ``` month , date = re.findall("^*Date:* (\w+) (\d+)", message_data, re.MULTILINE).groups() ``` Can anyone help with this regex?
2018/06/08
[ "https://Stackoverflow.com/questions/50760826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9422965/" ]
Piping an lfs pointer into `git lfs smudge` will yield you what you want. For example: ``` git cat-file blob <blob-sha> | git lfs smudge ``` Or if you have a commit-ish (a commit hash, branch name, just `HEAD`, etc.) and a file name: ``` git cat-file blob <commit-ish>:path/to/my-large-file.name | git lfs smudge ``` You could redirect the output into a file.
When I was last working with LFS, there were conversations on the project page about better integration - such as by writing diff and/or merge tools that could be plugged in via `.gitattributes`. These didn't seem to be considered high priority, since the main intended use case of LFS is to protect large *binary* files (but certainly I've come across multiple cases where either I had a large text file, or the only reasonable way to set up LFS tracking rules cast a wide enough net to catch some smaller text files). I'm not sure if there's been any progress on those tools, as I haven't looked at the project page in a while. Absent those things, there's not a particularly "slick" way to do what you're asking. You can set up two work trees and check out different versions. You can check out one version, rename it, and then check out the other.
50,760,826
I have the date in the following format: ``` data = """*Date:* May 31, 2018 at 1:49:05 PM EDT""" ``` I need to extract the date and month into 2 different variables: ``` date = 31 month = "May" ``` How can I do that using regex in Python 3? I tried using the regex below to get the date and month: ``` month , date = re.findall("^*Date:* (\w+) (\d+)", message_data, re.MULTILINE).groups() ``` Can anyone help with this regex?
2018/06/08
[ "https://Stackoverflow.com/questions/50760826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9422965/" ]
Piping an lfs pointer into `git lfs smudge` will yield you what you want. For example: ``` git cat-file blob <blob-sha> | git lfs smudge ``` Or if you have a commit-ish (a commit hash, branch name, just `HEAD`, etc.) and a file name: ``` git cat-file blob <commit-ish>:path/to/my-large-file.name | git lfs smudge ``` You could redirect the output into a file.
**Update**: [@Markonius' answer](https://stackoverflow.com/a/54548429/288201) is the proper way to do it. Here is a script that does this, based on experimenting with an LFS repository. I didn't look at the LFS protocol in detail, so there might be quirks unaccounted for, but it worked for my simple case. [git-lfs-cat-file](https://github.com/koterpillar/desktop/blob/02c01fee06944ab3e211da253c201e1dc40c0595/bin/git-lfs-cat-file) The relevant details are: * LFS files are stored in the index with the following structure: ``` version https://git-lfs.github.com/spec/v1 oid sha256:abcdeffffffffffffff size nnnnnnnn ``` * The actual LFS object will then be under `.git/lfs/objects/ab/cd/abcdeffffffffffffff`.
45,225,741
I'm learning how to use the Python `xarray` package; however, I'm having trouble with multi-dimensional data. Specifically, how do I add and use additional coordinates? Here's an example. ``` import xarray as xr import pandas as pd import numpy as np site_id = ['brw','sum','mlo'] dss = [] for site in site_id: df = pd.DataFrame(np.random.randn(20,2),columns=['a','b'],index=pd.date_range('20160101',periods=20,freq='MS')) ds = df.to_xarray() dss.append(ds) ds = xr.concat(dss, dim=pd.Index(site_id, name='site')) ds.coords['latitude'] = [71.323, 72.58, 19.5362] ds.coords['longitude'] = [156.6114, 38.48, 155.5763] ``` My `xarray` data set looks like: ``` >>> ds <xarray.Dataset> Dimensions: (index: 20, latitude: 3, longitude: 3, site: 3) Coordinates: * index (index) datetime64[ns] 2016-01-01 2016-02-01 2016-03-01 ... * site (site) object 'brw' 'sum' 'mlo' * latitude (latitude) float64 71.32 72.58 19.54 * longitude (longitude) float64 156.6 38.48 155.6 Data variables: a (site, index) float64 -0.1403 -0.2225 -1.199 -0.8916 0.1149 ... b (site, index) float64 -1.506 0.9106 -0.7359 2.123 -0.1987 ... ``` I can select a series by using the `sel` method based on a site code. For example: ``` >>> ds.sel(site='mlo') ``` But how do I select data based on the other coordinates (i.e. latitude or longitude)? ``` >>> ds.sel(latitude>50) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'latitude' is not defined ```
2017/07/20
[ "https://Stackoverflow.com/questions/45225741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4603445/" ]
Thanks for the easy-to-reproduce example! You can only use `.sel(x=y)` with `=`, because of how Python keyword arguments work. An example using `.isel` with latitude (`sel` is harder because it's a float type): ``` In [7]: ds.isel(latitude=0) Out[7]: <xarray.Dataset> Dimensions: (index: 20, longitude: 3, site: 3) Coordinates: * index (index) datetime64[ns] 2016-01-01 2016-02-01 2016-03-01 ... * site (site) object 'brw' 'sum' 'mlo' latitude float64 71.32 * longitude (longitude) float64 156.6 38.48 155.6 Data variables: a (site, index) float64 0.6493 -0.9105 -0.9963 -0.6206 0.6856 ... b (site, index) float64 -0.03405 -1.49 0.2646 -0.3073 0.6326 ... ``` To use conditions such as `>`, you can use `.where`: ``` In [9]: ds.where(ds.latitude>50, drop=True) Out[9]: <xarray.Dataset> Dimensions: (index: 20, latitude: 2, longitude: 3, site: 3) Coordinates: * index (index) datetime64[ns] 2016-01-01 2016-02-01 2016-03-01 ... * site (site) object 'brw' 'sum' 'mlo' * latitude (latitude) float64 71.32 72.58 * longitude (longitude) float64 156.6 38.48 155.6 Data variables: a (site, index, latitude) float64 0.6493 0.6493 -0.9105 -0.9105 ... b (site, index, latitude) float64 -0.03405 -0.03405 -1.49 -1.49 ... ```
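As a side note on the float-coordinate case mentioned above: `ds.sel` also accepts `method='nearest'`, which sidesteps exact float matching; a one-line sketch, assuming a reasonably recent xarray:

```
# Select the entry whose latitude is closest to the requested value.
ds.sel(latitude=71.323, method='nearest')
```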
Another solution for selecting data through the `sel` method is to use Python's `slice` object. So, in order to select data from an xarray object whose latitude is greater than a given value (e.g. 50 degrees north), one could write the following: ``` ds.sel(dict(latitude=slice(50,None))) ``` I hope it helps.
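One caveat worth adding (my note, not part of the original answer): label-based slicing like this assumes the coordinate is monotonically sorted, so if it is not, sort first:

```
# slice(50, None) only behaves predictably on a sorted coordinate.
ds.sortby('latitude').sel(dict(latitude=slice(50, None)))
```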
62,113,084
I googled it and tried lots of solutions, but this problem still happened. This is my yum.conf: ``` [root@localhost etc]# cat yum.conf [main] gpgcheck=1 installonly_limit=3 clean_requirements_on_remove=True best=True ``` I tried to re-install epel-release: ``` [root@localhost ~]# dnf update Last metadata expiration check: 0:01:38 ago on Sat 09 May 2020 01:30:20 AM EDT. Dependencies resolved. Nothing to do. Complete! [root@localhost ~]# dnf install httpd Last metadata expiration check: 0:01:45 ago on Sat 09 May 2020 01:30:20 AM EDT. No match for argument: httpd Error: Unable to find a match: httpd [root@localhost ~]# dnf provides httpd Last metadata expiration check: 0:01:56 ago on Sat 09 May 2020 01:30:20 AM EDT. Error: No Matches found [root@localhost ~]# dnf install epel-release Last metadata expiration check: 0:02:31 ago on Sat 09 May 2020 01:30:20 AM EDT. Package epel-release-8-8.el8.noarch is already installed. Dependencies resolved. Nothing to do. Complete! ``` Re-enabled modules: ``` [root@localhost ~]# dnf search httpd Last metadata expiration check: 0:02:45 ago on Sat 09 May 2020 01:30:20 AM EDT. ====================================================================== Name & Summary Matched: httpd ======================================================================= libmicrohttpd-doc.noarch : Documentation for libmicrohttpd libmicrohttpd-devel.i686 : Development files for libmicrohttpd libmicrohttpd-devel.x86_64 : Development files for libmicrohttpd lighttpd-filesystem.noarch : The basic directory layout for lighttpd centos-logos-httpd.noarch : CentOS-related icons and pictures used by httpd lighttpd-mod_authn_pam.x86_64 : Authentication module for lighttpd that uses PAM lighttpd-mod_authn_gssapi.x86_64 : Authentication module for lighttpd that uses GSSAPI keycloak-httpd-client-install.noarch : Tools to configure Apache HTTPD as Keycloak client lighttpd-mod_mysql_vhost.x86_64 : Virtual host module for lighttpd that uses a MySQL database lighttpd-fastcgi.x86_64 : FastCGI module and spawning helper for lighttpd and PHP configuration lighttpd-mod_authn_mysql.x86_64 : Authentication module for lighttpd that uses a MySQL database python3-keycloak-httpd-client-install.noarch : Tools to configure Apache HTTPD as Keycloak client =========================================================================== Name Matched: httpd ============================================================================ lighttpd.x86_64 : Lightning fast webserver with light system requirements httpd-devel.x86_64 : Development interfaces for the Apache HTTP server httpd-tools.x86_64 : Tools for use with the Apache HTTP Server httpd-manual.noarch : Documentation for the Apache HTTP server libmicrohttpd.i686 : Lightweight library for embedding a webserver in applications libmicrohttpd.x86_64 : Lightweight library for embedding a webserver in applications sysusage-httpd.noarch : Apache configuration for sysusage httpd-filesystem.noarch : The basic directory layout for the Apache HTTP server httpd-filesystem.noarch : The basic directory layout for the Apache HTTP server ========================================================================== Summary Matched: httpd ========================================================================== mod_dav_svn.x86_64 : Apache httpd module for Subversion server mod_auth_mellon.x86_64 : A SAML 2.0 authentication module for the Apache Httpd Server [root@localhost ~]# dnf module list --enabled Last metadata expiration check: 0:03:42 ago on Sat 09 May 2020 01:30:20 AM EDT. 
CentOS-8 - AppStream Name Stream Profiles Summary go-toolset rhel8 [d][e] common [d] Go httpd 2.4 [d][e] common [d], devel, minimal Apache HTTP Server mariadb 10.3 [d][e] client, server [d], galera MariaDB Module mysql 8.0 [d][e] client, server [d] MySQL Module perl-DBD-MySQL 4.046 [d][e] common [d] A MySQL interface for Perl perl-DBI 1.641 [d][e] common [d] A database access API for Perl python27 2.7 [d][e] common [d] Python programming language, version 2.7 python36 3.6 [d][e] common [d], build Python programming language, version 3.6 satellite-5-client 1.0 [d][e] common [d], gui Red Hat Satellite 5 client packages Remi's Modular repository for Enterprise Linux 8 - x86_64 Name Stream Profiles Summary php remi-7.4 [e] common [d] [i], devel, minimal PHP scripting language Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled [root@localhost ~]# dnf module disable httpd Last metadata expiration check: 0:04:00 ago on Sat 09 May 2020 01:30:20 AM EDT. Dependencies resolved. ============================================================================================================================================================================ Package Architecture Version Repository Size ============================================================================================================================================================================ Disabling modules: httpd Transaction Summary ============================================================================================================================================================================ Is this ok [y/N]: y Complete! [root@localhost ~]# dnf module enable httpd:2.4/common Last metadata expiration check: 0:04:19 ago on Sat 09 May 2020 01:30:20 AM EDT. Ignoring unnecessary profile: 'httpd/common' Dependencies resolved. ============================================================================================================================================================================ Package Architecture Version Repository Size ============================================================================================================================================================================ Enabling module streams: httpd 2.4 Transaction Summary ============================================================================================================================================================================ Is this ok [y/N]: y Complete! 
[root@localhost ~]# dnf module list CentOS-8 - AppStream 3.6 kB/s | 4.3 kB 00:01 CentOS-8 - Base 6.0 kB/s | 3.9 kB 00:00 CentOS-8 - Extras 441 B/s | 1.5 kB 00:03 CentOS-8 - PowerTools 1.9 kB/s | 4.3 kB 00:02 Extra Packages for Enterprise Linux Modular 8 - x86_64 9.5 kB/s | 9.4 kB 00:00 Extra Packages for Enterprise Linux 8 - x86_64 6.1 kB/s | 9.0 kB 00:01 Node.js Packages for Enterprise Linux 8 - x86_64 9.6 kB/s | 2.5 kB 00:00 Remi's Modular repository for Enterprise Linux 8 - x86_64 417 B/s | 3.5 kB 00:08 Safe Remi's RPM repository for Enterprise Linux 8 - x86_64 2.5 kB/s | 3.0 kB 00:01 CentOS-8 - AppStream Name Stream Profiles Summary 389-ds 1.4 389 Directory Server (base) ant 1.10 [d] common [d] Java build tool container-tools rhel8 [d] common [d] Common tools and dependencies for container runtimes container-tools 1.0 common [d] Common tools and dependencies for container runtimes freeradius 3.0 [d] server [d] High-performance and highly configurable free RADIUS server gimp 2.8 [d] common [d], devel gimp module go-toolset rhel8 [d][e] common [d] Go httpd 2.4 [d][e] common [d], devel, minimal Apache HTTP Server idm DL1 common [d], adtrust, client, dns, server The Red Hat Enterprise Linux Identity Management system module idm client [d] common [d] RHEL IdM long term support client module inkscape 0.92.3 [d] common [d] Vector-based drawing program using SVG javapackages-runtime 201801 [d] common [d] Basic runtime utilities to support Java applications jmc rhel8 common, core Java Mission Control is a profiling and diagnostics tool for the Hotspot JVM libselinux-python 2.8 common Python 2 bindings for libselinux llvm-toolset rhel8 [d] common [d] LLVM mailman 2.1 [d] common [d] Electronic mail discussion and e-newsletter lists managing software mariadb 10.3 [d][e] client, server [d], galera MariaDB Module maven 3.5 [d] common [d] Java project management and project comprehension tool mercurial 4.8 [d] common [d] Mercurial -- a distributed SCM mod_auth_openidc 2.3 Apache module suporting OpenID Connect authentication mysql 8.0 [d][e] client, server [d] MySQL Module nginx 1.14 [d] common [d] nginx webserver nginx 1.16 common nginx webserver nodejs 10 [d][x] common [d], development, minimal, s2i Javascript runtime nodejs 12 [x] common, development, minimal, s2i Javascript runtime parfait 0.5 common Parfait Module perl 5.24 common [d], minimal Practical Extraction and Report Language perl 5.26 [d] common [d], minimal Practical Extraction and Report Language perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN modules perl-DBD-MySQL 4.046 [d][e] common [d] A MySQL interface for Perl perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl perl-DBD-SQLite 1.58 [d] common [d] SQLite DBI driver perl-DBI 1.641 [d][e] common [d] A database access API for Perl perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings perl-YAML 1.24 [d] common [d] Perl parser for YAML php 7.2 [d] common [d], devel, minimal PHP scripting language php 7.3 common, devel, minimal PHP scripting language pki-core 10.6 PKI Core module for PKI 10.6 or later pki-deps 10.6 PKI Dependencies module for PKI 10.6 or later postgresql 9.6 client, server [d] PostgreSQL server and client module postgresql 10 [d] client, server [d] PostgreSQL server and client module postgresql 12 client, server PostgreSQL server and client module python27 2.7 [d][e] common [d] Python programming language, version 2.7 python36 3.6 [d][e] common [d], build Python programming language, version 3.6 redis 5 [d] common [d] Redis 
persistent key-value database rhn-tools 1.0 [d] common [d] Red Hat Satellite 5 tools for RHEL ruby 2.5 [d] common [d] An interpreter of object-oriented scripting language ruby 2.6 common An interpreter of object-oriented scripting language rust-toolset rhel8 [d] common [d] Rust satellite-5-client 1.0 [d][e] common [d], gui Red Hat Satellite 5 client packages scala 2.10 [d] common [d] A hybrid functional/object-oriented language for the JVM squid 4 [d] common [d] Squid - Optimising Web Delivery subversion 1.10 [d] common [d], server Apache Subversion swig 3.0 [d] common [d], complete Connects C/C++/Objective C to some high-level programming languages varnish 6 [d] common [d] Varnish HTTP cache virt rhel [d] common [d] Virtualization module CentOS-8 - PowerTools Name Stream Profiles Summary javapackages-tools 201801 common Tools and macros for Java packaging support mariadb-devel 10.3 MariaDB Module virt-devel rhel Virtualization module Extra Packages for Enterprise Linux Modular 8 - x86_64 Name Stream Profiles Summary 389-directory-server stable minimal, legacy, default [d] 389 Directory Server 389-directory-server testing minimal, legacy, default [d] 389 Directory Server avocado latest minimal, default Framework with tools and libraries for Automated Testing cobbler 3 default [d] Versatile Linux deployment server libuv epel8-buildroot devel [d] libuv-devel for EPEL 8 nginx mainline common nginx webserver nodejs 12 [x] development, minimal, default Javascript runtime nodejs 13 [x] development, minimal, default Javascript runtime Remi's Modular repository for Enterprise Linux 8 - x86_64 Name Stream Profiles Summary glpi 9.3 common [d] Free IT asset management software glpi 9.4 [d] common [d] Free IT asset management software php remi-7.2 common [d], devel, minimal PHP scripting language php remi-7.3 common [d], devel, minimal PHP scripting language php remi-7.4 [e] common [d] [i], devel, minimal PHP scripting language redis remi-5.0 common [d] Redis persistent key-value database redis remi-6.0 common [d] Redis persistent key-value database Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled [root@localhost ~]# dnf install @httpd:2.4/common Last metadata expiration check: 0:04:35 ago on Sat 09 May 2020 01:30:20 AM EDT. 
Unable to resolve argument httpd:2.4/common No match for package httpd Error: Unable to find a match ``` The same problem occurs with a community repo: ``` [root@localhost ~]# vim /etc/yum.repos.d/nginx.repo [root@localhost ~]# cat /etc/yum.repos.d/nginx.repo [nginx-stable] name=nginx stable repo baseurl=http://nginx.org/packages/centos/$releasever/$basearch/ gpgcheck=1 enabled=1 gpgkey=https://nginx.org/keys/nginx_signing.key module_hotfixes=true [nginx-mainline] name=nginx mainline repo baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/ gpgcheck=1 enabled=0 gpgkey=https://nginx.org/keys/nginx_signing.key module_hotfixes=true [root@localhost ~]# yum-config-manager --enable nginx-mainline [root@localhost ~]# yum install nginx CentOS-8 - AppStream 7.7 kB/s | 4.3 kB 00:00 CentOS-8 - Base 4.6 kB/s | 3.9 kB 00:00 CentOS-8 - Extras 1.6 kB/s | 1.5 kB 00:00 Extra Packages for Enterprise Linux Modular 8 - x86_64 2.7 kB/s | 8.7 kB 00:03 Extra Packages for Enterprise Linux 8 - x86_64 17 kB/s | 7.4 kB 00:00 nginx stable repo 485 B/s | 15 kB 00:31 nginx mainline repo 4.8 kB/s | 38 kB 00:07 No match for argument: nginx Error: Unable to find a match: nginx ``` Here are some of my repo configuration files: ``` [root@localhost yum.repos.d]# ls CentOS-AppStream.repo CentOS-Debuginfo.repo CentOS-HA.repo CentOS-Vault.repo epel-testing-modular.repo remi.repo CentOS-Base.repo CentOS-Devel.repo CentOS-Media.repo epel-modular.repo epel-testing.repo remi-safe.repo CentOS-centosplus.repo CentOS-Extras.repo CentOS-PowerTools.repo epel-playground.repo nodesource-el8.repo CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo epel.repo remi-modular.repo [root@localhost yum.repos.d]# cat CentOS-AppStream.repo # CentOS-AppStream.repo # # The mirror system uses the connecting IP address of the client and the # update status of each mirror to pick mirrors that are updated to and # geographically close to the client. You should use this for CentOS updates # unless you are manually picking other mirrors. # # If the mirrorlist= does not work for you, as a fall back you can try the # remarked out baseurl= line instead. # # [AppStream] name=CentOS-$releasever - AppStream mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra #baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial [root@localhost yum.repos.d]# cat CentOS-Base.repo # CentOS-Base.repo # # The mirror system uses the connecting IP address of the client and the # update status of each mirror to pick mirrors that are updated to and # geographically close to the client. You should use this for CentOS updates # unless you are manually picking other mirrors. # # If the mirrorlist= does not work for you, as a fall back you can try the # remarked out baseurl= line instead. # # [BaseOS] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra #baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial ```
2020/05/31
[ "https://Stackoverflow.com/questions/62113084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13651016/" ]
You may find the information below useful for installing the Nginx web server. However, Nginx is not available in the CentOS 8 default repository, so follow the steps below, and please let me know whether it works for you. **Step 1: Installation of the EPEL repository** You have to install the `EPEL` (Extra Packages for Enterprise Linux) repository on your server. It is a free repository that gives you access to many other open-source software packages. Use the below command to install `EPEL`. ``` sudo dnf install epel-release ``` Press `y` to accept the installation confirmation and press the `Enter` key. After completing the EPEL installation, run the below command to view the available Nginx versions. ``` dnf module list nginx ``` **Step 2: Enabling the latest Nginx web server module** After that, you have to enable the latest Nginx module. In other words, you have to tell the dnf command to install the newest version. So, use the below command for this purpose. For instance, the Nginx version available in the EPEL repository right now is `1.18`. ``` sudo dnf module enable nginx:1.18 ``` **Step 3: Install the Nginx web server** We have already installed the EPEL repository, so we can install Nginx directly using the dnf command. Use the below command to install the latest version of Nginx. ``` sudo dnf -y install nginx ``` **Step 4: Starting and enabling the Nginx service daemon** Use the below command, which will start the service immediately and also enable it on boot. ``` sudo systemctl enable --now nginx ``` **Step 5: Checking the Nginx service status** ``` sudo systemctl status nginx ``` **Step 6: Configuration of the firewall to allow Nginx traffic** You can run the below commands to add the HTTP and HTTPS services permanently to the firewall. ``` sudo firewall-cmd --permanent --add-service http sudo firewall-cmd --permanent --add-service https ``` After allowing the services, we need to reload the firewall service daemon. ``` sudo firewall-cmd --reload ``` **Step 7: Checking the Nginx web server** Now you can test whether your Nginx web server is up and running by accessing the public IP of your server or domain name from your web browser. **http://domain\_name\_or\_IP** For instance, if the IP address of the server is **192.168.43.164**, the link will look like `http://192.168.43.164` That's it for the Nginx web server installation. If you'd like a few extra details, you can find a complete example on my blog: [How to install Nginx web server on CentOS 8](https://www.linuxgurus.in/how-to-install-nginx-webserver-on-centos-8/)
I hope this helps. Here is how to install Nginx using yum. Instructions for installing Nginx can be found in the Download section of the official NGINX website. ``` sudo vi /etc/yum.repos.d/nginx.repo [nginx] name=nginx repo baseurl=http://nginx.org/packages/OS/OSRELEASE/$basearch/ gpgcheck=0 enabled=1 ``` In the middle, you need to change the ‘OS’ and ‘OSRELEASE’ items to suit your OS. I am installing on CentOS 7.0, so I set it as below. ``` [nginx] name=nginx repo baseurl=http://nginx.org/packages/centos/7/$basearch/ gpgcheck=0 enabled=1 ``` Next step: ``` sudo yum install -y nginx ``` Open the web server ports in the firewall. ``` sudo firewall-cmd --permanent --zone=public --add-service=http sudo firewall-cmd --permanent --zone=public --add-service=https sudo firewall-cmd --reload ``` Run Nginx and set it to start automatically at boot time. ``` systemctl start nginx systemctl enable nginx ``` Source: <http://blog.tjsrms.me/centos-7-nginx-%EC%84%A4%EC%B9%98%ED%95%98%EA%B8%B0/>
57,927,442
On the following link: <https://classicdb.ch/?quest=788> the element at `//*[@id="main-contents"]/div[1]/table[1]/tbody/tr/td` contains the text > > Mottled Boar slain (10) > > > ``` //*[@id="main-contents"]/div[1]/table[1]/tbody/tr/td/a ``` contains only: > > Mottled Boar > > > And I only need the second part that contains: > > slain (10) > > > In Python with Selenium I tried accessing the node directly with: ``` //*[@id="main-contents"]/div[1]/table[1]/tbody/tr/td/text() ``` However the webdriver can contain only webelements and not text nodes. > > The result of the xpath expression > "//\*[@id="main-contents"]/div[1]/table[1]/tbody/tr/td/a/following-sibling::text()" > is: [object Text]. It should be an element. > > > I also tried: ``` //*[@id="main-contents"]/div[1]/table[1]/tbody/tr/td/a/following-sibling::text() ``` But it returns the same error, that it should be an element, not a text object. I have found a workaround by first selecting the text only, then subtracting that from the whole text - but that's ugly. How do I achieve this properly? Thank you! EDIT: I must not use specific values in the code like 'slain' or 'Mottled Boar', because these values can change in other cases.
2019/09/13
[ "https://Stackoverflow.com/questions/57927442", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6854832/" ]
Try this xpath: ``` //table[@class='iconlist']//tr//td[contains(.,'slain')]//a[contains(.,'Mottled Boar')] ``` **Edit** ``` //table[@class='iconlist']//tr//td//a ``` Use the **JavaScript** executor, where `firstChild` will return `Mottled Boar` and `lastChild` will return `slain (10)`: ``` driver.get("https://classicdb.ch/?quest=788") print(driver.execute_script('return arguments[0].lastChild.textContent;', driver.find_element_by_xpath("//table[@class='iconlist']//tr//td[1]"))) print(driver.execute_script('return arguments[0].firstChild.textContent;', driver.find_element_by_xpath("//table[@class='iconlist']//tr//td[1]"))) ```
Your xpath is correct. You can try this approach to get the text directly from that node; you will need the lxml import. ``` from lxml import html tree = html.fromstring(driver.page_source) myText = tree.xpath("//*[@id='main-contents']/div[1]/table[1]/tbody/tr/td/a/following-sibling::text()") print(str(myText).replace('\\t', '')) ```
48,583,455
I'm using the Python C API to call a method. At present I am using [`PyObject_CallMethodObjArgs`](https://docs.python.org/3/c-api/object.html#c.PyObject_CallMethodObjArgs) to do this. This is a variadic function: ``` PyObject* PyObject_CallMethodObjArgs(PyObject *o, PyObject *name, ..., NULL) ``` This is absolutely fine when the number of arguments is known at compile time. However, I have a scenario where the number of arguments is not known until runtime, they are supplied as an array. In essence my issue is precisely the same as in this question: [How can I pass an array as parameters to a vararg function?](https://stackoverflow.com/questions/14705920/passing-an-array-as-parameters-to-a-vararg-function) The accepted answer there tells me that there is no solution to my problem. Is there are way around this hurdle. If I cannot solve the problem using `PyObject_CallMethodObjArgs` is there an alternative function in the Python C API that can meet my needs? For instance, [`PyObject_Call`](https://docs.python.org/3/c-api/object.html#c.PyObject_Call) accepts parameters as a Python sequence object. Is it possible to use this function, or one with a similar interface?
2018/02/02
[ "https://Stackoverflow.com/questions/48583455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/505088/" ]
I am not sure if I am completely wrong, but AFAICT it should be possible to * create a tuple with the required number of arguments * pass this tuple to <https://docs.python.org/3/c-api/object.html#c.PyObject_CallObject> or <https://docs.python.org/3/c-api/object.html#c.PyObject_Call> (the choice depending on whether you need kwargs).
A possible way might be to use [libffi](https://sourceware.org/libffi/), perhaps through the [ctypes](https://docs.python.org/3/library/ctypes.html) Python library. It knows your [ABI](https://en.wikipedia.org/wiki/Application_binary_interface) and [calling conventions](https://en.wikipedia.org/wiki/Calling_convention) (so is partly coded in assembler, for many popular implementations) and enables you to call an arbitrary function of arbitrary signature and arbitrary arity. Notice that there is [no purely standard way of doing that](https://stackoverflow.com/a/14705987/841108) (without using some external library à la `libffi`...) in portable and standard [C11](https://en.wikipedia.org/wiki/C11_(C_standard_revision)) (check by reading [n1570](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf)). BTW, `libffi` can be used from any C program. And `ctypes` can be used from any Python program. The Python [*Embedding and extending Python*](https://docs.python.org/3/extending/index.html) chapter explains how to call Python from C, or C from Python.
42,794,384
[python update](https://i.stack.imgur.com/gyHJ2.png) I have Python 3.5 installed on my (Linux Mint) computer via: ``` sudo apt-get install python3.5 ``` However, when I run `python -V`, it shows that Python 2.7 is being used. How do I tell the system to use the updated version of Python?
2017/03/14
[ "https://Stackoverflow.com/questions/42794384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You have python2.7 installed and you already have a link to the `python2.7` executable so that when you simply run `python`, it actually runs `python2.7`. When you install python3.5, that link still exists. You should either run `python3` (or `python3.5`) or you should replace the link with a new link like so (assuming python3.5 in /usr/bin): ``` ln -sf /usr/bin/python3.5 /usr/bin/python ``` Add `sudo` before the command if you don't have permissions to create the link and your user is a sudoer.
More dynamically, ``` ln -sf $(which python3) $(which python) ``` which replaces the existing `python` link so that it points at `python3`.
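Either way, a quick sanity check from inside the interpreter confirms which version the bare `python` name now resolves to:

```
import sys
print(sys.version)      # should report 3.5.x after relinking
print(sys.executable)   # e.g. /usr/bin/python3.5
```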
50,619,846
I'm trying to create a series of subplots: ``` count=0 fig1, axes1 = plt.subplots(nrows=2, ncols=1, figsize=(10,80)) for x in b: """code gets data here as a dataframe""" axes1[count]=q1.plot() count=count+1 ``` However, this creates two plots rather than 2 subplots in one figure. I am using Python 3.5 in PyCharm. I've been racking my brain over what I'm doing wrong here.
2018/05/31
[ "https://Stackoverflow.com/questions/50619846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9874771/" ]
The `DataReceived` event is raised on another/secondary thread, which means you will have to marshal back to the UI thread to update your `TextBox` [SerialPort.DataReceived Event](https://msdn.microsoft.com/fi-fi/library/system.io.ports.serialport.datareceived(v=vs.110).aspx) > > The DataReceived event is raised on a secondary thread when data is > received from the SerialPort object. Because this event is raised on a > secondary thread, and not the main thread, attempting to modify some > elements in the main thread, such as UI elements, could raise a > threading exception. If it is necessary to modify elements in the main > Form or Control, post change requests back using Invoke, which will do > the work on the proper thread. > > > You can use `Dispatcher.BeginInvoke` or the `Dispatcher.Invoke` method to marshal back to the main thread **Example** ``` Application.Current.Dispatcher.Invoke(new Action(() => { /* Your code here */ })); ``` or ``` someControl.Dispatcher.Invoke(new Action(() => { /* Your code here */ })); ```
Now I have the problem that when I send my command using **SerialPort.Write(string cmd)**, I can't read back the answer...
57,551,049
I am new to web scraping. I am trying to extract data using Python from <https://www.clinicaltrialsregister.eu> using the keywords "acute myeloid leukemia", "chronic myeloid leukemia", "acute lymphoblastic leukemia" to extract the following information: EudraCT Number, Trial Status, Full title of the trial, Name of Sponsor, Country, Medical condition(s) being investigated, Investigator Networks to be involved in the Trial. I am trying to collect the URL from each link and then go to each page to extract the information, but I am not getting proper links. I want URLs like "<https://www.clinicaltrialsregister.eu/ctr-search/trial/2014-000526-37/DE>" but am getting ``` '/ctr-search/trial/2014-000526-37/DE', '/ctr-search/trial/2006-001777-19/NL', '/ctr-search/trial/2006-001777-19/BE', '/ctr-search/trial/2007-000273-35/IT', '/ctr-search/trial/2011-005934-20/FR', '/ctr-search/trial/2006-004950-25/GB', '/ctr-search/trial/2009-017347-33/DE', '/ctr-search/trial/2012-000334-19/IT', '/ctr-search/trial/2012-001594-93/FR', '/ctr-search/trial/2012-001594-93/results', '/ctr-search/trial/2007-003103-12/DE', '/ctr-search/trial/2006-004517-17/FR', '/ctr-search/trial/2013-003421-28/DE', '/ctr-search/trial/2008-002986-30/FR', '/ctr-search/trial/2008-002986-30/results', '/ctr-search/trial/2013-000238-37/NL', '/ctr-search/trial/2010-018418-53/FR', '/ctr-search/trial/2010-018418-53/NL', '/ctr-search/trial/2010-018418-53/HU', '/ctr-search/trial/2010-018418-53/DE', '/ctr-search/trial/2010-018418-53/results', '/ctr-search/trial/2006-006852-37/DE', '/ctr-search/trial/2006-006852-37/ES', '/ctr-search/trial/2006-006852-37/AT', '/ctr-search/trial/2006-006852-37/CZ', '/ctr-search/trial/2006-006852-37/NL', '/ctr-search/trial/2006-006852-37/SK', '/ctr-search/trial/2006-006852-37/HU', '/ctr-search/trial/2006-006852-37/BE', '/ctr-search/trial/2006-006852-37/IT', '/ctr-search/trial/2006-006852-37/FR', '/ctr-search/trial/2006-006852-37/GB', '/ctr-search/trial/2008-000664-16/IT', '/ctr-search/trial/2005-005321-63/IT', '/ctr-search/trial/2005-005321-63/results', '/ctr-search/trial/2011-005023-40/GB', '/ctr-search/trial/2010-022446-24/DE', '/ctr-search/trial/2010-019710-24/IT', ``` Attempted code: ``` import requests from bs4 import BeautifulSoup page = requests.get('https://www.clinicaltrialsregister.eu/ctr-search/search?query=acute+myeloid+leukemia&page=1') soup = BeautifulSoup(page.text, 'html.parser') #links = [a['href'] for a in soup.find_all('a', href=True) if a.text] #links_with_text = [] #for a in soup.find_all('a', href=True): # if a.text: # links_with_text.append(a['href']) links = [a['href'] for a in soup.find_all('a', href=True)] ``` Output: ``` '/help.html', '/ctr-search/search', '/joiningtrial.html', '/contacts.html', '/about.html', '/about.html', '/whatsNew.html', '/dataquality.html', '/doc/Sponsor_Contact_Information_EUCTR.pdf', '/natauthorities.html', '/links.html', '/about.html', '/doc/How_to_Search_EU_CTR.pdf#zoom=100,0,0', 'javascript:void(0)', 'javascript:void(0)', 'javascript:void(0)', 'javascript:void();', '#tabs-1', '#tabs-2', '&page=2', '&page=3', '&page=4', '&page=5', '&page=6', '&page=7', '&page=8', '&page=9', '&page=2', '&page=19', '/ctr-search/trial/2014-000526-37/DE', '/ctr-search/trial/2006-001777-19/NL', '/ctr-search/trial/2006-001777-19/BE', '/ctr-search/trial/2007-000273-35/IT', '/ctr-search/trial/2011-005934-20/FR', '/ctr-search/trial/2006-004950-25/GB', '/ctr-search/trial/2009-017347-33/DE', '/ctr-search/trial/2012-000334-19/IT', '/ctr-search/trial/2012-001594-93/FR',
'/ctr-search/trial/2012-001594-93/results', '/ctr-search/trial/2007-003103-12/DE', '/ctr-search/trial/2006-004517-17/FR', '/ctr-search/trial/2013-003421-28/DE', '/ctr-search/trial/2008-002986-30/FR', '/ctr-search/trial/2008-002986-30/results', '/ctr-search/trial/2013-000238-37/NL', '/ctr-search/trial/2010-018418-53/FR', '/ctr-search/trial/2010-018418-53/NL', '/ctr-search/trial/2010-018418-53/HU', '/ctr-search/trial/2010-018418-53/DE', '/ctr-search/trial/2010-018418-53/results', '/ctr-search/trial/2006-006852-37/DE', '/ctr-search/trial/2006-006852-37/ES', '/ctr-search/trial/2006-006852-37/AT', '/ctr-search/trial/2006-006852-37/CZ', '/ctr-search/trial/2006-006852-37/NL', '/ctr-search/trial/2006-006852-37/SK', '/ctr-search/trial/2006-006852-37/HU', '/ctr-search/trial/2006-006852-37/BE', '/ctr-search/trial/2006-006852-37/IT', '/ctr-search/trial/2006-006852-37/FR', '/ctr-search/trial/2006-006852-37/GB', '/ctr-search/trial/2008-000664-16/IT', '/ctr-search/trial/2005-005321-63/IT', '/ctr-search/trial/2005-005321-63/results', '/ctr-search/trial/2011-005023-40/GB', '/ctr-search/trial/2010-022446-24/DE', '/ctr-search/trial/2010-019710-24/IT', 'javascript:void(0)', '&page=2', '&page=3', '&page=4', '&page=5', '&page=6', '&page=7', '&page=8', '&page=9', '&page=2', '&page=19', 'https://servicedesk.ema.europa.eu', '/disclaimer.html', 'http://www.ema.europa.eu', 'http://www.hma.eu' ```
2019/08/19
[ "https://Stackoverflow.com/questions/57551049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11266219/" ]
As I said, you can achieve this by concatenating the required base URL onto every result. Try this code: ``` import requests from bs4 import BeautifulSoup page = requests.get('https://www.clinicaltrialsregister.eu/ctr-search/search?query=acute+myeloid+leukemia&page=1') soup = BeautifulSoup(page.text, 'html.parser') links = ["https://www.clinicaltrialsregister.eu" + a['href'] for a in soup.find_all('a', href=True)] ```
This script will traverse all pages of the search results and try to find the relevant information. Note that you have to build full URLs by prefixing `https://www.clinicaltrialsregister.eu` to the relative links. ``` import requests from bs4 import BeautifulSoup base_url = 'https://www.clinicaltrialsregister.eu/ctr-search/search?query=acute+myeloid+leukemia' url = base_url + '&page=1' soup = BeautifulSoup(requests.get(url).text, 'lxml') page = 1 while True: print('Page no.{}'.format(page)) print('-' * 160) print() for table in soup.select('table.result'): print('EudraCT Number: ', end='') for span in table.select('td:contains("EudraCT Number:")'): print(span.get_text(strip=True).split(':')[1]) print('Full Title: ', end='') for td in table.select('td:contains("Full Title:")'): print(td.get_text(strip=True).split(':')[1]) print('Sponsor Name: ', end='') for td in table.select('td:contains("Sponsor Name:")'): print(td.get_text(strip=True).split(':')[1]) print('Trial protocol: ', end='') for a in table.select('td:contains("Trial protocol:") a'): print(a.get_text(strip=True), end=' ') print() print('Medical condition: ', end='') for td in table.select('td:contains("Medical condition:")'): print(td.get_text(strip=True).split(':')[1]) print('-' * 160) next_page = soup.select_one('a:contains("Next»")') if next_page: soup = BeautifulSoup(requests.get(base_url + next_page['href']).text, 'lxml') page += 1 else: break ``` Prints: ``` Page no.1 ---------------------------------------------------------------------------------------------------------------------------------------------------------------- EudraCT Number: 2014-000526-37 Full Title: An Investigator-Initiated Study To Evaluate Ara-C and Idarubicin in Combination with the Selective Inhibitor Of Nuclear Export (SINE) Selinexor (KPT-330) in Patients with Relapsed Or Refractory A... Sponsor Name: GSO Global Clinical Research B.V. Trial protocol: DE Medical condition: Patients with relapsed/refractory Acute Myeloid Leukemia (AML) ---------------------------------------------------------------------------------------------------------------------------------------------------------------- EudraCT Number: 2006-001777-19 Full Title: A Phase II multicenter study to assess the tolerability and efficacy of the addition of Bevacizumab to standard induction therapy in AML and high risk MDS above 60 years. Sponsor Name: HOVON foundation Trial protocol: NL BE Medical condition: Acute myeloid leukaemia (AML), AML FAB M0-M2 or M4-M7; diagnosis with refractory anemia with excess of blasts (RAEB) or refractory anemia with excess of blasts in transformation (RAEB-T) with an IP... ---------------------------------------------------------------------------------------------------------------------------------------------------------------- EudraCT Number: 2007-000273-35 Full Title: A Phase II, Open-Label, Multi-centre, 2-part study to assess the Safety, Tolerability, and Efficacy of Tipifarnib Plus Bortezomib in the Treatment of Newly Diagnosed Acute Myeloid Leukemia AML ... Sponsor Name: AZIENDA OSPEDALIERA DI BOLOGNA POLICLINICO S. ORSOLA M. MALPIGHI Trial protocol: IT Medical condition: Acute Myeloid Leukemia ---------------------------------------------------------------------------------------------------------------------------------------------------------------- ...and so on. ```
11,300,737
I can't import rdflib in Python. Detailed error: ``` Python 2.7.3 (default, Jun 27 2012, 23:48:21) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import rdflib Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/rdflib/__init__.py", line 65, in <module> from rdflib.term import URIRef, BNode, Literal, Variable File "/usr/local/lib/python2.7/site-packages/rdflib/term.py", line 49, in <module> from isodate import parse_time, parse_date, parse_datetime ImportError: No module named isodate ``` I would be grateful if anybody can help. Thanks.
2012/07/02
[ "https://Stackoverflow.com/questions/11300737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1497041/" ]
If you actually install rdflib via `pip`, then its dependencies will come along with it (isodate included): ``` pip install -U rdflib ``` or ``` easy_install -U rdflib ``` Chances are you might have installed it directly from source, meaning you would have to take care of the deps yourself. Information on installing pip if you don't have it already: <http://www.pip-installer.org/en/latest/installing.html> If you have easy\_install, you can do: `easy_install pip`
It seems there is a dependency on [isodate](http://pypi.python.org/pypi/isodate/), so try installing that via your favorite PyPI installer (*pip* or *easy_install*).
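Once isodate is installed, a quick sanity check that the import rdflib needs now works:

```
import isodate
print(isodate.parse_date('2012-07-02'))   # -> datetime.date(2012, 7, 2)
```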
5,646,322
I've written a Python CGI script to generate random numbers and add them together, then ask the user to give the answer. But even when the user's answer is correct, it tells him it is wrong, and the displayed correct answer does not match the question shown. The problem is in the flow: ``` #!/usr/bin/python2.7 import cgi,sys,random sys.stderr = sys.stdout input_field = cgi.FieldStorage() x = random.randrange(1,20) y = random.randrange(1,20) answer = x + y print("Content-type: text/html\n") print ("<center>") if "input" in input_field: if input_field["input"].value == answer: print("Content-Type content=text/htm") print "<br><br><b> Good :D </b" else: print "<br><br><b> Wrong :( </b><br><br>" print "Your Answer :",input_field['input'].value print "<br>","Correct Answer",answer else: print "<br><br>",x,"+",y,"<br><br>" print(""" <form method=POST action=""> <TR> <TH align=right>Answer:<TD> <input type=text name=input> <TR> """) print ("</center>") ``` e.g.: --- > > 3 + 9 > > > Answer: [12 ] > > > --- > > **Wrong :(** > > > Your Answer : 12 > Correct Answer 17 > > > Does anyone know what I should do to fix the flow? **It generates new numbers each time** **Note: This is not a type error, it's a *logical* error**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` javascript: var all_product_ids = #{raw existing_ids.to_json}; var products_json = #{raw @filter.data.to_json}; ```
I had a similar problem with this. Using the code that others have provided didn't work for me because Slim was HTML-escaping my variable. I ended up using [Gon](https://github.com/gazay/gon-sinatra). This one is for Sinatra, but they have a gem for Rails as well. Hope it helps others having similar problems.
5,646,322
I've written a Python CGI script to generate random numbers and add them together, then ask the user to give the answer. But even when the user's answer is correct, it tells him it is wrong, and the displayed correct answer does not match the question shown. The problem is in the flow: ``` #!/usr/bin/python2.7 import cgi,sys,random sys.stderr = sys.stdout input_field = cgi.FieldStorage() x = random.randrange(1,20) y = random.randrange(1,20) answer = x + y print("Content-type: text/html\n") print ("<center>") if "input" in input_field: if input_field["input"].value == answer: print("Content-Type content=text/htm") print "<br><br><b> Good :D </b" else: print "<br><br><b> Wrong :( </b><br><br>" print "Your Answer :",input_field['input'].value print "<br>","Correct Answer",answer else: print "<br><br>",x,"+",y,"<br><br>" print(""" <form method=POST action=""> <TR> <TH align=right>Answer:<TD> <input type=text name=input> <TR> """) print ("</center>") ``` e.g.: --- > > 3 + 9 > > > Answer: [12 ] > > > --- > > **Wrong :(** > > > Your Answer : 12 > Correct Answer 17 > > > Does anyone know what I should do to fix the flow? **It generates new numbers each time** **Note: This is not a type error, it's a *logical* error**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
From the Text Interpolation section of <https://github.com/slim-template/slim>: Slim escapes HTML by default. To avoid this, use **#{{content}}** instead of **#{content}**.
First create a `_your_erb_file_contain_javascript_code.erb` partial, and then in the `javascript:` part of your Slim file: ``` #{ conditions ? (render 'your_erb_file_contain_javascript_code') : nil} ```
5,646,322
I've written a Python CGI script to generate random numbers and add them together, then ask the user to give the answer. But even when the user's answer is correct, it tells him it is wrong, and the displayed correct answer does not match the question shown. The problem is in the flow: ``` #!/usr/bin/python2.7 import cgi,sys,random sys.stderr = sys.stdout input_field = cgi.FieldStorage() x = random.randrange(1,20) y = random.randrange(1,20) answer = x + y print("Content-type: text/html\n") print ("<center>") if "input" in input_field: if input_field["input"].value == answer: print("Content-Type content=text/htm") print "<br><br><b> Good :D </b" else: print "<br><br><b> Wrong :( </b><br><br>" print "Your Answer :",input_field['input'].value print "<br>","Correct Answer",answer else: print "<br><br>",x,"+",y,"<br><br>" print(""" <form method=POST action=""> <TR> <TH align=right>Answer:<TD> <input type=text name=input> <TR> """) print ("</center>") ``` e.g.: --- > > 3 + 9 > > > Answer: [12 ] > > > --- > > **Wrong :(** > > > Your Answer : 12 > Correct Answer 17 > > > Does anyone know what I should do to fix the flow? **It generates new numbers each time** **Note: This is not a type error, it's a *logical* error**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Slim ``` javascript: var all_product_ids = "#{existing_ids.to_json}"; var products_json = "#{@filter.data.to_json}"; ```
What I prefer to do is keep all the JavaScript in a separate file. For example, I would do it as follows (jQuery): in your layout: ... ``` <body data-product-ids="<%= existing_ids.to_json %>" data-products-json="<%= @filter.data.to_json %>"> ``` ... in JS: ``` // create your application namespace $.myapp = {}; $.myapp.allProductIds = $.parseJSON( $('body').attr('data-product-ids') ); $.myapp.products = $.parseJSON( $('body').attr('data-products-json') ); ``` So you'll then use it in JS like so: ``` $.myapp.allProductIds ```
5,646,322
I've written a Python CGI script to generate random numbers and add them together, then ask the user to give the answer. But even when the user's answer is correct, it tells him it is wrong, and the displayed correct answer does not match the question shown. The problem is in the flow: ``` #!/usr/bin/python2.7 import cgi,sys,random sys.stderr = sys.stdout input_field = cgi.FieldStorage() x = random.randrange(1,20) y = random.randrange(1,20) answer = x + y print("Content-type: text/html\n") print ("<center>") if "input" in input_field: if input_field["input"].value == answer: print("Content-Type content=text/htm") print "<br><br><b> Good :D </b" else: print "<br><br><b> Wrong :( </b><br><br>" print "Your Answer :",input_field['input'].value print "<br>","Correct Answer",answer else: print "<br><br>",x,"+",y,"<br><br>" print(""" <form method=POST action=""> <TR> <TH align=right>Answer:<TD> <input type=text name=input> <TR> """) print ("</center>") ``` e.g.: --- > > 3 + 9 > > > Answer: [12 ] > > > --- > > **Wrong :(** > > > Your Answer : 12 > Correct Answer 17 > > > Does anyone know what I should do to fix the flow? **It generates new numbers each time** **Note: This is not a type error, it's a *logical* error**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Slim ``` javascript: var all_product_ids = "#{existing_ids.to_json}"; var products_json = "#{@filter.data.to_json}"; ```
Create an ERB partial `_your_erb_file_contain_javascript_code.erb` first, and then in the `javascript:` part of your Slim file:

```
#{ conditions ? (render 'your_erb_file_contain_javascript_code') : nil}
```
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` javascript: var all_product_ids = #{raw existing_ids.to_json}; var products_json = #{raw @filter.data.to_json}; ```
``` javascript: var isAdmin = "#{current_user.admin? ? 1 : 0}"; ```
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
What I prefer to do is keep all the JavaScript in a separate file. For example, I would do it as follows (jQuery):

in your layout:

...

```
<body data-product-ids="<%= existing_ids.to_json %>" data-products-json="<%= @filter.data.to_json %>">
```

...

in JS:

```
// create your application namespace
$.myapp = {};

$.myapp.allProductIds = $.parseJSON( $('body').attr('data-product-ids') );
$.myapp.products = $.parseJSON( $('body').attr('data-products-json') );
```

You'll then use it in JS like so:

```
$.myapp.allProductIds
```
Create an ERB partial `_your_erb_file_contain_javascript_code.erb` first, and then in the `javascript:` part of your Slim file:

```
#{ conditions ? (render 'your_erb_file_contain_javascript_code') : nil}
```
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
What I prefer to do is keep all the JavaScript in a separate file. For example, I would do it as follows (jQuery):

in your layout:

...

```
<body data-product-ids="<%= existing_ids.to_json %>" data-products-json="<%= @filter.data.to_json %>">
```

...

in JS:

```
// create your application namespace
$.myapp = {};

$.myapp.allProductIds = $.parseJSON( $('body').attr('data-product-ids') );
$.myapp.products = $.parseJSON( $('body').attr('data-products-json') );
```

You'll then use it in JS like so:

```
$.myapp.allProductIds
```
I had a similar problem with this. Using the code that others have provided didn't work for me because Slim was HTML-escaping my variable. I ended up using [Gon](https://github.com/gazay/gon-sinatra). This one is for Sinatra, but they have a gem for Rails as well. Hope it helps others with similar problems.
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
From <https://github.com/slim-template/slim>, the Text Interpolation section: Slim escapes HTML by default. To avoid the escaping, use **#{{content}}** instead of **#{content}**.
I had a similar problem with this. Using the code that others have provided didn't work for me because Slim was HTML-escaping my variable. I ended up using [Gon](https://github.com/gazay/gon-sinatra). This one is for Sinatra, but they have a gem for Rails as well. Hope it helps others with similar problems.
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Slim ``` javascript: var all_product_ids = "#{existing_ids.to_json}"; var products_json = "#{@filter.data.to_json}"; ```
``` javascript: var isAdmin = "#{current_user.admin? ? 1 : 0}"; ```
5,646,322
I've written a Python CGI script that generates random numbers, adds them together, and then asks the user to give the answer. But even when the user's answer is correct, it tells him he is wrong, and the reported correct answer doesn't match. The problem is in the flow:

```
#!/usr/bin/python2.7
import cgi,sys,random
sys.stderr = sys.stdout
input_field = cgi.FieldStorage()

x = random.randrange(1,20)
y = random.randrange(1,20)
answer = x + y

print("Content-type: text/html\n")
print ("<center>")
if "input" in input_field:
    if input_field["input"].value == answer:
        print("Content-Type content=text/htm")
        print "<br><br><b> Good :D </b"
    else:
        print "<br><br><b> Wrong :( </b><br><br>"
        print "Your Answer :",input_field['input'].value
        print "<br>","Correct Answer",answer
else:
    print "<br><br>",x,"+",y,"<br><br>"
    print("""
    <form method=POST action="">
    <TR>
    <TH align=right>Answer:<TD> <input type=text name=input>
    <TR>
    """)
print ("</center>")
```

e.g.:

---

> 3 + 9
>
> Answer: [12 ]

---

> **Wrong :(**
>
> Your Answer : 12
> Correct Answer 17

Does anyone know what I should do to fix the flow? **It generates new numbers each time.**

**Note: this is not a type error, it's a *logical* error.**
2011/04/13
[ "https://Stackoverflow.com/questions/5646322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` javascript: var all_product_ids = #{raw existing_ids.to_json}; var products_json = #{raw @filter.data.to_json}; ```
Create an ERB partial `_your_erb_file_contain_javascript_code.erb` first, and then in the `javascript:` part of your Slim file:

```
#{ conditions ? (render 'your_erb_file_contain_javascript_code') : nil}
```
58,917,280
I am trying to base64 encode using a custom character set in Python 3. Most of the examples I have seen on SO are related to Python 2, so I had to make some minor adjustments to the code. The issue that I am facing is that I am replacing the character `/` with `_`, but it is still printing with `/`. My code is below. This is just an example; I am not trying to do only URL-safe base64. `custom` could be anything with the correct length.

```py
import base64

data = 'some random? data'
print(base64.b64encode(data.encode()))

std_base64chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
custom = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

data = data.translate(str.maketrans(custom, std_base64chars)).encode()
print(base64.b64encode(data))

# Both prints
b'c29tZSByYW5kb20/IGRhdGE='
b'c29tZSByYW5kb20/IGRhdGE='
```

How can I get the translation to work so that occurrences of `/` are replaced correctly with `_`?

Edit
----

I should make it clear that I am not trying to do only one type of base64 encoding here, like urlsafe, but any possible character set. This will be a function where a user can pass their own charset. I am looking for a character-by-character mapping, not string slicing.

Edit
----

Because there is some confusion around the clarity of my question, I am trying to add more details. I am trying to write a function that can take an arbitrary charset from a user, and then map the characters individually before base64 encoding. Most of the answers have been around manipulating `altchars` or string slice-and-replace, but that doesn't solve all the needs. So for example, the itoa64 charset is: `./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz=` and the unix crypt format is `./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz`. The answers, although correct, do not address these situations.
2019/11/18
[ "https://Stackoverflow.com/questions/58917280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7402287/" ]
If the only characters you want to switch are `+` and `/`, you can use [base64.urlsafe\_b64encode](https://docs.python.org/2/library/base64.html#base64.urlsafe_b64encode) to replace them with `-` and `_` respectively.

```
>>> base64.urlsafe_b64encode(data.encode())
b'c29tZSByYW5kb20_IGRhdGE='
```

Alternatively, you can replace those characters with characters of your own choice using the optional argument of [base64.b64encode](https://docs.python.org/2/library/base64.html#base64.b64encode):

```
>>> base64.b64encode(data.encode(), '*&'.encode())
b'c29tZSByYW5kb20&IGRhdGE='
```

If you need to use an entirely new alphabet, you can do

```
import base64

data = 'some random? data'
print(base64.b64encode(data.encode()))

std_base64chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
custom = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_"

x = base64.b64encode(data.encode())

print(bytes(str(x)[2:-1].translate(str(x)[2:-1].maketrans(std_base64chars, custom)), 'utf-8'))
```

Which outputs:

```
b'c29tZSByYW5kb20/IGRhdGE='
b'C29TzsbYyw5KB20_igrHDge='
```
Shouldn't this work:

```
import base64

data = 'some random? data'
custom = b"-_"

rslt = base64.b64encode(data.encode())
print(rslt)

rslt = base64.b64encode(data.encode(), altchars=custom)
print(rslt)
```

I get the following output:

```
c29tZSByYW5kb20/IGRhdGE=
c29tZSByYW5kb20_IGRhdGE=
```

or, if you insist that `custom` contains the full alphabet:

```
custom = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
```

then use:

```
rslt = base64.b64encode(data.encode(), altchars=custom[-2:].encode())
```
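For the fully arbitrary alphabets mentioned in the question (itoa64, unix crypt), a hedged sketch is to translate the standard base64 output with `bytes.maketrans`; the helper name here is made up, and the alphabet is assumed to be exactly 64 bytes:

```python
import base64

STD = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64encode_custom(data: bytes, alphabet: bytes) -> bytes:
    # Character-by-character mapping from the standard alphabet to a
    # user-supplied one; 'alphabet' must be exactly 64 bytes long.
    assert len(alphabet) == 64
    return base64.b64encode(data).translate(bytes.maketrans(STD, alphabet))

crypt_alphabet = b"./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
print(b64encode_custom(b"some random? data", crypt_alphabet))
```

The `=` padding byte is not in either alphabet, so it passes through the translation unchanged.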
65,335,763
Good day. I am using FastAPI and I want to render the database contents on index.html; however, I get the following error:

```
INFO:     127.0.0.1:55139 - "GET /?skip=0&limit=100 HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 394, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/fastapi/applications.py", line 190, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/middleware/cors.py", line 78, in __call__
    await self.app(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/fastapi/routing.py", line 188, in app
    raw_response = await run_endpoint_function(
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/fastapi/routing.py", line 137, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/Users/barnaby/.local/share/virtualenvs/fastapi-example-6xjq_vv2/lib/python3.9/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
  File "/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "./sql_app/main.py", line 51, in read_notes
    "title": title,
NameError: name 'title' is not defined
```

Main.py

```
from fastapi import FastAPI
from typing import List, Dict
from fastapi import Depends, FastAPI, HTTPException, Request, Response
from fastapi.responses import HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
from sqlalchemy.orm import Session
from fastapi.templating import Jinja2Templates

from . import crud, models, schemas
from .database import SessionLocal, engine

models.Base.metadata.create_all(bind=engine)

templates = Jinja2Templates(directory="templates")

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# #original function
# @app.get("/notes", response_model=List[schemas.Note])
# def read_notes(request: Request, skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
#     notes = crud.get_notes(db=db, skip=skip, limit=limit)
#     print(notes)
#     return notes

@app.get("/", response_class=HTMLResponse)
def read_notes(request: Request, skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    notes = crud.get_notes(db=db, skip=skip, limit=limit)
    print(notes)
    return templates.TemplateResponse("index.html", {
        "request": request,
        "id": id,
        "title": title,
        "description": description
    })

@app.post("/notes", response_model=schemas.Note, status_code=201)
def create_note(note: schemas.NoteCreate, db: Session = Depends(get_db)):
    return crud.create_note(db=db, note=note)

@app.get("/notes/{note_id}", response_model=schemas.Note)
def read_user(note_id: int, db: Session = Depends(get_db)):
    db_note = crud.get_note(db=db, note_id=note_id)
    if db_note is None:
        raise HTTPException(status_code=404, detail="Note not found")
    return db_note

@app.delete("/notes/{note_id}", status_code=204)
async def delete_note(note_id: int, db: Session = Depends(get_db)):
    return crud.delete_note(db=db, note_id=note_id)

@app.put("/notes/{note_id}", status_code=200)
async def put_note(note_id: int, note: schemas.NoteCreate, db: Session = Depends(get_db)):
    db_note = schemas.Note(id=note_id, title=note.title, description=note.description)
    crud.update_note(db=db, note=db_note)

@app.patch("/notes/{note_id}", status_code=200)
async def patch_note(note_id: int, note: schemas.NoteCreate, db: Session = Depends(get_db)):
    print(note_id)
    print(note.title)
    print(note.description)
    db_note = schemas.Note(id=note_id, title=note.title, description=note.description)
    crud.update_note(db=db, note=db_note)

if __name__ == '__main__':
    uvicorn.run("main:app", host="127.0.0.1", port=8000)
```

crud.py

```
from sqlalchemy.orm import Session

from . import models, schemas

def get_note(db: Session, note_id: int):
    return db.query(models.Note).filter(models.Note.id == note_id).first()

def delete_note(db: Session, note_id: int):
    db_note = db.query(models.Note).filter(models.Note.id == note_id).first()
    db.delete(db_note)
    db.commit()
    return {}

def get_notes(db: Session, skip: int = 0, limit: int = 100):
    return db.query(models.Note).offset(skip).limit(limit).all()

def create_note(db: Session, note: schemas.NoteCreate):
    db_note = models.Note(title=note.title, description=note.description)
    db.add(db_note)
    db.commit()
    db.refresh(db_note)
    return db_note

def update_note(db: Session, note: schemas.Note):
    db_note = db.query(models.Note).filter(models.Note.id == note.id).first()
    db_note.title = note.title
    db_note.description = note.description
    db.commit()
    db.refresh(db_note)
    return db_note
```

schemas.py

```
class NoteBase(BaseModel):
    title: str
    description: str

class NoteCreate(NoteBase):
    pass

class Note(NoteBase):
    id: int

    class Config:
        orm_mode = True
```

models.py

```
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship

from .database import Base

class Note(Base):
    __tablename__ = "notes"

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, nullable=True, default="new")
    description = Column(String, nullable=True, default="new")
```

index.html

```
{% extends 'layout.html' %}
{% include 'header.html' %}

{% block title %} Home {% endblock %}

{% block body %}
<div class="container">
  <div class="row">
    <div class="col md-12">
      <div class="jumbotron">
        <table class="table">
          <thead>
            <tr>
              <th scope="col">ID</th>
              <th scope="col">Title</th>
              <th scope="col">Description</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              {% for note in notes %}
              <td>{{notes.id}}</td>
              <td>{{notes.title}}</td>
              <td>{{notes.description}}</td>
            </tr>
            {% endfor %}
            <tr>
          </tbody>
        </table>
      </div>
    </div>
  </div>
</div>
{% endblock %}
```

Is my schema/models set up incorrectly? Looking at this endpoint, <http://127.0.0.1:8000/notes>, I get the following displayed. However, this is in list format, and somehow I am unable to render it via the HTML template.

```
[
    {
        "title": "title 3",
        "description": "title 3 description",
        "id": 3
    },
    {
        "title": "title 1 updated",
        "description": "string",
        "id": 1
    },
    {
        "title": "Title updated 2",
        "description": "description updated2",
        "id": 2
    },
    {
        "title": "Note 4",
        "description": "Note 4 description",
        "id": 4
    }
]
```

database.py

```
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_DATABASE_URL = "postgresql://postgres:123456789@localhost/notes"

engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
```
2020/12/17
[ "https://Stackoverflow.com/questions/65335763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14372626/" ]
The BMC does not use OS services. The BMC is completely OS-independent and may monitor and control the hardware even when no OS is running or installed. The BMC's power line is independent of the host power, and the BMC stays powered even when the host is powered off; this is ensured by the power-source design. The BMC can control the host power supply: it usually has a direct hardware link to the power supply, so it sends the power-off command directly to the power supply and not to the host OS.
A general design is: the BMC connects to a CPLD, which controls the power sequence. When a power-off is needed, the BMC triggers the CPLD, so the effect is the same as if someone had pushed the front-panel power button.
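On the question's traceback itself: `read_notes` builds the template context from `title`, `description`, and `id`, none of which exist in that scope, hence the `NameError`. A minimal sketch of one fix, assuming the `app`, `templates`, `crud`, and `get_db` objects from the question's Main.py, is to pass the queried rows through and loop over them in the template:

```python
@app.get("/", response_class=HTMLResponse)
def read_notes(request: Request, skip: int = 0, limit: int = 100,
               db: Session = Depends(get_db)):
    # Hand the ORM objects to Jinja instead of undefined names.
    notes = crud.get_notes(db=db, skip=skip, limit=limit)
    return templates.TemplateResponse(
        "index.html", {"request": request, "notes": notes})
```

In index.html the loop variable would then be referenced as `{{note.id}}`, `{{note.title}}`, and `{{note.description}}` rather than `notes.*`.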
59,087,639
I have the following code:

```
for i in range (0,20,1):
    df = pd.read_excel(url, sheet_name=i,sep='\s*,\s*')
    print('sample:',i+1)
    df1 = df.loc[0:50] #initial push
    ma=df1['Latest: Potential (V)'].values.tolist()
    max_force_initial_push=max(ma)
```

And when I run it, I get the following `error`:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-41-e56ce0a5f5fd> in <module>
     45         ma=df1['Latest: Potential (V)'].values.tolist()
     46         print ('ma: ', ma)
---> 47         max_force_initial_push=max(ma)

TypeError: 'list' object is not callable
```

When I print `ma`, I get the following result: `ma: [3.25836181641, 3.26812744141, 3.22906494141, 3.18023681641, 3.10729980469, 3.08776855469]`

Can someone help me troubleshoot the problem? I don't really understand what is going on. Thanks! Any help will be appreciated!
2019/11/28
[ "https://Stackoverflow.com/questions/59087639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12007010/" ]
This happens because the Promise object holds its state internally. So when you call `.then` on a Promise object, it will either:

* wait for resolution and then fire the callback, or
* if the promise is already resolved, execute the callback immediately.
> > So does it mean that the compiler does sth to assist here? for example, the compiler merge the then clause right after the promise1 statement just like example 1? > > > No, there is much less magic happening than you think. `new Promise` returns a promise object. A promise object has a `.then` method. You use this method to tell the promise that *once* it is resolved, call the passed function. The promise object holds onto this callback. You can think of this as registering an event handler. Once the "resolve event" happens, the function is executed. Of the course the promises itself knows whether or not it is resolved. So if `.then` is called on a resolved promise, it can immediately schedule the invocation of the callback. Here is a very poor man's implementation of a promise (doesn't handle rejects and other things), which might give you a better idea of how promises work. Note that I use `setTimeout` here to simulate how promises always execute callbacks on the next tick of the [event loop](https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop), but in reality JavaScript uses some other internal API: ```js class Promise { constructor(handler) { this._callbacks = []; // Execute the passed handler, passing the "resolve" function handler(value => { // Mark the promise as resolved this._resolved = true; // Remember the value this._value = value; // Execute all registered callbacks this._callbacks.forEach(cb => cb(value)); }); } then(handler) { let _resolve; const promise = new Promise(resolve => _resolve = resolve); const callback = value => { setTimeout( () => _resolve(handler(value)), 0 ); }; // Is the promise already resolved? if (this._resolved) { // Immediately schedule invocation of handler with the remembered value callback(this._value); } else { // Remember callback for future invocation this._callbacks.push(callback); } return promise; } } const promise = new Promise(resolve => { console.log('inside Promise'); resolve(42); }); console.log('in between'); promise.then(v => { console.log('inside then', v); }) promise.then(v => { console.log('inside another then', v); return 21; }) .then(v => { console.log('.then returns a new promise', v); }); ```
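On the `TypeError` in the question itself: the built-in `max` cannot raise "'list' object is not callable", so the usual culprit is that the name `max` was rebound to a list earlier in the same notebook session. A minimal reproduction and recovery (the variable contents are illustrative):

```python
ma = [3.2, 3.1, 2.9]

max = ma          # accidentally shadows the built-in with a list
try:
    max(ma)       # TypeError: 'list' object is not callable
except TypeError as e:
    print(e)

del max           # remove the shadowing name...
print(max(ma))    # ...and the built-in works again: 3.2
```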
69,614,180
Hi I have a set of data which I extracted from an api and I am trying to split the data in the set down into separate sets since currently they are all nested in the larger set. My current set: ``` api = { "9/30/2018": { "Capital Expenditure": "-13313000", "End Cash Position": "25913000", "Financing Cash Flow": "-87876000", "Free Cash Flow": "64121000", "Income Tax Paid Supplemental Data": "10417000", "Interest Paid Supplemental Data": "3022000", "Investing Cash Flow": "16066000", "Issuance of Capital Stock": "669000", "Issuance of Debt": "6969000", "Operating Cash Flow": "77434000", "Repayment of Debt": "-6500000", "Repurchase of Capital Stock": "-72738000" }, "9/30/2019": { "Capital Expenditure": "-10495000", "End Cash Position": "50224000", "Financing Cash Flow": "-90976000", "Free Cash Flow": "58896000", "Income Tax Paid Supplemental Data": "15263000", "Interest Paid Supplemental Data": "3423000", "Investing Cash Flow": "45896000", "Issuance of Capital Stock": "781000", "Issuance of Debt": "6963000", "Operating Cash Flow": "69391000", "Repayment of Debt": "-8805000", "Repurchase of Capital Stock": "-66897000" }, "9/30/2020": { "Capital Expenditure": "-7309000", "End Cash Position": "39789000", "Financing Cash Flow": "-86820000", "Free Cash Flow": "73365000", "Income Tax Paid Supplemental Data": "9501000", "Interest Paid Supplemental Data": "3002000", "Investing Cash Flow": "-4289000", "Issuance of Capital Stock": "880000", "Issuance of Debt": "16091000", "Operating Cash Flow": "80674000", "Repayment of Debt": "-12629000", "Repurchase of Capital Stock": "-72358000" }, "ttm": { "Capital Expenditure": "-9646000", "End Cash Position": "35276000", "Financing Cash Flow": "-94328000", "Free Cash Flow": "94768000", "Income Tax Paid Supplemental Data": "19627000", "Interest Paid Supplemental Data": "2597000", "Investing Cash Flow": "-9849000", "Issuance of Capital Stock": "1011000", "Issuance of Debt": "22370000", "Operating Cash Flow": "104414000", "Repayment of Debt": "-7500000", "Repurchase of Capital Stock": "-83410000" } } ``` My desired outcome would be: s\_19\_30\_2018 = ["Capital Expenditure": "-13313000"...] s\_19\_30\_2019 = ["Capital Expenditure": "-10495000"...] s\_19\_30\_2020 = ["Capital Expenditure": "-7309000"...] s\_ttm = ["Capital Expenditure": "-9646000"...] This is so that I can access the data with more ease and add them to a sql database. I have tried by doing s\_19\_30\_2018 = api['19/30/2018'] but I keep getting 'type error string indices must be integers '. Any help would be appreciated in python >.<
2021/10/18
[ "https://Stackoverflow.com/questions/69614180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13970042/" ]
Given this structure:

```
api = {
    "9/30/2018": {
        "Capital Expenditure": "-13313000",
        "End Cash Position": "25913000",
        },
     "9/30/2019": {
     ....
```

to get the key/value pairs for the first entry, you can run:

```
for key, value in api["9/30/2018"].items():
    l = [key, value]
    print(f" {key}, {value}")
    # prints " Capital Expenditure, -13313000"
```

to go through all the items:

```
# get the keys
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
        # prints " Capital Expenditure, -13313000"
        ...
```
Your dictionary keys are strings, so use quotes when you access them, like

```
s_19_30_2018 = api["9/30/2018"]
```

Also, I don't see any key such as "19\_30\_2018" in your dictionary.
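Since the question's stated goal is inserting this data into a SQL database, a hedged alternative to one-variable-per-period is to flatten the nesting into rows (the three-column layout here is my assumption, and `api` is the question's dict):

```python
# One (period, metric, value) row per entry.
rows = [(period, metric, int(value))
        for period, metrics in api.items()
        for metric, value in metrics.items()]

print(rows[0])  # ('9/30/2018', 'Capital Expenditure', -13313000)
# `rows` can be fed straight to cursor.executemany() with an INSERT statement.
```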
69,614,180
Hi I have a set of data which I extracted from an api and I am trying to split the data in the set down into separate sets since currently they are all nested in the larger set. My current set: ``` api = { "9/30/2018": { "Capital Expenditure": "-13313000", "End Cash Position": "25913000", "Financing Cash Flow": "-87876000", "Free Cash Flow": "64121000", "Income Tax Paid Supplemental Data": "10417000", "Interest Paid Supplemental Data": "3022000", "Investing Cash Flow": "16066000", "Issuance of Capital Stock": "669000", "Issuance of Debt": "6969000", "Operating Cash Flow": "77434000", "Repayment of Debt": "-6500000", "Repurchase of Capital Stock": "-72738000" }, "9/30/2019": { "Capital Expenditure": "-10495000", "End Cash Position": "50224000", "Financing Cash Flow": "-90976000", "Free Cash Flow": "58896000", "Income Tax Paid Supplemental Data": "15263000", "Interest Paid Supplemental Data": "3423000", "Investing Cash Flow": "45896000", "Issuance of Capital Stock": "781000", "Issuance of Debt": "6963000", "Operating Cash Flow": "69391000", "Repayment of Debt": "-8805000", "Repurchase of Capital Stock": "-66897000" }, "9/30/2020": { "Capital Expenditure": "-7309000", "End Cash Position": "39789000", "Financing Cash Flow": "-86820000", "Free Cash Flow": "73365000", "Income Tax Paid Supplemental Data": "9501000", "Interest Paid Supplemental Data": "3002000", "Investing Cash Flow": "-4289000", "Issuance of Capital Stock": "880000", "Issuance of Debt": "16091000", "Operating Cash Flow": "80674000", "Repayment of Debt": "-12629000", "Repurchase of Capital Stock": "-72358000" }, "ttm": { "Capital Expenditure": "-9646000", "End Cash Position": "35276000", "Financing Cash Flow": "-94328000", "Free Cash Flow": "94768000", "Income Tax Paid Supplemental Data": "19627000", "Interest Paid Supplemental Data": "2597000", "Investing Cash Flow": "-9849000", "Issuance of Capital Stock": "1011000", "Issuance of Debt": "22370000", "Operating Cash Flow": "104414000", "Repayment of Debt": "-7500000", "Repurchase of Capital Stock": "-83410000" } } ``` My desired outcome would be: s\_19\_30\_2018 = ["Capital Expenditure": "-13313000"...] s\_19\_30\_2019 = ["Capital Expenditure": "-10495000"...] s\_19\_30\_2020 = ["Capital Expenditure": "-7309000"...] s\_ttm = ["Capital Expenditure": "-9646000"...] This is so that I can access the data with more ease and add them to a sql database. I have tried by doing s\_19\_30\_2018 = api['19/30/2018'] but I keep getting 'type error string indices must be integers '. Any help would be appreciated in python >.<
2021/10/18
[ "https://Stackoverflow.com/questions/69614180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13970042/" ]
Your dictionary keys are strings, so use quotes when you access them, like

```
s_19_30_2018 = api["9/30/2018"]
```

Also, I don't see any key such as "19\_30\_2018" in your dictionary.
```
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
```

Using this code, you can solve your problem.
69,614,180
Hi I have a set of data which I extracted from an api and I am trying to split the data in the set down into separate sets since currently they are all nested in the larger set. My current set: ``` api = { "9/30/2018": { "Capital Expenditure": "-13313000", "End Cash Position": "25913000", "Financing Cash Flow": "-87876000", "Free Cash Flow": "64121000", "Income Tax Paid Supplemental Data": "10417000", "Interest Paid Supplemental Data": "3022000", "Investing Cash Flow": "16066000", "Issuance of Capital Stock": "669000", "Issuance of Debt": "6969000", "Operating Cash Flow": "77434000", "Repayment of Debt": "-6500000", "Repurchase of Capital Stock": "-72738000" }, "9/30/2019": { "Capital Expenditure": "-10495000", "End Cash Position": "50224000", "Financing Cash Flow": "-90976000", "Free Cash Flow": "58896000", "Income Tax Paid Supplemental Data": "15263000", "Interest Paid Supplemental Data": "3423000", "Investing Cash Flow": "45896000", "Issuance of Capital Stock": "781000", "Issuance of Debt": "6963000", "Operating Cash Flow": "69391000", "Repayment of Debt": "-8805000", "Repurchase of Capital Stock": "-66897000" }, "9/30/2020": { "Capital Expenditure": "-7309000", "End Cash Position": "39789000", "Financing Cash Flow": "-86820000", "Free Cash Flow": "73365000", "Income Tax Paid Supplemental Data": "9501000", "Interest Paid Supplemental Data": "3002000", "Investing Cash Flow": "-4289000", "Issuance of Capital Stock": "880000", "Issuance of Debt": "16091000", "Operating Cash Flow": "80674000", "Repayment of Debt": "-12629000", "Repurchase of Capital Stock": "-72358000" }, "ttm": { "Capital Expenditure": "-9646000", "End Cash Position": "35276000", "Financing Cash Flow": "-94328000", "Free Cash Flow": "94768000", "Income Tax Paid Supplemental Data": "19627000", "Interest Paid Supplemental Data": "2597000", "Investing Cash Flow": "-9849000", "Issuance of Capital Stock": "1011000", "Issuance of Debt": "22370000", "Operating Cash Flow": "104414000", "Repayment of Debt": "-7500000", "Repurchase of Capital Stock": "-83410000" } } ``` My desired outcome would be: s\_19\_30\_2018 = ["Capital Expenditure": "-13313000"...] s\_19\_30\_2019 = ["Capital Expenditure": "-10495000"...] s\_19\_30\_2020 = ["Capital Expenditure": "-7309000"...] s\_ttm = ["Capital Expenditure": "-9646000"...] This is so that I can access the data with more ease and add them to a sql database. I have tried by doing s\_19\_30\_2018 = api['19/30/2018'] but I keep getting 'type error string indices must be integers '. Any help would be appreciated in python >.<
2021/10/18
[ "https://Stackoverflow.com/questions/69614180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13970042/" ]
Your dictionary keys are strings, so use quotes when you access them, like

```
s_19_30_2018 = api["9/30/2018"]
```

Also, I don't see any key such as "19\_30\_2018" in your dictionary.
By using the following code you can solve your problem:

```
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
```
69,614,180
Hi I have a set of data which I extracted from an api and I am trying to split the data in the set down into separate sets since currently they are all nested in the larger set. My current set: ``` api = { "9/30/2018": { "Capital Expenditure": "-13313000", "End Cash Position": "25913000", "Financing Cash Flow": "-87876000", "Free Cash Flow": "64121000", "Income Tax Paid Supplemental Data": "10417000", "Interest Paid Supplemental Data": "3022000", "Investing Cash Flow": "16066000", "Issuance of Capital Stock": "669000", "Issuance of Debt": "6969000", "Operating Cash Flow": "77434000", "Repayment of Debt": "-6500000", "Repurchase of Capital Stock": "-72738000" }, "9/30/2019": { "Capital Expenditure": "-10495000", "End Cash Position": "50224000", "Financing Cash Flow": "-90976000", "Free Cash Flow": "58896000", "Income Tax Paid Supplemental Data": "15263000", "Interest Paid Supplemental Data": "3423000", "Investing Cash Flow": "45896000", "Issuance of Capital Stock": "781000", "Issuance of Debt": "6963000", "Operating Cash Flow": "69391000", "Repayment of Debt": "-8805000", "Repurchase of Capital Stock": "-66897000" }, "9/30/2020": { "Capital Expenditure": "-7309000", "End Cash Position": "39789000", "Financing Cash Flow": "-86820000", "Free Cash Flow": "73365000", "Income Tax Paid Supplemental Data": "9501000", "Interest Paid Supplemental Data": "3002000", "Investing Cash Flow": "-4289000", "Issuance of Capital Stock": "880000", "Issuance of Debt": "16091000", "Operating Cash Flow": "80674000", "Repayment of Debt": "-12629000", "Repurchase of Capital Stock": "-72358000" }, "ttm": { "Capital Expenditure": "-9646000", "End Cash Position": "35276000", "Financing Cash Flow": "-94328000", "Free Cash Flow": "94768000", "Income Tax Paid Supplemental Data": "19627000", "Interest Paid Supplemental Data": "2597000", "Investing Cash Flow": "-9849000", "Issuance of Capital Stock": "1011000", "Issuance of Debt": "22370000", "Operating Cash Flow": "104414000", "Repayment of Debt": "-7500000", "Repurchase of Capital Stock": "-83410000" } } ``` My desired outcome would be: s\_19\_30\_2018 = ["Capital Expenditure": "-13313000"...] s\_19\_30\_2019 = ["Capital Expenditure": "-10495000"...] s\_19\_30\_2020 = ["Capital Expenditure": "-7309000"...] s\_ttm = ["Capital Expenditure": "-9646000"...] This is so that I can access the data with more ease and add them to a sql database. I have tried by doing s\_19\_30\_2018 = api['19/30/2018'] but I keep getting 'type error string indices must be integers '. Any help would be appreciated in python >.<
2021/10/18
[ "https://Stackoverflow.com/questions/69614180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13970042/" ]
Given this structure:

```
api = {
    "9/30/2018": {
        "Capital Expenditure": "-13313000",
        "End Cash Position": "25913000",
        },
     "9/30/2019": {
     ....
```

to get the key/value pairs for the first entry, you can run:

```
for key, value in api["9/30/2018"].items():
    l = [key, value]
    print(f" {key}, {value}")
    # prints " Capital Expenditure, -13313000"
```

to go through all the items:

```
# get the keys
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
        # prints " Capital Expenditure, -13313000"
        ...
```
```
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
```

Using this code, you can solve your problem.
69,614,180
Hi I have a set of data which I extracted from an api and I am trying to split the data in the set down into separate sets since currently they are all nested in the larger set. My current set: ``` api = { "9/30/2018": { "Capital Expenditure": "-13313000", "End Cash Position": "25913000", "Financing Cash Flow": "-87876000", "Free Cash Flow": "64121000", "Income Tax Paid Supplemental Data": "10417000", "Interest Paid Supplemental Data": "3022000", "Investing Cash Flow": "16066000", "Issuance of Capital Stock": "669000", "Issuance of Debt": "6969000", "Operating Cash Flow": "77434000", "Repayment of Debt": "-6500000", "Repurchase of Capital Stock": "-72738000" }, "9/30/2019": { "Capital Expenditure": "-10495000", "End Cash Position": "50224000", "Financing Cash Flow": "-90976000", "Free Cash Flow": "58896000", "Income Tax Paid Supplemental Data": "15263000", "Interest Paid Supplemental Data": "3423000", "Investing Cash Flow": "45896000", "Issuance of Capital Stock": "781000", "Issuance of Debt": "6963000", "Operating Cash Flow": "69391000", "Repayment of Debt": "-8805000", "Repurchase of Capital Stock": "-66897000" }, "9/30/2020": { "Capital Expenditure": "-7309000", "End Cash Position": "39789000", "Financing Cash Flow": "-86820000", "Free Cash Flow": "73365000", "Income Tax Paid Supplemental Data": "9501000", "Interest Paid Supplemental Data": "3002000", "Investing Cash Flow": "-4289000", "Issuance of Capital Stock": "880000", "Issuance of Debt": "16091000", "Operating Cash Flow": "80674000", "Repayment of Debt": "-12629000", "Repurchase of Capital Stock": "-72358000" }, "ttm": { "Capital Expenditure": "-9646000", "End Cash Position": "35276000", "Financing Cash Flow": "-94328000", "Free Cash Flow": "94768000", "Income Tax Paid Supplemental Data": "19627000", "Interest Paid Supplemental Data": "2597000", "Investing Cash Flow": "-9849000", "Issuance of Capital Stock": "1011000", "Issuance of Debt": "22370000", "Operating Cash Flow": "104414000", "Repayment of Debt": "-7500000", "Repurchase of Capital Stock": "-83410000" } } ``` My desired outcome would be: s\_19\_30\_2018 = ["Capital Expenditure": "-13313000"...] s\_19\_30\_2019 = ["Capital Expenditure": "-10495000"...] s\_19\_30\_2020 = ["Capital Expenditure": "-7309000"...] s\_ttm = ["Capital Expenditure": "-9646000"...] This is so that I can access the data with more ease and add them to a sql database. I have tried by doing s\_19\_30\_2018 = api['19/30/2018'] but I keep getting 'type error string indices must be integers '. Any help would be appreciated in python >.<
2021/10/18
[ "https://Stackoverflow.com/questions/69614180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13970042/" ]
Given this structure:

```
api = {
    "9/30/2018": {
        "Capital Expenditure": "-13313000",
        "End Cash Position": "25913000",
        },
     "9/30/2019": {
     ....
```

to get the key/value pairs for the first entry, you can run:

```
for key, value in api["9/30/2018"].items():
    l = [key, value]
    print(f" {key}, {value}")
    # prints " Capital Expenditure, -13313000"
```

to go through all the items:

```
# get the keys
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
        # prints " Capital Expenditure, -13313000"
        ...
```
By using the following code you can solve your problem:

```
ks = api.keys()

for k in ks:
    for key, value in api[k].items():
        print(f" {key}, {value}")
```
66,670,681
I am working on a desktop application that I made using Python and some open-source libraries. When I convert the .py file to a .exe file using PyInstaller, it runs on Windows 10 but shows an error on Windows 7. Is there any way to make one .exe for all Windows versions (7/8/10)?
2021/03/17
[ "https://Stackoverflow.com/questions/66670681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10975941/" ]
You do not need to "import" the functions; you just need to create a `class` that keeps all your needed functions, and import that class. Assuming you use an IDE, you could go to your IDE and create a simple `class`, e.g.:

```java
public class Utils {

    public static int doSmth(/* Your parameters here */) {
        // Your code here
    }

    public static void yetAnotherFunction(/* Your parameters here */) {
        // Your code here
    }
}
```

The most important keyword here is `static`; to understand it, I suggest you check my answer here: [when to decide use static functions at java](https://stackoverflow.com/questions/66652425/when-to-decide-use-static-functions-at-java/66652634#66652634)

Then you import this class into any of your other classes and call the `static` functions, without creating an object.

```java
import package.Utils;

public class MainClass {

    public static void main(String[] args) {
        Utils.doSmth();
    }
}
```
You can add an import at the beginning, like:

```java
import utils;
```

assuming it is in the same package as the current class. Otherwise it should be:

```java
import <package name>.<class name>;
```
66,670,681
I am working on a desktop application that I made using Python and some open-source libraries. When I convert the .py file to a .exe file using PyInstaller, it runs on Windows 10 but shows an error on Windows 7. Is there any way to make one .exe for all Windows versions (7/8/10)?
2021/03/17
[ "https://Stackoverflow.com/questions/66670681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10975941/" ]
You do not need to "import" the functions; you just need to create a `class` that keeps all your needed functions, and import that class. Assuming you use an IDE, you could go to your IDE and create a simple `class`, e.g.:

```java
public class Utils {

    public static int doSmth(/* Your parameters here */) {
        // Your code here
    }

    public static void yetAnotherFunction(/* Your parameters here */) {
        // Your code here
    }
}
```

The most important keyword here is `static`; to understand it, I suggest you check my answer here: [when to decide use static functions at java](https://stackoverflow.com/questions/66652425/when-to-decide-use-static-functions-at-java/66652634#66652634)

Then you import this class into any of your other classes and call the `static` functions, without creating an object.

```java
import package.Utils;

public class MainClass {

    public static void main(String[] args) {
        Utils.doSmth();
    }
}
```
Yes, you can import methods, but there are caveats: the methods are defined on a class, the methods are defined as static, and the import employs the keyword `static`. Keep in mind that importing methods directly can create confusion and complicate debugging; consider importing the class and invoking the method through the class. When/whether defining a utility class makes sense, and how to implement one, are separate discussions.

```
package some.path;

public class MyUtils {
    public static int add(int x, int y) {
        return x + y;
    }
}
```

To import the method:

```
package some.other.path;

import static some.path.MyUtils.add; // note keyword static here

public class MyClass {
    private int a, b;

    public int sum() {
        return add(a, b);
    }
}
```

Or, import the class and use the method statically:

```
package some.other.path;

import some.path.MyUtils;

public class MyClass {
    private int a, b;

    public int sum() {
        return MyUtils.add(a, b);
    }
}
```
19,162,812
I have a python data structure like this ``` dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['', 'KL', '1', '', '', '', '', '9i0']}] ^ ^ | | \ / Delete these ``` I am trying to delete the position in the list `'val'` where the values in the same column in each list are empty. For example, position 0 and 3 in the list(dl). I am trying to get an output like this: ``` Output= [{'plat': 'unix', 'val':['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', 1, '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['KL', '1', '', '', '', '9i0']}] ```
2013/10/03
[ "https://Stackoverflow.com/questions/19162812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2843234/" ]
```
dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']},
     {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']},
     {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']},
     {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']},
     {'plat': 'ubuntu', 'val':['', 'KL','1', '', '', '', '', '9i0']}]

def empty_indices(lst):
    return {i for i,v in enumerate(lst) if not v}

# Need to special-case the first one to initialize the set of "empty" indices.
remove_idx = empty_indices(dl[0]['val'])

# Here we do the first one twice. We could use itertools.islice but it's
# probably not worth the minuscule speedup.
for item in dl:
    remove_idx &= empty_indices(item['val'])

for item in dl:
    item['val'] = [k for i,k in enumerate(item['val']) if i not in remove_idx]

# print the results.
import pprint
pprint.pprint(dl)
```
```
from itertools import izip
from operator import itemgetter

# create an iterator over the columns
columns = izip(*(d['val'] for d in dl))

# make a function that keeps only the non-empty columns
keepfunc = itemgetter(*(i for i, c in enumerate(columns) if any(c)))

# apply the function to each list
for d in dl:
    d['val'] = list(keepfunc(d['val']))
```
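The snippet above is Python 2 (`itertools.izip`); a sketch of the same idea in Python 3, where the built-in `zip` is already lazy (sample data abbreviated from the question):

```python
from operator import itemgetter

dl = [{'plat': 'unix', 'val': ['', '1ju', '', '202']},
      {'plat': 'NT',   'val': ['', 1,     '', '']}]

columns = zip(*(d['val'] for d in dl))
keepfunc = itemgetter(*(i for i, c in enumerate(columns) if any(c)))
for d in dl:
    d['val'] = list(keepfunc(d['val']))

print(dl)  # each 'val' keeps only the non-empty columns 1 and 3
```

One caveat worth noting: if only a single column survives, `itemgetter` returns a scalar rather than a tuple, so the `list(...)` call would then iterate that single value instead of wrapping it.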
19,162,812
I have a python data structure like this ``` dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['', 'KL', '1', '', '', '', '', '9i0']}] ^ ^ | | \ / Delete these ``` I am trying to delete the position in the list `'val'` where the values in the same column in each list are empty. For example, position 0 and 3 in the list(dl). I am trying to get an output like this: ``` Output= [{'plat': 'unix', 'val':['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', 1, '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['KL', '1', '', '', '', '9i0']}] ```
2013/10/03
[ "https://Stackoverflow.com/questions/19162812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2843234/" ]
Let's do this in two steps. First, find indices to remove: ``` lists = [e['val'] for e in dl] idx_to_remove = [i for i, elem in enumerate(map(any, zip(*lists))) if not elem] ``` Second, let's filter original lists: ``` for l in lists: l[:] = [elem for i, elem in enumerate(l) if i not in idx_to_remove] ``` Result: ``` >>> pprint.pprint(dl) [{'plat': 'unix', 'val': ['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val': ['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val': ['', 1, '', '202', '', '']}, {'plat': 'centOs', 'val': ['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val': ['KL', '1', '', '', '', '9i0']}] ```
```
from itertools import izip
from operator import itemgetter

# create an iterator over the columns
columns = izip(*(d['val'] for d in dl))

# make a function that keeps only the non-empty columns
keepfunc = itemgetter(*(i for i, c in enumerate(columns) if any(c)))

# apply the function to each list
for d in dl:
    d['val'] = list(keepfunc(d['val']))
```
19,162,812
I have a python data structure like this ``` dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['', 'KL', '1', '', '', '', '', '9i0']}] ^ ^ | | \ / Delete these ``` I am trying to delete the position in the list `'val'` where the values in the same column in each list are empty. For example, position 0 and 3 in the list(dl). I am trying to get an output like this: ``` Output= [{'plat': 'unix', 'val':['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', 1, '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['KL', '1', '', '', '', '9i0']}] ```
2013/10/03
[ "https://Stackoverflow.com/questions/19162812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2843234/" ]
Let's do this in two steps. First, find indices to remove: ``` lists = [e['val'] for e in dl] idx_to_remove = [i for i, elem in enumerate(map(any, zip(*lists))) if not elem] ``` Second, let's filter original lists: ``` for l in lists: l[:] = [elem for i, elem in enumerate(l) if i not in idx_to_remove] ``` Result: ``` >>> pprint.pprint(dl) [{'plat': 'unix', 'val': ['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val': ['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val': ['', 1, '', '202', '', '']}, {'plat': 'centOs', 'val': ['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val': ['KL', '1', '', '', '', '9i0']}] ```
```
dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']},
     {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']},
     {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']},
     {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']},
     {'plat': 'ubuntu', 'val':['', 'KL','1', '', '', '', '', '9i0']}]

def empty_indices(lst):
    return {i for i,v in enumerate(lst) if not v}

# Need to special-case the first one to initialize the set of "empty" indices.
remove_idx = empty_indices(dl[0]['val'])

# Here we do the first one twice. We could use itertools.islice but it's
# probably not worth the minuscule speedup.
for item in dl:
    remove_idx &= empty_indices(item['val'])

for item in dl:
    item['val'] = [k for i,k in enumerate(item['val']) if i not in remove_idx]

# print the results.
import pprint
pprint.pprint(dl)
```
19,162,812
I have a python data structure like this ``` dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['', 'KL', '1', '', '', '', '', '9i0']}] ^ ^ | | \ / Delete these ``` I am trying to delete the position in the list `'val'` where the values in the same column in each list are empty. For example, position 0 and 3 in the list(dl). I am trying to get an output like this: ``` Output= [{'plat': 'unix', 'val':['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', 1, '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['KL', '1', '', '', '', '9i0']}] ```
2013/10/03
[ "https://Stackoverflow.com/questions/19162812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2843234/" ]
```
dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']},
     {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']},
     {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']},
     {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']},
     {'plat': 'ubuntu', 'val':['', 'KL','1', '', '', '', '', '9i0']}]

def empty_indices(lst):
    return {i for i,v in enumerate(lst) if not v}

# Need to special-case the first one to initialize the set of "empty" indices.
remove_idx = empty_indices(dl[0]['val'])

# Here we do the first one twice. We could use itertools.islice but it's
# probably not worth the minuscule speedup.
for item in dl:
    remove_idx &= empty_indices(item['val'])

for item in dl:
    item['val'] = [k for i,k in enumerate(item['val']) if i not in remove_idx]

# print the results.
import pprint
pprint.pprint(dl)
```
Yet another possible solution (not really efficient, but well...). `zip()` is really underrated...

```
# extract the values as a list of lists
vals = [item["val"] for item in dl]

# transpose lines to columns
cols = map(list, zip(*vals))

# filter out empty columns
cols = [c for c in cols if filter(None, c)]

# re-transpose columns to lines
lines = map(list, zip(*cols))

# build the new dicts
output = [
    dict(plat=item["plat"], val=line)
    for item, line in zip(dl, lines)
]
```
19,162,812
I have a python data structure like this ``` dl= [{'plat': 'unix', 'val':['', '', '1ju', '', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', '', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', '', 1, '', '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['', 'KL', '1', '', '', '', '', '9i0']}] ^ ^ | | \ / Delete these ``` I am trying to delete the position in the list `'val'` where the values in the same column in each list are empty. For example, position 0 and 3 in the list(dl). I am trying to get an output like this: ``` Output= [{'plat': 'unix', 'val':['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val':['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val':['', 1, '' , '202', '', '']}, {'plat': 'centOs', 'val':['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val':['KL', '1', '', '', '', '9i0']}] ```
2013/10/03
[ "https://Stackoverflow.com/questions/19162812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2843234/" ]
Let's do this in two steps. First, find indices to remove: ``` lists = [e['val'] for e in dl] idx_to_remove = [i for i, elem in enumerate(map(any, zip(*lists))) if not elem] ``` Second, let's filter original lists: ``` for l in lists: l[:] = [elem for i, elem in enumerate(l) if i not in idx_to_remove] ``` Result: ``` >>> pprint.pprint(dl) [{'plat': 'unix', 'val': ['', '1ju', '', '202', '', '']}, {'plat': 'Ios', 'val': ['', '', 'Ty', '', 'Jk', '']}, {'plat': 'NT', 'val': ['', 1, '', '202', '', '']}, {'plat': 'centOs', 'val': ['', '', '', '202', '', '']}, {'plat': 'ubuntu', 'val': ['KL', '1', '', '', '', '9i0']}] ```
Yet another possible solution (not really efficient but well...). `zip()` is really underrated...

```
# extract the values as a list of lists
vals = [item["val"] for item in dl]

# transpose lines to columns
cols = map(list, zip(*vals))

# filter out empty columns
cols = [c for c in cols if filter(None, c)]

# retranspose columns to lines
lines = map(list, zip(*cols))

# build the new dicts
output = [
    dict(plat=item["plat"], val=line)
    for item, line in zip(dl, lines)
]
```
40,701,398
I am trying to figure out how to take the following for loop that splits an array based on the index of the lowest value in the row and use vectorization. I've looked at this [link](https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/ch04.html) and have been trying to use the numpy.where function but currently unsuccessful. For example if an array has *n* columns, then all the rows where *col[0]* has the lowest value are put in one array, all the rows where *col[1]* are put in another, etc. Here's the code using a for loop. ``` import numpy a = numpy.array([[ 0. 1. 3.] [ 0. 1. 3.] [ 0. 1. 3.] [ 1. 0. 2.] [ 1. 0. 2.] [ 1. 0. 2.] [ 3. 1. 0.] [ 3. 1. 0.] [ 3. 1. 0.]]) result_0 = [] result_1 = [] result_2 = [] for value in a: if value[0] <= value[1] and value[0] <= value[2]: result_0.append(value) elif value[1] <= value[0] and value[1] <= value[2]: result_1.append(value) else: result_2.append(value) print(result_0) >>[array([ 0. 1. 3.]), array([ 0. 1. 3.]), array([ 0. 1. 3.])] print(result_1) >>[array([ 1. 0. 2.]), array([ 1. 0. 2.]), array([ 1. 0. 2.])] print(result_2) >>[array([ 3. 1. 0.]), array([ 3. 1. 0.]), array([ 3. 1. 0.])] ```
2016/11/20
[ "https://Stackoverflow.com/questions/40701398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5025845/" ]
First, use `argsort` to see where the lowest value in each row is: ``` >>> a.argsort(axis=1) array([[0, 1, 2], [0, 1, 2], [0, 1, 2], [1, 0, 2], [1, 0, 2], [1, 0, 2], [2, 1, 0], [2, 1, 0], [2, 1, 0]]) ``` Note that wherever a row has `0`, that is the smallest column in that row. Now you can build the results: ``` >>> sortidx = a.argsort(axis=1) >>> [a[sortidx[:,i] == 0] for i in range(a.shape[1])] [array([[ 0., 1., 3.], [ 0., 1., 3.], [ 0., 1., 3.]]), array([[ 1., 0., 2.], [ 1., 0., 2.], [ 1., 0., 2.]]), array([[ 3., 1., 0.], [ 3., 1., 0.], [ 3., 1., 0.]])] ``` So it is done with only a single loop over the columns, which will give a huge speedup if the number of rows is much larger than the number of columns.
This is not the best solution since it relies on simple python loops and is not very efficient when you start dealing with large data sets, but it should get you started. The point is to create a list of "buckets", one per column of the longest entry. Then, for each `value` (row) in `a`, find the smallest element with `min(enumerate(...))` and use its offset to append the original row to the correct results "bucket". Finally we print this out in the last loop.

**Solution using loops**:

```
import numpy
import pprint

# random data set
a = numpy.array([[0, 1, 3],
                 [0, 1, 3],
                 [0, 1, 3],
                 [1, 0, 2],
                 [1, 0, 2],
                 [1, 0, 2],
                 [3, 1, 0],
                 [3, 1, 0],
                 [3, 1, 0]])

# create a list of results as big as the depth of elements in an entry
results = list()
for l in range(max(len(i) for i in a)):
    results.append(list())
# don't do the following because all the references to the lists will be the same and you get dups:
# results = [[]]*max(len(i) for i in a)

for value in a:
    res_offset, _val = min(enumerate(value), key=lambda x: x[1])  # get the offset and min value
    results[res_offset].append(value)  # store the original array row in the correct "bucket"

# print for visualization
for c, r in enumerate(results):
    print("result_%s: %s" % (c, r))
```

Outputs:

> 
> result\_0: [array([0, 1, 3]), array([0, 1, 3]), array([0, 1, 3])]
> 
> result\_1: [array([1, 0, 2]), array([1, 0, 2]), array([1, 0, 2])]
> 
> result\_2: [array([3, 1, 0]), array([3, 1, 0]), array([3, 1, 0])]
> 
> 
40,701,398
I am trying to figure out how to take the following for loop that splits an array based on the index of the lowest value in the row and use vectorization. I've looked at this [link](https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/ch04.html) and have been trying to use the numpy.where function but currently unsuccessful. For example if an array has *n* columns, then all the rows where *col[0]* has the lowest value are put in one array, all the rows where *col[1]* are put in another, etc. Here's the code using a for loop. ``` import numpy a = numpy.array([[ 0. 1. 3.] [ 0. 1. 3.] [ 0. 1. 3.] [ 1. 0. 2.] [ 1. 0. 2.] [ 1. 0. 2.] [ 3. 1. 0.] [ 3. 1. 0.] [ 3. 1. 0.]]) result_0 = [] result_1 = [] result_2 = [] for value in a: if value[0] <= value[1] and value[0] <= value[2]: result_0.append(value) elif value[1] <= value[0] and value[1] <= value[2]: result_1.append(value) else: result_2.append(value) print(result_0) >>[array([ 0. 1. 3.]), array([ 0. 1. 3.]), array([ 0. 1. 3.])] print(result_1) >>[array([ 1. 0. 2.]), array([ 1. 0. 2.]), array([ 1. 0. 2.])] print(result_2) >>[array([ 3. 1. 0.]), array([ 3. 1. 0.]), array([ 3. 1. 0.])] ```
2016/11/20
[ "https://Stackoverflow.com/questions/40701398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5025845/" ]
First, use `argsort` to see where the lowest value in each row is: ``` >>> a.argsort(axis=1) array([[0, 1, 2], [0, 1, 2], [0, 1, 2], [1, 0, 2], [1, 0, 2], [1, 0, 2], [2, 1, 0], [2, 1, 0], [2, 1, 0]]) ``` Note that wherever a row has `0`, that is the smallest column in that row. Now you can build the results: ``` >>> sortidx = a.argsort(axis=1) >>> [a[sortidx[:,i] == 0] for i in range(a.shape[1])] [array([[ 0., 1., 3.], [ 0., 1., 3.], [ 0., 1., 3.]]), array([[ 1., 0., 2.], [ 1., 0., 2.], [ 1., 0., 2.]]), array([[ 3., 1., 0.], [ 3., 1., 0.], [ 3., 1., 0.]])] ``` So it is done with only a single loop over the columns, which will give a huge speedup if the number of rows is much larger than the number of columns.
I found a much easier way to do this. I hope that I am interpreting the OP correctly. My sense is that the OP wants to create a slice of the larger array based upon some set of conditions. Note that the code above to create the array does not seem to work--at least in python 3.5. I generated the array as follows.

```
a = np.array([0., 1., 3., 0., 1., 3., 0., 1., 3., 1., 0., 2., 1., 0., 2.,1., 0., 2.,3., 1., 0.,3., 1., 0.,3., 1., 0.]).reshape([9,3])
```

Next, I sliced the original array into smaller arrays. `Numpy` has builtins to help with this.

```
result_0 = a[np.logical_and(a[:,0] <= a[:,1],a[:,0] <= a[:,2])]
result_1 = a[np.logical_and(a[:,1] <= a[:,0],a[:,1] <= a[:,2])]
result_2 = a[np.logical_and(a[:,2] <= a[:,0],a[:,2] <= a[:,1])]
```

This will generate new `numpy` arrays that match the given conditions. Note if the user wants to convert these individual rows into a list of arrays, he/she can just enter the following code to obtain the result.

```
result_0 = [np.array(x) for x in result_0.tolist()]
result_1 = [np.array(x) for x in result_1.tolist()]
result_2 = [np.array(x) for x in result_2.tolist()]
```

This should generate the outcome requested in the OP.
66,776,644
My freelance client has given me FTP access to the shared hosting. I am new to web development and can't figure out how to deploy the flask app to the cgi-bin folder; please help me understand how this works.

```
.htaccess file

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f # Don't interfere with static files
RewriteRule ^(.*)$ /public_html/app.cgi/$1 [L]

app.py file

import sys, subprocess

# implement pip as a subprocess:
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'flask'])

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello, world"

if __name__ == '__main__':
    app.run()

app.cgi file

#!/usr/bin/python3
from wsgiref.handlers import CGIHandler
from app import app

CGIHandler().run(app)
```
2021/03/24
[ "https://Stackoverflow.com/questions/66776644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13993581/" ]
First create a python script which will contain:

```py
import sys
import subprocess

# implement pip as a subprocess:
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'flask'])
```

And then follow the instructions given by **Hostgator**.
Sorry, but Flask hosting can't be done within shared hosting. You need **DigitalOcean**, **Heroku**, or **PythonAnywhere** (easiest) hosting to deploy a **Flask**/**Django** website.
48,044,680
I have a problem with python: I want to generate a multidict like the one below, using a for loop. Numbers are generated randomly and if the two elements are the same, the value is 0.

```
arcs, capacity = multidict({
  (0, 0): 0,
  (0, 1): 80,
  (0, 2): 11,
  (1, 0): 15,
  (1, 1): 0,
  (1, 2): 120,
  (2, 0): 103,
  (2, 1): 3,
  (2, 2): 0
})
```

So far I coded this one:

```
arcs = {}
for i in range(N):
    for j in range(N):
        arcs[i,j]=(random.randint(1,101))
```

But it's nothing like the one I need. Can anyone help me on how to code the for loop for this problem? Many many thanks in advance.
2017/12/31
[ "https://Stackoverflow.com/questions/48044680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Unit tests call functions that they test. You want to know if a function F called by a unit test can eventually invoke malloc (or new or ...).

Seems like what you really want to do is build a call graph for your entire system, and then ask for the critical functions F whether F can reach malloc etc. in the call graph. This is rather easy to compute once you have the call graph.

Getting the call graph is not so easy. Discovering that module A calls module B directly is "technically easy" if you have a real language front end that does name resolution. Finding out what A calls indirectly isn't easy; you need a (function pointer) points-to analysis and those are hard. And, of course, you have to decide if you are going to dive into library (e.g., std::) routines or not.

Your call graph needs to be conservative (so you don't miss potential calls) and reasonably precise (so you don't drown in false positives) in the face of function pointers and method calls.

This Doxygen support claims to build call graphs: <http://clang.llvm.org/doxygen/CallGraph_8cpp.html> I don't know if it handles indirect/method calls or how precise it is; I'm not very familiar with it and the documentation seems thin. Doxygen in the past did not have a reputation for handling indirection well or being precise, but past versions weren't based on Clang.

There is some further discussion of this applied on a small scale at <http://stackoverflow.com/questions/5373714/generate-calling-graph-for-c-code>

Your question is tagged c/c++ but seems to be about C++. For C, our [DMS Software Reengineering Toolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html) with its generic flow analysis and call graph generation support, coupled with DMS's [C Front End](http://www.semanticdesigns.com/Products/FrontEnds/CFrontEnd.html?Home=CTools), has been used to [analyze C systems of some 16 million lines/50,000 functions with indirect calls](http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html#CallGraphAnalysis) to produce conservatively correct call graphs. We have not specifically tried to build C++ call graphs for large systems, but the same DMS generic flow analysis and call graph generation would be "technically straightforward" used with DMS's [C++ Front End](https://www.semanticdesigns.com/Products/FrontEnds/CppFrontEnd.html).

When building a static analysis that operates correctly and at scale, nothing is trivial to do.
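To make the easy half concrete: once a call graph has been extracted by whatever means, the reachability question is a plain graph search. Below is a minimal Python sketch; the graph and all function names in it are entirely hypothetical, and the hard extraction work discussed above is not shown.

```python
from collections import deque

def can_reach(call_graph, start, targets):
    """Breadth-first search: can `start` eventually call one of `targets`?

    call_graph: dict mapping a function name to an iterable of callee names.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        fn = queue.popleft()
        if fn in targets:
            return True
        for callee in call_graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Hypothetical call graph for illustration only.
graph = {
    "test_foo": ["foo"],
    "foo": ["vector_push_back", "log_message"],
    "vector_push_back": ["operator new"],
    "log_message": [],
}

print(can_reach(graph, "test_foo", {"malloc", "operator new"}))     # True
print(can_reach(graph, "log_message", {"malloc", "operator new"}))  # False
```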
If you're using libraries which invoke malloc, then you might want to take a look at the [Joint Strike Fighter C++ Coding Standards](http://www.stroustrup.com/JSF-AV-rules.pdf). It's a coding style aimed towards mission critical software. One suggestion would be to write your own allocator(s). Another suggestion is to use something like `jemalloc` which has statistics, but is much more unpredictable since it is geared towards performance.

---

What you want is a mocking library with spy capabilities. How this works for each framework is going to vary, but here is an example using Google:

```
static std::function<void*(size_t)> malloc_bridge;

struct malloc_mock {
    malloc_mock() { malloc_bridge = std::bind(&malloc_mock::mock_, this, std::placeholders::_1); }
    MOCK_METHOD1(mock_, void*(size_t));
};

void* malloc_cheat(size_t size) {
    return malloc_bridge(size);
}

#define malloc malloc_cheat
struct fixture {
    void f() { malloc(...); }
};
#undef malloc

struct CustomTest : ::testing::Test {
    malloc_mock mock_;
};

TEST_F(CustomTest, ShouldMallocXBytes) {
    EXPECT_CALL(mock_, mock_(X))
        .WillOnce(::testing::Return(static_cast<void*>(0)));
    fixture fix;
    fix.f();
}
```

**WARNING:** Code hasn't been touched by compiler hands. But you get the idea.
48,044,680
I have a problem with python: I want to generate a multidict like the one below, using a for loop. Numbers are generated randomly and if the two elements are the same, the value is 0.

```
arcs, capacity = multidict({
  (0, 0): 0,
  (0, 1): 80,
  (0, 2): 11,
  (1, 0): 15,
  (1, 1): 0,
  (1, 2): 120,
  (2, 0): 103,
  (2, 1): 3,
  (2, 2): 0
})
```

So far I coded this one:

```
arcs = {}
for i in range(N):
    for j in range(N):
        arcs[i,j]=(random.randint(1,101))
```

But it's nothing like the one I need. Can anyone help me on how to code the for loop for this problem? Many many thanks in advance.
2017/12/31
[ "https://Stackoverflow.com/questions/48044680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Unit tests call functions that they test. You want to know if a function F called by a unit test can eventually invoke malloc (or new or ...).

Seems like what you really want to do is build a call graph for your entire system, and then ask for the critical functions F whether F can reach malloc etc. in the call graph. This is rather easy to compute once you have the call graph.

Getting the call graph is not so easy. Discovering that module A calls module B directly is "technically easy" if you have a real language front end that does name resolution. Finding out what A calls indirectly isn't easy; you need a (function pointer) points-to analysis and those are hard. And, of course, you have to decide if you are going to dive into library (e.g., std::) routines or not.

Your call graph needs to be conservative (so you don't miss potential calls) and reasonably precise (so you don't drown in false positives) in the face of function pointers and method calls.

This Doxygen support claims to build call graphs: <http://clang.llvm.org/doxygen/CallGraph_8cpp.html> I don't know if it handles indirect/method calls or how precise it is; I'm not very familiar with it and the documentation seems thin. Doxygen in the past did not have a reputation for handling indirection well or being precise, but past versions weren't based on Clang.

There is some further discussion of this applied on a small scale at <http://stackoverflow.com/questions/5373714/generate-calling-graph-for-c-code>

Your question is tagged c/c++ but seems to be about C++. For C, our [DMS Software Reengineering Toolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html) with its generic flow analysis and call graph generation support, coupled with DMS's [C Front End](http://www.semanticdesigns.com/Products/FrontEnds/CFrontEnd.html?Home=CTools), has been used to [analyze C systems of some 16 million lines/50,000 functions with indirect calls](http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html#CallGraphAnalysis) to produce conservatively correct call graphs. We have not specifically tried to build C++ call graphs for large systems, but the same DMS generic flow analysis and call graph generation would be "technically straightforward" used with DMS's [C++ Front End](https://www.semanticdesigns.com/Products/FrontEnds/CppFrontEnd.html).

When building a static analysis that operates correctly and at scale, nothing is trivial to do.
This is not a full answer, but you can try to use Valgrind to count allocs and frees. The default Valgrind tool, memcheck, counts the number of allocs and frees and prints the resulting report in `HEAP SUMMARY`; here is a sample output:

```
$ valgrind ./a.out
==2653== Memcheck, a memory error detector
==2653== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==2653== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==2653== Command: ./a.out
==2653== 
==2653== 
==2653== HEAP SUMMARY:
==2653==     in use at exit: 0 bytes in 0 blocks
==2653==   total heap usage: 2 allocs, 2 frees, 72,716 bytes allocated
==2653== 
==2653== All heap blocks were freed -- no leaks are possible
==2653== 
==2653== For counts of detected and suppressed errors, rerun with: -v
==2653== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
```

You can add one more test that does nothing, to count the baseline number of allocs:

```
void my_unit_test_baseline()
{
   my_object obj; // perform some initialization that will allocate
}
```

Now you can run the real test and compare its number of allocations with the baseline test. If they are not equal, then some allocations took place in your tested code. You can log this fact or signal it in whatever other way you want.
48,044,680
I have a problem with python: I want to generate a multidict like the one below, using a for loop. Numbers are generated randomly and if the two elements are the same, the value is 0.

```
arcs, capacity = multidict({
  (0, 0): 0,
  (0, 1): 80,
  (0, 2): 11,
  (1, 0): 15,
  (1, 1): 0,
  (1, 2): 120,
  (2, 0): 103,
  (2, 1): 3,
  (2, 2): 0
})
```

So far I coded this one:

```
arcs = {}
for i in range(N):
    for j in range(N):
        arcs[i,j]=(random.randint(1,101))
```

But it's nothing like the one I need. Can anyone help me on how to code the for loop for this problem? Many many thanks in advance.
2017/12/31
[ "https://Stackoverflow.com/questions/48044680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Unit tests call functions that they test. You want to know if a function F called by a unit test can eventually invoke malloc (or new or ...).

Seems like what you really want to do is build a call graph for your entire system, and then ask for the critical functions F whether F can reach malloc etc. in the call graph. This is rather easy to compute once you have the call graph.

Getting the call graph is not so easy. Discovering that module A calls module B directly is "technically easy" if you have a real language front end that does name resolution. Finding out what A calls indirectly isn't easy; you need a (function pointer) points-to analysis and those are hard. And, of course, you have to decide if you are going to dive into library (e.g., std::) routines or not.

Your call graph needs to be conservative (so you don't miss potential calls) and reasonably precise (so you don't drown in false positives) in the face of function pointers and method calls.

This Doxygen support claims to build call graphs: <http://clang.llvm.org/doxygen/CallGraph_8cpp.html> I don't know if it handles indirect/method calls or how precise it is; I'm not very familiar with it and the documentation seems thin. Doxygen in the past did not have a reputation for handling indirection well or being precise, but past versions weren't based on Clang.

There is some further discussion of this applied on a small scale at <http://stackoverflow.com/questions/5373714/generate-calling-graph-for-c-code>

Your question is tagged c/c++ but seems to be about C++. For C, our [DMS Software Reengineering Toolkit](http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html) with its generic flow analysis and call graph generation support, coupled with DMS's [C Front End](http://www.semanticdesigns.com/Products/FrontEnds/CFrontEnd.html?Home=CTools), has been used to [analyze C systems of some 16 million lines/50,000 functions with indirect calls](http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html#CallGraphAnalysis) to produce conservatively correct call graphs. We have not specifically tried to build C++ call graphs for large systems, but the same DMS generic flow analysis and call graph generation would be "technically straightforward" used with DMS's [C++ Front End](https://www.semanticdesigns.com/Products/FrontEnds/CppFrontEnd.html).

When building a static analysis that operates correctly and at scale, nothing is trivial to do.
In case you are using the GNU C library, you can use `__malloc_hook` and the similar hook variables to have a user-defined function called whenever one of the functions of the `malloc` family is used. Such a hooked function could analyse the call trace (using `backtrace()`) in order to find whether `malloc` was allowed in this call chain or not, and print messages about the culprit if not.
7,076,254
I have a program which deals with nested data structures where the underlying type usually ends up being a decimal. e.g. ``` x={'a':[1.05600000001,2.34581736481,[1.1111111112,9.999990111111]],...} ``` Is there a simple pythonic way to print such a variable but rounding all floats to (say) 3dp and not assuming a particular configuration of lists and dictionaries? e.g. ``` {'a':[1.056,2.346,[1.111,10.000],...} ``` I'm thinking something like `pformat(x,round=3)` or maybe ``` pformat(x,conversions={'float':lambda x: "%.3g" % x}) ``` except I don't think they have this kind of functionality. Permanently rounding the underlying data is of course not an option.
2011/08/16
[ "https://Stackoverflow.com/questions/7076254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/768552/" ]
This will recursively descend dicts, tuples, lists, etc. formatting numbers and leaving other stuff alone. ``` import collections import numbers def pformat(thing, formatfunc): if isinstance(thing, dict): return type(thing)((key, pformat(value, formatfunc)) for key, value in thing.iteritems()) if isinstance(thing, collections.Container): return type(thing)(pformat(value, formatfunc) for value in thing) if isinstance(thing, numbers.Number): return formatfunc(thing) return thing def formatfloat(thing): return "%.3g" % float(thing) x={'a':[1.05600000001,2.34581736481,[8.1111111112,9.999990111111]], 'b':[3.05600000001,4.34581736481,[5.1111111112,6.999990111111]]} print pformat(x, formatfloat) ``` If you want to try and convert everything to a float, you can do ``` try: return formatfunc(thing) except: return thing ``` instead of the last three lines of the function.
```
>>> b = []
>>> x={'a':[1.05600000001,2.34581736481,[1.1111111112,9.999990111111]]}
>>> for i in x.get('a'):
        if type(i) == type([]):
            for y in i:
                print("%0.3f"%(float(y)))
        else:
            print("%0.3f"%(float(i)))


1.056
2.346
1.111
10.000
```

The problem here is that we don't have a flatten method in Python; since I know it is only 2-level list nesting, I have used a `for` loop.
7,076,254
I have a program which deals with nested data structures where the underlying type usually ends up being a decimal. e.g. ``` x={'a':[1.05600000001,2.34581736481,[1.1111111112,9.999990111111]],...} ``` Is there a simple pythonic way to print such a variable but rounding all floats to (say) 3dp and not assuming a particular configuration of lists and dictionaries? e.g. ``` {'a':[1.056,2.346,[1.111,10.000],...} ``` I'm thinking something like `pformat(x,round=3)` or maybe ``` pformat(x,conversions={'float':lambda x: "%.3g" % x}) ``` except I don't think they have this kind of functionality. Permanently rounding the underlying data is of course not an option.
2011/08/16
[ "https://Stackoverflow.com/questions/7076254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/768552/" ]
A simple approach assuming you have lists of floats: ``` >>> round = lambda l: [float('%.3g' % e) if type(e) != list else round(e) for e in l] >>> print {k:round(v) for k,v in x.iteritems()} {'a': [1.06, 2.35, [1.11, 10.0]]} ```
```
>>> b = []
>>> x={'a':[1.05600000001,2.34581736481,[1.1111111112,9.999990111111]]}
>>> for i in x.get('a'):
        if type(i) == type([]):
            for y in i:
                print("%0.3f"%(float(y)))
        else:
            print("%0.3f"%(float(i)))


1.056
2.346
1.111
10.000
```

The problem here is that we don't have a flatten method in Python; since I know it is only 2-level list nesting, I have used a `for` loop.
7,076,254
I have a program which deals with nested data structures where the underlying type usually ends up being a decimal. e.g. ``` x={'a':[1.05600000001,2.34581736481,[1.1111111112,9.999990111111]],...} ``` Is there a simple pythonic way to print such a variable but rounding all floats to (say) 3dp and not assuming a particular configuration of lists and dictionaries? e.g. ``` {'a':[1.056,2.346,[1.111,10.000],...} ``` I'm thinking something like `pformat(x,round=3)` or maybe ``` pformat(x,conversions={'float':lambda x: "%.3g" % x}) ``` except I don't think they have this kind of functionality. Permanently rounding the underlying data is of course not an option.
2011/08/16
[ "https://Stackoverflow.com/questions/7076254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/768552/" ]
This will recursively descend dicts, tuples, lists, etc. formatting numbers and leaving other stuff alone. ``` import collections import numbers def pformat(thing, formatfunc): if isinstance(thing, dict): return type(thing)((key, pformat(value, formatfunc)) for key, value in thing.iteritems()) if isinstance(thing, collections.Container): return type(thing)(pformat(value, formatfunc) for value in thing) if isinstance(thing, numbers.Number): return formatfunc(thing) return thing def formatfloat(thing): return "%.3g" % float(thing) x={'a':[1.05600000001,2.34581736481,[8.1111111112,9.999990111111]], 'b':[3.05600000001,4.34581736481,[5.1111111112,6.999990111111]]} print pformat(x, formatfloat) ``` If you want to try and convert everything to a float, you can do ``` try: return formatfunc(thing) except: return thing ``` instead of the last three lines of the function.
A simple approach assuming you have lists of floats: ``` >>> round = lambda l: [float('%.3g' % e) if type(e) != list else round(e) for e in l] >>> print {k:round(v) for k,v in x.iteritems()} {'a': [1.06, 2.35, [1.11, 10.0]]} ```
49,883,687
How to close a file if it is already open?

```
import xlwings as xw
wb = xw.Book(folderpath + 'Metrics - auto.xlsx')
```

I'm using try/except, but I need a way to close the file so it can be opened, or to find the file and work with it. I get this error if it's already open:

```
wb = xw.Book(folderpath + 'Metrics - auto.xlsx')
Traceback (most recent call last):

  File "<ipython-input-34-85b6fd35627b>", line 1, in <module>
    wb = xw.Book(folderpath + 'Metrics - auto.xlsx')

  File "C:\Users\ReDimLearning\AppData\Local\Continuum\anaconda2\lib\site-packages\xlwings\main.py", line 480, in __init__
    impl = app.books.open(fullname).impl

  File "C:\Users\ReDimLearning\AppData\Local\Continuum\anaconda2\lib\site-packages\xlwings\main.py", line 2751, in open
    "Cannot open two workbooks named '%s', even if they are saved in different locations." % name

ValueError: Cannot open two workbooks named 'metrics - auto.xlsx', even if they are saved in different locations.
```
2018/04/17
[ "https://Stackoverflow.com/questions/49883687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3664733/" ]
You can check the workbook collection with ``` import xlwings as xw xw.books ``` and check if your fullname is already open using something like: ``` if myworkbook in [i.fullname for i in xw.books]: ... ```
I have no experience with the xlwings package, but looking at the [source code](https://github.com/ZoomerAnalytics/xlwings/blob/master/xlwings/main.py) for `Book.__init__`, it looks like it automatically searches for any instances of the workbook that are already open. If there is only one, then it returns it. If there is more than one instance of the workbook open, it will raise an error. If there aren't already any open instances, then it will "connect" to it.

So, it looks like you don't have to worry about closing the file. This lines up with the [documentation](http://docs.xlwings.org/en/stable/connect_to_workbook.html#python-to-excel) which says:

> 
> The easiest way to connect to a book is offered by xw.Book: it looks
> for the book in all app instances and returns an error, should the
> same book be open in multiple instances.
> 
> 

However, after connecting to a workbook, if you really want to close it, there is a [Book.close](http://docs.xlwings.org/en/stable/api.html#xlwings.Book.close) method which:

> 
> Closes the book without saving it.
> 
> 
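Put together, a hedged sketch of that flow looks like this (it assumes the workbook from the question is already open in some Excel instance; `folderpath` is a hypothetical placeholder):

```python
import xlwings as xw

folderpath = 'C:/reports/'  # hypothetical folder, stands in for the question's variable

# Book() searches the open instances first, so this connects to the
# already-open workbook instead of raising or opening a second copy.
wb = xw.Book(folderpath + 'Metrics - auto.xlsx')

# ... read/write sheets here ...

# Only if you actually want it closed afterwards (discards unsaved changes):
wb.close()
```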
35,692,537
I want to scan some websites and would like to get all the JavaScript file names and content. I tried python requests with BeautifulSoup but wasn't able to get the script details and contents. Am I missing something? I have been trying a lot of methods, but I feel like I'm stumbling in the dark.

This is the code I am trying

```
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.marunadanmalayali.com/")
soup = BeautifulSoup(r.content)
```
2016/02/29
[ "https://Stackoverflow.com/questions/35692537", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5478079/" ]
You can get all the linked JavaScript code using the below code:

```
l = [i.get('src') for i in soup.find_all('script') if i.get('src')]
```

* `soup.find_all('script')` returns a list of all the `<script>` tags in the page.
* A [*list comprehension*](https://stackoverflow.com/questions/34835951/) is used here to loop over all the elements in the list which is returned by `soup.find_all('script')`.
* `i` is a *dict like* object, so use `.get('src')` to check if it has a `src` attribute. If not, ignore it. Otherwise, put it into a list (which is called `l` in the example).

The output in this case looks like below:

```
['http://adserver.adtech.de/addyn/3.0/1602/5506153/0/6490/ADTECH;loc=700;target=_blank;grp=[group]',
 'http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 'http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 'http://js.genieessp.com/t/057/794/a1057794.js',
 'http://ib.adnxs.com/ttj?id=5620689&cb=[CACHEBUSTER]&pubclick=[INSERT_CLICK_TAG]',
 'http://ib.adnxs.com/ttj?id=5531763',
 'http://advs.adgorithms.com/ttj?id=3279193&cb=[CACHEBUSTER]&pubclick=[INSERT_CLICK_TAG]',
 'http://xp2.zedo.com/jsc/xp2/fo.js',
 'http://www.marunadanmalayali.com/js/mnmads.js',
 'http://www.marunadanmalayali.com/js/jquery-2.1.0.min.js',
 'http://www.marunadanmalayali.com/js/jquery.hoverIntent.minified.js',
 'http://www.marunadanmalayali.com/js/jquery.dcmegamenu.1.3.3.js',
 'http://www.marunadanmalayali.com/js/jquery.cookie.js',
 'http://www.marunadanmalayali.com/js/swanalekha-ml.js',
 'http://www.marunadanmalayali.com/js/marunadan.js?r=1875',
 'http://www.marunadanmalayali.com/js/taboola_home.js',
 'http://d8.zedo.com/jsc/d8/fo.js']
```

---

My code missed some links because they're not actually in the HTML source. You can see them in the console:

[![Chrome console](https://i.stack.imgur.com/qyvEG.png)](https://i.stack.imgur.com/qyvEG.png)

But they're not in the source:

[![HTML source](https://i.stack.imgur.com/Puvhe.png)](https://i.stack.imgur.com/Puvhe.png)

Usually, that's because these links were generated by JavaScript. And the `requests` module doesn't run any JavaScript in the page like *a real browser* - it only sends a request to get the HTML source.

If you also need them, you have to use another module to run the JavaScript in that page, and you can see these links then. For that, I'd suggest using [**selenium**](http://selenium-python.readthedocs.org/) - which runs *a real browser* so it can run JavaScript in the page.

For example (make sure that you have already installed selenium and a web driver for it):

```
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()  # use Chrome driver for example
driver.get('http://www.marunadanmalayali.com/')
soup = BeautifulSoup(driver.page_source, "html.parser")
l = [i.get('src') for i in soup.find_all('script') if i.get('src')]
__import__('pprint').pprint(l)
```
You can use a select with `script[src]` which will only find script tags with a src, so you don't need to call .get multiple times:

```
import requests
from bs4 import BeautifulSoup

r = requests.get("http://www.marunadanmalayali.com/")
soup = BeautifulSoup(r.content)
src = [sc["src"] for sc in soup.select("script[src]")]
```

You can also specify `src=True` with find\_all to do the same:

```
src = [sc["src"] for sc in soup.find_all("script",src=True)]
```

Which will both give you the same output:

```
['http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 'http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 'http://js.genieessp.com/t/052/954/a1052954.js',
 '//s3-ap-northeast-1.amazonaws.com/tms-t/marunadanmalayali-7219.js',
 'http://advs.adgorithms.com/ttj?id=3279193&cb=[CACHEBUSTER]&pubclick=[INSERT_CLICK_TAG]',
 'http://www.marunadanmalayali.com/js/mnmcombined1.min.js',
 'http://www.marunadanmalayali.com/js/mnmcombined2.min.js']
```

Also, if you use selenium, you can use it with [*PhantomJs*](http://phantomjs.org/) for headless browsing. You don't need BeautifulSoup at all if you use selenium; you can use the same css selector directly in selenium:

```
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get('http://www.marunadanmalayali.com/')
src = [sc.get_attribute("src") for sc in driver.find_elements_by_css_selector("script[src]")]
print(src)
```

Which gives you all the links:

```
u'https://pixel.yabidos.com/fltiu.js?qid=836373f5137373f5131353&cid=511&p=165&s=http%3a%2f%2fwww.marunadanmalayali.com%2f&x=admeta&nci=&adtg=96331&nai=',
 u'http://gum.criteo.com/sync?c=72&r=2&j=TRC.getRTUS',
 u'http://b.scorecardresearch.com/beacon.js',
 u'http://cdn.taboola.com/libtrc/impl.201-1-RELEASE.js',
 u'http://p165.atemda.com/JSAdservingMP.ashx?pc=1&pbId=165&clk=&exm=&jsv=1.84&tsv=2.26&cts=1459160775430&arp=0&fl=0&vitp=0&vit=&jscb=&url=&fp=0;400;300;20&oid=&exr=&mraid=&apid=&apbndl=&mpp=0&uid=&cb=54613943&pId0=64056124&rank0=1&gid0=64056124:1c59ac&pp0=&clk0=[External%20click-tracking%20goes%20here%20(NOT%20URL-encoded)]&rpos0=0&ecpm0=&ntv0=&ntl0=&adsid0=',
 u'http://cdn.taboola.com/libtrc/marunadanaalayali-network/loader.js',
 u'http://s.atemda.com/Admeta.js',
 u'http://www.google-analytics.com/analytics.js',
 u'http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 u'http://tags.expo9.exponential.com/tags/MarunadanMalayalicom/ROS/tags.js',
 u'http://js.genieessp.com/t/052/954/a1052954.js',
 u'http://s3-ap-northeast-1.amazonaws.com/tms-t/marunadanmalayali-7219.js',
 u'http://d8.zedo.com/jsc/d8/fo.js',
 u'http://z1.zedo.com/asw/fm/1185/7219/9/fm.js?c=7219&a=0&f=&n=1185&r=1&d=9&adm=&q=&$=&s=1936&l=%5BINSERT_CLICK_TRACKER_MACRO%5D&ct=&z=0.025054786819964647&tt=0&tz=0&pu=http%3A%2F%2Fwww.marunadanmalayali.com%2F&ru=&pi=1459160768626&ce=UTF-8&zpu=www.marunadanmalayali.com____1_&tpu=',
 u'http://cas.criteo.com/delivery/ajs.php?zoneid=308686&nodis=1&cb=38688817829&exclude=undefined&charset=UTF-8&loc=http%3A//www.marunadanmalayali.com/',
 u'http://ads.pubmatic.com/AdServer/js/showad.js',
 u'http://showads.pubmatic.com/AdServer/AdServerServlet?pubId=135167&siteId=135548&adId=600924&kadwidth=300&kadheight=250&SAVersion=2&js=1&kdntuid=1&pageURL=http%3A%2F%2Fwww.marunadanmalayali.com%2F&inIframe=0&kadpageurl=marunadanmalayali.com&operId=3&kltstamp=2016-3-28%2011%3A26%3A13&timezone=1&screenResolution=1024x768&ranreq=0.8869257988408208&pmUniAdId=0&adVisibility=2&adPosition=999x664',
 u'http://d8.zedo.com/jsc/d8/fo.js',
 u'http://z1.zedo.com/asw/fm/1185/7213/9/fm.js?c=7213&a=0&f=&n=1185&r=1&d=9&adm=&q=&$=&s=1948&l=%5BINSERT_CLICK_TRACKER_MACRO%5D&ct=&z=0.08655649935826659&tt=0&tz=0&pu=http%3A%2F%2Fwww.marunadanmalayali.com%2F&ru=&pi=1459160768626&ce=UTF-8&zpu=www.marunadanmalayali.com____1_&tpu=',
 u'http://advs.adgorithms.com/ttj?id=3279193&cb=[CACHEBUSTER]&pubclick=[INSERT_CLICK_TAG]',
 u'http://ib.adnxs.com/ttj?ttjb=1&bdc=1459160761&bdh=ZllBLkzcj2dGDVPeS0Sw_OTWjgQ.&tpuids=eyJ0cHVpZHMiOlt7InByb3ZpZGVyIjoiY3JpdGVvIiwidXNlcl9pZCI6Il9KRC1PUmhLX3hLczd1cUJhbjlwLU1KQ2VZbDQ2VVUxIn1dfQ==&view_iv=0&view_pos=664,2096&view_ws=400,300&view_vs=3&bdref=http%3A%2F%2Fwww.marunadanmalayali.com%2F&bdtop=true&bdifs=0&bstk=http%3A%2F%2Fwww.marunadanmalayali.com%2F&&id=3279193&cb=[CACHEBUSTER]&pubclick=[INSERT_CLICK_TAG]',
 u'http://www.marunadanmalayali.com/js/mnmcombined1.min.js',
 u'http://www.marunadanmalayali.com/js/mnmcombined2.min.js',
 u'http://pixel.yabidos.com/iftfl.js?ver=1.4.2&qid=836373f5137373f5131353&cid=511&p=165&s=http%3a%2f%2fwww.marunadanmalayali.com%2f&x=admeta&adtg=96331&nci=&nai=&nsi=&cstm1=&cstm2=&cstm3=&kqt=&xc=&test=&od1=&od2=&co=0&tps=34&rnd=3m17uji8ftbf']
```
71,443,087
i'm struggling to debug my python code with regex in PyCharm. The idea: I want to find any case of 'here we are', which can go with or without 'attention', and the word 'attention' can be separated by whitespace, dot, comma, exclamation mark. I expect this expression should do the job ``` r'(attention.{0,2})?here we are' ``` Online services like <https://regex101.com/> and <https://pythex.org/> confirm my expression is correct – and i'm getting expected "attention! here we are" However, if i run the below code in PyCharm I'm getting such (unexpected for me) result. ``` my_string_1 = 'attention! here we are!' my_list = re.findall(r'(attention.{0,2})?here we are', my_string_1) print(my_list) >>> ['attention! '] ``` Could someone direct me to the reason why PyCharm's outcome is different? Thanks
2022/03/11
[ "https://Stackoverflow.com/questions/71443087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15769721/" ]
I would assume that you're using class components, so the solution I would provide is for that. First step is to import ConnectedProps:

```
import { connect, ConnectedProps } from 'react-redux'
```

The next step is to define the interfaces for your state/reducer, so in a file that we can name `TsConnector.ts`, add something like:

```
interface UIState {
    a: string,
    b: string,
    ...your other props
}

interface State {
    UI: UIState,
    ...your other props
}
```

Then create your `mapState` and `mapDispatch` functions:

```
const mapState = (state: State) => ({
    UI: state.UI,
    ...your other props
})

const mapDispatch = {
    ... your dispatch action
}
```

And export:

```
export const tsconnector = connect(mapState, mapDispatch)
```

Then use this as you normally would an HOC component:

```
import { tsconnector } from '../components/TsConnector'
export var About = tsconnector(About_Class);
```

And that's it.
Have you tried `useDispatch` and `useSelector` in a ts file to get the redux state?

```
import {
  useSelector as useReduxSelector,
  TypedUseSelectorHook,
} from 'react-redux'

// RootState is your store's root state type (e.g. ReturnType<typeof store.getState>)
export const useSelector: TypedUseSelectorHook<RootState> = useReduxSelector
```
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
You can look into [Twiggy](https://github.com/wearpants/twiggy); it's an early-stage attempt to build a more pythonic alternative to the logging module.
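A minimal usage sketch, based on Twiggy's quickstart (hedged: the project is early-stage, so treat these exact names as an assumption and check its docs):

```python
from twiggy import quick_setup, log

quick_setup()  # default setup: emit everything to stderr

log.name('myapp').info('hello {0}', 'world')
log.name('myapp').fields(request_id=42).warning('something odd happened')
```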
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import logging
import logging.handlers
from logging.config import dictConfig

logger = logging.getLogger(__name__)

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
}


def configure_logging(logfile_path):
    """
    Initialize logging defaults for Project.

    :param logfile_path: path to the logfile
    :type logfile_path: string

    This function does:

    - Assign INFO and DEBUG level to logger file handler and console handler

    """
    dictConfig(DEFAULT_LOGGING)

    default_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
        "%d/%m/%Y %H:%M:%S")

    file_handler = logging.handlers.RotatingFileHandler(logfile_path, maxBytes=10485760, backupCount=300, encoding='utf-8')
    file_handler.setLevel(logging.INFO)

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)

    file_handler.setFormatter(default_formatter)
    console_handler.setFormatter(default_formatter)

    logging.root.setLevel(logging.DEBUG)
    logging.root.addHandler(file_handler)
    logging.root.addHandler(console_handler)
```

Sample output:

```
[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger information from hello module
```

You can configure logging with both console and file handlers. I don't think desktop notification is a good idea; you can see the log information in the console and the logfiles.
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
You can look into [Twiggy](https://github.com/wearpants/twiggy); it's an early-stage attempt to build a more pythonic alternative to the logging module.
You might want to have a look at [pysimplelog](http://bachiraoun.github.io/pysimplelog/). It's pure python, very simple to use, pip installable and provides what you need ``` from pysimplelog import Logger L=Logger() print L >>> Logger (Version 0.2.1) >>> log type |log name |level |std flag |file flag | >>> ----------|----------|----------|----------|----------| >>> debug |DEBUG |0.0 |True |True | >>> info |INFO |10.0 |True |True | >>> warn |WARNING |20.0 |True |True | >>> error |ERROR |30.0 |True |True | >>> critical |CRITICAL |100.0 |True |True | L.info('I am an info') >>> 2016-09-26 15:01:17 - logger <INFO> I am an info L.warn('I am a warning') >>> 2016-09-26 15:01:17 - logger <WARNING> I am a warning L.error('I am an error') >>> 2016-09-26 15:01:17 - logger <ERROR> I am an error ``` and with those parameters, a 'simplelog.log' file will be created and updated automatically for you
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
You can look into [Twiggy](https://github.com/wearpants/twiggy); it's an early-stage attempt to build a more pythonic alternative to the logging module.
Check out [logbook](https://github.com/getlogbook/logbook); it is much nicer to work with. Logbook was mentioned in a comment, but it deserves its own answer.
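A minimal sketch of what that looks like (assuming logbook's documented `Logger`/`StreamHandler` quickstart API):

```python
import sys
from logbook import Logger, StreamHandler

StreamHandler(sys.stdout).push_application()  # route records to stdout

log = Logger('My app')
log.info('hello from logbook')
log.warning('something looks off')
```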
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
You might want to have a look at [pysimplelog](http://bachiraoun.github.io/pysimplelog/). It's pure python, very simple to use, pip installable and provides what you need ``` from pysimplelog import Logger L=Logger() print L >>> Logger (Version 0.2.1) >>> log type |log name |level |std flag |file flag | >>> ----------|----------|----------|----------|----------| >>> debug |DEBUG |0.0 |True |True | >>> info |INFO |10.0 |True |True | >>> warn |WARNING |20.0 |True |True | >>> error |ERROR |30.0 |True |True | >>> critical |CRITICAL |100.0 |True |True | L.info('I am an info') >>> 2016-09-26 15:01:17 - logger <INFO> I am an info L.warn('I am a warning') >>> 2016-09-26 15:01:17 - logger <WARNING> I am a warning L.error('I am an error') >>> 2016-09-26 15:01:17 - logger <ERROR> I am an error ``` and with those parameters, a 'simplelog.log' file will be created and updated automatically for you
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import logging
import logging.handlers
from logging.config import dictConfig

logger = logging.getLogger(__name__)

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
}


def configure_logging(logfile_path):
    """
    Initialize logging defaults for Project.

    :param logfile_path: path to the logfile
    :type logfile_path: string

    This function does:

    - Assign INFO and DEBUG level to logger file handler and console handler

    """
    dictConfig(DEFAULT_LOGGING)

    default_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
        "%d/%m/%Y %H:%M:%S")

    file_handler = logging.handlers.RotatingFileHandler(logfile_path, maxBytes=10485760, backupCount=300, encoding='utf-8')
    file_handler.setLevel(logging.INFO)

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)

    file_handler.setFormatter(default_formatter)
    console_handler.setFormatter(default_formatter)

    logging.root.setLevel(logging.DEBUG)
    logging.root.addHandler(file_handler)
    logging.root.addHandler(console_handler)
```

Sample output:

```
[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger information from hello module
```

You can configure logging with both console and file handlers. I don't think desktop notification is a good idea; you can see the log information in the console and the logfiles.
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
Check out [logbook](https://github.com/getlogbook/logbook); it is much nicer to work with. Logbook was mentioned in a comment, but it deserves its own answer.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import logging
import logging.handlers
from logging.config import dictConfig

logger = logging.getLogger(__name__)

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
}


def configure_logging(logfile_path):
    """
    Initialize logging defaults for Project.

    :param logfile_path: path to the logfile
    :type logfile_path: string

    This function does:

    - Assign INFO and DEBUG level to logger file handler and console handler

    """
    dictConfig(DEFAULT_LOGGING)

    default_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
        "%d/%m/%Y %H:%M:%S")

    file_handler = logging.handlers.RotatingFileHandler(logfile_path, maxBytes=10485760, backupCount=300, encoding='utf-8')
    file_handler.setLevel(logging.INFO)

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)

    file_handler.setFormatter(default_formatter)
    console_handler.setFormatter(default_formatter)

    logging.root.setLevel(logging.DEBUG)
    logging.root.addHandler(file_handler)
    logging.root.addHandler(console_handler)
```

Sample output:

```
[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger information from hello module
```

You can configure logging with both console and file handlers. I don't think desktop notification is a good idea; you can see the log information in the console and the logfiles.
3,878,195
The Python [logging module](http://docs.python.org/library/logging.html) is cumbersome to use. Is there a more elegant alternative? Integration with desktop notifications would be a plus.
2010/10/07
[ "https://Stackoverflow.com/questions/3878195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/105066/" ]
Check out [logbook](https://github.com/getlogbook/logbook); it is much nicer to work with. Logbook was mentioned in a comment, but it deserves its own answer.
You might want to have a look at [pysimplelog](http://bachiraoun.github.io/pysimplelog/). It's pure python, very simple to use, pip installable and provides what you need ``` from pysimplelog import Logger L=Logger() print L >>> Logger (Version 0.2.1) >>> log type |log name |level |std flag |file flag | >>> ----------|----------|----------|----------|----------| >>> debug |DEBUG |0.0 |True |True | >>> info |INFO |10.0 |True |True | >>> warn |WARNING |20.0 |True |True | >>> error |ERROR |30.0 |True |True | >>> critical |CRITICAL |100.0 |True |True | L.info('I am an info') >>> 2016-09-26 15:01:17 - logger <INFO> I am an info L.warn('I am a warning') >>> 2016-09-26 15:01:17 - logger <WARNING> I am a warning L.error('I am an error') >>> 2016-09-26 15:01:17 - logger <ERROR> I am an error ``` and with those parameters, a 'simplelog.log' file will be created and updated automatically for you
64,913,140
As per the [documentation](https://docs.python.org/3/reference/datamodel.html#object.__bool__), every class has a default `__bool__` that returns `True`. Is there a way to "remove" this default behaviour so that it raises an error when used as a bool (for instance in an expression like `if obj`)? And especially, is there a way to specify this with, for instance, mypy, so as to make static inspection aware of unwanted behaviours?
2020/11/19
[ "https://Stackoverflow.com/questions/64913140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4444546/" ]
You can use the Array map function on **option.selectedOption** too, like this:

```
{restaurant.map((option, marker) => (
    <p key={marker.id}>
      {option.selectedOption.map((optn, index) => (
         <strong key={index}>
           {optn.label + ', '}
         </strong>
      ))}
    </p>
  ))}
```

and show the label and value according to your requirements inside the JSX.
You should do this instead:

```
{restaurant.map((option, marker) => (
    <p key={marker.id}>
      <strong>
        {option.selectedOption.reduce((labels, label, index) => labels + `${index > 0 ? ', ' : ''}${label.label}`, '')}
      </strong>
    </p>
))}
```

The array function `reduce()` allows you to (as the name says) reduce an array into a single value. More info [HERE](https://developer.mozilla.org/fr/docs/Web/JavaScript/Reference/Objets_globaux/Array/reduce)