Dataset columns:

| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
52,911,232
I'm trying to install and import the Basemap library into my Jupyter Notebook, but this returns the following error: ``` KeyError: 'PROJ_LIB' ``` After some research online, I understand I should install Basemap in a separate environment in Anaconda. After creating a new environment and installing Basemap (as well as all other relevant libraries), I have activated the environment. But when importing Basemap I still receive the same KeyError. Here's what I did in my macOS terminal: ``` conda create --name Py3.6 python=3.6 basemap source activate Py3.6 conda upgrade proj4 env | grep -i proj conda update --channel conda-forge proj4 ``` Then in Jupyter Notebook I run the following: ``` from mpl_toolkits.basemap import Basemap ``` Can anyone tell me why this results in a KeyError?
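For context, the `KeyError` here is just a plain dictionary lookup failing: Basemap reads `os.environ['PROJ_LIB']` at import time, and when the variable is unset the lookup raises. A minimal sketch of that failure mode (the variable name is the only detail carried over from the traceback):

```python
import os

# Simulate an environment where PROJ_LIB was never exported.
os.environ.pop("PROJ_LIB", None)

try:
    proj_dir = os.environ["PROJ_LIB"]  # effectively what basemap does at import
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'PROJ_LIB'
```

Any fix therefore boils down to making sure the variable points at the proj data directory before the import happens.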
2018/10/21
[ "https://Stackoverflow.com/questions/52911232", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10534668/" ]
In Windows 10 command line: first find the directory where the **epsg** file is stored: ``` where /r "c:\Users\username" epsg.* ``` ... **c:\Users\username\AppData\Local\conda\conda\envs\envname\Library\share**\epsg ... then either in command line: ``` activate envname SET PROJ_LIB=c:\Users\username\AppData\Local\conda\conda\envs\envname\Library\share ``` (make sure there are no leading or trailing spaces in the path!) and then ``` jupyter notebook ``` or add this to your jupyter notebook (as suggested by john ed): ``` import os os.environ['PROJ_LIB'] = r'c:\Users\username\AppData\Local\conda\conda\envs\envname\Library\share' ```
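A hedged, path-agnostic variant of the same fix: instead of hard-coding the user and environment names, derive the `share` directory from the active interpreter's prefix (`sys.prefix` points inside the conda env when it is activated). The `Library/share` layout is a Windows-conda assumption; on Linux/macOS envs the equivalent is usually plain `share`:

```python
import os
import sys

# Candidate locations for the proj data dir, relative to the active env.
candidates = [
    os.path.join(sys.prefix, "Library", "share"),  # Windows conda layout
    os.path.join(sys.prefix, "share"),             # Linux/macOS conda layout
]
for path in candidates:
    if os.path.isdir(path):
        os.environ.setdefault("PROJ_LIB", path)
        break
print(os.environ.get("PROJ_LIB"))
```

Run this before `from mpl_toolkits.basemap import Basemap` so the variable is set by import time.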
If you **cannot locate the epsg file** at all, you can download it here: <https://raw.githubusercontent.com/matplotlib/basemap/master/lib/mpl_toolkits/basemap/data/epsg> And copy this file to your path, e.g. to: `os.environ['PROJ_LIB'] = r'C:\Users\username\Anaconda3\pkgs\basemap-1.2.0-py37h4e5d7af_0\Lib\site-packages\mpl_toolkits\basemap\data'` (note the raw-string prefix and no trailing backslash, which would be a syntax error). This is the ONLY solution that worked for me on Windows 10 / Anaconda 3.
67,439,037
Code to extract sequences: ``` from Bio import SeqIO def get_cds_feature_with_qualifier_value(seq_record, name, value): for feature in genome_record.features: if feature.type == "CDS" and value in feature.qualifiers.get(name, []): return feature return None genome_record = SeqIO.read("470.8208.gbk", "genbank") db_xref = ['fig|470.8208.peg.2198', 'fig|470.8208.peg.2200', 'fig|470.8208.peg.2203', 'fig|470.8208.peg.2199', 'fig|470.8208.peg.2201', 'fig|470.8208.peg.2197', 'fig|470.8208.peg.2202', 'fig|470.8208.peg.2501', 'fig|470.8208.peg.2643', 'fig|470.8208.peg.2193', 'fig|470.8208.peg.2670', 'fig|470.8208.peg.2695', 'fig|470.8208.peg.2696', 'fig|470.8208.peg.2189', 'fig|470.8208.peg.2458', 'fig|470.8208.peg.2191', 'fig|470.8208.peg.2190', 'fig|470.8208.peg.2188', 'fig|470.8208.peg.2192', 'fig|470.8208.peg.2639', 'fig|470.8208.peg.3215', 'fig|470.8208.peg.2633', 'fig|470.8208.peg.2682', 'fig|470.8208.peg.3186', 'fig|470.8208.peg.2632', 'fig|470.8208.peg.2683', 'fig|470.8208.peg.3187', 'fig|470.8208.peg.2764', 'fig|470.8208.peg.2686', 'fig|470.8208.peg.2638', 'fig|470.8208.peg.2680', 'fig|470.8208.peg.2685', 'fig|470.8208.peg.2684', 'fig|470.8208.peg.2633', 'fig|470.8208.peg.2682', 'fig|470.8208.peg.3186', 'fig|470.8208.peg.2632', 'fig|470.8208.peg.2683', 'fig|470.8208.peg.3187', 'fig|470.8208.peg.2640', 'fig|470.8208.peg.3221', 'fig|470.8208.peg.3222', 'fig|470.8208.peg.3389', 'fig|470.8208.peg.2764', 'fig|470.8208.peg.2653', 'fig|470.8208.peg.3216', 'fig|470.8208.peg.3231', 'fig|470.8208.peg.2641', 'fig|470.8208.peg.2638', 'fig|470.8208.peg.2680', 'fig|470.8208.peg.2637', 'fig|470.8208.peg.2642', 'fig|470.8208.peg.2679', 'fig|470.8208.peg.3230', 'fig|470.8208.peg.2676', 'fig|470.8208.peg.2677', 'fig|470.8208.peg.1238', 'fig|470.8208.peg.2478', 'fig|470.8208.peg.2639', 'fig|470.8208.peg.854', 'fig|470.8208.peg.382', 'fig|470.8208.peg.383'] with open("nucleotides.fasta", "w") as nt_output, open("proteins.fasta", "w") as aa_output: for xref in db_xref: print ("Looking at " + xref) cds_feature = get_cds_feature_with_qualifier_value (genome_record, "db_xref", xref) gene_sequence = cds_feature.extract(genome_record.seq) protein_sequence = gene_sequence.translate(table=11, cds=True) # This is asking Python to halt if the translation does not match: assert protein_sequence == cds_feature.qualifiers["translation"][0] # Output FASTA records - note \n means insert a new line. # This is a little lazy as it won't line wrap the sequence: nt_output.write(">%s\n%s\n" % (xref, gene_sequence)) aa_output.write(">%s\n%s\n" % (xref, gene_sequence)) print("Done") ``` I'm getting the following error: ``` /usr/local/lib/python3.7/dist-packages/Bio/GenBank/Scanner.py:1394: BiopythonParserWarning: Truncated LOCUS line found - is this correct? :'LOCUS CP027704 3430798 bp DNA linear UNK \n' BiopythonParserWarning, Looking at fig|470.8208.peg.2198 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-32-323ff320990a> in <module>() 15 print ("Looking at " + xref) 16 cds_feature = get_cds_feature_with_qualifier_value (genome_record, "db_xref", xref) ---> 17 gene_sequence = cds_feature.extract(genome_record.seq) 18 protein_sequence = gene_sequence.translate(table=11, cds=True) 19 AttributeError: 'NoneType' object has no attribute 'extract' ```
2021/05/07
[ "https://Stackoverflow.com/questions/67439037", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13965256/" ]
You can resolve the inline execution error by changing `scriptTag.innerHTML = scriptText;` to `scriptTag.src = chrome.runtime.getURL(filePath);`, no need to fetch the script. Manifest v3 seems to only allow injecting static scripts into the page context. If you want to run dynamically sourced scripts I think this can be achieved by having the static (already trusted) script fetch a remote script then eval it. UPDATE: example extension, with manifest v3, that injects a script that operates in the page context. ``` # myscript.js window.variableInMainContext = "hi" ``` ``` # manifest.json { "name": "example", "version": "1.0", "description": "example extension", "manifest_version": 3, "content_scripts": [ { "matches": ["https://*/*"], "run_at": "document_start", "js": ["inject.js"] } ], "web_accessible_resources": [ { "resources": [ "myscript.js" ], "matches": [ "https://*/*" ] } ] } ``` ``` # inject.js const nullthrows = (v) => { if (v == null) throw new Error("it's a null"); return v; } function injectCode(src) { const script = document.createElement('script'); // This is why it works! script.src = src; script.onload = function() { console.log("script injected"); this.remove(); }; // This script runs before the <head> element is created, // so we add the script to <html> instead. nullthrows(document.head || document.documentElement).appendChild(script); } injectCode(chrome.runtime.getURL('/myscript.js')); ```
Download the script files and put them in your project to make them local. That solved my content security policy problem.
20,564,010
I am calling a python script from a ruby program as: ``` sku = ["VLJAI20225", "VJLS1234"] qty = ["3", "7"] system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku} #{qtys}" ``` But I'd like to access the array elements in the python script. ``` print sys.argv[1] #gives [VLJAI20225, expected ["VLJAI20225", "VJLS1234"] print sys.argv[2] #gives VJLS1234] expected ["3", "7"] ``` I feel that the space between the array elements is treating the array elements as separate arguments. I may be wrong. How can I pass the array correctly?
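The suspicion in the last paragraph is right, and it can be demonstrated without Ruby: `system` with a single string goes through a shell, which strips the quotes and splits on whitespace. A sketch using Python's `shlex` to mimic the shell's word splitting (the command string is a shortened stand-in for the real one):

```python
import shlex

# Roughly what Ruby's string interpolation produces before the shell sees it.
cmd = 'python2 ReviseItem.py ["VLJAI20225", "VJLS1234"] ["3", "7"]'

# shlex.split applies POSIX-shell tokenization: quotes removed, split on spaces.
print(shlex.split(cmd))
```

The third token comes out as `[VLJAI20225,`, exactly matching the truncated `sys.argv[1]` observed above.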
2013/12/13
[ "https://Stackoverflow.com/questions/20564010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2388940/" ]
Not a Python specialist, but a quick fix would be to use JSON: ``` system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku.to_json} #{qtys.to_json}" ``` Then parse it in your Python script.
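To complete the picture on the Python side, a hedged sketch of the decoding step, assuming the JSON strings survive shell quoting (in practice they may need to be wrapped in single quotes as well). The two-element argv layout mirrors the call above:

```python
import json

def parse_args(argv):
    # argv[0] is the script path; argv[1]/argv[2] are JSON-encoded arrays.
    return json.loads(argv[1]), json.loads(argv[2])

# Stand-in for sys.argv as produced by the to_json version of the call:
skus, qtys = parse_args(["ReviseItem.py", '["VLJAI20225", "VJLS1234"]', '["3", "7"]'])
print(skus, qtys)  # ['VLJAI20225', 'VJLS1234'] ['3', '7']
```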
``` "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py \'#{sku}\' \'#{qtys}\'" ``` Maybe something like this?
You need to find a suitable protocol to encode your data (your array, list, whatever) over the interface you've chosen (this much is true quite generally). In your case, you've chosen as interface the Unix process call mechanism, which only allows passing a list of strings at call time. This list is also rather limited (you cannot pass, say, a gigabyte that way), so you might want to consider passing data via a pipe between the two processes. Anyway, all you can do is pass bytes, so I propose to encode your data accordingly, e.g. as a JSON string. That means encode your Ruby array in JSON, transfer that JSON string, then decode the received JSON string on the Python side to get a proper Python list again. This only sketches the approach and reasoning behind it. Since I'm not fluent in Ruby, I will leave that part to you (see probably @apneadiving's answer on that), but on the Python side you can use ``` import json myList = json.loads(s) ``` to convert a JSON string `s` to a Python list `myList`.
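The pipe alternative mentioned above can be sketched entirely in Python (the child command is a stand-in for the real ReviseItem.py; all names are illustrative): stream the JSON over stdin instead of argv, which sidesteps both shell quoting and argument-length limits.

```python
import json
import subprocess
import sys

payload = json.dumps({"sku": ["VLJAI20225", "VJLS1234"], "qty": ["3", "7"]})

# The child reads one JSON document from stdin and answers on stdout.
child = subprocess.run(
    [sys.executable, "-c",
     "import sys, json; d = json.load(sys.stdin); print(d['sku'][0])"],
    input=payload, capture_output=True, text=True, check=True,
)
print(child.stdout.strip())  # VLJAI20225
```

Passing the argv as a list (rather than one interpolated string) also avoids the shell entirely, which is the same trick Ruby's multi-argument `system` form uses.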
66,297,277
I'm trying to embed python code in a C++ application. The problem is that if I use OpenCV functions in the C++ code and also in the python function I am embedding, there is memory corruption. Simply commenting out all the opencv functions from the code below solves the problem. One thing to note is that my OpenCV for C++ is 4.5.0 (dynamically linked), compiled from source, while the version used in python is 3.1.0 (installed using the python wheel). Looking at the backtrace (but I am not an expert on this), it seems to me that the resize function in python is somehow relying on the OpenCV 4.5.0 library to free memory, causing some sort of conflict between the two versions. Python is version 3.5.2. This is a minimal example. My main cpp code: ``` #include <iostream> #include <Python.h> #include <opencv2/opencv.hpp> using namespace std; int main() { cv::Mat test = cv::imread("/home/rocco/repo/cpp_python/current.jpg"); cv::Mat test_res; cv::resize(test,test_res,cv::Size(),0.5,0.5); cv::imwrite("resized.jpg",test_res); std::string path = "/home/rocco/repo/cpp_python/current.jpg"; std::string filename = "script"; std::string function_name = "myfunction"; std::string wkd = "/home/rocco/repo/cpp_python"; auto script_name = PyUnicode_FromString(filename.c_str()); Py_Initialize(); PyObject *sys_path = PySys_GetObject("path"); PyList_Insert(sys_path, 0, PyUnicode_FromString(wkd.c_str())); PySys_SetObject("path", sys_path); PyObject *pModule; pModule = PyImport_Import(script_name); auto pFunc = PyObject_GetAttrString(pModule, function_name.c_str()); PyObject * pValue = NULL, * pArgs = NULL; pArgs = PyTuple_New(2); auto pFilename = PyUnicode_FromString(path.c_str()); PyTuple_SetItem(pArgs, 0, pFilename); pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); Py_DECREF(pFunc); Py_DECREF(pModule); Py_DECREF(pFilename); Py_Finalize(); return 0; } ``` My python script: ``` import cv2 def myfunction(filename): img = cv2.imread(filename, cv2.IMREAD_UNCHANGED) scale_percent = 0.5 # percent of original size img 
= cv2.resize(img, None, fx=scale_percent, fy=scale_percent) cv2.imwrite('resized2.jpg',img) myfunction('current.jpg') ``` Backtrace provided here: ``` *** Error in `/home/rocco/repo/cpp_python/cpp_python': free(): invalid next size (fast): 0x0000000002447670 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7f1a8c9807f5] /lib/x86_64-linux-gnu/libc.so.6(+0x8038a)[0x7f1a8c98938a] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f1a8c98d58c] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x103b8c)[0x7f1a8639bb8c] /home/rocco/dynamic_libraries/opencv_4-5/lib/libopencv_core.so.4.5(_ZN2cv3Mat10deallocateEv+0x100)[0x7f1a8d5aa140] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x114fe8)[0x7f1a863acfe8] /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so(+0x1154d4)[0x7f1a863ad4d4] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xe9)[0x7f1a90686049] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x7555)[0x7f1a90792135] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCodeEx+0x23)[0x7f1a90822d43] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCode+0x1b)[0x7f1a9078a94b] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x1bf5fd)[0x7f1a907975fd] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xc9)[0x7f1a90686029] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x8c0e)[0x7f1a907937ee] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x62d9)[0x7f1a90790eb9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] 
/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalFrameEx+0x79d9)[0x7f1a907925b9] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x24ac6c)[0x7f1a90822c6c] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyEval_EvalCodeEx+0x23)[0x7f1a90822d43] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0xd2ac8)[0x7f1a906aaac8] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_Call+0x6e)[0x7f1a9075f46e] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(_PyObject_CallMethodIdObjArgs+0x1bf)[0x7f1a9073d95f] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyImport_ImportModuleLevelObject+0x864)[0x7f1a907c98d4] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(+0x1be638)[0x7f1a90796638] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyCFunction_Call+0xe9)[0x7f1a90686049] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_Call+0x6e)[0x7f1a9075f46e] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyObject_CallFunction+0xcf)[0x7f1a9075f5cf] /usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0(PyImport_Import+0xe6)[0x7f1a907c9d66] /home/rocco/repo/cpp_python/cpp_python[0x402291] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f1a8c929840] /home/rocco/repo/cpp_python/cpp_python[0x401e89] ======= Memory map: ======== 00400000-00407000 r-xp 00000000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 00606000-00607000 r--p 00006000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 00607000-00608000 rw-p 00007000 08:04 9180205 /home/rocco/repo/cpp_python/cpp_python 01e0d000-0251f000 rw-p 00000000 00:00 0 [heap] 7f1a64000000-7f1a64021000 rw-p 00000000 00:00 0 7f1a64021000-7f1a68000000 ---p 00000000 00:00 0 7f1a690f0000-7f1a690f1000 rw-p 00000000 00:00 0 7f1a690f1000-7f1a6a74d000 rw-p 00000000 00:00 0 7f1a6a74d000-7f1a6beeb000 rw-p 00000000 00:00 0 7f1a6beeb000-7f1a6bf7f000 r-xp 00000000 08:04 530555 
/home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6bf7f000-7f1a6c17f000 ---p 00094000 08:04 530555 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6c17f000-7f1a6c1a3000 rw-p 00094000 08:04 530555 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6c1a3000-7f1a6c1a5000 rw-p 00000000 00:00 0 7f1a6c1a5000-7f1a6c1b0000 r-xp 00000000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c1b0000-7f1a6c3af000 ---p 0000b000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3af000-7f1a6c3b1000 rw-p 0000a000 08:04 530563 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_sfc64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3b1000-7f1a6c3bf000 r-xp 00000000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c3bf000-7f1a6c5bf000 ---p 0000e000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c5bf000-7f1a6c5c1000 rw-p 0000e000 08:04 530561 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_pcg64.cpython-35m-x86_64-linux-gnu.so 7f1a6c5c1000-7f1a6c5d2000 r-xp 00000000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c5d2000-7f1a6c7d1000 ---p 00011000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c7d1000-7f1a6c7d3000 rw-p 00010000 08:04 530566 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_philox.cpython-35m-x86_64-linux-gnu.so 7f1a6c7d3000-7f1a6c7d4000 rw-p 00000000 00:00 0 7f1a6c7d4000-7f1a6c7eb000 r-xp 00000000 08:04 530559 
/home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c7eb000-7f1a6c9eb000 ---p 00017000 08:04 530559 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c9eb000-7f1a6c9ed000 rw-p 00017000 08:04 530559 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_mt19937.cpython-35m-x86_64-linux-gnu.so 7f1a6c9ed000-7f1a6ca3b000 r-xp 00000000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6ca3b000-7f1a6cc3b000 ---p 0004e000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6cc3b000-7f1a6cc3d000 rw-p 0004e000 08:04 530556 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bounded_integers.cpython-35m-x86_64-linux-gnu.so 7f1a6cc3d000-7f1a6ce58000 r-xp 00000000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6ce58000-7f1a6d057000 ---p 0021b000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d057000-7f1a6d073000 r--p 0021a000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d073000-7f1a6d07f000 rw-p 00236000 08:04 9437776 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f1a6d07f000-7f1a6d082000 rw-p 00000000 00:00 0 7f1a6d082000-7f1a6d088000 r-xp 00000000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d088000-7f1a6d287000 ---p 00006000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d287000-7f1a6d288000 r--p 00005000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d288000-7f1a6d289000 rw-p 00006000 08:04 7086087 /usr/lib/python3.5/lib-dynload/_hashlib.cpython-35m-x86_64-linux-gnu.so 7f1a6d289000-7f1a6d2b9000 r-xp 00000000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 
7f1a6d2b9000-7f1a6d4b8000 ---p 00030000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 7f1a6d4b8000-7f1a6d4ba000 rw-p 0002f000 08:04 530568 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_common.cpython-35m-x86_64-linux-gnu.so 7f1a6d4ba000-7f1a6d4bb000 rw-p 00000000 00:00 0 7f1a6d4bb000-7f1a6d4e0000 r-xp 00000000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d4e0000-7f1a6d6e0000 ---p 00025000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d6e0000-7f1a6d6e5000 rw-p 00025000 08:04 530557 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/_bit_generator.cpython-35m-x86_64-linux-gnu.so 7f1a6d6e5000-7f1a6d6e6000 rw-p 00000000 00:00 0 7f1a6d6e6000-7f1a6d75e000 r-xp 00000000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d75e000-7f1a6d95d000 ---p 00078000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d95d000-7f1a6d983000 rw-p 00077000 08:04 530558 /home/rocco/.local/lib/python3.5/site-packages/numpy/random/mtrand.cpython-35m-x86_64-linux-gnu.so 7f1a6d983000-7f1a6d985000 rw-p 00000000 00:00 0 7f1a6d985000-7f1a6d99b000 r-xp 00000000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6d99b000-7f1a6db9a000 ---p 00016000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6db9a000-7f1a6db9b000 rw-p 00015000 08:04 529217 /home/rocco/.local/lib/python3.5/site-packages/numpy/fft/_pocketfft_internal.cpython-35m-x86_64-linux-gnu.so 7f1a6db9b000-7f1a6dbd2000 r-xp 00000000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6dbd2000-7f1a6ddd1000 
---p 00037000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd1000-7f1a6ddd2000 r--p 00036000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd2000-7f1a6ddd3000 rw-p 00037000 08:04 6824744 /usr/lib/x86_64-linux-gnu/libmpdec.so.2.4.2 7f1a6ddd3000-7f1a6ddf7000 r-xp 00000000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6ddf7000-7f1a6dff6000 ---p 00024000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6dff6000-7f1a6dff7000 r--p 00023000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6dff7000-7f1a6e000000 rw-p 00024000 08:04 7150229 /usr/lib/python3.5/lib-dynload/_decimal.cpython-35m-x86_64-linux-gnu.so 7f1a6e000000-7f1a70000000 rw-p 00000000 00:00 0 7f1a70000000-7f1a70021000 rw-p 00000000 00:00 0 7f1a70021000-7f1a74000000 ---p 00000000 00:00 0 7f1a74000000-7f1a78000000 rw-p 00000000 00:00 0 7f1a78000000-7f1a78021000 rw-p 00000000 00:00 0 7f1a78021000-7f1a7c000000 ---p 00000000 00:00 0 7f1a7c010000-7f1a7c190000 rw-p 00000000 00:00 0 7f1a7c190000-7f1a7c197000 r-xp 00000000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c197000-7f1a7c396000 ---p 00007000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c396000-7f1a7c397000 r--p 00006000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c397000-7f1a7c399000 rw-p 00007000 08:04 7136401 /usr/lib/python3.5/lib-dynload/_lzma.cpython-35m-x86_64-linux-gnu.so 7f1a7c399000-7f1a7c3a8000 r-xp 00000000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c3a8000-7f1a7c5a7000 ---p 0000f000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a7000-7f1a7c5a8000 r--p 0000e000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a8000-7f1a7c5a9000 rw-p 0000f000 08:04 9437491 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7f1a7c5a9000-7f1a7c5ad000 r-xp 
00000000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c5ad000-7f1a7c7ac000 ---p 00004000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ac000-7f1a7c7ad000 r--p 00003000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ad000-7f1a7c7ae000 rw-p 00004000 08:04 7136410 /usr/lib/python3.5/lib-dynload/_bz2.cpython-35m-x86_64-linux-gnu.so 7f1a7c7ae000-7f1a7c8ae000 rw-p 00000000 00:00 0 7f1a7c8ae000-7f1a7c8d9000 r-xp 00000000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7c8d9000-7f1a7cad9000 ---p 0002b000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cad9000-7f1a7cadb000 rw-p 0002b000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cadb000-7f1a7cade000 rw-p 000d4000 08:04 530149 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/_umath_linalg.cpython-35m-x86_64-linux-gnu.so 7f1a7cade000-7f1a7cae2000 r-xp 00000000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cae2000-7f1a7cce2000 ---p 00004000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce2000-7f1a7cce3000 rw-p 00004000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce3000-7f1a7cce5000 rw-p 00019000 08:04 530150 /home/rocco/.local/lib/python3.5/site-packages/numpy/linalg/lapack_lite.cpython-35m-x86_64-linux-gnu.so 7f1a7cce5000-7f1a7cda5000 rw-p 00000000 00:00 0 7f1a7cda5000-7f1a7cdc7000 r-xp 00000000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cdc7000-7f1a7cfc6000 ---p 00022000 08:04 7136407 
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfc6000-7f1a7cfc7000 r--p 00021000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfc7000-7f1a7cfcb000 rw-p 00022000 08:04 7136407 /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so 7f1a7cfcb000-7f1a7cfcc000 rw-p 00000000 00:00 0 7f1a7cfcc000-7f1a7cfeb000 r-xp 00000000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7cfeb000-7f1a7d1eb000 ---p 0001f000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7d1eb000-7f1a7d1ed000 rw-p 0001f000 08:04 530237 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_tests.cpython-35m-x86_64-linux-gnu.so 7f1a7d1ed000-7f1a7d26d000 rw-p 00000000 00:00 0 7f1a7d26d000-7f1a7d26e000 ---p 00000000 00:00 0 7f1a7d26e000-7f1a7da6e000 rw-p 00000000 00:00 0 7f1a7da6e000-7f1a7f564000 r-xp 00000000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f564000-7f1a7f763000 ---p 01af6000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f763000-7f1a7f77c000 rw-p 01af5000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f77c000-7f1a7f787000 rw-p 00000000 00:00 0 7f1a7f787000-7f1a7f7ff000 rw-p 01be1000 08:04 532531 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libopenblasp-r0-34a18dc3.3.7.so 7f1a7f7ff000-7f1a7f800000 ---p 00000000 00:00 0 7f1a7f800000-7f1a80000000 rw-p 00000000 00:00 0 7f1a80000000-7f1a80021000 rw-p 00000000 00:00 0 7f1a80021000-7f1a84000000 ---p 00000000 00:00 0 7f1a8401d000-7f1a840dd000 rw-p 00000000 00:00 0 7f1a840fe000-7f1a841be000 rw-p 00000000 00:00 0 7f1a841be000-7f1a841bf000 ---p 00000000 00:00 0 7f1a841bf000-7f1a849bf000 rw-p 00000000 00:00 0 
7f1a849bf000-7f1a849c0000 ---p 00000000 00:00 0 7f1a849c0000-7f1a851c0000 rw-p 00000000 00:00 0 7f1a851c0000-7f1a852b0000 r-xp 00000000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a852b0000-7f1a854af000 ---p 000f0000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854af000-7f1a854b1000 rw-p 000ef000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854b1000-7f1a854b2000 rw-p 00000000 00:00 0 7f1a854b2000-7f1a854ba000 rw-p 000f2000 08:04 532291 /home/rocco/.local/lib/python3.5/site-packages/numpy.libs/libgfortran-ed201abd.so.3.0.0 7f1a854ba000-7f1a8585f000 r-xp 00000000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a8585f000-7f1a85a5e000 ---p 003a5000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85a5e000-7f1a85a7d000 rw-p 003a4000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85a7d000-7f1a85a9e000 rw-p 00000000 00:00 0 7f1a85a9e000-7f1a85aa5000 rw-p 01474000 08:04 530243 /home/rocco/.local/lib/python3.5/site-packages/numpy/core/_multiarray_umath.cpython-35m-x86_64-linux-gnu.so 7f1a85aa5000-7f1a85ae5000 rw-p 00000000 00:00 0 7f1a85ae5000-7f1a85b53000 r-xp 00000000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85b53000-7f1a85d53000 ---p 0006e000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d53000-7f1a85d54000 r--p 0006e000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d54000-7f1a85d55000 rw-p 0006f000 08:04 9441868 /lib/x86_64-linux-gnu/libpcre.so.3.13.2 7f1a85d55000-7f1a85e64000 r-xp 00000000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a85e64000-7f1a86063000 ---p 0010f000 08:04 9437261 
/lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86063000-7f1a86064000 r--p 0010e000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86064000-7f1a86065000 rw-p 0010f000 08:04 9437261 /lib/x86_64-linux-gnu/libglib-2.0.so.0.4800.2 7f1a86065000-7f1a86066000 rw-p 00000000 00:00 0 7f1a86066000-7f1a86067000 r-xp 00000000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86067000-7f1a86266000 ---p 00001000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86266000-7f1a86267000 r--p 00000000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86267000-7f1a86268000 rw-p 00001000 08:04 6819030 /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0.4800.2 7f1a86298000-7f1a870f2000 r-xp 00000000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a870f2000-7f1a872f1000 ---p 00e5a000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a872f1000-7f1a8733f000 rw-p 00e59000 08:04 528776 /home/rocco/.local/lib/python3.5/site-packages/cv2/cv2.cpython-35m-x86_64-linux-gnu.so 7f1a8733f000-7f1a87465000 rw-p 00000000 00:00 0 7f1a87465000-7f1a8773f000 r--p 00000000 08:04 6816127 /usr/lib/locale/locale-archive 7f1a8773f000-7f1a87740000 ---p 00000000 00:00 0 7f1a87740000-7f1a87f40000 rw-p 00000000 00:00 0 7f1a87f40000-7f1a87f41000 ---p 00000000 00:00 0 7f1a87f41000-7f1a8a336000 rw-p 00000000 00:00 0 7f1a8a336000-7f1a8a33b000 r-xp 00000000 08:04 6823807 /usr/lib/x86_64-linux-gnu/libIlmThread-2_2.so.12.0.0 7f1a8a33b000-7f1a8a53b000 ---p 00005000 08:04 6823807 /usr/lib/x86_64-linux-gnu/libIlmThread-2_2.so.12.0.0 ``` **EDIT** After some test, I tried updating python to version 3.6.13 but problem persist, in addition I realized that the problem is not related to cv2.resize function but to opencv in general, whatever opencv function I call from python, it calls my dynamic linked opencv 4.5 libray for deallocation, causing the 
problem. **EDIT** Using OpenCV statically linked in C++, everything works as expected. Is there a reason for this?
2021/02/20
[ "https://Stackoverflow.com/questions/66297277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4267439/" ]
It seems to be a linking problem. Python uses "dlopen" to dynamically load libraries; try changing the [flags](https://manpages.debian.org/buster/manpages-dev/dlopen.3.en.html) that are passed to this function. For example, using RTLD\_DEEPBIND you can specify that the symbols contained in the loaded objects are preferred (instead of the global ones with the same name). Using these two lines: ```py import os print(os.RTLD_DEEPBIND | os.RTLD_NOW) ``` you can obtain the int code for these flags, which is 10. Then you can change the flags used by dlopen by calling `sys.setdlopenflags(10)`: ```cpp void setFlags() { auto flag = PyLong_FromLong(10); auto setdl = PySys_GetObject("setdlopenflags"); auto arg = PyTuple_New(1); PyTuple_SetItem(arg, 0, flag); PyObject_CallObject(setdl, arg); } ``` It must be called before `PyImport_Import`.
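The same flags can also be set from pure Python before importing the extension, which may be easier to experiment with than the embedded C++ route (a sketch; `RTLD_DEEPBIND` is glibc-specific, hence the guard):

```python
import os
import sys

# Set the dlopen flags *before* importing the C extension (e.g. cv2),
# so the dynamic loader prefers the symbols contained in the loaded
# shared object over already-loaded globals with the same name.
# RTLD_DEEPBIND only exists on glibc, hence the getattr fallback.
deepbind = getattr(os, "RTLD_DEEPBIND", 0)
sys.setdlopenflags(os.RTLD_NOW | deepbind)

# import cv2  # would now be loaded with the flags above
```

On Linux with glibc, `os.RTLD_NOW | os.RTLD_DEEPBIND` evaluates to 10, matching the value computed in the answer above.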
I think your problem is that Python uses bindings for the C++ functions, and as you mention, you are using different OpenCV versions for C++ and Python. Refer to <https://docs.opencv.org/3.4/da/d49/tutorial_py_bindings_basics.html> for more information about the Python bindings. One solution would be to use the already-bound functions, or to use the same compiled library version if you create your own bindings.
67,094,562
My dataset is a .txt file separated by colons (:). One of the columns contains a date *AND* time; the date is separated by slashes (/), which is fine. However, the time is separated by colons (:) just like the rest of the data, which throws off my method for cleaning the data. Example of a couple of lines of the dataset: ``` USA:Houston, Texas:05/06/2020 12:00:00 AM:car Japan:Tokyo:05/06/2020 11:05:10 PM:motorcycle USA:Houston, Texas:12/15/2020 12:00:10 PM:car Japan:Kyoto:01/04/1999 05:30:00 PM:bicycle ``` I'd like to clean the dataset before loading it into a dataframe in python using pandas. How do I separate the columns? I can't use ``` df = pandas.read_csv('example.txt', sep=':', header=None) ``` because that will separate the time data into different columns. Any ideas?
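To make the problem concrete with one of the lines above: a plain split on `:` produces six fields instead of four, while limiting the split count keeps the datetime intact (a sketch in plain Python, assuming the last column never contains a colon):

```python
line = "USA:Houston, Texas:05/06/2020 12:00:00 AM:car"

# Naive split: the two colons inside the time create extra columns.
naive = line.split(":")
print(len(naive))  # 6 fields instead of the intended 4

# Split only at the first two colons, then split the last one from the
# right, so everything between stays together as the datetime column.
country, city, rest = line.split(":", 2)
datetime_col, vehicle = rest.rsplit(":", 1)
print([country, city, datetime_col, vehicle])
# ['USA', 'Houston, Texas', '05/06/2020 12:00:00 AM', 'car']
```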
2021/04/14
[ "https://Stackoverflow.com/questions/67094562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12326879/" ]
You cannot assign to arrays. You should use [`strcpy()`](https://man7.org/linux/man-pages/man3/strcpy.3.html) to copy C-style strings. ``` strcpy(dstPath, getenv("APPDATA")); strcat(dstPath, p.filename().string().c_str()); ``` Or the concatenation can be done in one line via [`snprintf()`](https://man7.org/linux/man-pages/man3/snprintf.3.html): ``` snprintf(dstPath, sizeof(dstPath), "%s%s", getenv("APPDATA"), p.filename().string().c_str()); ``` Finally, `TCHAR` and `GetModuleFileName` can refer to the UNICODE version of the API, depending on the compilation options. Using the ANSI version (`char` and `GetModuleFileNameA`) explicitly is safer when working with `std::string` and other APIs that require strings consisting of `char`.
You are trying to use strcat to concatenate two strings and store the result in another one, but it does not work that way. The call `strcat (str1, str2)` adds the content of `str2` at the end of `str1`. It also returns a pointer to `str1` but I don't normally use it. What you are trying to do should be done in three steps: * Make sure that `dstPath` contains an empty string * Concatenate to `dstPath` the value of the environment variable APPDATA * Concatenate to `dstPath` the value of filename Something like this: ``` dstPath[0] = '\0'; strcat(dstPath, getenv("APPDATA")); strcat(dstPath, p.filename().string().c_str()); ``` You should also add checks not to overflow `dstPath`...
67,094,562
My dataset is a .txt file separated by colons (:). One of the columns contains a date *AND* time; the date is separated by slashes (/), which is fine. However, the time is separated by colons (:) just like the rest of the data, which throws off my method for cleaning the data. Example of a couple of lines of the dataset: ``` USA:Houston, Texas:05/06/2020 12:00:00 AM:car Japan:Tokyo:05/06/2020 11:05:10 PM:motorcycle USA:Houston, Texas:12/15/2020 12:00:10 PM:car Japan:Kyoto:01/04/1999 05:30:00 PM:bicycle ``` I'd like to clean the dataset before loading it into a dataframe in python using pandas. How do I separate the columns? I can't use ``` df = pandas.read_csv('example.txt', sep=':', header=None) ``` because that will separate the time data into different columns. Any ideas?
2021/04/14
[ "https://Stackoverflow.com/questions/67094562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12326879/" ]
You cannot assign to arrays. You should use [`strcpy()`](https://man7.org/linux/man-pages/man3/strcpy.3.html) to copy C-style strings. ``` strcpy(dstPath, getenv("APPDATA")); strcat(dstPath, p.filename().string().c_str()); ``` Or the concatenation can be done in one line via [`snprintf()`](https://man7.org/linux/man-pages/man3/snprintf.3.html): ``` snprintf(dstPath, sizeof(dstPath), "%s%s", getenv("APPDATA"), p.filename().string().c_str()); ``` Finally, `TCHAR` and `GetModuleFileName` can refer to the UNICODE version of the API, depending on the compilation options. Using the ANSI version (`char` and `GetModuleFileNameA`) explicitly is safer when working with `std::string` and other APIs that require strings consisting of `char`.
First off, you are mixing `TCHAR` and `char` APIs in a way you should not be. You really should not be using `TCHAR` at all in modern code. But, if you are going to use `TCHAR`, then at least use `TCHAR`- based functions/macros, like `_tprintf()` instead of `printf()`, `_tcscat()` instead of `strcat()`, etc. The compiler error is because you are trying to assign the `char*` pointer returned by `strcat()` to your `dstPath` `TCHAR[]` array. You can't assign a pointer to an array like that. You should `strcpy()` the result of `getenv()` into `dstPath` first, and then `strcat()` your filename onto the end of it, eg: ```cpp #include <string> #include <filesystem> #include <Windows.h> #include <Shlwapi.h> #include <stdio.h> #include <tchar.h> TCHAR* _tgetenv(const TCHAR *varname) { #ifdef _UNICODE return _wgetenv(varname); #else return getenv(varname); #endif } std::basic_string<TCHAR> path2TStr(const std::filesystem::path &p) { #ifdef _UNICODE return p.wstring(); #else return p.string(); #endif } int main() { TCHAR selfPath[MAX_PATH]; TCHAR dstPath[MAX_PATH]; if (GetModuleFileName(NULL, selfPath, MAX_PATH) == 0) // Getting exe File Location { printf("Error : %ul\n", GetLastError()); return 0; } std::filesystem::path p(selfPath); _tcscpy(dstPath, _tgetenv(_T("APPDATA"))); _tcscat(dstPath, path2TStr(p.filename()).c_str()); _tprintf(_T("Src : %s\n"), selfPath); _tprintf(_T("Dst : %s\n"), dstPath); return 0; } ``` However, you really should be using [`SHGetFolderPath(CSIDL_APPDATA)`](https://learn.microsoft.com/en-us/windows/win32/api/shlobj_core/nf-shlobj_core-shgetfolderpathw) or [`SHGetKnownFolderPath(FOLDERID_RoamingAppData)`](https://learn.microsoft.com/en-us/windows/win32/api/shlobj_core/nf-shlobj_core-shgetknownfolderpath) instead of using `getenv("APPDATA")`. And since you are using the `<filesystem>` library anyway, you really should just use `std::filesystem::path` for all of your path handling. 
It has [`operator/=`](https://en.cppreference.com/w/cpp/filesystem/path/append) and [`operator/`](https://en.cppreference.com/w/cpp/filesystem/path/operator_slash) to concatenate path segments, and an [`operator<<`](https://en.cppreference.com/w/cpp/filesystem/path/operator_ltltgtgt) for printing paths to a `std::ostream`, like `std::cout`. Don't use `strcat()` for concatenating path segments; it won't handle directory separators correctly, at least. Try this instead: ```cpp #include <iostream> #include <string> #include <filesystem> #include <stdexcept> #include <Windows.h> #include <Shlobj.h> std::filesystem::path getSelfPath() { WCHAR wPath[MAX_PATH] = {}; if (!GetModuleFileNameW(NULL, wPath, MAX_PATH)) // Getting exe File Location { DWORD err = GetLastError(); throw std::runtime_error("Error : " + std::to_string(err)); } return wPath; } std::filesystem::path getAppDataPath() { WCHAR wPath[MAX_PATH] = {}; HRESULT hRes = SHGetFolderPathW(NULL, CSIDL_APPDATA, NULL, SHGFP_TYPE_CURRENT, wPath); // Getting APPDATA Folder Location if (hRes != S_OK) throw std::runtime_error("Error : " + std::to_string(hRes)); return wPath; } int main() { try { auto selfPath = getSelfPath(); auto dstPath = getAppDataPath() / selfPath.filename(); std::cout << "Src : " << selfPath << "\n"; std::cout << "Dst : " << dstPath << "\n"; } catch (const std::exception &e) { std::cerr << e.what() << "\n"; } return 0; } ```
65,974,443
When I run `graalpython -m ginstall install pandas` or `graalpython -m ginstall install numpy` I get the following error. Please comment on how to fix the error. Thank you. ``` line 54, in __init__ File "number.c", line 284, in array_power File "ufunc_object.c", line 4688, in ufunc_generic_call File "ufunc_object.c", line 3178, in PyUFunc_GenericFunction File "ufunc_type_resolution.c", line 180, in PyUFunc_DefaultTypeResolver File "ufunc_type_resolution.c", line 2028, in linear_search_type_resolver File "ufunc_type_resolution.c", line 1639, in ufunc_loop_matches File "convert_datatype.c", line 904, in PyArray_CanCastArrayTo java.lang.AssertionError ```
2021/01/31
[ "https://Stackoverflow.com/questions/65974443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5329711/" ]
I'm seeing a couple steps missing. 1. You shouldn't be installing chromedriver from brew, you can use the "webdrivers" gem to handle that. The gem will install drivers in the `~/.webdrivers` directory by default. 2. When running integration tests, you'll need to set the proper driver for Capybara. <https://github.com/teamcapybara/capybara#selenium> `Capybara.current_driver = :selenium_chrome`
Very stupid and non-obvious issue - the gems need to be in the `test` group: ``` group :test do gem 'capybara' gem 'webdrivers' end ```
41,175,862
I'm using Connector/Python to insert many rows into a temp table in mysql. The rows are all in a list-of-lists. I perform the insertion like this: ``` cursor = connection.cursor(); batch = [[1, 'foo', 'bar'],[2, 'xyz', 'baz']] cursor.executemany('INSERT INTO temp VALUES(?, ?, ?)', batch) connection.commit() ``` I noticed that (with many more rows, of course) the performance was extremely poor. Using SHOW PROCESSLIST, I noticed that each insert was executing separately. But the documentation <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html> says this should be optimized into 1 insert. What's going on?
2016/12/16
[ "https://Stackoverflow.com/questions/41175862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2627992/" ]
Answering so other people won't go through the debugging I had to! I wrote the query modeling it on other queries in our code that used prepared statements and used '?' to indicate parameters. But you *can't do that* for executemany()! It *must* use '%s'. Changing to the following: ``` cursor.executemany('INSERT INTO temp VALUES(%s,%s,%s)', batch) ``` ...led to a hundredfold speed improvement and the optimized single query could be seen using SHOW PROCESSLIST. Beware the standard '?' syntax!
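For intuition, the multi-row rewrite that the connector performs on a matching `INSERT ... VALUES (%s, ...)` statement looks roughly like this (a simplified sketch, not the connector's real code -- it only quotes ints and plain strings and does no escaping):

```python
def build_multirow_insert(sql_template, rows):
    """Collapse executemany() data into one INSERT statement (sketch).

    sql_template must look like 'INSERT INTO t VALUES(%s,%s,%s)'.
    Each row is rendered into one parenthesised value tuple and the
    tuples are joined with commas, mimicking the optimized single
    query visible in SHOW PROCESSLIST.
    """
    prefix, _, placeholders = sql_template.partition("VALUES")

    def quote(v):
        return str(v) if isinstance(v, int) else "'%s'" % v

    tuples = [placeholders.strip() % tuple(quote(v) for v in row)
              for row in rows]
    return prefix + "VALUES " + ",".join(tuples)

batch = [[1, 'foo', 'bar'], [2, 'xyz', 'baz']]
print(build_multirow_insert('INSERT INTO temp VALUES(%s,%s,%s)', batch))
# INSERT INTO temp VALUES (1,'foo','bar'),(2,'xyz','baz')
```

This rewrite only fires when the statement matches the connector's expected pattern, which is why the '?' placeholders silently fell back to row-by-row execution.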
Try turning this option on: `cursor.fast_executemany = True`. Otherwise executemany acts just like multiple execute calls.
41,175,862
I'm using Connector/Python to insert many rows into a temp table in mysql. The rows are all in a list-of-lists. I perform the insertion like this: ``` cursor = connection.cursor(); batch = [[1, 'foo', 'bar'],[2, 'xyz', 'baz']] cursor.executemany('INSERT INTO temp VALUES(?, ?, ?)', batch) connection.commit() ``` I noticed that (with many more rows, of course) the performance was extremely poor. Using SHOW PROCESSLIST, I noticed that each insert was executing separately. But the documentation <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html> says this should be optimized into 1 insert. What's going on?
2016/12/16
[ "https://Stackoverflow.com/questions/41175862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2627992/" ]
Answering so other people won't go through the debugging I had to! I wrote the query modeling it on other queries in our code that used prepared statements and used '?' to indicate parameters. But you *can't do that* for executemany()! It *must* use '%s'. Changing to the following: ``` cursor.executemany('INSERT INTO temp VALUES(%s,%s,%s)', batch) ``` ...led to a hundredfold speed improvement and the optimized single query could be seen using SHOW PROCESSLIST. Beware the standard '?' syntax!
If you used `IGNORE`, like `cursor.executemany('INSERT IGNORE INTO temp VALUES(%s,%s,%s)', batch)`, `executemany()` acts just like multiple `execute()` calls!
41,175,862
I'm using Connector/Python to insert many rows into a temp table in mysql. The rows are all in a list-of-lists. I perform the insertion like this: ``` cursor = connection.cursor(); batch = [[1, 'foo', 'bar'],[2, 'xyz', 'baz']] cursor.executemany('INSERT INTO temp VALUES(?, ?, ?)', batch) connection.commit() ``` I noticed that (with many more rows, of course) the performance was extremely poor. Using SHOW PROCESSLIST, I noticed that each insert was executing separately. But the documentation <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html> says this should be optimized into 1 insert. What's going on?
2016/12/16
[ "https://Stackoverflow.com/questions/41175862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2627992/" ]
If you used `IGNORE`, like `cursor.executemany('INSERT IGNORE INTO temp VALUES(%s,%s,%s)', batch)`, `executemany()` acts just like multiple `execute()` calls!
Try turning this option on: `cursor.fast_executemany = True`. Otherwise executemany acts just like multiple execute calls.
35,162,989
I am learning Python and need some help with lists and printing of the same. The list would end up looking like this: ``` mylist = ["a", "d", "c", "g", "g", "g", "a", "b", "n", "g", "a", "s", "t", "z", "a"] ``` I've used Counter (I think, lol) ``` class item_print(Counter): def __str__(self): return '\n'.join('{}: {}'.format(k, v) for k, v in self.items()) ``` to make it look like this: ``` a:4 b:1 etc ``` Wondering if there is a way to make it look like this: ``` "a":4 "b":1 etc etc ```
2016/02/02
[ "https://Stackoverflow.com/questions/35162989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5699265/" ]
Three problems. First, this needs to be put in a directive, so you are assured the element(s) exist when the code is run. Next, the event is outside of Angular's context. Whenever code outside of Angular updates scope, you need to tell Angular to update the view. Last, `angular.element` doesn't accept class selectors unless jQuery is included in the page. Using a directive also solves this issue, since the element itself is exposed within the directive as a jqLite or jQuery object. ``` videoElement.on('pause',function() { vm.hideVideoCaption = false; $scope.$apply() }); ```
You are updating a scope variable outside of Angular's context. Angular doesn't run the digest cycle for that kind of update. In this case you are updating scope variables from custom events, which doesn't notify the Angular digest system that something has updated in the UI; as a result, the digest cycle doesn't get fired. You need to kick off the digest cycle manually to update the bindings. You could either run the digest cycle with `$apply` on `$scope` **OR** use the `$timeout` function. ``` videoElement.on('canplay',function() { $timeout(function(){ vm.hidePlayButton = false; }) }); videoElement.on('pause',function() { $timeout(function(){ vm.hideVideoCaption = false; }) }); ```
55,738,296
A call to [`functools.reduce`](https://docs.python.org/3/library/functools.html#functools.reduce) returns only the final result: ``` >>> from functools import reduce >>> a = [1, 2, 3, 4, 5] >>> f = lambda x, y: x + y >>> reduce(f, a) 15 ``` Instead of writing a loop myself, does a function exist which returns the intermediary values, too? ``` [3, 6, 10, 15] ``` (This is only a simple example, I'm not trying to calculate the cumulative sum - the solution should work for arbitrary `a` and `f`.)
2019/04/18
[ "https://Stackoverflow.com/questions/55738296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1621041/" ]
You can use [`itertools.accumulate()`](https://docs.python.org/3/library/itertools.html#itertools.accumulate): ``` >>> from itertools import accumulate >>> list(accumulate([1, 2, 3, 4, 5], lambda x, y: x+y))[1:] [3, 6, 10, 15] ``` Note that the order of parameters is switched relative to `functools.reduce()`. Also, the default `func` (the second parameter) is a sum (like `operator.add`), so in your case, it's technically optional: ``` >>> list(accumulate([1, 2, 3, 4, 5]))[1:] # default func: sum [3, 6, 10, 15] ``` And finally, it's worth noting that `accumulate()` will include the first term in the sequence, hence why the result is indexed from `[1:]` above. --- In your edit, you noted that... > > This is only a simple example, I'm not trying to calculate the cumulative sum - the solution should work for arbitrary a and f. > > > The nice thing about `accumulate()` is that it is flexible about the callable it will take. It only demands a callable that is a function of two parameters. For instance, builtin [`max()`](https://docs.python.org/3.5/library/functions.html#max) satisfies that: ``` >>> list(accumulate([1, 10, 4, 2, 17], max)) [1, 10, 10, 10, 17] ``` This is a longer form of using the unnecessary lambda: ``` >>> # Don't do this >>> list(accumulate([1, 10, 4, 2, 17], lambda x, y: max(x, y))) [1, 10, 10, 10, 17] ```
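If you would rather not slice off the first element, a tiny generator produces exactly the `[f(a0, a1), f(f(a0, a1), a2), ...]` shape for any two-argument `f` (a sketch equivalent to `accumulate()` minus its first term):

```python
def reductions(f, iterable):
    """Yield each intermediate result of reduce(f, iterable),
    skipping the bare first element."""
    it = iter(iterable)
    acc = next(it)          # seed with the first element, like reduce()
    for x in it:
        acc = f(acc, x)
        yield acc

print(list(reductions(lambda x, y: x + y, [1, 2, 3, 4, 5])))  # [3, 6, 10, 15]
print(list(reductions(max, [1, 10, 4, 2, 17])))               # [10, 10, 10, 17]
```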
``` import numpy as np x=[1, 2, 3, 4, 5] y=np.cumsum(x) # gets you the cumulative sum y=list(y[1:]) # remove the first number print(y) #[3, 6, 10, 15] ```
53,129,790
I'm using Django with Postgres. On a page I can show a list of featured items, let's say 10. 1. If the database has more than 10 featured items, I want to get them at random (or better, rotate them). 2. If the number of featured items is lower than 10, get all featured items and top the list up to 10 with non-featured items. Because random ordering is slow in the database, I do the sampling in Python: ``` count = Item.objects.filter(is_featured=True).count() if count >= 10: item = random.sample(list(Item.objects.filter(is_featured=True)), 10) else: item = list(Item.objects.all()[:10]) ``` The code above misses the case where there are fewer than 10 featured items (for example 8, where 2 non-featured items should be added). I can try to add a new query, but I don't know if this is an efficient retrieval, using 4-5 queries for this.
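Independent of the ORM, the selection logic I am after can be sketched with plain lists (hypothetical data; in Django the two pools would come from `filter(is_featured=True)` and its complement):

```python
import random

def pick_items(featured, others, k=10):
    """Pick k items: a random sample of featured ones, topped up
    with non-featured items when fewer than k are featured."""
    if len(featured) >= k:
        return random.sample(featured, k)
    return featured + others[:k - len(featured)]

featured = ['f%d' % i for i in range(8)]    # only 8 featured items
others = ['o%d' % i for i in range(20)]
chosen = pick_items(featured, others)
print(len(chosen))  # 10: all 8 featured plus 2 non-featured
```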
2018/11/03
[ "https://Stackoverflow.com/questions/53129790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3541631/" ]
You may use the `whereHas` method in Laravel: ``` public function search(Request $request) { return Product::with('categories') ->whereHas('categories', function ($query) use ($request){ $query->where('category_id', $request->category_id); })->get(); } ``` Here are the [docs](https://laravel.com/docs/5.7/eloquent-relationships)
You can search for it in the following way: ``` public function search(Request $request) { return Product::with('categories') ->whereHas('categories', function ($q) use ($request) { $q->where('id', $request->category_id); }); } ```
37,530,804
I'm new to MPI, but I have been trying to use it on a cluster with OpenMPI. I'm having the following problem: ``` $ python -c "from mpi4py import MPI" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: libmpi.so.1: cannot open shared object file: No such file or directory ``` I've done some research and tried a few things including: ``` module load mpi/openmpi-x86_64 ``` which doesn't seem to change anything. My LD\_LIBRARY\_PATH seems to be set correctly, but the desired "libmpi.so.1" does not exist. Instead, there is "libmpi.so.12": ``` $ echo $LD_LIBRARY_PATH /usr/lib64/openmpi/lib:/usr/local/matio/lib:/usr/lib64/mysql:/usr/local/knitro/lib:/usr/local/gurobi/linux64/lib:/usr/local/lib: $ls /usr/lib64/openmpi/lib ... libmpi.so libmpi.so.12 libmpi.so.12.0.0 libmpi_usempi.so ... ``` I can't uninstall/re-install mpi4py because it's outside my virtual environment / I don't have the permissions to uninstall it on the general cluster. I've seen this: [Error because file libmpi.so.1 missing](https://stackoverflow.com/questions/36860315/error-because-file-libmpi-so-1-missing), but not sure how I can create a symbolic link / don't think I have permission to modify the folder. I'm somewhat out of ideas: Not sure if it's possible to have a separate install of mpi4py on my virtualenv, or whether I can specify the correct file to mpi4py temporarily? Any suggestions would be greatly appreciated! Update: I ran `find /usr -name libmpi.so.1 2>/dev/null` as suggested, which returned `usr/lib64/compat-openmpi16/lib/libmpi.so.1`. Using `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH"/usr/lib64/compat-openmpi16/lib/"`, `python -c "from mpi4py import MPI"` runs without any problems. 
However, if I try `mpiexec -np 2 python -c 'from mpi4py import MPI'`, I get the following error: ``` [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file ess_env_module.c at line 367 [[INVALID],INVALID]-[[28753,0],0] mca_oob_tcp_peer_try_connect: connect to 255.255.255.255:37376 failed: Network is unreachable (101) ```
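As an aside, the manual check of `LD_LIBRARY_PATH` above can be automated with a small helper that reports which directories (if any) actually contain a given library name (a sketch used purely for debugging):

```python
import os

def find_on_ld_path(libname, ld_path=None):
    """Return the directories from a colon-separated search path
    (defaulting to $LD_LIBRARY_PATH) that contain libname."""
    if ld_path is None:
        ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    return [d for d in ld_path.split(":")
            if d and os.path.exists(os.path.join(d, libname))]

# An empty result for "libmpi.so.1" explains the original ImportError.
print(find_on_ld_path("libmpi.so.1"))
```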
2016/05/30
[ "https://Stackoverflow.com/questions/37530804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6332838/" ]
Even adding to LD\_LIBRARY\_PATH did not work for me. I had to uninstall mpi4py, then install it manually with my mpicc path according to the instructions here - <https://mpi4py.readthedocs.io/en/stable/install.html> (using distutils section)
I solved this problem by: ```bash pip uninstall mpi4py conda install mpi4py ```
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
The problem is not a driver issue; you can see the error message is `DatabaseError: Login failed for user`, which means this problem occurs when the user tries to log in with credentials that cannot be validated. I suspect you are logging in with your Windows Authentication; if so, use `Trusted_Connection=yes` instead: ``` connection = pyodbc.connect('DRIVER={SQL Server};Server=aaa.database.windows.net;DATABASE=DB_NAME;Trusted_Connection=yes') ``` For more details, please refer to [my old answer](https://stackoverflow.com/questions/43815104/python-pyodbc-connection-error/43820031#43820031) about the difference between the `SQL Server Authentication` modes.
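The two authentication modes differ only in which key/value pairs end up in the ODBC connection string; assembling it from parts makes that explicit (a sketch -- the helper name is made up and no connection is opened here):

```python
def build_conn_str(server, database, user=None, password=None,
                   driver="SQL Server"):
    """Build a pyodbc connection string.  With user/password it
    selects SQL Server Authentication; without them it falls back
    to Windows (trusted) authentication."""
    parts = {"DRIVER": "{%s}" % driver,
             "Server": server,
             "DATABASE": database}
    if user is not None:
        parts["UID"] = user
        parts["PWD"] = password
    else:
        parts["Trusted_Connection"] = "yes"
    return ";".join("%s=%s" % kv for kv in parts.items())

print(build_conn_str("aaa.database.windows.net", "DB_NAME",
                     "User_name", "password"))
```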
I think the problem is because of the driver definition in your connection string. You may try the below. ``` connection = pyodbc.connect('DRIVER={SQL Server Native Client 10.0}; Server=aaa.database.windows.net; DATABASE=DB_NAME; UID=User_name; PWD=password') ```
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
The problem is not a driver issue; you can see the error message is `DatabaseError: Login failed for user`, which means this problem occurs when the user tries to log in with credentials that cannot be validated. I suspect you are logging in with your Windows Authentication; if so, use `Trusted_Connection=yes` instead: ``` connection = pyodbc.connect('DRIVER={SQL Server};Server=aaa.database.windows.net;DATABASE=DB_NAME;Trusted_Connection=yes') ``` For more details, please refer to [my old answer](https://stackoverflow.com/questions/43815104/python-pyodbc-connection-error/43820031#43820031) about the difference between the `SQL Server Authentication` modes.
I applied your connection string and updated it with my server connection details, and it worked fine. Are you sure you are passing the correct user name and password? Login failed implies the connection was established successfully, but authentication didn't pass.
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
I see you have no `port` defined in your script. The general set of parameters for connecting to a server using pyodbc is `${DBName} ${DBUser} ${DBPass} ${DBHost} ${DBPort}`, where `DBName` is your database name, `DBUser` is the username used to connect to it, `DBPass` is the password, `DBHost` is the URL of your database, and `DBPort` is the port you use to connect to your DB. I use MS SQL, so my port is `1433`; yours might be different. I had this issue just today, and that fixed it.
I think the problem is because of the driver definition in your connection string. You may try the below. ``` connection = pyodbc.connect('DRIVER={SQL Server Native Client 10.0}; Server=aaa.database.windows.net; DATABASE=DB_NAME; UID=User_name; PWD=password') ```
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
I see you have no `port` defined in your script. The general set of parameters for connecting to a server using pyodbc is `${DBName} ${DBUser} ${DBPass} ${DBHost} ${DBPort}`, where `DBName` is your database name, `DBUser` is the username used to connect to it, `DBPass` is the password, `DBHost` is the URL of your database, and `DBPort` is the port you use to connect to your DB. I use MS SQL, so my port is `1433`; yours might be different. I had this issue just today, and that fixed it.
I applied your connection string and updated it with my server connection details, and it worked fine. Are you sure you are passing the correct user name and password? Login failed implies the connection was established successfully, but authentication didn't pass.
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
You can add `Trusted_Connection=NO;` in your connection string after the password
I think the problem is because of the driver definition in your connection string. You may try the below. ``` connection = pyodbc.connect('DRIVER={SQL Server Native Client 10.0}; Server=aaa.database.windows.net; DATABASE=DB_NAME; UID=User_name; PWD=password') ```
44,132,619
I used the `pyodbc` and `pypyodbc` Python packages to connect to SQL Server. Drivers used: any one of these `['SQL Server', 'SQL Server Native Client 10.0', 'ODBC Driver 11 for SQL Server', 'ODBC Driver 13 for SQL Server']`. Connection string: ``` connection = pyodbc.connect('DRIVER={SQL Server};' 'Server=aaa.database.windows.net;' 'DATABASE=DB_NAME;' 'UID=User_name;' 'PWD=password') ``` Now I am getting an error message like ``` DatabaseError: (u'28000', u"[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ``` But I can connect to the server through SQL Server Management Studio. It's on SQL Server Authentication, not Windows Authentication. Is this a Python package/driver issue or a DB issue? How do I solve it?
2017/05/23
[ "https://Stackoverflow.com/questions/44132619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5602871/" ]
You can add `Trusted_Connection=NO;` in your connection string after the password
I applied your connection string and updated it with my server connection details, and it worked fine. Are you sure you are passing the correct user name and password? Login failed implies the connection was established successfully, but authentication didn't pass.
12,700,194
I've installed git-core (+svn) on my Mac from MacPorts. This has given me: ``` git-core @1.7.12.2_0+credential_osxkeychain+doc+pcre+python27+svn subversion @1.7.6_2 ``` I'm attempting to call something like the following: ``` git svn clone http://my.svn.com/svn/area/subarea/project -s ``` The output looks something like this: ``` Initialized empty Git repository in /Users/bitwise/work/svn/project/.git/ Using higher level of URL: http://my.svn.com/svn/area/subarea/project => http://my.svn.com/svn/area A folder/file.txt A folder/file2.txt [... some number of files from svn ... ] A folder44/file0.txt Temp file with moniker 'svn_delta' already in use at /opt/local/lib/perl5/site_perl/5.12.4/Git.pm line 1024. ``` I've done the usual searches but most of the threads seem to trail off without proposing a clear fix.
2012/10/03
[ "https://Stackoverflow.com/questions/12700194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637252/" ]
Add this setting to your `~/.subversion/servers` file: ``` [global] http-bulk-updates=on ``` I had this issue on Linux, and saw the above workaround on [this thread](http://mail-archives.apache.org/mod_mbox/subversion-users/201307.mbox/%3C51D79BDD.5020106@wandisco.com%3E). I **think** I ran into this because I forced [Alien SVN](http://search.cpan.org/~mschwern/Alien-SVN-v1.7.3.1/src/subversion/subversion/bindings/swig/perl/native/Core.pm) to build with subversion 1.8 which uses the serf library now instead of neon for https, and apparently git-svn doesn't play nicely with serf.
<http://bugs.debian.org/534763> suggests it is a bug in the libsvn-perl package; try upgrading that.
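The `http-bulk-updates=on` workaround is normally applied by hand-editing `~/.subversion/servers`. As a sketch, the same edit can be scripted; the svn servers file is INI-like, so this assumes round-tripping it through `configparser` is acceptable, and a throwaway file is used so nothing real is touched:

```python
import configparser
import os
import tempfile

# Path to a scratch copy of the servers file (the real one lives at
# ~/.subversion/servers).
servers_path = os.path.join(tempfile.mkdtemp(), "servers")

config = configparser.ConfigParser()
config.read(servers_path)  # harmless if the file does not exist yet
if not config.has_section("global"):
    config.add_section("global")
config.set("global", "http-bulk-updates", "on")

with open(servers_path, "w") as fh:
    config.write(fh)

with open(servers_path) as fh:
    print(fh.read())
```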
12,700,194
I've installed git-core (+svn) on my Mac from MacPorts. This has given me: ``` git-core @1.7.12.2_0+credential_osxkeychain+doc+pcre+python27+svn subversion @1.7.6_2 ``` I'm attempting to call something like the following: ``` git svn clone http://my.svn.com/svn/area/subarea/project -s ``` The output looks something like this: ``` Initialized empty Git repository in /Users/bitwise/work/svn/project/.git/ Using higher level of URL: http://my.svn.com/svn/area/subarea/project => http://my.svn.com/svn/area A folder/file.txt A folder/file2.txt [... some number of files from svn ... ] A folder44/file0.txt Temp file with moniker 'svn_delta' already in use at /opt/local/lib/perl5/site_perl/5.12.4/Git.pm line 1024. ``` I've done the usual searches but most of the threads seem to trail off without proposing a clear fix.
2012/10/03
[ "https://Stackoverflow.com/questions/12700194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637252/" ]
Note that [git 1.8.5rc3 (released November 20th, 2013)](https://github.com/git/git/blob/5fd09df3937f54c5cfda4f1087f5d99433cce527/Documentation/RelNotes/1.8.5.txt#L56-L61), [announced here](http://article.gmane.org/gmane.comp.version-control.git/238099), now includes: * "`git-svn`" has been taught to use the [**`serf`** library](https://code.google.com/p/serf/), which is the [only option SVN 1.8.0 offers us when talking the HTTP protocol](http://subversion.apache.org/docs/release-notes/1.8.html#neon-deleted) (no more [neon](http://webdav.org/neon/)). * "`git-svn`" talking over an `https://` connection using the **`serf`** library dumped core due to a bug in the **`serf`** library that SVN uses. Work around it on our side, even though the SVN side is being fixed. So a general upgrade to latest Git (1.8.5 should be out next week) and [latest SVN 1.8](http://samoldak.com/updating-to-svn-1-8-for-mac-os-x-10-8/) can help make things run smoothly.
<http://bugs.debian.org/534763> suggests it is a bug in the libsvn-perl package; try upgrading that.
12,700,194
I've installed git-core (+svn) on my Mac from MacPorts. This has given me: ``` git-core @1.7.12.2_0+credential_osxkeychain+doc+pcre+python27+svn subversion @1.7.6_2 ``` I'm attempting to call something like the following: ``` git svn clone http://my.svn.com/svn/area/subarea/project -s ``` The output looks something like this: ``` Initialized empty Git repository in /Users/bitwise/work/svn/project/.git/ Using higher level of URL: http://my.svn.com/svn/area/subarea/project => http://my.svn.com/svn/area A folder/file.txt A folder/file2.txt [... some number of files from svn ... ] A folder44/file0.txt Temp file with moniker 'svn_delta' already in use at /opt/local/lib/perl5/site_perl/5.12.4/Git.pm line 1024. ``` I've done the usual searches but most of the threads seem to trail off without proposing a clear fix.
2012/10/03
[ "https://Stackoverflow.com/questions/12700194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637252/" ]
Add this setting to your `~/.subversion/servers` file: ``` [global] http-bulk-updates=on ``` I had this issue on Linux, and saw the above workaround on [this thread](http://mail-archives.apache.org/mod_mbox/subversion-users/201307.mbox/%3C51D79BDD.5020106@wandisco.com%3E). I **think** I ran into this because I forced [Alien SVN](http://search.cpan.org/~mschwern/Alien-SVN-v1.7.3.1/src/subversion/subversion/bindings/swig/perl/native/Core.pm) to build with subversion 1.8 which uses the serf library now instead of neon for https, and apparently git-svn doesn't play nicely with serf.
Note that [git 1.8.5rc3 (released November 20th, 2013)](https://github.com/git/git/blob/5fd09df3937f54c5cfda4f1087f5d99433cce527/Documentation/RelNotes/1.8.5.txt#L56-L61), [announced here](http://article.gmane.org/gmane.comp.version-control.git/238099), now includes: * "`git-svn`" has been taught to use the [**`serf`** library](https://code.google.com/p/serf/), which is the [only option SVN 1.8.0 offers us when talking the HTTP protocol](http://subversion.apache.org/docs/release-notes/1.8.html#neon-deleted) (no more [neon](http://webdav.org/neon/)). * "`git-svn`" talking over an `https://` connection using the **`serf`** library dumped core due to a bug in the **`serf`** library that SVN uses. Work around it on our side, even though the SVN side is being fixed. So a general upgrade to latest Git (1.8.5 should be out next week) and [latest SVN 1.8](http://samoldak.com/updating-to-svn-1-8-for-mac-os-x-10-8/) can help make things run smoothly.
73,629,234
The simplest way to explain is that I have this code, ``` Str = 'Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]' from re import findall findall ('[^\]\]]+\]\]?', Str) ``` What I get is, ``` ['Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5]', ', [-3, 5.5, 0, 9.5]]', 'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5]', ', [-3, 9.5, 0, 13.5]]', 'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5]', ', [-3, 9.5, 0, 13.5]]'] ``` I assume it's matching only a single ']' instead of ']]' when splitting; I want the result below, ``` ['Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]', 'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]', 'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]'] ``` I have gone through the documentation but couldn't work out how to achieve this or what modification should be made to the above `findall` call; a similar technique was adopted in one of the answers in [In Python, how do I split a string and keep the separators?](https://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators)
2022/09/07
[ "https://Stackoverflow.com/questions/73629234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16966180/" ]
You can simplify and start to make the code reusable by putting it into a function. Also, it's a good idea to make it pythonic and remove all the `;` that will only serve to confuse the reader as to what language they're looking at. ``` def login(name, age): password = input('Enter your desired password: ') print('Registration successful') tries = 5 for i in range(tries): print(' Login ') login = input('Enter password: ') if login == password: return True else: print(' Wrong password') print(f'{tries-1-i} trial(s) left ') return False name = 'Bob' age = 12 if login(name, age): print(' Welcome ', 'Account Information: ', f'Name: {name}', f'Age: {age}', sep='\n') else: print('Account locked') ``` Example Output: ``` # Success 1st try: Enter your desired password: password Registration successful Login Enter password: password Welcome Account Information: Name: Bob Age: 12 # Success 3rd try: Enter your desired password: password Registration successful Login Enter password: pass Wrong password 4 trial(s) left Login Enter password: pass Wrong password 3 trial(s) left Login Enter password: password Welcome Account Information: Name: Bob Age: 12 # Fail: Enter your desired password: password Registration successful Login Enter password: pass Wrong password 4 trial(s) left Login Enter password: pass Wrong password 3 trial(s) left Login Enter password: pass Wrong password 2 trial(s) left Login Enter password: word Wrong password 1 trial(s) left Login Enter password: pasword Wrong password 0 trial(s) left Account locked ```
I got this working. ```py def login(): counter = 1; x = 5; for i in range(5): print(' Login '); login = input('Enter password: '); if login == password: counter -= 1 print(' Welcome '); print('Account Information: '); print('Name: ',name); print('Age: ',age) break; else: x=x-1; if x==0: print('Account locked'); start(); else: print(' Wrong password'); print(x,' trial(s) left '); def start(): for i in range(1): global password password = input('Enter your desired password: '); print('Registration successful'); login(); start(); ``` [![enter image description here](https://i.stack.imgur.com/Mwyy4.png)](https://i.stack.imgur.com/Mwyy4.png)
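For the splitting problem in the question itself: a character class such as `[^\]\]]` still matches one character at a time (it is the same as `[^\]]`), so it cannot express the two-character separator `]]`. Since `]]` only ever occurs at the end of each record here, one possible sketch (not the only pattern) is a non-greedy match up to `]]`:

```python
import re

# Sample data from the question, written as one flat string.
Str = ('Floor_Live_Patterened_SpanPairs_1: [[-3, 0, 0, 5.5], [-3, 5.5, 0, 9.5]]'
      'Floor_Live_Patterened_SpanPairs_2: [[-3, 0, 0, 5.5], [-3, 9.5, 0, 13.5]]'
      'Floor_Live_Patterened_SpanPairs_3: [[-3, 5.5, 0, 9.5], [-3, 9.5, 0, 13.5]]')

# .+? expands as little as possible, so each match stops at the first "]]",
# which is the end of one record; the separator stays attached to the match.
chunks = re.findall(r'.+?\]\]', Str)
for chunk in chunks:
    print(chunk)
```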
53,810,242
I have two functions in Python ``` class JENKINS_JOB_INFO(): def __init__(self): parser = argparse.ArgumentParser(description='xxxx. e.g., script.py -j jenkins_url -u user -a api') parser.add_argument('-j', '--address', dest='address', default="", required=True, action="store") parser.add_argument('-u', '--user', dest='user', default="xxxx", required=True, action="store") parser.add_argument('-t', '--api_token', dest='api_token', required=True, action="store") parsers = parser.parse_args() self.base_url = parsers.address.strip() self.user = parsers.user.strip() self.api_token = parsers.api_token.strip() def main(self): logger.info("call the function") self.get_jobs_state() def get_jobs_state(self): get_jenkins_json_data(params) def get_jenkins_json_data(self, params, base_url): url = urljoin(self.base_url, str(params.params)) r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False) ``` I have a parameter `params` defined in my function `get_jobs_state` and I want to pass this param to my other function `get_jenkins_json_data` so that the complete `url` inside `get_jenkins_json_data` joins to `https:<jenkins>/api/json?pretty=true&tree=jobs[name,color]` But when I run my code the `url` is not correct and the value of `params` inside the function is `<__main__.CLASS_NAME instance at 0x7f9adf4c0ab8>` Here `base_url` is a parameter that I am passing to my script. How can I get rid of this error?
2018/12/17
[ "https://Stackoverflow.com/questions/53810242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5871199/" ]
Just write `params.params` instead of `params`. The way you do it is extremely confusing, because in `get_jenkins_json_data`, `self` will be the `params` and `params` will be the `base_url`. I would advise you not to do that in the future. If you want to send some parameters to the function, send the minimal amount of information that the function needs. Here, for example, you could have sent `self.params` instead of the whole `self`. This way you wouldn't encounter this error and the code would be much more readable. I suggest you rewrite the function this way.
So your solution is a bit confusing. You shouldn't pass `self` to the `get_jenkins_json_data` method. Python will do that for you automatically. You should check out the data model for how [instance methods](https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy) work. I would refactor your code like this: ``` def get_jobs_state(self): params = "api/json?pretty=true&tree=jobs[name,color]" self.get_jenkins_json_data(params) def get_jenkins_json_data(self, params): url = urljoin(self.base_url, params) r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False) ... ```
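The refactor described above can be checked without a Jenkins server by leaving out the network call and looking only at the URL that `urljoin` produces. A minimal sketch (the base URL below is a placeholder; the real code would call `requests.get` where indicated):

```python
from urllib.parse import urljoin

class JenkinsJobInfo:
    def __init__(self, base_url, user, api_token):
        self.base_url = base_url
        self.user = user
        self.api_token = api_token

    def get_jobs_state(self):
        # Pass only the query string; Python supplies `self` automatically.
        params = "api/json?pretty=true&tree=jobs[name,color]"
        return self.build_url(params)

    def build_url(self, params):
        # The real code would do:
        #   requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token))
        return urljoin(self.base_url, params)

client = JenkinsJobInfo("https://jenkins.example.com", "user", "token")
print(client.get_jobs_state())
```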
53,810,242
I have two functions in Python ``` class JENKINS_JOB_INFO(): def __init__(self): parser = argparse.ArgumentParser(description='xxxx. e.g., script.py -j jenkins_url -u user -a api') parser.add_argument('-j', '--address', dest='address', default="", required=True, action="store") parser.add_argument('-u', '--user', dest='user', default="xxxx", required=True, action="store") parser.add_argument('-t', '--api_token', dest='api_token', required=True, action="store") parsers = parser.parse_args() self.base_url = parsers.address.strip() self.user = parsers.user.strip() self.api_token = parsers.api_token.strip() def main(self): logger.info("call the function") self.get_jobs_state() def get_jobs_state(self): get_jenkins_json_data(params) def get_jenkins_json_data(self, params, base_url): url = urljoin(self.base_url, str(params.params)) r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False) ``` I have a parameter `params` defined in my function `get_jobs_state` and I want to pass this param to my other function `get_jenkins_json_data` so that the complete `url` inside `get_jenkins_json_data` joins to `https:<jenkins>/api/json?pretty=true&tree=jobs[name,color]` But when I run my code the `url` is not correct and the value of `params` inside the function is `<__main__.CLASS_NAME instance at 0x7f9adf4c0ab8>` Here `base_url` is a parameter that I am passing to my script. How can I get rid of this error?
2018/12/17
[ "https://Stackoverflow.com/questions/53810242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5871199/" ]
In your function `get_jobs_state` you are passing `self` as the `params` argument to `get_jenkins_json_data`, and `self` in this case is an instance of a class. Try something like this: ``` class Jenkins: def __init__(self, domain): self.user = "user" self.api_token = "api_token" self.base_domain = domain def get_jobs_state(self): query = "api/json?pretty=true&tree=jobs[name,color]" return self.get_jenkins_json_data(query) def get_jenkins_json_data(self, query): url = urljoin(self.base_domain, str(query)) r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False) return r ```
So your solution is a bit confusing. You shouldn't pass `self` to the `get_jenkins_json_data` method. Python will do that for you automatically. You should check out the data model for how [instance methods](https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy) work. I would refactor your code like this: ``` def get_jobs_state(self): params = "api/json?pretty=true&tree=jobs[name,color]" self.get_jenkins_json_data(params) def get_jenkins_json_data(self, params): url = urljoin(self.base_url, params) r = requests.get(url, auth=HTTPBasicAuth(self.user, self.api_token), verify=False) ... ```
66,038,159
To improve performance on a project of mine, I've coded a function using tf.function to replace a function which does not use tf. The result is that the plain Python code runs much faster (100x) than the tf.function when the GPU is enabled. When running on CPU, TF is still slower, but only 10x slower. Am I missing something? ``` @tf.function def test1(cond): xp = tf.constant(0) yp = tf.constant(0) stride = tf.constant(10) patches = tf.TensorArray( tf.int32, size=tf.cast((cond / stride + 1) * (cond / stride + 1), dtype=tf.int32), dynamic_size=False, clear_after_read=False) i = tf.constant(0) while tf.less_equal(yp, cond): while tf.less_equal(xp, cond): xp = tf.add(xp, stride) patches = patches.write(i, xp) i += 1 xp = tf.constant(0) yp = tf.add(yp, stride) return patches.stack() def test2(cond): xp = 0 yp = 0 stride = 10 i = 0 patches = [] while yp <= cond: while xp <= cond: xp += stride patches.append(xp) xp = 0 yp += stride return patches ``` This is especially noticeable when cond is big (like 5000 or greater). **UPDATE:** I have found [this](https://github.com/tensorflow/tensorflow/issues/46518) and [this](https://github.com/tensorflow/tensorflow/issues/19733). As I was expecting, the performance of TensorArray is poor, and the solution in my case was to replace TensorArray and the loops with other tensor calculations (in this case, I used tf.image.extract_patches and others). This way, it achieved performance 3x faster than the plain Python code.
2021/02/04
[ "https://Stackoverflow.com/questions/66038159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15140159/" ]
When you use some tf functions with GPU enabled, it does a callback and transfers data to the GPU. In some cases, this overhead is not worth it. When running on the CPU, this overhead decreases, but it's still slower than pure python code. Tensorflow is faster when you do heavy calculations, and that's what Tensorflow was made for. Even numpy can be slower than pure python code for light calculations.
The slow part (while loop) is still in python and simple functions like this are pretty fast. The linear overhead of switching from python to tf each time is certainly bigger than anything you could ever gain on such a small function. For more complex operations, this might be very different. In this case tf is simply overkill.
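The update to the question says the real fix was `tf.image.extract_patches` inside TF. The underlying idea — compute all patch coordinates in bulk instead of looping — can be illustrated outside TensorFlow with numpy; this sketch reproduces exactly what `test2` emits (a grid of x-offsets, one row per y step), without any per-element loop:

```python
import numpy as np

def patches_loop(cond, stride=10):
    # The original test2 from the question, for comparison.
    xp, yp, out = 0, 0, []
    while yp <= cond:
        while xp <= cond:
            xp += stride
            out.append(xp)
        xp = 0
        yp += stride
    return out

def patches_vec(cond, stride=10):
    # Each loop runs floor(cond/stride) + 1 times, and the inner loop
    # appends stride, 2*stride, ..., n*stride on every outer iteration.
    n = cond // stride + 1
    row = np.arange(1, n + 1) * stride
    return np.tile(row, n)

print(patches_vec(25))
```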
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
You can avoid the problem by switching to a different backend: ``` import matplotlib matplotlib.use('TkAgg') ```
Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
Just set ``` blit=False ``` when [animation.FuncAnimation()](http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation) is called and it will work. For instance ([from double\_pendulum\_animated](http://matplotlib.org/examples/animation/double_pendulum_animated.html)): ``` ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)), interval=25, blit=False, init_func=init) ```
Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
As noted at <https://mail.python.org/pipermail/pythonmac-sig/2012-September/023664.html> use: ``` import matplotlib matplotlib.use('TkAgg') #just *before* import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation ``` This has worked for me with Tkinter installed using the ActiveState Tkinter installation on OSX 10.11.6, Python 2.7.1. The basic animation example is still a little noisy until `blit=False` is set in the `line_ani` code here: ``` line_ani = animation.FuncAnimation(fig1, update_line, 25, fargs=(data, l), interval=50, blit=False) ```
Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
Just set ``` blit=False ``` when [animation.FuncAnimation()](http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation) is called and it will work. For instance ([from double\_pendulum\_animated](http://matplotlib.org/examples/animation/double_pendulum_animated.html)): ``` ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)), interval=25, blit=False, init_func=init) ```
You can avoid the problem by switching to a different backend: ``` import matplotlib matplotlib.use('TkAgg') ```
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
You can avoid the problem by switching to a different backend: ``` import matplotlib matplotlib.use('TkAgg') ```
As noted at <https://mail.python.org/pipermail/pythonmac-sig/2012-September/023664.html> use: ``` import matplotlib matplotlib.use('TkAgg') #just *before* import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation ``` This has worked for me with Tkinter installed using the ActiveState Tkinter installation on OSX 10.11.6, Python 2.7.1. The basic animation example is still a little noisy until `blit=False` is set in the `line_ani` code here: ``` line_ani = animation.FuncAnimation(fig1, update_line, 25, fargs=(data, l), interval=50, blit=False) ```
13,216,520
Walking through matplotlib's animation example on my Mac OSX machine - <http://matplotlib.org/examples/animation/simple_anim.html> - I am getting this error:- ``` File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/animation.py", line 248, in _blit_clear a.figure.canvas.restore_region(bg_cache[a]) AttributeError: 'FigureCanvasMac' object has no attribute 'restore_region' ``` Does anyone who has encountered this before know how to resolve this issue? Looks like it's a known (and unresolved at this time of writing) issue - <https://github.com/matplotlib/matplotlib/issues/531>
2012/11/04
[ "https://Stackoverflow.com/questions/13216520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482506/" ]
Just set ``` blit=False ``` when [animation.FuncAnimation()](http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation) is called and it will work. For instance ([from double\_pendulum\_animated](http://matplotlib.org/examples/animation/double_pendulum_animated.html)): ``` ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)), interval=25, blit=False, init_func=init) ```
As noted at <https://mail.python.org/pipermail/pythonmac-sig/2012-September/023664.html> use: ``` import matplotlib matplotlib.use('TkAgg') #just *before* import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation ``` This has worked for me with Tkinter installed using the ActiveState Tkinter installation on OSX 10.11.6, Python 2.7.1. The basic animation example is still a little noisy until `blit=False` is set in the `line_ani` code here: ``` line_ani = animation.FuncAnimation(fig1, update_line, 25, fargs=(data, l), interval=50, blit=False) ```
34,347,401
I have a Pelican blog where I write the posts in Markdown. I want each article to link to the previous and next article in the sequence, and to one random article. All the articles are generated with a python script, resulting in a folder of markdown files called /content/. Here the files are like: * article-slug1.md * another-article-slug.md * more-articles-slug.md * [...] Is there a token I can add to the markdown to randomly interlink/link to next/previous? If not, how can I set this up in python? Thanks in advance
2015/12/18
[ "https://Stackoverflow.com/questions/34347401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108771/" ]
There is the [Pelican Neighbours](https://github.com/getpelican/pelican-plugins/tree/master/neighbors) plug-in that might do what you want. You'll have to activate the plug-in and update your template to get it to work. * Adding plug-ins: [Pelican-Plugins' Readme](https://github.com/getpelican/pelican-plugins/blob/master/Readme.rst) * Updating your template: [Plug-in's Readme](https://github.com/getpelican/pelican-plugins/blob/master/neighbors/Readme.rst)
I am not sure about the random article, but for next and previous, there is a Pelican plugin called [neighbor articles](https://github.com/getpelican/pelican-plugins/tree/master/neighbors).
34,347,401
I have a Pelican blog where I write the posts in Markdown. I want each article to link to the previous and next article in the sequence, and to one random article. All the articles are generated with a python script, resulting in a folder of markdown files called /content/. Here the files are like: * article-slug1.md * another-article-slug.md * more-articles-slug.md * [...] Is there a token I can add to the markdown to randomly interlink/link to next/previous? If not, how can I set this up in python? Thanks in advance
2015/12/18
[ "https://Stackoverflow.com/questions/34347401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108771/" ]
There is the [Pelican Neighbours](https://github.com/getpelican/pelican-plugins/tree/master/neighbors) plug-in that might do what you want. You'll have to activate the plug-in and update your template to get it to work. * Adding plug-ins: [Pelican-Plugins' Readme](https://github.com/getpelican/pelican-plugins/blob/master/Readme.rst) * Updating your template: [Plug-in's Readme](https://github.com/getpelican/pelican-plugins/blob/master/neighbors/Readme.rst)
Random article: <https://github.com/getpelican/pelican-plugins/tree/master/random_article> More Pelican plugins: <https://github.com/getpelican/pelican-plugins>
34,347,401
I have a Pelican blog where I write the posts in Markdown. I want each article to link to the previous and next article in the sequence, and to one random article. All the articles are generated with a python script, resulting in a folder of markdown files called /content/. Here the files are like: * article-slug1.md * another-article-slug.md * more-articles-slug.md * [...] Is there a token I can add to the markdown to randomly interlink/link to next/previous? If not, how can I set this up in python? Thanks in advance
2015/12/18
[ "https://Stackoverflow.com/questions/34347401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108771/" ]
There is the [Pelican Neighbours](https://github.com/getpelican/pelican-plugins/tree/master/neighbors) plug-in that might do what you want. You'll have to activate the plug-in and update your template to get it to work. * Adding plug-ins: [Pelican-Plugins' Readme](https://github.com/getpelican/pelican-plugins/blob/master/Readme.rst) * Updating your template: [Plug-in's Readme](https://github.com/getpelican/pelican-plugins/blob/master/neighbors/Readme.rst)
If you are generating all your posts programmatically, is it safe to assume that your generation script knows what the next and previous articles are? If this is the case, then you can write the links directly in your generated markdown. E.g. at the end of `another-article-slug.md` add the lines: ``` <!-- end of article --> [Previous Post]((unknown)article-slug1.md) -- [Next Post]((unknown)more-articles-slug.md) ``` This will result in two links, one to the previous article and one to the next article, at the end of your post.
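If the generation script already has the ordered list of slugs, it can emit the prev/next/random footer for each article itself. A sketch of that idea (the `{filename}` notation is Pelican's internal-link syntax; the link text and the fixed random seed are arbitrary choices here):

```python
import random

def nav_footer(slugs, seed=0):
    """Return a markdown nav footer per slug: prev/next plus one random link."""
    rng = random.Random(seed)  # seeded so the demo is reproducible
    footers = {}
    for i, slug in enumerate(slugs):
        parts = []
        if i > 0:
            parts.append(f"[Previous Post]({{filename}}/{slugs[i - 1]}.md)")
        if i < len(slugs) - 1:
            parts.append(f"[Next Post]({{filename}}/{slugs[i + 1]}.md)")
        # pick a random article other than the current one
        other = [s for s in slugs if s != slug]
        parts.append(f"[Random Post]({{filename}}/{rng.choice(other)}.md)")
        footers[slug] = " -- ".join(parts)
    return footers

footers = nav_footer(["article-slug1", "another-article-slug", "more-articles-slug"])
print(footers["another-article-slug"])
```

The script would append `footers[slug]` to each generated markdown file before writing it out.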
67,455,130
I am new to Python and trying to create a small ATM-like project using classes. I want the user to be able to input the amount they want to withdraw, and if the amount exceeds the current balance, it should print that you cannot have a negative balance and ask for the input again. ``` class account: def __init__(self,owner,balance): self.owner = owner self.balance = balance # self.withdraw_amount = withdraw_amount # self.deposit_amount = deposit_amount print("{} has {} in his account".format(owner,balance)) def withdraw(self,withdraw_amount): withdraw_amount = int(input("Enter amount to withdraw: ")) self.withdraw_amount = withdraw_amount if self.withdraw_amount > self.balance: print('Balance cannot be negative') else: self.final_amount = self.balance - self.withdraw_amount print("Final Amount after Withdraw " + str(self.final_amount)) def deposit(self,deposit_amount): deposit_amount = int(input("Enter amount to Deposit: ")) self.deposit_amount = deposit_amount self.final_balance = self.balance + self.deposit_amount print("Final Amount after Deposit " + str(self.final_balance)) my_account = account('Samuel',500) my_account.withdraw(withdraw_amount) ``` But I am getting the following error: `NameError: name 'withdraw_amount' is not defined` Can someone please point out what I am doing wrong and how I can fix it? Thank you.
2021/05/09
[ "https://Stackoverflow.com/questions/67455130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11606914/" ]
Via `np.select` ``` condlist = [ (df.Email_x.isna()) & (~df.Email_y.isna()), # 1st column NaN but 2nd is not (df.Email_y.isna()) & (~df.Email_x.isna()), # 2nd column NaN but 1st is not (~df.Email_x.isna()) & (~df.Email_y.isna()) # both are not NaN ] choicelist = [ df.Email_y, df.Email_x, df.Email_x ] df['Email'] = np.select(condlist, choicelist, default='') # default value '' ```
You can use [`.mask()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.mask.html), as follows: ``` df['email'] = df['Email_x'].mask(df['Email_x'].isna(), df['Email_y']) ``` It will retain the value of `df['Email_x']` if the condition is false (i.e. not `NaN`) and replace with value of `df['Email_y']` if the condition is true (i.e. `df['Email_x']` is `NaN`).
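A tiny illustration of the `.mask()` fallback described above, on made-up data (column names follow the snippets; this assumes pandas is available):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Email_x": ["a@example.com", np.nan, np.nan],
    "Email_y": [np.nan, "b@example.com", np.nan],
})

# Keep Email_x where present; where it is NaN, fall back to Email_y.
# Rows where both are NaN stay NaN.
df["email"] = df["Email_x"].mask(df["Email_x"].isna(), df["Email_y"])
print(df["email"].tolist())
```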
67,455,130
I am new to Python and trying to create a small ATM-like project using classes. I want the user to be able to input the amount they want to withdraw, and if the amount exceeds the current balance, it should print that you cannot have a negative balance and ask for the input again. ``` class account: def __init__(self,owner,balance): self.owner = owner self.balance = balance # self.withdraw_amount = withdraw_amount # self.deposit_amount = deposit_amount print("{} has {} in his account".format(owner,balance)) def withdraw(self,withdraw_amount): withdraw_amount = int(input("Enter amount to withdraw: ")) self.withdraw_amount = withdraw_amount if self.withdraw_amount > self.balance: print('Balance cannot be negative') else: self.final_amount = self.balance - self.withdraw_amount print("Final Amount after Withdraw " + str(self.final_amount)) def deposit(self,deposit_amount): deposit_amount = int(input("Enter amount to Deposit: ")) self.deposit_amount = deposit_amount self.final_balance = self.balance + self.deposit_amount print("Final Amount after Deposit " + str(self.final_balance)) my_account = account('Samuel',500) my_account.withdraw(withdraw_amount) ``` But I am getting the following error: `NameError: name 'withdraw_amount' is not defined` Can someone please point out what I am doing wrong and how I can fix it? Thank you.
2021/05/09
[ "https://Stackoverflow.com/questions/67455130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11606914/" ]
Here is a [great answer](https://stackoverflow.com/questions/44061607/pandas-lambda-function-with-nan-support), which can be achieved with combine\_first(). ``` df['email'] = df['Email_x'].combine_first(df['Email_y']) ```
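A runnable sketch of this, using a small hypothetical frame (the `Email_x` / `Email_y` column names are taken from the merge in the question; the sample values are made up):

```python
import pandas as pd
import numpy as np

# Hypothetical frame mirroring the merged Email_x / Email_y columns.
df = pd.DataFrame({
    "Email_x": ["a@x.com", np.nan, "c@x.com"],
    "Email_y": [np.nan, "b@y.com", "d@y.com"],
})

# combine_first keeps Email_x where it is present and falls back
# to Email_y only where Email_x is NaN.
df["email"] = df["Email_x"].combine_first(df["Email_y"])
print(df["email"].tolist())  # ['a@x.com', 'b@y.com', 'c@x.com']
```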
You can use [`.mask()`](https://pandas.pydata.org/docs/reference/api/pandas.Series.mask.html), as follows: ``` df['email'] = df['Email_x'].mask(df['Email_x'].isna(), df['Email_y']) ``` It will retain the value of `df['Email_x']` if the condition is false (i.e. not `NaN`) and replace with value of `df['Email_y']` if the condition is true (i.e. `df['Email_x']` is `NaN`).
24,468,944
I am trying to write a python program that asks the user to enter an existing text file's name and then display the first 5 lines of the text file or the complete file if it is 5 lines or less. This is what I have programmed so far: ``` def main(): # Ask user for the file name they wish to view filename = input('Enter the file name that you wish to view: ') # opens the file name the user specifies for reading open_file = open(filename, 'r') # reads the file contents - first 5 lines for count in range (1,6): line = open_file.readline() # prints the contents of line print() main() ``` I am using a file that has 8 lines in it called names.txt. The contents of this text file is the following: ``` Steve Smith Kevin Applesauce Mike Hunter David Jones Cliff Martinez Juan Garcia Amy Doe John Doe ``` When I run the python program, I get no output. Where am I going wrong?
2014/06/28
[ "https://Stackoverflow.com/questions/24468944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3786320/" ]
Just `print()`, by itself, will only print a newline, nothing else. You need to pass the `line` variable to `print()`: ``` print(line) ``` The `line` string will have a newline at the end, you probably want to ask `print` not to add another: ``` print(line, end='') ``` or you can remove the newline: ``` print(line.rstrip('\n')) ```
In order to print the first 5 or fewer lines, you can try the following code: ``` filename = input('Enter the file name that you wish to view: ') from itertools import islice with open(filename) as myfile: head = list(islice(myfile,5)) print(head) ``` Hope the above code satisfies your query. Thank you.
24,468,944
I am trying to write a python program that asks the user to enter an existing text file's name and then display the first 5 lines of the text file or the complete file if it is 5 lines or less. This is what I have programmed so far: ``` def main(): # Ask user for the file name they wish to view filename = input('Enter the file name that you wish to view: ') # opens the file name the user specifies for reading open_file = open(filename, 'r') # reads the file contents - first 5 lines for count in range (1,6): line = open_file.readline() # prints the contents of line print() main() ``` I am using a file that has 8 lines in it called names.txt. The contents of this text file is the following: ``` Steve Smith Kevin Applesauce Mike Hunter David Jones Cliff Martinez Juan Garcia Amy Doe John Doe ``` When I run the python program, I get no output. Where am I going wrong?
2014/06/28
[ "https://Stackoverflow.com/questions/24468944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3786320/" ]
Just `print()`, by itself, will only print a newline, nothing else. You need to pass the `line` variable to `print()`: ``` print(line) ``` The `line` string will have a newline at the end, you probably want to ask `print` not to add another: ``` print(line, end='') ``` or you can remove the newline: ``` print(line.rstrip('\n')) ```
As Martijn said, the print() command takes an argument, and that argument is what it is you'd like to print. Python is interpreted line by line. When the interpreter arrives at your print() line, it is not aware that you want it to print the "line" variable assigned above. Also, it's good practice to close a file that you've opened so that you free up that memory, though in many cases Python takes care of this automatically. You should close the file outside of your for loop. I.e.: ``` for count in range(5): #it's simpler to allow range() to take the default starting point of 0. line = open_file.readline() print(line) open_file.close() # close the file ```
24,468,944
I am trying to write a python program that asks the user to enter an existing text file's name and then display the first 5 lines of the text file or the complete file if it is 5 lines or less. This is what I have programmed so far: ``` def main(): # Ask user for the file name they wish to view filename = input('Enter the file name that you wish to view: ') # opens the file name the user specifies for reading open_file = open(filename, 'r') # reads the file contents - first 5 lines for count in range (1,6): line = open_file.readline() # prints the contents of line print() main() ``` I am using a file that has 8 lines in it called names.txt. The contents of this text file is the following: ``` Steve Smith Kevin Applesauce Mike Hunter David Jones Cliff Martinez Juan Garcia Amy Doe John Doe ``` When I run the python program, I get no output. Where am I going wrong?
2014/06/28
[ "https://Stackoverflow.com/questions/24468944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3786320/" ]
As Martijn said, the print() command takes an argument, and that argument is what it is you'd like to print. Python is interpreted line by line. When the interpreter arrives at your print() line, it is not aware that you want it to print the "line" variable assigned above. Also, it's good practice to close a file that you've opened so that you free up that memory, though in many cases Python takes care of this automatically. You should close the file outside of your for loop. I.e.: ``` for count in range(5): #it's simpler to allow range() to take the default starting point of 0. line = open_file.readline() print(line) open_file.close() # close the file ```
In order to print the first 5 or fewer lines, you can try the following code: ``` filename = input('Enter the file name that you wish to view: ') from itertools import islice with open(filename) as myfile: head = list(islice(myfile,5)) print(head) ``` Hope the above code satisfies your query. Thank you.
12,231,733
I have a python script that I want to only allow to be running once on a machine. I want it to print something like "Error, already running" if it is already running, whether its running in the background or in a different ssh session. How would I do this? Here is my script. ``` import urllib, urllib2, sys num = sys.argv[1] print 'Calling' phones = [ 'http://phone1/index.htm', 'http://phone2/index.htm', 'https://phone3/index.htm', 'https://phone4/index.htm', 'https://phone5/index.htm' ] data = urllib.urlencode({"NUMBER":num, "DIAL":"Dial", "active_line":1}) while 1: for phone in phones: try: urllib2.urlopen(phone,data) # make call urllib2.urlopen(phone+"?dialeddel=0") # clear logs except: pass ``` P.S I am using CentOS 5 if that matters.
2012/09/01
[ "https://Stackoverflow.com/questions/12231733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1601509/" ]
1. You can implement a lock on the file. 2. Create a temp file at the start of execution and check if that file is present before running the script. Refer to this post for the answer: [Check to see if python script is running](https://stackoverflow.com/questions/788411/check-to-see-if-python-script-is-running)
You could lock an existing file using e.g. [flock](http://docs.python.org/library/fcntl.html#fcntl.flock) at start of your script. Then, if the same script is run twice, the latest started would block. See also [this question](https://stackoverflow.com/questions/3918385/flock-question).
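A minimal sketch of the `flock` approach (Unix only; the lock-file path is arbitrary). It uses the non-blocking `LOCK_NB` variant so the second instance prints the error the asker wants instead of waiting:

```python
import fcntl
import sys

# Open (or create) a lock file; keep the handle open for the
# lifetime of the script so the lock is held until exit.
lock_file = open("/tmp/call_phones.lock", "w")

try:
    # LOCK_NB makes the call non-blocking: it raises instead of waiting
    # if another process already holds the exclusive lock.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    print("Error, already running")
    sys.exit(1)

print("lock acquired, running")
# ... the dialing loop from the question would go here ...
```

The lock is released automatically when the process exits, so no stale lock files need cleaning up.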
12,231,733
I have a python script that I want to only allow to be running once on a machine. I want it to print something like "Error, already running" if it is already running, whether its running in the background or in a different ssh session. How would I do this? Here is my script. ``` import urllib, urllib2, sys num = sys.argv[1] print 'Calling' phones = [ 'http://phone1/index.htm', 'http://phone2/index.htm', 'https://phone3/index.htm', 'https://phone4/index.htm', 'https://phone5/index.htm' ] data = urllib.urlencode({"NUMBER":num, "DIAL":"Dial", "active_line":1}) while 1: for phone in phones: try: urllib2.urlopen(phone,data) # make call urllib2.urlopen(phone+"?dialeddel=0") # clear logs except: pass ``` P.S I am using CentOS 5 if that matters.
2012/09/01
[ "https://Stackoverflow.com/questions/12231733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1601509/" ]
1. You can implement a lock on the file. 2. Create a temp file at the start of execution and check if that file is present before running the script. Refer to this post for the answer: [Check to see if python script is running](https://stackoverflow.com/questions/788411/check-to-see-if-python-script-is-running)
You can install the [single](http://pypi.python.org/pypi/single/) package with `pip install single` that will use an advisory lock to ensure that only a single instance of a command will run without leaving stale lock files behind. You can invoke it on your script like this: ``` single.py -c long-running-script arg1 arg2 ```
12,231,733
I have a python script that I want to only allow to be running once on a machine. I want it to print something like "Error, already running" if it is already running, whether its running in the background or in a different ssh session. How would I do this? Here is my script. ``` import urllib, urllib2, sys num = sys.argv[1] print 'Calling' phones = [ 'http://phone1/index.htm', 'http://phone2/index.htm', 'https://phone3/index.htm', 'https://phone4/index.htm', 'https://phone5/index.htm' ] data = urllib.urlencode({"NUMBER":num, "DIAL":"Dial", "active_line":1}) while 1: for phone in phones: try: urllib2.urlopen(phone,data) # make call urllib2.urlopen(phone+"?dialeddel=0") # clear logs except: pass ``` P.S I am using CentOS 5 if that matters.
2012/09/01
[ "https://Stackoverflow.com/questions/12231733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1601509/" ]
You can install the [single](http://pypi.python.org/pypi/single/) package with `pip install single` that will use an advisory lock to ensure that only a single instance of a command will run without leaving stale lock files behind. You can invoke it on your script like this: ``` single.py -c long-running-script arg1 arg2 ```
You could lock an existing file using e.g. [flock](http://docs.python.org/library/fcntl.html#fcntl.flock) at start of your script. Then, if the same script is run twice, the latest started would block. See also [this question](https://stackoverflow.com/questions/3918385/flock-question).
60,682,568
I've been trying to write code which seperates the age (digits) from name (alphabets) and then compares it to a pre-defined list and if it doesn't match with the list then it sends out an error but instead of getting (for example) alex:error, i'm getting a:error l:error e:error x:error, that is it's splitting the words to its alphabets. Here's the code: ``` from string import digits print("Enter name and age(seperated by a comma):") names=input("Data:") names1=names.strip().replace(" ","").split(',') removed_digits=str.maketrans('','',digits) names2=names.translate(removed_digits) lst1=['john','cena','rey'] print(names) print(names1) print(names2) for name in names2: if name not in lst1: print(f"{name}:Not Matching to our database.") ``` the output is : ``` Enter name and age(seperated by a comma): Data:alex 12, john 13 alex 12, john 13 ['alex12', 'john13'] alex , john a:Not Matching to our database. l:Not Matching to our database. e:Not Matching to our database. x:Not Matching to our database. :Not Matching to our database. ,:Not Matching to our database. :Not Matching to our database. j:Not Matching to our database. o:Not Matching to our database. h:Not Matching to our database. n:Not Matching to our database. :Not Matching to our database. ``` thanks for helping me out! Also I'd love if someone was to explain why my code wasn't working, I referred to pythontutor but still wasn't able to figure out the bug myself!
2020/03/14
[ "https://Stackoverflow.com/questions/60682568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12387627/" ]
You need to change your names2 variable, as it's a string type. You need to convert it to a list, applying str.translate() to each name. Here is the modified code: ``` names2=[name.translate(removed_digits) for name in names1] ``` I hope this solves your problem.
You can try something like this: ``` names=input("Data:") names1=names.strip().split(',') #split names for name in names1: names2 = name.strip().split(' ') #split name and digit if names2[0] not in lst1: print(f"{name}:Not Matching to our database.") ```
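Putting that together into a self-contained sketch (the database list and sample input come from the question; `input()` is replaced by a fixed string so the example runs on its own):

```python
data = "alex 12, john 13"          # stands in for input("Data:")
lst1 = ['john', 'cena', 'rey']     # the pre-defined list from the question

not_matching = []
for entry in data.strip().split(','):
    # Split each "name age" entry at the first space only.
    name, _, age = entry.strip().partition(' ')
    if name not in lst1:
        not_matching.append(name)
        print(f"{name}: Not matching to our database.")

print(not_matching)  # ['alex']
```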
62,789,471
I'm running python in docker and run across the `ModuleNotFoundError: No module named 'flask'` error message. any thoughts what am I missing in the Dockerfile or requirements ? ```sh FROM python:3.7.2-alpine RUN pip install --upgrade pip RUN apk update && \ apk add --virtual build-deps gcc python-dev RUN adduser -D myuser USER myuser WORKDIR /home/myuser COPY --chown=myuser:myuser ./requirements.txt /home/myuser/requirements.txt RUN pip install --no-cache-dir --user -r requirements.txt ENV PATH="/home/myuser/.local/bin:${PATH}" COPY --chown=myuser:myuser . . ENV FLASK_APP=/home/myuser/app.py CMD ["python", "app.py"] ~ ``` in the app.py I use this line ```sh from flask import Flask, jsonify ``` with requirements looking like this ```sh Flask==0.12.5 ```
2020/07/08
[ "https://Stackoverflow.com/questions/62789471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13115677/" ]
One way using `itertools.starmap`, `islice` and `operator.sub`: ``` from operator import sub from itertools import starmap, islice l = list(range(1, 10000000)) [l[0], *starmap(sub, zip(islice(l, 1, None), l))] ``` Output: ``` [1, 1, 1, ..., 1] ``` --- Benchmark: ``` l = list(range(1, 100000000)) # OP's method %timeit [l[i] - l[i - 1] for i in range(len(l) - 1, 0, -1)] # 14.2 s ± 373 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # numpy approach by @ynotzort %timeit np.diff(l) # 8.52 s ± 301 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # zip approach by @Nick %timeit [nxt - cur for cur, nxt in zip(l, l[1:])] # 7.96 s ± 243 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # itertool and operator approach by @Chris %timeit [l[0], *starmap(sub, zip(islice(l, 1, None), l))] # 6.4 s ± 255 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ```
You could use [numpy.diff](https://numpy.org/doc/stable/reference/generated/numpy.diff.html), For example: ```py import numpy as np a = [1, 2, 3, 4, 5] npa = np.array(a) a_diff = np.diff(npa) ```
62,789,471
I'm running python in docker and run across the `ModuleNotFoundError: No module named 'flask'` error message. any thoughts what am I missing in the Dockerfile or requirements ? ```sh FROM python:3.7.2-alpine RUN pip install --upgrade pip RUN apk update && \ apk add --virtual build-deps gcc python-dev RUN adduser -D myuser USER myuser WORKDIR /home/myuser COPY --chown=myuser:myuser ./requirements.txt /home/myuser/requirements.txt RUN pip install --no-cache-dir --user -r requirements.txt ENV PATH="/home/myuser/.local/bin:${PATH}" COPY --chown=myuser:myuser . . ENV FLASK_APP=/home/myuser/app.py CMD ["python", "app.py"] ~ ``` in the app.py I use this line ```sh from flask import Flask, jsonify ``` with requirements looking like this ```sh Flask==0.12.5 ```
2020/07/08
[ "https://Stackoverflow.com/questions/62789471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13115677/" ]
One way using `itertools.starmap`, `islice` and `operator.sub`: ``` from operator import sub from itertools import starmap, islice l = list(range(1, 10000000)) [l[0], *starmap(sub, zip(islice(l, 1, None), l))] ``` Output: ``` [1, 1, 1, ..., 1] ``` --- Benchmark: ``` l = list(range(1, 100000000)) # OP's method %timeit [l[i] - l[i - 1] for i in range(len(l) - 1, 0, -1)] # 14.2 s ± 373 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # numpy approach by @ynotzort %timeit np.diff(l) # 8.52 s ± 301 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # zip approach by @Nick %timeit [nxt - cur for cur, nxt in zip(l, l[1:])] # 7.96 s ± 243 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # itertool and operator approach by @Chris %timeit [l[0], *starmap(sub, zip(islice(l, 1, None), l))] # 6.4 s ± 255 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ```
You could use `zip` to put together the list with an offset version and subtract those values ``` a = [1, 2, 3, 4, 5] a[1:] = [nxt - cur for cur, nxt in zip(a, a[1:])] print(a) ``` Output: ``` [1, 1, 1, 1, 1] ``` Out of interest, I ran this, the original code and @ynotzort answer through `timeit` and this was much faster than the `numpy` code for short lists; remaining faster up to about 10M values; both were about 30% faster than the original code. As the list size increased beyond 10M, the `numpy` code has more of a speed up and eventually is faster from about 20M values onward. **Update** Also tested the `starmap` code, and that is about 40% faster than the `numpy` code at 20M values... **Update 2** @Chris has some more comprehensive performance data in their answer. This answer can be sped up further (about 10%) by using `itertools.islice` to generate the offset list: ``` a = [a[0], *[nxt - cur for cur, nxt in zip(a, islice(a, 1, None))]] ```
69,788,691
I am new here. I am a begginer with python so I am trying to write a code that allows me to remove the link break of a list in python. I have the following list (which is more extense), but I will share a part of it. ``` info = ['COLOMBIA Y LA \nNUEVA REVOLUCIÓN \nINDUSTRIAL\nPropuestas del Foco \nde Tecnologías Convergentes \ne Industrias 4.0\nVolumen 9\nCOLOMBIA - 2019\nArtista: Federico Uribe\n\n-----\n\n-----\nPropuestas del Foco \nde Tecnologías Convergentes\ne Industrias 4.0\nTomo 9\nCOLOMBIA\nY LA NUEVA \nREVOLUCIÓN \nINDUSTRIAL\n\n-----\n© Vicepresidencia de la República de Colombia\n© Ministerio de Ciencia, Tecnología e Innovación\n© Elías D. Niño-Ruiz, Jean Paul Allain, José Alejandro Montoya, Juan Luis Mejía Arango, Markus Eisenhauer, \nMaría del Pilar Noriega E., Mauricio Arroyave Franco, Mónica Álvarez-Láinez, Nora Cadavid Giraldo, \nOlga L. Quintero-Montoya, Orlando Ayala, Raimundo Abello, Tim Osswald \nPrimera edición, 2020\nISBN Impreso: 978-958-5135-10-9\nISBN digital: 978-958-5135-11-6\nDOI: https://doi.org/10.17230/9789585135116vdyc\nColección: Misión Internacional de Sabios 2019\nTítulo del volumen 9: Colombia y la nueva revolución industrial\nPreparación editorial\nUniversidad EAFIT \nUniversidad del Norte\nCarrera 49 No. 7 sur - 50 \nDirección de Investigación, Desarrollo e Innovación\nTel.: 261 95 23, Medellín \nKm. 5 Vía Puerto Colombia, Área Metropolitana de Barranquilla\ne-mail: publicaciones@eafit.edu.co \nTel.: 3509420\n \ne-mail: dip@uninorte.edu.co\nCorrección de textos y coordinación editorial: Cristian Suárez-Giraldo y Óscar Caicedo Alarcón\nDiseño de la colección y cubierta: leonardofernandezsuarez.com\nDiagramación: Ana Milena Gómez Correa \nMedellín, Colombia, 2020\nProhibida la reproducción total o parcial por cualquier medio sin la autorización escrita del titular de los \nderechos patrimoniales.\n__________________________________\nColombia y la nueva revolución industrial / Elías D. Niño-Ruiz…[et al]. – Medellín : \n Colombia. 
Ministerio de Ciencia Tecnología e Innovación, 2020\n 175 p. -- (Misión Internacional de Sabios 2019).\n \n ISBN: 978-958-5135-10-9 ; 978-958-5135-11-6\n1. Educación – Colombia. 2. Educación y desarrollo – Colombia. 3. Desarrollo científico y tecnológico – Colombia. I. Niño-Ruiz, Elias D. \nII. Noriega E., María del Pilar, pról. III. Mejía Arango, Juan Luis, pról. IV. Abello Llanos, Raimundo, pról. V. Tít. VI. Serie\n370.9861 cd 23 ed.\nC718\n Universidad Eafit- Centro Cultural Biblioteca Luis Echavarría Villegas\n___________________________________\n\n-----\nTomo 9\nCOLOMBIA \nY LA NUEVA \nREVOLUCIÓN \nINDUSTRIAL\n\n-----\nBiotecnología, Bioeconomía \ny Medio Ambiente\nSilvia Restrepo, coordinadora \nCristian Samper \nFederica di Palma (Reino Unido) \nElizabeth Hodson \nMabel Torres\nEsteban Manrique Reol (España) \nMichel Eddi (Francia) \nLudger Wessjohann (Alemania) \nGermán Poveda\nCiencias Básicas y del Espacio\nMoisés Wasserman Lerner, coordinador \nCarmenza Duque Beltrán \nSerge Haroche (Francia, premio Nobel) \nAna María Rey Ayala \nAntonio Julio Copete Villa\nCiencias Sociales y Desarrollo \nHumano con Equidad\nClemente Forero Pineda, coordinador \nAna María Arjona \nSara Victoria Alvarado Salgado \nWilliam Maloney (Estados Unidos) \nStanislas Dehaene (Francia) \nJohan Schot (Holanda) \nKyoo Sung Noh (Corea del Sur)\nCiencias de la Vida y la Salud\nJuan Manuel Anaya, coordinador \nNubia Muñoz \nIsabelle Magnin (Francia) \nRodolfo Llinás \nJorge Reynolds \nAlejandro Jadad\nEnergía Sostenible\nJuan Benavides, coordinador \nAngela Wilkinson (Reino Unido)\nEduardo Posada \nJosé Fernando Isaza\nIndustrias Creativas y Culturales \nEdgar Puentes, coordinador \nRamiro Osorio \nCamila Loboguerrero \nLina Paola Rodríguez Fernández \nCarlos Jacanamijoy \nAlfredo Zolezzi (Chile)\nOcéanos y Recursos Hidrobiológicos\nAndrés Franco, coordinador \nWeildler Antonio Guerra \nJorge Reynolds \nJuan Armando Sánchez \nSabrina Speich (Francia)\nTecnologías Convergentes Nano, 
\nInfo y Cogno Industrias 4.0\nMaría del Pilar Noriega, coordinadora \nJean Paul Allain \nTim Andreas Osswald \nOrlando Ayala\nCoordinador de coordinadores\nClemente Forero Pineda\nCOMISIONADOS\n\n-----\nBiotecnología, \nBioeconomía y \nMedio Ambiente\nSecretaría Técnica – \nUniversidad de los \nAndes, Vicerrectoría de \nInvestigación \nSilvia Restrepo \nMaría Fernanda Mideros\nClaudia Carolina Caballero \nLaguna\nGuy Henry \nRelator \nMartín Ramírez \nCiencias Básicas \ny del Espacio\nSecretaría Técnica – \nUniversidad Nacional de \nColombia\nJairo Alexis Rodríguez López\nHernando Guillermo \nGaitán Duarte\nLiliana Pulido Báez\nRelator\nDiego Alejandro Torres \nGalindo\nCiencias Sociales y \nDesarrollo Humano \ncon Equidad\nSecretaría Técnica – \nUniversidad del Rosario, \nEscuela de Ciencias \nHumanas \nStéphanie Lavaux \nCarlos Gustavo Patarroyo \nMaría Martínez \nRelatores\nJuliana Valdés Pereira \nEdgar Sánchez Cuevas \nPaula Juliana Guevara \nPosada\nCiencias de la \nVida y la Salud', 'El gerente de relaciones públicas de Riot Games para la región, Juan José Moreno, retrató el fenómeno de League of Legends en la industria gamer.</li>\n\n<li>Creadoras de contenido relevantes expusieron varios consejos a las mujeres que están interesadas en emprender este camino.</li>\n\n</ul></div><div class=" cid-616 aid-184215">\n\t\t\t<div class="figure cid-616 aid-184215 binary-foto_marquesina format-png"><a href="articles-184215_foto_marquesina.png" title="Ir a "><img src="articles-184215_foto_marquesina.png" alt="" width="960" height="400"></a><a href="articles-184215_foto_marquesina.png" title="Ir a "></a></div>\n<p>Con éxito finalizó el tercer y último día de Colombia 4.0, el evento que reunió a la industria creativa y TI, tanto de manera virtual como presencial durante tres días en Puerta de Oro, en Barranquilla.</p>\n<p>Durante la tercera jornada, los asistentes pudieron presenciar a diversos conferencistas expertos en temáticas como la transformación digital en los 
jóvenes, el futuro de la industria de los videojuegos, las nuevas maneras de aproximarse a las audiencias y la creación de contenido digital. Así mismo, tuvo lugar un nuevo Foro Misión TIC 2022.</p>\n<p>A continuación, los momentos más destacados del tercer día de Colombia 4.0:</p>\n<h3>El fenómeno League of Legends</h3>\n<p>Juan José Moreno, gerente de relaciones públicas de Riot Games para América Latina, aseguró, durante su participación en Colombia 4.0, que busca ser más que una empresa de videojuegos y crear una experiencia completa a sus jugadores. Así, detalló que en 2016 el juego contaba con más de 100 millones de jugadores activos mensuales y en 2019 alcanzaron 8 millones de jugadores que se conectaban diariamente a jugar partidas simultáneas en todo el mundo.</p>\n<p>Por eso, en 2019 lanzaron una colección de ropa en alianza con Louis Vuitton, además crearon KDA, un grupo virtual de pop de LoL que se compone de los personajes más populares del juego. Esto se dio como respuesta a la necesidad de los streamers de tener canciones sin copyright para sus transmisiones, por lo que lanzaron un disco con 37 canciones originales para que las personas puedan usarlas sin problema.</p>\n<h3>El análisis FODA de las industrias 4.0</h3>\n<p>Ximena Duque, presidenta de Fedesoft, e Iván Darío Castaño, director ejecutivo de Ruta N, desarrollaron un análisis de fortalezas, oportunidades, debilidades y amenazas de las industrias 4.0 en el país. Entre las oportunidades que tienen estas industrias ambos coincidieron en que el mercado entendió que es el momento de transformar sus negocios hacia lo digital. 
Castaño instó a las industrias 4.0 a que aprovechen este momento, puesto que, dijo, hoy en día se pueden hacer muchos negocios sin que la territorialidad sea un límite.</p>\n<p>Hablando de los aspectos por mejorar, Ximena Duque se refirió a que aún se debe seguir trabajando por el cierre de brechas sociales, al tiempo que Castaño afirmó que el bilingüismo es algo en lo que el país debe avanzar. Eso sí, recordó que cuando se trata de bilingüismo hoy en día es necesario pensar en idiomas diferentes al inglés.</p>\n<h3>Consejos para las creadoras de contenido digital</h3>\n<p>En el último día de Colombia 4.0, algunas de las creadoras de contenidos más influyentes del país dieron sus mejores consejos para aquellas personas que están empezando en las redes sociales y quieren triunfar en el mundo digital.</p>\n<p>La influenciadora Valentina Lizcano comentó que esos influenciadores que se dedican a mostrar vidas perfectas en las redes sociales o hacen contenido aspiracional, no aportan mucho, por eso, la honestidad y ser genuinos es lo que va a ayudar a crear comunidades realmente fieles y sólidas. "Las mujeres debemos dejar de hablarnos con mentiras. Como somos nos vemos lindas. 
Nuestra realidad tiene que estar alineada con tu contenido", afirmó la actriz.</p>\n<h3>Pódcast como nueva manera de formar comunidad</h3>\n<p>Una de las conversaciones al respecto giró en torno al pódcast, es decir aquellos contenidos de audio que se distribuyen a través de plataformas, y a cómo esta herramienta puede convertirse, incluso, en una fuente de ingresos o de fidelización de comunidades para las empresas.</p>\n<p>La conversación fue entre Mauricio Cabrera, creador de Story Baker, y Alejandro Vargas, gerente general de Podway, quienes coincidieron en que el podcasting ha logrado democratizar el audio para que haya otras ideas por fuera de lo tradicional ya que ahora cualquier ciudadano tiene las herramientas necesarias para ser un creador.</p>\n<h3>Nuevo Foro Misión TIC 2022</h3>\n<p>El tercer foro de Misión TIC contó con la presencia de destacados invitados del sector de la tecnología que despertaron el interés de los asistentes por el apasionante mundo de las tecnologías, donde se analizaron las perspectivas y el rol que juegan las mujeres en las industrias TI del país.</p>\n<p>La introducción y bienvenida estuvo a cargo de Dennis Palacios Directora (E) de Economía Digital del Ministerio TIC, quien explicó los beneficios de Misión TIC e hizo una especial invitación a todos los colombianos, <em>"Hago una cordial invitación a todos los niños jóvenes y adultos del país a que se inscriban en la última convocatoria de Misión TIC \'</em>La última tripulación\' que tiene 50 mil cupos gratis para aprender programación", expresó Palacios durante la apertura.', 'En Norte de Santander hay aproximadamente 650 empresas de base tecnológica registradas en la Cámara de Comercio de Cúcuta, estás desarrollan software, contenidos digitales, aplicaciones, marketing y comercialización virtual. 
El departamento se ha convertido en una plataforma de despegue para estas ideas de negocio y las cifras muestran que cada vez ganan más terreno.</p>\n\n<p>\n\t<strong>Según Procolombia, en una nota publicada en su página web, las exportaciones de las industrias 4.0 llegaron a 407,5 millones de dólares en 2018, un incremento del 33 % en comparación con 2017.</strong></p>\n\n<p>\n\tLa entidad encargada de la promoción de los productos y la industria nacional reportó que 337 empresas colombianas hicieron negocios en más de 60 destinos. Además, el principal comprador fue Estados Unidos con 177,7 millones de dólares.</p>\n\n<p>\n\t“El apetito de los compradores internacionales por los servicios de las Industrias 4.0 de Colombia (BPO, software, salud, audiovisuales y contenidos digitales, comunicación gráfica y editorial) sigue ampliándose con ritmo acelerado”, reseña la entidad.</p>\n\n<p>\n\t<strong>Al desagregar los productos y servicios que hacen parte de la oferta de estas empresas, resalta el liderazgo de las ventas de software, que aportaron 159,7 millones de dólares, seguido por BPO, con 103,9 millones; audiovisuales y contenidos digitales, con 82,8 millones; salud, con 57,4 millones y comunicación gráfica y editorial, con 3,5 millones de dólares.</strong></p>\n\n<p>\n\tJuliana Villegas, vicepresidente de Exportaciones de Procolombia, indicó que “las ventas del país se han ‘desconcentrado’, porque antes el 80 % de las exportaciones nacionales de estos productos provenía de Bogotá. Ahora, hay zonas que han ganado participación como Antioquia, Valle del Cauca, Santander y Norte de Santander”.</p>\n\n<p>\n\tEl año pasado, el 56,6 %, es decir 230,8 millones de dólares, de las exportaciones salieron de Bogotá. Antioquia aportó el 18,8 % y Valle del Cauca, el 12,5 %. 
Sin embargo, al analizar las ciudades se destacan los aumentos de 107 % de Pereira (5,8 millones de dólares), del 48 % de Cúcuta (420.000 dólares) y del 10 % de Barranquilla (11,5 millones de dólares).</p>\n\n<p>\n\tEn Norte de Santander cuatro empresas llevaron al extranjero sus productos de\xa0software, contenidos digitales y BPO, con Estados Unidos y México como destinos.\xa0Las exportaciones de software ascendieron a 420.346 dólares, los contenidos digitales produjeron 36.285 dólares y el BPO fue avaluado en 28.950 dólares.</p>\n\n<p>\n\t<strong>La diversificación de creadores y exportadores de este tipo de contenido benefician al país en el incremento de su oferta.</strong></p>\n\n<p>\n\tEstados Unidos es el mayor comprador de la tecnología nacional con el 43,6 % del total de las exportaciones (177,7 millones de dólares) y los productos llegan a San Francisco, Miami, Los Ángeles, Washington, Nueva York, Houston, Atlanta, Dallas y Chicago.</p>\n\n<p>\n\t<strong>Trabajo articulado regional</strong></p>\n\n<p>\n\tDesde el 2016 existía la iniciativa de la Cámara de Comercio de Cúcuta y la Universidad Francisco de Paula Santander (Ufps) de crear un clúster para las industrias de base tecnológica.</p>\n\n<p>\n\tSin embargo, sólo hasta el año pasado se logró la unión de 23 empresas para aunar esfuerzos en pro de mejorar la productividad y abrirse nuevos mercados.</p>\n\n<p>\n\t<strong>Beatriz Vélez, empresaria del sector, dijo que el sector se enfoca en software para el sector salud y de educación. “Estos son los subsectores a donde más están apuntando los desarrolladores del departamento”, indicó.</strong></p>\n\n<p>\n\tLa rentabilidad y poca inversión inicial que se necesita para crear estas empresas es uno de los beneficios más grandes del sector. 
“No se necesita una gran infraestructura, se puede trabajar desde el garaje de una casa, lo importante es tener buen equipo (computador) e internet”, explicó Vélez.</p>\n\n<p>\n\t<img alt="" height="370" src="/sites/default/files/2019/03/05/imagenes_adicionales/e5.jpg" width="640" /></p>\n\n<p>\n\t<em>23 empresas componen el clúster Nortic del departamento, con el objetivo de impulsar y fortalecer el sector.</em></p>\n\n<p>\n\tSin embargo, la empresaria destacó que el trabajo en equipo es vital para desarrollar el software que venden estas empresas. “En Cúcuta hay mano de obra semi calificada y económica, esto beneficia a las industrias 4.0 de la región”, agregó la empresaria.</p>\n\n<p>\n\t<strong>William Trillos, gerente de Gnosoft, resaltó que las industrias 4.0 hacen parte del cambio de los procesos tradicionales en las empresas colombianas. “En cualquier tipo de actividad económica se pueden aplicar los productos que genera el sector”, manifestó.</strong></p>\n\n<p>\n\tEl gerente de Gnosoft estudió ingeniería de sistemas en la Ufps, luego de trabajar en varias empresas tomó la decisión de crear la suya.\xa0</p>\n\n<p>\n\tDe esta forma, en el 2007 empezó el trabajo para que naciera Gnosoft, empresa que ofrece asesoramiento al sector educativo para la introducción efectiva de las TIC.</p>\n\n<p>\n\t<strong>“Iniciamos fracasando en la salud, luego empezamos a trabajar con la educación y nos dimos cuenta que podíamos cambiar vidas a través de la tecnología, mejorando los canales de comunicación y automatizando los procesos de este sector”, explicó Trillos.</strong></p>\n\n<p>\n\tHoy, a Gnosoft la componen 18 empleados entre ingenieros de sistemas, analistas de diseño gráfico y contadores. 
Gracias a Procolombia Trillos y su equipo recibieron un curso para desarrollar exportación, “desde Cúcuta podemos ofrecer el servicio a cualquier país del mundo en donde allá conexión a internet, eso es una ventaja”, agregó el empresario.</p>\n\n<p>\n\t<img alt="" height="370" src="/sites/default/files/2019/03/05/imagenes_adicionales/e6.jpg" width="640" /></p>\n\n<p>\n\tOtra empresa que está entrando en el sector es Insegroup. Yordan Mantilla, uno de sus líderes, señaló que actualmente se desarrollan en el mercado eléctrico y en sus proyectos de automatización requieren nuevas tecnologías de las comunicaciones.</p>\n\n<p>\n\t“Enfocamos el desarrollo del concepto de ciudades inteligentes, creando soluciones para la cuantificación de variables ambientales. Otro proyecto es (T-Cyborg) que busca integrar el análisis de los sonidos de las ciudades para entender cómo percibe e interpreta su ciudad una persona ciega”, explicó Mantilla.</p>\n\n<p>\n\tInsegroup tiene una unidad de negocio que busca desarrollar los pilotos de sus proyectos en Cúcuta, para poner en práctica su oferta de servicios y ofrecerla a mediano plazo a nivel mundial.</p>\n\n<p>\n\t<strong>“Dentro de la validación del T-Cyborg, Latinoamérica y Centroamérica mostraron tener un potencial enorme para acceder a esa tecnología, la cual es de punta e innovadora”, indicó el empresario.</strong></p>\n\n<p>\n\tCamilo Puello, cofundador de Just Sapiens, convirtió una idea de negocio digital nacida en 2013, en una empresa. Hoy, su iniciativa se ha vendido a nivel regional y nacional.</p>\n\n<p>\n\t“En un par de meses buscamos vender la idea a nivel internacional, en el departamento hemos vendido el servicio a 35 abogados y a entidades públicas. 
En el país hemos llegado a Neiva, Santa Marta y Bogotá”, explicó Puello.</p>\n\n<p>\n\tA través del clúster se están desarrollando estrategias para enfocar a las empresas en segmentos de mercado con baja ocupación, donde la región con su portafolio de servicio puede apoyar en la generación de alto valor agregado con soluciones TIC.</p>\n\n<p>\n\t<strong>Vitrina internacional</strong></p>\n\n<p>\n\tUna delegación de cerca de 30 empresas colombianas, en las que no se incluía empresas nortesantandereanas, generó en el Mobile World Congress 2019, llevado a cabo en Barcelona, ventas por cerca de 2 millones de euros y expectativas de negocio de cerca de 50 millones de euros.</p>\n\n<p>\n\tProcolombia reseñó esta información en su página web, donde se aseguró que Colombia despuntó en Barcelona siendo la delegación Latinoamericana más grande.</p>\n\n<p>\n\tLa presidenta de Procolombia, Flavia Santoro manifestó que “el Mobile World Congress fue una gran oportunidad para mostrarle al mundo el valor agregado de las industrias 4.0 del país”.'] ``` I want to remove every **'\n'** that is on the list, because when I remove the special characters ("[\W]+"), some words, for example "\nNUEVA" or "\nINDUSTRIAL", end up with an 'n' before the word ("nNUEVA" or "nINDUSTRIAL"). What I need is to remove the line break from each word of each text on the list, even if it is at the beginning, the end, or between words. I have tried: ``` lista_1 = [] lista_2 =[] for it in info: for q in it: if q[0:2] != '\n': lista_1.append(it) else: lista_2.append(it[2:]) ``` and then: ``` new_list = lista_1 + lista_2 new_list ``` but I am getting a "memory error". Does somebody know how I can overcome this problem, or is there any other code I could use? Thank you so much!
2021/10/31
[ "https://Stackoverflow.com/questions/69788691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17293731/" ]
You can use a list comprehension: ``` info = [i.replace("\n", "") for i in info] ```
You can use a generator expression and then join the entries of the list with a separator character, `sep`; I chose an empty string. Note that this produces a single string rather than a list. ``` sep = '' # choose the separator character text = sep.join(s.replace('\n', '') for s in info) ```
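Both responses boil down to mapping `str.replace` over the list; a minimal runnable sketch (with short hypothetical strings standing in for the scraped paragraphs in the question) showing that one pass removes leading, trailing, and embedded newlines without the two-list bookkeeping that triggered the memory error:

```python
# Hypothetical stand-ins for the scraped paragraphs in the question.
info = ["\nNUEVA", "INDUSTRIAL\n", "entre\n palabras"]

# One pass, one new list; no per-character loops or list concatenation.
cleaned = [s.replace("\n", "") for s in info]
print(cleaned)  # ['NUEVA', 'INDUSTRIAL', 'entre palabras']
```

Because `replace` scans the whole string, it also handles a `'\n'` that sits in the middle of a paragraph, not just at position 0 as the original `q[0:2]` check assumed.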
31,972,419
I have a file whose contents are ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` When I read the file using Python, I get the string as ``` "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" ``` I want the double quotes to be removed from the beginning and end of the string. From the Python docs, I came to know that Python adds double quotes by itself if there are single quotes in the string, to avoid escaping.
2015/08/12
[ "https://Stackoverflow.com/questions/31972419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1642114/" ]
If the files as stored are intended to be JSON then they are invalid. The JSON format doesn't allow the use of single quotes to delimit strings. **Assuming you have no single quotes within the key/value strings** themselves, you can replace the single quotes with double quotes and then read in using the JSON module: ``` import json x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" x = x.replace("'", '"') j = json.loads(x) print j ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` **Alternatively:** If the data is the string representation of a Python `dict`, you can read it in with `eval`. Using `eval` is dangerous (see [Ned Batchelder](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)'s thoughts on it). That said, if you wrote the file yourself and you are confident that it contains no malicious code, you can use `eval` to read the string as Python source code: ``` x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" eval(x, {'__builtins__': {}}) ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` Don't make a habit of this though! The right way to do this is to save the data to a file in a proper serialization format and then to read it from disk using a library like the `json` module.
If your string actually contains double quotes (which it might not, as they could just be part of the printed representation), you could get rid of them with a slice, e.g., ``` >>> hello = '"hello more stuff things"' >>> hello '"hello more stuff things"' >>> hello[1:-1] 'hello more stuff things' ``` Note in this case that the outer single quotes are not part of the string, they are just part of the printed representation.
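Since the file contents are already a valid Python dict literal, the standard library's `ast.literal_eval` gives the safety of the JSON route without rewriting quotes, and without the code-execution risk of `eval`; a sketch:

```python
import ast

x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList': 'a,b,c,d,e'}"
d = ast.literal_eval(x)  # parses literals only; raises ValueError on anything executable
print(d['FileID'])  # a3333.txt
```

This accepts single-quoted strings (unlike `json.loads`) and rejects expressions such as function calls, so it is the usual choice for reading back `repr()`-style data.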
31,972,419
I have a file whose contents are ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` When I read the file using Python, I get the string as ``` "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" ``` I want the double quotes to be removed from the beginning and end of the string. From the Python docs, I came to know that Python adds double quotes by itself if there are single quotes in the string, to avoid escaping.
2015/08/12
[ "https://Stackoverflow.com/questions/31972419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1642114/" ]
If the files as stored are intended to be JSON then they are invalid. The JSON format doesn't allow the use of single quotes to delimit strings. **Assuming you have no single quotes within the key/value strings** themselves, you can replace the single quotes with double quotes and then read in using the JSON module: ``` import json x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" x = x.replace("'", '"') j = json.loads(x) print j ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` **Alternatively:** If the data is the string representation of a Python `dict`, you can read it in with `eval`. Using `eval` is dangerous (see [Ned Batchelder](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)'s thoughts on it). That said, if you wrote the file yourself and you are confident that it contains no malicious code, you can use `eval` to read the string as Python source code: ``` x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" eval(x, {'__builtins__': {}}) ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` Don't make a habit of this though! The right way to do this is to save the data to a file in a proper serialization format and then to read it from disk using a library like the `json` module.
The double quotes you are referring to are not part of the string, and are just there to delimit it. If you assign the string "thi's'" to a variable: `>>> a = "thi's'"` the first element in that string is `t`: `>>> a[0]` `t` In your example, the first element in the string would be `{`, which I believe is what you would expect.
31,972,419
I have a file whose contents are ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` When I read the file using Python, I get the string as ``` "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" ``` I want the double quotes to be removed from the beginning and end of the string. From the Python docs, I came to know that Python adds double quotes by itself if there are single quotes in the string, to avoid escaping.
2015/08/12
[ "https://Stackoverflow.com/questions/31972419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1642114/" ]
If the files as stored are intended to be JSON then they are invalid. The JSON format doesn't allow the use of single quotes to delimit strings. **Assuming you have no single quotes within the key/value strings** themselves, you can replace the single quotes with double quotes and then read in using the JSON module: ``` import json x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" x = x.replace("'", '"') j = json.loads(x) print j ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` **Alternatively:** If the data is the string representation of a Python `dict`, you can read it in with `eval`. Using `eval` is dangerous (see [Ned Batchelder](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)'s thoughts on it). That said, if you wrote the file yourself and you are confident that it contains no malicious code, you can use `eval` to read the string as Python source code: ``` x = "{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" eval(x, {'__builtins__': {}}) ``` yields: ``` {'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'} ``` Don't make a habit of this though! The right way to do this is to save the data to a file in a proper serialization format and then to read it from disk using a library like the `json` module.
You can convert the `string` back to `dictionary` using ``` import re x="{'FileID': 'a3333.txt','Timestamp': '2014-12-05T02:01:28.271Z','SuccessList':'a,b,c,d,e'}" print dict(re.findall(r"""'([^']*)'\s*:\s*'([^']*)'""",x)) ```
10,436,130
How should this python be expressed ``` gl_enums = ... # map from name to values for opengl enums # use with gl_enums["enum name"] ``` in clojure? This works, but is it right? ``` (def gl-enums ...) ; map from name to values for opengl enums (defn gl-enum [k] (or (gl-enums k) (throw (SomeException.)))) ; use with (gl-enum :enum-name) ``` edit: for clarification, this question is about the exception-throwing part, not the map-defining part
2012/05/03
[ "https://Stackoverflow.com/questions/10436130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/322866/" ]
Your original example is OK. Two approaches you are also likely to come across: ``` ;; not very idiomatic (defn example [m] (if (contains? m :name) (:name m) (throw (IllegalArgumentException. (format "key %s is missing" :name))))) ;; idiomatic (defn example [m] (if-let [v (:name m)] v (throw (IllegalArgumentException. (format "key %s is missing" :name))))) ``` [More on clojure.core/if-let](http://clojuredocs.org/clojure_core/clojure.core/if-let "More on if-let") [More on clojure.core/contains?](http://clojuredocs.org/clojure_core/clojure.core/contains_q "More on clojure.core/contains?")
Just use a regular hashmap: ``` (def gl-enums {:name1 "value1", :name2 "value2", :name3 "value3", ...}) ``` if you don't want to provide keywords (like `:keyword`) but prefer strings, you'll need to use `(get gl-enums str)` in `gl-enum`
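For reference, the Python side of the comparison in the question needs no wrapper function at all: plain subscripting on a `dict` already raises `KeyError` for a missing key, which is the behaviour the Clojure `gl-enum` wrapper reproduces. A sketch with illustrative enum values:

```python
gl_enums = {"GL_LINES": 0x0001, "GL_TRIANGLES": 0x0004}  # illustrative values

print(gl_enums["GL_TRIANGLES"])  # 4

# A missing key raises rather than returning a sentinel:
try:
    gl_enums["GL_QUADS"]
except KeyError:
    print("missing key")  # missing key
```

Clojure maps instead return `nil` (or a supplied default) on a miss, which is why the question wraps the lookup in an explicit throw.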
43,684,760
I have very basic Python knowledge. This is my code so far: when I run this code, the error `UnboundLocalError: local variable 'response' referenced before assignment on line 7` displays. I am trying to create a function that compares the response input to two lists and, if that input is found, assigns true or false to response. I also need to assign response, which should be true or false, to another list of answers. (True or false will be assigned values, and the total of the list of answers will be calculated to match a final list calculation.) ``` response = [str(input("Would you rather eat an apple or an orange? Answer apple or orange."))] list1 =[str("apple"), str("Apple"), str("APPLE")] lsit3 = [str("an orange"), str("Orange"), str("orange"), str("an Orange"), str("ORANGE")] def listCompare(): for list1[0] in list1: for response[0] in response: if response[0] == list1[0]: response = true else: for list3[0] in list3: for response[0] in response: if response[0] == list3[0]: response = false listCompare() ``` \*\*EDIT: Ok. Thanks for the roast. I'm in high school and halfway through an EXTREMELY basic class. I'm just trying to make this work to pass. I don't need "help" anymore.
2017/04/28
[ "https://Stackoverflow.com/questions/43684760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7937653/" ]
You are complicating things a lot here, but that is understandable if you are new to programming or Python. To put you on the right track, this is a better way to attack the problem: ``` valid_responses = ['a', 'b'] response = input("choose a or b: ").lower().strip() if response in valid_responses: print("Valid response:", response) else: print("Invalid response:", response) ``` Look up any function you don't understand here. String declarations can also just be in single or double quotes: ``` my_strings = ['orange', "apple"] ``` Also, to make global variables assignable inside a function, you need to use the global keyword. ``` my_global = "hello" def my_function(): global my_global # Do stuff with my_global ``` For loops should assign to new local variables: ``` options = ['a', 'b', 'c'] for opt in options: print(opt) ```
Instead of enumerating a list of exact possible answers, you could instead match against patterns of possible answers. Here is one way to do that, case insensitively: ``` import re known_fruits = ['apple', 'orange'] response = str(input("What would you like to eat? (Answer " + ' or '.join(known_fruits) + '): ')) def listCompare(): for fruit in known_fruits: pattern = '^(?:an\s)?\s*' + fruit + '\s*$' if re.search(pattern,response,re.IGNORECASE): return True if listCompare(): print("Enjoy it!") else: print("'%s' is an invalid choice" % response) ``` This would match `apple`, `Apple`, or even `ApPle`. It would also match 'an apple'. However, it would not match `pineapple`. Here is the same code, with the regular expression broken down: ``` import re known_fruits = ['apple', 'orange'] print("What would you like to eat?") response = str(input("(Answer " + ' or '.join(known_fruits) + '): ')) # Change to lowercase and remove leading/trailing whitespace response = response.lower().strip() def listCompare(): for fruit in known_fruits: pattern = ( '^' + # Beginning of string '(' + # Start of group 'an' + # word "an" '\s' + # single space character ')' + # End of group '?' + # Make group optional '\s*' + # Zero or more space characters fruit + # Word inside "fruit" variable '\s*' + # Zero or more space characters '$' # End of the string ) if re.search(pattern,response): return True if listCompare(): print("Enjoy it!") else: print("'%s' is an invalid choice" % response) ```
15,854,916
I've a problem, as the title says. I've tried to install smart\_selects in my Django project, but it does not work. I followed the readme in <https://github.com/digi604/django-smart-selects>... but the error is: > > No module named admin\_static > Request Method: GET > Request URL: http://**\*.com/panel/schedevendorcomplete/add/1/Basic/ > Django Version: 1.2.7 > Exception Type: ImportError > Exception Value: > > No module named admin\_static > Exception Location: /var/www/website/smart\_selects/widgets.py in , line 4 > Python Executable: /usr/bin/python > Python Version: 2.7.3 > Python Path: ['/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/vhosts/**.com', '/var/www/vhosts/**.com/website'] > Server time: sab, 6 Apr 2013 20:50:04 +0200 > Traceback Switch to copy-and-paste view > /usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py in get\_response > request.path\_info) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in resolve > for pattern in self.url\_patterns: ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in \_get\_url\_patterns > patterns = getattr(self.urlconf\_module, "urlpatterns", self.urlconf\_module) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in \_get\_urlconf\_module > self.\_urlconf\_module = import\_module(self.urlconf\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /var/www/vhosts/tuttoricevimenti.com/website/urls.py in > admin.autodiscover() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/\_\_init\_\_.py in autodiscover > import\_module('%s.admin' % app) ... 
> ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/auth/admin.py in > admin.site.register(Group, GroupAdmin) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/sites.py in register > validate(admin\_class, model) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/validation.py in validate > models.get\_apps() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in get\_apps > self.\_populate() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in \_populate > self.load\_app(app\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in load\_app > models = import\_module('.models', app\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /var/wwww/vhosts/\***.com/gazzettadelpopolo/models.py in > from website.smart\_selects.db\_fields import GroupedForeignKey ... > ▶ Local vars > /var/wwww/vhosts/*\**.com/website/smart\_selects/db\_fields.py in > import form\_fields ... > ▶ Local vars > /var/www/website/smart\_selects/form\_fields.py in > from smart\_selects.widgets import ChainedSelect ... > ▶ Local vars > /var/www/website/smart\_selects/widgets.py in > from django.contrib.admin.templatetags.admin\_static import static ... > ▶ Local vars > > > I do not know where I'm wrong; I'm not very familiar with Django. What could be causing this? Thanks
2013/04/06
[ "https://Stackoverflow.com/questions/15854916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252933/" ]
Use Java's Calendar (since my beloved Date is deprecated): [Calendar API](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Calendar.html) Try something like: ``` Calendar c = Calendar.getInstance(); c.setTimeInMillis(1365228375L * 1000); // long literal: a plain int multiplication here would overflow System.out.print(c.toString()); // just to demonstrate; the API has lots of info about presentation ``` Just put whatever is being returned from your PHP to the Android app in place of the '1365228375' shown above. Cheers.
Well, conversion from seconds to milliseconds shouldn't be too difficult; ``` echo time() * 1000; ``` If you need the time stamp to be **accurate** to the millisecond, look at [`microtime()`](http://www.php.net/manual/en/function.microtime.php); however, this function does *not* return an integer, so you'll have to do some conversion yourself. Don't know how to convert that to a readable time in Android, though.
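Although the two responses use Java and PHP, the same seconds-since-epoch conversion is easy to check from Python; the timestamp value below is the one used in the Java example:

```python
from datetime import datetime, timezone

ts = 1365228375  # PHP time() returns whole seconds since the Unix epoch
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2013-04-06T06:06:15+00:00
print(ts * 1000)       # 1365228375000 -- the millisecond value Java's setTimeInMillis expects
```

The `* 1000` step is exactly where the Java version needs a `long`, since a 32-bit `int` cannot hold millisecond timestamps of this size.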
15,854,916
I've a problem, as the title says. I've tried to install smart\_selects in my Django project, but it does not work. I followed the readme in <https://github.com/digi604/django-smart-selects>... but the error is: > > No module named admin\_static > Request Method: GET > Request URL: http://**\*.com/panel/schedevendorcomplete/add/1/Basic/ > Django Version: 1.2.7 > Exception Type: ImportError > Exception Value: > > No module named admin\_static > Exception Location: /var/www/website/smart\_selects/widgets.py in , line 4 > Python Executable: /usr/bin/python > Python Version: 2.7.3 > Python Path: ['/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/vhosts/**.com', '/var/www/vhosts/**.com/website'] > Server time: sab, 6 Apr 2013 20:50:04 +0200 > Traceback Switch to copy-and-paste view > /usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py in get\_response > request.path\_info) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in resolve > for pattern in self.url\_patterns: ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in \_get\_url\_patterns > patterns = getattr(self.urlconf\_module, "urlpatterns", self.urlconf\_module) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py in \_get\_urlconf\_module > self.\_urlconf\_module = import\_module(self.urlconf\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /var/www/vhosts/tuttoricevimenti.com/website/urls.py in > admin.autodiscover() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/\_\_init\_\_.py in autodiscover > import\_module('%s.admin' % app) ... 
> ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/auth/admin.py in > admin.site.register(Group, GroupAdmin) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/sites.py in register > validate(admin\_class, model) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/contrib/admin/validation.py in validate > models.get\_apps() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in get\_apps > self.\_populate() ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in \_populate > self.load\_app(app\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/db/models/loading.py in load\_app > models = import\_module('.models', app\_name) ... > ▶ Local vars > /usr/local/lib/python2.7/dist-packages/django/utils/importlib.py in import\_module > \_\_import\_\_(name) ... > ▶ Local vars > /var/wwww/vhosts/\***.com/gazzettadelpopolo/models.py in > from website.smart\_selects.db\_fields import GroupedForeignKey ... > ▶ Local vars > /var/wwww/vhosts/*\**.com/website/smart\_selects/db\_fields.py in > import form\_fields ... > ▶ Local vars > /var/www/website/smart\_selects/form\_fields.py in > from smart\_selects.widgets import ChainedSelect ... > ▶ Local vars > /var/www/website/smart\_selects/widgets.py in > from django.contrib.admin.templatetags.admin\_static import static ... > ▶ Local vars > > > I do not know where I'm wrong; I'm not very familiar with Django. What could be causing this? Thanks
2013/04/06
[ "https://Stackoverflow.com/questions/15854916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252933/" ]
Use Java's Calendar (since my beloved Date is deprecated): [Calendar API](http://docs.oracle.com/javase/1.5.0/docs/api/java/util/Calendar.html) Try something like: ``` Calendar c = Calendar.getInstance(); c.setTimeInMillis(1365228375L * 1000); // long literal: a plain int multiplication here would overflow System.out.print(c.toString()); // just to demonstrate; the API has lots of info about presentation ``` Just put whatever is being returned from your PHP to the Android app in place of the '1365228375' shown above. Cheers.
There isn't really an "Android" time stamp per se. [PHP:time()](http://php.net/manual/en/function.time.php) returns a Unix time stamp measured in seconds from the epoch (GMT/UTC). If you want to create a human-readable time stamp such as `January 1 1970 00:00:00 GMT` you can use Java's Calendar, but I would recommend using [Joda Time](http://joda-time.sourceforge.net/) instead as it's a lot easier to use and far more robust.
27,740,044
I am installing the cffi package for a cryptography and Jasmin installation. I did some research before posting this question and found the following option, but it does not seem to work: System ------ > > Mac OS X 10.9.5 > > python2.7 > > Error ----- ``` c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. ``` Please guide me on the following issue. Thanks Command ------- > > env DYLD\_LIBRARY\_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi > > LOG --- ``` bhushanvaiude$ env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi Password: Downloading/unpacking cffi Downloading cffi-0.8.6.tar.gz (196kB): 196kB downloaded Running setup.py egg_info for package cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. Downloading/unpacking pycparser (from cffi) Downloading pycparser-2.10.tar.gz (206kB): 206kB downloaded Running setup.py egg_info for package pycparser Installing collected packages: cffi, pycparser Running setup.py install for cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
building '_cffi_backend' extension cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /Users/****project path***/bin/python -c "import setuptools;__file__='/Users/****project path***/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/7w/8z_mn3g120n34bv0w780gnd00000gn/T/pip-e6d6Ay-record/install-record.txt --single-version-externally-managed --install-headers /Users/****project path***/include/site/python2.7: warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/cffi copying cffi/__init__.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/api.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/backend_ctypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/commontypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/cparser.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/ffiplatform.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/gc_weakref.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/lock.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/model.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_cpy.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_gen.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/verifier.py -> build/lib.macosx-10.9-intel-2.7/cffi running build_ext building '_cffi_backend' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/c cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... ```
2015/01/02
[ "https://Stackoverflow.com/questions/27740044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1694106/" ]
In your terminal try and run: ``` xcode-select --install ``` After that try installing the package again. By default, XCode installs itself as the IDE and does not set up the environment for the use by command line tools; for example, the `/usr/include` folder will be missing. Running the above command will install the tools necessary to run compilation from the command line and create the required symbolic links. Since Python packages compile native code parts using the command-line interface of XCode, this step is required to install Python packages that include native components. You only need to do this once per XCode install/upgrade, or if you see a similar error.
Install CLI development toolchain with ``` $ xcode-select --install ``` If you have a broken pkg-config, unlink it with following command as mentioned in comments. ``` $ brew unlink pkg-config ``` Install libffi package ``` $ brew install pkg-config libffi ``` and then install cffi ``` $ pip install cffi ``` Source: [Error installing bcrypt with pip on OS X: can't find ffi.h (libffi is installed)](https://stackoverflow.com/questions/22875270/error-installing-bcrypt-with-pip-on-os-x-cant-find-ffi-h-libffi-is-installed)
27,740,044
I am installing the cffi package for a cryptography and Jasmin installation. I did some research before posting this question and found the following option, but it does not seem to work: System ------ > > Mac OS X 10.9.5 > > python2.7 > > Error ----- ``` c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. ``` Please guide me on the following issue. Thanks Command ------- > > env DYLD\_LIBRARY\_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi > > LOG --- ``` bhushanvaiude$ env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi Password: Downloading/unpacking cffi Downloading cffi-0.8.6.tar.gz (196kB): 196kB downloaded Running setup.py egg_info for package cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. Downloading/unpacking pycparser (from cffi) Downloading pycparser-2.10.tar.gz (206kB): 206kB downloaded Running setup.py egg_info for package pycparser Installing collected packages: cffi, pycparser Running setup.py install for cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
building '_cffi_backend' extension cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /Users/****project path***/bin/python -c "import setuptools;__file__='/Users/****project path***/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/7w/8z_mn3g120n34bv0w780gnd00000gn/T/pip-e6d6Ay-record/install-record.txt --single-version-externally-managed --install-headers /Users/****project path***/include/site/python2.7: warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/cffi copying cffi/__init__.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/api.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/backend_ctypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/commontypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/cparser.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/ffiplatform.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/gc_weakref.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/lock.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/model.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_cpy.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_gen.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/verifier.py -> build/lib.macosx-10.9-intel-2.7/cffi running build_ext building '_cffi_backend' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/c cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... ```
2015/01/02
[ "https://Stackoverflow.com/questions/27740044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1694106/" ]
In your terminal, try running: ``` xcode-select --install ``` After that, try installing the package again. By default, XCode installs itself as the IDE and does not set up the environment for use by command-line tools; for example, the `/usr/include` folder will be missing. Running the above command will install the tools necessary to run compilation from the command line and create the required symbolic links. Since Python packages compile native code parts using the command-line interface of XCode, this step is required to install Python packages that include native components. You only need to do this once per XCode install/upgrade, or if you see a similar error.
Running the below command in the terminal took care of my issue: ``` xcode-select --install ```
27,740,044
I am installing cffi package for cryptography and Jasmin installation. I did some research before posting question, so I found following option but which is seems not working: System ------ > > Mac OSx 10.9.5 > > > python2.7 > > > Error ----- ``` c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. ``` Please guide me on following issue. Thanks Command ------- > > env DYLD\_LIBRARY\_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi > > > LOG --- ``` bhushanvaiude$ env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi Password: Downloading/unpacking cffi Downloading cffi-0.8.6.tar.gz (196kB): 196kB downloaded Running setup.py egg_info for package cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. Downloading/unpacking pycparser (from cffi) Downloading pycparser-2.10.tar.gz (206kB): 206kB downloaded Running setup.py egg_info for package pycparser Installing collected packages: cffi, pycparser Running setup.py install for cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
building '_cffi_backend' extension cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /Users/****project path***/bin/python -c "import setuptools;__file__='/Users/****project path***/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/7w/8z_mn3g120n34bv0w780gnd00000gn/T/pip-e6d6Ay-record/install-record.txt --single-version-externally-managed --install-headers /Users/****project path***/include/site/python2.7: warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/cffi copying cffi/__init__.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/api.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/backend_ctypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/commontypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/cparser.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/ffiplatform.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/gc_weakref.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/lock.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/model.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_cpy.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_gen.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/verifier.py -> build/lib.macosx-10.9-intel-2.7/cffi running build_ext building '_cffi_backend' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/c cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... ```
2015/01/02
[ "https://Stackoverflow.com/questions/27740044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1694106/" ]
Install the CLI development toolchain with ``` $ xcode-select --install ``` If you have a broken pkg-config, unlink it with the following command, as mentioned in the comments. ``` $ brew unlink pkg-config ``` Install the libffi package ``` $ brew install pkg-config libffi ``` and then install cffi ``` $ pip install cffi ``` Source: [Error installing bcrypt with pip on OS X: can't find ffi.h (libffi is installed)](https://stackoverflow.com/questions/22875270/error-installing-bcrypt-with-pip-on-os-x-cant-find-ffi-h-libffi-is-installed)
Running the below command in the terminal took care of my issue: ``` xcode-select --install ```
47,275,478
I'm using Azure Service Bus queues in my application, and my question is: is there a way to check whether the message queue is empty, so that I can shut down my containers and VMs to save cost? If there is a way, please let me know, preferably in Python. Thanks
2017/11/13
[ "https://Stackoverflow.com/questions/47275478", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6918471/" ]
For this, you can use the [`Azure Service Bus Python SDK`](https://pypi.python.org/pypi/azure-servicebus). What you would need to do is get the properties of a queue using the `get_queue` method, which will return an object of type `Queue`. This object exposes the total number of messages through the `message_count` property. Please note that this count will include counts for active messages, dead-letter queue messages and more. Here's some sample code to do so: ``` from azure.servicebus import ServiceBusService, Message, Queue bus_service = ServiceBusService( service_namespace='namespacename', shared_access_key_name='RootManageSharedAccessKey', shared_access_key_value='accesskey') queue = bus_service.get_queue('taskqueue1') print queue.message_count ``` Source code for the Azure Service Bus SDK for Python is available on GitHub: <https://github.com/Azure/azure-sdk-for-python/tree/master/azure-servicebus/azure/servicebus>.
You could check the length of the messages returned from [peek\_messages](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-servicebus/latest/azure.servicebus.html?highlight=peek_messages#azure.servicebus.ServiceBusReceiver.peek_messages) method on the class `azure.servicebus.ServiceBusReceiver` ``` with servicebus_receiver: messages = servicebus_receiver.peek_messages(max_message_count=1) if len(messages) == 0: print('Queue empty') ```
40,130,468
I am using Alamofire for the HTTP networking in my app. But my API, which is written in Python, has a header key for the GET request; only if the key is present does it give a response. Now I want to use that header key in my iOS app with Alamofire, but I am not getting how to implement it. Below is my normal code without any key implementation: ``` Alamofire.request(.GET,"http://name/user_data/\(userName)@someURL.com").responseJSON { response in // 1 print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) } ``` I have a key "appkey" with the value "test" in my API. If anyone can help. Thank you!
2016/10/19
[ "https://Stackoverflow.com/questions/40130468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5088930/" ]
This should work ``` let headers = [ "appkey": "test" ] Alamofire.request(.GET, "http://name/user_data/\(userName)@someURL.com", parameters: nil, encoding: .URL, headers: headers).responseJSON { response in //handle response } ```
``` let headers: HTTPHeaders = [ "Accept": "application/json", "appkey": "test" ] Alamofire.request("http://name/user_data/\(userName)@someURL.com", headers: headers).responseJSON { response in print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) } ```
49,997,303
Write a program using Python 3.x. Write the script that will read "input.txt" and print to stdout the first 5 lines of the file input.txt that consist of a single odd number. The file may contain lines having numeric and non-numeric data; your script should ignore all the lines that contain anything except a single odd integer. Assumption: "input.txt" is present in the same folder where the script resides. My code is as follows ``` f = open('input.txt', mode='r') t = f.readlines() print(t) lst1 = [] for i in t: try: if int(i): lst1.append(i) except ValueError: print ("{}is not a valid number,ignoring the same".format(i)) print ("This is a list with numeric values", lst1) for i in lst1: if int(i) % 2 == 1: print("Odd numbers are: ", i) Input.txt : 123a 13aa a1 1s2 2 3 3 455 56 6 7 8 ``` output : ``` yogi@fdfd:~/Python-Practice$python3 test.py ['123a\n', '13aa\n', 'a1\n', '1s2\n', '2\n', '3\n', '3\n', '455\n', '56\n', '6\n', '7\n', '8\n'] 123a is not a valid number,ignoring the same 13aa is not a valid number,ignoring the same a1 is not a valid number,ignoring the same 1s2 is not a valid number,ignoring the same This is a list with numeric values ['2\n', '3\n', '3\n', '455\n', '56\n', '6\n', '7\n', '8\n'] 3 3 455 7 ``` Note: My code is working fine and prints the odd numbers; however, I am struggling to check whether the list elements are a single integer or not. Any pointers will be helpful
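The "single odd integer" check the question asks about can be sketched with `str.isdigit` after stripping whitespace. This is a hedged sketch, not the poster's code — `first_five_odd` is a hypothetical helper name, and a leading minus sign is allowed so negative odd integers also pass:

```python
def first_five_odd(lines):
    """Return the first 5 values from lines that are a single odd integer."""
    odds = []
    for raw in lines:
        token = raw.strip()
        # A "single integer" line is all digits, optionally after a minus sign.
        if token.lstrip('-').isdigit() and int(token) % 2 == 1:
            odds.append(int(token))
        if len(odds) == 5:
            break
    return odds

# Using the sample data from the question:
print(first_five_odd(['123a\n', '13aa\n', '2\n', '3\n', '3\n',
                      '455\n', '56\n', '6\n', '7\n', '8\n']))
```

Note that in Python, `n % 2 == 1` is true for negative odd numbers too (`-3 % 2` is `1`), so the same test covers both signs.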
2018/04/24
[ "https://Stackoverflow.com/questions/49997303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8064377/" ]
**Put your admin link like this:** as `Admin` is not your default `controller` and you have not set a route for admin, put your `admin` link like this ``` <li class="nav-item"> <a class="nav-link" href="<?php echo base_url('admin/index'); ?>">Admin</a> </li> ``` but if you want to do it like this: ``` <li class="nav-item"> <a class="nav-link" href="<?php echo base_url('admin'); ?>">Admin</a> </li> ``` then set `routes.php` like this: ``` $route['admin'] = 'admin/index'; ``` And the controller: ``` <?php class Admin extends CI_Controller{ public function index(){ echo "Reach here"; die; if (!$this->session->userdata('logged_in')) { redirect('users/login'); } } } ?> ```
Please call the parent constructor first; after that you can do it your way. ``` public function __construct() { parent :: __construct(); $this->load->model(array('restaurants_m', 'categories_m', 'cities_m', 'customers_m', 'states_m', 'orders_m', 'delivery_boys_m')); $this->load->library(array('email')); $this->load->helper('url'); } ```
55,168,955
On Mac OS 10.14 (Mojave) I used: ``` pip install -U pytest ``` to install pytest. I got a permission denied error trying to install the packages to `/Users/nagen/Library/Python/2.7` I tried ``` sudo pip install -U pytest ``` This time it installed successfully. But, despite adding the full path, the terminal doesn't recognize pytest. If I try to run `/Users/nagen/Library/Python/2.7/bin/pytest` I get a permission error. In addition, `sudo /Users/nagen/Library/Python/2.7/bin/pytest` works, but it prompts for a password, so I cannot use it in automation scripts. Tried installing python3 and then running pip3 install... same issue.
2019/03/14
[ "https://Stackoverflow.com/questions/55168955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11204687/" ]
I think the best option might be to use a Python virtual env. <https://packaging.python.org/guides/installing-using-pip-and-virtualenv/> is a good starting point ``` > virtualenv env > source env/bin/activate > pip install pytest > pytest ``` This will avoid pathing and permissions issues and keep your environment clean from any other changes you make within that venv.
I would **strongly** recommend using [homebrew](https://brew.sh/). This is the **best** dev tool there is for mac users and I never install things without it. To install it run the following in terminal: ``` /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" ``` Now to install python3 you simply: ``` brew install python3 ``` brew will make sure your PATH is configured correctly and you shouldn't have any issues running `pip3 install x`. Also, if you decide to reinstall python using homebrew, you will need to follow [this](https://osxuninstaller.com/uninstall-guides/properly-uninstall-python-mac/) guide to uninstall python first. This will be the most tedious part of the process. Make sure that you **don't** uninstall python2 packages! Your mac OS uses them. If you don't have python3 installed at all you can skip the uninstall step and go straight to `brew install python3` When I first started using python, I had the same problem you are having because I tried manually installing it from python.org, then I came across homebrew and never had problems since.
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I found this code to work; it uses the standard library: ``` from distutils.dir_util import copy_tree # copy subdirectory example from_directory = "/a/b/c" to_directory = "/x/y/z" copy_tree(from_directory, to_directory) ``` Reference: * Python 2: <https://docs.python.org/2/distutils/apiref.html#distutils.dir_util.copy_tree> * Python 3: <https://docs.python.org/3/distutils/apiref.html#distutils.dir_util.copy_tree>
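Worth noting alongside the `distutils` approach: on Python 3.8+ the standard `shutil.copytree` accepts `dirs_exist_ok=True`, which avoids the "Directory exists" error directly. A sketch, using temporary directories as stand-ins for the question's `/a/b/c` and `/x/y/z`:

```python
import os
import shutil
import tempfile

# Stand-ins for the question's /a/b/c (source) and /x/y/z (existing target).
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()          # destination directory already exists
with open(os.path.join(src, "demo.txt"), "w") as f:
    f.write("hello")

# Python 3.8+: dirs_exist_ok=True lets copytree copy into an existing
# directory instead of raising FileExistsError.
shutil.copytree(src, dst, dirs_exist_ok=True)
```

On older Pythons this keyword does not exist, which is why the `copy_tree`-style workarounds above were needed.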
You can also use glob2 to recursively collect all paths (using the \*\* subfolders wildcard) and then use shutil.copyfile on the collected paths. glob2 link: <https://code.activestate.com/pypm/glob2/>
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I found this code to work; it uses the standard library: ``` from distutils.dir_util import copy_tree # copy subdirectory example from_directory = "/a/b/c" to_directory = "/x/y/z" copy_tree(from_directory, to_directory) ``` Reference: * Python 2: <https://docs.python.org/2/distutils/apiref.html#distutils.dir_util.copy_tree> * Python 3: <https://docs.python.org/3/distutils/apiref.html#distutils.dir_util.copy_tree>
The standard library functions didn't cover my case, so I've written one that works correctly: ``` import os import shutil def copydirectorykut(src, dst): os.chdir(dst) items = os.listdir(src) nom = src + '.txt' fitx = open(nom, 'w') for item in items: fitx.write("%s\n" % item) fitx.close() f = open(nom, 'r') for line in f.readlines(): if "." in line: shutil.copy(src + '/' + line[:-1], dst + '/' + line[:-1]) else: if not os.path.exists(dst + '/' + line[:-1]): os.makedirs(dst + '/' + line[:-1]) copydirectorykut(src + '/' + line[:-1], dst + '/' + line[:-1]) f.close() os.remove(nom) os.chdir('..') ```
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I found this code to work; it uses the standard library: ``` from distutils.dir_util import copy_tree # copy subdirectory example from_directory = "/a/b/c" to_directory = "/x/y/z" copy_tree(from_directory, to_directory) ``` Reference: * Python 2: <https://docs.python.org/2/distutils/apiref.html#distutils.dir_util.copy_tree> * Python 3: <https://docs.python.org/3/distutils/apiref.html#distutils.dir_util.copy_tree>
``` from subprocess import call def cp_dir(source, target): call(['cp', '-a', source, target]) # Linux cp_dir('/a/b/c/', '/x/y/z/') ``` It works for me. Basically, it executes the shell command **cp**.
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
You can also use glob2 to recursively collect all paths (using the \*\* subfolders wildcard) and then use shutil.copyfile on the collected paths. glob2 link: <https://code.activestate.com/pypm/glob2/>
The standard library functions didn't cover my case, so I've written one that works correctly: ``` import os import shutil def copydirectorykut(src, dst): os.chdir(dst) items = os.listdir(src) nom = src + '.txt' fitx = open(nom, 'w') for item in items: fitx.write("%s\n" % item) fitx.close() f = open(nom, 'r') for line in f.readlines(): if "." in line: shutil.copy(src + '/' + line[:-1], dst + '/' + line[:-1]) else: if not os.path.exists(dst + '/' + line[:-1]): os.makedirs(dst + '/' + line[:-1]) copydirectorykut(src + '/' + line[:-1], dst + '/' + line[:-1]) f.close() os.remove(nom) os.chdir('..') ```
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
You can also use glob2 to recursively collect all paths (using the \*\* subfolders wildcard) and then use shutil.copyfile on the collected paths. glob2 link: <https://code.activestate.com/pypm/glob2/>
``` from subprocess import call def cp_dir(source, target): call(['cp', '-a', source, target]) # Linux cp_dir('/a/b/c/', '/x/y/z/') ``` It works for me. Basically, it executes the shell command **cp**.
15,034,151
I have a directory /a/b/c that has files and subdirectories. I need to copy /a/b/c/\* into the /x/y/z directory. What Python methods can I use? I tried `shutil.copytree("a/b/c", "/x/y/z")`, but Python tries to create /x/y/z and raises a `"Directory exists"` error.
2013/02/22
[ "https://Stackoverflow.com/questions/15034151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
``` from subprocess import call def cp_dir(source, target): call(['cp', '-a', source, target]) # Linux cp_dir('/a/b/c/', '/x/y/z/') ``` It works for me. Basically, it executes the shell command **cp**.
The standard library functions didn't cover my case, so I've written one that works correctly: ``` import os import shutil def copydirectorykut(src, dst): os.chdir(dst) items = os.listdir(src) nom = src + '.txt' fitx = open(nom, 'w') for item in items: fitx.write("%s\n" % item) fitx.close() f = open(nom, 'r') for line in f.readlines(): if "." in line: shutil.copy(src + '/' + line[:-1], dst + '/' + line[:-1]) else: if not os.path.exists(dst + '/' + line[:-1]): os.makedirs(dst + '/' + line[:-1]) copydirectorykut(src + '/' + line[:-1], dst + '/' + line[:-1]) f.close() os.remove(nom) os.chdir('..') ```
59,296,151
I'm writing a python script that uses a model to predict a large number of values by groupID, where **efficiency is important** (N on the order of 10^8). I initialize a results matrix and am trying to sequentially update a running sum of values in the results matrix. Trying to be efficient, in my current method I use groupID as row numbers of the results matrix to avoid merging (merging is expensive, as far as I understand). My attempt: ``` import numpy as np # Initialize results matrix results = np.zeros((5,3)) # dimension: number of groups x timestep # Now I loop over batches, with batch size 4. Here's an example of one iteration: batch_groupIDs = [3,1,0,1] # Note that multiple values can be generated for same groupID batch_results = np.ones((4,3)) # My attempt at appending the results (low dimension example): results[batch_groupIDs] += batch_results print(results) ``` This outputs: ``` [[1. 1. 1.] [1. 1. 1.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` My desired output is the following (since group 1 shows up twice, and should be appended twice): ``` [[1. 1. 1.] [2. 2. 2.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` The actual dimensions of my problem are approximately 100 timesteps x a batch size of 1 million+ and 2000 groupIDs
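One note on the attempt above: NumPy's fancy-indexed `+=` buffers duplicate indices, so group 1 is only incremented once. The unbuffered accumulation the question wants is exactly what `np.ufunc.at` (here `np.add.at`) provides — a sketch with the question's toy sizes:

```python
import numpy as np

results = np.zeros((5, 3))
batch_groupIDs = [3, 1, 0, 1]      # group 1 occurs twice in the batch
batch_results = np.ones((4, 3))

# Fancy-indexed += is buffered, so duplicate indices update only once;
# np.add.at applies every update, giving the desired running sum.
np.add.at(results, batch_groupIDs, batch_results)
print(results)
```

For the stated scale (order 10^8 updates), `np.add.at` can be slow; summing per group with `np.bincount` over flattened indices is a common faster alternative worth benchmarking.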
2019/12/12
[ "https://Stackoverflow.com/questions/59296151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11918892/" ]
The problem is that in your language dates use the slash separator, which makes Blazor think you're trying to access a different route. Whenever sending dates as a URL parameter they need to be in invariant culture and use dashes. ```cs NavigationManager.NavigateTo("routeTest/"+numberToSend+"/"+dateToSend.ToString("yyyy-MM-dd HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture)); ``` For reference, see the warning in the [official documentation](https://learn.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1#route-constraints)
As you've identified, the DateTime in the URL is affecting the routing due to the slashes. Send the `DateTime` in ISO8601 format `yyyy-MM-ddTHH:mm:ss`. You could use: ``` dateToSend.ToString("s", System.Globalization.CultureInfo.InvariantCulture) ``` where the format specifier `s` is known as the [Sortable date/time pattern](https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#the-sortable-s-format-specifier) Or ``` dateToSend.ToString("yyyy-MM-dd HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture) ``` Use `InvariantCulture` since the [Blazor routing](https://learn.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1) page states: > > Route constraints that verify the URL and are converted to a CLR type > (such as `int` or `DateTime`) always use the invariant culture. These > constraints assume that the URL is non-localizable. > > >
59,296,151
I'm writing a python script that uses a model to predict a large number of values by groupID, where **efficiency is important** (N on the order of 10^8). I initialize a results matrix and am trying to sequentially update a running sum of values in the results matrix. Trying to be efficient, in my current method I use groupID as row numbers of the results matrix to avoid merging (merging is expensive, as far as I understand). My attempt: ``` import numpy as np # Initialize results matrix results = np.zeros((5,3)) # dimension: number of groups x timestep # Now I loop over batches, with batch size 4. Here's an example of one iteration: batch_groupIDs = [3,1,0,1] # Note that multiple values can be generated for same groupID batch_results = np.ones((4,3)) # My attempt at appending the results (low dimension example): results[batch_groupIDs] += batch_results print(results) ``` This outputs: ``` [[1. 1. 1.] [1. 1. 1.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` My desired output is the following (since group 1 shows up twice, and should be appended twice): ``` [[1. 1. 1.] [2. 2. 2.] [0. 0. 0.] [1. 1. 1.] [0. 0. 0.]] ``` The actual dimensions of my problem are approximately 100 timesteps x a batch size of 1 million+ and 2000 groupIDs
2019/12/12
[ "https://Stackoverflow.com/questions/59296151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11918892/" ]
The problem is that in your language dates use the slash separator, which makes Blazor think you're trying to access a different route. Whenever sending dates as a URL parameter they need to be in invariant culture and use dashes. ```cs NavigationManager.NavigateTo("routeTest/"+numberToSend+"/"+dateToSend.ToString("yyyy-MM-dd HH:mm:ss", System.Globalization.CultureInfo.InvariantCulture)); ``` For reference, see the warning in the [official documentation](https://learn.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1#route-constraints)
This can solve your issue: ``` @code{ int numberToSend = 123; string dateToSend = DateTime.Now.ToString("yyyy-MM-dd"); private void Naviagte() { NavigationManager.NavigateTo("routeTest/" + numberToSend + "/" + dateToSend); } } ```
62,061,258
I am trying to create a view that handles the email confirmation but i keep on getting an error, your help is highly appreciated My views.py ``` import re from django.contrib.auth import login, logout, authenticate from django.shortcuts import render, HttpResponseRedirect, Http404 # Create your views here. from .forms import LoginForm, RegistrationForm from . models import EmailConfirmation SHA1_RE = re.compile('^[a-f0-9]{40}s') def activation_view(request, activation_key): if SHA1_RE.search(activation_key): print('activation is real') try: instance = EmailConfirmation.objects.get(activation_key=activation_key) except EmailConfirmation.DoesNotExist: instance = None raise Http404 if instance is not None and not instance.confirmed: print('Confirmation complete') instance.confirmed = True instance.save() elif instance is not None and instance.confirmed: print('User already confirmed') else: pass context = {} return render(request, "accounts/activation_complete.html", context) else: pass ``` urls.py ``` from django.urls import path from .views import( logout_view, login_view, register_view, activation_view, ) urlpatterns = [ path('accounts/logout/', logout_view, name='auth_logout'), path('accounts/login/', login_view, name='auth_login'), path('accounts/register/', register_view, name='auth_register'), path('accounts/activate/<activation_key>/', activation_view, name='activation_view'), ] ``` models.py ``` class EmailConfirmation(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) activation_key = models.CharField(max_length=200) confirmed = models.BooleanField(default=False) def __str__(self): return str(self.confirmed) def activate_user_email(self): #send email here activation_url = 'http://localhost:8000/accounts/activate%s' %(self.activation_key) context = { 'activation_key': self.activation_key, 'activation_url': activation_url, 'user': self.user.username } message = 
render_to_string('accounts/activation/actiavtion_message.txt', context) subject = 'Email actiavtion key' print(message) self.email_user(subject, message, settings.DEFAULT_FROM_EMAIL) def email_user(self, subject, message, from_email=None, **kwargs): send_mail(subject, message, from_email, [self.user.email], kwargs) ``` ERROR ``` ValueError at /accounts/activate/d99e89141eaf2fa786a3b5215ab4aa986e411bcc/ The view accounts.views.activation_view didn't return an HttpResponse object. It returned None instead. Request Method: GET Request URL: http://127.0.0.1:8000/accounts/activate/d99e89141eaf2fa786a3b5215ab4aa986e411bcc/ Django Version: 3.0.6 Exception Type: ValueError Exception Value: The view accounts.views.activation_view didn't return an HttpResponse object. It returned None instead. Exception Location: C:\Users\Martin\Desktop\My Projects\Savanna\venv\lib\site-packages\django\core\handlers\base.py in _get_response, line 124 Python Executable: C:\Users\Martin\Desktop\My Projects\Savanna\venv\Scripts\python.exe Python Version: 3.8.2 Python Path: ['C:\\Users\\Martin\\Desktop\\My Projects\\Savanna', 'C:\\Users\\Martin\\Desktop\\My ' 'Projects\\Savanna\\venv\\Scripts\\python38.zip', 'c:\\program files\\python38\\DLLs', 'c:\\program files\\python38\\lib', 'c:\\program files\\python38', 'C:\\Users\\Martin\\Desktop\\My Projects\\Savanna\\venv', 'C:\\Users\\Martin\\Desktop\\My Projects\\Savanna\\venv\\lib\\site-packages'] Server time: Thu, 28 May 2020 09:05:51 +0000 ```
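A side observation this record supports: the `ValueError` arises because the outer `else` branch only `pass`es, and a Django view must return an `HttpResponse` on every code path. Moreover, `SHA1_RE` requires a literal `s` after the 40 hex characters (likely a typo for `$`), so a real 40-character key never matches and that branch is always taken. A small standalone sketch of the regex issue (no Django required):

```python
import re

# The question's pattern: '{40}s' demands a literal "s" as the 41st
# character, so a 40-character SHA1 hex key can never match it.
BROKEN_RE = re.compile('^[a-f0-9]{40}s')
FIXED_RE = re.compile('^[a-f0-9]{40}$')

key = 'd99e89141eaf2fa786a3b5215ab4aa986e411bcc'  # the key from the error URL
print(bool(BROKEN_RE.search(key)), bool(FIXED_RE.search(key)))
```

With the anchor corrected to `$`, the view would take the intended branch; the remaining fix is to make every branch (including the outer `else`) return a response.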
2020/05/28
[ "https://Stackoverflow.com/questions/62061258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13633361/" ]
Use [`justify`](https://stackoverflow.com/a/44559180/2901002) with `DataFrame` constructor: ``` arr = justify(data_df.to_numpy(), invalid_val=np.nan,axis=0) df = pd.DataFrame(arr, columns=data_df.columns, index=data_df.index) print(df) 8 16 20 24 0 4.0 0.0 1.0 6.0 0 7.0 2.0 5.0 8.0 0 NaN 3.0 NaN NaN 0 NaN 9.0 NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN 0 NaN NaN NaN NaN ```
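The `justify` helper referenced above is only linked, not defined. For reference, here is a self-contained sketch of a column-wise justify in the same spirit (an adaptation for illustration, not necessarily the linked code verbatim):

```python
import numpy as np

def justify(a, invalid_val=np.nan, axis=0, side='up'):
    """Push the valid (non-NaN) entries of `a` to one end along `axis`,
    padding the remainder with `invalid_val`."""
    mask = ~np.isnan(a)
    # Sorting booleans moves True (valid) entries to the high end of the axis;
    # flip when the valid entries should sit at the start instead.
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up' and axis == 0) or (side == 'left' and axis == 1):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val, dtype=float)
    if axis == 0:
        # Work on transposes so boolean indexing walks column by column
        out.T[justified_mask.T] = a.T[mask.T]
    else:
        out[justified_mask] = a[mask]
    return out

arr = np.array([[4.0, np.nan],
                [np.nan, 0.0],
                [7.0, 2.0]])
res = justify(arr, invalid_val=np.nan, axis=0)
print(res)
```

With `axis=0` and `side='up'` the non-NaN values of each column are packed to the top, which is what the DataFrame construction above relies on.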
This is not that pretty, but with the help of `numpy` you can fairly easily get a numpy array with your desired result. ``` import numpy as np def shifted_column(values): none_nan_values = values[~np.isnan(values)] nan_row = np.zeros(values.shape) nan_row[:] = np.nan nan_row[:none_nan_values.size] = none_nan_values return nan_row np.apply_along_axis(shifted_column, 0, data_df.values) ``` You could convert it back to pandas as you wish
3,387,663
Hi all, I am trying to use SWIG to export C++ code to Python. The C sample I read on the web site does work, but I have a problem with C++ code. Here are the lines I call ``` swig -c++ -python SWIG_TEST.i g++ -c -fPIC SWIG_TEST.cpp SWIG_TEST_wrap.cxx -I/usr/include/python2.4/ gcc --shared SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so -lstdc++ ``` When I am finished I receive the following error message ``` ImportError: ./_SWIG_TEST.so: undefined symbol: Py_InitModule4 ``` Do you know what it is?
2010/08/02
[ "https://Stackoverflow.com/questions/3387663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245416/" ]
It looks like you aren't linking to the Python runtime library. Something like adding `-lpython24` to your gcc line. (I don't have a Linux system handy at the moment).
you might try building the shared library using `gcc` ``` g++ -shared SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so ``` rather than using `ld` directly.
3,387,663
Hi all, I am trying to use SWIG to export C++ code to Python. The C sample I read on the web site does work, but I have a problem with C++ code. Here are the lines I call ``` swig -c++ -python SWIG_TEST.i g++ -c -fPIC SWIG_TEST.cpp SWIG_TEST_wrap.cxx -I/usr/include/python2.4/ gcc --shared SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so -lstdc++ ``` When I am finished I receive the following error message ``` ImportError: ./_SWIG_TEST.so: undefined symbol: Py_InitModule4 ``` Do you know what it is?
2010/08/02
[ "https://Stackoverflow.com/questions/3387663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245416/" ]
It looks like you aren't linking to the Python runtime library. Something like adding `-lpython24` to your gcc line. (I don't have a Linux system handy at the moment).
As Mark said, it's a problem linking to the python library. A nice way to get hints as to just which flags you need to successfully link can be gotten by running `python-config --ldflags`. In fact, a particularly painless way of compiling your test is the following: ``` swig -c++ -python SWIG_TEST.i g++ -c `python-config --cflags` -fPIC SWIG_TEST.cpp SWIG_TEST_wrap.cxx gcc --shared `python-config --ldflags` SWIG_TEST.o SWIG_TEST_wrap.o -o _SWIG_TEST.so -lstdc++ ``` Note that `python-config` isn't perfect; it sometimes gives you extra things, or conflicting things. But this should certainly help a lot.
54,856,829
In TensorFlow <2 the training function for a DDPG actor could be concisely implemented using `tf.keras.backend.function` as follows: ``` critic_output = self.critic([self.actor(state_input), state_input]) actor_updates = self.optimizer_actor.get_updates(params=self.actor.trainable_weights, loss=-tf.keras.backend.mean(critic_output)) self.actor_train_on_batch = tf.keras.backend.function(inputs=[state_input], outputs=[self.actor(state_input)], updates=actor_updates) ``` Then during each training step calling `self.actor_train_on_batch([np.array(state_batch)])` would compute the gradients and perform the updates. However running that on TF 2.0 gives the following error due to eager mode being on by default: ``` actor_updates = self.optimizer_actor.get_updates(params=self.actor.trainable_weights, loss=-tf.keras.backend.mean(critic_output)) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 448, in get_updates grads = self.get_gradients(loss, params) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 361, in get_gradients grads = gradients.gradients(loss, params) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 158, in gradients unconnected_gradients) File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 547, in _GradientsHelper raise RuntimeError("tf.gradients is not supported when eager execution " RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead. ``` As expected, disabling eager execution via `tf.compat.v1.disable_eager_execution()` fixes the issue. However I don't want to disable eager execution for everything - I would like to use purely the 2.0 API. The exception suggests using `tf.GradientTape` instead of `tf.gradients` but that's an internal call. 
**Question:** What is the appropriate way of computing `-tf.keras.backend.mean(critic_output)` in graph mode (in TensorFlow 2.0)?
2019/02/24
[ "https://Stackoverflow.com/questions/54856829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3552418/" ]
I found two ways to solve it: ### Using data attribute Get the max number of pages in the template, assign it to a data attribute, and access it in the scripts. Then check current page against total page numbers, and set disabled states to the load more button when it reaches the last page. PHP/HTML: ``` <ul id="ajax-content"></ul> <button type="button" id="ajax-button" data-endpoint="<?php echo get_rest_url(null, 'wp/v2/posts'); ?>" data-ppp="<?php echo get_option('posts_per_page'); ?>" data-pages="<?php echo $wp_query->max_num_pages; ?>">Show more</button> ``` JavaScripts: ``` (function($) { var loadMoreButton = $('#ajax-button'); var loadMoreContainer = $('#ajax-content'); if (loadMoreButton) { var endpoint = loadMoreButton.data('endpoint'); var ppp = loadMoreButton.data('ppp'); var pages = loadMoreButton.data('pages'); var loadPosts = function(page) { var theData, errorStatus, errorMessage; $.ajax({ url: endpoint, dataType: 'json', data: { per_page: ppp, page: page, type: 'post', orderby: 'date' }, beforeSend: function() { loadMoreButton.attr('disabled', true); }, success: function(data) { theData = []; for (i = 0; i < data.length; i++) { theData[i] = {}; theData[i].id = data[i].id; theData[i].link = data[i].link; theData[i].title = data[i].title.rendered; theData[i].content = data[i].content.rendered; } $.each(theData, function(i) { loadMoreContainer.append('<li><a href="' + theData[i].link + '">' + theData[i].title + '</a></li>'); }); loadMoreButton.attr('disabled', false); if (getPage == pages) { loadMoreButton.attr('disabled', true); } getPage++; }, error: function(jqXHR) { errorStatus = jqXHR.status + ' ' + jqXHR.statusText + '\n'; errorMessage = jqXHR.responseJSON.message; console.log(errorStatus + errorMessage); } }); }; var getPage = 2; loadMoreButton.on('click', function() { loadPosts(getPage); }); } })(jQuery); ``` ### Using jQuery `complete` event Get the total pages `x-wp-totalpages` from the HTTP response headers.
Then change the button states when reaches last page. PHP/HTML: ``` <ul id="ajax-content"></ul> <button type="button" id="ajax-button" data-endpoint="<?php echo get_rest_url(null, 'wp/v2/posts'); ?>" data-ppp="<?php echo get_option('posts_per_page'); ?>">Show more</button> ``` JavaScripts: ``` (function($) { var loadMoreButton = $('#ajax-button'); var loadMoreContainer = $('#ajax-content'); if (loadMoreButton) { var endpoint = loadMoreButton.data('endpoint'); var ppp = loadMoreButton.data('ppp'); var pager = 0; var loadPosts = function(page) { var theData, errorStatus, errorMessage; $.ajax({ url: endpoint, dataType: 'json', data: { per_page: ppp, page: page, type: 'post', orderby: 'date' }, beforeSend: function() { loadMoreButton.attr('disabled', true); }, success: function(data) { theData = []; for (i = 0; i < data.length; i++) { theData[i] = {}; theData[i].id = data[i].id; theData[i].link = data[i].link; theData[i].title = data[i].title.rendered; theData[i].content = data[i].content.rendered; } $.each(theData, function(i) { loadMoreContainer.append('<li><a href="' + theData[i].link + '">' + theData[i].title + '</a></li>'); }); loadMoreButton.attr('disabled', false); }, error: function(jqXHR) { errorStatus = jqXHR.status + ' ' + jqXHR.statusText + '\n'; errorMessage = jqXHR.responseJSON.message; console.log(errorStatus + errorMessage); }, complete: function(jqXHR) { if (pager == 0) { pager = jqXHR.getResponseHeader('x-wp-totalpages'); } pager--; if (pager == 1) { loadMoreButton.attr('disabled', true); } } }); }; var getPage = 2; loadMoreButton.on('click', function() { loadPosts(getPage); getPage++; }); } })(jQuery); ```
The problem appears to be an invalid query to that endpoint so the `success: function()` is never being run in this circumstance. ### Add to All API Errors You could add the same functionality for all errors like this... ``` error: function(jqXHR, textStatus, errorThrown) { loadMoreButton.remove(); .... } ``` Though that may not be the desired way of handling of **all** errors. ### Test for Existing Error Message Another option could be to remove the button if you receive an error with that exact message... ``` error: function(jqXHR, textStatus, errorThrown) { if (jqXHR.statusText === 'The page number requested is larger than the number of pages available.') { loadMoreButton.remove(); } .... } ``` but this would be susceptible to breaking with any changes to that error message. ### Return Custom Error Code from API The recommended way to handle it would be to return specific error code (along with HTTP status code 400) to specify the exact situation in a more reliable format... ``` error: function(jqXHR, textStatus, errorThrown) { if (jqXHR.statusCode === '215') { loadMoreButton.remove(); } .... } ``` Here's an example on how to configure error handling in an API: [Best Practices for API Error Handling](https://nordicapis.com/best-practices-api-error-handling#gooderrorexamples) ### Return 200 HTTP Status Code The last option would be to change the way your API endpoint handles this type of "error"/situation, by returning a `200` level HTTP status code instead, which would invoke the `success:` instead of the `error:` callback instead.
43,819,092
I have installed ngram using pip install ngram. While I am running the following code ``` from ngram import NGram c=NGram.compare('cereal_crop','cereals') print c ``` I get the error `ImportError: cannot import name NGram` Screenshot for it: [![screenshot of console window](https://i.stack.imgur.com/chq5c.png)](https://i.stack.imgur.com/chq5c.png) P.S. A similar question has been asked previously [using ngram in python](https://stackoverflow.com/questions/8390585/using-ngram-in-python/43818866?noredirect=1#comment74676421_43818866), but that time the person who was getting error did not install ngram, so installing ngram worked. In my case I am getting the error in spite of ngram being installed.
2017/05/06
[ "https://Stackoverflow.com/questions/43819092", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4451870/" ]
Your Python script is named `ngram.py`, so it defines a module named `ngram`. When Python runs `from ngram import NGram`, Python ends up looking in your script for something named `NGram`, not in the `ngram` module you have installed. Try changing the name of your script to something else, for example `ngram_test.py`.
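The shadowing mechanism is easy to reproduce with any module name. A hedged, self-contained demonstration using a throwaway directory (the file contents and marker name are made up for the illustration; the stdlib `json` module stands in for the installed `ngram` package):

```python
import os
import sys
import tempfile

# A directory containing a file that shadows the stdlib `json` module,
# just like a script named ngram.py shadows the installed ngram package.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "json.py"), "w") as f:
    f.write("SHADOW = True\n")

sys.modules.pop("json", None)   # forget any cached import of the real module
sys.path.insert(0, tmpdir)      # same effect as running a script from that directory

import json                     # finds tmpdir/json.py, not the stdlib module

print(hasattr(json, "loads"))   # False: the real json module was shadowed
print(getattr(json, "SHADOW", False))

# Clean up so later imports in the session get the real module again
sys.path.remove(tmpdir)
sys.modules.pop("json", None)
```

The same thing happens with `ngram.py`: Python resolves `from ngram import NGram` against the script in the current directory before ever reaching the installed package.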
try like this: ``` import ngram c = ngram.NGram.compare('cereal_crop','cereals') print c ```
45,006,190
A ConnectionRefusedError is shown when registering a user; the basic information is added to the database, but the password field was blank while the other database fields were submitted. Please find the following error and our class code. **Class** class ProfessionalRegistrationSerializer(serializers.HyperlinkedModelSerializer): ``` password = serializers.CharField(max_length=20, write_only=True) email = serializers.EmailField() first_name = serializers.CharField(max_length=30) last_name = serializers.CharField(max_length=30) class Meta: model = User fields = ('url', 'id', 'first_name', 'last_name', 'email', 'password') def validate_email(self, value): from validate_email_address import validate_email if User.all_objects.filter(email=value.lower()).exists(): raise serializers.ValidationError('User with this email already exists.') return value.lower() def create(self, validated_data): password = validated_data.pop('password') email = validated_data.pop('email') user = User.objects.create( username=email.lower(), email=email.lower(), role_id=1, **validated_data) user.set_password(password) user.save() return user ``` **Error** ConnectionRefusedError at /api/v1/register/professional/ [Errno 111] Connection refused Request Method: POST Request URL: <http://127.0.0.1:8000/api/v1/register/professional/> Django Version: 1.8.14 Exception Type: ConnectionRefusedError Exception Value: [Errno 111] Connection refused Exception Location: /usr/lib/python3.5/socket.py in create_connection, line 702 Python Executable: /home/project_backend/env/bin/python Python Version: 3.5.2 Python Path: ['/home/project_backend', '/home/project_backend/env/lib/python35.zip', '/home/project_backend/env/lib/python3.5', '/home/project_backend/env/lib/python3.5/plat-x86_64-linux-gnu', '/home/project_backend/env/lib/python3.5/lib-dynload', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/home/project_backend/env/lib/python3.5/site-packages',
'/home/project_backend/env/lib/python3.5/site-packages/setuptools-36.0.1-py3.5.egg'] **Traceback** ``` File "/home/project_backend/env/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response ``` 132. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/project_backend/env/lib/python3.5/site-packages/django/views/decorators/csrf.py" in wrapped_view 58. return view_func(*args, **kwargs) File "/home/project_backend/env/lib/python3.5/site-packages/django/views/generic/base.py" in view 71. return self.dispatch(request, *args, **kwargs) File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch 464. response = self.handle_exception(exc) File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/views.py" in dispatch 461. response = handler(request, *args, **kwargs) File "/home/project_backend/filmup/apps/registrations/views.py" in post 53. user = serializer.save(work_status=user_type) File "/home/project_backend/env/lib/python3.5/site-packages/rest_framework/serializers.py" in save 175. self.instance = self.create(validated_data) File "/home/project_backend/project/apps/registrations/serializers.py" in create 157. **validated_data) File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/manager.py" in manager_method 127. return getattr(self.get_queryset(), name)(*args, **kwargs) File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/query.py" in create 348. obj.save(force_insert=True, using=self.db) File "/home/project_backend/project/libs/accounts/models.py" in save 217. super().save(*args, **kwargs) File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/base.py" in save 734. force_update=force_update, update_fields=update_fields) File "/home/project_backend/env/lib/python3.5/site-packages/django/db/models/base.py" in save_base 771.
update_fields=update_fields, raw=raw, using=using) File "/home/project_backend/env/lib/python3.5/site-packages/django/dispatch/dispatcher.py" in send 189. response = receiver(signal=self, sender=sender, **named) File "/home/project_backend/filmup/libs/accounts/signals.py" in create_user_setting 19. create_ejabberd_user(instance) File "/home/project_backend/project/libs/accounts/signals.py" in create_ejabberd_user 11. EjabberdUser.objects.create(username=str(user.id), password=str(token.key)) File "/home/project_backend/project/libs/accounts/models.py" in create 73. ctl.register(user=kwargs['username'], password=kwargs['password']) File "/home/project_backend/project/libs/ejabberdctl.py" in register 54. 'password': password}) File "/home/project_backend/project/libs/ejabberdctl.py" in ctl 32. return fn(self.params, payload) File "/usr/lib/python3.5/xmlrpc/client.py" in __call__ 1092. return self.__send(self.__name, args) File "/usr/lib/python3.5/xmlrpc/client.py" in __request 1432. verbose=self.__verbose File "/usr/lib/python3.5/xmlrpc/client.py" in request 1134. return self.single_request(host, handler, request_body, verbose) File "/usr/lib/python3.5/xmlrpc/client.py" in single_request 1146. http_conn = self.send_request(host, handler, request_body, verbose) File "/usr/lib/python3.5/xmlrpc/client.py" in send_request 1259. self.send_content(connection, request_body) File "/usr/lib/python3.5/xmlrpc/client.py" in send_content 1289. connection.endheaders(request_body) File "/usr/lib/python3.5/http/client.py" in endheaders 1102. self._send_output(message_body) File "/usr/lib/python3.5/http/client.py" in _send_output 934. self.send(msg) File "/usr/lib/python3.5/http/client.py" in send 877. self.connect() File "/usr/lib/python3.5/http/client.py" in connect 849. (self.host,self.port), self.timeout, self.source_address) File "/usr/lib/python3.5/socket.py" in create_connection 711.
raise err File "/usr/lib/python3.5/socket.py" in create_connection 702. sock.connect(sa)
2017/07/10
[ "https://Stackoverflow.com/questions/45006190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7225424/" ]
I was getting the same error, and it may be due to email verification. I added the following code to my settings.py file and now authentication works perfectly: ``` ACCOUNT_EMAIL_VERIFICATION = 'none' ACCOUNT_AUTHENTICATION_METHOD = 'username' ACCOUNT_EMAIL_REQUIRED = False ```
You perform a call to a remote server that you can't reach / isn't configured / isn't running. It's not an issue with Django or DRF.
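For what it is worth, the `[Errno 111] Connection refused` in the traceback can be reproduced and handled without any of the Django/ejabberd machinery. This sketch connects to a local port with no listener (the free-port trick is an illustration and is very slightly racy):

```python
import socket

# Ask the OS for a free port, then close the listener so nothing is there,
# mimicking an ejabberd XML-RPC endpoint that is not running.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    conn = socket.create_connection(("127.0.0.1", port), timeout=2)
    conn.close()
    refused = False
except ConnectionRefusedError as exc:
    refused = True
    print(exc.errno)  # ECONNREFUSED; the numeric value is platform-dependent

print(refused)
```

In the application itself, the usual fixes are to start/configure the ejabberd XML-RPC service at the host and port the code expects, or to wrap the call in a `try`/`except ConnectionRefusedError` so user registration does not die with a 500 when the chat server is down.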
27,259,112
I want to test uploaded python programs with the `unittest` module on a django based web site and give a useful feedback to the student. I created some helper function to get statistics (how many failures and errors) and messages. ``` def suite(*test_cases): suites = [unittest.makeSuite(case) for case in test_cases] return unittest.TestSuite(suites) def testcase_statistics_and_messages(*test_cases): devnull = open('/dev/null', "w") runner = unittest.TextTestRunner(stream=devnull) test_suite = suite(*test_cases) test_result = runner.run(test_suite) devnull.close() failure_messages = [mesg for test_case, mesg in test_result.failures] number_of_failures = len(failure_messages) error_messages = [mesg for test_case, mesg in test_result.errors] number_of_errors = len(error_messages) number_of_test_cases = test_suite.countTestCases() number_of_successes = (number_of_test_cases - number_of_errors - number_of_failures) return dict( number_of_test_cases=number_of_test_cases, failure_messages=failure_messages, error_messages=error_messages, number_of_successes=number_of_successes, number_of_errors=number_of_errors, number_of_failures=number_of_failures, ) ``` I save the program of the student and the unit tests into a file. I import the file, and I get the list of the TestCase classes from the file, and run the functions above. (I can handle SyntaxError, IndentationError and runtime errors of the program uploaded by the student.) However the messages I got from the unit tests are not too useful for the students. E.g. ``` Traceback (most recent call last): File "/tmp/tmpnda6x_60.py", line 39, in test_discriminant_returns_the_proper_values self.assertEqual(discriminant(*args), return_values) AssertionError: -2 != 0 ``` If I would get for example the docstring of the test method of the `TestCase`, I would be happy. I can not find a straightforward way to do this. If I would get the name of the TestCase and the method, I could get the docstring with the `eval` function. 
Is there an easier way than extracting the test method names from the messages, iterating over the TestCase classes, and checking whether a test method with that name exists? I have tried passing `msg` keyword arguments to the asserts, but then it is a pain to write unit tests, and I need e.g. a regular expression to extract the informative part of the message. I have Python 3.2 on the server I want to run this Django project on.
2014/12/02
[ "https://Stackoverflow.com/questions/27259112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/813946/" ]
If you have a `TestCase` like: ``` class ExampleTestCase(unittest.TestCase): def test_example(self): """If this fails, it may not be your fault. Try hacking the integer cache. Evil laugh.""" self.assertEqual(3, 4) ``` You can later, in your `testcase_statistics_and_messages` function, get first line of your doc string using `test_case.shortDescription()`, or the full doc string using `test_case._testMethodDoc`. So, adding those to your function (and to the return dict): ``` short_docs = [test_case.shortDescription() for test_case, mesg in test_result.failures] docs = [test_case._testMethodDoc for test_case, mesg in test_result.failures] ``` And then printing the results using: ``` for key, value in testcase_statistics_and_messages(ExampleTestCase).items(): print(key, "==>", value) ``` Gives me: ``` short_docs ==> ['If this fails, it may not be your fault.'] number_of_test_cases ==> 1 error_messages ==> [] docs ==> ['If this fails, it may not be your fault.\n Try hacking the integer cache. Evil laugh.'] failure_messages ==> ['Traceback (most recent call last):\n File "ut.py", line 9, in test_example\n self.assertEqual(3, 4)\nAssertionError: 3 != 4\n'] number_of_failures ==> 1 number_of_errors ==> 0 number_of_successes ==> 0 ```
Here is one possible solution - override the specific assertion methods that you're using (`assertEqual`, etc, hopefully not too many) and store the variables in a JSON string as message: ``` import json import unittest class ExampleTestCase(unittest.TestCase): longMessage = False def assertEqual(self, first, second, msg=None): super(ExampleTestCase, self).assertEqual(first, second, msg=json.dumps({ 'first': first, 'second': second, })) def test_example(self): self.assertEqual(3, 4) ``` By using `longMessage = False` Python will return the assertion message to you without modifying it further. The above will give you: ``` File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/case.py", line 797, in assertEqual assertion_func(first, second, msg=msg) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/case.py", line 790, in _baseAssertEqual raise self.failureException(msg) AssertionError: {"first": 3, "second": 4} ``` The last line of this is much easier to parse using `json.loads`. It's up to you to make the sure the values can be serialised and deserialised to/from JSON. Lastly, you may also need to customize running the tests to easily parse each of the tracebacks. This all being said, if you can avoid it, it may be better to review whether your use case can be done using one of the other Python testing frameworks (nosetests, py.tests, etc). After reviewing the situation it may turn out that writing your own testing framework is the best approach to get the best error messages. For example: ``` x = 3 y = 4 try: assert x == y except AssertionError: return ... # your error message here (can be a string or an arbitrary object) ```
53,067,695
I'm writing a function to find the percentage change using Numpy and function calls. So far what I got is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example: My Array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 to 2 (=0.5) or 1 to 4(=0.75) or 5,7 etc.. **Note:** I know how mathematically to get the change, I'm not sure how to do this in python/ numpy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
The accepted answer is close but incorrect if you're trying to take % difference from left to right. You should get the following percent difference: `1,2,3,5,7` --> `100%, 50%, 66.66%, 40%` check for yourself: <https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php> Going by what Josmoor98 said, you can use `np.diff(a) / a[:,:-1] * 100` to get the percent difference from left to right, which will give you the correct answer. ``` array([[100. , 50. , 66.66666667, 40. ], [300. , 25. , 20. , 16.66666667], [ 60. , 12.5 , 11.11111111, 220. ], [ 66.66666667, 20. , 116.66666667, -15.38461538]]) ```
1. Combine all your arrays. 2. Then make a data frame from them. ``` df = pd.DataFrame(data=your_array) ``` 3. Use the `pct_change()` function on the dataframe. It will calculate the % change for all rows in the dataframe.
53,067,695
I'm writing a function to find the percentage change using Numpy and function calls. So far what I got is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example: My Array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 to 2 (=0.5) or 1 to 4(=0.75) or 5,7 etc.. **Note:** I know how mathematically to get the change, I'm not sure how to do this in python/ numpy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
I suggest to simply shift the array. The computation basically becomes a one-liner. ``` import numpy as np arr = np.array( [ [1, 2, 3, 5, 7], [1, 4, 5, 6, 7], [5, 8, 9, 10, 32], [3, 5, 6, 13, 11], ] ) # Percentage change from row to row pct_chg_row = arr[1:] / arr[:-1] - 1 [[ 0. 1. 0.66666667 0.2 0. ] [ 4. 1. 0.8 0.66666667 3.57142857] [-0.4 -0.375 -0.33333333 0.3 -0.65625 ]] # Percentage change from column to column pct_chg_col = arr[:, 1::] / arr[:, 0:-1] - 1 [[ 1. 0.5 0.66666667 0.4 ] [ 3. 0.25 0.2 0.16666667] [ 0.6 0.125 0.11111111 2.2 ] [ 0.66666667 0.2 1.16666667 -0.15384615]] ``` You could easily generalize the task, so that you are not limited to compute the change from one row/column to another, but be able to compute the change for `n` rows/columns. ``` n = 2 pct_chg_row_generalized = arr[n:] / arr[:-n] - 1 [[4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] pct_chg_col_generalized = arr[:, n:] / arr[:, :-n] - 1 [[2. 1.5 1.33333333] [4. 0.5 0.4 ] [0.8 0.25 2.55555556] [1. 1.6 0.83333333]] ``` If the output array must have the same shape as the input array, you need to make sure to insert the appropriate number of `np.nan`. ``` out_row = np.full_like(arr, np.nan, dtype=float) out_row[n:] = arr[n:] / arr[:-n] - 1 [[ nan nan nan nan nan] [ nan nan nan nan nan] [4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] out_col = np.full_like(arr, np.nan, dtype=float) out_col[:, n:] = arr[:, n:] / arr[:, :-n] - 1 [[ nan nan 2. 1.5 1.33333333] [ nan nan 4. 0.5 0.4 ] [ nan nan 0.8 0.25 2.55555556] [ nan nan 1. 1.6 0.83333333]] ``` Finally, a small function for the general 2D case might look like this: ``` def np_pct_chg(arr: np.ndarray, n: int = 1, axis: int = 0) -> np.ndarray: out = np.full_like(arr, np.nan, dtype=float) if axis == 0: out[n:] = arr[n:] / arr[:-n] - 1 elif axis == 1: out[:, n:] = arr[:, n:] / arr[:, :-n] - 1 return out ```
The accepted answer is close but incorrect if you're trying to take % difference from left to right. You should get the following percent difference: `1,2,3,5,7` --> `100%, 50%, 66.66%, 40%` check for yourself: <https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php> Going by what Josmoor98 said, you can use `np.diff(a) / a[:,:-1] * 100` to get the percent difference from left to right, which will give you the correct answer. ``` array([[100. , 50. , 66.66666667, 40. ], [300. , 25. , 20. , 16.66666667], [ 60. , 12.5 , 11.11111111, 220. ], [ 66.66666667, 20. , 116.66666667, -15.38461538]]) ```
53,067,695
I'm writing a function to find the percentage change using Numpy and function calls. So far what I got is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example: My Array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 to 2 (=0.5) or 1 to 4(=0.75) or 5,7 etc.. **Note:** I know how mathematically to get the change, I'm not sure how to do this in python/ numpy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
I suggest to simply shift the array. The computation basically becomes a one-liner. ``` import numpy as np arr = np.array( [ [1, 2, 3, 5, 7], [1, 4, 5, 6, 7], [5, 8, 9, 10, 32], [3, 5, 6, 13, 11], ] ) # Percentage change from row to row pct_chg_row = arr[1:] / arr[:-1] - 1 [[ 0. 1. 0.66666667 0.2 0. ] [ 4. 1. 0.8 0.66666667 3.57142857] [-0.4 -0.375 -0.33333333 0.3 -0.65625 ]] # Percentage change from column to column pct_chg_col = arr[:, 1::] / arr[:, 0:-1] - 1 [[ 1. 0.5 0.66666667 0.4 ] [ 3. 0.25 0.2 0.16666667] [ 0.6 0.125 0.11111111 2.2 ] [ 0.66666667 0.2 1.16666667 -0.15384615]] ``` You could easily generalize the task, so that you are not limited to compute the change from one row/column to another, but be able to compute the change for `n` rows/columns. ``` n = 2 pct_chg_row_generalized = arr[n:] / arr[:-n] - 1 [[4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] pct_chg_col_generalized = arr[:, n:] / arr[:, :-n] - 1 [[2. 1.5 1.33333333] [4. 0.5 0.4 ] [0.8 0.25 2.55555556] [1. 1.6 0.83333333]] ``` If the output array must have the same shape as the input array, you need to make sure to insert the appropriate number of `np.nan`. ``` out_row = np.full_like(arr, np.nan, dtype=float) out_row[n:] = arr[n:] / arr[:-n] - 1 [[ nan nan nan nan nan] [ nan nan nan nan nan] [4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] out_col = np.full_like(arr, np.nan, dtype=float) out_col[:, n:] = arr[:, n:] / arr[:, :-n] - 1 [[ nan nan 2. 1.5 1.33333333] [ nan nan 4. 0.5 0.4 ] [ nan nan 0.8 0.25 2.55555556] [ nan nan 1. 1.6 0.83333333]] ``` Finally, a small function for the general 2D case might look like this: ``` def np_pct_chg(arr: np.ndarray, n: int = 1, axis: int = 0) -> np.ndarray: out = np.full_like(arr, np.nan, dtype=float) if axis == 0: out[n:] = arr[n:] / arr[:-n] - 1 elif axis == 1: out[:, n:] = arr[:, n:] / arr[:, :-n] - 1 return out ```
``` import numpy as np a = np.array([[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]]) np.array([(i[:-1]/i[1:]) for i in a]) ```
53,067,695
I'm writing a function to find the percentage change using Numpy and function calls. So far what I got is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example: My Array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 to 2 (=0.5) or 1 to 4(=0.75) or 5,7 etc.. **Note:** I know how mathematically to get the change, I'm not sure how to do this in python/ numpy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
I know you have asked this question with NumPy in mind and got answers above: ``` import numpy as np np.diff(a) / a[:,1:] ``` I attempt to solve this with Pandas, for those who have the same question but use Pandas instead of NumPy: ``` import pandas as pd data = [[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]] df = pd.DataFrame(data) df_change = df.rolling(1, axis=1).sum().pct_change(axis=1) print(df_change) ```
1. Combine all your arrays. 2. Then make a DataFrame from them. ``` df = pd.DataFrame(data=array_you_made) ``` 3. Use the `pct_change()` function on the DataFrame. It will calculate the % change for all rows in the DataFrame.
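The steps above can be sketched concretely with the question's own data (my sketch, not from the original answer). Note that for a change computed along each row, i.e. from column to column, `pct_change` needs `axis=1`:

```python
import pandas as pd

data = [[1, 2, 3, 5, 7],
        [1, 4, 5, 6, 7],
        [5, 8, 9, 10, 32],
        [3, 5, 6, 13, 11]]
df = pd.DataFrame(data)

# Percent change from column to column within each row;
# the first column has no predecessor, so it becomes NaN.
df_change = df.pct_change(axis=1)
print(df_change)
```

The first row comes out as `NaN, 1.0, 0.5, 0.666…, 0.4`, matching (next - previous) / previous.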
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
If I understand correctly that you're trying to find the percent change in each row, then you can do: ``` >>> np.diff(a) / a[:,1:] * 100 ``` Which gives you: ``` array([[ 50. , 33.33333333, 40. , 28.57142857], [ 75. , 20. , 16.66666667, 14.28571429], [ 37.5 , 11.11111111, 10. , 68.75 ], [ 40. , 16.66666667, 53.84615385, -18.18181818]]) ```
The accepted answer is close but incorrect if you're trying to take % difference from left to right. You should get the following percent difference: `1,2,3,5,7` --> `100%, 50%, 66.66%, 40%` check for yourself: <https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php> Going by what Josmoor98 said, you can use `np.diff(a) / a[:,:-1] * 100` to get the percent difference from left to right, which will give you the correct answer. ``` array([[100. , 50. , 66.66666667, 40. ], [300. , 25. , 20. , 16.66666667], [ 60. , 12.5 , 11.11111111, 220. ], [ 66.66666667, 20. , 116.66666667, -15.38461538]]) ```
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
If I understand correctly that you're trying to find the percent change in each row, then you can do: ``` >>> np.diff(a) / a[:,1:] * 100 ``` Which gives you: ``` array([[ 50. , 33.33333333, 40. , 28.57142857], [ 75. , 20. , 16.66666667, 14.28571429], [ 37.5 , 11.11111111, 10. , 68.75 ], [ 40. , 16.66666667, 53.84615385, -18.18181818]]) ```
I know you have asked this question with NumPy in mind and got answers above: ``` import numpy as np np.diff(a) / a[:,1:] ``` I attempt to solve this with Pandas, for those who have the same question but use Pandas instead of NumPy: ``` import pandas as pd data = [[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]] df = pd.DataFrame(data) df_change = df.rolling(1, axis=1).sum().pct_change(axis=1) print(df_change) ```
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
If I understand correctly that you're trying to find the percent change in each row, then you can do: ``` >>> np.diff(a) / a[:,1:] * 100 ``` Which gives you: ``` array([[ 50. , 33.33333333, 40. , 28.57142857], [ 75. , 20. , 16.66666667, 14.28571429], [ 37.5 , 11.11111111, 10. , 68.75 ], [ 40. , 16.66666667, 53.84615385, -18.18181818]]) ```
``` import numpy as np a = np.array([[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]]) np.array([(i[:-1]/i[1:]) for i in a]) ```
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
I know you have asked this question with NumPy in mind and got answers above: ``` import numpy as np np.diff(a) / a[:,1:] ``` I attempt to solve this with Pandas, for those who have the same question but use Pandas instead of NumPy: ``` import pandas as pd data = [[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]] df = pd.DataFrame(data) df_change = df.rolling(1, axis=1).sum().pct_change(axis=1) print(df_change) ```
The accepted answer is close but incorrect if you're trying to take % difference from left to right. You should get the following percent difference: `1,2,3,5,7` --> `100%, 50%, 66.66%, 40%` check for yourself: <https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php> Going by what Josmoor98 said, you can use `np.diff(a) / a[:,:-1] * 100` to get the percent difference from left to right, which will give you the correct answer. ``` array([[100. , 50. , 66.66666667, 40. ], [300. , 25. , 20. , 16.66666667], [ 60. , 12.5 , 11.11111111, 220. ], [ 66.66666667, 20. , 116.66666667, -15.38461538]]) ```
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
I know you have asked this question with NumPy in mind and got answers above: ``` import numpy as np np.diff(a) / a[:,1:] ``` I attempt to solve this with Pandas, for those who have the same question but use Pandas instead of NumPy: ``` import pandas as pd data = [[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]] df = pd.DataFrame(data) df_change = df.rolling(1, axis=1).sum().pct_change(axis=1) print(df_change) ```
``` import numpy as np a = np.array([[1,2,3,5,7], [1,4,5,6,7], [5,8,9,10,32], [3,5,6,13,11]]) np.array([(i[:-1]/i[1:]) for i in a]) ```
53,067,695
I'm writing a function to find the percentage change using NumPy and function calls. So far, what I have is: ``` def change(a,b): answer = (np.subtract(a[b+1], a[b])) / a[b+1] * 100 return answer print(change(a,0)) ``` "a" is the array I have made and b will be the index/numbers I am trying to calculate. For example, my array is ``` [[1,2,3,5,7] [1,4,5,6,7] [5,8,9,10,32] [3,5,6,13,11]] ``` How would I calculate the percentage change between 1 and 2 (=0.5), or 1 and 4 (=0.75), or 5 and 7, etc.? **Note:** I know how to get the change mathematically; I'm just not sure how to do this in Python/NumPy.
2018/10/30
[ "https://Stackoverflow.com/questions/53067695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10541940/" ]
If I understand correctly that you're trying to find the percent change in each row, then you can do: ``` >>> np.diff(a) / a[:,1:] * 100 ``` Which gives you: ``` array([[ 50. , 33.33333333, 40. , 28.57142857], [ 75. , 20. , 16.66666667, 14.28571429], [ 37.5 , 11.11111111, 10. , 68.75 ], [ 40. , 16.66666667, 53.84615385, -18.18181818]]) ```
I suggest simply shifting the array. The computation basically becomes a one-liner. ``` import numpy as np arr = np.array( [ [1, 2, 3, 5, 7], [1, 4, 5, 6, 7], [5, 8, 9, 10, 32], [3, 5, 6, 13, 11], ] ) # Percentage change from row to row pct_chg_row = arr[1:] / arr[:-1] - 1 [[ 0. 1. 0.66666667 0.2 0. ] [ 4. 1. 0.8 0.66666667 3.57142857] [-0.4 -0.375 -0.33333333 0.3 -0.65625 ]] # Percentage change from column to column pct_chg_col = arr[:, 1:] / arr[:, 0:-1] - 1 [[ 1. 0.5 0.66666667 0.4 ] [ 3. 0.25 0.2 0.16666667] [ 0.6 0.125 0.11111111 2.2 ] [ 0.66666667 0.2 1.16666667 -0.15384615]] ``` You could easily generalize the task, so that you are not limited to computing the change from one row/column to another, but are able to compute the change for `n` rows/columns. ``` n = 2 pct_chg_row_generalized = arr[n:] / arr[:-n] - 1 [[4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] pct_chg_col_generalized = arr[:, n:] / arr[:, :-n] - 1 [[2. 1.5 1.33333333] [4. 0.5 0.4 ] [0.8 0.25 2.55555556] [1. 1.6 0.83333333]] ``` If the output array must have the same shape as the input array, you need to make sure to insert the appropriate number of `np.nan`. ``` out_row = np.full_like(arr, np.nan, dtype=float) out_row[n:] = arr[n:] / arr[:-n] - 1 [[ nan nan nan nan nan] [ nan nan nan nan nan] [4. 3. 2. 1. 3.57142857] [2. 0.25 0.2 1.16666667 0.57142857]] out_col = np.full_like(arr, np.nan, dtype=float) out_col[:, n:] = arr[:, n:] / arr[:, :-n] - 1 [[ nan nan 2. 1.5 1.33333333] [ nan nan 4. 0.5 0.4 ] [ nan nan 0.8 0.25 2.55555556] [ nan nan 1. 1.6 0.83333333]] ``` Finally, a small function for the general 2D case might look like this: ``` def np_pct_chg(arr: np.ndarray, n: int = 1, axis: int = 0) -> np.ndarray: out = np.full_like(arr, np.nan, dtype=float) if axis == 0: out[n:] = arr[n:] / arr[:-n] - 1 elif axis == 1: out[:, n:] = arr[:, n:] / arr[:, :-n] - 1 return out ```
28,323,435
I'm trying to use [`logging`'s `SMTPHandler`](https://docs.python.org/3/library/logging.handlers.html#smtphandler). From Python 3.3 on, you can specify a `timeout` keyword argument. If you add that argument in older versions, it fails. To get around this, I used the following: ``` import sys if sys.version_info >= (3, 3): smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, timeout=20.0) else: smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` Is there a better way of doing this?
2015/02/04
[ "https://Stackoverflow.com/questions/28323435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/810870/" ]
Rather than test for the version, use exception handling: ``` try: smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, timeout=20.0) except TypeError: # Python < 3.3, no timeout parameter smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` Now you can conceivably upgrade your standard library in-place with a patch or a backport module and it'll continue to work.
Here is another slightly different approach: ``` from logging.handlers import SMTPHandler import sys if sys.version_info >= (3, 3): # patch in timeout where available from functools import partial SMTPHandler = partial(SMTPHandler, timeout=20.0) ``` Now in the rest of the code you can just use: ``` smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` and know that the `timeout` argument is being used if available. This does still rely on static version checking, but means that all of the version-specific config is in one place and may reduce duplication elsewhere.
28,323,435
I'm trying to use [`logging`'s `SMTPHandler`](https://docs.python.org/3/library/logging.handlers.html#smtphandler). From Python 3.3 on, you can specify a `timeout` keyword argument. If you add that argument in older versions, it fails. To get around this, I used the following: ``` import sys if sys.version_info >= (3, 3): smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, timeout=20.0) else: smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` Is there a better way of doing this?
2015/02/04
[ "https://Stackoverflow.com/questions/28323435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/810870/" ]
I would do something like this: ``` kargs = {} if sys.version_info >= (3, 3): kargs['timeout'] = 20.0 smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, **kargs) ``` **UPDATE**: The `try`/`except` idea in the other answers is nice, but it assumes that the call fails because of the `timeout` argument. Here I present an extra-smart test for the availability of the argument itself. What if future versions of this class add more and more optional arguments? (Disclaimer: not claimed as portable): ``` if 'timeout' in SMTPHandler.__init__.__code__.co_varnames: kargs['timeout'] = 20.0 ```
Here is another slightly different approach: ``` from logging.handlers import SMTPHandler import sys if sys.version_info >= (3, 3): # patch in timeout where available from functools import partial SMTPHandler = partial(SMTPHandler, timeout=20.0) ``` Now in the rest of the code you can just use: ``` smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` and know that the `timeout` argument is being used if available. This does still rely on static version checking, but means that all of the version-specific config is in one place and may reduce duplication elsewhere.
28,323,435
I'm trying to use [`logging`'s `SMTPHandler`](https://docs.python.org/3/library/logging.handlers.html#smtphandler). From Python 3.3 on, you can specify a `timeout` keyword argument. If you add that argument in older versions, it fails. To get around this, I used the following: ``` import sys if sys.version_info >= (3, 3): smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, timeout=20.0) else: smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` Is there a better way of doing this?
2015/02/04
[ "https://Stackoverflow.com/questions/28323435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/810870/" ]
Rather than test for the version, use exception handling: ``` try: smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, timeout=20.0) except TypeError: # Python < 3.3, no timeout parameter smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT) ``` Now you can conceivably upgrade your standard library in-place with a patch or a backport module and it'll continue to work.
I would do something like this: ``` kargs = {} if sys.version_info >= (3, 3): kargs['timeout'] = 20.0 smtp_handler = SMTPHandler(SMTP_SERVER, FROM_EMAIL, TO_EMAIL, SUBJECT, **kargs) ``` **UPDATE**: The `try`/`except` idea in the other answers is nice, but it assumes that the call fails because of the `timeout` argument. Here I present an extra-smart test for the availability of the argument itself. What if future versions of this class add more and more optional arguments? (Disclaimer: not claimed as portable): ``` if 'timeout' in SMTPHandler.__init__.__code__.co_varnames: kargs['timeout'] = 20.0 ```
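As a related sketch (my addition, not from the original answers): instead of peeking at `__code__.co_varnames`, the `inspect` module offers a documented way to check whether a callable accepts a given keyword argument, which works the same way for any future optional parameters:

```python
import inspect
from logging.handlers import SMTPHandler

kwargs = {}
# Add the optional argument only if the parameter actually exists
# in this Python version's SMTPHandler.__init__ signature.
if 'timeout' in inspect.signature(SMTPHandler.__init__).parameters:
    kwargs['timeout'] = 20.0

print(sorted(kwargs))
```

`inspect.signature` itself is only available from Python 3.3 on, so this variant is mainly useful when the set of optional arguments, rather than the interpreter version, is what you want to probe.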
5,763,608
I'm trying to start a script on bootup on an Ubuntu Server 10.10 machine. The relevant parts of my rc.local look like this: ``` /usr/local/bin/python3.2 "/root/Advantage/main.py" >> /startuplogfile exit 0 ``` If I run ./rc.local from /etc everything works just fine and it writes the following into /startuplogfile: ``` usage: main.py [--loop][--dry] ``` For testing purposes this is exactly what needs to happen. But it won't write anything to /startuplogfile when I reboot the computer. I'd venture to guess that the script is not started when rc.local is run at bootup. I verified that rc.local is started with a 'touch /rclocaltest' in the file. As expected, the file is created. I've tested rc.local with another Python script that simply creates a file in / and prints to /startuplogfile. From the terminal and after reboot, it works just fine. My execution bits are set like this: ``` -rwxrwxrwx 1 root root 4598 2011-04-22 19:09 main.py ``` I have absolutely no idea why this happens and I tried everything that I could think of to remedy the problem. Any ideas on what could be causing this? I'm totally out of ideas. EDIT: I forgot to mention that I only log in to this machine over SSH. I'm not sure that makes a difference, since rc.local is executed before login as far as I know. EDIT: I noticed that this post is a little chaotic so let me sum it up: * I verified that rc.local is called on bootup * If rc.local is manually called everything works as expected * Permissions are set correctly * A test Python script works as expected on bootup with rc.local * My actual Python script will only run if rc.local is manually called, not on bootup
2011/04/23
[ "https://Stackoverflow.com/questions/5763608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/317163/" ]
You can't depend on `argv[0]` containing something specific. Sometimes you'll get the full path, sometimes only the program name, sometimes something else entirely. It depends on how the code was invoked.
As you’ve noticed, `argv[0]` can (but doesn’t always) contain the full path of your executable. If you want the file name only, one solution is: ``` #include <libgen.h> const char *proName = basename(argv[0]); ``` --- As noted by Mat, `argv[0]` is not always reliable — although it should usually be. It depends on what exactly you’re trying to accomplish. That said, there’s an alternative way of obtaining the name of the executable on Mac OS X: ``` #include <mach-o/dyld.h> #include <libgen.h> uint32_t bufsize = 0; // Ask _NSGetExecutablePath() to return the buffer size // needed to hold the string containing the executable path _NSGetExecutablePath(NULL, &bufsize); // Allocate the string buffer and ask _NSGetExecutablePath() // to fill it with the executable path char exepath[bufsize]; _NSGetExecutablePath(exepath, &bufsize); const char *proName = basename(exepath); ```
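Returning to the rc.local question above: a common culprit when a script works from an interactive shell but not at boot is that the boot-time environment (PATH, working directory, locale) differs from the login shell's. A minimal, hypothetical diagnostic you could run from the boot script to compare the two environments:

```python
import os
import sys

# Print the interpreter and environment the script actually sees;
# redirect this into the startup log file to compare a boot run
# against a manual run from the terminal.
print('interpreter:', sys.executable)
print('cwd:', os.getcwd())
for key in ('PATH', 'HOME', 'LANG'):
    print(key, '=', os.environ.get(key))
```

Any variable that is set in the manual run but missing in the boot run is a likely cause of the difference in behavior.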
6,730,735
Last weekend (16 July 2011) our Mercurial packages auto-updated to the newest 1.9 Mercurial binaries using the mercurial-stable PPA on Ubuntu Lucid. Now pulling from the repository over SSH no longer works. The following error is displayed: ``` remote: Traceback (most recent call last): remote: File "/usr/share/mercurial-server/hg-ssh", line 86, in <module> remote: dispatch.dispatch(['-R', repo, 'serve', '--stdio']) remote: File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 31, in dispatch remote: if req.ferr: remote: AttributeError: 'list' object has no attribute 'ferr' abort: no suitable response from remote hg! ``` In the Mercurial 1.9 [upgrade notes](https://www.mercurial-scm.org/wiki/UpgradeNotes#A1.9:_minor_changes.2C_drop_experimental_parentdelta_format) there is an 'interesting' note: ``` contrib/hg-ssh from older Mercurial releases will not be compatible with version 1.9, please update your copy. ``` Does somebody have an idea how to upgrade the mercurial-server package (if a new version exists yet)? Or do we need to upgrade something else? (New Python scripts?) If there is no new version of the necessary packages yet, how do we downgrade to the previous 1.7.5 (Ubuntu Lucid)? Any help is really appreciated, as our development processes are really slowed down by this. :S Thanks
2011/07/18
[ "https://Stackoverflow.com/questions/6730735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/619789/" ]
Ok, I found a (workaround) solution by editing a Python script. Edit the script /usr/share/mercurial-server/hg-ssh At the end of the script replace the line: ``` dispatch.dispatch(['-R', repo, 'serve', '--stdio']) ``` with the line: ``` dispatch.dispatch(dispatch.request(['-R', repo, 'serve', '--stdio'])) ``` Also replace: ``` dispatch.dispatch(['init', repo]) ``` with the line: ``` dispatch.dispatch(dispatch.request(['init', repo])) ``` This works for us. Hopefully this saves somebody else from burning 4 hours of work with googling and learning the basics of Python. :S
More recent versions of mercurial-server are updated to support the API changes, but **may require** the `refresh-auth` script to be rerun for installations being upgraded.
56,145,426
I ran `pip3 install detect-secrets`; but running `detect-secrets` then gives "Command not found". I also tried variations, for example the switch `--user`; `sudo`; and even `pip` rather than `pip3`. Also with underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed. `find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`) None of these gave access to the executable. How do I do that?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
You can use a TreeMap to handle the duplicate keys and accumulate the total sum corresponding to each name in your input String array. Also, with a TreeMap your output will be in sorted order, if that is what you require. ```java public static void main(String[] args) { String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> map = new TreeMap<>(); for (int i = 0; i < names.length; i++) { if (!map.containsKey(names[i])) { map.put(names[i], 0); } map.put(names[i], map.get(names[i]) + numbers[i]); } for (String key : map.keySet()) { System.out.println(key + " = " + map.get(key)); } } ``` The output for the above code will be (as pointed out by @vincrichaud): ``` a = 9 b = 3 c = 2 ```
You can use the Map interface. Use your String array "names" as keys. Write a for loop over it, and inside it: if your map contains the key, get the value and add the new value to it; otherwise, put the key with the new value. PS: I couldn't write code right now; I can update it later.
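The approach sketched in the answer above (a map keyed by name, adding on collision) is language-agnostic. As an illustration, the same aggregation in a few lines of Python (my sketch, not part of the original answer):

```python
from collections import defaultdict

names = ["a", "b", "a", "a", "c", "b"]
numbers = [5, 2, 3, 1, 2, 1]

# defaultdict(int) starts every missing key at 0, so the
# contains-key check from the answer becomes implicit.
totals = defaultdict(int)
for name, n in zip(names, numbers):
    totals[name] += n

print(dict(totals))  # {'a': 9, 'b': 3, 'c': 2}
```

The `+=` on a missing key is exactly the "get the value and add the new value" step; `defaultdict` just supplies the initial 0.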
56,145,426
I ran `pip3 install detect-secrets`; but running `detect-secrets` then gives "Command not found". I also tried variations, for example the switch `--user`; `sudo`; and even `pip` rather than `pip3`. Also with underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed. `find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`) None of these gave access to the executable. How do I do that?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
I already suggested it in the comments: You can use a map to count all the values. Here is an **intentionally verbose** example (to make clear what happens): ``` String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> totals = new HashMap<String, Integer>(); for (int i = 0; i < names.length; i++) { if (totals.containsKey(names[i])) { totals.put(names[i], totals.get(names[i]) + numbers[i]); } else { totals.put(names[i], numbers[i]); } } System.out.println(totals); ``` So if the name is already in the map, just increase the count by the new number. If it's not, add a new map entry with the number. Beware that for this to work, your two arrays must be of equal length! This will print: ``` {a=9, b=3, c=2} ```
You can use the Map interface. Use your String array "names" as keys. Write a for loop over it, and inside it: if your map contains the key, get the value and add the new value to it; otherwise, put the key with the new value. PS: I couldn't write code right now; I can update it later.
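As for the question itself (the repeated `detect-secrets` one): the symptom, a console script installed under `~/.local/bin` but reported as "Command not found", usually means the per-user scripts directory is missing from `PATH`. A hedged sketch for locating that directory from Python:

```python
import os
import site

# pip --user installs console scripts under <USER_BASE>/bin on Linux;
# this is the directory that has to appear in PATH.
user_bin = os.path.join(site.USER_BASE, 'bin')
print(user_bin)
print('on PATH:', user_bin in os.environ.get('PATH', '').split(os.pathsep))
```

On Ubuntu the usual fix is appending a line such as `export PATH="$HOME/.local/bin:$PATH"` to `~/.profile` and logging in again.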
56,145,426
I ran `pip3 install detect-secrets`; but running `detect-secrets` then gives "Command not found". I also tried variations, for example the switch `--user`; `sudo`; and even `pip` rather than `pip3`. Also with underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed. `find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`) None of these gave access to the executable. How do I do that?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
Here is a working example: ``` import java.util.HashMap; import java.util.Map; public class Main { public static void main(String[] args) { String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> occurrences = new HashMap<>(); for(int i = 0; i < names.length; ++i) { String key = names[i]; Integer previousNumber = occurrences.getOrDefault(key, 0); //Previously stored number Integer correspondingNumber = numbers[i]; //Delta for specified name occurrences.put(key, previousNumber + correspondingNumber); //Storing new value } //Print result occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value)); } } ``` Maps allow you to assign some kind of value to a unique key. Generally, put/retrieval operations on maps are said to have O(1) time complexity, which makes them a perfect solution to your problem. The most basic implementation is HashMap, but if you want to keep the insertion order of names when iterating, just use a LinkedHashMap. On the other hand, if you want your data to be sorted in some manner, then you will likely use a TreeMap. EDIT: As Sharon Ben Asher mentioned in the comments section below, you can use the merge() method to shorten the code: ``` import java.util.HashMap; import java.util.Map; public class Main { public static void main(String[] args) { String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> occurrences = new HashMap<>(); for(int i = 0; i < names.length; ++i) occurrences.merge(names[i], numbers[i], Integer::sum); //Print result occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value)); } } ``` But I broke it down a little in the first answer just to give you a better explanation of how maps work.
Basically, methods like get() (getOrDefault() is a variant of it that returns a default value even when no mapping is found for the given key) and put() allow you to add a new mapping or override an existing one for a given key.
You can use the Map interface. Use your String array "names" as keys. Write a for loop over it, and inside it: if your map contains the key, get the value and add the new value to it; otherwise, put the key with the new value. PS: I couldn't write code right now; I can update it later.
56,145,426
I ran `pip3 install detect-secrets`; but running `detect-secrets` then gives "Command not found". I also tried variations, for example the switch `--user`; `sudo`; and even `pip` rather than `pip3`. Also with underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed. `find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`) None of these gave access to the executable. How do I do that?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
I already suggested it in the comments: You can use a map to count all the values. Here is an **intentionally verbose** example (to make clear what happens): ``` String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> totals = new HashMap<String, Integer>(); for (int i = 0; i < names.length; i++) { if (totals.containsKey(names[i])) { totals.put(names[i], totals.get(names[i]) + numbers[i]); } else { totals.put(names[i], numbers[i]); } } System.out.println(totals); ``` So if the name is already in the map, just increase the count by the new number. If it's not, add a new map entry with the number. Beware that for this to work, your two arrays must be of equal length! This will print: ``` {a=9, b=3, c=2} ```
You can use a TreeMap to handle the duplicate keys and accumulate the total sum corresponding to each name in your input String array. Also, with a TreeMap your output will be in sorted order, if that is what you require. ```java public static void main(String[] args) { String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> map = new TreeMap<>(); for (int i = 0; i < names.length; i++) { if (!map.containsKey(names[i])) { map.put(names[i], 0); } map.put(names[i], map.get(names[i]) + numbers[i]); } for (String key : map.keySet()) { System.out.println(key + " = " + map.get(key)); } } ``` The output for the above code will be (as pointed out by @vincrichaud): ``` a = 9 b = 3 c = 2 ```
56,145,426
I ran `pip3 install detect-secrets`; but running `detect-secrets` then gives "Command not found". I also tried variations, for example the switch `--user`; `sudo`; and even `pip` rather than `pip3`. Also with underscore in the name. I further added all directories shown in `python3.6 -m site` to my `PATH` (Ubuntu 18.04). Retrying the installation command shows that the package was successfully installed. `find . -name detect-secrets` (also `detect_secrets`) shows these in `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`) None of these gave access to the executable. How do I do that?
2019/05/15
[ "https://Stackoverflow.com/questions/56145426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39242/" ]
You can use a TreeMap to handle the duplicate keys and accumulate the total sum corresponding to each name in your input String array. Also, with a TreeMap your output will be in sorted order, if that is what you require. ```java public static void main(String[] args) { String[] names = {"a","b","a","a","c","b"}; Integer[] numbers = {5,2,3,1,2,1}; Map<String, Integer> map = new TreeMap<>(); for (int i = 0; i < names.length; i++) { if (!map.containsKey(names[i])) { map.put(names[i], 0); } map.put(names[i], map.get(names[i]) + numbers[i]); } for (String key : map.keySet()) { System.out.println(key + " = " + map.get(key)); } } ``` The output for the above code will be (as pointed out by @vincrichaud): ``` a = 9 b = 3 c = 2 ```
Using a Map will suffice for your requirement. It could be done as below: ``` String[] names = {"a", "b", "a", "a", "c", "b"}; Integer[] numbers = {5, 2, 3, 1, 2, 1}; Map<String, Integer> expectedOut = new HashMap<>(); for (int i = 0; i < names.length; i++) { if (expectedOut.containsKey(names[i])) expectedOut.put(names[i], expectedOut.get(names[i]) + numbers[i]); else expectedOut.put(names[i], numbers[i]); } for (Map.Entry<String, Integer> out : expectedOut.entrySet()) System.out.println(out.getKey() + " = " + out.getValue()); ```
I already suggested it in the comments: You can use a map to count all the values. Here is an **intentionally verbose** example (to make clear what happens):

```java
String[] names = {"a", "b", "a", "a", "c", "b"};
Integer[] numbers = {5, 2, 3, 1, 2, 1};

Map<String, Integer> totals = new HashMap<String, Integer>();
for (int i = 0; i < names.length; i++) {
    if (totals.containsKey(names[i])) {
        totals.put(names[i], totals.get(names[i]) + numbers[i]);
    } else {
        totals.put(names[i], numbers[i]);
    }
}
System.out.println(totals);
```

So if the name is already in the map, just increase the count by the new number. If it's not, add a new map entry with the number. Beware that for this to work, your two arrays must be of equal length!

This will print:

```
{a=9, b=3, c=2}
```
Here is a working example:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        String[] names = {"a", "b", "a", "a", "c", "b"};
        Integer[] numbers = {5, 2, 3, 1, 2, 1};

        Map<String, Integer> occurrences = new HashMap<>();
        for (int i = 0; i < names.length; ++i) {
            String key = names[i];
            Integer previousNumber = occurrences.getOrDefault(key, 0); // previously stored total
            Integer correspondingNumber = numbers[i];                  // delta for the given name
            occurrences.put(key, previousNumber + correspondingNumber); // store the new value
        }

        // Print result
        occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value));
    }
}
```

Maps allow you to assign a value to a unique key. Put/retrieval operations on maps are generally of O(1) time complexity, which makes them a good fit for your problem. The most basic implementation is `HashMap`, but if you want to preserve the insertion order of names when iterating, use `LinkedHashMap`. If you want the data sorted by key, use `TreeMap`.

EDIT: As Sharon Ben Asher mentioned in the comments below, you can use the `merge()` method to shorten the code:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        String[] names = {"a", "b", "a", "a", "c", "b"};
        Integer[] numbers = {5, 2, 3, 1, 2, 1};

        Map<String, Integer> occurrences = new HashMap<>();
        for (int i = 0; i < names.length; ++i)
            occurrences.merge(names[i], numbers[i], Integer::sum);

        // Print result
        occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value));
    }
}
```

I broke it down a little in the first version just to give you a better idea of how maps work.

Basically, `get()` (or its variant `getOrDefault()`, which returns a fallback value when no mapping is found for the given key) reads a value, while `put()` adds a new mapping or overwrites an existing one for a given key.
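To illustrate the `get()` / `getOrDefault()` / `put()` distinction mentioned above, here is a minimal standalone sketch (the map contents are arbitrary, chosen only for demonstration):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        Map<String, Integer> totals = new HashMap<>();
        totals.put("a", 5);

        // get() returns null when there is no mapping for the key
        System.out.println(totals.get("b"));             // prints: null

        // getOrDefault() substitutes a fallback instead of null
        System.out.println(totals.getOrDefault("b", 0)); // prints: 0

        // put() overwrites the existing mapping for "a"
        totals.put("a", totals.getOrDefault("a", 0) + 3);
        System.out.println(totals.get("a"));             // prints: 8
    }
}
```

The `getOrDefault` fallback is what makes the accumulation loop safe on the first occurrence of a name, without an explicit `containsKey` check.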