62,446,911
I have this dataset ``` age 24 32 29 23 23 31 25 26 34 ``` I want to categorize it using Python and save the result to a new column "agegroup", such that ages 23 to 26 return 1 in the agegroup column, 27 to 30 return 2, and 31 to 34 return 3.
2020/06/18
[ "https://Stackoverflow.com/questions/62446911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12360445/" ]
You can use [`pandas.cut`](https://pandas.pydata.org/docs/reference/api/pandas.cut.html). Given: ``` >>> df age 0 24 1 32 2 29 3 23 4 23 5 31 6 25 7 26 8 34 ``` Solution: ``` >>> df.assign(agegroup=pd.cut(df['age'], bins=[23, 27, 31, 35], right=False, labels=[1, 2, 3])) age agegroup 0 24 1 1 32 3 2 29 2 3 23 1 4 23 1 5 31 3 6 25 1 7 26 1 8 34 3 ```
You can use a dictionary to do this as well: key-value pairs where the keys are the age ranges and the value for a given key is the count for that age group (note this counts members per group rather than labelling each row). ``` groupDict = {'23-26': 0, '27-30': 0, '31-34': 0} for i in ages: if 23 <= i <= 26: groupDict['23-26'] += 1 elif 27 <= i <= 30: groupDict['27-30'] += 1 elif 31 <= i <= 34: groupDict['31-34'] += 1 ```
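If you want per-row labels without pandas, here is a plain-Python sketch of the same bucketing (the `age_group` helper name is illustrative, not from either answer):

```python
def age_group(age):
    # Map an age to the group code asked for in the question.
    if 23 <= age <= 26:
        return 1
    elif 27 <= age <= 30:
        return 2
    elif 31 <= age <= 34:
        return 3
    return None  # outside all buckets

groups = [age_group(a) for a in [24, 32, 29, 23, 23, 31, 25, 26, 34]]
# groups == [1, 3, 2, 1, 1, 3, 1, 1, 3], matching the pandas.cut output above
```

The same function could be applied to a DataFrame column via `df['age'].map(age_group)` if you later move to pandas.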
8,891
40,744,392
I use Anonymous Python Functions in BitBake recipes to set variables during parsing. Now I wonder if I can check if a specific variable is set or not. If not, then I want to generate a BitBake Error, which stops the build process. Pseudo code, that I want to create: ``` python __anonymous () { if d.getVar('MY_VARIABLE', True) == "": <BITBAKE ERROR with custom message "MY_VARIABLE not found"> } ```
2016/11/22
[ "https://Stackoverflow.com/questions/40744392", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5316879/" ]
You can call `bb.fatal("MY_VARIABLE not set")`, which will print that error and abort the build by raising an exception. Beware that `d.getVar('MY_VARIABLE', True)` returns `None` when the variable is unset; you only get an empty string if the variable is actually set to one, so test for `None` (or falsiness) rather than comparing against `""`.
Output is possible at different log levels, from Python as well as from shell script code. For use in Python there are: * **bb.fatal** * bb.error * bb.warn * bb.note * bb.plain * bb.debug For use in shell script there are: * **bbfatal** * bberror * bbwarn * bbnote * bbplain * bbdebug For example, if you want to throw an error in your recipe's do\_install\_append function: ``` bbfatal "something went terribly wrong!" ```
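Putting the two answers together, the anonymous function from the question might look like this (a sketch in BitBake recipe syntax; the variable name is the asker's, and the snippet is untested here):

```
python __anonymous () {
    if not d.getVar('MY_VARIABLE', True):
        bb.fatal("MY_VARIABLE not found")
}
```

Using `not` covers both the unset (`None`) and empty-string cases in one check.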
8,892
37,228,607
In C, as well as in C++, one can change the loop index (for example `i`) inside a for-loop. This can be useful to, for example, compare the current element and, based on that comparison, compare the next element: ``` for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') smileyDetected = true; ``` Now I know this cannot be done in Python (for [various](https://stackoverflow.com/questions/1485841/behaviour-of-increment-and-decrement-operators-in-python) [reasons](https://stackoverflow.com/questions/15363138/scope-of-python-variable-in-for-loop)). However, I cannot help wondering if there are short alternatives in Python. I can come up with: ``` while i < end: if array[i] == ':': i += 1 if array[i] == ')': smileyDetected = True ``` However, this costs me an extra line, which doesn't sound so bad until you do the same thing multiple times (by 'less readable' I don't mean having a long file). To have it in one line, I would think of something like `array[i += 1]`, but that is invalid syntax. Is there a Python equivalent that increments the index on the same line as reading out the incremented element? EDIT: As most answers mention using `in` to find a substring as an alternative for this particular example, let me add another example which wouldn't be solvable that way: ``` j = 0; for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') anotherArray[j++] = array[++i]; ``` With that I mean it's about incrementing the index, not finding a particular string.
2016/05/14
[ "https://Stackoverflow.com/questions/37228607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1762311/" ]
Perhaps: ``` smileyDetected = ':)' in "".join(array) ``` or per @jonrsharpe: ``` from itertools import tee # pairwise() from "Itertools Recipes" def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) for a, b in pairwise(array): if a == ':' and b == ')': smileyDetected = True ```
In this special case, you could do: ``` for i, char in enumerate(array[:-1]): if char == ":" and array[i+1] == ")": smiley_detected = True ``` However, in the more general case, if you need to skip elements, you can advance the raw iterator: ``` iterator = iter(array) for char in iterator: if char == ":" and next(iterator) == ")": smiley_detected = True ``` Here you need to take more care about the array bounds: if the last element is `:`, calling `next` on the depleted iterator raises `StopIteration`, which you would need to catch (or avoid by passing a default, `next(iterator, None)`).
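The iterator approach extends naturally to the asker's second C snippet (collecting the character after each ':)'); a runnable sketch, where the `array` contents are made up for the example:

```python
array = list("ab:)Xcd:)Y:")  # made-up sample input
another = []

iterator = iter(array)
for char in iterator:
    # Advance past ':' and ')' in one pass, like the C code's ++i.
    if char == ":" and next(iterator, None) == ")":
        nxt = next(iterator, None)  # the character after ':)'
        if nxt is not None:
            another.append(nxt)

# another == ['X', 'Y']
```

Passing `None` as the default to `next` sidesteps the `StopIteration` concern at the end of the sequence.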
8,893
68,026,549
I'm using TFX to build an AI pipeline on Vertex AI. I've followed [this tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple) to get started, then I adapted the pipeline to my own data, which has over 100M rows of time series data. A couple of my components get killed midway because of memory issues, so I'd like to set the memory requirements for these components only. I use `KubeflowV2DagRunner` to orchestrate and launch the pipeline in Vertex AI with the following code: ```py runner = tfx.orchestration.experimental.KubeflowV2DagRunner( config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig( default_image = 'gcr.io/watch-hop/hop-tfx-covid:0.6.2' ), output_filename=PIPELINE_DEFINITION_FILE) _ = runner.run( create_pipeline( pipeline_name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT, data_path=DATA_ROOT, metadata_path=METADATA_PATH)) ``` A [similar question](https://stackoverflow.com/questions/67816573/how-to-specify-machine-type-in-vertex-ai-pipeline) has been answered on Stack Overflow, which led me to a way to [set memory requirements in AI Platform](https://www.tensorflow.org/tfx/tutorials/tfx/cloud-ai-platform-pipelines#kubeflow_pipelines_resource_considerations), but these configs no longer exist in `KubeflowV2DagRunnerConfig`, so I'm at a dead end. Any help would be much appreciated. \*\* EDIT \*\* We define our components as Python functions with the `@component` decorator, so most of them are custom components. For training components, I know you can specify the machine type using the `tfx.Trainer` class as explained in [this tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training#write_a_pipeline_definition), though my question is about custom components that are not doing any training.
2021/06/17
[ "https://Stackoverflow.com/questions/68026549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2005440/" ]
Turns out you can't at the moment, but according to this [issue](https://github.com/tensorflow/tfx/issues/3194#issuecomment-802598448) the feature is coming. An alternative solution is to convert your TFX pipeline to a Kubeflow pipeline. Vertex AI pipelines support Kubeflow, and with those you can set memory and CPU constraints at the component level. ```py from kfp.dsl import component, pipeline, Input, Dataset @component def my_component(input_data: Input[Dataset]): ... @pipeline def my_pipeline(...): task = my_component(...) task.set_memory_limit('64G') # alternatively set_memory_request(...) ```
An alternate option would be using the Dataflow Beam runner, which allows components to be run on a Dataflow cluster via Vertex. I have yet to find a way of specifying machine types for custom components. Sample Beam pipeline args: ``` BEAM_PIPELINE_ARGS = [ '--project=' + GOOGLE_CLOUD_PROJECT, '--temp_location=' + GCS_LOCATION, '--runner=DataflowRunner', ] ``` This assumes you have already migrated to Vertex AI.
8,896
24,862,912
**Solution:** My fault: the file where the icon is added to the button is used via the "placeholder" function from Qt Designer. The main program, located in a different folder, searches in its own folder for the icon, not in the folder of the "imported" file. So you just have to add the path to the icon: ``` dirpath = os.path.dirname(os.path.abspath(__file__)) icon1_path = os.path.join(dirpath,"arrow_down.ico") icon = QtGui.QPixmap(icon1_path) ``` --- I want to create a QPushButton with an icon instead of text: ``` icon = QtGui.QIcon() icon.addPixmap(QtGui.QPixmap("arrow_down.png")) self.ui.pb_down.setIcon(icon) ``` But this doesn't work. Neither does this: ``` self.ui.pb_down.setIcon(QtGui.QIcon("arrow_down.png")) ``` There is no error message; the icon just doesn't appear. If I add the icon via Qt Designer, the icon is shown in Qt Designer itself, but when running the program the icon disappears again. Does anybody know what's going on? I'm using Python 2.7 and Windows 7. **Edit:** Using @Chris Aung's code, I get a button with an icon. ``` button = QtGui.QPushButton() self.setCentralWidget(button) icon = QtGui.QIcon() icon.addPixmap(QtGui.QPixmap("arrow_down.ico")) print button.icon().isNull() #Returns true if the icon is empty; otherwise returns false. #output = False ``` But if I use exactly this code in my GUI, it just doesn't add the icon. ``` icon = QtGui.QIcon() icon.addPixmap(QtGui.QPixmap("arrow_down.ico")) self.ui.pb_down.setIcon(icon) print self.ui.pb_down.icon().isNull() # output = True ``` I have no idea where the problem is.
2014/07/21
[ "https://Stackoverflow.com/questions/24862912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3027322/" ]
This works for me and is auto generated by pyqt when you convert a .ui file to .py with the [pyuic4](http://pyqt.sourceforge.net/Docs/PyQt4/designer.html#the-uic-module) tool. ``` Icon = QtGui.QIcon() Icon.addPixmap(QtGui.QPixmap(_fromUtf8("SOME FILE")), QtGui.QIcon.Normal, QtGui.QIcon.Off) button.setIcon(Icon) button.setIconSize(QtCore.QSize(width, height)) ``` If you use this, you will also have to define "\_fromUtf8" at the top of your module as: ``` try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: _fromUtf8 = lambda s: s ```
Do you have text rendered on that button? Try playing with the icon size via setIconSize(); to begin with, you can try setting it to the size of the pixmap's rect.
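The accepted fix at the top of the question boils down to resolving the icon path relative to the module file rather than the current working directory; a minimal, Qt-free sketch of that helper (the function name is illustrative):

```python
import os

def icon_path(name, base_file):
    """Resolve a resource that ships next to a module file, so the
    lookup does not depend on the current working directory."""
    return os.path.join(os.path.dirname(os.path.abspath(base_file)), name)

# Inside a module you would call, e.g.: icon_path("arrow_down.ico", __file__)
```

Any path built this way stays valid no matter which folder the main program is launched from.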
8,897
2,823,907
I am looking for a graphics library for 3D reconstruction research, to develop my own viewer on top of some library. OpenGL seems too low-level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it is so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped its development.
2010/05/13
[ "https://Stackoverflow.com/questions/2823907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/166482/" ]
Have you tried [Pyglet](http://www.pyglet.org/) with [PyOpenGL](http://pyopengl.sourceforge.net/)? The two go very well together. Wheaties' suggestion is quite good as well, although [PyOgre](http://www.ogre3d.org/wiki/index.php/PyOgre) also has a steep learning curve, as it is indeed higher-level. On another note, there is also [PyGame](http://www.pygame.org/), which is a Python wrapper for [SDL](http://www.libsdl.org/). I personally prefer PyOpenGL, and you can use [WxPython](http://www.wxpython.org/) or [PyQT](http://wiki.python.org/moin/PyQt) to create your rendering context. Also, there is [PyProcessing](http://code.google.com/p/pyprocessing/), which is still in its early stages, but very, very nifty.
I have no personal experience with this, but I have heard some decent things about [Pyglet](http://www.pyglet.org/index.html)
8,900
21,327,768
Note: I was using the wrong source file for my data - once that was fixed, my issue was resolved. It turns out there is no simple way to use `int(..)` on a string that is not an integer literal. This is an example from the book "Machine Learning In Action", and I cannot quite figure out what is wrong. Here's some background: ``` from numpy import * def file2matrix(filename): fr = open(filename) numberOfLines = len(fr.readlines()) returnMat = zeros((numberOfLines,3)) classLabelVector = [] fr = open(filename) index = 0 for line in fr.readlines(): line = line.strip() listFromLine = line.split('\t') returnMat[index,:] = listFromLine[0:3] classLabelVector.append(int(listFromLine[-1])) # Problem here. index += 1 return returnMat,classLabelVector ``` The .txt file is as follows: ``` 40920 8.326976 0.953952 largeDoses 14488 7.153469 1.673904 smallDoses 26052 1.441871 0.805124 didntLike 75136 13.147394 0.428964 didntLike 38344 1.669788 0.134296 didntLike ... ``` I am getting an error on the line `classLabelVector.append(int(listFromLine[-1]))` because, I believe, `int(..)` is trying to parse a string (i.e. `"largeDoses"`) that is not an integer literal. Am I missing something? I looked up the documentation for `int()`, but it only seems to parse numbers and integer literals: <http://docs.python.org/2/library/functions.html#int> Also, an excerpt from the book explains this section as follows: > > Finally, you loop over all the lines in the file and strip off the return line character with line.strip(). Next, you split the line > into a list of elements delimited by the tab character: '\t'. You take > the first three elements and shove them into a row of your matrix, and > you use the Python feature of negative indexing to get the last item > from the list to put into classLabelVector. You have to explicitly > tell the interpreter that you'd like the integer version of the last > item in the list, or it will give you the string version.
Usually, > you’d have to do this, but NumPy takes care of those details for you. > > >
2014/01/24
[ "https://Stackoverflow.com/questions/21327768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1884158/" ]
Strings like "largeDoses" cannot be converted to integers. In folder `Ch02` of [that code project](https://github.com/pbharrin/machinelearninginaction) you have two data files; use the second one, `datingTestSet2.txt`, instead of loading the first.
You can use [ast.literal\_eval](http://docs.python.org/2/library/ast.html#ast.literal_eval) and catch the `ValueError` raised for malformed strings (by the way, `int('9.4')` will also raise a `ValueError`).
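A sketch of that try/except pattern (the helper name is illustrative, not from the book or either answer):

```python
def parse_label(s):
    """Return the integer value of s, or s itself when it is not
    an integer literal (e.g. class names like 'largeDoses')."""
    try:
        return int(s)
    except ValueError:
        return s

# parse_label("3") -> 3; parse_label("largeDoses") -> "largeDoses"
```

This lets the same loading loop handle both the numeric-label and string-label versions of the data file.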
8,908
60,490,195
I am trying to write a small program using the AzureML Python SDK (v1.0.85) to register an Environment in AMLS and use that definition to construct a local Conda environment when experiments are being run (for a pre-trained model). The code works fine for simple scenarios where all dependencies are loaded from Conda/ public PyPI, but when I introduce a private dependency (e.g. a utils library) I am getting a InternalServerError with the message "Error getting recipe specifications". The code I am using to register the environment is (after having authenticated to Azure and connected to our workspace): ``` environment_name = config['environment']['name'] py_version = "3.7" conda_packages = ["pip"] pip_packages = ["azureml-defaults"] private_packages = ["./env-wheels/utils-0.0.3-py3-none-any.whl"] print(f"Creating environment with name {environment_name}") environment = Environment(name=environment_name) conda_deps = CondaDependencies() print(f"Adding Python version: {py_version}") conda_deps.set_python_version(py_version) for conda_pkg in conda_packages: print(f"Adding Conda denpendency: {conda_pkg}") conda_deps.add_conda_package(conda_pkg) for pip_pkg in pip_packages: print(f"Adding Pip dependency: {pip_pkg}") conda_deps.add_pip_package(pip_pkg) for private_pkg in private_packages: print(f"Uploading private wheel from {private_pkg}") private_pkg_url = Environment.add_private_pip_wheel(workspace=ws, file_path=Path(private_pkg).absolute(), exist_ok=True) print(f"Adding private Pip dependency: {private_pkg_url}") conda_deps.add_pip_package(private_pkg_url) environment.python.conda_dependencies = conda_deps environment.register(workspace=ws) ``` And the code I am using to create the local Conda environment is: ``` amls_environment = Environment.get(ws, name=environment_name, version=environment_version) print(f"Building environment...") amls_environment.build_local(workspace=ws) ``` The exact error message being returned when `build_local(...)` is called is: ``` 
Traceback (most recent call last): File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 814, in build_local raise error File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 807, in build_local recipe = environment_client._get_recipe_for_build(name=self.name, version=self.version, **payload) File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\_restclient\environment_client.py", line 171, in _get_recipe_for_build raise Exception(message) Exception: Error getting recipe specifications. Code: 500 : { "error": { "code": "ServiceError", "message": "InternalServerError", "detailsUri": null, "target": null, "details": [], "innerError": null, "debugInfo": null }, "correlation": { "operation": "15043e1469e85a4c96a3c18c45a2af67", "request": "19231be75a2b8192" }, "environment": "westeurope", "location": "westeurope", "time": "2020-02-28T09:38:47.8900715+00:00" } Process finished with exit code 1 ``` Has anyone seen this error before or able to provide some guidance around what the issue may be?
2020/03/02
[ "https://Stackoverflow.com/questions/60490195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9821873/" ]
The issue was our firewall blocking the required requests between AMLS and the storage container (I presume to fetch the environment definitions/private wheels). We resolved this by adding the appropriate ALLOW rules to the firewall so the AMLS service can contact and read from the attached storage container.
Assuming that you'd like to run the script on a remote compute target, my suggestion would be to pass the environment you just retrieved to a `RunConfiguration`, then pass that to a `ScriptRunConfig`, `Estimator`, or `PythonScriptStep`: ```py from azureml.core import ScriptRunConfig from azureml.core.runconfig import DEFAULT_CPU_IMAGE src = ScriptRunConfig(source_directory=project_folder, script='train.py') # Set compute target to the one created in previous step src.run_config.target = cpu_cluster.name # Set environment amls_environment = Environment.get(ws, name=environment_name, version=environment_version) src.run_config.environment = amls_environment run = experiment.submit(config=src) run ``` Check out the rest of the notebook [here](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb). If you're looking for a local run, [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/using-environments/using-environments.ipynb) might help.
8,909
1,419,416
Please, could someone explain to me a few basic things about working with languages like C? Especially on Windows? 1. If I want to use some other library, what do I need from the library? Header files .h and ..? 2. What is the difference between .dll and .dll.a? .dll and .lib? .dll and .exe? What is .def? 3. Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW? 4. To use another library, what is preferred way? LoadLibrary() or #include <>? 5. There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project? 6. How do I create one big .exe? Is this called "static linking"? 7. How to include some random file into .exe? Say a program icon or start-up song? 8. How do I split my huge .c into smaller ones? Do I need to create for every part a header file which then I include in the part with WinMain() or main()? 9. If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest 10. What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies? Well, so many questions... Thanks for every info!
2009/09/14
[ "https://Stackoverflow.com/questions/1419416", "https://Stackoverflow.com", "https://Stackoverflow.com/users/166939/" ]
> > 1: If I want to use some other library, what do I need from the library? Header files .h and ..? > > > ... and, usually a `*.lib` file which you pass as an argument to your linker. > > 2: What is the difference between .dll and .dll.a.? .dll and .lib? .dll and .exe? What is .def? > > > This might be useful: [Static libraries, dynamic libraries, DLLs, entry points, headers … how to get out of this alive?](https://stackoverflow.com/questions/784781) > > 3: Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW? > > > Yes, it matters. For interop between compilers, the normal way is to use a C-style (not C++-style) API, with well-defined parameter-passing conventions (e.g. `__stdcall`), or to use 'COM' interfaces. > > 4: To use another library, what is preferred way? LoadLibrary() or #include <>? > > > `#include` is for the compiler (e.g. so that it can compile calls to the library); and `LoadLibrary` (or, using a `*.lib` file) is for the run-time linker/loader (so that it can substitute the actual address of those library methods into your code): i.e. you need both. > > 5: There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project? > > > If it's only source then you can compile that source (once) into a library, and then (when you build your project) link to that library (without recompiling the library). > > 6: How do I create one big .exe? Is this called "static linking"? > > > Yes, compile everything and pass it all to the linker. > > 7: How to include some random file into .exe? Say a program icon or start-up song? > > > Define that in a Windows-specific 'resource file', which is compiled by the 'resource compiler'. > > 8: How do I split my huge .c into smaller ones? 
Do I need to create for every part a header file which then I include in the part with WinMain() or main()? > > > Yes. > > 9: If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest > > > I don't understand your question/example. > > 10: What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies? > > > Yes.
In all seriousness, the place to go to learn how to run your local environment is the documentation for your local environment. After all, we do not even know exactly what your environment *is*, much less have it in front of us. But here are some answers: `1.` You need the headers, and a linkable object of some kind. Or you need the source so that you can build these. `3.` It matters that the library is in a format that your linker understands. In C++ and perhaps other languages, it also needs to understand the name mangling that was used. `6.` Forcing all the library code to be included in the executable is, indeed, called "static linking". `7.` There is [at least one StackOverflow question on "resource compilers"](https://stackoverflow.com/questions/381915/free-resource-editor-for-windows-rc-files).
8,910
20,303,558
Coming from this link: [Splitlines in Python a table with empty spaces](https://stackoverflow.com/questions/20252728/splitlines-in-python-a-table-with-empty-spaces) It works well, but there is a problem when the size of the columns changes: ``` COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME init 1 root cwd unknown /proc/1/cwd (readlink: Permission denied) init 1 root rtd unknown /proc/1/root ``` The problem starts in the DEVICE or SIZE/OFF column, but in other situations it could happen in any column. ``` COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME init 1 root cwd DIR 8,1 4096 2 / init 1 root rtd DIR 8,1 4096 2 / init 1 root txt REG 8,1 36992 139325 /sbin/init init 1 root mem REG 8,1 14696 190970 /lib/libdl-2.11.3.so init 1 root mem REG 8,1 1437064 190958 /lib/libc-2.11.3.so python 30077 carlos 1u CHR 1,3 0t0 700 /dev/null ``` The header row is always the same: the first column starts at the C of COMMAND, the second ends at the D of PID, the fourth column at D+1 of FD, and so on. Is there any way to count the number of spaces in the first row and use them to fill in this code to parse the other rows? ``` # note: variable-length NAME field at the end intentionally omitted base_format = '8s 1x 6s 1x 10s 1x 4s 1x 9s 1x 6s 1x 9s 1x 6s 1x' base_format_size = struct.calcsize(base_format) ``` Any ideas how to solve the problem?
2013/11/30
[ "https://Stackoverflow.com/questions/20303558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2898827/" ]
I did a bit of reading on `lsof -F` after checking out the other thread and found that it produces easily parsed output. Here's a quick demonstration of the general idea: it parses that output and pretty-prints a small subset of the parsed rows to show the format. Are you able to use `-F` for your use case? ``` import subprocess import pprint def get_rows(output_to_parse, whitelist_keys): lines = output_to_parse.split("\n") rows = [] while lines: row = _get_new_row(lines, whitelist_keys) rows.append(row) return rows def _get_new_row(lines, whitelist_keys): new_row_keys = set() output = {} repeat = False while lines and repeat is False: line = lines.pop(0) if line == '': continue key = line[0] if key not in whitelist_keys: raise(ValueError(key)) value = line[1:] if key not in new_row_keys: new_row_keys.add(key) output[key] = value else: lines.insert(0, line) repeat = True return output if __name__ == "__main__": identifiers = subprocess.Popen(["lsof", "-F", "?"], stderr=subprocess.PIPE).communicate() keys = set([line.strip()[0] for line in identifiers[1].split("\n") if line != ''][1:]) lsof_output = subprocess.check_output(["lsof", "-F"]) rows = get_rows(lsof_output, whitelist_keys=keys) pprint.pprint(rows[:20]) ``` Note that lines are consumed front to back with `pop(0)`, and the line that signals the start of the next row is pushed back with `insert(0, ...)` so it is not lost.
As [@Tim Wilder said](https://stackoverflow.com/a/20304501/4279), you could use `lsof -F` to get machine-readable output. Here's a script that converts `lsof` output into json. One json object per line. It produces output as soon as pipe buffers are full without waiting for the whole `lsof` process to end (it takes a while on my system): ``` #!/usr/bin/env python import json import locale from collections import OrderedDict from subprocess import Popen, PIPE encoding = locale.getpreferredencoding(True) # for json # define fields we are intersted in, see `lsof -F?` # http://www.opensource.apple.com/source/lsof/lsof-12/lsof/lsof_fields.h ids = 'cpLftsn' headers = 'COMMAND PID USER FD TYPE SIZE NAME'.split() # see lsof(8) names = OrderedDict(zip(ids, headers)) # preserve order # use '\0' byte as a field terminator and '\n' to separate each process/file set p = Popen(["lsof", "-F{}0".format(''.join(names))], stdout=PIPE, bufsize=-1) for line in p.stdout: # each line is a process or a file set # id -> field fields = {f[:1].decode('ascii', 'strict'): f[1:].decode(encoding) for f in line.split(b'\0') if f.rstrip(b'\n')} if 'p' in fields: # process set process_info = fields # start new process set elif 'f' in fields: # file set fields.update(process_info) # add process info (they don't intersect) result = OrderedDict((name, fields.get(id)) for id, name in names.items()) print(json.dumps(result)) # one object per line else: assert 0, 'either p or f ids should be present in a set' p.communicate() # close stream, wait for the child to exit ``` The field names such as `COMMAND` are described in the [`lsof(8)` man page](http://linux.die.net/man/8/lsof#Output). To get the full list of available field ids, run `lsof -F?` or see [lsof\_fields.h](http://www.opensource.apple.com/source/lsof/lsof-12/lsof/lsof_fields.h). Fields that are not available are set to `null`. You could omit them instead. I've used `OrderedDict` to preserve order from run to run. 
The `sort_keys` parameter of `json.dumps` could be used instead. To pretty-print, you could use the `indent` parameter. `lsof` converts characters that are non-printable in the current locale using a special encoding, which can make some values ambiguous.
8,913
35,196,449
I am writing a script to support my Django project development. First I thought of using bash for this purpose, but due to a lack of knowledge and a total lack of time I decided to write something using argparse, running system commands via subprocess. Everything went ok until I had to run ``` ./manage.py migrate ``` I do it by running: ``` import subprocess ... subprocess.Popen("python {} migrate".format(absolute_path_to_manage_py).split()) ``` The output seems to look ok: ``` Operations to perform: Apply all migrations: sessions, admin, auth, contenttypes, accounts, pcb Running migrations: Rendering model states... DONE Applying contenttypes.0001_initial... OK ... Applying sessions.0001_initial... OK ``` And it stops suddenly, but the script is still running, and what's worse, when I run the Django app I get a message that I still have some unapplied migrations. I think I am missing something about running system commands from Python, or something related to Django migrations. Any hint how I could overcome this?
2016/02/04
[ "https://Stackoverflow.com/questions/35196449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1844201/" ]
This is not impossible, but it is a bad idea - it's very tricky to get right. Instead, you want to separate the business logic from the UI, so that you can do the logic in the background, while the UI is still on the UI thread. The key is that you must not modify the UI controls from the background thread - instead, you first load whatever data you need, and then either use `Invoke` or `ReportProgress` to marshal it back on the UI thread, where you can update the UI. If creating the controls themselves is taking too long, you might want to look at some options at increasing the performance there. For example, using `BeginUpdate` and friends to avoid unnecessary redraws and layouting while you're still filling in the data, or creating the controls on demand (as they are about to be visible), rather than all upfront. A simple example using `await` (using Windows Forms, but the basic idea works everywhere): ``` tabSlow.Enabled = false; try { var data = await Database.LoadDataAsync(); dataGrid.DataSource = data; } finally { tabSlow.Enabled = true; } ``` The database operation is done in the background, and when it's done, the UI update is performed back on the UI thread. If you need to do some expensive calculations, rather than I/O, you'd simply use `await Task.Run(...)` instead - just make sure that the task itself doesn't update any UI; it should just return the data you need to update the UI.
WPF does not allow you to change the UI from a background thread: only a single thread, the UI thread, may touch UI elements. Instead, you should compute your data on the background thread and then call `Application.Current.Dispatcher.Invoke(...)` to update the UI. For example: ``` Application.Current.Dispatcher.Invoke(() => textBoxTab2.Text = "changing UI from the background thread"); ```
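For the subprocess/migrate problem described in the question, one likely cause (an assumption, not stated in the thread) is that `subprocess.Popen` returns immediately without waiting for the child, so the script moves on while `migrate` is still running or its failure goes unnoticed. `subprocess.run` with `check=True` waits for the command and surfaces errors; a sketch, with an illustrative helper name:

```python
import subprocess
import sys

def run_manage(manage_py_path, *args):
    """Run a manage.py command and block until it finishes.

    Unlike a bare Popen(...), subprocess.run waits for the child,
    and check=True raises CalledProcessError on a non-zero exit.
    """
    return subprocess.run(
        [sys.executable, manage_py_path, *args],
        check=True, capture_output=True, text=True,
    )
```

Using `sys.executable` also guarantees the child runs under the same interpreter (and virtualenv) as the script itself.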
8,914
36,811,332
I'm having some trouble creating a table using values from a text file. My text file looks like this:

```
e432,6/5/3,6962,c8429,A,4324
e340,2/3/5,566623,c1210,A,3201
e4202,6/5/3,4232,c8419,E,4232
e3230,2/3/5,66632,c1120,A,53204
e4202,6/5/3,61962,c8429,A,4322
```

I would like to generate a table containing only the rows where the last column's value (amount paid) is less than the third column's (final total) and the fifth column (status) equals 'A'. The outstanding total is found by subtracting the amount paid from the final total. My code to generate a table is:

```
data = open("pJoptionc.txt", "r")
info = data.readlines()
data.close
for li in info:
    status = li.split(",")[4]
    finaltotal = int(li.split(",")[2])
    amountpaid = int(li.split(",")[5])
    totalrev = 0
    headers = ["Estimate Number", "Date", "Final Total", "Customer Number", "Status", "Amount Paid", "Outstanding Amount"]
    print(" ".join(headers))
    for line in open("pJoptionc.txt", "r"):
        line = line.strip().split(",")
        line.append(str(int(line[2]) - int(line[5])))
        if line[2] == line[5] or line[4] in ("E"):
            continue
        for i, word in enumerate(line):
            print(word.ljust(len(headers[i - (i > 4)])), end=" " * ((i - (i > 4)) != len(headers) - 1))
        print()
    outstandingrev = (finaltotal) - (amountpaid)
    totalrev += int(outstandingrev)
print("The total amount of outstanding revenue is...")
print("£", totalrev)
```

My desired output is

```
Estimate Number Date  Final Total Customer Number Status Amount Paid Outstanding Amount
e432            6/5/3 6962        c8429           A      4324        2638
e340            2/3/5 566623      c1210           A      3201        563422
e3230           2/3/5 66632       c1120           A      53204       13428
e4202            6/5/3 61962       c8429           A      4322        57640

The total amount of outstanding revenue is...
£ 13428
```

However, when I run the code, the output is the table repeated over and over, and negative values are in the outstanding amount column. I'm using python 3.4.3.
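To make the selection rule described above concrete, here is a small self-contained sketch (the sample rows are inlined rather than read from `pJoptionc.txt`, and it only illustrates the filtering and outstanding-amount arithmetic, not the column layout; the summed total here covers all kept rows, which differs from the sample output in the question):

```python
import csv
import io

# The five sample records from the question, inlined instead of a file.
raw = """\
e432,6/5/3,6962,c8429,A,4324
e340,2/3/5,566623,c1210,A,3201
e4202,6/5/3,4232,c8419,E,4232
e3230,2/3/5,66632,c1120,A,53204
e4202,6/5/3,61962,c8429,A,4322
"""

kept = []
for row in csv.reader(io.StringIO(raw)):
    estimate, date, final_total, customer, status, paid = row
    final_total, paid = int(final_total), int(paid)
    # Keep only status 'A' rows where something is still owed.
    if status == "A" and paid < final_total:
        kept.append(row + [str(final_total - paid)])

total_outstanding = sum(int(r[-1]) for r in kept)
print(len(kept), total_outstanding)
```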
2016/04/23
[ "https://Stackoverflow.com/questions/36811332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5854201/" ]
AFAIK, neither JavaScript nor TypeScript provide a generic hashing function. You have to import a third-party lib, like [ts-md5](https://www.npmjs.com/package/ts-md5) for instance, and give it a string representation of your object: `Md5.hashStr(JSON.stringify(yourObject))`. Obviously, depending on your precise use case, this could be perfect, or way too slow, or generate too many conflicts...
If you want to compare the objects and not the data, then the @Valery solution is not for you, as it will compare the data and not the two objects. If you want to compare the data and not the objects, then JSON.stringify(obj1) === JSON.stringify(obj2) is enough, which is simple string comparison.
8,915
25,960,754
```
ImportError at /
cannot import name views

Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 1.7
Exception Type: ImportError
Exception Value: cannot import name views
Exception Location: /Users/adam/Desktop/qblog/qblog/urls.py in <module>, line 1
Python Executable: /Users/adam/Desktop/venv/bin/python
Python Version: 2.7.8
Python Path: ['/Users/adam/Desktop/qblog', '/Users/adam/Desktop/venv/lib/python27.zip', '/Users/adam/Desktop/venv/lib/python2.7', '/Users/adam/Desktop/venv/lib/python2.7/plat-darwin', '/Users/adam/Desktop/venv/lib/python2.7/plat-mac', '/Users/adam/Desktop/venv/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/adam/Desktop/venv/lib/python2.7/lib-tk', '/Users/adam/Desktop/venv/lib/python2.7/lib-old', '/Users/adam/Desktop/venv/lib/python2.7/lib-dynload', '/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/adam/Desktop/venv/lib/python2.7/site-packages']
Server time: Sun, 21 Sep 2014 15:12:22 +0000
```

Here is urls.py located in qblog/qblog/:

```
from django.conf.urls import patterns, url
from . import views

urlpatterns = patterns(
    '',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^markdown/', include('django_markdown.urls')),
    url(r'^', include('blog.urls')),
)
```

Also, if I add "library" to the first import statement (which I don't need) it will give me the same error but with library, "Cannot import name library". Here is urls.py located in qblog/blog/:

```
from django.conf.urls import patterns, include, url
from . import views

urlpatterns = patterns(
    '',
    url(r'^$', views.BlogIndex.as_view(), name="index"),
)
```

Going to the url `http://127.0.0.1:8000/index` provides the same error. I do not get any errors in the terminal when running `./manage.py runserver`

Project structure:

```
.
├── blog
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── admin.py
│   ├── admin.pyc
│   ├── migrations
│   │   ├── 0001_initial.py
│   │   ├── 0001_initial.pyc
│   │   ├── 0002_auto_20140921_1414.py
│   │   ├── 0002_auto_20140921_1414.pyc
│   │   ├── 0003_auto_20140921_1501.py
│   │   ├── 0003_auto_20140921_1501.pyc
│   │   ├── __init__.py
│   │   └── __init__.pyc
│   ├── models.py
│   ├── models.pyc
│   ├── tests.py
│   ├── urls.py
│   ├── urls.pyc
│   ├── views.py
│   └── views.pyc
├── db.sqlite3
├── manage.py
├── qblog
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── urls.pyc
│   ├── wsgi.py
│   └── wsgi.pyc
├── static
│   ├── css
│   │   ├── blog.css
│   │   └── bootstrap.min.css
│   ├── icons
│   │   └── favicon.ico
│   └── js
│       ├── bootstrap.min.js
│       └── docs.min.js
└── templates
    ├── base.html
    ├── home.html
    └── post.html
```
2014/09/21
[ "https://Stackoverflow.com/questions/25960754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1779600/" ]
There is no need to import the views in your project-level file. You are not using them there, so there is no reason to import them. If you *did* need to, you would just do `from blog import views`, because the views are in the blog directory and manage.py puts the top-level directory into the Python path.
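To see the difference in a runnable form, here is a throwaway sketch that rebuilds a miniature `blog` package on disk and imports its views from "outside" the package, the way a project-level module would (the file contents and the `NAME` constant are invented for illustration):

```python
import importlib
import os
import sys
import tempfile

# Recreate a miniature version of the blog/ app on disk (throwaway layout):
#   <tmp>/blog/__init__.py
#   <tmp>/blog/views.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "blog")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "views.py"), "w") as f:
    f.write("NAME = 'blog.views'\n")

sys.path.insert(0, root)
importlib.invalidate_caches()

# From outside the package (e.g. a project-level urls.py), you import
# through the package name rather than with a relative import:
from blog import views
print(views.NAME)
```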
You can just use `import views`. This works for me.
8,917
41,155,985
Frustratingly having a lot of difficult installing the TA-Lib package in python. <https://pypi.python.org/pypi/TA-Lib> I have read through all the forum posts I can find on this but no such luck for my particular problem.. Windows 10 Python 3.5.2 Anaconda 4.2.0 Cython 0.24.1 Microsoft Visual Studio 14.0 I have downloaded and extracted  ta-lib-0.4.0-msvc.zip to C:/TA-Lib (common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>) If someone could help me solve this I would be very appreciative! Using 'pip install ta-lib' I get the following: ``` C:\Users\Matt>pip install ta-lib Collecting ta-lib Using cached TA-Lib-0.4.10.tar.gz Building wheels for collected packages: ta-lib Running setup.py bdist_wheel for ta-lib ... error Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35: running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.5 creating build\lib.win-amd64-3.5\talib copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib copying talib\test_data.py -> build\lib.win-amd64-3.5\talib copying talib\test_func.py -> build\lib.win-amd64-3.5\talib copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib copying talib\__init__.py -> build\lib.win-amd64-3.5\talib running build_ext skipping 'talib\common.c' Cython extension (up-to-date) building 'talib.common' extension creating build\temp.win-amd64-3.5 creating build\temp.win-amd64-3.5\Release creating build\temp.win-amd64-3.5\Release\talib C:\Program Files (x86)\Microsoft Visual Studio 
14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj common.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod common.obj : error LNK2001: unresolved external symbol TA_Shutdown common.obj : error LNK2001: unresolved external symbol TA_Initialize common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod common.obj : error LNK2001: unresolved external symbol TA_GetVersionString build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal 
error LNK1120: 5 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120 ---------------------------------------- Failed building wheel for ta-lib Running setup.py clean for ta-lib Failed to build ta-lib Installing collected packages: ta-lib Running setup.py install for ta-lib ... error Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib.win-amd64-3.5 creating build\lib.win-amd64-3.5\talib copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib copying talib\test_data.py -> build\lib.win-amd64-3.5\talib copying talib\test_func.py -> build\lib.win-amd64-3.5\talib copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib copying talib\__init__.py -> build\lib.win-amd64-3.5\talib running build_ext skipping 'talib\common.c' Cython extension (up-to-date) building 'talib.common' extension creating build\temp.win-amd64-3.5 creating build\temp.win-amd64-3.5\Release creating build\temp.win-amd64-3.5\Release\talib C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files 
(x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj common.c C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod common.obj : error LNK2001: unresolved external symbol TA_Shutdown common.obj : error LNK2001: unresolved external symbol TA_Initialize common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod common.obj : error LNK2001: unresolved external symbol TA_GetVersionString build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120 ---------------------------------------- Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, 
tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\ ```
2016/12/15
[ "https://Stackoverflow.com/questions/41155985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7273219/" ]
In order to use the python package you need the dependencies first. For mac you can just use `brew install ta-lib` and then `pip install TA-Lib` will work just fine.
I faced the same problems trying with Anaconda 5.1.0 and Python 3.6 via Visual Studio. The solution was to get a wheel from <https://www.lfd.uci.edu/~gohlke/pythonlibs>, then install it via pip. You need to make sure the wheel matches your python version (in my case, 3.6). In Anaconda, I just opened a prompt, navigated to where the wheel was, and ran the following: `python -m pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl` For Visual Studio, it was more obtuse. Go to the Python Environments tab, choose 'Overview' in the dropdown, then `Open in PowerShell'. At that point, run the same command as for ANaconda above. [![Python for Visual Studio shell shortcut](https://i.stack.imgur.com/S0V34.png)](https://i.stack.imgur.com/S0V34.png)
8,918
31,091,104
I am using a PHP server at the back end and a basic web page which asks the user to upload an image. This image is used as an input to a MATLAB script to be executed on the server side. What I need is something like a MATLAB session(not clear on that word) that is already running on the server side which runs the MATLAB script. I know about the command: `"matlab -nodesktop -nojvm"` which can be used but the point is that I don't wish to invoke MATLAB again and again, rather, just execute the MATLAB script on the running MATLAB instance whenever a user uploads an image, get the output(necessary). There are some constraints: 1. OS -> Ubuntu 2. Can't use python engine.
2015/06/27
[ "https://Stackoverflow.com/questions/31091104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3159395/" ]
There exist multiple interfaces to control matlab. Probably the best choice for this case is [matlabcontrol](https://code.google.com/p/matlabcontrol/) or the matlab engine for python (which you can't use for some reason). On windows a third alternative would be COM.

Besides controlling the matlab process, you could implement an application in matlab which receives the data, processes it and sends it back. I solved a similar problem using [apache xmlrpc](https://ws.apache.org/xmlrpc/) in matlab. There are also some submissions on matlab file exchange directly providing a matlab console [via web](http://www.mathworks.com/matlabcentral/fileexchange/29027-web-server).
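As a language-neutral illustration of the "already-running worker behind an RPC interface" idea above, here is a toy XML-RPC round trip using Python's stdlib. The `process_image` function is a made-up stand-in for the server-side MATLAB script; in the real setup, the long-running MATLAB process would sit behind the same kind of endpoint and the web backend (PHP in the question) would make the same call over HTTP:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Made-up stand-in for the server-side script; in the real setup this
# would hand the request to the long-running MATLAB process.
def process_image(name):
    return "processed:" + name

# Bind to an ephemeral port so the sketch can run anywhere.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(process_image)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: one HTTP call per request, no process start-up cost.
proxy = ServerProxy("http://127.0.0.1:%d" % port)
result = proxy.process_image("photo.png")
server.shutdown()
print(result)
```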
There is an API for C++ where you call the Matlab engine using engOpen. This will open Matlab and leave it running until you close it. Then your C++ program can wait and listen for the image to process. <http://www.mathworks.com/help/matlab/calling-matlab-engine-from-c-c-and-fortran-programs.html> Another option is to compile the Matlab script as a standalone executable. Hard code the input image name and output and let PHP handle moving around the inputs and outputs. All the server needs to do is call the executable. It takes about 5 seconds to start the Matlab runtime each time.
8,928
24,518,868
I am looking at the bencode specification and I am writing a C++ port of the deluge bit-torrent client's bencode implementation (written in python). The Python implementation includes a dictionary of { data\_types : callback\_functions } by which a function wrapper easily selects which encoding function to use by a dictionary look-up of the data type of the variable provided to the wrapper function. I did a search for identifying variable types during run-time of C++ equivalents to python primitives, and found that typeid() might provide what I am looking for. I know that my code will not be storing the results of the typeid() function return value, but I would rather not rely on a function that provides compiler specific string literals. I found a [solution](http://www.cplusplus.com/forum/general/21246/) that would allow me to provide a portable means to specify the type manually as a string literal, it seems difficult for me to wrap my head around, templates are a nightmare to me. Here is the Python code in which types are used to select appropriate functions: ``` encode_func = {} encode_func[Bencached] = encode_bencached encode_func[IntType] = encode_int encode_func[LongType] = encode_int encode_func[StringType] = encode_string encode_func[ListType] = encode_list encode_func[TupleType] = encode_list encode_func[DictType] = encode_dict try: from types import BooleanType encode_func[BooleanType] = encode_bool except ImportError: pass def bencode(x): r = [] encode_func[type(x)](x, r) return ''.join(r) ``` Here is the C++ sample I found that allows for explicit class type enumeration: ``` struct Foobar; template<typename T> struct type_name { static const char* name() { static_assert(false, "You are missing a DECL_TYPE_NAME"); } }; template<> struct type_name<int> { static const char* name() {return "int";} }; template<> struct type_name<string> { static const char* name() {return "string";} }; template<> struct type_name<Foobar> { static const char* name() {return 
"Foobar";} }; int main() { cout << type_name<int>::name(); // prints "int" cout << type_name<float>::name(); // compile time error. float wasn't declared. } ``` If this was implemented appropriately in my classes, I believe I could use `switch(variable.name())` to select which function to use to encode the value. I have three specific questions: 1. What is the purpose of `template<typename T> struct type_name { ... };` in the parent class `name()` function? I don't really even understand what it is, do templates have scopes? Why can't the `name()` function just `static_assert()` a compile error if called? 2. Why are the `template<> struct type_name<type> { static const char* name() {return "type";} }` values in the parent-class? Would it not be more clear to simply over-load the `name()` function for each child-class of a different type? 3. If I over-load `name()` for each child-class, and cast a child-class variable to the parent-class, do I still call the `name()` function call of the child-class if I call `variable.name()`? or will I cause a compile time error because I called the parent-class's `name()` function? Does casting change the scope of the class? I believe that my question stems from a mis-understanding of templates and their purpose in C++, as well as a lack of understanding of inheritance.
2014/07/01
[ "https://Stackoverflow.com/questions/24518868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3795241/" ]
A primer on templates
---------------------

C++, in contrast to the dynamic Python, is a **statically typed language**. What this means is that the types of objects are known at compile time (more on the differences [here](https://stackoverflow.com/a/1517670/2567683)). A way to generalize code in C++ is templates, which provide the tools for [static or compile time polymorphism](http://www.catonmat.net/blog/cpp-polymorphism/). To get an idea, take a Python example of a `min` function:

```
def min(a, b):
    if (a < b):
        return a
    return b
```

To translate this in C++ you'd say

```
int min(int a, int b) {
    return a < b ? a : b;
}
```

But the problem is that this translation is inaccurate: it only works for integers (and any type that is convertible to integers, which may cause truncation). One way to overcome this would be to define overloads for all your types:

```
double min(double a, double b); // the definition would be the same as above
float min(float a, float b);
```

But that would be tedious and error prone and far from generic. A better solution is to work with templates and generalize the function for any type `T` (that supports the `<` operator):

```
template<typename T>
T min(T a, T b) {
    return a < b ? a : b;
}
```

or better yet you could even handle the case when the two arguments have a different type:

```
template<typename T1, typename T2>
auto min(T1 a, T2 b) -> decltype(a+b) {
    return a < b ? a : b;
}
```

Templates are a powerful mechanism and I couldn't even begin to provide links and sources; you'd have to google your way through this.

On specific questions
---------------------

* The syntax `template<> struct type_name<Atype> { ...` is called a template (full) specialization. What it does is mandate the layout of a class template for specific template parameters.
Here, for example:

```
template<>
struct type_name<int> {
    static const char* name() { return "int"; }
};
```

we are informed that when the type of `T` in `type_name` is `int`, the member function `name()` returns "int".
* There are no child classes here; you are not building a hierarchy. `name` is a member function in a class template and you're specializing this class for different types of template parameters.

On the big picture
------------------

Being a statically typed language, C++ can provide cleaner solutions to this:

> ...a function wrapper easily selects which encoding function to use by a dictionary look-up of the data type of the variable provided to the wrapper function.

You can directly parametrize the behaviour based on your types. **An example** would be to overload your function:

```
void Encode(int arg)       { /* calls encode_int on arg */ }
void Encode(TupleType arg) { /* calls encode_list on arg */ }
```

Another example would be to work with templates and policies that parametrize the behaviour for each type.
What you're seeing here is not a parent-child relationship as in normal inheritance, it's a [specialization](http://en.cppreference.com/w/cpp/language/template_specialization). The template defines the default implementation, and the specializations replace that implementation for parameters that match the specific type. So: 1. `template<typename T> struct type_name { ... };` is defining the template class. Without it none of the following definitions would make any sense, since they'd be specializing something that doesn't exist. 2. Those are the specializations for specific types. They are *not* child classes. 3. Because it's not inheritance, you can't cast. P.S. You can't switch on a string, only integer types.
8,930
49,342,652
I have a Django Web-Application that uses celery in the background for periodic tasks. Right now I have three docker images:

* one for the django application
* one for the celery workers
* one for the celery scheduler

whose `Dockerfile`s all look like this:

```
FROM alpine:3.7

ENV PYTHONUNBUFFERED 1

RUN mkdir /code
WORKDIR /code

COPY Pipfile Pipfile.lock ./

RUN apk update && \
    apk add python3 postgresql-libs jpeg-dev git && \
    apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
    pip3 install --no-cache-dir pipenv && \
    pipenv install --system && \
    apk --purge del .build-deps

COPY . ./

# Run the image as a non-root user
RUN adduser -D noroot
USER noroot

EXPOSE $PORT

CMD <Different CMD for all three containers>
```

So they are all exactly the same except for the last line. Would it make sense here to create some kind of base image that contains everything except the CMD, and have all three images use that as a base, adding only their respective CMD? Or won't that give me any advantages, because everything is cached anyway? Is a separation like you see above reasonable?

Two small bonus questions:

* Sometimes the `apk update ..` layer is cached by docker. How does docker know that there are no updates here?
* I often read that I should decrease layers as far as possible to reduce image size. But isn't that against the caching idea, and won't it result in longer builds?
2018/03/17
[ "https://Stackoverflow.com/questions/49342652", "https://Stackoverflow.com", "https://Stackoverflow.com/users/376172/" ]
I suggest using one Dockerfile and just updating your CMD at runtime. A little modification will make it work both locally and on Heroku. As far as Heroku is concerned, it provides environment variables you can use to start the container: [heroku set-up-your-local-environment-variables](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables)

```
FROM alpine:3.7

ENV PYTHONUNBUFFERED 1
ENV APPLICATION_TO_RUN=default_application

RUN mkdir /code
WORKDIR /code

COPY Pipfile Pipfile.lock ./

RUN apk update && \
    apk add python3 postgresql-libs jpeg-dev git && \
    apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
    pip3 install --no-cache-dir pipenv && \
    pipenv install --system && \
    apk --purge del .build-deps

COPY . ./

# Run the image as a non-root user
RUN adduser -D noroot
USER noroot

EXPOSE $PORT

CMD $APPLICATION_TO_RUN
```

So when you run your container, pass the application name to the run command:

```
docker run -it --name test -e APPLICATION_TO_RUN="celery beat" --rm test
```
I would recommend looking at [docker-compose](https://docs.docker.com/compose/) to simplify management of multiple containers. Use a single Dockerfile like the one you posted above, then create a `docker-compose.yml` that might look something like this:

```
version: '3'

services:
  # a django service serving an application on port 80
  django:
    build: .
    command: python manage.py runserver
    ports:
      - 8000:80

  # the celery worker
  worker:
    build: .
    command: celery worker

  # the celery scheduler
  scheduler:
    build: .
    command: celery beat
```

Of course, modify the commands here to be whatever you are using for your currently separate Dockerfiles. When you want to rebuild the image, `docker-compose build` will rebuild your container image from your Dockerfile for the first service, then reuse the built image for the other services (because they already exist in the cache). `docker-compose up` will spin up 3 instances of your container image, but overriding the run command each time. If you want to get more sophisticated, there are [plenty of resources](https://blog.syncano.io/configuring-running-django-celery-docker-containers-pt-1/) out there for the very common combination of django and celery.
8,931
61,882,562
I'm trying to access my REST API that I released on Heroku with docker, and to see that the dyno is running the gunicorn command that I put in the Dockerfile. The Dockerfile that I used is:

```
FROM ubuntu:18.04

RUN apt update
RUN apt install -y python3 python3-pip
RUN mkdir /opt/app

ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

COPY Ski4All/ /opt/app/
COPY requirements.txt /opt/app/

RUN pip3 install -r /opt/app/requirements.txt

ENV PORT=8000

CMD exec gunicorn Ski4All.wsgi:application --bind 0.0.0.0:$PORT
```

When I release, I go into the container via `heroku run bash -a "name app"`, and executing `ps aux` I don't see the api running. But it does run if I execute the command from my Dockerfile while I'm in the container. Any idea?
2020/05/19
[ "https://Stackoverflow.com/questions/61882562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13039952/" ]
@jhaos mentioned gunicorn was running in the root path. Change the following in the `Dockerfile`:

```
RUN mkdir /opt/app        ->  WORKDIR /opt/app
COPY Ski4All/ /opt/app/   ->  COPY Ski4All .
```
The problem was that the gunicorn command was running in the / root path and not in the correct workdir. In the comments @HariHaraSuhan solved the error.
8,932
12,183,763
I am following the tutorial for deploying a django project on AWS elastic beanstalk here: <http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html> My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message: `Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found` That leads me to believe that the tutorial is missing a part about installing django files on the server or at least configuring my project to recognize django-admin.py. I have django installed on my local machine so it works there. I know python support is brand new for elastic beanstalk but has anyone deployed django to it?
2012/08/29
[ "https://Stackoverflow.com/questions/12183763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/656707/" ]
I believe you don't need to put container\_commands in .config because there is no database or table at this moment.
I followed the same tutorial recently and had a similar result. At step 6, upon seeing the default django 'congrats' page render locally, I deployed to EB as instructed and got a 404 instead of the default 'congrats' page. I decided to use the code up to that point as a foundation for following the '[getting started with django tutorial](https://docs.djangoproject.com/en/dev/intro/tutorial01/)' which led me to a successful rendering of a 'home' view. This is a much more useful place to be anyway. I do agree that there is something wrong with the AWS tutorial and posted to the AWS forums [here](https://forums.aws.amazon.com/thread.jspa?threadID=103117&tstart=0).
8,933
730,573
I have gotten quite familiar with django's email sending abilities, but I haven't seen anything about it receiving and processing emails from users. Is this functionality available? A few google searches have not turned up very promising results. Though I did find this: [Receive and send emails in python](https://stackoverflow.com/questions/348392/receive-and-send-emails-in-python)

Am I going to have to roll my own? If so, I'll be posting that app faster than you can say... whatever you say.

thanks, Jim

**update**: I'm not trying to make an email server; I just need to add some functionality where you can email an image to the site and have it pop up in your account.
2009/04/08
[ "https://Stackoverflow.com/questions/730573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2908/" ]
There's an app called [jutda-helpdesk](http://code.google.com/p/jutda-helpdesk/) that uses Python's `poplib` and `imaplib` to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access. This is adapted from their [get\_email.py](http://code.google.com/p/jutda-helpdesk/source/browse/trunk/management/commands/get_email.py): ``` import poplib import imaplib def process_mail(mb): print "Processing: %s" % mb if mb.email_box_type == 'pop3': if mb.email_box_ssl: if not mb.email_box_port: mb.email_box_port = 995 server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port)) else: if not mb.email_box_port: mb.email_box_port = 110 server = poplib.POP3(mb.email_box_host, int(mb.email_box_port)) server.getwelcome() server.user(mb.email_box_user) server.pass_(mb.email_box_pass) messagesInfo = server.list()[1] for msg in messagesInfo: msgNum = msg.split(" ")[0] msgSize = msg.split(" ")[1] full_message = "\n".join(server.retr(msgNum)[1]) # Do something with the message server.dele(msgNum) server.quit() elif mb.email_box_type == 'imap': if mb.email_box_ssl: if not mb.email_box_port: mb.email_box_port = 993 server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port)) else: if not mb.email_box_port: mb.email_box_port = 143 server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port)) server.login(mb.email_box_user, mb.email_box_pass) server.select(mb.email_box_imap_folder) status, data = server.search(None, 'ALL') for num in data[0].split(): status, data = server.fetch(num, '(RFC822)') full_message = data[0][1] # Do something with the message server.store(num, '+FLAGS', '\\Deleted') server.expunge() server.close() server.logout() ``` `mb` is just some object to store all the mail server info, the rest should be pretty clear. (Note: the original snippet printed `%s" % q`, but `q` is not defined in this adaptation; `mb` is what's available here.) You'll probably need to check the docs on `poplib` and `imaplib` to get specific parts of the message, but hopefully this is enough to get you going.
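Once `full_message` has been fetched as above, the stdlib `email` package can pull out the image attachment the asker wants. A minimal sketch; the message below is a stand-in for a real fetched one, and the field values are illustrative:

```python
import email
from email import policy

# A stand-in for a fetched full_message: a one-part message whose body is
# a base64-encoded "image" (the bytes b"hello" here).
raw = (
    "From: user@example.com\r\n"
    "Subject: photo\r\n"
    "MIME-Version: 1.0\r\n"
    'Content-Type: image/png; name="pic.png"\r\n'
    "Content-Transfer-Encoding: base64\r\n"
    "\r\n"
    "aGVsbG8=\r\n"
)

msg = email.message_from_string(raw, policy=policy.default)
images = []
for part in msg.walk():
    if part.get_content_maintype() == "image":
        # decode=True undoes the base64 transfer encoding
        images.append((part.get_filename(), part.get_payload(decode=True)))

print(images)  # [('pic.png', b'hello')]
```

From here, saving the decoded bytes to disk or to a Django `ImageField` on the user's account is straightforward.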
Django is really intended as a web server (well, as a framework that fits into a web server), not as an email server. I suppose you could put some code into a Django web application that starts up an email server, using the kind of code shown in that question you linked to, but I really wouldn't recommend it; it's an abuse of the capabilities of dynamic web programming. The normal practice is to have separate email and web servers, and for that you would want to look into something like Sendmail or (better yet) Postfix. For POP3 you would also need something like Dovecot or Courier, I think. (It's certainly possible to have the email server notify your web application when emails are received so it can act on them, if that's what you want to do.) *EDIT*: in response to your comments: yes, you are trying to make (or at least use) an email server. An email server is just a program that receives emails (and may be capable of sending them as well, but you don't need that). You could definitely write a small email server in Python that just receives these emails and saves the images to the filesystem or a database or whatever. (Might be worth asking a new question about that.) But don't make it part of your Django web app; keep it as its own separate program.
8,937
39,840,323
I'm using tensorflow 0.10 and I was benchmarking the examples found in the [official HowTo on reading data](https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#reading-data). This HowTo illustrates different methods to move data to tensorflow, using the same MNIST example. I was surprised by the results and I was wondering if anyone has enough low-level understanding to explain what is happening. In the HowTo there are basically 3 methods to read in data: * `Feeding`: building the mini-batch in python and passing it with `sess.run(..., feed_dict={x: mini_batch})` * `Reading from files`: use `tf` operations to open the files and create mini-batches. (Bypass handling data in python.) * `Preloaded data`: load all the data in either a single `tf` variable or constant and use `tf` functions to break that up in mini-batches. The variable or constant is pinned to the cpu, not gpu. The scripts I used to run my benchmarks are found within tensorflow: * `Feeding`: [examples/tutorials/mnist/fully\_connected\_feed.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/fully_connected_feed.py) * `Reading from files`: [examples/how\_tos/reading\_data/convert\_to\_records.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/convert_to_records.py) and [examples/how\_tos/reading\_data/fully\_connected\_reader.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py) * `Preloaded data (constant)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py) * `Preloaded data (variable)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded\_var.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py) I ran those 
scripts unmodified, except for the last two because they crash --for version 0.10 at least-- unless I add an extra `sess.run(tf.initialize_local_variables())`. Main Question ------------- The time to execute 100 mini-batches of 100 examples running on a GTX1060: * `Feeding`: `~0.001 s` * `Reading from files`: `~0.010 s` * `Preloaded data (constant)`: `~0.010 s` * `Preloaded data (variable)`: `~0.010 s` Those results are quite surprising to me. I would have expected `Feeding` to be the slowest since it does almost everything in python, while the other methods use lower-level tensorflow/C++ to carry similar operations. It is the complete opposite of what I expected. Does anyone understand what is going on? Secondary question ------------------ I have access to another machine which has a Titan X and older NVidia drivers. The relative results were roughly in line with the above, except for `Preloaded data (constant)` which was catastrophically slow, taking many seconds for a single mini-batch. Is this some known issue that performance can vary greatly with hardware/drivers?
2016/10/03
[ "https://Stackoverflow.com/questions/39840323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/765009/" ]
**Update Oct 9** the slowness comes because the computation runs too fast for Python to pre-empt the computation thread and schedule the pre-fetching threads. Computation in the main thread takes 2ms and apparently that's too little for the pre-fetching thread to grab the GIL. The pre-fetching thread has a larger delay and hence can always be pre-empted by the computation thread. So the computation thread runs through all of the examples, and then spends most of the time blocked on the GIL as some prefetching thread gets scheduled and enqueues a single example. The solution is to increase the number of Python threads, increase the queue size to fit the entire dataset, start the queue runners, and then pause the main thread for a couple of seconds to give the queue runners time to pre-populate the queue. **Old stuff** That's surprisingly slow. This looks like some kind of special case making the last 3 examples unnecessarily slow (most effort went into optimizing large models like ImageNet, so MNIST didn't get as much attention). You can diagnose the problems by getting timelines, as described [here](https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-225754659) Here [are](https://github.com/yaroslavvb/stuff/tree/master/input_benchmarks) 3 of those examples with timeline collection enabled. Here's the timeline for the `feed_dict` implementation [![enter image description here](https://i.stack.imgur.com/HCrZP.png)](https://i.stack.imgur.com/HCrZP.png) The important thing to notice is that matmul takes a good chunk of the time, so the reading overhead is not significant. Now here's the timeline for the `reader` implementation [![enter image description here](https://i.stack.imgur.com/7ppyp.png)](https://i.stack.imgur.com/7ppyp.png) You can see that the operation is bottlenecked on QueueDequeueMany, which takes a whopping 45ms.
If you zoom in, you'll see a bunch of tiny MEMCPY and Cast operations, which is a sign of some op being CPU only (`parse_single_example`), and the dequeue having to schedule multiple independent CPU->GPU transfers. For the `var` example below with GPU disabled, I don't see tiny little ops, but QueueDequeueMany still takes over 10ms. The timing seems to scale linearly with batch size, so there's some fundamental slowness there. Filed [#4740](https://github.com/tensorflow/tensorflow/issues/4740) [![enter image description here](https://i.stack.imgur.com/3SDZ2.png)](https://i.stack.imgur.com/3SDZ2.png)
Yaroslav nails the problem well. With small models you'll need to speed up the data import. One way to do this is with the Tensorflow function, [tf.TFRecordReader.read\_up\_to](https://www.tensorflow.org/api_docs/python/io_ops/readers#TFRecordReader.read_up_to), that reads multiple records in each `session.run()` call, thereby removing the excess overhead caused by multiple calls. ``` enqueue_many_size = SOME_ENQUEUE_MANY_SIZE reader = tf.TFRecordReader(options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.ZLIB)) _, queue_batch = reader.read_up_to(filename_queue, enqueue_many_size) batch_serialized_example = tf.train.shuffle_batch( [queue_batch], batch_size=batch_size, num_threads=thread_number, capacity=capacity, min_after_dequeue=min_after_dequeue, enqueue_many=True) ``` This was also addressed in this [SO question](https://stackoverflow.com/a/41768163/599139).
8,940
52,407,452
When I run the following piece of code, the print statements in the method, which I have dynamically assigned to the class "Test", only return "my\_unique\_method\_name". **How can I print the method name I have given it?** (Which is "wizardry" in this case; see "Expected Output" below.) ``` #!/usr/bin/python3 import sys import inspect class Test: pass def my_unique_method_name(self): print(inspect.stack()[0][3]) print(sys._getframe().f_code.co_name) print(inspect.currentframe().f_code.co_name) Test.wizardry = my_unique_method_name t = Test() t.wizardry() ``` **Current Output** ```none my_unique_method_name my_unique_method_name my_unique_method_name ``` **Expected Output** ```none wizardry ```
2018/09/19
[ "https://Stackoverflow.com/questions/52407452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5039582/" ]
As you have already made the step to use template matching with the `template match="Class/Student"` I would suggest to stick with that approach and simply write two templates, one for the `Class` elements, the other for the `Student` elements ``` <xsl:template match="Class"> <ul> <xsl:apply-templates select="Student"/> </ul> </xsl:template> <xsl:template match="Student"> <li> <xsl:apply-templates/> </li> </xsl:template> ``` For more complex cases this results in cleaner and more modular code.
You want one `ul` per `Class`, not per `Student`, so change ``` <xsl:template match="Class/Student"> ``` to ``` <xsl:template match="Class"> ``` Then change ``` <xsl:for-each select="../Student"> ``` to ``` <xsl:for-each select="Student"> ``` to get one `li` per `Student` child element of the `Class` current node.
8,943
39,745,460
I am working on some piece of python code that calls various linux tools (like ssh) for automation purposes. Right now I am looking into "return code" handling. Thus: I am looking for a simple way to run *some* command that gives me a specific non-zero return code; something like ``` echo "this is a testcommand, that should return with rc=5" ``` for example. But of course, the above comes back with rc=0. I know that I can call `false`, but this will always return with rc=1. I am looking for something that gives me an rc that I can control. Edit: first answers suggest to *exit*; but the problem with that: *exit* is a bash function. So, when I try to run that from within a python script, I get "No such file or directory: exit". So, I am actually looking for some "binary" tool that gives me that (obviously one can write some simple script to get that; I am just looking if there is something similar to *false* that is already shipped with any Linux/Unix).
2016/09/28
[ "https://Stackoverflow.com/questions/39745460", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1531124/" ]
Run `exit` in a subshell. ``` $ (exit 5) ; echo $? 5 ```
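Since the asker is driving this from Python, the same subshell trick can be exercised with `subprocess`; a minimal sketch:

```python
import subprocess

# Run a shell command that exits with a status we choose, then read the
# code back from Python. Handy for testing rc-handling logic.
result = subprocess.run(["sh", "-c", "exit 5"])
print(result.returncode)  # 5
```

Any non-zero value up to 255 works the same way, so you can exercise each rc-handling branch in turn.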
This is not exactly what you are asking, but a custom `rc` can be achieved through the `exit` command: ``` echo "this is a test command, that should return with " ;exit 5 echo $? 5 ```
8,944
4,377,260
As an exercise I built a little script that queries the Google Suggest JSON API. The code is quite simple: ``` query = 'a' url = "http://clients1.google.co.jp/complete/search?hl=ja&q=%s&json=t" %query response = urllib.urlopen(url) result = json.load(response) UnicodeDecodeError: 'utf8' codec can't decode byte 0x83 in position 0: invalid start byte ``` If I try to `read()` the response object, this is what I've got: ``` '["a",["amazon","ana","au","apple","adobe","alc","\x83A\x83}\x83]\x83\x93","\x83A\x83\x81\x83u\x83\x8d","\x83A\x83X\x83N\x83\x8b","\x83A\x83\x8b\x83N"],["","","","","","","","","",""]]' ``` So it seems that the error is raised when Python tries to decode the string. This only happens with google.co.jp and the Japanese language. I tried the same code with different country/languages and I **do not** get the same issue: when I try to deserialize the object everything works OK. * I checked the response headers and they always specify utf-8 as the response encoding. * I checked the JSON string with an online parser (http://json.parser.online.fr/) and again all seems OK Any ideas to solve this problem? What makes the JSON `load()` function choke? Thanks in advance.
2010/12/07
[ "https://Stackoverflow.com/questions/4377260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/246242/" ]
The response headers (`print response.headers`) contain the following information: ``` Content-Type: text/javascript; charset=Shift_JIS ``` Note the charset. If you specify this encoding in `json.load` it will work: ``` result = json.load(response, encoding='shift_jis') ```
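In Python 3, `json.load` no longer accepts an `encoding` argument, so the equivalent fix is to decode the bytes first and then parse. A sketch using a byte string like the one in the question (those bytes are the katakana for "Amazon"):

```python
import json

# Shift_JIS bytes despite the utf-8 claim; \x83A \x83} \x83] \x83\x93
# decode to the katakana for "Amazon" seen in the raw dump above.
raw = b'["a",["amazon","\x83A\x83}\x83]\x83\x93"]]'

result = json.loads(raw.decode("shift_jis"))
print(result)  # ['a', ['amazon', 'アマゾン']]
```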
Regardless of what the spec says, the string "\x83A\x83}\x83]\x83\x93" is not UTF-8. At a guess, it is one of [ "cp932", "shift\_jis", "shift\_jis\_2004", "shift\_jisx0213" ]; try decoding as one of these.
8,946
32,566,625
Say I generated a `dots = psychopy.visual.DotStim`. Is it possible to change the number of dots later? `dots.nDots = 5` leads to an error on the next `dots.draw()` because the underlying matrices don't match: ``` Traceback (most recent call last): File "/home/jonas/Documents/projects/work pggcs/experiment/dots.py", line 32, in <module> dots_right.draw() File "/usr/lib/python2.7/dist-packages/psychopy/visual/dot.py", line 279, in draw self._update_dotsXY() File "/usr/lib/python2.7/dist-packages/psychopy/visual/dot.py", line 362, in _update_dotsXY self._verticesBase[:,0] += self.speed*numpy.reshape(numpy.cos(self._dotsDir),(self.nDots,)) File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 218, in reshape return reshape(newshape, order=order) ValueError: total size of new array must be unchanged ``` The same is true of `psychopy.visual.ElementArrayStim` for which setting `stim.nElements = 5` similarly results in an error on the next draw. A solution is of course to instantiate a whole new `DotStim` or `ElementArrayStim` every time the number of dots/elements should change but this seems too heavy.
2015/09/14
[ "https://Stackoverflow.com/questions/32566625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297830/" ]
It can be fixed for the `DotStim`: ``` dots.nDots = 5 dots._dotsDir = [0]*dots.nDots dots._verticesBase = dots._newDotsXY(dots.nDots) ``` This sets the movement of all dots to 0, but you can change that value to whatever you like, or specify it for individual dots. This is a hack which will likely break if you modify other aspects of the DotStim. I haven't figured out a solution for `ElementArrayStim`.
Yes, it would be nice to be able to do this but I haven't gotten around to it. The code that has to be run when the nDots/nElements changes is pretty close to starting from scratch with a new stimulus `__init__`, so adding this in 'correctly' probably means some refactoring (move a lot of the `__init__` code into setNDots() and then call it from `__init__`). There's an additional potential issue, which is that the elements might have changed (e.g. the user has set the orientations) and the user then updates the number of elements. Which ones do we remove? And what orientation do we give those that we add? (This is less of an issue for DotStim though.) Basically, the issue is a little thorny and hasn't been a priority for me.
8,947
48,279,419
How does the quality of my code get affected if I don't use the `__init__` method in Python? A simple example would be appreciated.
2018/01/16
[ "https://Stackoverflow.com/questions/48279419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7735772/" ]
*Short answer*; nothing happens. *Long answer*; if you have a class `B`, which inherits from a class `A`, and if `B` has no `__init__` method defined, then the parent's (in this case, `A`) `__init__` is invoked. ``` In [137]: class A: ...: def __init__(self): ...: print("In A") ...: In [138]: class B(A): pass In [139]: B() In A Out[139]: <__main__.B at 0x1230a5ac8> ``` If `A` has no predecessor, then the almighty superclass `object`'s `__init__` is invoked, which does nothing. You can view class hierarchy by querying the dunder `__mro__`. ``` In [140]: B.__mro__ Out[140]: (__main__.B, __main__.A, object) ``` Also see [Why do we use \_\_init\_\_ in Python classes?](https://stackoverflow.com/questions/8609153/why-do-we-use-init-in-python-classes) for some context as to *when and where* its usage is appropriate.
It's not a matter of quality here. In object-oriented design, which Python supports, `__init__` is the way to supply data when an object is first created. In OOP, this is called a `constructor`. In other words, **a constructor is a method which prepares a valid object**. There are design patterns in large projects that rely on the constructor feature provided by Python; without it they would not function. For example: * You want to keep track of every object that is created for a class; you need a method which is executed every time an object is created, hence the **constructor**. * Another useful example for a constructor: let's say you want to create a `customer` object for a bank. A customer of a bank must have an `account number`, so you basically have to set a rule for what a valid customer object is, hence the constructor.
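A short sketch of both examples above (the class and field names are made up for illustration):

```python
class Customer:
    """A bank customer; a valid one always has an account number."""

    created = 0  # bumped every time __init__ runs, tracking object creation

    def __init__(self, account_number):
        # The constructor enforces the "valid object" rule up front.
        if not account_number:
            raise ValueError("a valid customer needs an account number")
        self.account_number = account_number
        Customer.created += 1

alice = Customer("NL01-1234")
bob = Customer("NL01-5678")
print(Customer.created)  # 2
```

Without `__init__`, callers could create a `Customer` with no account number at all, and there would be no single place to hook the bookkeeping.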
8,948
11,577,619
I got two files each containing a column with "time" and one with "id" like this: File 1: ``` time id 11.24 1 11.26 2 11.27 3 11.29 5 11.30 6 ``` File 2: ``` time id 11.25 1 11.26 3 11.27 4 11.31 6 11.32 7 11.33 8 ``` I'm trying to write a Python script which can subtract the times of the rows with matching ids from each other. The files are of different lengths. I tried using `set(id's of file 1) & set(id's of file 2)` to get the matching ids, but now I'm stuck. Any help will be much appreciated, thank you.
2012/07/20
[ "https://Stackoverflow.com/questions/11577619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1540477/" ]
Python sets do not support ordering of their elements. I would store the data as a dictionary: ``` file1 = {1:'11:24', 2:'11:26', ... etc} file2 = {1:'11:25', 3:'11:26', ... etc} ``` Then loop over the intersection of the keys (or the union, based on your needs) to do the subtraction (time-based or math-based).
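With the sample data from the question, the sketch looks like this (dict key views behave like sets, so their intersection gives the matching ids directly):

```python
# Times keyed by id, one dict per file, as suggested above.
file1 = {1: 11.24, 2: 11.26, 3: 11.27, 5: 11.29, 6: 11.30}
file2 = {1: 11.25, 3: 11.26, 4: 11.27, 6: 11.31}

# keys() & keys() yields the ids present in both files; round() tidies
# up floating-point noise from the subtraction.
diffs = {i: round(file2[i] - file1[i], 2) for i in file1.keys() & file2.keys()}
print(sorted(diffs.items()))  # [(1, 0.01), (3, -0.01), (6, 0.01)]
```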
This is a bit old school. Look at using a default dict from the `collections` module for a more elegant approach. This will work for any number of files, I've named mine `f1`, `f2` etc. The general idea is to process each file and build up a list of time values for each id. After file processing, iterate over the dictionary subtracting each value as you go (via `reduce` on the values list). ``` from operator import sub d = {} for fname in ('f1','f2'): for l in open(fname): if l.startswith('time'): continue # skip the "time id" header row t, i = l.split() d[i] = d.get(i, []) + [float(t)] results = {} for k,v in d.items(): results[k] = reduce(sub, v) print results {'1': -0.009999999999999787, '3': 0.009999999999999787, '2': 11.26, '5': 11.29, '4': 11.27, '7': 11.32, '6': -0.009999999999999787, '8': 11.33} ``` **Updated** If you want to include only those ids with more than one value: ``` results = {} for k,v in d.items(): if len(v) > 1: results[k] = reduce(sub, v) ```
8,949
23,221,577
I'm following [this tutorial on Embedding Python on C](https://docs.python.org/2.7/extending/embedding.html), but their [Pure Embedding](https://docs.python.org/2.7/extending/embedding.html#pure-embedding) example is not working for me. I have on the same folder (taken from the example): **call.c** ``` #include <Python.h> int main(int argc, char *argv[]) { PyObject *pName, *pModule, *pDict, *pFunc; PyObject *pArgs, *pValue; int i; if (argc < 3) { fprintf(stderr,"Usage: call pythonfile funcname [args]\n"); return 1; } Py_Initialize(); pName = PyString_FromString(argv[1]); /* Error checking of pName left out */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, argv[2]); /* pFunc is a new reference */ if (pFunc && PyCallable_Check(pFunc)) { pArgs = PyTuple_New(argc - 3); for (i = 0; i < argc - 3; ++i) { pValue = PyInt_FromLong(atoi(argv[i + 3])); if (!pValue) { Py_DECREF(pArgs); Py_DECREF(pModule); fprintf(stderr, "Cannot convert argument\n"); return 1; } /* pValue reference stolen here: */ PyTuple_SetItem(pArgs, i, pValue); } pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pValue != NULL) { printf("Result of call: %ld\n", PyInt_AsLong(pValue)); Py_DECREF(pValue); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); fprintf(stderr,"Call failed\n"); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]); } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); fprintf(stderr, "Failed to load \"%s\"\n", argv[1]); return 1; } Py_Finalize(); return 0; } ``` **multiply.py** ``` def multiply(a,b): print "Will compute", a, "times", b c = 0 for i in range(0, a): c = c + b return c ``` And I compile and run like this: ``` $ gcc $(python-config --cflags) call.c $(python-config --ldflags) -o call call.c: In function ‘main’: call.c:6:33: warning: unused variable ‘pDict’ [-Wunused-variable] PyObject *pName, *pModule, 
*pDict, *pFunc; ^ # This seems OK because it's true that pDict is not used $ ./call multiply multiply 3 2 ImportError: No module named multiply Failed to load "multiply" ``` Why can't it load the `multiply` module? The example doesn't show filenames or paths. Could it be a path problem? Thanks a lot.
2014/04/22
[ "https://Stackoverflow.com/questions/23221577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1970845/" ]
Try setting `PYTHONPATH`: ``` export PYTHONPATH=`pwd` ```
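Why this works: `Py_Initialize()` builds `sys.path` from `PYTHONPATH` plus the standard locations, but, unlike running a script directly, the embedded interpreter gets no entry for the directory your files live in. The effect can be sketched from plain Python, no embedding needed:

```python
import os
import subprocess
import sys

# Launch a fresh interpreter with PYTHONPATH pointing at the current
# directory, then check that the directory really landed on sys.path,
# which is what lets the import machinery find multiply.py.
env = dict(os.environ, PYTHONPATH=os.getcwd())
check = "import os, sys; print(os.getcwd() in sys.path)"
out = subprocess.run([sys.executable, "-c", check],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())  # True
```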
Really, the example should have `PySys_SetPath(".");` after initialization.
8,953
71,363,142
How do you call a PHP function with HTML? I have this: ``` <input class="myButton" type="button" name="woonkameruit" value="UIT" onclick="kameroff('woonkamer');<?php woonkameruit()?>"> ``` and ``` function woonkameruit() { exec("sudo pkill python"); exec("sudo python3/home/pi/Documents/Programmas/WOONKAMERUit.py"); } ``` Can you help?
2022/03/05
[ "https://Stackoverflow.com/questions/71363142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17288855/" ]
You have to send the function name via Javascript, and create some controller like logic. It is a similar to what Laravel Livewire does. For Example: frontend.php: ``` <!-- Frontend Logic --> <input id="myButton" class="myButton" type="button" name="woonkameruit" value="UIT"> <!-- Put Before </body> --> <script type="text/javascript"> function callBackend(f) { fetch(`https://myapp.com/backend.php?f=${f}`) .then(response => response.json()) .then(data => { // Do what you have to do with response console.log(data) if(data.success) { // Function Run Successfully } else { // Function Error } }) // Other Errors .catch(err => console.error(err)); } const myButton = document.getElementById('myButton') myButton.onclick = (e) => { e.preventDefault() callBackend('woonkameruit') } </script> ``` backend.php: ``` <?php // Backend logic function send_json_to_client($status, $data) { $cors = "https://myapp.com"; header("Access-Control-Allow-Origin: $cors"); header('Content-Type: application/json; charset=utf-8'); http_response_code($status); echo json_encode($data); exit(1); } function woonkameruit() { $first_command = exec("sudo pkill python"); $second_command = exec("sudo python3/home/pi/Documents/Programmas/WOONKAMERUit.py"); $success = false; $message = 'Something Went Wrong'; $status = 500; // If everything went good. 
Put here your success logic if($first_command !== false && $second_command !== false) { $success = true; $message = 'Everything Is Good'; $status = 200; } return [ 'status' => $status, 'response' => [ 'success' => $success, 'message' => $message, ], ]; } $authorizedFunctions = [ 'woonkameruit' ]; if(isset($_GET['f'])) { $fn = filter_input(INPUT_GET, 'f', FILTER_SANITIZE_STRING); if( in_array( $fn, $authorizedFunctions ) ) { $response = $fn(); send_json_to_client($response['status'], $response['response']); } else { send_json_to_client(403, [ 'success' => false, 'message' => 'Unauthorized' ]); } } else { send_json_to_client(404, [ 'success' => false, 'message' => 'Not Found' ]); } ```
You cannot directly call PHP functions from HTML, because PHP runs on the backend. However, you can send a request to the backend with JS or HTML and have PHP handle it so that it executes a function. There are multiple ways to do this: * HTML: a form * JS: Ajax If you do not know how to implement this, I suggest looking at websites such as W3Schools, or searching on YouTube.
8,954
3,636,928
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked, didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another. Note: control characters are **not** printable for my purposes. --- Edit: I was/am looking for a single function, not a roll-your-own solution: What I ended up with is: ``` all(ord(c) < 127 and c in string.printable for c in input_str) ```
2010/09/03
[ "https://Stackoverflow.com/questions/3636928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1343/" ]
``` >>> # Printable >>> s = 'test' >>> len(s)+2 == len(repr(s)) True >>> # Unprintable >>> s = 'test\x00' >>> len(s)+2 == len(repr(s)) False ```
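Wrapped up as a function, with the reasoning spelled out. Note this is a heuristic: a string that itself contains both quote characters will also grow under `repr()` and be misreported.

```python
def is_printable(s):
    # Printable characters survive repr() untouched, so a printable string
    # only gains the two surrounding quotes; control characters get escaped
    # (e.g. '\x00' becomes '\\x00') and grow the result further.
    return len(s) + 2 == len(repr(s))

print(is_printable("test"))      # True
print(is_printable("test\x00"))  # False
```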
``` import string ctrlchar = "\n\r| " # ------------------------------------------------------------------------ # This will let you control what you deem 'printable' # Clean enough to display any binary def isprint(chh): if ord(chh) > 127: return False if ord(chh) < 32: return False if chh in ctrlchar: return False if chh in string.printable: return True return False ```
8,955
53,727,275
I am trying to amend a group of files in a folder by adding F to the 4th line (which is index 3 in Python, if I'm correct). With the code below, the script just runs continuously and never makes the amendments. Anyone got any ideas? ``` import os from glob import glob list_of_files = glob('*.gjf') # get list of all .gjf files for file in list_of_files: # read file: with open(file, 'r+') as f: lines=f.readlines() for line in lines: lines.insert(3,'F') ```
2018/12/11
[ "https://Stackoverflow.com/questions/53727275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10750169/" ]
The error is expected. The Functions runtime is based on C#: when the response tries to add the `Allow` header, the underlying C# code checks its name. It's by design that `Allow` is a read-only header in [HttpContentHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpcontentheaders?view=netframework-4.7.2#properties), hence we can't add it in [HttpResponseHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpresponseheaders?view=netframework-4.7.2). Here are two workarounds: 1. Use a custom header name like `Allow-Method`. 2. Create a new Function app; it uses Function runtime 2.0 by default, where the `Allow` header can be set.
Have you tried adding the allowed methods to the **function.json** files as noted in the [documentation](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook#trigger---configuration)? Example below: ``` "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "post", "options" ] ```
8,965
71,044,523
For example, I have enabled mTLS in my Istio service in STRICT mode, and I have an authorization policy that has a source.principals rule check. Now I want to access these rule details, like source.principals and source.namespace, after the request is authenticated and authorized, so that I can do more business logic in my Flask (Python) service. My Python code looks like this: ``` from flask import Flask, request app = Flask(__name__) @app.route("/mutate-hook", methods=['POST']) def mutate(): source = request.headers.get('source') # I am expecting this to be the source from: https://istio-releases.github.io/v0.1/docs/reference/config/mixer/attribute-vocabulary.html print(source) request_json = request.get_json() # I am expecting this to be the source from: https://istio-releases.github.io/v0.1/docs/reference/config/mixer/attribute-vocabulary.html request_source = request_json.get('source') print(request_source["principal"], request_source['namespace']) ```
2022/02/09
[ "https://Stackoverflow.com/questions/71044523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2090808/" ]
Have a look at the [AuthConfig class](https://github.com/manfredsteyer/angular-oauth2-oidc/blob/master/projects/lib/src/auth.config.ts) and try setting `loginUrl` to the authorize endpoint and also set the token endpoint explicitly. See if this gives you a different error. A good library should allow you to set endpoints explicitly. As an example, a couple of years ago an SPA of mine did not work with Azure AD since it did not allow CORS requests to the token endpoint. I worked around this by setting a token endpoint in the [oidc client library](https://github.com/IdentityModel/oidc-client-js) to point to an API, so that I could call the token endpoint from there.
Extending **auth.config** with additional loginUrl and tokenUrl entries solved the issue. This is the final version of my config file: ``` export const OAUTH_CONFIG: AuthConfig = { issuer: environment.identityProviderBaseUrl, loginUrl: environment.identityProviderLoginUrl, logoutUrl: environment.identityProviderLogoutUrl, redirectUri: environment.identityProvideRedirectUrl, tokenEndpoint: environment.identityProviderTokenUrl, clientId: environment.clientId, dummyClientSecret: environment.clientSecret, responseType: 'code', requireHttps: false, scope: 'write', showDebugInformation: true, oidc: false, useIdTokenHintForSilentRefresh: false }; ``` Of course, we should then remember to remove `loadDiscoveryDocumentAndTryLogin()`. ``` public configureCodeFlow(): void { this.authService.configure(OAUTH_CONFIG); this.authService.setupAutomaticSilentRefresh(undefined, 'access_token'); this.authService.tryLoginCodeFlow(); } ```
8,968
14,386,536
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
    return view_func(request, *args, **kwargs)
  File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
    referral = Referral()
  File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
    val = field.get_default()
  File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
    if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral(models.Model):
    opening = models.ForeignKey(Opening, related_name='referrals', null=False, blank=False)
    origin_request = models.ForeignKey('common.request', related_name='referrals', null=True, default=None)
    candidate = models.ForeignKey(User, related_name='referrals', null=False, blank=False)
    intro = models.TextField(max_length=1000, null=False, blank=False)
    experience = models.TextField(max_length=5000, null=False, blank=False)
    email = models.CharField(max_length=255, null=False, blank=False)
    phone = models.CharField(max_length=255, null=False, blank=True, default='')

    def __unicode__(self):
        return u"%s" % self.id
```
Is this a bug in Django, or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
2013/01/17
[ "https://Stackoverflow.com/questions/14386536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1988262/" ]
**UPDATE** (with solution below)

I've been digging into the Django model code and it seems like there is a bug that creates a race condition when using "app.model"-based identifiers for the related field in a ForeignKey. When the application is running in production mode as opposed to DEBUG, the ForeignKey.get_default method referenced in the exception above tries to check whether the provided default value is an instance of the related field (self.rel.to):
```
def get_default(self):
    "Here we check if the default value is an object and return the to_field if so."
    field_default = super(ForeignKey, self).get_default()
    if isinstance(field_default, self.rel.to):
        return getattr(field_default, self.rel.get_related_field().attname)
    return field_default
```
Initially, when a ForeignKey is instantiated with a string-based related field, self.rel.to is set to the string-based identifier. There is a separate function in related.py called add_lazy_relation that, under normal circumstances, attempts to convert this string-based identifier into a model class reference. Because models are loaded in a lazy fashion, it's possible for this conversion to be deferred until the AppCache is fully loaded. Therefore, if get_default is called on a string-based ForeignKey relation before the AppCache is fully populated, it's possible for a TypeError exception to be raised. Apparently putting my application into production mode shifted the timing of model caching enough that this error started occurring for me.

**SOLUTION**

It seems this is genuinely a bug in Django, but here's how to get around it if you do ever run into this problem. Add the following code snippet immediately before you instantiate a troublesome model:
```
from django.db.models.loading import cache as model_cache
if not model_cache.loaded:
    model_cache._populate()
```
This checks the loaded flag on the AppCache to determine whether the cache is fully populated. If it is not, we force the cache to fully populate now, and the problem is solved.
Another cause: make sure that all ForeignKeys reference models in the same application (app label). I was banging my head against the wall for a while on this one.
```
class Meta:
    db_table = 'reservation'
    app_label = 'admin'
```
8,969
50,866,111
I'm working with the retrain python file from this demo. I'm getting different types of files: [![enter image description here](https://i.stack.imgur.com/WLY0Y.png)](https://i.stack.imgur.com/WLY0Y.png) [![enter image description here](https://i.stack.imgur.com/htD6r.png)](https://i.stack.imgur.com/htD6r.png) [![enter image description here](https://i.stack.imgur.com/mC8aI.png)](https://i.stack.imgur.com/mC8aI.png) I want to freeze the graph.pb with the checkpoint files, then optimize the frozen file, then convert the optimized file to a tflite file in order to use it in an Android app. I've tried different ways to freeze the file but no luck, getting

> checkpoint file doesn't exist

in the terminal, and

> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 1: invalid start byte

How do I complete all the steps to get the tflite file, and how do I combine the labels.txt file? Note: here is the command I used in the terminal:
```
python freeze_graph.py \
  --input_graph=/home/automator/Desktop/retrain/code/graph/graph.pb \
  --input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model.ckpt \
  --output_graph=/home/automator/Desktop/retrain/code/frozen.pb \
  --output_node_names=output_node \
  --input_saved_model_dir=/home/automator/Desktop/retrain/code/export/frozen.pb \
  --output_node_names=outInput
```
The error: checkpoint '' doesn't exist!

**Tried:**
```
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model.ckpt
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/modelmodel.ckpt
....
```
Please help!
2018/06/14
[ "https://Stackoverflow.com/questions/50866111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4024038/" ]
Here is a nice script to freeze a graph:
```
import os
import argparse
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.platform import gfile


def load_graph_def(model_path, sess=None):
    if os.path.isfile(model_path):
        with gfile.FastGFile(model_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
    else:
        sess = sess if sess is not None else tf.get_default_session()
        saver = tf.train.import_meta_graph(model_path + '.meta')
        saver.restore(sess, model_path)


def freeze_from_checkpoint(checkpoint_file, output_layer_name):
    model_folder = os.path.basename(checkpoint_file)
    output_graph = os.path.join(model_folder, checkpoint_file + '.pb')

    with tf.Session() as sess:
        load_graph_def(checkpoint_file)
        graph = tf.get_default_graph()
        input_graph_def = graph.as_graph_def()

        print("Exporting graph...")
        output_graph_def = graph_util.convert_variables_to_constants(
            sess, input_graph_def, output_layer_name.split(","))

        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('model_path')
    parser.add_argument('output_layer')
    args = parser.parse_args()
    freeze_from_checkpoint(checkpoint_file=args.model_path,
                           output_layer_name=args.output_layer)
```
Save it as `freeze_graph.py`. Call it:

`python freeze_graph.py /home/automator/Desktop/retrain/code/tmp/model.data-000000-of-00001 "output_node_name"`
8,976
60,763,948
These are my two models. When I try to open the City page in Django I get an error: "column city.country_id_id does not exist". I don't know why Django adds an extra `_id` there.
```
class Country(models.Model):
    country_id = models.CharField(primary_key=True, max_length=3)
    country_name = models.CharField(max_length=30, blank=True, null=True)

    class Meta:
        managed = False
        db_table = 'country'


class City(models.Model):
    city_id = models.CharField(primary_key=True, max_length=3)
    city_name = models.CharField(max_length=30, blank=True, null=True)
    country_id = models.ForeignKey(Country, on_delete=models.CASCADE)

    class Meta:
        managed = False
        db_table = 'city'
```
2020/03/19
[ "https://Stackoverflow.com/questions/60763948", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13090788/" ]
As the comments have begun to mention, and in reference to your final statement, this code absolutely *should not* work, for multiple reasons:

1) you open the file three times, for no apparent reason;
2) `outfile` isn't declared and doesn't do anything;
3) when you open a file with `w`, it clears the contents of the aforementioned file.

First fix these issues. You understand the fundamentals, your upper function is fine, etc. This is what you must do:

1) don't open the same file three times for no reason
2) define `outfile`
3) use `a` instead of `w` so you append rather than delete and write
8,977
3,939,482
I have a stats app written in Python that refreshes the SSH screen with stats on a timer. Right now it uses os.system('clear') to clear the screen and then outputs multi-line data with the stats. I'd like to just do a \r instead of executing clear, but that only works with one line; is it possible to do this with multiple lines? A classic example of what I want is the "top" command, which lists the current processes and updates many lines on the screen without executing "clear". Anyone have any tips for this?
2010/10/15
[ "https://Stackoverflow.com/questions/3939482", "https://Stackoverflow.com", "https://Stackoverflow.com/users/388538/" ]
It doesn't really answer your question, but there isn't really anything wrong with calling os.system to clear the terminal (other than portability across operating systems), in which case you could use: `os.system('cls' if os.name=='nt' else 'clear')`
For simple applications you can use:
```
print('\n' * number_of_lines)
```
For more advanced needs there is the [curses](http://docs.python.org/library/curses.html) module in the standard library.
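Between the simple `print('\n' * n)` trick and full curses, ANSI escape sequences offer a middle ground for the multi-line redraw the question asks about: move the cursor back up over the previously printed block and rewrite each line in place. This is only a sketch under the assumption of an ANSI-capable terminal; the stat names and helper functions are made up for illustration:

```python
import sys
import time

CURSOR_UP = "\033[F"    # move the cursor up one line, to column 1
CLEAR_LINE = "\033[K"   # erase from the cursor to the end of the line


def render_frame(stats):
    # Build the multi-line frame; CLEAR_LINE wipes leftover characters
    # from a previous (possibly longer) frame on each line.
    return "\n".join("%s: %s%s" % (k, v, CLEAR_LINE) for k, v in stats)


def redraw(stats, first=False):
    if not first:
        # jump back to the top of the block printed last time
        sys.stdout.write(CURSOR_UP * len(stats))
    sys.stdout.write(render_frame(stats) + "\n")
    sys.stdout.flush()


if __name__ == "__main__":
    for i in range(3):
        stats = [("cpu", 10 + i), ("mem", 42), ("uptime", i)]
        redraw(stats, first=(i == 0))
        time.sleep(0.1)
```

The number of lines must stay constant between frames for the cursor arithmetic to hold; a library like curses handles resizing and partial updates more robustly.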
8,978
40,119,361
I have a dictionary with the following structure in a Python program: `{'John': {'age': '12', 'height': '152', 'weight': '45'}}`; this is the result returned from a function. My question is: how may I extract the sub-dictionary, so that I have the data in this form only: `{'age': '12', 'height': '152', 'weight': '45'}`? I can think of a solution using a for loop to go through the dictionary; since there is only one item in this dictionary, I can then store it in a new variable. But I would like to learn an alternative, please. Many thanks
2016/10/18
[ "https://Stackoverflow.com/questions/40119361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6797800/" ]
To get a value from a dictionary, use `dict[key]`:
```
>>> d = {'John': {'age': '12', 'height': '152', 'weight': '45'}}
>>> d['John']
{'age': '12', 'height': '152', 'weight': '45'}
>>>
```
```
>>> d = {'John': {'age': '12', 'height': '152', 'weight': '45'}, 'Kim': {'age': '13', 'height': '113', 'weight': '30'}}
>>> for key in d:
...     print(key, d[key])
...
John {'height': '152', 'weight': '45', 'age': '12'}
Kim {'height': '113', 'weight': '30', 'age': '13'}
```
Just access the subdictionary with `d[key]`. If you have multiple keys, something like the above loop will let you go through all of them.
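Since the question explicitly asks for an alternative to looping when the dictionary holds exactly one entry, one idiom worth noting (an addition of mine, not from the answers above) is `next(iter(...))`:

```python
d = {'John': {'age': '12', 'height': '152', 'weight': '45'}}

# Grab the single value without knowing its key:
sub = next(iter(d.values()))
print(sub)  # {'age': '12', 'height': '152', 'weight': '45'}

# Or take both the key and the sub-dictionary at once:
name, sub = next(iter(d.items()))
print(name, sub['age'])  # John 12
```

Note that `next(iter(...))` raises `StopIteration` on an empty dictionary, so guard with a length check if the function can return nothing.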
8,980
45,103,348
I am trying to lazy-load content from a MySQL database, with a loading animation and a limit on how much content is fetched at a time. The content displays correctly, but when I reach the end of the content the loading animation keeps spinning and I get this error. My script:
```
<script>
$(document).ready(function(){
    var limit = 7;
    var start = 0;
    var action = 'inactive';

    function load_country_data(limit, start)
    {
        $.ajax({
            url: "/select_index",
            method: "POST",
            data: {limit: limit, start: start},
            cache: false,
            success: function(data)
            {
                var obj2 = JSON.parse(data);
                console.log(data);
                console.log(obj2);
                var i = 0;
                var j = 0;
                for (i in obj2) {
                    //for(j in obj2[i]){
                    $('#load_data').prepend("<form><div class='panel panel-white post panel-shadow user-post'><div class='post-heading'><div class='pull-left image'><img src='static/"+obj2[i][4]+"' class='img-circle avatar' alt='user profile image'></div><div class='pull-left meta'> <div class='title h5'><a href='#'><b>"+obj2[i][1]+"</b></a> made a post.</div><h6 class='text-muted time'>1 minute ago</h6></div></div> <div class='post-description'> <p>"+obj2[i][0]+"</p><div class='stats'><a href='#' class='btn btn-default stat-item'> <i class='fa fa-thumbs-up icon'></i>2</a> <a href='#' class='btn btn-default stat-item'><i class='fa fa-share icon'></i>12</a></div></div><div class='post-footer'><div class='input-group'> <input class='form-control' placeholder='Add a comment' type='text'></div> </div></div></form>" );
                    //}
                }
                // $('#load_data').append(data);
                if (data == '')
                {
                    $('#load_data_message').html("No Data Found");
                    action = 'active';
                }
                else
                {
                    $('#load_data_message').html("<div class='loadingC'><div class='loadingCcerc'</div></div>");
                    action = "inactive";
                }
            }
        });
    }

    if (action == 'inactive')
    {
        action = 'active';
        load_country_data(limit, start);
    }

    $(window).scroll(function(){
        if ($(window).scrollTop() + $(window).height() > $("#load_data").height() && action == 'inactive')
        {
            action = 'active';
            start = start + limit;
            setTimeout(function(){
                load_country_data(limit, start);
            }, 800);
        }
    });
});
</script>
```
I think the error comes from JSON.parse. My Python script:
```
@app.route('/select_index', methods=["POST", "GET"])
def select_index():
    limit = request.args['limit']
    print(limit)
    start = request.args['start']
    print(start)
    select = "SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id FROM comments,register WHERE register.id=comments.post_id ORDER BY comments.id DESC LIMIT "+str(start)+","+str(limit)+" "
    con.execute(select)
    fetch = con.fetchall()
    print(fetch)
    json_fetch = str(jsonify(fetch))
    print(len(json_fetch))
    if json_fetch == "<Response 3 bytes [200 OK]>":
        return ""
    else:
        return jsonify(fetch)
```
2017/07/14
[ "https://Stackoverflow.com/questions/45103348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7906290/" ]
You are returning an empty string from your Python code:
```
if json_fetch == "<Response 3 bytes [200 OK]>":
    return ""
```
When you then call `JSON.parse("")`, you get the `Uncaught SyntaxError: Unexpected end of JSON input`.
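One way to avoid the empty-response case entirely (my suggestion, not part of the answer above): always return a valid JSON array, even when it is empty, and have the client test the parsed array's length instead of comparing the raw text to `''`. A stdlib-only sketch of both sides; in the question's Flask view this would amount to returning `jsonify(fetch)` unconditionally:

```python
import json

# Server side: always serialize a list, even an empty one, so the client
# never receives the empty string that JSON.parse chokes on.
def rows_to_payload(rows):
    return json.dumps(rows if rows is not None else [])

# Client side (written in Python here purely for illustration): test the
# parsed array's length instead of comparing the raw response text to ''.
def is_end_of_content(payload):
    return len(json.loads(payload)) == 0

print(rows_to_payload([]))                     # []
print(is_end_of_content(rows_to_payload([])))  # True
```

With this shape, the scroll handler can switch to the "No Data Found" state whenever the parsed array is empty, and `JSON.parse` never sees invalid input.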
I made the change, but now it returns an empty array "[]", and the animation doesn't stop to place the "No Data Found" text on the screen. Script:
```
<script>
$(document).ready(function(){
    var limit = 7;
    var start = 0;
    var action = 'inactive';

    function load_country_data(limit, start)
    {
        $.ajax({
            url: "/select_index",
            method: "POST",
            data: {limit: limit, start: start},
            cache: false,
            success: function(data)
            {
                var obj2 = JSON.parse(data);
                console.log(data);
                console.log(obj2);
                var i = 0;
                var j = 0;
                for (i in obj2) {
                    //for(j in obj2[i]){
                    $('#load_data').prepend("<form><div class='panel panel-white post panel-shadow user-post'><div class='post-heading'><div class='pull-left image'><img src='static/"+obj2[i][4]+"' class='img-circle avatar' alt='user profile image'></div><div class='pull-left meta'> <div class='title h5'><a href='#'><b>"+obj2[i][1]+"</b></a> made a post.</div><h6 class='text-muted time'>1 minute ago</h6></div></div> <div class='post-description'> <p>"+obj2[i][0]+"</p><div class='stats'><a href='#' class='btn btn-default stat-item'> <i class='fa fa-thumbs-up icon'></i>2</a> <a href='#' class='btn btn-default stat-item'><i class='fa fa-share icon'></i>12</a></div></div><div class='post-footer'><div class='input-group'> <input class='form-control' placeholder='Add a comment' type='text'></div> </div></div></form>" );
                    //}
                }
                //SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id,abonari.id_user,abonari.id_abonat FROM comments,register,abonari WHERE register.id=comments.post_id AND register.id=abonari.id_abonat ORDER BY comments.id DESC
                // $('#load_data').append(data);
                if (data == '')
                {
                    $('#load_data_message').html("No Data Found");
                    action = 'active';
                }
                else
                {
                    $('#load_data_message').html("<div class='loadingC'><div class='loadingCcerc'</div></div>");
                    action = "inactive";
                }
            }
        });
    }

    if (action == 'inactive')
    {
        action = 'active';
        load_country_data(limit, start);
    }

    $(window).scroll(function(){
        if ($(window).scrollTop() + $(window).height() > $("#load_data").height() && action == 'inactive')
        {
            action = 'active';
            start = start + limit;
            setTimeout(function(){
                load_country_data(limit, start);
            }, 800);
        }
    });
});
</script>
```
And Python:
```
@app.route('/select_index', methods=["POST", "GET"])
def select_index():
    limit = request.args['limit']
    print(limit)
    start = request.args['start']
    print(start)
    select = "SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id FROM comments,register WHERE register.id=comments.post_id ORDER BY comments.id DESC LIMIT "+str(start)+","+str(limit)+" "
    con.execute(select)
    fetch = con.fetchall()
    print(fetch)
    json_fetch = str(jsonify(fetch))
    print(len(json_fetch))
    return jsonify(fetch)
```
8,981
29,037,501
I created an SSH agent to provide my key to the ssh/scp commands when connecting to my server. I also scripted ssh-add with the `expect` command to enter my passphrase when it's needed. This works perfectly with my user "user". But I'm executing a Python script that uses /dev/mem and therefore needs to be run as root through sudo. This Python script calls another bash script with ssh and scp commands inside. Therefore all these commands are executed as root, and my agent/ssh-add doesn't work anymore: it keeps asking for the passphrase for each file. How can I fix that? I don't want to log in as root and run an agent as root. I tried `sudo -u user ssh` but it doesn't work (i.e., it still asks for my passphrase). Any ideas? Thanks in advance, Mat

EDIT: my code. The py script needing sudo:
```
#!/usr/bin/env python2.7
import RPi.GPIO as GPIO
import time
import subprocess
from subprocess import call
from datetime import datetime
import picamera
import os
import sys

GPIO.setmode(GPIO.BCM)
# GPIO 23 set up as input. It is pulled up to stop false signals
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# set path and time to create the folder where the images will be saved
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript, dirfmt)
localdirname = dirpath % current_time[0:6]     # dirname created with date and time
remotedirname = dirfmt % current_time[0:6]     # remote-dirname created with date and time
os.mkdir(localdirname)                         # mkdir
pictureName = localdirname + "/image%02d.jpg"  # path+name of pictures

var = 1
while var == 1:
    try:
        GPIO.wait_for_edge(23, GPIO.FALLING)
        with picamera.PiCamera() as camera:
            #camera.capture_sequence(["/home/pi/python-scripts/'dirname'/image%02d.jpg" % i for i in range(2)])
            camera.capture_sequence([pictureName % i for i in range(19)])
            camera.close()
        cmd = '/home/pi/python-scripts/picturesToServer {0} &'.format(remotedirname)
        call([cmd], shell=True)
    except KeyboardInterrupt:
        GPIO.cleanup()  # clean up GPIO on CTRL+C exit

GPIO.cleanup()  # clean up GPIO on normal exit
```

---

The bash script:
```
#!/bin/bash
cd $1
ssh user@server mkdir /home/repulsion/picsToAnimate/"$1" >/dev/null 2>&1
ssh user@server cp "$1"/* /home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
for i in $( ls ); do
    scp $i user@server:/home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
done
```
2015/03/13
[ "https://Stackoverflow.com/questions/29037501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4542774/" ]
There's no general way to get this behavior. You can create an ImmutablePerson class with a constructor that accepts a Person and constructs an immutable version of that Person.
Sure, just have a boolean flag in the Person object which says whether the object is locked for modifications. If it is locked, have all setters do nothing, or have them throw exceptions. When invoking `immutableObject(person)`, set the flag to true. Setting the flag will also lock/deny the ability to set or change the flag itself later.
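The flag idea described above can be sketched in Python (the surrounding thread appears to target another language, so the names here are purely illustrative) by routing all attribute writes through `__setattr__`:

```python
class Person:
    def __init__(self, name):
        self._locked = False
        self.name = name

    def __setattr__(self, attr, value):
        # refuse any mutation once the lock flag has been set
        if getattr(self, "_locked", False):
            raise AttributeError("instance is immutable")
        super().__setattr__(attr, value)

    def lock(self):
        # bypass our own guard so the flag itself can still be flipped
        super().__setattr__("_locked", True)


p = Person("Ada")
p.name = "Grace"   # fine: still unlocked
p.lock()
# p.name = "Ada"   # would now raise AttributeError
```

Unlike a true immutable class, this only guards plain attribute assignment; mutable attribute values (lists, dicts) can still be changed in place.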
8,982
45,309,118
Trying to create a game called `hangman` in Python. I've come a long way, but the 'core' functionality is failing me. I've edited out all the parts which are irrelevant for this question. Here it comes:
```
picked = ['yaaayyy']
length = len(picked)
dashed = "-" * length
guessed = picked.replace(picked, dashed)

while tries != -1:
    input = raw_input("Try a letter: ")
    if input in picked:
        print "Correct letter!"
        found = [i for i, x in enumerate(picked) if x == input]
        for item in found:
            guessed = guessed[:item] + input + guessed[i+1:]
        print guessed
```
Upon calling this script, Python creates a variable named `guessed` containing 7 dashes, `-------`. It asks the user for a letter and, if the letter is correct, replaces the `-` with the correct letter, but without keeping the previously found letters. The word to be guessed is `yaaayyy`.

Output of code:
```
Word is 7 characters: -------
Try a letter: a
Correct letter!
-aaa
Try a letter: y
Correct letter!
yyyy
```
Goal:
```
Word is 7 characters: -------
Try a letter: a
Correct letter!
-aaa---
Try a letter: y
Correct letter!
yaaayyy
```
2017/07/25
[ "https://Stackoverflow.com/questions/45309118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4834431/" ]
This code seems to be slightly wrong:
```
found = [i for i, x in enumerate(picked) if x == input]
for item in found:
    guessed = guessed[:item] + input + guessed[i+1:]
```
That last line should probably be:
```
guessed = guessed[:item] + input + guessed[item+1:]
```
**EDIT**

This seems simpler to me:
```
for i, x in enumerate(picked):
    if x == input:
        guessed = guessed[:i] + input + guessed[i+1:]
```
**EDIT 2**

I'm not sure if this is clearer or not, but it's probably a little more efficient:
```
guessed = ''.join(x if picked[i] == input else c for i, c in enumerate(guessed))
```
Assuming you're using Python 3, you can solve it by simply doing:
```
user_input = input()
guessed = ''.join(letter if user_input == letter else guessed[i] for i, letter in enumerate(picked))
```
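Putting the fixes from both answers together, the reveal step can be isolated into a small standalone function. This is a Python 3 sketch of my own (using `zip` rather than index arithmetic) that keeps previously guessed letters intact:

```python
def reveal(picked, guessed, letter):
    # Replace every dash whose position in `picked` matches `letter`,
    # leaving all previously revealed letters untouched.
    return ''.join(letter if p == letter else g
                   for p, g in zip(picked, guessed))


word = 'yaaayyy'
state = '-' * len(word)
state = reveal(word, state, 'a')
print(state)   # -aaa---
state = reveal(word, state, 'y')
print(state)   # yaaayyy
```

Because the function never touches positions whose character in `picked` differs from the guess, it cannot overwrite earlier correct guesses, which is exactly the bug in the question's slicing code.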
8,985
51,954,369
```
[{"answerInfo":{"extraData":{"am_answer_type":"NN"}},"content":"MP3:not support.","messageId":"c4d6a2f4649d483a811fcce4b26ae9a1"}]
```
How do I extract "MP3: not support" from this string using a regular expression or Python code? But an error was generated per the suggestion:
```
Traceback (most recent call last):
  File "/Users/congminmin/PycharmProjects/Misc/csv/csvLoader.py", line 16, in <module>
    print(question + " " + json.loads(answer)[0]['content'])
  File "/Users/congminmin/anaconda3/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/Users/congminmin/anaconda3/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/congminmin/anaconda3/lib/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
2018/08/21
[ "https://Stackoverflow.com/questions/51954369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/697911/" ]
In my code below, I create a dataframe, then output it to a .csv file using `pandas.DataFrame.to_csv()` ([link to docs](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html)). In this example, I also add try and except clauses to raise an exception if the user enters an invalid Series ID (not the case with this example) or if the user exceeds the number of daily hits to the BLS API (unregistered users may request up to 25 queries daily, [per the documentation FAQs](https://www.bls.gov/developers/api_faqs.htm#signatures4)). I also add a column showing the SeriesID, which would prove more useful if your list of SeriesIDs contained more than one item and would help distinguish between different series, and I do some extra manipulation to the footnotes column to make it more meaningful in the outputted dataframe.
```
import pandas as pd
import json
import requests

headers = {'Content-type': 'application/json'}
data = json.dumps({"seriesid": ['LAUMT421090000000005'], "startyear": "2011", "endyear": "2014"})
p = requests.post('https://api.bls.gov/publicAPI/v1/timeseries/data/', data=data, headers=headers)
json_data = json.loads(p.text)

try:
    df = pd.DataFrame()
    for series in json_data['Results']['series']:
        df_initial = pd.DataFrame(series)
        series_col = df_initial['seriesID'][0]
        for i in range(0, len(df_initial) - 1):
            df_row = pd.DataFrame(df_initial['data'][i])
            df_row['seriesID'] = series_col
            if 'code' not in str(df_row['footnotes']):
                df_row['footnotes'] = ''
            else:
                df_row['footnotes'] = str(df_row['footnotes']).split("'code': '", 1)[1][:1]
            df = df.append(df_row, ignore_index=True)
    df.to_csv('blsdata.csv', index=False)
except:
    json_data['status'] == 'REQUEST_NOT_PROCESSED'
    print('BLS API has given the following Response:', json_data['status'])
    print('Reason:', json_data['message'])
```
If you would like to output a .xlsx file rather than a .csv, simply substitute `df.to_csv('blsdata.csv', index=False)` with `df.to_excel('blsdata.xlsx', index=False, engine='xlsxwriter')`.

Expected outputted .csv file: [![Expected Outputted .csv](https://i.stack.imgur.com/FXhEz.png)](https://i.stack.imgur.com/FXhEz.png)

Expected outputted .xlsx file if you use `pandas.DataFrame.to_excel()` rather than `pandas.DataFrame.to_csv()`: [![Expected Outputted .xlsx](https://i.stack.imgur.com/PVgOi.png)](https://i.stack.imgur.com/PVgOi.png)
```
import requests
import json
import csv

headers = {'Content-type': 'application/json'}
data = json.dumps({"seriesid": ['LAUMT421090000000005'], "startyear": "2011", "endyear": "2014"})
p = requests.post('https://api.bls.gov/publicAPI/v2/timeseries/data/', data=data, headers=headers)
json_data = json.loads(p.text)
```
After the above lines of code, use the steps described in the following Stack Overflow link for the next steps: [How can I convert JSON to CSV?](https://stackoverflow.com/questions/1871524/how-can-i-convert-json-to-csv)

Hope it helps!
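A minimal stdlib-only sketch of that JSON-to-CSV step follows; the sample payload below is made up to mirror the BLS response shape, which nests rows under `Results -> series -> data`:

```python
import csv
import io

# Hypothetical payload with the same nesting as a real BLS API response
json_data = {
    "Results": {"series": [{
        "seriesID": "LAUMT421090000000005",
        "data": [
            {"year": "2014", "period": "M12", "value": "5.6"},
            {"year": "2014", "period": "M11", "value": "5.4"},
        ],
    }]}
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["seriesID", "year", "period", "value"])
writer.writeheader()
for series in json_data["Results"]["series"]:
    for row in series["data"]:
        # attach the series ID to every row, as the pandas answer above does
        writer.writerow(dict(row, seriesID=series["seriesID"]))

print(buf.getvalue())
```

Writing to an `io.StringIO` keeps the sketch self-contained; in practice you would pass an `open('blsdata.csv', 'w', newline='')` handle instead.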
8,986
30,532,431
My program does mouse drawing: the drawn curves are simultaneously reproduced on a toplevel window. My aim is to add vertical and horizontal scrollbars to the toplevel window. The drawing works as I expected, except that I am not seeing the scrollbars, and I am getting this error (which does not stop the program, however):
```
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1489, in __call__
    return self.func(*args)
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1523, in yview
    res = self.tk.call(self._w, 'yview', *args)
TclError: unknown option "0.0": must be moveto or scroll
```
The program consists of these lines:
```
from Tkinter import *
import numpy as np
import cv2
import Image, ImageTk

class Test:
    def __init__(self, parent):
        self.parent = parent
        self.b1 = "up"
        self.xold = None
        self.yold = None
        self.liste = []
        self.top = TopLevelWindow()
        self.s = 400, 400, 3
        self.im = np.zeros(self.s, dtype=np.uint8)
        cv2.imshow("hello", self.im)

    def test(self):
        self.drawingArea = Canvas(self.parent, width=400, height=400)
        self.drawingArea.pack()
        self.drawingArea.bind("<Motion>", self.motion)
        self.drawingArea.bind("<ButtonPress-1>", self.b1down)
        self.drawingArea.bind("<ButtonRelease-1>", self.b1up)

    def b1down(self, event):
        self.b1 = "down"

    def b1up(self, event):
        self.b1 = "up"
        self.xold = None
        self.yold = None
        self.liste.append((self.xold, self.yold))

    def motion(self, event):
        if self.b1 == "down":
            if self.xold is not None and self.yold is not None:
                event.widget.create_line(self.xold, self.yold, event.x, event.y, fill="red", width=3, smooth=TRUE)
                self.top.draw_line(self.xold, self.yold, event.x, event.y)
            self.xold = event.x
            self.yold = event.y
            self.liste.append((self.xold, self.yold))

class TopLevelWindow(Frame):
    def __init__(self):
        Frame.__init__(self)
        self.top = Toplevel()
        self.top.wm_title("Second Window")
        self.canvas = Canvas(self.top, width=400, height=400)
        self.canvas.grid(row=0, column=0, sticky=N+E+S+W)
        self.sbarv = Scrollbar(self, orient=VERTICAL)
        self.sbarh = Scrollbar(self, orient=HORIZONTAL)
        self.sbarv.config(command=self.canvas.yview)
        self.sbarh.config(command=self.canvas.xview)
        self.canvas.config(yscrollcommand=self.canvas.yview)
        self.canvas.config(xscrollcommand=self.canvas.xview)
        self.sbarv.grid(row=0, column=1, sticky=N+S)
        self.sbarh.grid(row=1, column=0, sticky=W+E)
        self.canvas.config(scrollregion=(0, 0, 400, 400))

    def draw_line(self, xold, yold, x, y):
        self.canvas.create_line(xold, yold, x, y, fill="blue", width=3, smooth=TRUE)

if __name__ == "__main__":
    root = Tk()
    root.wm_title("Main Window")
    v = Test(root)
    v.test()
    root.mainloop()
```
2015/05/29
[ "https://Stackoverflow.com/questions/30532431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4949239/" ]
These lines are incorrect:
```
self.canvas.config(yscrollcommand=self.canvas.yview)
self.canvas.config(xscrollcommand=self.canvas.xview)
```
You're telling the canvas to scroll the canvas when the canvas scrolls. The `yscrollcommand` and `xscrollcommand` options typically need to call the `set` method of a scrollbar:
```
self.canvas.config(yscrollcommand=self.sbarv.set)
self.canvas.config(xscrollcommand=self.sbarh.set)
```
I want to share the solution I found, in case someone encounters this problem in the future: I only had to attach the two scrollbars to the same parent widget as the canvas itself. I mean:
```
self.sbarv=Scrollbar(self.top,orient=VERTICAL)
self.sbarh=Scrollbar(self.top,orient=HORIZONTAL)
```
8,987
29,583,102
I've been working locally on an AngularJS front-end, Django REST backend project, and I want to deploy it to Heroku. It keeps giving me errors when I try to push, however. I've followed the steps [here](https://devcenter.heroku.com/articles/getting-started-with-django), which have helped me deploy Django-only apps before. Below is the output from my `heroku logs`.
```
8" dyno= connect= service= status=503 bytes=
2015-04-11T20:51:24.680164+00:00 heroku[api]: Deploy efbd3fb by me@example.com
2015-04-11T20:51:24.680164+00:00 heroku[api]: Release v5 created by me@example.com
2015-04-11T20:51:25.010989+00:00 heroku[web.1]: State changed from crashed to starting
2015-04-11T20:51:30.109140+00:00 heroku[web.1]: Starting process with command `python manage.py collectstatic --noinput; gunicorn myApp.wsgi --log-file -`
2015-04-11T20:51:31.795813+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-04-11T20:51:31.795838+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-04-11T20:51:31.824251+00:00 app[web.1]: Traceback (most recent call last):
2015-04-11T20:51:31.824257+00:00 app[web.1]:   File "manage.py", line 8, in <module>
2015-04-11T20:51:31.824267+00:00 app[web.1]:     from django.core.management import execute_from_command_line
2015-04-11T20:51:31.824296+00:00 app[web.1]: ImportError: No module named django.core.management
2015-04-11T20:51:31.827843+00:00 app[web.1]: bash: gunicorn: command not found
2015-04-11T20:51:32.656017+00:00 heroku[web.1]: Process exited with status 127
2015-04-11T20:51:32.672886+00:00 heroku[web.1]: State changed from starting to crashed
2015-04-11T20:51:32.672886+00:00 heroku[web.1]: State changed from crashed to starting
2015-04-11T20:51:36.675087+00:00 heroku[web.1]: Starting process with command `python manage.py collectstatic --noinput; gunicorn myApp.wsgi --log-file -`
2015-04-11T20:51:38.446374+00:00 app[web.1]: bash: gunicorn: command not found
2015-04-11T20:51:38.420704+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-04-11T20:51:38.420729+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-04-11T20:51:38.442039+00:00 app[web.1]:   File "manage.py", line 8, in <module>
2015-04-11T20:51:38.442033+00:00 app[web.1]: Traceback (most recent call last):
2015-04-11T20:51:38.443526+00:00 app[web.1]: ImportError: No module named django.core.management
2015-04-11T20:51:38.442047+00:00 app[web.1]:     from django.core.management import execute_from_command_line
2015-04-11T20:51:39.265192+00:00 heroku[web.1]: Process exited with status 127
2015-04-11T20:51:39.287328+00:00 heroku[web.1]: State changed from starting to crashed
2015-04-11T20:52:10.960135+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=myapp.herokuapp.com request_id=afbb960d-eae4-4891-a885-d4a7e3880f1f fwd="64.247.79.248" dyno= connect= service= status=503 bytes=
2015-04-11T20:52:11.321003+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=myapp.herokuapp.com request_id=de3242f1-7dda-4cc5-8b35-6d7c77392151 fwd="64.247.79.248" dyno= connect= service= status=503 bytes=
```
It seems like Heroku is confused because of an ambiguity between which server to use (Node.js vs. Django) and can't resolve it. Django can't seem to be found by Heroku. The basic outline of my project is based on this example: <https://github.com/brwr/thinkster-django-angular>. My settings file is as follows:
```
"""
Django settings for my project.

For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/

For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = SUPER_SECRET

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DEBUG', True)

TEMPLATE_DEBUG = True

ALLOWED_HOSTS = []

# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'debug_toolbar',
    'rest_framework',
    'compressor',
    'authentication',
)

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
)

ROOT_URLCONF = 'myApp.urls'

WSGI_APPLICATION = 'myApp.wsgi.application'

# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
import dj_database_url
DATABASES = {
    'default': dj_database_url.config(
        default='sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')
    )
}

# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'

STATICFILES_DIRS = (
    # os.path.join(BASE_DIR, 'dist/static'),
    os.path.join(BASE_DIR, 'static'),
)

STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    'compressor.finders.CompressorFinder',
)
COMPRESS_ENABLED = os.environ.get('COMPRESS_ENABLED', False) TEMPLATE_DIRS = ( os.path.join(BASE_DIR, 'templates'), ) REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.SessionAuthentication', ) } # Honor the 'X-Forwarded-Proto' header for request.is_secure() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') # Allow all host headers ALLOWED_HOSTS = ['*'] AUTH_USER_MODEL = 'authentication.Account' # Parse database configuration from $DATABASE_URL import dj_database_url DATABASES['default'] = dj_database_url.config() # Enable Connection Pooling DATABASES['default']['ENGINE'] = 'django_postgrespool' # Simplified static file serving. # https://warehouse.python.org/project/whitenoise/ STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' # Honor the 'X-Forwarded-Proto' header for request.is_secure() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') ``` If any more information is needed, I can add it. Thanks, erip **EDIT** I noticed this when pushing to Heroku: ![Node.js](https://i.stack.imgur.com/kQ7L5.png) It definitely is being recognized as a Node.js backend rather than Django and is, therefore, not installing from my `requirements.txt`. How can I change this? I tried `heroku config:set BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-python`, but it didn't seem to work.
2015/04/11
[ "https://Stackoverflow.com/questions/29583102", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2883245/" ]
I don't understand why you need Node at all in a Django project - it's not required for Angular or DRF - but the instructions in the linked project mention setting the buildpack to a custom "multi" one: ``` heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git ```
It seems like you have not installed Django in your virtualenv. Have you forgotten to run `pip install django-toolbelt`?
8,988
8,949,454
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16. I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12). If there would be only 3 labels, the code would be something like this: ``` def update_grid(self): if self.timer_id is not None: self.__actual_choice = (self.__actual_choice % 16)+1 if self.__actual_choice == 1: self.label1.modify_font(self.__font_big) self.label2.modify_font(self.__font_small) self.label3.modify_font(self.__font_small) elif self.__actual_choice == 2: self.label1.modify_font(self.__font_small) self.label2.modify_font(self.__font_big) self.label3.modify_font(self.__font_small) elif self.__actual_choice == 3: self.label1.modify_font(self.__font_small) self.label2.modify_font(self.__font_small) self.label3.modify_font(self.__font_big) ``` But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like: ``` self.(label+"i").modify_font(self.__font_small) ```
2012/01/21
[ "https://Stackoverflow.com/questions/8949454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1090788/" ]
You *could* use the built-in function, [`getattr()`](http://docs.python.org/library/functions.html#getattr) as others have suggested: ``` label = getattr(self, 'label%d' % i) label.modify_font(self.__font_small) ``` But in reality, you'd be better off storing your 16 `label`s in a [`list`](http://docs.python.org/tutorial/datastructures.html#more-on-lists). Lots of variables with numbers at the end is a terrible [code smell](http://martinfowler.com/bliki/CodeSmell.html). ``` for index, label in enumerate(self.labels): if index == self.__actual_choice: label.modify_font(self.__font_big) else: label.modify_font(self.__font_small) ```
You should look into the `getattr` function.
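For instance, a minimal, self-contained sketch of that kind of dynamic attribute lookup - the class and attribute names here are illustrative stand-ins for the pygtk labels:

```python
class Holder:
    """Stand-in for the object holding label1, label2, ... attributes."""
    pass

obj = Holder()
obj.label1 = "first"
obj.label2 = "second"

# Build the attribute name as a string at runtime, then fetch it with getattr
for i in (1, 2):
    value = getattr(obj, "label" + str(i))
    print(value)
```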
8,989
5,333,509
What is the best way to reverse the significant bits of an integer in python and then get the resulting integer out of it? For example I have the numbers 1,2,5,15 and I want to reverse the bits like so: ``` original reversed 1 - 0001 - 1000 - 8 2 - 0010 - 0100 - 4 5 - 0101 - 1010 - 10 15 - 1111 - 1111 - 15 ``` Given that these numbers are 32 bit integers how should I do this in python? The part I am unsure about is how to move individual bits around in python and if there is anything funny about using a 32bit field as an integer after doing so. PS This isn't homework, I am just trying to program the solution to a logic puzzle.
2011/03/17
[ "https://Stackoverflow.com/questions/5333509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/330013/" ]
Numpy indexing arrays provide concise notation for application of bit reversal permutations. ``` import numpy as np def bitrev_map(nbits): """create bit reversal mapping >>> bitrev_map(3) array([0, 4, 2, 6, 1, 5, 3, 7], dtype=uint16) >>> import numpy as np >>> np.arange(8)[bitrev_map(3)] array([0, 4, 2, 6, 1, 5, 3, 7]) >>> (np.arange(8)[bitrev_map(3)])[bitrev_map(3)] array([0, 1, 2, 3, 4, 5, 6, 7]) """ assert isinstance(nbits, int) and nbits > 0, 'bit size must be positive integer' dtype = np.uint32 if nbits > 16 else np.uint16 brmap = np.empty(2**nbits, dtype=dtype) int_, ifmt, fmtstr = int, int.__format__, ("0%db" % nbits) for i in range(2**nbits): brmap[i] = int_(ifmt(i, fmtstr)[::-1], base=2) return brmap ``` When bit reversing many vectors one wants to store the mapping anyway. The implementation reverses binary literal representations.
You can do something like this, where the argument `l` defines how many bits to reverse:

```
def reverse(x, l):
    r = x & 1
    for i in range(1, l):
        r = r << 1 | (x >> i) & 1
    print(bin(r))
    return r
```
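Another compact option, not taken from either answer above, is to go through a fixed-width binary string; called with a width of 4 it reproduces the table in the question:

```python
def reverse_bits(x, width=32):
    # Render x as a zero-padded binary string, reverse it, parse it back
    return int(format(x, "0{}b".format(width))[::-1], 2)

for n in (1, 2, 5, 15):
    print(n, "->", reverse_bits(n, 4))
```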
8,994
28,055,565
I have an python `dict` whose keys and values are strings, integers and other dicts and tuples ([json does not support those](https://stackoverflow.com/q/7001606/850781)). I want to save it to a text file and then read it from the file. **Basically, I want a [`read`](http://www.lispworks.com/documentation/HyperSpec/Body/f_rd_rd.htm) counterpart to the built-in [`print`](http://www.lispworks.com/documentation/HyperSpec/Body/f_wr_pr.htm)** (like in Lisp). Constraints: 1. the file must be human readable (thus [pickle](https://docs.python.org/2/library/pickle.html) is out) 2. no need to detect circularities. Is there anything better than [json](https://docs.python.org/2/library/json.html)?
2015/01/20
[ "https://Stackoverflow.com/questions/28055565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/850781/" ]
You could use `repr()` on the `dict`, then read it back in and parse it with `ast.literal_eval()`. It's as human readable as Python itself is. Example: ``` In [1]: import ast In [2]: x = {} In [3]: x['string key'] = 'string value' In [4]: x[(42, 56)] = {'dict': 'value'} In [5]: x[13] = ('tuple', 'value') In [6]: repr(x) Out[6]: "{(42, 56): {'dict': 'value'}, 'string key': 'string value', 13: ('tuple', 'value')}" In [7]: with open('/tmp/test.py', 'w') as f: f.write(repr(x)) In [8]: with open('/tmp/test.py', 'r') as f: y = ast.literal_eval(f.read()) In [9]: y Out[9]: {13: ('tuple', 'value'), 'string key': 'string value', (42, 56): {'dict': 'value'}} In [10]: x == y Out[10]: True ``` You may also consider using the `pprint` module for even friendlier formatted output.
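To illustrate the `pprint` suggestion at the end: `pprint.pformat` output is still valid Python literal syntax, so it round-trips through `ast.literal_eval` as well (the sample data is invented, with string keys only for simplicity):

```python
import ast
import pprint

x = {'string key': 'string value', 'nested': {'pair': (1, 2)}}
text = pprint.pformat(x)
print(text)
assert ast.literal_eval(text) == x
```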
Honestly, json is your answer [EDIT: so long as the keys are strings, didn't see the part about dicts as keys], and that's why it's taken over in the last 5 years. What legibility issues does json have? There are tons of JSON indenters, pretty-printer utilities, and browser plug-ins [1][2] - use them and it certainly is human-readable. json(/simplejson) is also extremely performant (C implementations), and it scales, and can be processed serially, which cannot be said for the AST approach (why be eccentric and break scalability?). This also seems to be the consensus from 100% of people answering you here... everyone can't be wrong ;-) XML is dead, good riddance.

1. [How can I pretty-print JSON?](https://stackoverflow.com/questions/352098/how-can-i-pretty-print-json) and countless others
2. [Browser JSON Plugins](https://stackoverflow.com/questions/2547769/browser-json-plugins)
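As a quick illustration of the legibility point, the standard library alone can pretty-print JSON, no plug-ins required (keys must be strings, per the EDIT above; the data here is made up):

```python
import json

data = {"word": "example", "counts": [1, 2, 3]}
text = json.dumps(data, indent=2, sort_keys=True)
print(text)
```

The indented text parses back to the original structure with `json.loads`.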
9,004
73,067,801
I'm using databricks, and I have a repo in which I have a basic python module within which I define a class. I'm able to import and access the class and its methods from the databricks notebook. One of the methods within the class within the module looks like this (simplified) ```py def read_raw_json(self): self.df = spark.read.format("json").load(f"{self.base_savepath}/{self.resource}/{self.resource}*.json") ``` When I execute this particular method within the databricks notebook it gives me a `NameError` that 'spark' is not defined. The databricks runtime instantiates with a spark session stored in a variable called "spark". I assumed any methods executed in that runtime would inherit from the parent scope. Anyone know why this isn't the case? EDIT: I was able to get it to work by passing it the spark variable from within the notebook as an object to my class instantiation. But I don't want to call this an answer yet, because I'm not sure why I needed to.
2022/07/21
[ "https://Stackoverflow.com/questions/73067801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9485834/" ]
The INFILE statement is for reading a file as raw TEXT. If you have a SAS dataset then you can just SET the dataset to read it into a data step. So the equivalent for your attempted method would be something like: ``` data _null_; set "C:\myfiles\sample.sas7bdat" end=eof; if eof then put "Observations read=====:" _n_ ; run; ```
One cool thing about sas7bdat files is the amount of metadata stored with them. The row count of that file is already known by SAS as an attribute. You can use `proc contents` to read it. `Observations` is the number of rows in the table. ``` libname files "C:\myfiles"; proc contents data=files.sample; run; ``` A more advanced way is to open the file directly using macro functions. ``` %let dsid = %sysfunc(open(files.sample) ); /* Open the file */ %let nobs = %sysfunc(attrn(&dsid, nlobs) ); /* Get the number of observations */ %let rc = %sysfunc(close(&dsid) ); /* Close the file */ %put Total observations: &nobs ```
9,005
57,009,331
I'm currently coding a text based game in python in which the narrative will start differently depending on answers to certain questions. The first question is simple, a name. I can't seem to get the input to display in the correct text option after the prompt. I tried using "if name is True" and "if name is str" but they both skip to the else option, instead of displaying the correct option after input. ``` while True: try: # This will query for first user input, Name. name = str(input("Please enter your name: ")) except ValueError: print("Sorry, I didn't understand that.") # No valid input will restart loop. continue else: break if name is str: print("Ah, so " + name + " is your name? Excellent.") else: print("No name? That's fine, I suppose.") ``` So if I enter John as my name, I'd expect the output to be "Ah, so John is your name? Excellent." But instead I enter John and it outputs: Please enter your name: John No name? That's fine, I suppose. Any help would be appreciated. Thanks.
2019/07/12
[ "https://Stackoverflow.com/questions/57009331", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389547/" ]
Use `isinstance` instead of `is`. Also, you do not need to use `str(input())` because `input` already returns a `str`.

```
while True:
    try:
        # This will query for first user input, Name.
        name = input("Please enter your name: ")
    except ValueError:
        print("Sorry, I didn't understand that.")
        # No valid input will restart loop.
        continue
    else:
        break

if isinstance(name, str):
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

The result is:

```
Please enter your name: John
Ah, so John is your name? Excellent.
```

But you have to use `name.isalpha()` if you want to check whether `name` is a real name consisting only of alphabetic characters.

```
if name.isalpha():
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```
The exception handling around `input` is not required as the `input` function always returns a string. If the user does not enter a value an empty string is returned. Therefore your code can be simplified to ``` name = input("Please enter your name: ") # In python an empty string is considered `False` allowing # you to use an if statement like this. if name: print("Ah, so " + name + " is your name? Excellent.") else: print("No name? That's fine, I suppose.") ``` Further regarding usage of `is`: `is` is used to compare identity as in two variables refer to the same object. It's more common use is to test if a variable is `None` eg `if name is None` (`None` is always the same object). As mentioned you can use `type(name)` to obtain the type of variable *name* but this is discouraged in place of the builtin check `isinstance`. `==` on the other hand compares equality as in `if name == "Dave"`. `isinstance` does more than just check for a specific type, it also handles inherited types.
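A few lines that demonstrate the three distinctions discussed above - truthiness, equality (`==`), and identity (`is`); the variable names are illustrative:

```python
name = ""
assert not name            # an empty string is falsy
assert name == ""          # == compares values
assert name is not None    # is compares identity; the idiomatic check for None

value = None
if value is None:
    print("no value entered")
```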
9,006
55,327,900
I'm currently developing my first python program, a booking system using tkinter. I have a customer account creation screen that uses a number of Entry boxes. When the Entry box is clicked the following key bind is called to clear the entry box of its instruction (ie. "enter name") ``` def entry_click(event): if "enter" in event.widget.get(): event.widget.delete(0, "end") event.widget.insert(0, "") event.widget.configure(fg="white") #example Entry fields new_name = Entry(root) new_name.insert(0, "enter name") new_name.bind('<FocusIn>', entry_click) new_name.bind('<FocusOut>', entry_focusout) new_email = Entry(root) new_email.insert(0, "enter email") new_email.bind('<FocusIn>', entry_click) new_email.bind('<FocusOut>', entry_focusout) ``` In a similar vein I'm looking for a way to create a universal event where the appropriate text for that unique entry field is returned to its initial state (ie. "enter first name" or "enter email") if the box is empty when clicked away from. ``` def entry_focusout(): if not event.widget.get(): event.widget.insert(fg="grey") event.widget.insert(0, #[appropriate text here]) ``` How would I be able to do this? Many thanks.
2019/03/24
[ "https://Stackoverflow.com/questions/55327900", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11240330/" ]
Just add an arg to your `entry_focusout` event and bind to the `Entry` widgets with a lambda function. ``` from tkinter import * root = Tk() def entry_click(event): if event.widget["foreground"] == "grey": event.widget.delete(0, "end") event.widget.insert(0, "") event.widget.configure(fg="black") def entry_focusout(event, msg): if not event.widget.get(): event.widget.configure(fg="grey") #you had a typo here event.widget.insert(0, msg) new_name = Entry(root,fg="grey") new_name.insert(0, "enter name") new_name.bind('<FocusIn>', entry_click) new_name.bind('<FocusOut>', lambda e: entry_focusout(e, "enter name")) new_email = Entry(root,fg="grey") new_email.insert(0, "enter email") new_email.bind('<FocusIn>', entry_click) new_email.bind('<FocusOut>', lambda e: entry_focusout(e, "enter email")) new_name.pack() new_email.pack() root.mainloop() ```
> > **Question**: Return empty `tk.Entry` box to previous state when clicked away > > > The following is a `OOP` **universal** solution. Thanks to @Henry Yik, for the `if event.widget["foreground"] == "grey":` part. ``` class EntryInstruction(tk.Entry): def __init__(self, parent, instruction=None): super().__init__(parent) self.instruction = instruction self.focus_out(None) self.bind('<FocusIn>', self.focus_in) self.bind('<FocusOut>', self.focus_out) def focus_in(self, event): if self["foreground"] == "grey": self.delete(0, "end") self.configure(fg="black") def focus_out(self, event): if not self.get(): self.configure(fg="grey") self.insert(0, self.instruction) ``` > > **Usage**: > > > ``` class App(tk.Tk): def __init__(self): super().__init__() entry = EntryInstruction(self, '<enter name>') entry.grid(row=0, column=0) entry = EntryInstruction(self, '<enter email>') entry.grid(row=1, column=0) if __name__ == "__main__": App().mainloop() ``` ***Tested with Python: 3.5***
9,008
6,003,932
``` import cgi def fill(): s = """\ <html><body> <form method="get" action="./show"> <p>Type a word: <input type="text" name="word"> <input type="submit" value="Submit"</p> </form></body></html> """ return s # Receive the Request object def show(req): # The getfirst() method returns the value of the first field with the # name passed as the method argument word = req.form.getfirst('word', '') print "Creating a text file with the write() method." text_file = open("/var/www/cgi-bin/input.txt", "w") text_file.write(word) text_file.close() # Escape the user input to avoid script injection attacks #word = cgi.escape(word) test(0) '''Input triggers the application to start its process''' simplified_file=open("/var/www/cgi-bin/output.txt", "r").read() s = """\ <html><body> <p>The submitted word was "%s"</p> <p><a href="./fill">Submit another word!</a></p> </body></html> """ return s % simplified_file def test(flag): print flag while flag!=1: x=1 return ``` This mod\_python program's fill method send the text to show method where it writes to input.txt file which is used by my application, till my application is running i don't want rest of the statements to work so i have called a function test, in which i have a while loop which will be looping continuously till the flag is set to 1. If its set to 1 then it will break the while loop and continue execution of rest of the statements. I have made my application to pass test flag variable to set it as 1. according to my logic, it should break the loop and return to show function and continue executing rest but its not happening in that way, its continuously loading the page! please help me through this.. Thank you.. :)
2011/05/14
[ "https://Stackoverflow.com/questions/6003932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/640666/" ]
``` while flag!=1: x=1 ``` This loop won't ever finish. When is `flag` ever going to change so that `flag != 1` is False? Remember, `flag` is a *local* variable so changing it anywhere else isn't going to have an effect -- especially since no other code is going to have the opportunity to run while that loop is still running. It's really not very clear what you're trying to achieve here. You shouldn't be trying to delay code with infinite loops. I'd re-think your architecture carefully.
It is not the most elegant way to do this, but if you need to change the value of `flag` outside your method, you should use it as a `global` variable:

```
def test():
    global flag # use this everywhere you're using flag.
    print flag
    while flag != 1:
        x = 1
    return
```

But to make a waiting method, have a look at [Python Event() objects](http://docs.python.org/library/threading.html#event-objects); they have a wait() method that blocks until the Event's flag is set.
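A runnable sketch of the `Event`-based approach the last paragraph points to (the worker body is a stand-in for the real application, shown here in Python 3 syntax):

```python
import threading

done = threading.Event()

def application():
    # ... the real work would happen here ...
    done.set()  # the equivalent of setting flag = 1

worker = threading.Thread(target=application)
worker.start()

# Blocks here instead of busy-waiting; returns True once set() is called
finished = done.wait(timeout=5)
worker.join()
print(finished)
```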
9,009
59,282,950
We use python with pyspark api in order to run simple code on spark cluster. ``` from pyspark import SparkContext, SparkConf conf = SparkConf().setAppName('appName').setMaster('spark://clusterip:7077') sc = SparkContext(conf=conf) rdd = sc.parallelize([1, 2, 3, 4]) rdd.map(lambda x: x**2).collect() ``` It works when we setup a spark cluster locally and with dockers. We would now like to start an emr cluster and test the same code. And seems that pyspark can't connect to the spark cluster on emr We opened ports 8080 and 7077 from our machine to the spark master We are getting past the firewall and just seems that nothing is listening on port 7077 and we get connection refused. We found [this](https://medium.com/@kulasangar/creating-a-spark-job-using-pyspark-and-executing-it-in-aws-emr-70dba5e98a75) explaining how to serve a job using the cli but we need to run it directly from pyspark api on the driver. What are we missing here? How can one start an emr cluster and actually run pyspark code locally on python using this cluster? edit: running this code from the master itself works As opposed to what was suggested, when connecting to the master using ssh, and running python from the terminal, the very same code (with proper adjustments for the master ip, given it's the same machine) works. No issues no problems. How does this make sense given the documentation that clearly states otherwise?
2019/12/11
[ "https://Stackoverflow.com/questions/59282950", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4406595/" ]
You try to run pyspark (which calls spark-submit) from a remote computer outside the spark cluster. This is technically possible but it is not the intended way of deploying applications. In yarn mode, it will make your computer participate in the spark protocol as a client. Thus it would require opening several ports and installing exactly the same spark jars as on spark aws emr.

From the [spark submit doc](https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit):

```
A common deployment strategy is to submit your application from a gateway machine
that is physically co-located with your worker machines
(e.g. Master node in a standalone EC2 cluster)
```

A simple deploy strategy is

* sync code to master node via rsync, scp or git

```
cd ~/projects/spark-jobs # on local machine
EMR_MASTER_IP='255.16.17.13'
TARGET_DIR=spark_jobs
rsync -avze "ssh -i ~/dataScienceKey.pem" --rsync-path="mkdir -p ${TARGET_DIR} && rsync" --delete ./ hadoop@${EMR_MASTER_IP}:${TARGET_DIR}
```

* ssh to the master node

```
ssh -i ~/dataScienceKey.pem hadoop@${EMR_HOST}
```

* run `spark-submit` on the master node

```
cd spark_jobs
spark-submit --master yarn --deploy-mode cluster my-job.py
```

```
# my-job.py
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("my-job-py").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize([1, 2, 3, 4])
res = rdd.map(lambda x: x**2).collect()
print(res)
```

There is a way to submit the job directly to spark emr without syncing. Spark EMR runs [Apache Livy](http://livy.apache.org/) on port `8998` by default. It is a REST web service which allows submitting jobs via a REST API. You can pass the same `spark-submit` parameters with a `curl` script from your machine.
See the [doc](https://aws.amazon.com/de/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/). For interactive development we have also configured locally running Jupyter notebooks which automatically submit cell runs to Livy. This is done via the [sparkmagic project](https://github.com/jupyter-incubator/sparkmagic).
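As a sketch of what such a Livy batch submission could look like: the host placeholder and the S3 path below are assumptions for illustration, not values from the answer, and only the payload is built here (no network call is made).

```python
import json

# Hypothetical Livy endpoint on the EMR master node (port 8998 per the answer)
livy_url = "http://<emr-master-ip>:8998/batches"

# Hypothetical batch payload pointing at a PySpark script uploaded to S3
payload = {
    "file": "s3://my-bucket/jobs/my-job.py",
    "conf": {"spark.submit.deployMode": "cluster"},
}
body = json.dumps(payload)
print(body)  # this JSON body would be POSTed to livy_url, e.g. via curl or urllib
```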
According to this [Amazon Doc](https://aws.amazon.com/premiumsupport/knowledge-center/emr-submit-spark-job-remote-cluster/?nc1=h_ls), you can't do that: > > *Common errors* > > > **Standalone mode** > > > Amazon EMR doesn't support standalone mode for Spark. It's not > possible to submit a Spark application to a remote Amazon EMR cluster > with a command like this: > > > `SparkConf conf = new SparkConf().setMaster("spark://master_url:7077”).setAppName("WordCount");` > > > Instead, set up your local machine as explained earlier in this > article. Then, submit the application using the spark-submit command. > > > You can follow the above linked resource to configure your local machine in order to submit spark jobs to EMR Cluster. Or more simpler, use the ssh key you specified when you create your cluster to connect to the master node and submit spark jobs: ``` ssh -i ~/path/ssh_key hadoop@$<master_ip_address> ```
9,010
1,247,133
Working in python I want to extract a dataset with the following structure: Each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurs down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
You should probably use a `defaultdict` for this:

```
from collections import defaultdict

itemdict = defaultdict(list)
for id, parent_id in itemlist:
    itemdict[parent_id].append(id)
```

then you can recursively print it (with indentation) like

```
def printitem(id, depth=0):
    print ' '*depth, id
    for child in itemdict[id]:
        printitem(child, depth+1)
```
Are you saying that each item only maintains a reference to its parent? If so, then how about

```
def getChildren(item):
    children = []
    for possibleChild in allItems:
        if possibleChild.parent == item:
            children.append(possibleChild)
            children.extend(getChildren(possibleChild))
    return children
```

This returns a list that contains all items that are in some way descended from item. (Note the `append` of the child itself - without it the recursion would only ever collect empty lists.)
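A self-contained variant of the same recursion, using plain `(id, parent_id)` tuples instead of objects so it runs as-is (the sample tree is invented):

```python
# id 1 is the root; 2 and 3 are its children; 4 is under 2; 5 is under 4
items = [(1, None), (2, 1), (3, 1), (4, 2), (5, 4)]

def descendants(item_id, items):
    result = []
    for child_id, parent_id in items:
        if parent_id == item_id:
            result.append(child_id)                      # the child itself
            result.extend(descendants(child_id, items))  # then its subtree
    return result

print(descendants(1, items))  # [2, 4, 5, 3]
```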
9,011
7,394,301
I have a class `karte`. I want to know if there is a way of dynamically creating names for my new objects. Normal object creation would be

> karta=karte()

but I am curious about something like this

> karta[i]=karte()

or something like that, where i would be the number of the for loop. And at the end I would call the objects like this

```
karta1.boja
karta2.boja
karta3.boja
```

How can I achieve that? I'm new to Python, thanks.
2011/09/12
[ "https://Stackoverflow.com/questions/7394301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/666160/" ]
You can create a list of objects like this: ``` karta = [] for i in range(10): karta.append(karte()) ``` Or using a list comprehension: ``` karta = [karte() for i in range(10)] ``` Now you can access the objects like this: `karta[i]`. To accomplish your last example, you have to modify the `globals()` dictionary. I do not endorse this at all, but here is how to do it: ``` g = globals() for i in range(10): g["karte" + str(i)] = karte() ``` This is not very pythonic though, you should just use a list.
Unless you have a real need to keep the objects out of a list and have names like karta1, karta2, etc., I would do as you suggest and use a list with a loop to initialize:

```
karta = []
for i in some_range:
    karta.append(karte())
```
9,016
69,238,333
I'm trying to fit a curve to a differential equation. For the sake of simplicity, I'm just doing the logistic equation here. I wrote the code below but I get an error shown below it. I'm not quite sure what I'm doing wrong. ``` import numpy as np import pandas as pd import scipy.optimize as optim from scipy.integrate import odeint df_yeast = pd.DataFrame({'cd': [9.6, 18.3, 29., 47.2, 71.1, 119.1, 174.6, 257.3, 350.7, 441., 513.3, 559.7, 594.8, 629.4, 640.8, 651.1, 655.9, 659.6], 'td': np.arange(18)}) N0 = 1 parsic = [5, 2] def logistic_de(t, N, r, K): return r*N*(1 - N/K) def logistic_solution(t, r, K): return odeint(logistic_de, N0, t, (r, K)) params, _ = optim.curve_fit(logistic_solution, df_yeast['td'], df_yeast['cd'], p0=parsic); ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ValueError: object too deep for desired array --------------------------------------------------------------------------- error Traceback (most recent call last) <ipython-input-94-2a5a467cfa43> in <module> ----> 1 params, _ = optim.curve_fit(logistic_solution, df_yeast['td'], df_yeast['cd'], p0=parsic); ~/SageMath/local/lib/python3.9/site-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs) 782 # Remove full_output from kwargs, otherwise we're passing it in twice. 
783 return_full = kwargs.pop('full_output', False) --> 784 res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs) 785 popt, pcov, infodict, errmsg, ier = res 786 ysize = len(infodict['fvec']) ~/SageMath/local/lib/python3.9/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 420 if maxfev == 0: 421 maxfev = 200*(n + 1) --> 422 retval = _minpack._lmdif(func, x0, args, full_output, ftol, xtol, 423 gtol, maxfev, epsfcn, factor, diag) 424 else: error: Result from function call is not a proper array of floats. ```
2021/09/18
[ "https://Stackoverflow.com/questions/69238333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796158/" ]
@hpaulj has pointed out the problem with the shape of the return value from `logistic_solution` and shown that fixing that eliminates the error that you reported. There is, however, another problem in the code. The problem does not generate an error, but it does result in an incorrect solution to your test problem (the logistic differential equation). By default, `odeint` expects the `t` argument of the function that computes the differential equations to be the *second* argument. Either change the order of the first two arguments of `logistic_de`, or add the argument `tfirst=True` to your call of `odeint`. The second option is a bit nicer, because it will allow you to use `logistic_de` with `scipy.integrate.solve_ivp` if you decide to try that function instead of `odeint`.
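A minimal sketch of that second option (it assumes SciPy 1.1 or newer, where `odeint` gained the `tfirst` keyword, and uses the question's starting guesses `r = 5`, `K = 2` just to show the solver running):

```python
import numpy as np
from scipy.integrate import odeint

N0 = 1

def logistic_de(t, N, r, K):
    # t comes first here, which is why odeint needs tfirst=True
    return r * N * (1 - N / K)

def logistic_solution(t, r, K):
    # odeint returns shape (len(t), 1); ravel() flattens it to 1-d
    return odeint(logistic_de, N0, t, args=(r, K), tfirst=True).ravel()

t = np.linspace(0, 5, 50)
N = logistic_solution(t, 5, 2)
print(round(N[-1], 3))  # the solution saturates at the carrying capacity K = 2
```

With the time argument handled this way, the same `logistic_solution` can then be handed to `curve_fit` as in the question.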
A sample run of `logistic_solution` produces a (18,1) result: ``` In [268]: logistic_solution(df_yeast['td'], *parsic) Out[268]: array([[ 1.00000000e+00], [ 2.66666671e+00], [ 4.33333337e+00], [ 1.00000004e+00], [-1.23333333e+01], [-4.06666666e+01], [-8.90000000e+01], [-1.62333333e+02], [-2.65666667e+02], [-4.04000000e+02], [-5.82333333e+02], [-8.05666667e+02], [-1.07900000e+03], [-1.40733333e+03], [-1.79566667e+03], [-2.24900000e+03], [-2.77233333e+03], [-3.37066667e+03]]) In [269]: _.shape Out[269]: (18, 1) ``` but the `y` values is ``` In [281]: df_yeast['cd'].values.shape Out[281]: (18,) ``` Define an alternative function that returns a 1d array: ``` In [282]: def foo(t,r,K): ...: return logistic_solution(t,r,K).ravel() ``` This works: ``` In [283]: params, _ = optim.curve_fit(foo, df_yeast['td'], df_yeast['cd'], p0=parsic) In [284]: params Out[284]: array([16.65599815, 15.52779946]) ``` test the `params`: ``` In [287]: logistic_solution(df_yeast['td'], *params) Out[287]: array([[ 1. ], [ 8.97044688], [ 31.45157847], [ 66.2980814 ], [111.36464226], [164.50594767], [223.5766842 ], [286.43153847], [350.92519706], [414.91234658], [476.24767362], [532.78586477], [582.38160664], [622.88958582], [652.1644889 ], [668.0610025 ], [668.43381319], [651.13760758]]) In [288]: df_yeast['cd'].values Out[288]: array([ 9.6, 18.3, 29. , 47.2, 71.1, 119.1, 174.6, 257.3, 350.7, 441. , 513.3, 559.7, 594.8, 629.4, 640.8, 651.1, 655.9, 659.6]) ``` [![enter image description here](https://i.stack.imgur.com/VIztW.png)](https://i.stack.imgur.com/VIztW.png) `too deep` in this context means a 2d array, when it should be 1d, in order to compare with `ydata` ``` ydata : array_like The dependent data, a length M array - nominally ``f(xdata, ...)``. ```
9,017
35,930,924
I am new to Python (I am using Python 2.7) and PyCharm, but I need to use the MySQLdb module to complete my task. I spent time searching for guides or tips and finally ended up here, but did not find MySQLdb to install.

[MySQL-python](http://i.stack.imgur.com/EhTaq.png)

But there is an error:

[Error](http://i.stack.imgur.com/lsXZi.png)
2016/03/11
[ "https://Stackoverflow.com/questions/35930924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5886173/" ]
I have a suggestion: if you have installed the MySQL database, follow this. Open PyCharm and click File -> Settings -> Project -> Project Interpreter, then select your Python interpreter and click the install button (the little green plus sign), enter "MySQL-Python" and click the "Install Package" button; this will install MySQL-Python successfully. If you have not installed the MySQL database, you need to install it first.
From Windows Command Prompt / Linux Shell install using wheel ``` pip install wheel ``` Download 32 or 64 Bit version from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python> **64-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl ``` **32-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win32.whl ``` **Using Anaconda2:** Open a Command Prompt ``` $ conda install mysql-python ``` Press y to proceed with installation
9,018
12,609,728
I need to change the reference of a function in a Mach-O binary to a custom function defined in my own dylib. The process I am now following is:

1. Replacing references to the old functions with the new ones, e.g. `_fopen` to `_mopen`, using sed.
2. Opening the Mach-O binary in [MachOView](http://sourceforge.net/projects/machoview/) to find the address of the entities I want to change. I then manually change the information in the binary using a hex editor.

Is there a way I can automate this process, i.e. write a program to read the symbols and dynamic loading info and then change them in the executable? I was looking at the Mach-O header files at `/usr/include/mach-o` but am not entirely sure how to use them to get this information. Are there any libraries, in C or Python, that help do the same?
2012/09/26
[ "https://Stackoverflow.com/questions/12609728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/390984/" ]
Just a guess, and at the moment I am not able to test it, but try the following code:

```
<Button.Template>
    <ControlTemplate>
        <Border Background="{StaticResource BlueGradient}" CornerRadius="5">
            <DockPanel>
                <Image x:Name="imgIcon" DockPanel.Dock="Left" Height="32" Margin="4"/>
                <TextBlock DockPanel.Dock="Right" Text="Test" VerticalAlignment="Center" HorizontalAlignment="Left" Foreground="White" />
            </DockPanel>
        </Border>
    </ControlTemplate>
</Button.Template>
```

EDIT: Ok, next try: instead of adding your ShinyButton to the Window (remove `this.AddChild(shiny)`), try adding your ShinyButton to the Grid with this code:

```
NameOfGrid.Children.Add(shiny);
```

The reason for this error is that your Window can have just one child, and I bet in your case there is already a Grid as this child.
Just tried your code and it works fine, with a couple of caveats. In order to get it to work I needed to drop the ResourceDictionary link, as I don't have that file. Is there any chance that some content is being defined, rather than just a style/template etc.? Also, I note that your code has an x:Class="ShinyButton"; is there any content defined in the code as well as in the XAML?
9,021
22,960,956
The Python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows retrieving bibliographic information about books from online resources. In particular, the script `isbn_meta [number]` retrieves information about the book with the given ISBN number `[number]`. Among other sources, it uses data from Google via the Google APIs, as in `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the URL can be adjusted to search e.g. for a general `[keyword]`: `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta`, which searches and retrieves bibliographic data based on `[keyword]`? My problem is to some extent that it is not obvious to me how to work with the Python package in a consistent way.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
You gave me a good challenge with this one, but here you go. A method that returns the image with its bottom half coloured with some colour:

```
- (UIImage *)image:(UIImage *)image withBottomHalfOverlayColor:(UIColor *)color
{
    CGRect rect = CGRectMake(0.f, 0.f, image.size.width, image.size.height);

    if (UIGraphicsBeginImageContextWithOptions) {
        CGFloat imageScale = 1.f;
        if ([image respondsToSelector:@selector(scale)])
            imageScale = image.scale;
        UIGraphicsBeginImageContextWithOptions(image.size, NO, imageScale);
    }
    else {
        UIGraphicsBeginImageContext(image.size);
    }

    [image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeSourceIn);
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGRect rectToFill = CGRectMake(0.f, image.size.height * 0.5f, image.size.width, image.size.height * 0.5f);
    CGContextFillRect(context, rectToFill);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
```

Example of use:

```
UIImage *image = [UIImage imageNamed:@"q.png"];
image = [self image:image withBottomHalfOverlayColor:[UIColor cyanColor]];
self.imageView.image = image;
```

My results:

![enter image description here](https://i.stack.imgur.com/kG1hb.png)![enter image description here](https://i.stack.imgur.com/hyCpe.png)
You can use some tricks: * Use 2 different images and change the whole background. * Use one background color (light blue) and 2 images, one with the bottom half transparent
9,022
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
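A self-contained sketch of that fix (the HTML bytes here are a made-up stand-in for what `m.read()` returns, since `urlopen(...).read()` gives `bytes` on Python 3):

```python
import re

# stand-in for msg = m.read(), which is a bytes object on Python 3
msg = b'<p>intro</p><a href="http://google.com/about">About</a><a href=\'/mail\'>Mail</a>'

# note the rb prefix: a bytes pattern can be matched against bytes
linkregex = re.compile(rb'<a\s*href=[\'"](.*?)[\'"].*?>')

links = linkregex.findall(msg)
print(links)  # [b'http://google.com/about', b'/mail']
```

The matches come back as `bytes` too; call `.decode()` on them if you need `str` links.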
This worked for me in Python 3. Hope it helps.

```
import urllib.request
import re

urls = ["https://google.com", "https://nytimes.com", "http://CNN.com"]
i = 0
regex = '<title>(.+?)</title>'
pattern = re.compile(regex)

while i < len(urls):
    htmlfile = urllib.request.urlopen(urls[i])
    htmltext = htmlfile.read()
    titles = re.search(pattern, str(htmltext))
    print(titles)
    i += 1
```

And also this version, in which I added **b** before the regex to make it a bytes pattern:

```
import urllib.request
import re

urls = ["https://google.com", "https://nytimes.com", "http://CNN.com"]
i = 0
regex = b'<title>(.+?)</title>'
pattern = re.compile(regex)

while i < len(urls):
    htmlfile = urllib.request.urlopen(urls[i])
    htmltext = htmlfile.read()
    titles = re.search(pattern, htmltext)
    print(titles)
    i += 1
```
9,027
55,937,156
I am trying to read glove.6B.300d.txt file into a Pandas dataframe. (The file can be downloaded from here: <https://github.com/stanfordnlp/GloVe>) Here are the exceptions I am getting: ``` glove = pd.read_csv(filename, sep = ' ') ParserError: Error tokenizing data. C error: EOF inside string starting at line 8 glove = pd.read_csv(filename, sep = ' ', engine = 'python') ParserError: field larger than field limit (131072) ```
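For what it's worth (the answers attached to this row address a different question), here is a sketch of the workaround I would expect to help: the GloVe token column contains bare `"` characters that the parser treats as unterminated quotes, so disabling quoting (and the header, since the file has none) usually gets past both errors. The tiny file written below is a hypothetical stand-in for the real 300-dimensional data:

```python
import csv
import pandas as pd

# hypothetical miniature of glove.6B.300d.txt: a token followed by its vector,
# including a bare quote character like the real vocabulary contains
with open("mini_glove.txt", "w", encoding="utf-8") as f:
    f.write('the 0.1 0.2 0.3\n')
    f.write('" 0.4 0.5 0.6\n')  # this token is what breaks the default quoting rules
    f.write('cat 0.7 0.8 0.9\n')

glove = pd.read_csv("mini_glove.txt", sep=' ', header=None,
                    index_col=0, quoting=csv.QUOTE_NONE)
print(glove.shape)  # (3, 3)
```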
2019/05/01
[ "https://Stackoverflow.com/questions/55937156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8270077/" ]
You can do it in two steps.

1. Get all records which satisfy a criterion like `dimension === 2`:

   `let resultArr = jsObjects.filter(data => { return data.dimension === 2 })`

2. Get a random object from the result:

   `var randomElement = resultArr[Math.floor(Math.random() * resultArr.length)];`

```js
var arr = [{
    dimension: 2,
    x: -973.097,
    y: -133.411,
    z: 38.2531
  },
  {
    dimension: 3,
    x: -116.746,
    y: -48.414,
    z: 17.226
  },
  {
    dimension: 2,
    x: -946.746,
    y: -128.411,
    z: 37.786
  },
  {
    dimension: 2,
    x: -814.093,
    y: -106.724,
    z: 37.589
  }
]

// Filter with the specific criterion
let resultArr = arr.filter(data => {
  return data.dimension === 2
})

// Get a random element
var randomElement = resultArr[Math.floor(Math.random() * resultArr.length)];
console.log(randomElement)
```
You could use `Math.random()` to pick a random index in the range `0` to the `length` of the filtered array:

```
let result = jsObjects.filter(data => {
  return data.dimension === 2
})

let randomObj = result[Math.floor(Math.random() * result.length)]
```
9,037
26,193,193
How can I change the priority of the paths in `sys.path` in Python 2.7? I know that I can use the `PYTHONPATH` environment variable, but this is what I get:

```
$ PYTHONPATH=/tmp python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> for i in sys.path:
...     print i
...
/usr/local/lib/python2.7/dist-packages/pycuda-2014.1-py2.7-linux-x86_64.egg
/usr/local/lib/python2.7/dist-packages/pytest-2.6.2-py2.7.egg
/usr/local/lib/python2.7/dist-packages/pytools-2014.3-py2.7.egg
/usr/local/lib/python2.7/dist-packages/py-1.4.24-py2.7.egg
/usr/lib/python2.7/dist-packages
/tmp
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/PILcompat
/usr/lib/python2.7/dist-packages/gtk-2.0
/usr/lib/python2.7/dist-packages/ubuntu-sso-client
>>>
```

`/tmp` is added between `/usr/lib/python2.7/dist-packages` and `/usr/lib/python2.7`. My goal is to make Python load packages from `/usr/local/lib/python2.7/dist-packages` first. Here is what I want:

```
$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.version
<module 'numpy.version' from '/usr/local/lib/python2.7/dist-packages/numpy/version.pyc'>
>>>
```

If I install python-numpy with `apt-get install python-numpy`, Python will try to load it from `/usr/lib/python2.7` and not the one I compiled.
2014/10/04
[ "https://Stackoverflow.com/questions/26193193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470911/" ]
As you may know, [`sys.path` is initialized from](https://docs.python.org/2/tutorial/modules.html#the-module-search-path): * the current directory * your `PYTHONPATH` * an installation-dependent default However unfortunately that is only part of the story: `setuptools` creates [`easy-install.pth`](https://pythonhosted.org/setuptools/easy_install.html) files, which also modify `sys.path` and worst of all they *prepend* packages and therefore totally mess up the order of directories. In particular (at least on my system), there is `/usr/local/lib/python2.7/dist-packages/easy-install.pth` with the following contents: ``` import sys; sys.__plen = len(sys.path) /usr/lib/python2.7/dist-packages import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new) ``` This causes `/usr/lib/python2.7/dist-packages` to be *prepended even before your `PYTHONPATH`*! What you could do is simply change the 2nd line in this file to ``` /usr/local/lib/python2.7/dist-packages ``` and you will get your desired priority. However beware this file might be overwritten or changed again by a future `setuptools` invocation!
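A small simulation of what those two `import sys; ...` lines do (plain lists and variables standing in for the real `sys.path` and `sys.__egginsert`), which shows why the `.pth` entry ends up ahead of the `PYTHONPATH` directory:

```python
# state just before site.py processes easy-install.pth: the PYTHONPATH
# entry (/tmp in the question) is already ahead of the site directories
sys_path = ['/tmp', '/usr/local/lib/python2.7/dist-packages']

# first line of the .pth file: remember the current length
plen = len(sys_path)

# the bare path line: site.py appends it like any ordinary .pth entry
sys_path.append('/usr/lib/python2.7/dist-packages')

# last line: move everything added after 'plen' to position __egginsert,
# which defaults to 0 -- i.e. prepend it, ahead of PYTHONPATH
new = sys_path[plen:]
del sys_path[plen:]
p = 0  # getattr(sys, '__egginsert', 0) on the first run
sys_path[p:p] = new

print(sys_path[0])  # /usr/lib/python2.7/dist-packages
```

Editing the bare path line, as suggested above, changes which directory gets this prepend treatment.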
We encountered an almost identical situation and wanted to expand upon @kynan's response, which is spot-on. In the case where you have such an `easy-install.pth` that you want to overcome, but which you cannot modify (say you are a user with no root/admin access), you can do the following:

* Set up an [alternate Python installation scheme](https://docs.python.org/3/install/#alternate-installation)
	+ e.g. we use a PYTHON HOME install (setting PYTHONUSERBASE)
* Create a user/home site-packages
	+ You can do this by installing a package into the user env: `pip install <package> --user`
* Create a pth file to set `sys.__egginsert`, to work around the system/distribution easy-install.pth
	+ Create a `$PYTHONUSERBASE/lib/python2.7/site-packages/fix_easy_install.pth`
	+ Containing: `import sys; sys.__egginsert = len(sys.path);`

This will set `sys.__egginsert` to point to the end of your `sys.path`, including your user-site paths. The nefarious system/dist easy-install.pth will then insert its items at the end of the system path.
9,038
58,466,616
I am executing the following sqlite command:

```
c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename")
```

so that `c.fetchall()` is as follows:

```
(('Griffin','John', 7),
 ('Griffin','James', 23),
 ('Griffin','Mary',30),
 ('Griffith', 'John', 4),
 ('Griffith','Catherine', 5)
)
```

Is it possible to construct a dict of the following form using a dict comprehension?

```
{'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}}
```

This is as far as I got:

```
counts = {s:(f,c) for s,f,c in c.fetchall()}
```

which overwrites values. I'm using Python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
You can use a [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create the inner dicts automatically when needed: ``` from collections import defaultdict data = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) out = defaultdict(dict) for (name, first, value) in data: out[name][first] = value # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
Yes, with something like this:

```py
my_query = (('Griffin','John', 7),
 ('Griffin','James', 23),
 ('Griffin','Mary',30),
 ('Griffith', 'John', 4),
 ('Griffith','Catherine', 5)
)

dict_query = {}
for key1, key2, value in my_query:
    if key1 not in dict_query:
        dict_query[key1] = {}
    dict_query[key1][key2] = value
```

Edit 1: A more elegant version.

```py
from collections import defaultdict

dict_query = defaultdict(dict)
for key1, key2, value in my_query:
    dict_query[key1][key2] = value
```
9,039
66,912,406
I have been following a tutorial on Udemy for Python, and at the moment I'm supposed to get a Django app deployed. Since I already had a VPS, I didn't go with the tutorial's solution using Google Cloud, so I tried to configure the app on my VPS, which is also running Plesk. I followed the tutorial at <https://www.plesk.com/blog/tag/django-plesk/> to the letter as best I could, but keep getting the 403 error.

```
httpdocs
-djangoProject
---djangoProject
------asgi.py
------__init__.py
------settings.py
------urls.py
------wsgi.py
---manage.py
-passenger_wsgi.py
-python-app-venv
-tmp
```

passenger_wsgi.py:

```
import sys, os
ApplicationDirectory = 'djangoProject'
ApplicationName = 'djangoProject'
VirtualEnvDirectory = 'python-app-venv'
VirtualEnv = os.path.join(os.getcwd(), VirtualEnvDirectory, 'bin', 'python')
if sys.executable != VirtualEnv: os.execl(VirtualEnv, VirtualEnv, *sys.argv)
sys.path.insert(0, os.path.join(os.getcwd(), ApplicationDirectory))
sys.path.insert(0, os.path.join(os.getcwd(), ApplicationDirectory, ApplicationName))
sys.path.insert(0, os.path.join(os.getcwd(), VirtualEnvDirectory, 'bin'))
os.chdir(os.path.join(os.getcwd(), ApplicationDirectory))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', ApplicationName + '.settings')
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```

Passenger is enabled in

> "Tools & Settings > Apache Web Server"

In "Websites & Domains > Domain > Hosting & DNS > Apache & nginx settings" I've got "Additional directives for HTTP" and "Additional directives for HTTPS", both with:

```
PassengerEnabled On
PassengerAppType wsgi
PassengerStartupFile passenger_wsgi.py
```

and nginx proxy mode marked "Reverse Proxy Server (nginx)" is also running. I have no idea what else I can provide to aid in getting a solution, so if you're willing to assist and need more info, please let me know.
Very thankful in advance.

EDIT: On a previous attempt, deploying a real app on a subdomain, I was getting:

> [Thu Apr 01 22:52:37.928495 2021] [autoindex:error] [pid 23614:tid 140423896925952] [client xx:xx:xx:xx:0] AH01276: Cannot serve directory /var/www/vhosts/baya.pt/leve/leve/: No matching DirectoryIndex (index.html,index.cgi,index.pl,index.php,index.xhtml,index.htm,index.shtml) found, and server-generated directory index forbidden by Options directive

This time I'm getting no errors logged.

EDIT2: @Chris: Not sure what you mean; I find no errors in the log folders (over SSH), but on Plesk I get this several times:

> 2021-04-01 23:40:48 Error 94.61.142.214 403 GET / HTTP/1.0 https://baya.pt/ Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 2.52 K Apache SSL/TLS access
> 2021-04-01 23:40:48 Error 94.61.142.214 AH01276: Cannot serve directory /var/www/vhosts/baya.pt/httpdocs/djangoProject/: No matching DirectoryIndex (index.html,index.cgi,index.pl,index.php,index.xhtml,index.htm,index.shtml) found, and server-generated directory index forbidden by Options directive, referer: https://baya.pt/ Apache error

EDIT 3: Removing the Apache directives and adding these nginx directives:

```
passenger_enabled on;
passenger_app_type wsgi;
passenger_startup_file passenger_wsgi.py;
```

now gives me a Passenger error page, with the log as follows:

```
[ N 2021-04-01 23:50:59.1819 908/T9 age/Cor/CoreMain.cpp:671 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ N 2021-04-01 23:50:59.1819 908/T1 age/Cor/CoreMain.cpp:1246 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ N 2021-04-01 23:50:59.1820 908/Tb Ser/Server.h:902 ]: [ApiServer] Freed 0 spare client objects [ N 2021-04-01 23:50:59.1820 908/Tb Ser/Server.h:558 ]: [ApiServer] Shutdown finished [ N 2021-04-01 23:50:59.1820 908/T9 Ser/Server.h:902 ]: [ServerThr.1] Freed 0 spare client objects [ N 2021-04-01 23:50:59.1820 908/T9 Ser/Server.h:558 ]: [ServerThr.1] Shutdown finished [ N 2021-04-01 23:50:59.2765 30199/T1 age/Wat/WatchdogMain.cpp:1373 ]: Starting Passenger watchdog... [ N 2021-04-01 23:50:59.2871 908/T1 age/Cor/CoreMain.cpp:1325 ]: Passenger core shutdown finished [ N 2021-04-01 23:50:59.3329 30209/T1 age/Cor/CoreMain.cpp:1340 ]: Starting Passenger core... [ N 2021-04-01 23:50:59.3330 30209/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode. [ N 2021-04-01 23:50:59.3472 30209/T1 age/Cor/CoreMain.cpp:1015 ]: Passenger core online, PID 30209 [ N 2021-04-01 23:51:01.4339 30209/T7 age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours) App 31762 output: Error: Directory '/var/www/vhosts/baya.pt' is inaccessible because of a filesystem permission error. [ E 2021-04-01 23:51:02.9127 30209/Tc age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/vhosts/baya.pt/httpdocs: Directory '/var/www/vhosts/baya.pt' is inaccessible because of a filesystem permission error. ```
2021/04/01
[ "https://Stackoverflow.com/questions/66912406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6291059/" ]
The '.no-content' class has `display: none`, which instantly removes the element, so the animation does not take place. I just removed that and it worked just fine. I added the other CSS to fix the position of the div (I assumed you want it like this).

```js
const App = () => {
  const [opened, setOpened] = React.useState(false)
  const overlay = document.getElementById('overlay');

  const handleClick = () => {
    setOpened(!opened)
    if (overlay) {
      if (!opened)
        overlay.style.display = "block"
      else {
        setTimeout(() => {
          overlay.style.display = 'none'
        }, 1000) // animation time
      }
    }
  }

  return (
    <div>
      <button className='btn' onClick={handleClick}>{opened ? 'Close' : 'Open'}</button>
      <div style={{margin: 50}}>Some page content that will be covered</div>
      <div id="overlay" className={opened ? 'content' : 'no-content'}>
        <p>
          Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </p>
        <button>I do nothing</button>
        <p>
          Tellus molestie nunc non blandit massa enim nec dui. Sapien et ligula ullamcorper malesuada proin libero nunc consequat interdum. Sit amet consectetur adipiscing elit pellentesque habitant morbi tristique. Nisi quis eleifend quam adipiscing vitae proin sagittis nisl rhoncus. Sit amet commodo nulla facilisi nullam vehicula. Vestibulum lorem sed risus ultricies tristique. Sit amet nisl suscipit adipiscing bibendum est ultricies integer quis. Tempus egestas sed sed risus pretium quam vulputate dignissim. Feugiat vivamus at augue eget arcu dictum. Consequat interdum varius sit amet mattis vulputate enim nulla. Sit amet risus nullam eget. Id neque aliquam vestibulum morbi blandit cursus risus at ultrices. Massa tempor nec feugiat nisl pretium fusce id velit. Vestibulum morbi blandit cursus risus. Id diam vel quam elementum pulvinar etiam non. Faucibus a pellentesque sit amet porttitor eget dolor morbi. Dictumst vestibulum rhoncus est pellentesque elit ullamcorper dignissim cras tincidunt. Lorem sed risus ultricies tristique. Viverra orci sagittis eu volutpat odio facilisis mauris.
        </p>
      </div>
    </div>
  )
}

ReactDOM.render(
  <App />,
  document.getElementById('app')
);
```

```css
body {
  width: 500px;
  height: 500px;
}

.btn {
  position: fixed;
  top: 0;
  z-index: 9999;
}

@keyframes fadeOut {
  0% {
    opacity: 1
  }
  100% {
    opacity: 0
  }
}

@keyframes fadeIn {
  0% {
    opacity: 0
  }
  100% {
    opacity: 1
  }
}

.no-content {
  opacity: 0;
  animation: fadeOut 500ms linear;
  position: absolute;
  top: 0;
  right: 0;
  width: 100%;
  height: 100%;
  background-color: black;
  color: white;
}

.content {
  opacity: 1;
  animation: fadeIn 1s linear;
  position: absolute;
  top: 0;
  right: 0;
  width: 100%;
  height: 100%;
  background-color: black;
  color: white;
}
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.0/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.0/umd/react-dom.production.min.js"></script>
<div id="app"></div>
```
< /p> < /div> < /div> ) } ReactDOM.render( < App / > , document.getElementById('app') ); ``` ```css body { width: 500px; height: 500px; } .btn { position: fixed; top: 0; z-index: 999; } .content { transition: 0.3s; opacity:0; position: absolute; top: 0; right: 0; width: 100%; height: 100%; background-color: black; color: white; } .show-content { opacity:1; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.0/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.0/umd/react-dom.production.min.js"></script> <div id="app"></div> ```
9,047
63,510,765
I am following with Datacamp's tutorial on using convolutional autoencoders for classification [here](https://www.datacamp.com/community/tutorials/autoencoder-classifier-python). I understand in the tutorial that we only need the autoencoder's head (i.e. the encoder part) stacked to a fully-connected layer to do the classification. After stacking, the resulting network (convolutional-autoencoder) is trained twice. The first by setting the encoder's weights to false as: ``` for layer in full_model.layers[0:19]: layer.trainable = False ``` And then setting back to true, and re-trained the network: ``` for layer in full_model.layers[0:19]: layer.trainable = True ``` I cannot understand why we are doing this twice. Anyone with experience working with conv-net or autoencoders?
2020/08/20
[ "https://Stackoverflow.com/questions/63510765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are trying to render an object, and when you convert Javascript objects to string it will convert to "`[object Object]`". I think you are misunderstanding what the "name" prop of the input field should do in this context, you don't actually need that. A better way of writing your handleChange function would be: ```js function handleChange(e) { setUserReview(e.target.value); } ``` This code fragment will set the state of `userReview` to the string that is currently in the textarea, instead of an object `{ userReview: string }` like before. Note that this is not using the `name` prop. There is also another error, `type="text-area"` is not a valid property of the `<input/>` tag. You should do like that: ``` <textarea placeholder="Leave a review..." onChange={handleChange} > {userReview} </textarea> ```
You're using invalid html: ``` <input type="text-area" /> ``` [Textarea](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/textarea) element is to be: ``` <textarea /> ``` --- Okay, I found the issue with the userReview state. You should use: ``` <textarea onChange...> {userReview.userReview} /* name of the userReview state */ </textarea> ```
9,048
2,893,193
I'd like to tell you what I've tried and then I'd really welcome any comments you can provide on how I can get PortAudio and PyAudio setup correctly! I've tried installing the stable and svn releases of PortAudio from [their website](http://www.portaudio.com/download.html) for my Core 2 Duo MacBook Pro running Snow Leopard. The stable release has a sizeof error that [can be fixed(?)](http://pujansrt.blogspot.com/2010/03/compiling-portaudio-on-snow-leopard-mac.html), but the daily svn release installs fine with `./configure && make && make install` (so this is what I'm using). The tests are compiled properly and I can get the binaries to produce output/can read microphone input. Ok, so then PyAudio has troubles. Installing from [source](http://people.csail.mit.edu/hubert/pyaudio/#sources) I get errors about not finding the libraries: ``` mwoods 13 pyaudio-0.2.3$ python setup.py build running build running build_py running build_ext building '_portaudio' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -DMACOSX=1 -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _portaudiomodule.c -o build/temp.macosx-10.6-universal-2.6/_portaudiomodule.o -fno-strict-aliasing _portaudiomodule.c:35:25: error: pa_mac_core.h: No such file or directory _portaudiomodule.c:679: error: expected specifier-qualifier-list before ‘PaMacCoreStreamInfo’ _portaudiomodule.c: In function ‘_pyAudio_MacOSX_hostApiSpecificStreamInfo_cleanup’: _portaudiomodule.c:690: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:691: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:692: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:695: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘channelMap’ _portaudiomodule.c:696: error: ‘_pyAudio_Mac_HASSI’ has 
no member named ‘channelMap’ _portaudiomodule.c:699: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘flags’ ... another 100 lines of this ... _portaudiomodule.c:2471: error: ‘paMacCoreMinimizeCPUButPlayNice’ undeclared (first use in this function) _portaudiomodule.c:2473: error: ‘paMacCoreMinimizeCPU’ undeclared (first use in this function) lipo: can't open input file: /var/folders/Qc/Qcl516fqHAWupTUV9BE9rU+++TI/-Tmp-//cc7BqpBc.out (No such file or directory) error: command 'gcc-4.2' failed with exit status 1 ``` I can install PyAudio from [their .dmg installer](http://people.csail.mit.edu/hubert/pyaudio/#binaries), but it targets python2.5. If I copy all of the related contents of /Library/Python/2.5/site-packages/ to /Library/Python/2.6/site-packages/ (this includes PyAudio-0.2.3-py2.5.egg-info, \_portaudio.so, pyaudio.py, pyaudio.pyc, and pyaudio.pyo) then my python2.6 can recognize it. ``` In [1]: import pyaudio Please build and install the PortAudio Python bindings first. ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "/Library/Python/2.6/site-packages/pyaudio.py", line 103, in <module> sys.exit(-1) SystemExit: -1 Type %exit or %quit to exit IPython (%Exit or %Quit do so unconditionally). In [2]: ``` So this happens because `_portaudio` can't be imported. 
If I try to import that directly: ``` In [2]: import _portaudio ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> ImportError: /Library/Python/2.6/site-packages/_portaudio.so: no appropriate 64-bit architecture (see "man python" for running in 32-bit mode) ``` Ok, so if I `export VERSIONER_PYTHON_PREFER_32_BIT=yes` and then run python again (well, ipython I suppose), we can see it works but with consequences: ``` In [1]: import pyaudio In [2]: pyaudio Out[2]: <module 'pyaudio' from '/Library/Python/2.6/site-packages/pyaudio.pyc'> In [3]: import pylab ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/pylab.py", line 1, in <module> from matplotlib.pylab import * File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/__init__.py", line 129, in <module> from rcsetup import defaultParams, validate_backend, validate_toolbar File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/rcsetup.py", line 19, in <module> from matplotlib.colors import is_color_like File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/colors.py", line 52, in <module> import numpy as np File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/__init__.py", line 130, in <module> import add_newdocs File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/add_newdocs.py", line 9, in <module> from lib import add_newdoc File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/lib/__init__.py", line 4, in <module> from 
type_check import * File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/lib/type_check.py", line 8, in <module> import numpy.core.numeric as _nx File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/__init__.py", line 5, in <module> import multiarray ImportError: dlopen(/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/multiarray.so, 2): no suitable image found. Did find: /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/multiarray.so: mach-o, but wrong architecture ``` We can assume pylab was working before! I spent a while getting this far, but can someone help with this install or lend advice from a successful Snow Leopard install? Sorry for the long post, but I'm notorious for only giving partial information and I'm trying to fix that!
2010/05/23
[ "https://Stackoverflow.com/questions/2893193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51532/" ]
Thanks to PyAudio's author's speedy response to my inquiries, I now have a nicely installed copy. His directions are posted below for anyone who has similar issues. > > Hi Michael, > > > Try this: > > > 1) Make sure your directory layout is > like: > > > ./foo/pyaudio/portaudio-v19/ ./foo/pyaudio/ > > > 2) Build portaudio-v19 from sources, > as you have done > > > 3) cd ./foo/pyaudio/ 4) python > setup.py build --static-link > > > (See the comments at the top of > setup.py for more info on > --static-link) > > > If all goes well, inside > ./foo/pyaudio/build/lib.macosx-10.6-.../, > you'll find the built (fat) objects > comprising i386, ppc, and x86\_64 > binaries. You can also do a "python > setup.py install" if you like. > > > Best, Hubert > > >
I'm on Mac 10.5.8 Intel Core 2 Duo and hitting the same issue. The directory layout you need is ``` ./foo/pyaudio/portaudio-v19/ ./foo/pyaudio ``` The reason is that setup.py has the following: portaudio\_path = os.environ.get("PORTAUDIO\_PATH", "./portaudio-v19") Alternatively, you should be able to set the PORTAUDIO\_PATH env variable and get it to work.
9,049
70,872,795
I am new to C++ and know much more Python than C++, and I have to convert some code from C++ to Python. In the code to convert I found this statement: ``` p->arity = std::stoi(x, nullptr, 10); ``` I think for the sake of simplicity we can use ``` p->arity = x; /* or something with pointers; I'm really a noob at C++ but I think this is not important */ ``` In Python, does this mean something like ``` p[arity] = x ``` or similar? Or what? I'm a little lost with all these concepts that are new to me, like pointers and memory stuff, and now the -> operator. Thanks in advance.
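For reference, a hedged sketch (everything below is illustrative; `Node` and the values are invented, not from the original code) of how that C++ line usually maps to Python: `->` is just member access through a pointer, and since Python handles every object by reference, plain attribute access does the same job. `std::stoi(x, nullptr, 10)` parses a string as a base-10 integer, which in Python is `int(x, 10)`:

```python
# Illustrative only: 'Node' stands in for whatever struct/class *p points at.
class Node:
    def __init__(self):
        self.arity = 0

p = Node()
x = "42"

# Python equivalent of:  p->arity = std::stoi(x, nullptr, 10);
p.arity = int(x, 10)   # int(x) would do too; base 10 is the default
print(p.arity)         # 42
```

So `p[arity] = x` would be a dictionary lookup, which is a different thing; attribute assignment is the direct translation.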
2022/01/27
[ "https://Stackoverflow.com/questions/70872795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11579387/" ]
The API keys are only available in the deployed function, not in the React app. You can call a function from your React app which then calls the MailChimp API. This keeps your API key out of the client-side code, which keeps it secure. As the documentation says, you set the API keys with the CLI in the terminal ``` firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID" ``` and then you can use them in a Firebase function by calling ``` const apiKey = functions.config().someservice.key ```
Firebase can use Google Cloud Platform services, so you can integrate [GCP Secret Manager](https://cloud.google.com/secret-manager) into your functions. Secret Manager is a fully-managed, secure, and convenient storage system for such secrets. Developers have historically used environment variables or the filesystem to manage secrets in Cloud Functions, largely because integrating with Secret Manager used to require custom code. With this service you can store your raw keys (encrypted at rest) and retrieve them from your function code, with the added benefit that you can restrict access via [Cloud IAM](https://cloud.google.com/iam) and service accounts. It also lets you define which members of your project, or which service accounts, can access the API keys (secrets). Permissions on the secrets can be configured so that even developers cannot see production environment keys, by assigning [access permissions](https://cloud.google.com/secret-manager/docs/access-control) to secrets while still allowing the function to read them (because the service account associated with your function can read the secret). Other benefits include: * Zero code changes. Cloud Functions that already consume secrets via environment variables or files bundled with the source upload simply require an additional flag during deployment. The Cloud Functions service resolves and injects the secrets at runtime, and the plaintext values are only visible inside the process. * Easy environment separation. It's easy to use the same codebase across multiple environments, e.g. dev, staging, and prod, because the secrets are decoupled from the code and resolved at runtime. * Supports the 12-factor app pattern. Because secrets can be injected into environment variables at runtime, the native integration supports the 12-factor pattern while providing stronger security guarantees. * Centralized secret storage, access, and auditing. Leveraging Secret Manager as the centralized secrets-management solution enables easy management of access controls, auditing, and access logs. In this document you can find a [code example](https://cloud.google.com/secret-manager/docs/quickstart#secretmanager-quickstart-nodejs) in JS showing how to use GCP Secret Manager.
9,050
45,224,882
As I understand, the current java.net.URL handshake (for a GSS/Kerberos authentication mode) always entails a 401 as a first leg operation, which is kind of inefficient if we know the client and server are going to use GSS/Kerberos, right? Does anyone know if preemptive authentication (where you can present the token upfront like in the python one <https://github.com/requests/requests-kerberos#preemptive-authentication>) is available in the java world? A quick google points towards <https://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html> but the preemptive example seems to be for the Basic scheme alone. Thanks!
2017/07/20
[ "https://Stackoverflow.com/questions/45224882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/874076/" ]
I have faced the same issue and came to the same conclusion as you - preemptive SPNEGO authentication is supported neither in Oracle JRE's HttpURLConnection nor in Apache HTTP Components. I haven't checked other HTTP clients, but I'm almost sure it's the same there. I started working on an alternative SPNEGO client which can be used with any HTTP client - it's called [Kerb4J](https://github.com/bedrin/kerb4j) You can use it like this: ``` SpnegoClient spnegoClient = SpnegoClient.loginWithKeyTab("clientPrincipal", "C:/kerberos/clientPrincipal.keytab"); URL url = new URL("http://kerberized.service/helloworld"); URLConnection urlConnection = url.openConnection(); HttpURLConnection huc = (HttpURLConnection) urlConnection; SpnegoContext context = spnegoClient.createContext(url); huc.setRequestProperty("Authorization", context.createTokenAsAuthroizationHeader()); // Optional mutual authentication step String challenge = huc.getHeaderField("WWW-Authenticate").substring("Negotiate ".length()); byte[] decode = Base64.getDecoder().decode(challenge); context.processMutualAuthorization(decode, 0, decode.length); ```
After much investigation, it looks like preemptive kerberos authentication is not available in the default Hotspot java implementation. The http-components from Apache is also not able to help with this. However, the default implementation does have the ability to only send headers when the payload is potentially large as noted in the [Expect Header and 100-Continue response](https://www.rfc-editor.org/rfc/rfc7231#section-5.1.1) section. To enable this, we need to use [the fixed length streaming mode](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-) ([or other similar means](https://stackoverflow.com/a/41710065/874076)). But as noted in the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-), authentication and redirection cannot be handled automatically - we are again back to the original problem.
9,051
52,216,312
Data: ``` {"Survived":{"0":0,"1":1,"2":1,"3":1,"4":0,"5":0,"6":0,"7":0,"8":1,"9":1,"10":1,"11":1,"12":0,"13":0,"14":0,"15":1,"16":0,"17":1,"18":0,"19":1,"20":0,"21":1,"22":1,"23":1,"24":0,"25":1,"26":0,"27":0,"28":1,"29":0,"30":0,"31":1,"32":1,"33":0,"34":0,"35":0,"36":1,"37":0,"38":0,"39":1,"40":0,"41":0,"42":0,"43":1,"44":1,"45":0,"46":0,"47":1,"48":0,"49":0},"Pclass":{"0":3,"1":1,"2":3,"3":1,"4":3,"5":3,"6":1,"7":3,"8":3,"9":2,"10":3,"11":1,"12":3,"13":3,"14":3,"15":2,"16":3,"17":2,"18":3,"19":3,"20":2,"21":2,"22":3,"23":1,"24":3,"25":3,"26":3,"27":1,"28":3,"29":3,"30":1,"31":1,"32":3,"33":2,"34":1,"35":1,"36":3,"37":3,"38":3,"39":3,"40":3,"41":2,"42":3,"43":2,"44":3,"45":3,"46":3,"47":3,"48":3,"49":3},"Sex":{"0":"male","1":"female","2":"female","3":"female","4":"male","5":"male","6":"male","7":"male","8":"female","9":"female","10":"female","11":"female","12":"male","13":"male","14":"female","15":"female","16":"male","17":"male","18":"female","19":"female","20":"male","21":"male","22":"female","23":"male","24":"female","25":"female","26":"male","27":"male","28":"female","29":"male","30":"male","31":"female","32":"female","33":"male","34":"male","35":"male","36":"male","37":"male","38":"female","39":"female","40":"female","41":"female","42":"male","43":"female","44":"female","45":"male","46":"male","47":"female","48":"male","49":"female"},"Age":{"0":22.0,"1":38.0,"2":26.0,"3":35.0,"4":35.0,"5":28.0,"6":54.0,"7":2.0,"8":27.0,"9":14.0,"10":4.0,"11":58.0,"12":20.0,"13":39.0,"14":14.0,"15":55.0,"16":2.0,"17":28.0,"18":31.0,"19":28.0,"20":35.0,"21":34.0,"22":15.0,"23":28.0,"24":8.0,"25":38.0,"26":28.0,"27":19.0,"28":28.0,"29":28.0,"30":40.0,"31":28.0,"32":28.0,"33":66.0,"34":28.0,"35":42.0,"36":28.0,"37":21.0,"38":18.0,"39":14.0,"40":40.0,"41":27.0,"42":28.0,"43":3.0,"44":19.0,"45":28.0,"46":28.0,"47":28.0,"48":28.0,"49":18.0},"SibSp":{"0":1,"1":1,"2":0,"3":1,"4":0,"5":0,"6":0,"7":3,"8":0,"9":1,"10":1,"11":0,"12":0,"13":1,"14":0,"15":0,"16":4,"17":0,"18":1,"19":0,"20":
0,"21":0,"22":0,"23":0,"24":3,"25":1,"26":0,"27":3,"28":0,"29":0,"30":0,"31":1,"32":0,"33":0,"34":1,"35":1,"36":0,"37":0,"38":2,"39":1,"40":1,"41":1,"42":0,"43":1,"44":0,"45":0,"46":1,"47":0,"48":2,"49":1},"Parch":{"0":0,"1":0,"2":0,"3":0,"4":0,"5":0,"6":0,"7":1,"8":2,"9":0,"10":1,"11":0,"12":0,"13":5,"14":0,"15":0,"16":1,"17":0,"18":0,"19":0,"20":0,"21":0,"22":0,"23":0,"24":1,"25":5,"26":0,"27":2,"28":0,"29":0,"30":0,"31":0,"32":0,"33":0,"34":0,"35":0,"36":0,"37":0,"38":0,"39":0,"40":0,"41":0,"42":0,"43":2,"44":0,"45":0,"46":0,"47":0,"48":0,"49":0},"Fare":{"0":7.25,"1":71.2833,"2":7.925,"3":53.1,"4":8.05,"5":8.4583,"6":51.8625,"7":21.075,"8":11.1333,"9":30.0708,"10":16.7,"11":26.55,"12":8.05,"13":31.275,"14":7.8542,"15":16.0,"16":29.125,"17":13.0,"18":18.0,"19":7.225,"20":26.0,"21":13.0,"22":8.0292,"23":35.5,"24":21.075,"25":31.3875,"26":7.225,"27":263.0,"28":7.8792,"29":7.8958,"30":27.7208,"31":146.5208,"32":7.75,"33":10.5,"34":82.1708,"35":52.0,"36":7.2292,"37":8.05,"38":18.0,"39":11.2417,"40":9.475,"41":21.0,"42":7.8958,"43":41.5792,"44":7.8792,"45":8.05,"46":15.5,"47":7.75,"48":21.6792,"49":17.8},"Embarked":{"0":"S","1":"C","2":"S","3":"S","4":"S","5":"Q","6":"S","7":"S","8":"S","9":"C","10":"S","11":"S","12":"S","13":"S","14":"S","15":"S","16":"Q","17":"S","18":"S","19":"C","20":"S","21":"S","22":"Q","23":"S","24":"S","25":"S","26":"C","27":"S","28":"Q","29":"S","30":"C","31":"C","32":"Q","33":"S","34":"C","35":"S","36":"C","37":"S","38":"S","39":"C","40":"S","41":"S","42":"C","43":"C","44":"Q","45":"S","46":"Q","47":"Q","48":"C","49":"S"},"Sex_Code":{"0":1,"1":0,"2":0,"3":0,"4":1,"5":1,"6":1,"7":1,"8":0,"9":0,"10":0,"11":0,"12":1,"13":1,"14":0,"15":0,"16":1,"17":1,"18":0,"19":0,"20":1,"21":1,"22":0,"23":1,"24":0,"25":0,"26":1,"27":1,"28":0,"29":1,"30":1,"31":0,"32":0,"33":1,"34":1,"35":1,"36":1,"37":1,"38":0,"39":0,"40":0,"41":0,"42":1,"43":0,"44":0,"45":1,"46":1,"47":0,"48":1,"49":0},"Embarked_Code":{"0":2,"1":0,"2":2,"3":2,"4":2,"5":1,"6":2,"7":2,"8":2,"9":
0,"10":2,"11":2,"12":2,"13":2,"14":2,"15":2,"16":1,"17":2,"18":2,"19":0,"20":2,"21":2,"22":1,"23":2,"24":2,"25":2,"26":0,"27":2,"28":1,"29":2,"30":0,"31":0,"32":1,"33":2,"34":0,"35":2,"36":0,"37":2,"38":2,"39":0,"40":2,"41":2,"42":0,"43":0,"44":1,"45":2,"46":1,"47":1,"48":0,"49":2}} ``` I'm playing around with the Titanic data-set from Kaggle. I'm trying to find out what percentage within each Pclass of men and women survived. Groupby example: ``` train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2).plot(kind='bar') ``` This shows me within each class how many men and women survived and how many did not, but it would visually be better to see what percentage of men and women survived within each class. Desired Result: ``` train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[1]/(train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[1]+train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[0]) ``` This looks like it gets the desired result, but I'm wondering if there is a much more pythonic way of doing this? like a normalize=True option would be slick. End goal: A bar chart of the ratio survived for each sex within each Pclass
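Worth noting (not part of the original post): the idiomatic pandas spelling of "ratio survived per group" is `df.groupby(['Pclass', 'Sex'])['Survived'].mean()`, since the mean of a 0/1 column is exactly the survival fraction, and `.unstack().plot(kind='bar')` then gives the chart. A pure-stdlib sketch of the same computation on a few invented rows, to show what that mean is doing:

```python
from collections import defaultdict

# Tiny invented sample in the same shape as the question's DataFrame.
rows = [
    {"Pclass": 3, "Sex": "male",   "Survived": 0},
    {"Pclass": 3, "Sex": "male",   "Survived": 1},
    {"Pclass": 1, "Sex": "female", "Survived": 1},
    {"Pclass": 1, "Sex": "female", "Survived": 1},
    {"Pclass": 1, "Sex": "female", "Survived": 0},
]

totals = defaultdict(int)
survived = defaultdict(int)
for r in rows:
    key = (r["Pclass"], r["Sex"])
    totals[key] += 1
    survived[key] += r["Survived"]

# Survival fraction per (Pclass, Sex) -- what groupby(...).mean() computes.
rates = {k: survived[k] / totals[k] for k in totals}
print(rates)
```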
2018/09/07
[ "https://Stackoverflow.com/questions/52216312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7170271/" ]
Try this:
```
@Override
public void onValidationFailed(View failedView, Rule<?> failedRule) {
    String message = failedRule.getFailureMessage();

    if (failedView instanceof EditText) {
        failedView.requestFocus();
        if (!TextUtils.isEmpty(message)) {
            ((EditText) failedView).setError(message);
        }
    } else {
        Toast.makeText(this, "Record Not Saved", Toast.LENGTH_SHORT).show();
    }
}
```
> > I found the problem: > > > ``` android:theme="@style/TextLabel" ``` > > Had to create a theme first and then a style and use it like: > > > ``` <style name="TextLabel" parent="BellicTheme"> ``` > > Thanks everyone > > >
9,054
44,550,192
Suppose I have the following dict... ``` sample = { 'a' : 100, 'b' : 3, 'e' : 42, 'c' : 250, 'f' : 42, 'd' : 42, } ``` I want to sort this dict with the highest order sort being by value and the lower order sort being by key. The key-value pairs of the result would be this ... ``` ( ('b', 3), ('d', 42), ('e', 42), ('f', 42), ('a', 100), ('c', 250) ) ``` I already know how to do this by writing several lines of python code. However, I'm looking for a python one-liner that will perform this sort, possibly using a comprehension or one or more of python's functional programming constructs. Is such a one-liner even possible in python?
2017/06/14
[ "https://Stackoverflow.com/questions/44550192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800838/" ]
You can define a lambda that uses both the value and key. ``` sorted(sample.items(), key=lambda x: (x[1], x[0])) ```
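A self-contained version using the question's data (assuming Python 3): the two-element key tuple makes the value the primary sort key and the dict key the tie-breaker:

```python
sample = {'a': 100, 'b': 3, 'e': 42, 'c': 250, 'f': 42, 'd': 42}

# Sort by value first, then by key for equal values.
result = tuple(sorted(sample.items(), key=lambda kv: (kv[1], kv[0])))
print(result)
# (('b', 3), ('d', 42), ('e', 42), ('f', 42), ('a', 100), ('c', 250))
```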
You can use the operator module: ``` import operator sample = { 'a' : 100, 'b' : 3, 'e' : 42, 'c' : 250, 'f' : 42, 'd' : 42, } sorted_by_value = tuple(sorted(sample.items(), key=operator.itemgetter(1))) sorted_by_key = tuple(sorted(sample.items(), key=operator.itemgetter(0))) ``` sorted\_by\_value: ``` (('b', 3), ('e', 42), ('d', 42), ('f', 42), ('a', 100), ('c', 250)) ``` sorted\_by\_key: ``` (('a', 100), ('b', 3), ('c', 250), ('d', 42), ('e', 42), ('f', 42)) ```
9,055
37,000,231
I have a Pandas DataFrame as follows: ```python In [28]: df = pd.DataFrame({'A':['CA', 'FO', 'CAP', 'CP'], 'B':['Name1', 'Name2', 'Name3', 'Name4'], 'C':['One', 'Two', 'Other', 'Some']}) In [29]: df Out[29]: A B C 0 CA Name1 One 1 FO Name2 Two 2 CAP Name3 Other 3 CP Name4 Some ``` I am trying to count all records in column A with values of `'CA'` and `'CP'`; to do this I am executing the following: ```python In [30]: len(df.groupby('A').filter(lambda x: x['A'] == 'CA')) Out[30]: 1 ``` Is there a way to get both counts in a single statement? Because if I try to do something like this: ```python In [32]: len(df.groupby('A').filter(lambda x: x['A'] == 'CA' or ....: x['A'] == 'CP')) ``` I am getting this error: ```python ValueError Traceback (most recent call last) <ipython-input-32-111c3fde30f2> in <module>() ----> 1 len(df.groupby('A').filter(lambda x: x['A'] == 'CA') or 2 x['A'] == 'CP') c:\python27\lib\site-packages\pandas\core\generic.pyc in __nonzero__(self) 885 raise ValueError("The truth value of a {0} is ambiguous. " 886 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." --> 887 .format(self.__class__.__name__)) 888 889 __bool__ = __nonzero__ ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ```
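A side note not in the original thread: the pandas error comes from using Python's `or` on Series objects; elementwise conditions need `|` or, simpler, `Series.isin`. `df['A'].isin(['CA', 'CP']).sum()` gives the combined count directly. A pure-stdlib sketch of the same count (no pandas needed to run it):

```python
# Stdlib equivalent of df['A'].isin(['CA', 'CP']).sum()
col_a = ['CA', 'FO', 'CAP', 'CP']   # column A from the question
wanted = {'CA', 'CP'}

count = sum(value in wanted for value in col_a)
print(count)   # 2 -- 'CAP' is not counted: membership is exact, not a prefix match
```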
2016/05/03
[ "https://Stackoverflow.com/questions/37000231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3908401/" ]
You explicitly set the input's ID to "TextBox" but you're trying to retrieve it as `<%= TextBox.ClientID %>`; use "TextBox" as the ID (and better, give the ID a more meaningful name). ClientID is used for ASP.NET controls, which have their IDs autogenerated by the ASP.NET runtime.
Turns out I was referencing the wrong control. Referencing the correct control in the script solved my problem. The rest of the code is totally usable. Referencing a control with `#` in the call to the ID is required for jQuery functions but not plain Javascript. `<%= TextBox.ClientID %>` will work even for HTML inputs where the runat attribute is set to server. This will provide the correct ID when using master pages.
9,056
10,414,210
Python 2.6 introduced a `next` function. Why was this necessary? One could always type `obj.next()` instead of `next(obj)`. Is the latter more `pythonic`?
2012/05/02
[ "https://Stackoverflow.com/questions/10414210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552671/" ]
> > [`next(iterator[, default])`](http://docs.python.org/2/library/functions.html#next) > > > Retrieve the next item from the *iterator* by calling its `next()` (`__next__()` in Python 3) method. If *default* is given, it is returned if the iterator is exhausted, otherwise `StopIteration` is raised. > > > You get the `default` option.
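A quick runnable demonstration of the `default` argument described above (Python 3 spelling, where the method is `__next__` but the builtin behaves the same):

```python
it = iter([10, 20])

print(next(it))             # 10
print(next(it))             # 20
print(next(it, 'done'))     # 'done' -- iterator exhausted, default returned

try:
    next(it)                # no default given, so StopIteration propagates
    raised = False
except StopIteration:
    raised = True
print(raised)               # True
```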
Apart from the obvious additional functionality, it also looks better when used together with generator expressions. Compare ``` (x for x in lst if x > 2).next() ``` to ``` next(x for x in lst if x > 2) ``` The latter is a lot more consistent with the rest of Python's style, IMHO.
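A runnable version of the comparison (Python 3 spelling; note that when you also pass a default, the generator expression needs its own parentheses):

```python
lst = [1, 2, 3, 4]

# First element greater than 2.
first_match = next(x for x in lst if x > 2)
print(first_match)   # 3

# With a default, a missing match no longer raises StopIteration.
no_match = next((x for x in lst if x > 99), None)
print(no_match)      # None
```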
9,057
28,242,398
I am trying to create an algorithm in Python 2.7.9 which can be viewed below: ![enter image description here](https://i.stack.imgur.com/4qsGy.gif) This equates to: `10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))` When I try to solve it in python with the following code: ``` from __future__ import division import math y = 66 x = "%0.2f" % (10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) print x ``` I receive the following error: `TypeError: 'int' object is not callable` Why is this? I have another algorithm below which works just fine: ``` x = "%0.2f" % (-5/4*(-463 + math.sqrt(1216881 - 16000 *y))) ```
2015/01/30
[ "https://Stackoverflow.com/questions/28242398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4241308/" ]
You are missing the multiplication operator below: ``` x = "%0.2f" % (10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) ^ Need to add '*' ```
Where is the multiplication operator? ``` x = "%0.2f" % (10/3 * (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) ``` Tip: whenever you get `TypeError: 'int' object is not callable`, it means you have something like a number followed immediately by an opening parenthesis. Watch out for that, and debugging will be a piece of cake.
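To make the failure concrete, a small runnable sketch (not the original script; under Python 3, `10/3` is a float, so the message reads `'float' object is not callable` rather than `'int'`):

```python
from math import sqrt

y = 66
caught = False
try:
    # Missing '*': this tries to *call* the number produced by 10/3.
    10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))
except TypeError as exc:
    caught = True
    print(exc)          # ... object is not callable

# With the multiplication operator in place it evaluates normally.
value = 10/3 * (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))
print("%0.2f" % value)
```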
9,060
28,064,563
I am using distutils (setup.py) to create rpm packages from my Python projects. Now, one of my projects which had a very specific task (say png-creation) has been moved into a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit)? 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios, of which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manager removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ARI uses a subscription based model for events. Quoting from the documentation on the [wiki](https://wiki.asterisk.org/wiki/display/AST/Introduction+to+ARI+and+Channels): > > Resources in Asterisk do not, by default, send events about themselves to a connected ARI application. In order to get events about resources, one of three things must occur: > > > 1. The resource must be a channel that entered into a Stasis dialplan application. A subscription is implicitly created in this case. The > subscription is implicitly destroyed when the channel leaves the > Stasis dialplan application. > 2. While a channel is in a Stasis dialplan application, the channel may interact with other resources - such as a bridge. While channels > interact with the resource, a subscription is made to that resource. > When no more channels in a Stasis dialplan application are interacting > with the resource, the implicit subscription is destroyed. > 3. At any time, an ARI application may make a subscription to a resource in Asterisk through application operations. While that > resource exists, the ARI application owns the subscription. > > > So, the reason you get events about a channel over your ARI WebSocket is because it went into the Stasis dialplan application. That isn't, however, the only way to get events. If you're interested in events from other event sources, you can subscribe to those resources using the [applications](https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+Applications+REST+API#Asterisk13ApplicationsRESTAPI-subscribe) resource. For example, if I wanted to receive all events that were in relation to PJSIP endpoint "Alice", I would subscribe using the following: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP%2FAlice ``` Note that subscriptions to endpoints implicitly subscribe you to all channels that are created for that endpoint. 
If you want to subscribe to all endpoints of a particular technology, you can also subscribe to the resource itself: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP ```
For more clarity regarding what Matt Jordan has already provided, here's an example of doing what he suggests with [ari-py](https://github.com/asterisk/ari-py): ``` import ari import logging logging.basicConfig(level=logging.ERROR) client = ari.connect('http://localhost:8088', 'username', 'password') postRequest=client.applications.subscribe(applicationName=["NameOfAppThatWillReapThisEvent-ThisAppShouldBeRunning"], eventSource="endpoint:PJSIP/alice") print postRequest ```
9,061
40,282,812
I have data in MongoDB, and I want to retrieve all the values of the key `"category"` using Python code. I have tried several ways, but in every case I have to give the "value" to retrieve. Any suggestions would be appreciated. ``` { id = "my_id1" tags: [tag1, tag2, tag3], category: "movie", }, { id = "my_id2" tags: [tag3, tag6, tag9], category: "tv", }, { id = "my_id3" tags: [tag2, tag6, tag8], category: "movie", } ``` I want the output as ``` category: "movie" category: "tv" category: "movie" ```
2016/10/27
[ "https://Stackoverflow.com/questions/40282812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6858122/" ]
This should work: ``` db.test.find({},{"category":1}); ```
Pymongo's `distinct()` method returns a list of the distinct (de-duplicated) values associated with a key across all documents in a collection. The following code: ``` db.collection.distinct('category') ``` should return the following list: ``` ['movie', 'tv'] ``` If you need one value per document, duplicates included, use a projection with `find()` instead of `distinct()`.
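Note that `distinct()` de-duplicates, while the question asks for one value per document. A hedged in-memory sketch of the difference (no MongoDB required to run it; the list of dicts stands in for the collection):

```python
# In-memory stand-in for the collection in the question.
docs = [
    {"id": "my_id1", "category": "movie"},
    {"id": "my_id2", "category": "tv"},
    {"id": "my_id3", "category": "movie"},
]

# What db.collection.distinct('category') gives back: unique values.
distinct_values = sorted({d["category"] for d in docs})
print(distinct_values)   # ['movie', 'tv']

# One value per document, roughly what a projection like
# db.collection.find({}, {'category': 1}) lets you collect.
per_document = [d["category"] for d in docs]
print(per_document)      # ['movie', 'tv', 'movie']
```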
9,066
62,657,673
Alright, so I've been working on some program and I need to send emails from my Gmail account. So I wrote the code (irrelevant here, it works); however, the mails are not sent until I approve the captcha: [captcha url](https://accounts.google.com/b/0/DisplayUnlockCaptcha) and that solution only works once. What should I do to make it work the way I want? Some details: Python 3, the smtp module, Ubuntu server built on AWS.
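A hedged sketch of the usual way around this: enable 2-step verification on the Google account and create an app password, then log in over SMTP with that instead of the account password, which avoids the DisplayUnlockCaptcha flow. The addresses below are placeholders, and the actual send is left commented out so the snippet runs offline:

```python
import smtplib  # used only in the commented-out send step
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@gmail.com"        # placeholder address
msg["To"] = "you@example.com"       # placeholder address
msg["Subject"] = "hello from the server"
msg.set_content("test body")

# The actual send needs an app password (generated in the Google
# account's security settings), not the normal account password:
# with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
#     server.login(msg["From"], "xxxx xxxx xxxx xxxx")
#     server.send_message(msg)

print(msg["Subject"])
```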
2020/06/30
[ "https://Stackoverflow.com/questions/62657673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13046336/" ]
``` using System.Collections; using System.Collections.Generic; using UnityEngine; public class CoroutineController : MonoBehaviour { static CoroutineController _singleton; static Dictionary<string,IEnumerator> _routines = new Dictionary<string,IEnumerator>(100); [RuntimeInitializeOnLoadMethod( RuntimeInitializeLoadType.BeforeSceneLoad )] static void InitializeType () { _singleton = new GameObject($"#{nameof(CoroutineController)}").AddComponent<CoroutineController>(); DontDestroyOnLoad( _singleton ); } public static Coroutine Start ( IEnumerator routine ) => _singleton.StartCoroutine( routine ); public static Coroutine Start ( IEnumerator routine , string id ) { var coroutine = _singleton.StartCoroutine( routine ); if( !_routines.ContainsKey(id) ) _routines.Add( id , routine ); else { _singleton.StopCoroutine( _routines[id] ); _routines[id] = routine; } return coroutine; } public static void Stop ( IEnumerator routine ) => _singleton.StopCoroutine( routine ); public static void Stop ( string id ) { if( _routines.TryGetValue(id,out var routine) ) { _singleton.StopCoroutine( routine ); _routines.Remove( id ); } else Debug.LogWarning($"coroutine '{id}' not found"); } public static void StopAll () => _singleton.StopAllCoroutines(); } ``` Then: ``` CoroutineController.Start( Test() ); ``` You can also stop specific coroutines here by giving them labels: ``` CoroutineController.Start( Test() , "just a test" ); // <few moments later, meme> CoroutineController.Stop( "just a test" ); ```
My solution to starting the Coroutines from places that can't do this is making a Singleton CoroutineManager. I then use this CoroutineManager to invoke these Coroutines from places like ScriptableObjects. You can also use it to cache WaitForEndOfFrame or WaitForFixedUpdate so you don't need to create new ones every time.
9,067
53,058,052
I am trying to create an executable Python file using PyInstaller, but while loading hooks it shows an error like this: ``` 24021 INFO: Removing import of PySide from module PIL.ImageQt 24021 INFO: Loading module hook "hook-pytz.py"... 24506 INFO: Loading module hook "hook-encodings.py"... 24600 INFO: Loading module hook "hook-pandas.py"... 25037 INFO: Loading module hook "hook-lib2to3.py"... 25131 INFO: Loading module hook "hook-lxml.etree.py"... 25131 INFO: Loading module hook "hook-pycparser.py"... 25396 INFO: Loading module hook "hook-setuptools.py"... 25506 WARNING: Hidden import "setuptools.msvc" not found! 25506 INFO: Loading module hook "hook-distutils.py"... 25521 INFO: Loading module hook "hook-nltk.py"... Unable to find "C:\nltk_data" when adding binary and data files. ``` I have tried copying nltk_data from AppData to the C drive, but I get the same error.
2018/10/30
[ "https://Stackoverflow.com/questions/53058052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9921123/" ]
I have been working on this issue for a few days now and don't have much hair left. For some reason nltk and pyinstaller do not work well together. So my first suggestion is to use something other than nltk if it is possible to code the solution without it. If you must use NLTK, I solved this by forcing the nltk\_data path into datas. 1. Locate your nltk\_data path. Mine was in C:\Users\user-name\AppData\Roaming\nltk\_data 2. In hook-nltk.py (within the PyInstaller directory) I commented out and added lines to look like this. ``` import nltk from PyInstaller.utils.hooks import collect_data_files datas = collect_data_files('nltk', False) ''' for p in nltk.data.path: datas.append((p, "nltk_data")) ''' datas.append(("C:\\Users\\nedhu\\AppData\\Roaming\\nltk_data", "nltk_data")) hiddenimports = ["nltk.chunk.named_entity"] ``` There is a deeper problem with pyinstaller looping through the datas list of paths, but this solution works as a patch.
I solved the problem by editing the PyInstaller nltk hook. After much research, I decided to go my own way with the code structure. I solved it by commenting out and adjusting these lines: `datas=[]` `'''for p in nltk.data.path: datas.append((p, "nltk_data"))'''` `hiddenimports = ["nltk.chunk.named_entity"]` What's more, you need to rename the file pyi\_rth\_\_nltk.cpython-36.pyc to pyi\_rth\_nltk.cpython-36.pyc (the original file name has one extra underscore). Mind the Python version in the cpython-36 part of the name.
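For reference, a stripped-down sketch of what such an edited hook boils down to (the path below is a placeholder for your own nltk\_data location, and the PyInstaller helper imports are omitted so the sketch stays self-contained):

```python
# Hypothetical, minimal stand-in for an edited hook-nltk.py.
# The nltk_data path is an assumption -- substitute your own location.
nltk_data_path = r"C:\Users\example\AppData\Roaming\nltk_data"

# Force the whole nltk_data directory into the bundle as one (src, dest) pair
datas = [(nltk_data_path, "nltk_data")]

# Keep the hidden import that the stock hook declares
hiddenimports = ["nltk.chunk.named_entity"]
```

The point is simply that `datas` ends up with a single, known-good `(source, destination)` pair instead of a loop over every search path.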
9,070
28,616,942
I am trying to upload video files to a Bucket in S3 server from android app using a signed URLs which is generated from server side (coded in python) application. We are making a PUT request to the signed URL but we are getting > > `connection reset by peer exception`. > > > But when I try the same URL on the POSTMAN REST CLIENT get a success message. Any help will be appreciated.
2015/02/19
[ "https://Stackoverflow.com/questions/28616942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2562861/" ]
Done this using [Retrofit](http://square.github.io/retrofit/) HTTP client library,it successfully uploaded file to Amazon s3 server. code: ``` public interface UploadService { String BASE_URL = "https://bucket.s3.amazonaws.com/folder"; /** * @param url :signed s3 url string after 'BASE_URL'. * @param file :file to upload,( usage: new TypedFile("mp4", videoFile);. * @param cb :callback. */ @PUT("/{url}") void uploadFile(@Path(value = "url", encode=false) String url, @Body() TypedFile file, Callback<String> cb); } ``` service class ``` public final class ServiceGenerator { private ServiceGenerator() { } public static <S> S createService(Class<S> serviceClass, String baseUrl) { return createService(serviceClass, baseUrl, null, null); } public static <S> S createService(Class<S> serviceClass, String baseUrl, final String accessToken, final String tokenType) { class MyErrorHandler implements ErrorHandler { @Override public Throwable handleError(RetrofitError cause) { return cause; } } Gson gson = new GsonBuilder() .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES) .registerTypeAdapter(Date.class, new DateTypeAdapter()) .disableHtmlEscaping() .create(); RestAdapter.Builder builder = new RestAdapter.Builder() .setEndpoint(baseUrl) .setClient(new OkClient(new OkHttpClient())) .setErrorHandler(new MyErrorHandler()) .setLogLevel(RestAdapter.LogLevel.FULL) .setConverter(new GsonConverter(gson)); if (accessToken != null) { builder.setRequestInterceptor(new RequestInterceptor() { @Override public void intercept(RequestFacade request) { request.addHeader("Accept", "application/json;versions=1"); request.addHeader("Authorization", tokenType + " " + accessToken); } }); } RestAdapter adapter = builder.build(); return adapter.create(serviceClass); } ``` and use: ``` UploadService uploadService = ServiceGenerator.createService(UploadService.class,UploadService.BASE_URL); uploadService.uploadFile(remUrl,typedFile,new CallbackInstance()); ```
Use a dynamic URL instead of providing a base URL: use @Url instead of @Path and pass a complete URI (encode=false is the default). E.g.: `@Multipart @PUT @Headers("x-amz-acl:public-read") Call<Void> uploadFile(@Url String url, @Header("Content-Type") String contentType, @Part MultipartBody.Part part);`
9,073
41,363,888
I could send mail using the following code ``` E:\Python\django-test\LYYDownloaderServer>python manage.py shell Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (In tel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from django.core.mail import send_mail >>> >>> send_mail( ... 'Subject here', ... 'Here is the message.', ... 'redstone-cold@163.com', ... ['2281570025@qq.com'], ... fail_silently=False, ... ) 1 >>> ``` According to the [doc](https://docs.djangoproject.com/en/1.10/howto/error-reporting/#server-errors): > > When DEBUG is False, Django will email the users listed in the ADMINS > setting whenever your code raises an unhandled exception and results > in an internal server error (HTTP status code 500). This gives the > administrators immediate notification of any errors. The ADMINS will > get a description of the error, a complete Python traceback, and > details about the HTTP request that caused the error. > > > but in my case, Django doesn't email reporting an internal server error (HTTP status code 500) [![enter image description here](https://i.stack.imgur.com/zQFGZ.png)](https://i.stack.imgur.com/zQFGZ.png) what's the problem? please help fix the problem settings.py ``` """ Django settings for LYYDownloaderServer project. Generated by 'django-admin startproject' using Django 1.9.1. For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os ADMINS = [('Philip', 'r234327894@163.com'), ('Philip2', '768799875@qq.com')] EMAIL_HOST = 'smtp.163.com' # 'localhost'#'smtp.139.com' # EMAIL_PORT = 25 # EMAIL_USE_TLS = True EMAIL_HOST_USER = 'r234327894@163.com' # '13529123633@139.com' EMAIL_HOST_PASSWORD = '******' # DEFAULT_FROM_EMAIL = 'r234327894@163.com' # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 's4(z8qzt$=x(2t(ok5bb58_!u==+x97t0vpa=*8bb_68baekkh' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = False ALLOWED_HOSTS = ['127.0.0.1']#, '.0letter.com' # Application definition INSTALLED_APPS = [ 'VideoParser.apps.VideoparserConfig', 'FileHost.apps.FilehostConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE_CLASSES = [ 'django.middleware.common.BrokenLinkEmailsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'LYYDownloaderServer.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'LYYDownloaderServer.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators 
AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/1.9/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ STATIC_URL = '/static/' ``` the start of views.py ``` from django.http import JsonResponse, HttpResponse import logging import m3u8 import os from VideoParser.parsers.CommonParsers import * import urllib.parse import hashlib from datetime import datetime, timedelta, date from django.views.decorators.csrf import csrf_exempt from django.db import IntegrityError from VideoParser.models import * from importlib import import_module # print('-------------views --------') FILES_DIR = 'files' # specialHostName2module = {'56': 'v56'} logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d %I:%M:%S %p', level=logging.ERROR, handlers=[logging.handlers.RotatingFileHandler(filename=os.path.join(FILES_DIR, 'LYYDownloaderServer.log'), maxBytes=1024 * 1024, backupCount=1)]) ... ```
2016/12/28
[ "https://Stackoverflow.com/questions/41363888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1485853/" ]
First, it doesn't matter if you were able to send the mail using the console, but if you received the mail. I assume you did. Second, it's best to try with exactly the same email address in the console as the one set in the `ADMINS`, just to be sure. Finally, the sender address might also matter. The default is "root@localhost", and while "root" is OK, "localhost" is not, and some mail servers may refuse the email. Specify another sender email address by setting the `SERVER_EMAIL` Django setting.
Djano sends admin emails on error using logging system. As I can see from your `views.py` you are changing logging settings. This can be the cause of the problem as you cleared that django admin handler `mail_admins`. For more information check [django documentation](https://docs.djangoproject.com/en/1.10/topics/logging/#topic-logging-parts-handlers)
9,074
70,453,702
I am a trader, I want to use the XTB API to access the account,T try to learn Python I found XTBApi I install it Windows (python3 -m venv env) but when I enter the command (. \ venv \ Scripts \ activate) it doesn't work: The specified path could not be found. What do I have to do? Thanks How can i convert linux script to windows script : git clone git@github.com:federico123579/XTBApi.git cd XTBApi/ python3 -m venv env . env/bin/activate pip install .
2021/12/22
[ "https://Stackoverflow.com/questions/70453702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17330089/" ]
`stringr` is fine, but here a very good solution exists in base R. ``` x <- head(state.name) x # [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California" "Colorado" substring(x, 5) # [1] "ama" "ka" "ona" "nsas" "fornia" "rado" ```
You may find this useful to rename columns. ```r library(dplyr) library(stringr) df %>% rename_with(str_sub, start = 5L) ``` If you don't want to do it for all of the columns, you can use the `.cols` argument. ```r # like this iris %>% rename_with(str_sub, start = 5L, .cols = starts_with("Sepal")) # or this iris %>% rename_with(str_sub, start = 5L, .cols = 1:2) # or this, and so on... iris %>% rename_with(str_sub, start = 5L, .cols = c("Petal.Length", "Species")) ```
9,076
36,730,812
I'm a newbie at Raspberry Pi and Python coding. I'm working on a school project. I've already looked for some tutorials and examples, but maybe I'm missing something. I want to build a web-server-based GPIO controller, and I'm using Flask for this. To get into it, I've started with this example: just turning the LED on and off by refreshing the page. The problem is, I can't see the response value on the web server side. It turns the LED on and off, but when I try to see the status online, I get an internal server error. I'm including the Python and HTML code. Can you help me solve the problem? ``` from flask import Flask from flask import render_template import RPi.GPIO as GPIO app=Flask(__name__) GPIO.setmode(GPIO.BCM) GPIO.setup(4, GPIO.OUT) GPIO.output(4,1) status=GPIO.HIGH @app.route('/') def readPin(): global status global response try: if status==GPIO.LOW: status=GPIO.HIGH print('ON') response="Pin is high" else: status=GPIO.LOW print('OFF') response="Pin is low" except: response="Error reading pin" GPIO.output(4, status) templateData= { 'title' : 'Status of Pin' + status, 'response' : response } return render_template('pin.html', **templateData) if __name__=="__main__": app.run('192.168.2.5') ``` And basically just this line is on my HTML page: ``` <h1>{{response}}</h1> ``` I think "response" doesn't get a value. What's wrong here?
2016/04/19
[ "https://Stackoverflow.com/questions/36730812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6227347/" ]
Firstly it helps to run it in debug mode: `app.run(debug=True)` This will help you track down any errors which are being suppressed. Next have a look at the line where you are building the title string: `'title' : 'Status of Pin' + status` If you enable the debug mode, then you should see something saying that an int/bool can't be converted to str implicitly. (Python doesn't know how to add a string and an int/bool). In order to fix this, you should explicitly cast status to a string: `'title' : 'Status of Pin' + str(status)` Or better yet: `'title' : 'Status of Pin: {}'.format(status)`
Your server was probably throwing an exception when trying to create your dictionary, therefore the templateData value was being sent as an empty value. Notice in this example, the TypeError which is thrown when trying to concatenate 2 variables of different type. Hence, wrapping your variable in the str(status) will cast the status variable to it's string repersentation before attempting to combine the variables. ``` [root@cloud-ms-1 alan]# cat add.py a = 'one' b = 2 print a + b [root@cloud-ms-1 alan]# python add.py Traceback (most recent call last): File "add.py", line 6, in <module> print a + b TypeError: cannot concatenate 'str' and 'int' objects [root@cloud-ms-1 alan]# cat add.py a = 'one' b = str(2) print a + b [root@cloud-ms-1 alan]# python add.py one2 ```
9,079
53,480,646
I wrote a Python script which performs a calculation every hour. I run this script with crontab, scheduled hourly. But there is one more thing to do: additionally, I should perform a calculation once a day, using the results evaluated every hour. In this context, I defined a thread function which checks whether the current time is equal to the specified time (15:00, once a day). If it is, the thread function is called and the calculation is made. What I want to ask here is: is this approach applicable? I mean, running the first script every hour using crontab, and calling the second function via the thread function once a day. Is there any other way of doing this?
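To make the check I describe concrete, here is a minimal sketch of the time comparison only (the 15:00 trigger comes from my setup; the surrounding thread loop is left out):

```python
import datetime

def is_daily_run_due(now, hour=15, minute=0):
    """Return True when the once-a-day slot (15:00 by default) is reached."""
    return now.hour == hour and now.minute == minute
```

Inside the thread this would be called in a loop (with a sleep between checks), running the daily calculation whenever it returns True.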
2018/11/26
[ "https://Stackoverflow.com/questions/53480646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8118659/" ]
Using jquery method as follows: ``` $("button[name^=t]").click(function(){ //process } ``` The above method will be invoked whenever a `button` whose name starts with `'t'` is clicked.
Run this snippet. You can read text using prev(). ```js $('.btn').on('click', function(){ alert($( this ).prev().val()); }) ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>1 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>2 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name="">3 <i class="fas fa-redo"></i> </button> </div> ```
9,080
50,397,060
I have dataframe like this: ``` >>df L1 L0 desc_L0 4956 10 Hi 1509 nan I am 1510 20 Here 1511 nan where r u ? ``` I want to insert a new column `desc_L1` when value for `L0` is null and same time move respective `desc_L0` value to `desc_L1`. Desired output: ``` L1 L0 desc_L0 desc_L1 4956 10 Hi nan 1509 nan nan I am 1510 20 Here nan 1511 nan nan where r u ? ``` How this can be done in pythonic way?
2018/05/17
[ "https://Stackoverflow.com/questions/50397060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7566673/" ]
First copy your series: ``` df['desc_L1'] = df['desc_L0'] ``` Then use a mask to update the two series: ``` mask = df['L1'].isnull() df.loc[~mask, 'desc_L1'] = np.nan df.loc[mask, 'desc_L0'] = np.nan ```
You can try so: ``` df['desc_L1'] = df['desc_L0'] df['desc_L1'] = np.where(df['L0'].isna(), df['desc_L0'], np.NaN) df['desc_L0'] = np.where(df['L0'].isna(), np.NaN, df['desc_L0']) ``` Input: ``` L0 desc_L0 0 10.0 hi 1 NaN I am 2 20.0 Here 3 NaN where are u? ``` Output: ``` L0 desc_L0 desc_L1 0 10.0 hi NaN 1 NaN NaN I am 2 20.0 Here NaN 3 NaN NaN where are u? ```
9,083
49,172,957
I'm learning Python and I want to check whether the second largest number in a list is duplicated. I've tried several ways, but couldn't get it working. I have also searched Google for this issue; I found several answers for getting/printing the 2nd largest number from a list, but I couldn't find any answer for checking whether the 2nd largest number is duplicated. Can anyone help me, please? Here are my sample lists: ``` list1 = [5, 6, 9, 9, 11] list2 = [8, 9, 13, 14, 14] ```
2018/03/08
[ "https://Stackoverflow.com/questions/49172957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544266/" ]
Here is a *1-liner*: ``` >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(sorted(list1)[-2]) > 1 True ``` or using [heapq](https://docs.python.org/3/library/heapq.html#heapq.nlargest) ``` >>> import heapq >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(heapq.nlargest(2, list1)[1]) > 1 True ```
This is a simple algorithm: 1. Makes values unique 2. Sort your list by max value 3. Takes the second element 4. Check how many occurences of this elemnt exists in list Code: ``` list1 = [5, 6, 9, 9, 11] list2 = [8, 9, 13, 14, 14] def check(data): # 1. Make data unique unique = list(set(data)) # 2. Sort by value sorted_data = sorted(unique, reverse=True) # 3. Takes the second element item = sorted_data[1] # 4. Check occurences if data.count(item) > 1: return True else: return False print(check(list1)) print(check(list2)) ``` Ouput ``` True False ```
9,084
53,569,854
I have two lists. I want to add up values in vp based on the list color. So I want this output: ``` total = [60,90,60] ``` because I want the code to compute the following: `total = [10+20+30, 40+50, 60]` ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] ``` I don't know how to do it. I began with something like this in Python 3: ``` for c, v in zip(color, Vp): total.append .... ``` Thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
You can play with some slicing through lists to gather elements from original list based on content in another list, sum it up and append to final list: ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] i = 0 for x in color: total.append(sum(vp[i:i+x])) i += x print(total) # [60, 90, 60] ```
``` total = [] index = 0 for c in color: inside = 0 for i in range(c): inside += vp[index + i] index += 1 total.append(inside) print(total) ```
9,087
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Don't use to much reflection.
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle = cp.ExStyle | 0x2000000; return cp; } } ```
9,094
11,070,920
I try to understand the regex in python. How can i split the following sentence with regular expression? ``` "familyname, Givenname A.15.10" ``` this is like the phonebook in python regex <http://docs.python.org/library/re.html>. The person maybe have 2 or more familynames and 2 or more givennames. After the familynames exist ', ' and after givennames exist ''. the last one is the office of the person. What i did until know is ``` import re file=open('file.txt','r') data=file.readlines() for i in range(90): person=re.split('[,\.]',data[i],maxsplit=2) print(person) ``` it gives me a result like this ``` ['Wegner', ' Sven Ake G', '15.10\n'] ``` i want to have something like ``` ['Wegner', ' Sven Ake', 'G', '15', '10']. any idea? ```
2012/06/17
[ "https://Stackoverflow.com/questions/11070920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364181/" ]
What you want to do is first split the family name by , `familyname, rest = text.split(',', 1)` Then you want to split the office with the first space from the right. `givenname, office = rest.rsplit(' ', 1)`
Assuming that family names don't have a comma, you can take them easily. Given names are sensible to dots. For example: ``` Harney, PJ A.15.10 Harvey, P.J. A.15.10 ``` This means that you should probably trim the rest of the record (family names are out) by a mask at the end (regex "maskpattern$").
9,104
6,947,210
how would I add set elements to a string in python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. when I print elements the string is empty. help
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
This should work: ``` sett = set(['1', '0']) elements = '' for i in sett: elements += i # elements = '10' ``` However, if you're just looking to get a string representation of each element, you can simply do this: ``` elements = ''.join(sett) # elements = '10' ```
Don't know what you mean with "add set elements" to a string. But anyway: Strings are immutable in Python, so you cannot add anything to them.
9,107
70,148,408
I am new to learning python but I can't seem to find out how to make the first part of the while loop run in the background while the second part is running. Putting in the input I set allows the first part to run twice but then pauses it. Here is the code ``` import time def money(): coins = 0 multiplyer = 1 cps = 1 while True: coins = coins + cps time.sleep(1) player_input = input("Input: ") if player_input == "coins": print(coins) player_input money() ``` Result: ``` Input: coins 1 Input: coins 2 Input: ``` My goal is to make the input print out 10 coins after 10 seconds, not 2 coins after two times of typing coins.
2021/11/28
[ "https://Stackoverflow.com/questions/70148408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16408072/" ]
I think you need to refactor your `onSubmit` function to make it `async` so `isSubmitting` will stay `true` during your `signIn` call. ```js const onSubmit = async (data) => { await signIn(data.email, data.password) .then((response) => console.log(response)) .catch((error) => { let message = null if (error.code === "auth/too-many-requests") { message = "Too many unsuccessful attempts, please reset password or try again later" } if (error.code === "auth/wrong-password") { message = "Incorrect password, please try again" } if (error.code === "auth/user-not-found") { message = "User does not exist, please try again" } resetField("password") setFirebaseError(message) }) } ``` [![Edit React Hook Form - Async Submit Validation (forked)](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/react-hook-form-async-submit-validation-forked-ec75r?fontsize=14&hidenavigation=1&theme=dark)
`onSubmit` needs to return a `Promise` for `formState` to update correctly. ``` const onSubmit = (payload) => { // You need to return a promise. return new Promise((resolve) => { setTimeout(() => resolve(), 1000); }); }; ``` References: * <https://react-hook-form.com/api/useform/formstate/> * <https://github.com/react-hook-form/react-hook-form/issues/1363#issuecomment-610681167>
9,116
17,099,808
I want to implement the following code in more of a pythonic way: ``` odd_rows = table.findAll('tr', attrs = {'class':'odd'}) #contain all tr tags even_rows = table.findAll('tr', attrs = {'class':'even'}) for rows in odd_rows: #rows equal 1 <tr> tag rows.findAll('td') #find all the <td> tags located in that one <tr> tag for row in rows.findAll('td'): #find one <td> tag print row #print that <td> tag for rows in even_rows: rows.findAll('td') for row in rows.findAll('td'): print row ``` the line `row.findAll('td')` shows my logic
2013/06/14
[ "https://Stackoverflow.com/questions/17099808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407162/" ]
Perhaps: ``` for row in table.findAll('tr', attrs = {'class':'odd'}) + table.findAll('tr', attrs = {'class':'even'}): for cell in row.findAll('td'): print cell ``` From a performance standpoint, your original code is better. Combining two lists does use resources. However, unless you are writing code for Google scale, I agree with this quote. > > Programs must be written for people to read, and only incidentally for machines to execute. > > - Hal Abelson, Structure and Interpretation of Computer Programs > > > There is more than one way to do it. Write code the way that you feel is most readable to you. The computer can figure out the details.
``` for cls in ("odd", "even"): for rows in table.findAll('tr', class_=cls): for row in rows.findAll('td'): print row ```
9,117
40,785,453
I have a huge file of data: **datatable.txt** ``` id1 england male id2 germany female ... ... ... ``` I have another list of data: **indexes.txt** ``` id1 id3 id6 id10 id11 ``` I want to extract all rows from **datatable.txt** where the id is included in **indexes.txt**. Is it possible to do this with awk/sed/grep? The file sizes are so large that using R or Python is not convenient.
2016/11/24
[ "https://Stackoverflow.com/questions/40785453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2662639/" ]
You just need a simple `awk` as ``` awk 'FNR==NR {a[$1]; next}; $1 in a' indexes.csv datatable.csv id1 england male ``` 1. `FNR==NR{a[$1];next}` will process on `indexes.csv` storing the entries of the array as the content of the first column till the end of the file. 2. Now on `datatable.csv`, I can match those rows from the first file by doing `$1 in a` which will give me all those rows in current file whose column `$1`'s value `a[$1]` is same as in other file.
maybe i overlook something, but i build two test files: ``` a1: id1 id2 id3 id6 id9 id10 ``` and ``` a2: id1 a 1 id2 b 2 id3 c 3 id4 c 4 id5 e 5 id6 f 6 id7 g 7 id8 h 8 id9 i 9 id10 j 10 ``` with `join a1 a2 2> /dev/null` I get all lines matched by column one.
9,118
49,760,858
I need to get data from this API <https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434> (sample node). This is my code (Python): ``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') print f.text ``` I want to save only protocol, responseTime and reputation on three subsequent lines of a txt file. It's supposed to look something like this: ``` protocol: 1.2.0 responseTime: 8157.912472694088 reputation: 1377 ``` Unfortunately, I'm stuck at this point and I cannot process this data in any way.
2018/04/10
[ "https://Stackoverflow.com/questions/49760858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9590393/" ]
``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') # Store content as json answer = f.json() # List of element you want to keep items = ['protocol', 'responseTime', 'reputation'] # Display for item in items: print(item + ':' + str(answer[item])) # If you want to save in a file with open("Output.txt", "w") as text_file: for item in items: print(item + ':' + str(answer[item]), file=text_file) ``` Hope it helps! Cheers
This is a very unrefined way to do what you want that you could build off of. You'd need to sub in a path/filename for text.txt. ``` import requests import json f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') t = json.loads(f.text) with open('text.txt', 'a') as mfile: mfile.write("protocol: {0}".format(str(t['protocol']))) mfile.write("responseTime: {0}".format(str(t['responseTime']))) mfile.write("reputation: {0}".format(str(t['reputation']))) ```
9,119
20,467,107
as we know, python has two built-in url lib: * `urllib` * `urllib2` and a third-party lib: * `urllib3` if my requirement is only to request a API by GET method, assume it return a JSON string. which lib I should use? do they have some duplicated functions? if the `urllib` can implement my require, but after if my requirements get more and more complicated, the `urllib` can not fit my function, I should import another lib at that time, but I really want to import only one lib, because I think import all of them can make me confused, I think the method between them are totally different. so now I am confused which lib I should use, I prefer `urllib3`, I think it can fit my requirement all time, how do you think?
2013/12/09
[ "https://Stackoverflow.com/questions/20467107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122265/" ]
As Alexander says in the comments, use `requests`. That's all you need.
I don't really know what you want to do, but you should try with [`requests`](http://requests.readthedocs.org/en/latest/). It's simple and intuitive.
9,121
60,388,686
Can someone please tell me how to downgrade Python 3.6.9 to 3.6.6 on Ubuntu ? I tried the below commands but didnot work 1) pip install python3.6==3.6.6 2) pip install python3.6.6
2020/02/25
[ "https://Stackoverflow.com/questions/60388686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12651893/" ]
First, verify that 3.6.6 is available: ```sh apt-cache policy python3.6 ``` If available: ```sh apt-get install python3.6=3.6.6 ``` If not available, you'll need to find a repo which has the version you desire and add it to your apt sources list, update, and install: ```sh echo "<repo url>" >> /etc/apt/sources.list.d/python.list apt-get update apt-get install python3.6=3.6.6 ``` I advise against downgrading your system python unless you're certain it's required. For running your application, install python3.6.6 alongside your system python, and better yet, build a virtual environment from 3.6.6: ```sh apt-get install virtualenv virtualenv -p <path to python3.6.6> <venv name> ```
One option is to use Anaconda, which allows you to easily use different Python versions on the same computer. [Here are the installation instructions for Anaconda on Linux](https://docs.anaconda.com/anaconda/install/linux/). Then create a Conda environment by running this command:

```
conda create --name myenv python=3.6.6
```

Obviously you can use a different name than "myenv". You can then activate the environment in any terminal window:

```
conda activate myenv
```

Then you can pip install any packages you want. Some basics of Anaconda environments can be found on the website's [getting started](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) page.
9,124
45,340,587
How do I use Python, mss, and OpenCV to capture my computer screen and save it as an array of images to form a movie? I am converting to gray-scale so each screenshot is a 2-D array, and I would like to store the sequence of 2-D screenshots in a 3-D array for viewing and processing. I am having a hard time constructing an array that saves the sequence of screenshots, as well as playing the sequence back in cv2. Thanks a lot

```
import time
import numpy as np
import cv2
import mss
from PIL import Image

with mss.mss() as sct:
    fps_list = []
    matrix_list = []
    monitor = {'top': 40, 'left': 0, 'width': 800, 'height': 640}
    timer = 0
    while timer < 100:
        last_time = time.time()

        # get raw pixels from screen and save to numpy array
        img = np.array(sct.grab(monitor))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # save img data as matrix
        matrix_list[timer,:,:] = img

        # display image
        cv2.imshow('Normal', img)

        fps = 1 / (time.time() - last_time)
        fps_list.append(fps)

        # press q to quit
        timer += 1
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

    # calculate fps
    fps_list = np.asarray(fps_list)
    print(np.average(fps_list))

# playback image movie from screen capture
t = 0
while t < 100:
    cv.imshow('Playback', img_matrix[t])
    t += 1
```
2017/07/27
[ "https://Stackoverflow.com/questions/45340587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8298595/" ]
A clue, perhaps: save the screenshots into a list and replay them later (you will have to adapt the sleep time):

```
import time

import cv2
import mss
import numpy

with mss.mss() as sct:
    monitor = {'top': 40, 'left': 0, 'width': 800, 'height': 640}
    img_matrix = []

    for _ in range(100):
        # Get raw pixels from screen and save to numpy array
        img = numpy.array(sct.grab(monitor))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Save img data as matrix
        img_matrix.append(img)

        # Display Image
        cv2.imshow('Normal', img)

        # Press q to quit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

    # Playback image movie from screencapture
    for img in img_matrix:
        cv2.imshow('Playback', img)

        # Press q to quit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break
```
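If the end goal is a single 3-D array rather than a Python list, the captured gray-scale frames can be stacked once capture is done, since they all share the same shape. A minimal sketch with dummy frames standing in for real screenshots (no screen capture required):

```python
import numpy as np

# Three fake 4x5 gray-scale "frames" standing in for screenshots;
# frame i is filled entirely with the value i.
frames = [np.full((4, 5), i, dtype=np.uint8) for i in range(3)]

# np.stack turns the list of 2-D frames into one (n_frames, height, width) array.
video = np.stack(frames, axis=0)
print(video.shape)  # (3, 4, 5)
```

Indexing `video[t]` then gives back frame `t`, which is exactly the playback loop the question was aiming for.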
Use `collections.OrderedDict()` to save the sequence:

```
import collections
....
fps_list = collections.OrderedDict()
...
fps_list[timer] = fps
```
9,125
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on, so with Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via

```
pip install MySQL-python
```

But I get this error

```
Downloading/unpacking MySQL-python
  Running setup.py egg_info for package MySQL-python
    The required version of distribute (>=0.6.28) is not available, and
    can't be installed while this script is running. Please install a
    more recent version first, using 'easy_install -U distribute'.

    (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg))
    Complete output from command python setup.py egg_info:
    The required version of distribute (>=0.6.28) is not available, and
    can't be installed while this script is running. Please install a
    more recent version first, using 'easy_install -U distribute'.

    (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg))
    ----------------------------------------
Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python
```

Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When doing this in a virtualenv:

```
pip install MySQL-python
```

I got

```
EnvironmentError: mysql_config not found
```

To get mysql_config, as Artem Fedosov said, first install

```
sudo apt-get install libmysqlclient-dev
```

then everything works fine in the virtualenv.
The suggested solutions didn't work out for me, because I still got compilation errors after running

```
sudo apt-get install libmysqlclient-dev
```

so I also had to run

```
apt-get install python-dev
```

Then everything worked fine for me with

```
pip install MySQL-python
```
9,126
65,990,047
I've been trying to make a keylogger but got this error in Python while running the script:

```
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
  File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
Traceback (most recent call last):
  File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 43, in <module>
    listener.join()
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 259, in join
    six.reraise(exc_type, exc_value, exc_traceback)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\six.py", line 702, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
  File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
  File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
[Finished in 0.614s]
```

I don't know how to fix this. I already installed pynput with `pip install pynput` and it still doesn't work :/

---

Code:

```
import pynput
from pynput.keyboard import Key, Listener

count = 0
key = []

def on_press(key):
    global keys, count
    keys.append(str(key))
    print("{0} pressed".format(key))

    if count >= 10:
        count = 0
        write_file(keys)
        keys = ()

def write_file(keys):
    with open("log.txt", "w" & "a") as f:
        for key in keys:
            k = str(key).replace("'", "")
            f.write(str(key))

def on_release(key):
    if key == Key.esc:
        return False

with Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```

---

Any help is appreciated, thanks.
2021/02/01
[ "https://Stackoverflow.com/questions/65990047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14644110/" ]
`keys` is literally not defined anywhere, so **I think you made a spelling mistake.** You need to replace `key = []` with `keys = []`.
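A stripped-down reproduction shows the error and the one-line fix, with a plain function standing in for the pynput callback:

```python
key = []  # the typo: the callback below expects a name called `keys`

def on_press_broken(k):
    keys.append(str(k))  # looks up the global `keys`, which does not exist yet

try:
    on_press_broken("a")
except NameError as e:
    print(e)  # e.g. "name 'keys' is not defined" (newer Pythons add a suggestion)

# The fix: define the name the callback actually uses.
keys = []

def on_press(k):
    keys.append(str(k))

on_press("a")
print(keys)  # ['a']
```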
I think you have a typo during initialization of the `keys` list. You have declared it as `key` but you need `keys`. You need:

```
keys = []
```

instead of:

```
key = []
```
9,136
4,522,733
Okay, so I'm admittedly a newbie to programming, but I can't determine how to get python v3.2 to generate a random positive integer between parameters I've given it. Just so you can understand the context, I'm trying to create a guessing-game where the user inputs parameters (say 1 to 50), and the computer generates a random number between the given numbers. The user would then have to guess the value that the computer has chosen. I've searched long and hard, but all of the solutions I can find only tell one how to get earlier versions of python to generate a random integer. As near as I can tell, v.3.2 changed how to generate and label a random integer. Anyone know how to do this?
2010/12/23
[ "https://Stackoverflow.com/questions/4522733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552834/" ]
Use [random.randrange](http://docs.python.org/dev/py3k/library/random.html#random.randrange) or [random.randint](http://docs.python.org/dev/py3k/library/random.html#random.randint) (Note the links are to the Python 3k docs).

```
In [67]: import random

In [69]: random.randrange(1, 10)
Out[69]: 8
```
You can use the `random` module:

```
import random

# Random integer >= 5 and < 10
random.randrange(5, 10)
```
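The only subtlety for the guessing game is the endpoints: `randint` includes both bounds, while `randrange` excludes the stop value. A quick sketch with the question's 1-to-50 range:

```python
import random

low, high = 1, 50

# randint is inclusive on both ends: any of 1..50 can come back.
secret = random.randint(low, high)
assert low <= secret <= high

# randrange excludes the stop value, so the equivalent call needs high + 1.
also_secret = random.randrange(low, high + 1)
assert low <= also_secret <= high

print(secret, also_secret)
```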
9,138
50,182,828
I've installed Pillow for Python 3 on my MacBook successfully, but I still can't use the PIL library. I tried uninstalling and installing it again. I've also tried `import Image` without `from PIL` as well. I do not have PIL installed, though. It says

> Could not find a version that satisfies the requirement PIL (from versions: )

```
from PIL import Image
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-4-b7f01c2f8cfe> in <module>()
----> 1 from PIL import Image

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/Image.py in <module>()
     58 # Also note that Image.core is not a publicly documented interface,
     59 # and should be considered private and subject to change.
---> 60 from . import _imaging as core
     61 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
     62     raise ImportError("The _imaging extension was built for another "

ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-darwin.so, 2): Symbol not found: _clock_gettime
  Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/.dylibs/liblzma.5.dylib (which was built for Mac OS X 10.12)
  Expected in: /usr/lib/libSystem.B.dylib
  in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/.dylibs/liblzma.5.dylib
```
2018/05/04
[ "https://Stackoverflow.com/questions/50182828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5991761/" ]
If you use Anaconda, you may try:

```
conda install Pillow
```

as this worked for me.
PIL is deprecated and has been replaced by Pillow. Pillow is the official fork of PIL. Install it using pip or however you usually install packages. <https://pillow.readthedocs.io/en/5.1.x/index.html>
9,139
28,233,090
I want to get the width % shown so that I can monitor the progress. How do I get the value using Selenium in Python? I don't know how to achieve this.
2015/01/30
[ "https://Stackoverflow.com/questions/28233090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4472647/" ]
A `CSS3`-only (actually moved into the `CSS4` specs) solution would be the `pointer-events` property, e.g.

```
.media__body:hover:after,
.media__body:hover:before {
    ...
    pointer-events: none;
}
```

supported in [all modern browsers](https://developer.mozilla.org/en-US/docs/Web/CSS/pointer-events), but only from `IE11` (on HTML/XML content).

Example: <http://jsfiddle.net/8uz7mx6h/>

---

Another solution, also supported on older IE, is to apply `position: relative` with a `z-index` to the paragraph, e.g.

```
.media__body p {
    margin-bottom: 1.5em;
    position: relative;
    z-index: 1;
}
```

Example: <http://jsfiddle.net/0nrbjwmg/2/>
```
.media__body p {
    margin-bottom: 1.5em;
    position: relative;
    z-index: 999;
}
```

Try this!
9,140
33,116,636
I am very new to regex, and to HTML in particular. I know that BeautifulSoup is a way to deal with HTML, but I would like to try regex. I need to search the text for HTML tags (I use `findall`). I tried multiple scenarios and examples on Stack Overflow but only got `[]` (an empty list). Here is what I tried:

```
#reHTML = r'(?:<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1>)'
#reHTML = r'\<p>(.*?)\</p>'
#reHTML = r'<p>(.*?)\</p>'
#reHTML = r'<raw[^>]*?>(.*?)</raw>'
reHTML = r'<p>(.*?)</p>'
#reHTML = r'<.*?>'
```

and:

```
rHTML = re.compile(reHTML, re.VERBOSE)
HTMLpara = rHTML.findall('http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/', re.IGNORECASE)
```

Obviously, I am missing something. Please help.
2015/10/14
[ "https://Stackoverflow.com/questions/33116636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5287011/" ]
You misunderstood [regex.findall(string[, pos[, endpos]])](https://docs.python.org/3.5/library/re.html?highlight=findall#re.regex.findall).

`HTMLpara = rHTML.findall('http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/', re.IGNORECASE)` means you are matching the `rHTML` pattern against the URL string itself (`"http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/"`), so you will get `[]`.

You'd better request the URL to get the data, then call `findall` to analyze the resulting string, as [below](http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/):

```
import urllib.request
import re

url = 'http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/'
req = urllib.request.Request(url)
resp = urllib.request.urlopen(req)
respData = resp.read()

paragraphs = re.findall(r'<p>(.*?)</p>', str(respData))
```
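It can also help to sanity-check the pattern against a canned HTML string before pointing it at a live page; note the non-greedy `(.*?)` and the `re.S` flag, which lets `.` cross newlines:

```python
import re

html = "<html><body><p>first paragraph</p>\n<p>second <b>bold</b></p></body></html>"

# findall returns the captured group for every non-overlapping match;
# the non-greedy (.*?) stops at the first closing </p>.
paragraphs = re.findall(r"<p>(.*?)</p>", html, re.S)
print(paragraphs)  # ['first paragraph', 'second <b>bold</b>']
```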
This will read in a webpage and find any instances of `<html>` or `</html>`. Is this the solution you are looking for?

```
import re
import urllib2

url = "http://stackoverflow.com"
f = urllib2.urlopen(url)
file = f.read()

p = re.compile("<html>|</html>")
instances = p.findall(file)
print instances
```

Output:

```
['<html>', '</html>']
```
9,141
15,279,942
Coming from Python, I could do something like this:

```
values = (1, 'ab', 2.7)
s = struct.Struct('I 2s f')
packet = s.pack(*values)
```

I can pack arbitrary types together very simply with Python. What is the standard way to do it in Objective-C?
2013/03/07
[ "https://Stackoverflow.com/questions/15279942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/299648/" ]
Try using something like this:

```
window.addEventListener('load', function() {
    document.getElementById("demo").innerHTML = "Works";
}, false);
```
Where did you get `document.onready` from? That would **never work**. To ensure the page is loaded, you could use `window.onload`:

```
window.onload = function () {
    document.getElementById("demo").innerHTML = "Works";
}
```
9,142