| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17-26k chars) | response_k (string, 26-26k chars) |
|---|---|---|---|---|---|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
Have you tried [Pyglet](http://www.pyglet.org/) with [PyOpenGL](http://pyopengl.sourceforge.net/)? The two go very well together. Wheaties' suggestion is quite good as well, although [PyOgre](http://www.ogre3d.org/wiki/index.php/PyOgre) also has a steep learning curve, as it is indeed higher-level. On another note, there is also [PyGame](http://www.pygame.org/), which is a Python wrapper for [SDL](http://www.libsdl.org/).
I personally prefer PyOpenGL, and you can use [WxPython](http://www.wxpython.org/) or [PyQT](http://wiki.python.org/moin/PyQt) to create your rendering context.
Also, there is [PyProcessing](http://code.google.com/p/pyprocessing/), which is still in its early stages, but very, very nifty.
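For context, a minimal sketch of a Pyglet window driven by its OpenGL bindings (assuming only that the `pyglet` package is installed; everything here is plain pyglet API, nothing project-specific):
```
import pyglet
from pyglet.gl import glClearColor

window = pyglet.window.Window(width=640, height=480, caption="Viewer")

@window.event
def on_draw():
    glClearColor(0.1, 0.1, 0.1, 1.0)  # dark background
    window.clear()                    # clear the colour buffer; draw geometry after this

pyglet.app.run()
```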
|
I have no personal experience with this, but I have heard some decent things about [Pyglet](http://www.pyglet.org/index.html)
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
You could try mlab / Mayavi (a wrapper for VTK). There are some examples here: <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html>
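As a rough illustration of what the mlab API looks like (a sketch, assuming Mayavi is installed; with recent versions the import is `from mayavi import mlab`):
```
import numpy as np
from mayavi import mlab

# scatter a random 3D point cloud, roughly what a reconstruction viewer needs
x, y, z = np.random.random((3, 1000))
mlab.points3d(x, y, z, scale_factor=0.01)
mlab.show()
```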
|
I have no personal experience with this, but I have heard some decent things about [Pyglet](http://www.pyglet.org/index.html)
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
[Panda3D](http://www.panda3d.org/) seems to be a nice 3D graphics library designed to be used from Python, although it is mostly game oriented. I've browsed the manuals a few times and it is very polished and of high quality; it has even been used in some big studios' games (like Disney's Pirates of the Caribbean Online, if I remember correctly).
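To give a feel for it, a minimal Panda3D sketch (assuming the `panda3d` package is installed; the model path is just the placeholder environment model shipped with the SDK samples):
```
from direct.showbase.ShowBase import ShowBase

class Viewer(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # load a stock model and attach it to the scene graph
        scene = self.loader.loadModel("models/environment")
        scene.reparentTo(self.render)

Viewer().run()
```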
|
I have no personal experience with this, but I have heard some decent things about [Pyglet](http://www.pyglet.org/index.html)
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
Have you tried [Pyglet](http://www.pyglet.org/) with [PyOpenGL](http://pyopengl.sourceforge.net/)? The two go very well together. Wheaties' suggestion is quite good as well, although [PyOgre](http://www.ogre3d.org/wiki/index.php/PyOgre) also has a steep learning curve, as it is indeed higher-level. On another note, there is also [PyGame](http://www.pygame.org/), which is a Python wrapper for [SDL](http://www.libsdl.org/).
I personally prefer PyOpenGL, and you can use [WxPython](http://www.wxpython.org/) or [PyQT](http://wiki.python.org/moin/PyQt) to create your rendering context.
Also, there is [PyProcessing](http://code.google.com/p/pyprocessing/), which is still in its early stages, but very, very nifty.
|
You could try mlab / Mayavi (a wrapper for VTK). There are some examples here: <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html>
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
Have you tried [Pyglet](http://www.pyglet.org/) with [PyOpenGL](http://pyopengl.sourceforge.net/)? The two go very well together. Wheaties' suggestion is quite good as well, although [PyOgre](http://www.ogre3d.org/wiki/index.php/PyOgre) also has a steep learning curve, as it is indeed higher-level. On another note, there is also [PyGame](http://www.pygame.org/), which is a Python wrapper for [SDL](http://www.libsdl.org/).
I personally prefer PyOpenGL, and you can use [WxPython](http://www.wxpython.org/) or [PyQT](http://wiki.python.org/moin/PyQt) to create your rendering context.
Also, there is [PyProcessing](http://code.google.com/p/pyprocessing/), which is still in its early stages, but very, very nifty.
|
I used OpenGL with C++ a few years back and found it quite low level. I have also used Java3D, which seemed to be a bit higher level. If you are not stuck on using Python, try Java3D; it is very simple to get up and running.
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
You could try mlab / Mayavi (a wrapper for VTK). There are some examples here: <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html>
|
I used OpenGL with C++ a few years back and found it quite low level. I have also used Java3D, which seemed to be a bit higher level. If you are not stuck on using Python, try Java3D; it is very simple to get up and running.
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
[Panda3D](http://www.panda3d.org/) seems to be a nice 3D graphics library designed to be used from Python, although it is mostly game oriented. I've browsed the manuals a few times and it is very polished and of high quality; it has even been used in some big studios' games (like Disney's Pirates of the Caribbean Online, if I remember correctly).
|
You could try mlab / Mayavi (a wrapper for VTK). There are some examples here: <http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html>
|
2,823,907
|
I am looking for a graphics library for 3D reconstruction research, on top of which I can develop my own viewer. OpenGL seems too low level, and I would have to reinvent the wheel everywhere. I also tried VTK (the Visualization Toolkit), but it seems so abstract that I would need to master many concepts before I could start. Is there any other graphics library? I prefer to program in Python, so I would like the library to have a Python wrapper. Something like O3D would be better, but O3D is for JavaScript and it seems Google has already stopped developing it.
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2823907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166482/"
] |
[Panda3D](http://www.panda3d.org/) seems to be a nice 3D graphics library designed to be used from Python, although it is mostly game oriented. I've browsed the manuals a few times and it is very polished and of high quality; it has even been used in some big studios' games (like Disney's Pirates of the Caribbean Online, if I remember correctly).
|
I used OpenGL with C++ a few years back and found it quite low level. I have also used Java3D, which seemed to be a bit higher level. If you are not stuck on using Python, try Java3D; it is very simple to get up and running.
|
21,327,768
|
Note: I was using the wrong source file for my data - once that was fixed, my issue was resolved. It turns out, there is no simple way to use `int(..)` on a string that is not an integer literal.
This is an example from the book "Machine Learning In Action", and I cannot quite figure out what is wrong. Here's some background:
```
from numpy import *

def file2matrix(filename):
    fr = open(filename)
    numberOfLines = len(fr.readlines())
    returnMat = zeros((numberOfLines, 3))
    classLabelVector = []
    fr = open(filename)
    index = 0
    for line in fr.readlines():
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index, :] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))  # Problem here.
        index += 1
    return returnMat, classLabelVector
```
The .txt file is as follows:
```
40920 8.326976 0.953952 largeDoses
14488 7.153469 1.673904 smallDoses
26052 1.441871 0.805124 didntLike
75136 13.147394 0.428964 didntLike
38344 1.669788 0.134296 didntLike
...
```
I am getting an error on the line `classLabelVector.append(int(listFromLine[-1]))` because, I believe, `int(..)` is trying to parse a String (i.e. `"largeDoses"`) that is not a literal integer. Am I missing something?
I looked up the documentation for `int()`, but it only seems to parse numbers and integer literals:
<http://docs.python.org/2/library/functions.html#int>
Also, an excerpt from the book explains this section as follows:
>
> Finally, you loop over all the lines in the file and strip off the return line character with line.strip(). Next, you split the line
> into a list of elements delimited by the tab character: '\t'. You take
> the first three elements and shove them into a row of your matrix, and
> you use the Python feature of negative indexing to get the last item
> from the list to put into classLabelVector. You have to explicitly
> tell the interpreter that you’d like the integer version of the last
> item in the list, or it will give you the string version. Usually,
> you’d have to do this, but NumPy takes care of those details for you.
>
>
>
|
2014/01/24
|
[
"https://Stackoverflow.com/questions/21327768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1884158/"
] |
Strings like "largeDoses" cannot be converted to integers. In folder `Ch02` of [that code project](https://github.com/pbharrin/machinelearninginaction), you have two data files; use the second one, `datingTestSet2.txt`, instead of loading the first.
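If you do want to keep the text-label file, one workaround (a sketch of my own, not from the book) is to map the labels to integers before calling `int`:
```
# hypothetical mapping for the labels that appear in datingTestSet.txt
label_to_int = {"didntLike": 1, "smallDoses": 2, "largeDoses": 3}

def parse_label(token):
    """Return the class label as an int, whether it is numeric or textual."""
    return int(token) if token.isdigit() else label_to_int[token]
```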
|
You can use [ast.literal\_eval](http://docs.python.org/2/library/ast.html#ast.literal_eval) and catch the `ValueError` raised for a malformed string (by the way, `int('9.4')` will also raise an exception).
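A small sketch of that approach (my own illustration, not code from the question or the book):
```
from ast import literal_eval

def to_int(token):
    """Convert a token to int, returning None for malformed strings."""
    try:
        return int(literal_eval(token))   # handles '3', '9.4', ...
    except (ValueError, SyntaxError):
        return None                       # e.g. 'largeDoses' is not a literal
```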
|
60,490,195
|
I am trying to write a small program using the AzureML Python SDK (v1.0.85) to register an Environment in AMLS and use that definition to construct a local Conda environment when experiments are run (for a pre-trained model). The code works fine for simple scenarios where all dependencies are loaded from Conda/public PyPI, but when I introduce a private dependency (e.g. a utils library) I get an InternalServerError with the message "Error getting recipe specifications".
The code I am using to register the environment is (after having authenticated to Azure and connected to our workspace):
```
environment_name = config['environment']['name']
py_version = "3.7"

conda_packages = ["pip"]
pip_packages = ["azureml-defaults"]
private_packages = ["./env-wheels/utils-0.0.3-py3-none-any.whl"]

print(f"Creating environment with name {environment_name}")
environment = Environment(name=environment_name)
conda_deps = CondaDependencies()

print(f"Adding Python version: {py_version}")
conda_deps.set_python_version(py_version)

for conda_pkg in conda_packages:
    print(f"Adding Conda dependency: {conda_pkg}")
    conda_deps.add_conda_package(conda_pkg)

for pip_pkg in pip_packages:
    print(f"Adding Pip dependency: {pip_pkg}")
    conda_deps.add_pip_package(pip_pkg)

for private_pkg in private_packages:
    print(f"Uploading private wheel from {private_pkg}")
    private_pkg_url = Environment.add_private_pip_wheel(workspace=ws, file_path=Path(private_pkg).absolute(), exist_ok=True)
    print(f"Adding private Pip dependency: {private_pkg_url}")
    conda_deps.add_pip_package(private_pkg_url)

environment.python.conda_dependencies = conda_deps
environment.register(workspace=ws)
```
And the code I am using to create the local Conda environment is:
```
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
print(f"Building environment...")
amls_environment.build_local(workspace=ws)
```
The exact error message being returned when `build_local(...)` is called is:
```
Traceback (most recent call last):
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 814, in build_local
    raise error
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 807, in build_local
    recipe = environment_client._get_recipe_for_build(name=self.name, version=self.version, **payload)
  File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\_restclient\environment_client.py", line 171, in _get_recipe_for_build
    raise Exception(message)
Exception: Error getting recipe specifications. Code: 500
: {
    "error": {
        "code": "ServiceError",
        "message": "InternalServerError",
        "detailsUri": null,
        "target": null,
        "details": [],
        "innerError": null,
        "debugInfo": null
    },
    "correlation": {
        "operation": "15043e1469e85a4c96a3c18c45a2af67",
        "request": "19231be75a2b8192"
    },
    "environment": "westeurope",
    "location": "westeurope",
    "time": "2020-02-28T09:38:47.8900715+00:00"
}

Process finished with exit code 1
```
Has anyone seen this error before or able to provide some guidance around what the issue may be?
|
2020/03/02
|
[
"https://Stackoverflow.com/questions/60490195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9821873/"
] |
The issue was with our firewall blocking the required requests between AMLS and the storage container (I presume to fetch the environment definitions / private wheels).
We resolved this by updating the firewall with appropriate ALLOW rules so that the AMLS service can contact and read from the attached storage container.
|
Assuming that you'd like to run the script on a remote compute target, my suggestion would be to pass the environment you just "got" to a `RunConfiguration`, then pass that to a `ScriptRunConfig`, `Estimator`, or a `PythonScriptStep`.
```py
from azureml.core import ScriptRunConfig
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
src = ScriptRunConfig(source_directory=project_folder, script='train.py')
# Set compute target to the one created in previous step
src.run_config.target = cpu_cluster.name
# Set environment
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
src.run_config.environment = amls_environment
run = experiment.submit(config=src)
run
```
Check out the rest of the notebook [here](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb).
If you're looking for a local run [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/using-environments/using-environments.ipynb) might help.
|
1,419,416
|
Please, could someone explain to me a few basic things about working with languages like C, especially on Windows?
1. If I want to use some other library, what do I need from the library? Header files (.h) and what else?
2. What is the difference between .dll and .dll.a? .dll and .lib? .dll and .exe? What is .def?
3. Does it matter how the library was compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?
4. To use another library, what is the preferred way: LoadLibrary() or #include <>?
5. There are some libraries which only provide the source code or a .dll - how do I use such libraries? Do I have to recompile them every time I rebuild my project?
6. How do I create one big .exe? Is this called "static linking"?
7. How do I include some random file into the .exe? Say a program icon or a start-up song?
8. How do I split my huge .c into smaller ones? Do I need to create a header file for every part, which I then include in the part with WinMain() or main()?
9. If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest
10. What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?
Well, so many questions... Thanks for any info!
|
2009/09/14
|
[
"https://Stackoverflow.com/questions/1419416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166939/"
] |
>
> 1: If I want to use some other library, what do I need from the library? Header files .h and ..?
>
>
>
... and, usually a `*.lib` file which you pass as an argument to your linker.
>
> 2: What is the difference between .dll and .dll.a.? .dll and .lib? .dll and .exe? What is .def?
>
>
>
This might be useful: [Static libraries, dynamic libraries, DLLs, entry points, headers … how to get out of this alive?](https://stackoverflow.com/questions/784781)
>
> 3: Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?
>
>
>
Yes, it matters. For interop between compilers, the normal way is to use a C-style (not C++-style) API, with well-defined parameter-passing conventions (e.g. `__stdcall`), or to use 'COM' interfaces.
>
> 4: To use another library, what is preferred way? LoadLibrary() or #include <>?
>
>
>
`#include` is for the compiler (e.g. so that it can compile calls to the library); and `LoadLibrary` (or, using a `*.lib` file) is for the run-time linker/loader (so that it can substitute the actual address of those library methods into your code): i.e. you need both.
>
> 5: There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project?
>
>
>
If it's only source then you can compile that source (once) into a library, and then (when you build your project) link to that library (without recompiling the library).
>
> 6: How do I create one big .exe? Is this called "static linking"?
>
>
>
Yes, compile everything and pass it all to the linker.
>
> 7: How to include some random file into .exe? Say a program icon or start-up song?
>
>
>
Define that in a Windows-specific 'resource file', which is compiled by the 'resource compiler'.
>
> 8: How do I split my huge .c into smaller ones? Do I need to create for every part a header file which then I include in the part with WinMain() or main()?
>
>
>
Yes.
>
> 9: If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest
>
>
>
I don't understand your question/example.
>
> 10: What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?
>
>
>
Yes.
|
In all seriousness, the place to go to learn how to run your local environment is the documentation for your local environment. After all we on not even know exactly what your environment *is*, much less have it in front of us.
But here are some answers:
`1.` You need the headers, and a linkable object of some kind. Or you need the source so that you can build these.
`3.` It matters that the library is in a format that your linker understands. In c++ and perhaps other languages, it also needs to understand the name mangling that was used.
`6.` Forcing all the library code to be included in the executable is, indeed, called "static linking".
`7.` There is [at least one StackOverflow question on "resource compilers"](https://stackoverflow.com/questions/381915/free-resource-editor-for-windows-rc-files).
|
1,419,416
|
Please, could someone explain to me a few basic things about working with languages like C, especially on Windows?
1. If I want to use some other library, what do I need from the library? Header files (.h) and what else?
2. What is the difference between .dll and .dll.a? .dll and .lib? .dll and .exe? What is .def?
3. Does it matter how the library was compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?
4. To use another library, what is the preferred way: LoadLibrary() or #include <>?
5. There are some libraries which only provide the source code or a .dll - how do I use such libraries? Do I have to recompile them every time I rebuild my project?
6. How do I create one big .exe? Is this called "static linking"?
7. How do I include some random file into the .exe? Say a program icon or a start-up song?
8. How do I split my huge .c into smaller ones? Do I need to create a header file for every part, which I then include in the part with WinMain() or main()?
9. If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest
10. What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?
Well, so many questions... Thanks for any info!
|
2009/09/14
|
[
"https://Stackoverflow.com/questions/1419416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166939/"
] |
*If I want to use some other library, what do I need from the library? Header files .h and ..?*
You need the header files (.h or .hpp for C/C++), although some languages don't require header files. You'll also need the .a, .so, .dll, .lib, .jar, etc. files. These files contain the machine code that your linker can link into your program. It goes without saying that the format of the library must be understood by your linker.
*What is the difference between .dll and .dll.a.? .dll and .lib? .dll and .exe? What is .def?*
.dll and .a are library files that contain code components which you can link into your own program. A .exe is your final program, into which the .a or .dll has already been linked.
*Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?*
Yes, it is important that the library you are using is compatible with your platform. Typically, Unix libraries will not run on Windows and vice versa; if you are using Java you are better off, since a .jar file will usually work on any platform with Java enabled (though versions matter).
*To use another library, what is preferred way? LoadLibrary() or #include <>?*
include is not a way to use a library; it is just a preprocessor directive telling your preprocessor to include an external source file in your current source file. This file can be any file, not just a .h, although usually it would be a .h or a .hpp.
You'll be better off leaving the decision about when to load a library to your runtime environment or your linker, unless you know for sure that loading a library at a particular point in time is going to add some value to your code. The performance cost and the exact method of doing this are platform dependent.
*There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project?*
If you have the source code, you'll need to recompile it every time you make a change to it.
However, if you have not changed the source of the library in any way, there is no need to recompile it. Build tools like Make are intelligent enough to make this decision for you.
*How do I create one big .exe? Is this called "static linking"?*
Creating a static .exe depends on the build tool you are using.
With gcc this would usually mean that you have to use the -static option:
gcc -static -o my.exe my.c
*How to include some random file into .exe? Say a program icon or start-up song?*
Nothing in programming is random. If it were, we would be in trouble. Again, the way you can play a song or display an icon depends on the platform you are using; on some platforms it may even be impossible to do so.
*How do I split my huge .c into smaller ones? Do I need to create for every part a header file which then I include in the part with WinMain() or main()?*
You'll need a header file with all your function prototypes, and you can split your program into several .c files that contain one or more functions. Your main file will include the header file. All source files need to be compiled individually and then linked into one executable. Typically you'll get a .o for every .c, and then you link all the .o files together to get a .exe.
*If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest*
Yes, one library may require another library; however, it is not advisable to package different libraries together. You may be violating the IPR, and also each library is usually a well-defined unit with a specific purpose, so combining them into one usually doesn't make much sense.
*What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?*
Again, this depends on the platform. Usually, on most OSes the memory will be recovered after the program dies, but on certain platforms, like an embedded system, it may be permanently lost.
It is always a good idea to clean up the resources your program has used.
|
In all seriousness, the place to go to learn how to run your local environment is the documentation for your local environment. After all we on not even know exactly what your environment *is*, much less have it in front of us.
But here are some answers:
`1.` You need the headers, and a linkable object of some kind. Or you need the source so that you can build these.
`3.` It matters that the library is in a format that your linker understands. In c++ and perhaps other languages, it also needs to understand the name mangling that was used.
`6.` Forcing all the library code to be included in the executable is, indeed, called "static linking".
`7.` There is [at least one StackOverflow question on "resource compilers"](https://stackoverflow.com/questions/381915/free-resource-editor-for-windows-rc-files).
|
1,419,416
|
Please, could someone explain to me a few basic things about working with languages like C, especially on Windows?
1. If I want to use some other library, what do I need from the library? Header files (.h) and what else?
2. What is the difference between .dll and .dll.a? .dll and .lib? .dll and .exe? What is .def?
3. Does it matter how the library was compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?
4. To use another library, what is the preferred way: LoadLibrary() or #include <>?
5. There are some libraries which only provide the source code or a .dll - how do I use such libraries? Do I have to recompile them every time I rebuild my project?
6. How do I create one big .exe? Is this called "static linking"?
7. How do I include some random file into the .exe? Say a program icon or a start-up song?
8. How do I split my huge .c into smaller ones? Do I need to create a header file for every part, which I then include in the part with WinMain() or main()?
9. If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest
10. What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?
Well, so many questions... Thanks for any info!
|
2009/09/14
|
[
"https://Stackoverflow.com/questions/1419416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/166939/"
] |
>
> 1: If I want to use some other library, what do I need from the library? Header files .h and ..?
>
>
>
... and, usually a `*.lib` file which you pass as an argument to your linker.
>
> 2: What is the difference between .dll and .dll.a.? .dll and .lib? .dll and .exe? What is .def?
>
>
>
This might be useful: [Static libraries, dynamic libraries, DLLs, entry points, headers … how to get out of this alive?](https://stackoverflow.com/questions/784781)
>
> 3: Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?
>
>
>
Yes, it matters. For interop between compilers, the normal way is to use a C-style (not C++-style) API, with well-defined parameter-passing conventions (e.g. `__stdcall`), or to use 'COM' interfaces.
>
> 4: To use another library, what is preferred way? LoadLibrary() or #include <>?
>
>
>
`#include` is for the compiler (e.g. so that it can compile calls to the library); and `LoadLibrary` (or, using a `*.lib` file) is for the run-time linker/loader (so that it can substitute the actual address of those library methods into your code): i.e. you need both.
>
> 5: There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project?
>
>
>
If it's only source then you can compile that source (once) into a library, and then (when you build your project) link to that library (without recompiling the library).
>
> 6: How do I create one big .exe? Is this called "static linking"?
>
>
>
Yes, compile everything and pass it all to the linker.
>
> 7: How to include some random file into .exe? Say a program icon or start-up song?
>
>
>
Define that in a Windows-specific 'resource file', which is compiled by the 'resource compiler'.
>
> 8: How do I split my huge .c into smaller ones? Do I need to create for every part a header file which then I include in the part with WinMain() or main()?
>
>
>
Yes.
>
> 9: If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest
>
>
>
I don't understand your question/example.
>
> 10: What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?
>
>
>
Yes.
|
*If I want to use some other library, what do I need from the library? Header files .h and ..?*
You need the header files (.h or .hpp for C/C++), although some languages don't require header files. You'll also need the .a, .so, .dll, .lib, .jar, etc. files. These files contain the machine code that your linker can link into your program. It goes without saying that the format of the library must be understood by your linker.
*What is the difference between .dll and .dll.a.? .dll and .lib? .dll and .exe? What is .def?*
.dll and .a are library files that contain code components which you can link into your own program. A .exe is your final program, into which the .a or .dll has already been linked.
*Does it matter how was the library compiled? I mean, is it possible to use, on Windows, a C++ library compiled by VC from within my C code compiled by MinGW?*
Yes, it is important that the library you are using is compatible with your platform. Typically, Unix libraries will not run on Windows and vice versa; if you are using Java you are better off, since a .jar file will usually work on any platform with Java enabled (though versions matter).
*To use another library, what is preferred way? LoadLibrary() or #include <>?*
include is not a way to use a library; it is just a preprocessor directive telling your preprocessor to include an external source file in your current source file. This file can be any file, not just a .h, although usually it would be a .h or a .hpp.
You'll be better off leaving the decision about when to load a library to your runtime environment or your linker, unless you know for sure that loading a library at a particular point in time is going to add some value to your code. The performance cost and the exact method of doing this are platform dependent.
*There are some libraries which only provide the source code or .dll - how to use such libraries? Do I have to recompile them every time I rebuild my project?*
If you have the source code, you'll need to recompile it every time you make a change to it.
However, if you have not changed the source of the library in any way, there is no need to recompile it. Build tools like Make are intelligent enough to make this decision for you.
*How do I create one big .exe? Is this called "static linking"?*
Creating a static .exe depends on the build tool you are using.
With gcc this would usually mean that you have to use the -static option:
gcc -static -o my.exe my.c
*How to include some random file into .exe? Say a program icon or start-up song?*
Nothing in programming is random. If it were, we would be in trouble. Again, the way you can play a song or display an icon depends on the platform you are using; on some platforms it may even be impossible to do so.
*How do I split my huge .c into smaller ones? Do I need to create for every part a header file which then I include in the part with WinMain() or main()?*
You'll need a header file with all your function prototypes, and you can split your program into several .c files that contain one or more functions. Your main file will include the header file. All source files need to be compiled individually and then linked into one executable. Typically you'll get a .o for every .c, and then you link all the .o files together to get a .exe.
*If there is a library which needs another library, is it possible to combine these two into one file? Say, python26.dll needs msvcr90.dll and Microsoft.VC90.CRT.manifest*
Yes, one library may require another library; however, it is not advisable to package different libraries together. You may be violating the IPR, and also each library is usually a well-defined unit with a specific purpose, so combining them into one usually doesn't make much sense.
*What happens if I don't free previously allocated memory? Is this going to be cleaned up if the program (process) dies?*
Again, this depends on the platform. Usually, on most OSes the memory will be recovered after the program dies, but on certain platforms, like an embedded system, it may be permanently lost.
It is always a good idea to clean up the resources your program has used.
|
20,303,558
|
Coming from this link: [Splitlines in Python a table with empty spaces](https://stackoverflow.com/questions/20252728/splitlines-in-python-a-table-with-empty-spaces)
It works well, but there is a problem when the size of the columns changes:
```
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd unknown /proc/1/cwd (readlink: Permission denied)
init 1 root rtd unknown /proc/1/root
```
Here the problem starts in the DEVICE or SIZE/OFF column, but in other situations it could happen in any column.
```
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 8,1 4096 2 /
init 1 root rtd DIR 8,1 4096 2 /
init 1 root txt REG 8,1 36992 139325 /sbin/init
init 1 root mem REG 8,1 14696 190970 /lib/libdl-2.11.3.so
init 1 root mem REG 8,1 1437064 190958 /lib/libc-2.11.3.so
python 30077 carlos 1u CHR 1,3 0t0 700 /dev/null
```
Looking at the header row, the layout is always the same: the first column starts at the C of COMMAND, the second ends at the D of PID, the fourth column starts one character after the D of FD, and so on.
Is there any way to count the spaces in the first row and use them to fill in this code to parse the other rows (see the sketch after the snippet below)?
```
# note: variable-length NAME field at the end intentionally omitted
base_format = '8s 1x 6s 1x 10s 1x 4s 1x 9s 1x 6s 1x 9s 1x 6s 1x'
base_format_size = struct.calcsize(base_format)
```
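For what it's worth, a sketch of that width-counting idea (my own illustration, under the simplifying assumption that every column starts where its header word starts, which only roughly holds for the right-aligned lsof columns; the header spacing below is illustrative):
```
import re
import struct

def format_from_header(header):
    """Build a struct format string whose fixed-width fields follow the header columns."""
    starts = [m.start() for m in re.finditer(r'\S+', header)]
    widths = [b - a for a, b in zip(starts, starts[1:])]   # last (NAME) column left variable-length
    return ' '.join('{}s 1x'.format(w - 1) for w in widths)

header = "COMMAND     PID   USER   FD      TYPE DEVICE  SIZE/OFF    NODE NAME"
base_format = format_from_header(header)
base_format_size = struct.calcsize(base_format)
```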
Any ideas how to solve the problem?
|
2013/11/30
|
[
"https://Stackoverflow.com/questions/20303558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2898827/"
] |
I did a bit of reading on `lsof -F` after checking out the other thread and found that it does produce easily parsed output. Here's a quick demonstration of the general idea: it parses that output and prints a small subset of the parsed result to show the format. Are you able to use `-F` for your use case?
```
import subprocess
import copy
import pprint


def get_rows(output_to_parse, whitelist_keys):
    lines = output_to_parse.split("\n")
    rows = []
    while lines:
        row = _get_new_row(lines, whitelist_keys)
        rows.append(row)
    return rows


def _get_new_row(lines, whitelist_keys):
    new_row_keys = set()
    output = {}
    repeat = False
    while lines and repeat is False:
        line = lines.pop()
        if line == '':
            continue
        key = line[0]
        if key not in whitelist_keys:
            raise(ValueError(key))
        value = line[1:]
        if key not in new_row_keys:
            new_row_keys.add(key)
            output[key] = value
        else:
            repeat = True
    return output


if __name__ == "__main__":
    identifiers = subprocess.Popen(["lsof", "-F", "?"], stderr=subprocess.PIPE).communicate()
    keys = set([line.strip()[0] for line in identifiers[1].split("\n") if line != ''][1:])
    lsof_output = subprocess.check_output(["lsof", "-F"])
    rows = get_rows(lsof_output, whitelist_keys=keys)
    pprint.pprint(rows[:20])
```
|
As [@Tim Wilder said](https://stackoverflow.com/a/20304501/4279), you could use `lsof -F` to get machine-readable output. Here's a script that converts `lsof` output into json. One json object per line. It produces output as soon as pipe buffers are full without waiting for the whole `lsof` process to end (it takes a while on my system):
```
#!/usr/bin/env python
import json
import locale
from collections import OrderedDict
from subprocess import Popen, PIPE

encoding = locale.getpreferredencoding(True)  # for json

# define fields we are interested in, see `lsof -F?`
# http://www.opensource.apple.com/source/lsof/lsof-12/lsof/lsof_fields.h
ids = 'cpLftsn'
headers = 'COMMAND PID USER FD TYPE SIZE NAME'.split()  # see lsof(8)
names = OrderedDict(zip(ids, headers))  # preserve order

# use '\0' byte as a field terminator and '\n' to separate each process/file set
p = Popen(["lsof", "-F{}0".format(''.join(names))], stdout=PIPE, bufsize=-1)
for line in p.stdout:  # each line is a process or a file set
    # id -> field
    fields = {f[:1].decode('ascii', 'strict'): f[1:].decode(encoding)
              for f in line.split(b'\0') if f.rstrip(b'\n')}
    if 'p' in fields:  # process set
        process_info = fields  # start new process set
    elif 'f' in fields:  # file set
        fields.update(process_info)  # add process info (they don't intersect)
        result = OrderedDict((name, fields.get(id))
                             for id, name in names.items())
        print(json.dumps(result))  # one object per line
    else:
        assert 0, 'either p or f ids should be present in a set'
p.communicate()  # close stream, wait for the child to exit
```
The field names such as `COMMAND` are described in the [`lsof(8)` man page](http://linux.die.net/man/8/lsof#Output). To get the full list of available field ids, run `lsof -F?` or see [lsof\_fields.h](http://www.opensource.apple.com/source/lsof/lsof-12/lsof/lsof_fields.h).
Fields that are not available are set to `null`; you could omit them instead. I've used `OrderedDict` to preserve order from run to run; the `sort_keys` parameter of `json.dumps` could be used instead. To pretty-print, you could use the `indent` parameter.
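For example, plain standard-library usage of those two parameters (nothing lsof-specific):
```
import json

record = {"COMMAND": "init", "PID": "1", "NAME": "/sbin/init"}
print(json.dumps(record, sort_keys=True, indent=2))  # stable key order, pretty-printed
```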
`lsof` converts non-printable (in current locale) characters using special encoding. It makes some values ambiguous.
|
35,196,449
|
I am writing a script to support my Django project development. At first I thought of using bash for this purpose, but due to lack of knowledge and a total lack of time, I decided to write something using argparse and run system commands using subprocess.
Everything went ok until I had to run
```
./manage.py migrate
```
I do it by running:
```
import subprocess
...
subprocess.Popen("python {} migrate".format(absolute_path_to_manage_py).split())
```
The output seems to look ok:
```
Operations to perform:
Apply all migrations: sessions, admin, auth, contenttypes, accounts, pcb
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
...
Applying sessions.0001_initial... OK
```
And then it stops suddenly, but the script is still active (it's still running), and what's worse, when I run the Django app I get a message that I still have some unapplied migrations.
I think I am missing something about running system commands from Python, or something related to Django migrations.
Any hint on how I could overcome this?
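(Not part of the original question, but for context: `Popen` returns as soon as the child process is started, so a blocking variant, sketched with only the standard library and reusing the question's `absolute_path_to_manage_py` placeholder, would look like this:)
```
import subprocess

# check_call waits for the command to exit and raises CalledProcessError on failure
subprocess.check_call("python {} migrate".format(absolute_path_to_manage_py).split())
```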
|
2016/02/04
|
[
"https://Stackoverflow.com/questions/35196449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1844201/"
] |
This is not impossible, but it is a bad idea - it's very tricky to get right.
Instead, you want to separate the business logic from the UI, so that you can do the logic in the background, while the UI is still on the UI thread. The key is that you must not modify the UI controls from the background thread - instead, you first load whatever data you need, and then either use `Invoke` or `ReportProgress` to marshal it back on the UI thread, where you can update the UI.
If creating the controls themselves is taking too long, you might want to look at some options at increasing the performance there. For example, using `BeginUpdate` and friends to avoid unnecessary redraws and layouting while you're still filling in the data, or creating the controls on demand (as they are about to be visible), rather than all upfront.
A simple example using `await` (using Windows Forms, but the basic idea works everywhere):
```
tabSlow.Enabled = false;
try
{
var data = await Database.LoadDataAsync();
dataGrid.DataSource = data;
}
finally
{
tabSlow.Enabled = true;
}
```
The database operation is done in the background, and when it's done, the UI update is performed back on the UI thread. If you need to do some expensive calculations, rather than I/O, you'd simply use `await Task.Run(...)` instead - just make sure that the task itself doesn't update any UI; it should just return the data you need to update the UI.
|
WPF does not allow you to change UI from the background thread. Only ONE SINGLE thread can handle UI thread. Instead, you should calculate your data in the background thread, and then call `Application.Current.Dispatcher.Invoke(...)` method to update the UI. For example:
```
Application.Current.Dispatcher.Invoke(() => textBoxTab2.Text = "changing UI from the background thread");
```
|
36,811,332
|
I'm having some trouble creating a table using values from a text file. My text file looks like this:
```
e432,6/5/3,6962,c8429,A,4324
e340,2/3/5,566623,c1210,A,3201
e4202,6/5/3,4232,c8419,E,4232
e3230,2/3/5,66632,c1120,A,53204
e4202,6/5/3,61962,c8429,A,4322
```
I would like to generate a table containing only the rows where the last column (amount paid) is less than the third column (final total) and the fifth column (status) equals 'A'. The outstanding amount is found by subtracting the amount paid from the final total.
My code to generate a table is:
```
data = open("pJoptionc.txt", "r")
info=data.readlines()
data.close
for li in info:
    status=li.split(",")[4]
    finaltotal=int(li.split(",")[2])
    amountpaid=int(li.split(",")[5])
    totalrev=0
    headers = ["Estimate Number", "Date", "Final Total", "Customer Number", "Status", "Amount Paid", "Outstanding Amount"]
    print(" ".join(headers))
    for line in open("pJoptionc.txt", "r"):
        line = line.strip().split(",")
        line.append(str(int(line[2]) - int(line[5])))
        if line[2] == line[5] or line[4] in ("E"):
            continue
        for i, word in enumerate(line):
            print(word.ljust(len(headers[i - (i > 4)])), end=" " * ((i - (i > 4)) != len(headers) - 1))
        print()
        outstandingrev =(finaltotal) - (amountpaid)
        totalrev += int(outstandingrev)
print("The total amount of outstanding revenue is...")
print("£",totalrev)
```
My desired output is
```
Estimate Number Date Final Total Customer Number Status Amount Paid Outstanding Amount
e432 6/5/3 6962 c8429 A 4324 2638
e340 2/3/5 566623 c1210 A 3201 563422
e3230 2/3/5 66632 c1120 A 53204 13428
e4202 6/5/3 61962 c8429 A 4322 57640
The total amount of outstanding revenue is...
£ 13428
```
However, when I run the code, the output is the table repeated over and over, and there are negative values in the outstanding amount column. I'm using Python 3.4.3.
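(Purely to illustrate the filtering logic described above, a compact sketch using the `csv` module; it is not the original code, and the file name is taken from the question:)
```
import csv

headers = ["Estimate Number", "Date", "Final Total", "Customer Number",
           "Status", "Amount Paid", "Outstanding Amount"]
totalrev = 0

print("  ".join(headers))
with open("pJoptionc.txt", newline="") as f:
    for est, date, final, cust, status, paid in csv.reader(f):
        outstanding = int(final) - int(paid)
        if status == "A" and int(paid) < int(final):
            print("  ".join([est, date, final, cust, status, paid, str(outstanding)]))
            totalrev += outstanding

print("The total amount of outstanding revenue is...")
print("£", totalrev)
```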
|
2016/04/23
|
[
"https://Stackoverflow.com/questions/36811332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5854201/"
] |
AFAIK, neither JavaScript nor TypeScript provide a generic hashing function.
You have to import a third-party lib, like [ts-md5](https://www.npmjs.com/package/ts-md5) for instance, and give it a string representation of your object: `Md5.hashStr(JSON.stringify(yourObject))`.
Obviously, depending on your precise use case, this could be perfect, or way too slow, or generate too many conflicts...
|
If you want to compare the objects and not the data, then the @Valery solution is not for you, as it will compare the data and not the two objects.
If you want to compare the data and not the objects, then JSON.stringify(obj1) === JSON.stringify(obj2) is enough, which is simple string comparison.
|
36,811,332
|
I'm having some trouble creating a table using values from a text file. My text file looks like this:
```
e432,6/5/3,6962,c8429,A,4324
e340,2/3/5,566623,c1210,A,3201
e4202,6/5/3,4232,c8419,E,4232
e3230,2/3/5,66632,c1120,A,53204
e4202,6/5/3,61962,c8429,A,4322
```
I would like to generate a table containing only the rows where the last column (amount paid) is less than the third column (final total) and the fifth column (status) equals 'A'. The outstanding amount is found by subtracting the amount paid from the final total.
My code to generate a table is:
```
data = open("pJoptionc.txt", "r")
info=data.readlines()
data.close
for li in info:
    status=li.split(",")[4]
    finaltotal=int(li.split(",")[2])
    amountpaid=int(li.split(",")[5])
    totalrev=0
    headers = ["Estimate Number", "Date", "Final Total", "Customer Number", "Status", "Amount Paid", "Outstanding Amount"]
    print(" ".join(headers))
    for line in open("pJoptionc.txt", "r"):
        line = line.strip().split(",")
        line.append(str(int(line[2]) - int(line[5])))
        if line[2] == line[5] or line[4] in ("E"):
            continue
        for i, word in enumerate(line):
            print(word.ljust(len(headers[i - (i > 4)])), end=" " * ((i - (i > 4)) != len(headers) - 1))
        print()
        outstandingrev =(finaltotal) - (amountpaid)
        totalrev += int(outstandingrev)
print("The total amount of outstanding revenue is...")
print("£",totalrev)
```
My desired output is
```
Estimate Number Date Final Total Customer Number Status Amount Paid Outstanding Amount
e432 6/5/3 6962 c8429 A 4324 2638
e340 2/3/5 566623 c1210 A 3201 563422
e3230 2/3/5 66632 c1120 A 53204 13428
e4202 6/5/3 61962 c8429 A 4322 57640
The total amount of outstanding revenue is...
£ 13428
```
However, when I run the code, the output is the table repeated over and over, and there are negative values in the outstanding amount column. I'm using Python 3.4.3.
|
2016/04/23
|
[
"https://Stackoverflow.com/questions/36811332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5854201/"
] |
AFAIK, neither JavaScript nor TypeScript provide a generic hashing function.
You have to import a third-party lib, like [ts-md5](https://www.npmjs.com/package/ts-md5) for instance, and give it a string representation of your object: `Md5.hashStr(JSON.stringify(yourObject))`.
Obviously, depending on your precise use case, this could be perfect, or way too slow, or generate too many conflicts...
|
For non-crypto uses, like implementing a hash table, here is the TypeScript version of the venerable Java hashCode of a string:
```
export function hashCode(str: string): number {
var h: number = 0;
for (var i = 0; i < str.length; i++) {
h = 31 * h + str.charCodeAt(i);
}
return h & 0xFFFFFFFF
}
```
|
25,960,754
|
```
ImportError at /
cannot import name views
Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 1.7
Exception Type: ImportError
Exception Value:
cannot import name views
Exception Location: /Users/adam/Desktop/qblog/qblog/urls.py in <module>, line 1
Python Executable: /Users/adam/Desktop/venv/bin/python
Python Version: 2.7.8
Python Path:
['/Users/adam/Desktop/qblog',
'/Users/adam/Desktop/venv/lib/python27.zip',
'/Users/adam/Desktop/venv/lib/python2.7',
'/Users/adam/Desktop/venv/lib/python2.7/plat-darwin',
'/Users/adam/Desktop/venv/lib/python2.7/plat-mac',
'/Users/adam/Desktop/venv/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/adam/Desktop/venv/lib/python2.7/lib-tk',
'/Users/adam/Desktop/venv/lib/python2.7/lib-old',
'/Users/adam/Desktop/venv/lib/python2.7/lib-dynload',
'/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/adam/Desktop/venv/lib/python2.7/site-packages']
Server time: Sun, 21 Sep 2014 15:12:22 +0000
```
Here is urls.py located in qblog/qblog/:
```
from django.conf.urls import patterns, url
from . import views
urlpatterns = patterns(
'',
url(r'^admin/', include(admin.site.urls)),
url(r'^markdown/', include('django_markdown.urls')),
url(r'^', include('blog.urls')),
)
```
Also, if I add "library" to the first import statement (which I don't need) it will give me the same error but with library, "Cannot import name library".
Here is urls.py located in qblog/blog/:
```
from django.conf.urls import patterns, include, url
from . import views
urlpatterns = patterns(
'',
url(r'^$', views.BlogIndex.as_view(), name="index"),
)
```
Going to the url `http://127.0.0.1:8000/index` provides the same error.
I do not get any errors in the terminal when running `./manage.py runserver`
Project structure:
```
.
├── blog
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── admin.py
│ ├── admin.pyc
│ ├── migrations
│ │ ├── 0001_initial.py
│ │ ├── 0001_initial.pyc
│ │ ├── 0002_auto_20140921_1414.py
│ │ ├── 0002_auto_20140921_1414.pyc
│ │ ├── 0003_auto_20140921_1501.py
│ │ ├── 0003_auto_20140921_1501.pyc
│ │ ├── __init__.py
│ │ └── __init__.pyc
│ ├── models.py
│ ├── models.pyc
│ ├── tests.py
│ ├── urls.py
│ ├── urls.pyc
│ ├── views.py
│ └── views.pyc
├── db.sqlite3
├── manage.py
├── qblog
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
├── static
│ ├── css
│ │ ├── blog.css
│ │ └── bootstrap.min.css
│ ├── icons
│ │ └── favicon.ico
│ └── js
│ ├── bootstrap.min.js
│ └── docs.min.js
└── templates
├── base.html
├── home.html
└── post.html
```
|
2014/09/21
|
[
"https://Stackoverflow.com/questions/25960754",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779600/"
] |
There is no need to import the views in your project-level file. You are not using them there, so no reason to import them.
If you *did* need to, you would just do `from blog import views`, because the views are in the blog directory and manage.py puts the top-level directory into the Python path.
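For instance, a project-level urls.py along those lines could look like this for Django 1.7 (a sketch; note it also adds the `include` and `admin` imports that the snippet in the question relies on):
```
from django.conf.urls import patterns, include, url
from django.contrib import admin

urlpatterns = patterns(
    '',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^markdown/', include('django_markdown.urls')),
    url(r'^', include('blog.urls')),
)
```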
|
You can just use `import views`. This works for me.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this, but no such luck for my particular problem.
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
In order to use the Python package you need the underlying TA-Lib C library first. On a Mac you can just use `brew install ta-lib`, and then `pip install TA-Lib` will work just fine.
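Once that succeeds, a quick sanity check (a minimal sketch; the closing prices below are just random sample data) confirms the wrapper can really call into the C library:

```python
import numpy as np
import talib

# Random float64 "closing prices", only to exercise the bindings.
close = np.random.random(100)

# SMA and RSI are standard TA-Lib indicators; the last few values
# print as numbers once the underlying C library is installed.
print(talib.SMA(close, timeperiod=10)[-5:])
print(talib.RSI(close, timeperiod=14)[-5:])
```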
|
I faced the same problems trying with Anaconda 5.1.0 and Python 3.6 via Visual Studio.
The solution was to get a wheel from <https://www.lfd.uci.edu/~gohlke/pythonlibs>, then install it via pip. You need to make sure the wheel matches your python version (in my case, 3.6).
In Anaconda, I just opened a prompt, navigated to where the wheel was, and ran the following:
`python -m pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl`
For Visual Studio, it was less obvious. Go to the Python Environments tab, choose 'Overview' in the dropdown, then 'Open in PowerShell'. At that point, run the same command as for Anaconda above.
[](https://i.stack.imgur.com/S0V34.png)
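If you are unsure which wheel to grab, this small snippet (a sketch, not part of the original answer) prints the interpreter details that the `cp36` and `win_amd64` parts of the filename have to match:

```python
import platform
import struct
import sys

# The cpXY tag in the wheel name must match this major/minor version...
print("Python tag: cp{}{}".format(sys.version_info.major, sys.version_info.minor))
# ...and win_amd64 vs win32 must match the interpreter's bitness.
print("Bitness   : {}-bit".format(struct.calcsize("P") * 8))
print("OS        : {}".format(platform.system()))
```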
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
I faced the same problems trying with Anaconda 5.1.0 and Python 3.6 via Visual Studio.
The solution was to get a wheel from <https://www.lfd.uci.edu/~gohlke/pythonlibs>, then install it via pip. You need to make sure the wheel matches your python version (in my case, 3.6).
In Anaconda, I just opened a prompt, navigated to where the wheel was, and ran the following:
`python -m pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl`
For Visual Studio, it was less obvious. Go to the Python Environments tab, choose 'Overview' in the dropdown, then 'Open in PowerShell'. At that point, run the same command as for Anaconda above.
[](https://i.stack.imgur.com/S0V34.png)
|
Install an updated **Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019**:
<https://support.microsoft.com/he-il/help/2977003/the-latest-supported-visual-c-downloads>
This worked for me.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
Download `ta-lib-0.4.0-msvc.zip` from <http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-msvc.zip> and unzip to `C:\ta-lib`
This is a 32-bit release. If you want to use 64-bit Python, you will need to build a 64-bit version of the library.
Some unofficial (and unsupported) instructions for building on 64-bit Windows 10, here for reference:
1. Download and Unzip `ta-lib-0.4.0-msvc.zip`
2. Move the Unzipped Folder `ta-lib` to `C:\`
3. Download and install Visual Studio Community 2015 or 2017 - you have to do the big install, I'm afraid - there is no other way.
Remember to select the [Visual C++] feature.
4. Build the TA-Lib library - from the Windows Start Menu, start [VS2015 x64 Native Tools Command Prompt]
`cd` to `C:\ta-lib\c\make\cdr\win32\msvc`
Build the library by typing `nmake`
5. Try installing `ta-lib` again from `pip`, PyCharm, etc. (a quick check you can run afterwards is sketched below).
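After the `nmake` build and a successful `pip install`, a short check (a sketch, assuming the wrapper's documented `talib.get_functions()` helper) confirms that the previously unresolved `TA_*` symbols now load:

```python
import talib

# If the C library linked correctly, the module imports cleanly and
# exposes the full list of indicator functions.
functions = talib.get_functions()
print("TA-Lib loaded with {} functions, e.g. {}".format(len(functions), functions[:5]))
```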
|
Download the related package from
<https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib>
```
TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl  (since I have Python 3.6, cp36)
```
and use
```
pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl
```
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
Download the related package from
<https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib>
```
TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl  (since I have Python 3.6, cp36)
```
and use
```
pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl
```
|
Install an updated **Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019**:
<https://support.microsoft.com/he-il/help/2977003/the-latest-supported-visual-c-downloads>
This worked for me.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
Download the related package from
<https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib>
```
TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl  (since I have Python 3.6, cp36)
```
and use
```
pip install TA_Lib-0.4.17-cp36-cp36m-win_amd64.whl
```
|
I had to spend a good amount of time on this even with so many people facing the same issue; long story short, Windows makes this painful. I am on Windows 10 running Python 3.7.
Enough ranting; here are the steps that worked for me:
1. Install Visual C++ Build Tools (<https://www.youtube.com/watch?v=P4_R34Lb-PE>)
<https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019>
2. While installing the Build Tools, make sure you have selected the Windows 10 SDK; that resolved an "io.h file not found" error. I had to modify the installation multiple times by adding Visual C++ components. [](https://i.stack.imgur.com/V7whp.png)
3. After this, `pip3 install ta-lib` or `python3 -m pip install ta-lib` still didn't work. What worked was downloading the .whl files as mentioned above (<https://www.lfd.uci.edu/~gohlke/pythonlibs>) and, since I have Python 3.7, selecting the one with cp37 in it (TA_Lib-0.4.18-cp37-cp37m-win_amd64.whl).
I hope I am not missing any step, but by the time I figured out the steps above I was 4 hours older.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
You can proceed as follows:
1. Go to the following page: [https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib](https://www.lfd.uci.edu/%7Egohlke/pythonlibs/#ta-lib)
Choose your version of Python: `cp35` means Python 3.5 (64-bit, for example)
2. Download the package and unzip it in `...\Python\Python35\Scripts`
3. Go on `cmd` and in the same directory (`...\Python\Python35\Scripts`) execute the following command:
`pip3 install TA_Lib-0.4.17-cp35-cp35m-win_amd64.whl`
4. installed!
|
I had to spend a good amount of time on this even with so many people facing the same issue; long story short, Windows makes this painful. I am on Windows 10 running Python 3.7.
Enough ranting; here are the steps that worked for me:
1. Install Visual C++ Build Tools (<https://www.youtube.com/watch?v=P4_R34Lb-PE>)
<https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019>
2. While installing the Build Tools, make sure you have selected the Windows 10 SDK; that resolved an "io.h file not found" error. I had to modify the installation multiple times by adding Visual C++ components. [](https://i.stack.imgur.com/V7whp.png)
3. After this, `pip3 install ta-lib` or `python3 -m pip install ta-lib` still didn't work. What worked was downloading the .whl files as mentioned above (<https://www.lfd.uci.edu/~gohlke/pythonlibs>) and, since I have Python 3.7, selecting the one with cp37 in it (TA_Lib-0.4.18-cp37-cp37m-win_amd64.whl).
I hope I am not missing any step, but by the time I figured out the steps above I was 4 hours older.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
The following solved the issue I had installing ta-lib for Python:
1. OS: Windows 10
Python: 2.7, embedded in [miniconda](https://conda.io/miniconda).
Miniconda: 64 bits.
[PyCharm](https://www.jetbrains.com/pycharm/) 2018.1.4 Community Edition.
2. You need to convert ta-lib to 64 bits. You can find it already converted [here](https://github.com/afnhsn/TA-Lib_x64).
The site also tells you what to do; however, several steps there are missing or confusing, so I am explaining them here.
It is important that you do not just unzip the file 'ta-lib x64.zip' at 'C:'. Inside the zip file there is a 'ta-lib' folder, and that folder is the one that has to be in 'C:'.
3. From the same github account, download and execute C++ Build Tools `en_visual_cpp_build_tools_2015_update_3_x86_x64_8923157.exe`.
4. Microsoft Visual C++ 9.0 is required. Get it from [here](http://aka.ms/vcpython27).
Download and install it if you don't already have it.
5. Inside your python environment, run `pip install ta-lib`
This worked for me; I hope this info is useful for you.
Note: At the time, there was no TA-Lib release for Python 3.x, which is why I used Python 2.7.
|
From everything that I have tried, the simplest way to solve this was more obvious than it looked based on previous answers. Run the following command on your conda terminal:
```
conda install -c conda-forge ta-lib
```
Make sure you are running this command on the desired environment.
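A quick way to confirm you are in the intended environment, and that the conda-forge build is importable from it, is the small check below (assuming the standard TA-Lib Python wrapper):
```
import sys
import talib

print(sys.executable)             # path of the interpreter currently running
print(talib.get_functions()[:5])  # a few of the indicator names TA-Lib exposes
```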
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
Download `ta-lib-0.4.0-msvc.zip` from <http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-msvc.zip> and unzip to `C:\ta-lib`
This is a 32-bit release. If you want to use 64-bit Python, you will need to build a 64-bit version of the library.
Some unofficial (and unsupported) instructions for building on 64-bit Windows 10, here for reference:
1. Download and Unzip `ta-lib-0.4.0-msvc.zip`
2. Move the Unzipped Folder `ta-lib` to `C:\`
3. Download and Install Visual Studio Community 2015 or 2017 - you have to do the big install, I'm afraid - no other way
Remember to Select [Visual C++] Feature
4. Build TA-Lib Library - From Windows Start Menu, Start [VS2015 x64 Native Tools Command Prompt]
`cd` to `C:\ta-lib\c\make\cdr\win32\msvc`
Build the Library by typing `nmake`
5. Try installing `ta-lib` again from `pip` or PyCharm, etc.
|
From everything that I have tried, the simplest way to solve this was more obvious than it looked based on previous answers. Run the following command on your conda terminal:
```
conda install -c conda-forge ta-lib
```
Make sure you are running this command on the desired environment.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
In order to use the python package you need the dependencies first. For mac you can just use `brew install ta-lib` and then `pip install TA-Lib` will work just fine.
|
From everything that I have tried, the simplest way to solve this was more obvious than it looked based on previous answers. Run the following command on your conda terminal:
```
conda install -c conda-forge ta-lib
```
Make sure you are running this command on the desired environment.
|
41,155,985
|
Frustratingly, I am having a lot of difficulty installing the TA-Lib package in Python.
<https://pypi.python.org/pypi/TA-Lib>
I have read through all the forum posts I can find on this but no such luck for my particular problem..
Windows 10
Python 3.5.2
Anaconda 4.2.0
Cython 0.24.1
Microsoft Visual Studio 14.0
I have downloaded and extracted ta-lib-0.4.0-msvc.zip to C:/TA-Lib
(common problems seem to be people not installing the underlying TA-Lib file <http://www.ta-lib.org/hdr_dw.html>)
If someone could help me solve this I would be very appreciative!
Using 'pip install ta-lib' I get the following:
```
C:\Users\Matt>pip install ta-lib
Collecting ta-lib
Using cached TA-Lib-0.4.10.tar.gz
Building wheels for collected packages: ta-lib
Running setup.py bdist_wheel for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\Matt\AppData\Local\Temp\tmpqstzmsgspip-wheel- --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Failed building wheel for ta-lib
Running setup.py clean for ta-lib
Failed to build ta-lib
Installing collected packages: ta-lib
Running setup.py install for ta-lib ... error
Complete output from command c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.5\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.5\talib
copying talib\test_data.py -> build\lib.win-amd64-3.5\talib
copying talib\test_func.py -> build\lib.win-amd64-3.5\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.5\talib
copying talib\__init__.py -> build\lib.win-amd64-3.5\talib
running build_ext
skipping 'talib\common.c' Cython extension (up-to-date)
building 'talib.common' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\matt\anaconda3\lib\site-packages\numpy\core\include -Ic:\ta-lib\c\include -Ic:\users\matt\anaconda3\include -Ic:\users\matt\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctalib\common.c /Fobuild\temp.win-amd64-3.5\Release\talib\common.obj
common.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:c:\users\matt\anaconda3\libs /LIBPATH:c:\users\matt\anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" ta_libc_cdr.lib /EXPORT:PyInit_common build\temp.win-amd64-3.5\Release\talib\common.obj /OUT:build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib
common.obj : warning LNK4197: export 'PyInit_common' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\talib\common.cp35-win_amd64.exp
common.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_Shutdown
common.obj : error LNK2001: unresolved external symbol TA_Initialize
common.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
common.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.5\talib\common.cp35-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120
----------------------------------------
Command "c:\users\matt\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\Matt\\AppData\\Local\\Temp\\pip-build-vv02ktg_\\ta-lib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\Matt\AppData\Local\Temp\pip-qxmjmn5m-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Matt\AppData\Local\Temp\pip-build-vv02ktg_\ta-lib\
```
|
2016/12/15
|
[
"https://Stackoverflow.com/questions/41155985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273219/"
] |
From <https://github.com/mrjbq7/ta-lib>:
'This typically means that it can't find the underlying TA-Lib library, a dependency which needs to be installed.'
Install the underlying TA-Lib library first from here:
<https://www.ta-lib.org/hdr_dw.html>
I used the 'ta-lib-0.4.0-msvc.zip' one.
Then download a whl file from: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib>
I used the 'TA\_Lib‑0.4.16‑cp35‑cp35m‑win\_amd64.whl' one.
I cannot remember for certain, but I think I also ran `pip install TA-Lib` at the end.
|
Install an updated **Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019**:
<https://support.microsoft.com/he-il/help/2977003/the-latest-supported-visual-c-downloads>
It worked for me.
|
31,091,104
|
I am using a PHP server at the back end and a basic web page which asks the user to upload an image. This image is used as an input to a MATLAB script to be executed on the server side.
What I need is something like a MATLAB session (not sure if that is the right word) that is already running on the server side and runs the MATLAB script. I know about the command `"matlab -nodesktop -nojvm"`, which can be used, but the point is that I don't wish to invoke MATLAB again and again; rather, I want to execute the MATLAB script on the already-running MATLAB instance whenever a user uploads an image and get the output (which I need).
There are some constraints:
1. OS -> Ubuntu
2. Can't use python engine.
|
2015/06/27
|
[
"https://Stackoverflow.com/questions/31091104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3159395/"
] |
There are multiple interfaces for controlling MATLAB. Probably the best choice for this case is [matlabcontrol](https://code.google.com/p/matlabcontrol/) or the MATLAB engine for Python (which you can't use for some reason). On Windows, a third alternative would be COM.
Besides controlling the MATLAB process, you could implement an application in MATLAB which receives the data, processes it and sends it back. I solved a similar problem using [apache xmlrpc](https://ws.apache.org/xmlrpc/) in MATLAB.
There are also some submissions on the MATLAB File Exchange that directly provide a MATLAB console [via web](http://www.mathworks.com/matlabcentral/fileexchange/29027-web-server).
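Your front end is PHP, but just to illustrate the request/response pattern an XML-RPC bridge gives you, here is a minimal client sketch in Python; the endpoint URL and the `process_image` method name are hypothetical placeholders, not part of any real MATLAB service:
```
import xmlrpc.client

# Hypothetical long-running MATLAB-side XML-RPC server
server = xmlrpc.client.ServerProxy("http://localhost:8080/RPC2")

with open("uploaded_image.png", "rb") as f:
    payload = xmlrpc.client.Binary(f.read())  # wrap raw bytes for transport

result = server.process_image(payload)        # blocks until the MATLAB side returns
print(result)
```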
|
There is an API for C++ where you call the Matlab engine using engOpen. This will open Matlab and leave it running until you close it. Then your C++ program can wait and listen for the image to process.
<http://www.mathworks.com/help/matlab/calling-matlab-engine-from-c-c-and-fortran-programs.html>
Another option is to compile the Matlab script as a standalone executable. Hard code the input image name and output and let PHP handle moving around the inputs and outputs. All the server needs to do is call the executable. It takes about 5 seconds to start the Matlab runtime each time.
|
31,091,104
|
I am using a PHP server at the back end and a basic web page which asks the user to upload an image. This image is used as an input to a MATLAB script to be executed on the server side.
What I need is something like a MATLAB session (not sure if that is the right word) that is already running on the server side and runs the MATLAB script. I know about the command `"matlab -nodesktop -nojvm"`, which can be used, but the point is that I don't wish to invoke MATLAB again and again; rather, I want to execute the MATLAB script on the already-running MATLAB instance whenever a user uploads an image and get the output (which I need).
There are some constraints:
1. OS -> Ubuntu
2. Can't use python engine.
|
2015/06/27
|
[
"https://Stackoverflow.com/questions/31091104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3159395/"
] |
You could write Matlab code to check the upload folder for new images regularly. Process new images and then move the processed images to an archive folder.
To check for new files use `dir` command
```
FILES = dir(['path/to/upload/folder/*.PNG']);
```
Replace PNG extension with that of your image files.
To move files use `movefile` command
```
movefile('path/to/upload/folder/Filename.PNG', 'path/to/archive/folder/', 'f')
```
To run the Matlab script from the terminal and keep it running in the background:
```
/usr/local/MATLAB/R2014a/bin/matlab -nodisplay -nosplash -r "cd /path/to/matlab/code; MatlabScript"
# then press Ctrl+Z to suspend it, and resume it in the background:
bg
disown -h %1
```
|
There is an API for C++ where you call the Matlab engine using engOpen. This will open Matlab and leave it running until you close it. Then your C++ program can wait and listen for the image to process.
<http://www.mathworks.com/help/matlab/calling-matlab-engine-from-c-c-and-fortran-programs.html>
Another option is to compile the Matlab script as a standalone executable. Hard code the input image name and output and let PHP handle moving around the inputs and outputs. All the server needs to do is call the executable. It takes about 5 seconds to start the Matlab runtime each time.
|
24,518,868
|
I am looking at the bencode specification and I am writing a C++ port of the deluge bit-torrent client's bencode implementation (written in python).
The Python implementation includes a dictionary of { data\_types : callback\_functions } by which a function wrapper easily selects which encoding function to use by a dictionary look-up of the data type of the variable provided to the wrapper function.
I searched for ways to identify the types of the C++ equivalents of Python primitives at run time, and found that typeid() might provide what I am looking for. I know that my code will not be storing the return value of typeid(), but I would rather not rely on a function that produces compiler-specific string literals. I found a [solution](http://www.cplusplus.com/forum/general/21246/) that would allow me to specify the type manually as a portable string literal, but it is difficult for me to wrap my head around; templates are a nightmare to me.
Here is the Python code in which types are used to select appropriate functions:
```
encode_func = {}
encode_func[Bencached] = encode_bencached
encode_func[IntType] = encode_int
encode_func[LongType] = encode_int
encode_func[StringType] = encode_string
encode_func[ListType] = encode_list
encode_func[TupleType] = encode_list
encode_func[DictType] = encode_dict
try:
    from types import BooleanType
    encode_func[BooleanType] = encode_bool
except ImportError:
    pass

def bencode(x):
    r = []
    encode_func[type(x)](x, r)
    return ''.join(r)
```
Here is the C++ sample I found that allows for explicit class type enumeration:
```
struct Foobar;

template<typename T> struct type_name
{
    static const char* name() { static_assert(false, "You are missing a DECL_TYPE_NAME"); }
};

template<> struct type_name<int>    { static const char* name() {return "int";} };
template<> struct type_name<string> { static const char* name() {return "string";} };
template<> struct type_name<Foobar> { static const char* name() {return "Foobar";} };

int main()
{
    cout << type_name<int>::name();   // prints "int"
    cout << type_name<float>::name(); // compile time error. float wasn't declared.
}
```
If this was implemented appropriately in my classes, I believe I could use `switch(variable.name())` to select which function to use to encode the value.
I have three specific questions:
1. What is the purpose of `template<typename T> struct type_name { ... };` and of the parent class's `name()` function? I don't really even understand what it is; do templates have scopes? Why can't the `name()` function just `static_assert()` a compile error when called?
2. Why are the `template<> struct type_name<type> { static const char* name() {return "type";} }` values in the parent-class? Would it not be more clear to simply over-load the `name()` function for each child-class of a different type?
3. If I over-load `name()` for each child-class, and cast a child-class variable to the parent-class, do I still call the `name()` function call of the child-class if I call `variable.name()`? or will I cause a compile time error because I called the parent-class's `name()` function? Does casting change the scope of the class?
I believe that my question stems from a mis-understanding of templates and their purpose in C++, as well as a lack of understanding of inheritance.
|
2014/07/01
|
[
"https://Stackoverflow.com/questions/24518868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3795241/"
] |
A primer on templates
---------------------
C++, in contrast to the dynamic Python, is a **statically typed language**. What this means is that the types of objects are known at compile time (more on the differences [here](https://stackoverflow.com/a/1517670/2567683)).
A way to generalize code in C++ is templates, which provide the tools for [static or compile time polymorphism](http://www.catonmat.net/blog/cpp-polymorphism/). To get an idea, take a Python example of a `min` function:
```
def min(a, b):
    if (a < b):
        return a
    return b
```
To translate this in C++ you'd say
```
int min(int a, int b) {
    return a < b ? a : b;
}
```
But the problem is that this translation is inaccurate: it only works for integers (and any type that is convertible to integers, which may cause truncation).
One way to overcome this would be to define overloads for all your types:
```
double min(double a, double b); // the definition would be the same as above
float min(float a, float b);
```
But that would be tedious and error prone and far from generic. A better solution is to work with templates and generalize the function for any type `T` (that supports the `<` operator):
```
template<typename T>
T min(T a, T b) { return a < b ? a : b; }
```
or, better yet, you could even handle the case where the two arguments have different types:
```
template<typename T1, typename T2>
auto min(T1 a, T2 b) -> decltype(a+b) { return a < b ? a : b; }
```
Templates are a powerful mechanism and I couldn't even begin to provide links and sources; you'd have to google your way through this.
On specific questions
---------------------
* The syntax `template<> struct type_name<AType> { ...` is called a template (full) specialization. What it does is mandate the layout of a class template for specific template parameters. Here for example:
```
template<> struct type_name<int> { static const char* name() {return "int";} };
```
We are informed that when the type of `T` in `type_name` is `int` the member function `name()` returns "int"
* There are no child classes here, you are not building a hierarchy. `name` is a member function in a class template and you're specializing this class for different types of template parameters.
On the big picture
------------------
Being a statically typed language, C++ can provide cleaner solutions to this:
>
> ...a function wrapper easily selects which encoding function to use by a dictionary look-up of the data type of the variable provided to the wrapper function.
>
>
>
You can directly parametrize the behaviour based on your types.
**An Example** would be to overload your function
```
void Encode(int arg) { /* calls encode_int on arg */ }
void Encode(TupleType arg) { /* calls encode_list on arg */ }
```
Another example would be to work with templates and policies that parametrize the behaviour for each type.
|
What you're seeing here is not a parent-child relationship as in normal inheritance, it's a [specialization](http://en.cppreference.com/w/cpp/language/template_specialization). The template defines the default implementation, and the specializations replace that implementation for parameters that match the specific type. So:
1. `template<typename T> struct type_name { ... };` is defining the template class. Without it none of the following definitions would make any sense, since they'd be specializing something that doesn't exist.
2. Those are the specializations for specific types. They are *not* child classes.
3. Because it's not inheritance, you can't cast.
P.S. You can't switch on a string, only integer types.
|
49,342,652
|
I have a Django Web-Application that uses celery in the background for periodic tasks.
Right now I have three docker images
* one for the django application
* one for celery workers
* one for the celery scheduler
whose `Dockerfile`s all look like this:
```
FROM alpine:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY Pipfile Pipfile.lock ./
RUN apk update && \
apk add python3 postgresql-libs jpeg-dev git && \
apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
pip3 install --no-cache-dir pipenv && \
pipenv install --system && \
apk --purge del .build-deps
COPY . ./
# Run the image as a non-root user
RUN adduser -D noroot
USER noroot
EXPOSE $PORT
CMD <Different CMD for all three containers>
```
So they are all exactly the same except the last line.
Would it make sense here to create some kind of base image that contains everything except CMD, and have all three images use that as a base and add only their respective CMD?
Or won't that give me any advantages, because everything is cached anyway?
Is a separation like you see above reasonable?
Two small bonus questions:
* Sometimes the `apk update ..` layer is cached by docker. How does docker know that there are no updates here?
* I often read that I should reduce the number of layers as far as possible to reduce image size. But isn't that at odds with the caching idea, and won't it result in longer builds?
|
2018/03/17
|
[
"https://Stackoverflow.com/questions/49342652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/376172/"
] |
I suggest using one Dockerfile and just changing your CMD at runtime. A small modification makes it work both locally and on Heroku.
As far as Heroku is concerned, it lets you start the container with environment variables.
[heroku set-up-your-local-environment-variables](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables)
```
FROM alpine:3.7
ENV PYTHONUNBUFFERED 1
ENV APPLICATION_TO_RUN=default_application
RUN mkdir /code
WORKDIR /code
COPY Pipfile Pipfile.lock ./
RUN apk update && \
apk add python3 postgresql-libs jpeg-dev git && \
apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
pip3 install --no-cache-dir pipenv && \
pipenv install --system && \
apk --purge del .build-deps
COPY . ./
# Run the image as a non-root user
RUN adduser -D noroot
USER noroot
EXPOSE $PORT
CMD $APPLICATION_TO_RUN
```
So when you run your container, pass the application name to run via the environment variable.
```
docker run -it --name test -e APPLICATION_TO_RUN="celery beat" --rm test
```
|
I would recommend looking at [docker-compose](https://docs.docker.com/compose/) to simplify management of multiple containers.
Use a single Dockerfile like the one you posted above, then create a `docker-compose.yml` that might look something like this:
```
version: '3'

services:
  # a django service serving an application on port 80
  django:
    build: .
    command: python manage.py runserver
    ports:
      - 8000:80

  # the celery worker
  worker:
    build: .
    command: celery worker

  # the celery scheduler
  scheduler:
    build: .
    command: celery beat
```
Of course, modify the commands here to be whatever you are using for your currently separate Dockerfiles.
When you want to rebuild the image, `docker-compose build` will rebuild your container image from your Dockerfile for the first service, then reuse the built image for the other services (because they already exist in the cache). `docker-compose up` will spin up 3 instances of your container image, but overriding the run command each time.
If you want to get more sophisticated, there are [plenty of resources](https://blog.syncano.io/configuring-running-django-celery-docker-containers-pt-1/) out there for the very common combination of django and celery.
|
61,882,562
|
I'm trying to access my REST API, which I released to Heroku with Docker, and to check that the dyno is running the gunicorn command that I put in the Dockerfile.
The Dockerfile that I used is:
```
FROM ubuntu:18.04
RUN apt update
RUN apt install -y python3 python3-pip
RUN mkdir /opt/app
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
COPY Ski4All/ /opt/app/
COPY requirements.txt /opt/app/
RUN pip3 install -r /opt/app/requirements.txt
ENV PORT=8000
CMD exec gunicorn Ski4All.wsgi:application --bind 0.0.0.0:$PORT
```
When I release, I go into the container via `heroku run bash -a "name app"`, and when I execute `ps aux` I don't see the API running. But it works if I execute the command from my Dockerfile manually while I'm inside the container.
Any idea?
|
2020/05/19
|
[
"https://Stackoverflow.com/questions/61882562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13039952/"
] |
As @jhaos mentioned, gunicorn was running in the root path.
Change the following in the `Dockerfile`:
```
# change this:
RUN mkdir /opt/app
COPY Ski4All/ /opt/app/

# to this:
WORKDIR /opt/app
COPY Ski4All .
```
|
The problem was that the gunicorn command was running in the / root path and not in the correct workdir. In the comments @HariHaraSuhan solved the error.
|
12,183,763
|
I am following the tutorial for deploying a django project on AWS elastic beanstalk here:
<http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html>
My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message:
`Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found`
That leads me to believe that the tutorial is missing a part about installing django files on the server or at least configuring my project to recognize django-admin.py. I have django installed on my local machine so it works there.
I know python support is brand new for elastic beanstalk but has anyone deployed django to it?
|
2012/08/29
|
[
"https://Stackoverflow.com/questions/12183763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/656707/"
] |
I believe you don't need to put container\_commands in .config because there is no database or table at this moment.
|
I followed the same tutorial recently and had a similar result.
At step 6, upon seeing the default django 'congrats' page render locally, I deployed to EB as instructed and got a 404 instead of the default 'congrats' page.
I decided to use the code up to that point as a foundation for following the '[getting started with django tutorial](https://docs.djangoproject.com/en/dev/intro/tutorial01/)' which led me to a successful rendering of a 'home' view. This is a much more useful place to be anyway. I do agree that there is something wrong with the AWS tutorial and posted to the AWS forums [here](https://forums.aws.amazon.com/thread.jspa?threadID=103117&tstart=0).
|
12,183,763
|
I am following the tutorial for deploying a django project on AWS elastic beanstalk here:
<http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html>
My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message:
`Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found`
That leads me to believe that the tutorial is missing a part about installing django files on the server or at least configuring my project to recognize django-admin.py. I have django installed on my local machine so it works there.
I know python support is brand new for elastic beanstalk but has anyone deployed django to it?
|
2012/08/29
|
[
"https://Stackoverflow.com/questions/12183763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/656707/"
] |
Did you do this step: freeze the requirements.txt file?
```
(djangodev)# pip freeze > requirements.txt
```
Note
Make sure your requirements.txt file contains the following:
```
Django==1.4.1
MySQL-python==1.2.3
```
I had the same problem because I skipped it. Once I did it: add, commit and push. It works!
|
I followed the same tutorial recently and had a similar result.
At step 6, upon seeing the default django 'congrats' page render locally, I deployed to EB as instructed and got a 404 instead of the default 'congrats' page.
I decided to use the code up to that point as a foundation for following the '[getting started with django tutorial](https://docs.djangoproject.com/en/dev/intro/tutorial01/)' which led me to a successful rendering of a 'home' view. This is a much more useful place to be anyway. I do agree that there is something wrong with the AWS tutorial and posted to the AWS forums [here](https://forums.aws.amazon.com/thread.jspa?threadID=103117&tstart=0).
|
12,183,763
|
I am following the tutorial for deploying a django project on AWS elastic beanstalk here:
<http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html>
My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message:
`Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found`
That leads me to believe that the tutorial is missing a part about installing django files on the server or at least configuring my project to recognize django-admin.py. I have django installed on my local machine so it works there.
I know python support is brand new for elastic beanstalk but has anyone deployed django to it?
|
2012/08/29
|
[
"https://Stackoverflow.com/questions/12183763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/656707/"
] |
I believe you don't need to put container\_commands in .config because there is no database or table at this moment.
|
If you can, you should try to access the log file; it might give you a better idea of what's going on. Here's a link that might help:
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html>
|
12,183,763
|
I am following the tutorial for deploying a django project on AWS elastic beanstalk here:
<http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html>
My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message:
`Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found`
That leads me to believe that the tutorial is missing a part about installing django files on the server or at least configuring my project to recognize django-admin.py. I have django installed on my local machine so it works there.
I know python support is brand new for elastic beanstalk but has anyone deployed django to it?
|
2012/08/29
|
[
"https://Stackoverflow.com/questions/12183763",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/656707/"
] |
Did you do this step: freeze the requirements.txt file?
```
(djangodev)# pip freeze > requirements.txt
```
Note
Make sure your requirements.txt file contains the following:
```
Django==1.4.1
MySQL-python==1.2.3
```
I had the same problem because I skipped it. Once I did it: add, commit and push. It works!
|
If you can, you should try to access the log file; it might give you a better idea of what's going on. Here's a link that might help:
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html>
|
730,573
|
I have gotten quite familiar with Django's email-sending abilities, but I haven't seen anything about it receiving and processing emails from users. Is this functionality available?
A few google searches have not turned up very promising results. Though I did find this: [Receive and send emails in python](https://stackoverflow.com/questions/348392/receive-and-send-emails-in-python)
Am I going to have to roll my own? if so, I'll be posting that app faster than you can say... whatever you say.
thanks,
Jim
**update**: I'm not trying to make an email server, I just need to add some functionality where you can email an image to the site and have it pop up in your account.
|
2009/04/08
|
[
"https://Stackoverflow.com/questions/730573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
] |
There's an app called [jutda-helpdesk](http://code.google.com/p/jutda-helpdesk/) that uses Python's `poplib` and `imaplib` to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access.
This is adapted from their [get\_email.py](http://code.google.com/p/jutda-helpdesk/source/browse/trunk/management/commands/get_email.py):
```
import poplib
import imaplib

def process_mail(mb):
    print "Processing: %s" % mb
    if mb.email_box_type == 'pop3':
        if mb.email_box_ssl:
            if not mb.email_box_port: mb.email_box_port = 995
            server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port))
        else:
            if not mb.email_box_port: mb.email_box_port = 110
            server = poplib.POP3(mb.email_box_host, int(mb.email_box_port))
        server.getwelcome()
        server.user(mb.email_box_user)
        server.pass_(mb.email_box_pass)
        messagesInfo = server.list()[1]
        for msg in messagesInfo:
            msgNum = msg.split(" ")[0]
            msgSize = msg.split(" ")[1]
            full_message = "\n".join(server.retr(msgNum)[1])
            # Do something with the message
            server.dele(msgNum)
        server.quit()
    elif mb.email_box_type == 'imap':
        if mb.email_box_ssl:
            if not mb.email_box_port: mb.email_box_port = 993
            server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port))
        else:
            if not mb.email_box_port: mb.email_box_port = 143
            server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port))
        server.login(mb.email_box_user, mb.email_box_pass)
        server.select(mb.email_box_imap_folder)
        status, data = server.search(None, 'ALL')
        for num in data[0].split():
            status, data = server.fetch(num, '(RFC822)')
            full_message = data[0][1]
            # Do something with the message
            server.store(num, '+FLAGS', '\\Deleted')
        server.expunge()
        server.close()
        server.logout()
```
`mb` is just some object to store all the mail server info, the rest should be pretty clear.
You'll probably need to check the docs on `poplib` and `imaplib` to get specific parts of the message, but hopefully this is enough to get you going.
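Since the goal in the question is to have an emailed image show up in an account, here is a minimal sketch of my own (not part of jutda-helpdesk) showing how `full_message` could be handed to the standard `email` module to pull image attachments out; `target_dir` is just a placeholder path, and hooking the saved file up to your Django models is left to you.

```
import email
import os

def save_image_attachments(full_message, target_dir="/tmp/incoming"):
    """Parse a raw RFC822 message and save any image attachments."""
    msg = email.message_from_string(full_message)
    saved = []
    for part in msg.walk():
        # Only parts with an image content type are interesting here
        if part.get_content_maintype() != 'image':
            continue
        filename = part.get_filename() or 'unnamed.img'
        path = os.path.join(target_dir, filename)
        with open(path, 'wb') as f:
            f.write(part.get_payload(decode=True))
        saved.append(path)
    return saved
```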
|
Django is really intended as a web server (well, as a framework that fits into a web server), not as an email server. I suppose you could put some code into a Django web application that starts up an email server, using the kind of code shown in that question you linked to, but I really wouldn't recommend it; it's an abuse of the capabilities of dynamic web programming.
The normal practice is to have separate email and web servers, and for that you would want to look into something like Sendmail or (better yet) Postfix. For POP3 you would also need something like Dovecot or Courier, I think. (It's certainly possible to have the email server notify your web application when emails are received so it can act on them, if that's what you want to do.)
*EDIT*: in response to your comments: yes you are trying to make (or at least use) an email server. An email server is just a program that receives emails (and may be capable of sending them as well, but you don't need that).
You could definitely write a small email server in Python that just receives these emails and saves the images to the filesystem or a database or whatever. (It might be worth asking a new question about that.) But don't make it part of your Django web app; keep it as its own separate program.
|
730,573
|
I have gotten quite familiar with Django's email-sending abilities, but I haven't seen anything about it receiving and processing emails from users. Is this functionality available?
A few google searches have not turned up very promising results. Though I did find this: [Receive and send emails in python](https://stackoverflow.com/questions/348392/receive-and-send-emails-in-python)
Am I going to have to roll my own? if so, I'll be posting that app faster than you can say... whatever you say.
thanks,
Jim
**update**: I'm not trying to make an email server, I just need to add some functionality where you can email an image to the site and have it pop up in your account.
|
2009/04/08
|
[
"https://Stackoverflow.com/questions/730573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
] |
I know this question is pretty old now but just thought I'd add for future reference that you might want to give <http://cloudmailin.com> a go. We have quite a few django users using the system and it should be a little simpler than the proposed solution.
|
Django is really intended as a web server (well, as a framework that fits into a web server), not as an email server. I suppose you could put some code into a Django web application that starts up an email server, using the kind of code shown in that question you linked to, but I really wouldn't recommend it; it's an abuse of the capabilities of dynamic web programming.
The normal practice is to have separate email and web servers, and for that you would want to look into something like Sendmail or (better yet) Postfix. For POP3 you would also need something like Dovecot or Courier, I think. (It's certainly possible to have the email server notify your web application when emails are received so it can act on them, if that's what you want to do.)
*EDIT*: in response to your comments: yes you are trying to make (or at least use) an email server. An email server is just a program that receives emails (and may be capable of sending them as well, but you don't need that).
You could definitely write a small email server in Python that just receives these emails and saves the images to the filesystem or a database or whatever. (It might be worth asking a new question about that.) But don't make it part of your Django web app; keep it as its own separate program.
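If you do go the separate-program route, here is a minimal sketch of such a receiver built on the standard library's `smtpd` module (a rough illustration only, not a production mail server; `smtpd` is deprecated in recent Python 3 releases, and the address/port are placeholders):

```
import asyncore
import smtpd

class ImageInbox(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        # 'data' is the raw message text; parse it with the email module
        # and store any attachment, e.g. by calling into your Django app.
        print("Got message from %s (%d bytes)" % (mailfrom, len(data)))

# Listen on an unprivileged port; point your MTA or tests at it.
server = ImageInbox(('127.0.0.1', 1025), None)
asyncore.loop()
```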
|
730,573
|
I have gotten quite familiar with Django's email-sending abilities, but I haven't seen anything about it receiving and processing emails from users. Is this functionality available?
A few google searches have not turned up very promising results. Though I did find this: [Receive and send emails in python](https://stackoverflow.com/questions/348392/receive-and-send-emails-in-python)
Am I going to have to roll my own? if so, I'll be posting that app faster than you can say... whatever you say.
thanks,
Jim
**update**: I'm not trying to make an email server, I just need to add some functionality where you can email an image to the site and have it pop up in your account.
|
2009/04/08
|
[
"https://Stackoverflow.com/questions/730573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
] |
There's an app called [jutda-helpdesk](http://code.google.com/p/jutda-helpdesk/) that uses Python's `poplib` and `imaplib` to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access.
This is adapted from their [get\_email.py](http://code.google.com/p/jutda-helpdesk/source/browse/trunk/management/commands/get_email.py):
```
import poplib
import imaplib

def process_mail(mb):
    print "Processing: %s" % mb
    if mb.email_box_type == 'pop3':
        if mb.email_box_ssl:
            if not mb.email_box_port: mb.email_box_port = 995
            server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port))
        else:
            if not mb.email_box_port: mb.email_box_port = 110
            server = poplib.POP3(mb.email_box_host, int(mb.email_box_port))
        server.getwelcome()
        server.user(mb.email_box_user)
        server.pass_(mb.email_box_pass)
        messagesInfo = server.list()[1]
        for msg in messagesInfo:
            msgNum = msg.split(" ")[0]
            msgSize = msg.split(" ")[1]
            full_message = "\n".join(server.retr(msgNum)[1])
            # Do something with the message
            server.dele(msgNum)
        server.quit()
    elif mb.email_box_type == 'imap':
        if mb.email_box_ssl:
            if not mb.email_box_port: mb.email_box_port = 993
            server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port))
        else:
            if not mb.email_box_port: mb.email_box_port = 143
            server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port))
        server.login(mb.email_box_user, mb.email_box_pass)
        server.select(mb.email_box_imap_folder)
        status, data = server.search(None, 'ALL')
        for num in data[0].split():
            status, data = server.fetch(num, '(RFC822)')
            full_message = data[0][1]
            # Do something with the message
            server.store(num, '+FLAGS', '\\Deleted')
        server.expunge()
        server.close()
        server.logout()
```
`mb` is just some object to store all the mail server info, the rest should be pretty clear.
You'll probably need to check the docs on `poplib` and `imaplib` to get specific parts of the message, but hopefully this is enough to get you going.
|
I know this question is pretty old now but just thought I'd add for future reference that you might want to give <http://cloudmailin.com> a go. We have quite a few django users using the system and it should be a little simpler than the proposed solution.
|
39,840,323
|
I'm using tensorflow 0.10 and I was benchmarking the examples found in the [official HowTo on reading data](https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#reading-data). This HowTo illustrates different methods to move data to tensorflow, using the same MNIST example.
I was surprised by the results and I was wondering if anyone has enough low-level understanding to explain what is happening.
In the HowTo there are basically 3 methods to read in data:
* `Feeding`: building the mini-batch in python and passing it with `sess.run(..., feed_dict={x: mini_batch})`
* `Reading from files`: use `tf` operations to open the files and create mini-batches. (Bypass handling data in python.)
* `Preloaded data`: load all the data in either a single `tf` variable or constant and use `tf` functions to break that up in mini-batches. The variable or constant is pinned to the cpu, not gpu.
The scripts I used to run my benchmarks are found within tensorflow:
* `Feeding`: [examples/tutorials/mnist/fully\_connected\_feed.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/fully_connected_feed.py)
* `Reading from files`: [examples/how\_tos/reading\_data/convert\_to\_records.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/convert_to_records.py) and [examples/how\_tos/reading\_data/fully\_connected\_reader.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py)
* `Preloaded data (constant)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py)
* `Preloaded data (variable)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded\_var.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py)
I ran those scripts unmodified, except for the last two because they crash --for version 0.10 at least-- unless I add an extra `sess.run(tf.initialize_local_variables())`.
Main Question
-------------
The time to execute 100 mini-batches of 100 examples running on a GTX1060:
* `Feeding`: `~0.001 s`
* `Reading from files`: `~0.010 s`
* `Preloaded data (constant)`: `~0.010 s`
* `Preloaded data (variable)`: `~0.010 s`
Those results are quite surprising to me. I would have expected `Feeding` to be the slowest since it does almost everything in python, while the other methods use lower-level tensorflow/C++ to carry similar operations. It is the complete opposite of what I expected. Does anyone understand what is going on?
Secondary question
------------------
I have access to another machine which has a Titan X and older NVidia drivers. The relative results were roughly in line with the above, except for `Preloaded data (constant)` which was catastrophically slow, taking many seconds for a single mini-batch.
Is this some known issue that performance can vary greatly with hardware/drivers?
|
2016/10/03
|
[
"https://Stackoverflow.com/questions/39840323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/765009/"
] |
**Update Oct 9** The slowness comes about because the computation runs too fast for Python to pre-empt the computation thread and schedule the pre-fetching threads. Computation in the main thread takes 2 ms, and apparently that's too little for a pre-fetching thread to grab the GIL. The pre-fetching threads have larger delays and hence can always be pre-empted by the computation thread. So the computation thread runs through all of the examples, and then spends most of its time blocked on the GIL as some pre-fetching thread gets scheduled and enqueues a single example. The solution is to increase the number of Python threads, increase the queue size to fit the entire dataset, start the queue runners, and then pause the main thread for a couple of seconds to give the queue runners time to pre-populate the queue.
**Old stuff**
That's surprisingly slow.
This looks like some kind of special case making the last 3 examples unnecessarily slow (most effort went into optimizing large models like ImageNet, so MNIST didn't get as much attention).
You can diagnose the problems by getting timelines, as described [here](https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-225754659)
Here [are](https://github.com/yaroslavvb/stuff/tree/master/input_benchmarks) 3 of those examples with timeline collection enabled.
Here's the timeline for `feed_dict` implementation
[](https://i.stack.imgur.com/HCrZP.png)
The important thing to notice is that matmul takes a good chunk of the time, so the reading overhead is not significant
Now here's the timeline for `reader` implementation
[](https://i.stack.imgur.com/7ppyp.png)
You can see that the operation is bottlenecked on QueueDequeueMany, which takes a whopping 45 ms.
If you zoom in, you'll see a bunch of tiny MEMCPY and Cast operations, which is a sign of some op being CPU only (`parse_single_example`), and the dequeue having to schedule multiple independent CPU->GPU transfers
For the `var` example below with GPU disabled, I don't see tiny little ops, but QueueDequeueMany still takes over 10ms. The timing seems to scale linearly with batch size, so there's some fundamental slowness there. Filed [#4740](https://github.com/tensorflow/tensorflow/issues/4740)
[](https://i.stack.imgur.com/3SDZ2.png)
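For reference, the timeline collection used above boils down to something like the following sketch (written against the TF 1.x-style API; module paths may differ slightly in 0.10, and `sess`/`train_op` are assumed to already exist):

```
import tensorflow as tf
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# Run one training step with tracing enabled
sess.run(train_op, options=run_options, run_metadata=run_metadata)

# Dump a Chrome-trace file that can be opened at chrome://tracing
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())
```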
|
Yaroslav nails the problem well. With small models you'll need to speed up the data import. One way to do this is with the Tensorflow function, [tf.TFRecordReader.read\_up\_to](https://www.tensorflow.org/api_docs/python/io_ops/readers#TFRecordReader.read_up_to), that reads multiple records in each `session.run()` call, thereby removing the excess overhead caused by multiple calls.
```
enqueue_many_size = SOME_ENQUEUE_MANY_SIZE
reader = tf.TFRecordReader(options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.ZLIB))
_, queue_batch = reader.read_up_to(filename_queue, enqueue_many_size)
batch_serialized_example = tf.train.shuffle_batch(
[queue_batch],
batch_size=batch_size,
num_threads=thread_number,
capacity=capacity,
min_after_dequeue=min_after_dequeue,
enqueue_many=True)
```
This was also addressed in this [SO question](https://stackoverflow.com/a/41768163/599139).
|
39,840,323
|
I'm using tensorflow 0.10 and I was benchmarking the examples found in the [official HowTo on reading data](https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#reading-data). This HowTo illustrates different methods to move data to tensorflow, using the same MNIST example.
I was surprised by the results and I was wondering if anyone has enough low-level understanding to explain what is happening.
In the HowTo there are basically 3 methods to read in data:
* `Feeding`: building the mini-batch in python and passing it with `sess.run(..., feed_dict={x: mini_batch})`
* `Reading from files`: use `tf` operations to open the files and create mini-batches. (Bypass handling data in python.)
* `Preloaded data`: load all the data in either a single `tf` variable or constant and use `tf` functions to break that up in mini-batches. The variable or constant is pinned to the cpu, not gpu.
The scripts I used to run my benchmarks are found within tensorflow:
* `Feeding`: [examples/tutorials/mnist/fully\_connected\_feed.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/fully_connected_feed.py)
* `Reading from files`: [examples/how\_tos/reading\_data/convert\_to\_records.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/convert_to_records.py) and [examples/how\_tos/reading\_data/fully\_connected\_reader.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py)
* `Preloaded data (constant)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py)
* `Preloaded data (variable)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded\_var.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py)
I ran those scripts unmodified, except for the last two because they crash --for version 0.10 at least-- unless I add an extra `sess.run(tf.initialize_local_variables())`.
Main Question
-------------
The time to execute 100 mini-batches of 100 examples running on a GTX1060:
* `Feeding`: `~0.001 s`
* `Reading from files`: `~0.010 s`
* `Preloaded data (constant)`: `~0.010 s`
* `Preloaded data (variable)`: `~0.010 s`
Those results are quite surprising to me. I would have expected `Feeding` to be the slowest since it does almost everything in python, while the other methods use lower-level tensorflow/C++ to carry similar operations. It is the complete opposite of what I expected. Does anyone understand what is going on?
Secondary question
------------------
I have access to another machine which has a Titan X and older NVidia drivers. The relative results were roughly in line with the above, except for `Preloaded data (constant)` which was catastrophically slow, taking many seconds for a single mini-batch.
Is this some known issue that performance can vary greatly with hardware/drivers?
|
2016/10/03
|
[
"https://Stackoverflow.com/questions/39840323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/765009/"
] |
**Update Oct 9** The slowness comes about because the computation runs too fast for Python to pre-empt the computation thread and schedule the pre-fetching threads. Computation in the main thread takes 2 ms, and apparently that's too little for a pre-fetching thread to grab the GIL. The pre-fetching threads have larger delays and hence can always be pre-empted by the computation thread. So the computation thread runs through all of the examples, and then spends most of its time blocked on the GIL as some pre-fetching thread gets scheduled and enqueues a single example. The solution is to increase the number of Python threads, increase the queue size to fit the entire dataset, start the queue runners, and then pause the main thread for a couple of seconds to give the queue runners time to pre-populate the queue.
**Old stuff**
That's surprisingly slow.
This looks like some kind of special case making the last 3 examples unnecessarily slow (most effort went into optimizing large models like ImageNet, so MNIST didn't get as much attention).
You can diagnose the problems by getting timelines, as described [here](https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-225754659)
Here [are](https://github.com/yaroslavvb/stuff/tree/master/input_benchmarks) 3 of those examples with timeline collection enabled.
Here's the timeline for `feed_dict` implementation
[](https://i.stack.imgur.com/HCrZP.png)
The important thing to notice is that matmul takes a good chunk of the time, so the reading overhead is not significant
Now here's the timeline for `reader` implementation
[](https://i.stack.imgur.com/7ppyp.png)
You can see that the operation is bottlenecked on QueueDequeueMany, which takes a whopping 45 ms.
If you zoom in, you'll see a bunch of tiny MEMCPY and Cast operations, which is a sign of some op being CPU only (`parse_single_example`), and the dequeue having to schedule multiple independent CPU->GPU transfers
For the `var` example below with GPU disabled, I don't see tiny little ops, but QueueDequeueMany still takes over 10ms. The timing seems to scale linearly with batch size, so there's some fundamental slowness there. Filed [#4740](https://github.com/tensorflow/tensorflow/issues/4740)
[](https://i.stack.imgur.com/3SDZ2.png)
|
The main question is why **the example with the preloaded data (constant)**
[examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py) is significantly slower than the other data-loading examples **when using the GPU**.
I had the same problem, that `fully_connected_preloaded.py` is unexpectedly slow on my Titan X. The problem was that the whole dataset was pre-loaded on CPU, not GPU.
First, let me share my initial attempts. I applied the following performance tips by Yaroslav (a rough sketch of these settings follows the list).
* set `capacity=55000` for `tf.train.slice_input_producer`.(55000 is the size of MNIST training set in my case)
* set `num_threads=5` for `tf.train.batch`.
* set `capacity=500` for `tf.train.batch`.
* put `time.sleep(10)` after `tf.train.start_queue_runners`.
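For concreteness, a rough sketch of what those four tips look like against the TF 0.10-era queue API; `images` and `labels` stand in for the preloaded tensors and are assumptions on my part. Note that, as reported just below, these settings alone did not fix the slowdown.

```
import time
import tensorflow as tf

# Produce slices from the preloaded tensors with a large capacity
image, label = tf.train.slice_input_producer(
    [images, labels], capacity=55000)

# Batch with several threads and a generous queue
image_batch, label_batch = tf.train.batch(
    [image, label], batch_size=100, num_threads=5, capacity=500)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

# Give the queue runners a head start before training
time.sleep(10)
```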
However, the average speed per batch stayed the same. I tried `timeline` visualization for profiling and still got `QueueDequeueManyV2` dominating.
The problem was [line 65](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py#L65) of `fully_connected_preloaded.py`. The following code loads the entire dataset onto the CPU, still leaving a bottleneck for CPU-GPU data transmission.
```
with tf.device('/cpu:0'):
    input_images = tf.constant(data_sets.train.images)
    input_labels = tf.constant(data_sets.train.labels)
```
Hence, I switched the device allocation.
```
with tf.device('/gpu:0')
```
Then I got a ~100x speed-up per batch.
Note:
1. This was possible because the Titan X has enough memory to preload the entire dataset.
2. In the original code (`fully_connected_preloaded.py`), the comment on [line 64](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py#L64) says "rest of pipeline is CPU-only". I am not sure what this comment intended.
|
39,840,323
|
I'm using tensorflow 0.10 and I was benchmarking the examples found in the [official HowTo on reading data](https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#reading-data). This HowTo illustrates different methods to move data to tensorflow, using the same MNIST example.
I was surprised by the results and I was wondering if anyone has enough low-level understanding to explain what is happening.
In the HowTo there are basically 3 methods to read in data:
* `Feeding`: building the mini-batch in python and passing it with `sess.run(..., feed_dict={x: mini_batch})`
* `Reading from files`: use `tf` operations to open the files and create mini-batches. (Bypass handling data in python.)
* `Preloaded data`: load all the data in either a single `tf` variable or constant and use `tf` functions to break that up in mini-batches. The variable or constant is pinned to the cpu, not gpu.
The scripts I used to run my benchmarks are found within tensorflow:
* `Feeding`: [examples/tutorials/mnist/fully\_connected\_feed.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/fully_connected_feed.py)
* `Reading from files`: [examples/how\_tos/reading\_data/convert\_to\_records.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/convert_to_records.py) and [examples/how\_tos/reading\_data/fully\_connected\_reader.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py)
* `Preloaded data (constant)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py)
* `Preloaded data (variable)`: [examples/how\_tos/reading\_data/fully\_connected\_preloaded\_var.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded_var.py)
I ran those scripts unmodified, except for the last two because they crash --for version 0.10 at least-- unless I add an extra `sess.run(tf.initialize_local_variables())`.
Main Question
-------------
The time to execute 100 mini-batches of 100 examples running on a GTX1060:
* `Feeding`: `~0.001 s`
* `Reading from files`: `~0.010 s`
* `Preloaded data (constant)`: `~0.010 s`
* `Preloaded data (variable)`: `~0.010 s`
Those results are quite surprising to me. I would have expected `Feeding` to be the slowest since it does almost everything in python, while the other methods use lower-level tensorflow/C++ to carry similar operations. It is the complete opposite of what I expected. Does anyone understand what is going on?
Secondary question
------------------
I have access to another machine which has a Titan X and older NVidia drivers. The relative results were roughly in line with the above, except for `Preloaded data (constant)` which was catastrophically slow, taking many seconds for a single mini-batch.
Is this some known issue that performance can vary greatly with hardware/drivers?
|
2016/10/03
|
[
"https://Stackoverflow.com/questions/39840323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/765009/"
] |
Yaroslav nails the problem well. With small models you'll need to speed up the data import. One way to do this is with the Tensorflow function, [tf.TFRecordReader.read\_up\_to](https://www.tensorflow.org/api_docs/python/io_ops/readers#TFRecordReader.read_up_to), that reads multiple records in each `session.run()` call, thereby removing the excess overhead caused by multiple calls.
```
enqueue_many_size = SOME_ENQUEUE_MANY_SIZE
reader = tf.TFRecordReader(options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.ZLIB))
_, queue_batch = reader.read_up_to(filename_queue, enqueue_many_size)
batch_serialized_example = tf.train.shuffle_batch(
[queue_batch],
batch_size=batch_size,
num_threads=thread_number,
capacity=capacity,
min_after_dequeue=min_after_dequeue,
enqueue_many=True)
```
This was also addressed in this [SO question](https://stackoverflow.com/a/41768163/599139).
|
The main question is why **the example with the preloaded data (constant)**
[examples/how\_tos/reading\_data/fully\_connected\_preloaded.py](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py) is significantly slower than the other data-loading examples **when using the GPU**.
I had the same problem, that `fully_connected_preloaded.py` is unexpectedly slow on my Titan X. The problem was that the whole dataset was pre-loaded on CPU, not GPU.
First, let me share my initial attempts. I applied the following performance tips by Yaroslav.
* set `capacity=55000` for `tf.train.slice_input_producer`.(55000 is the size of MNIST training set in my case)
* set `num_threads=5` for `tf.train.batch`.
* set `capacity=500` for `tf.train.batch`.
* put `time.sleep(10)` after `tf.train.start_queue_runners`.
However, the average speed per batch stayed the same. I tried `timeline` visualization for profiling and still got `QueueDequeueManyV2` dominating.
The problem was [line 65](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py#L65) of `fully_connected_preloaded.py`. The following code loads the entire dataset onto the CPU, still leaving a bottleneck for CPU-GPU data transmission.
```
with tf.device('/cpu:0'):
    input_images = tf.constant(data_sets.train.images)
    input_labels = tf.constant(data_sets.train.labels)
```
Hence, I switched the device allocation.
```
with tf.device('/gpu:0')
```
Then I got a ~100x speed-up per batch.
Note:
1. This was possible because the Titan X has enough memory to preload the entire dataset.
2. In the original code (`fully_connected_preloaded.py`), the comment on [line 64](https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/how_tos/reading_data/fully_connected_preloaded.py#L64) says "rest of pipeline is CPU-only". I am not sure what this comment intended.
|
52,407,452
|
When I run the following piece of code, the print statements in the method, which I have dynamically assigned to the class "Test", only return "my\_unique\_method\_name".
**How do I print the method name I have given it?** (Which is "wizardry" in this case—see "Expected Output" below.)
```
#!/usr/bin/python3

import sys
import inspect

class Test:
    pass

def my_unique_method_name(self):
    print(inspect.stack()[0][3])
    print(sys._getframe().f_code.co_name)
    print(inspect.currentframe().f_code.co_name)

Test.wizardry = my_unique_method_name
t = Test()
t.wizardry()
```
**Current Output**
```none
my_unique_method_name
my_unique_method_name
my_unique_method_name
```
**Expected Output**
```none
wizardry
```
|
2018/09/19
|
[
"https://Stackoverflow.com/questions/52407452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5039582/"
] |
As you have already made the step to use template matching with the `template match="Class/Student"` I would suggest to stick with that approach and simply write two templates, one for the `Class` elements, the other for the `Student` elements
```
<xsl:template match="Class">
<ul>
<xsl:apply-templates select="Student"/>
</ul>
</xsl:template>
<xsl:template match="Student">
<li>
<xsl:apply-templates/>
</li>
</xsl:template>
```
For more complex cases this results in cleaner and more modular code.
|
You want one `ul` per `Class`, not per `Student`, so change
```
<xsl:template match="Class/Student">
```
to
```
<xsl:template match="Class">
```
Then change
```
<xsl:for-each select="../Student">
```
to
```
<xsl:for-each select="Student">
```
to get one `li` per `Student` child element of the `Class` current node.
|
39,745,460
|
I am working on some piece of python code that calls various linux tools (like ssh) for automation purposes. Right now I am looking into "return code" handling.
Thus: I am looking for a simple way to run *some* command that gives me a specific non-zero return code; something like
```
echo "this is a testcommand, that should return with rc=5"
```
for example. But of course, the above comes back with rc=0.
I know that I can call `false`, but this will always return with rc=1. I am looking for something that gives me an rc that I can control.
Edit: first answers suggest to *exit*; but the problem with that: *exit* is a bash function. So, when I try to run that from within a python script, I get "No such file or directory: exit".
So, I am actually looking for some "binary" tool that gives me that (obviously one can write some simple script to get that; I am just looking if there is something similar to *false* that is already shipped with any Linux/Unix).
|
2016/09/28
|
[
"https://Stackoverflow.com/questions/39745460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1531124/"
] |
Run `exit` in a subshell.
```
$ (exit 5) ; echo $?
5
```
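Since the question ultimately wants to call this from Python, a minimal sketch: hand the `exit` builtin to a shell explicitly and read the return code back.

```
import subprocess

# 'exit' is a shell builtin, so run it through a shell explicitly
rc = subprocess.call(["bash", "-c", "exit 5"])
print(rc)  # -> 5

# Equivalent one-liner letting subprocess spawn /bin/sh itself
rc = subprocess.call("exit 5", shell=True)
print(rc)  # -> 5
```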
|
This is not exactly what you are asking, but a custom `rc` can be achieved through the `exit` command.
```
echo "this is a test command, that should return with " ;exit 5
echo $?
5
```
|
39,745,460
|
I am working on some piece of python code that calls various linux tools (like ssh) for automation purposes. Right now I am looking into "return code" handling.
Thus: I am looking for a simple way to run *some* command that gives me a specific non-zero return code; something like
```
echo "this is a testcommand, that should return with rc=5"
```
for example. But of course, the above comes back with rc=0.
I know that I can call `false`, but this will always return with rc=1. I am looking for something that gives me an rc that I can control.
Edit: first answers suggest to *exit*; but the problem with that: *exit* is a bash function. So, when I try to run that from within a python script, I get "No such file or directory: exit".
So, I am actually looking for some "binary" tool that gives me that (obviously one can write some simple script to get that; I am just looking if there is something similar to *false* that is already shipped with any Linux/Unix).
|
2016/09/28
|
[
"https://Stackoverflow.com/questions/39745460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1531124/"
] |
Run `exit` in a subshell.
```
$ (exit 5) ; echo $?
5
```
|
I have this function defined in `.bashrc`:
```
return_errorcode ()
{
    return $1
}
```
So, I can directly use something like
```
$ return_errorcode 5
$ echo $?
5
```
Compared to `(exit 5); echo $?` option, this mechanism saves you a subshell.
|
4,377,260
|
As an exercise I built a little script that queries the Google Suggest JSON API. The code is quite simple:
```
query = 'a'
url = "http://clients1.google.co.jp/complete/search?hl=ja&q=%s&json=t" %query
response = urllib.urlopen(url)
result = json.load(response)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x83 in position 0: invalid start byte
```
If I try to `read()` the response object, this is what I've got:
```
'["a",["amazon","ana","au","apple","adobe","alc","\x83A\x83}\x83]\x83\x93","\x83A\x83\x81\x83u\x83\x8d","\x83A\x83X\x83N\x83\x8b","\x83A\x83\x8b\x83N"],["","","","","","","","","",""]]'
```
So it seems that the error is raised when Python tries to decode the string. This only happens with google.co.jp and the Japanese language. I tried the same code with different country/languages and I **do not** get the same issue: when I try to deserialize the object everything works OK.
* I checked the response headers and they always specify utf-8 as the response encoding.
* I checked the JSON string with an online parser (http://json.parser.online.fr/) and again all seems OK
Any ideas to solve this problem? What makes the JSON `load()` function choke?
Thanks in advance.
|
2010/12/07
|
[
"https://Stackoverflow.com/questions/4377260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246242/"
] |
The response header (`print response.headers`) contains the following information:
```
Content-Type: text/javascript; charset=Shift_JIS
```
Note the charset.
If you specify this encoding in `json.load` it will work:
```
result = json.load(response, encoding='shift_jis')
```
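If your `json` version does not accept an `encoding` argument (it was removed in Python 3), a sketch of the equivalent approach is to decode the bytes yourself before parsing, reusing `response` from the question:

```
raw = response.read()                        # bytes encoded as Shift_JIS
result = json.loads(raw.decode('shift_jis'))
```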
|
Regardless of what the spec says, the string "\x83A\x83}\x83]\x83\x93" is not UTF-8.
At a guess, it is one of [ "cp932", "shift\_jis", "shift\_jis\_2004", "shift\_jisx0213" ]; try decoding as one of these.
|
32,566,625
|
Say I generated a `dots = psychopy.visual.DotStim`. Is it possible to change the number of dots later? `dots.nDots = 5` leads to an error on the next `dots.draw()` because the underlying matrices don't match:
```
Traceback (most recent call last):
File "/home/jonas/Documents/projects/work pggcs/experiment/dots.py", line 32, in <module>
dots_right.draw()
File "/usr/lib/python2.7/dist-packages/psychopy/visual/dot.py", line 279, in draw
self._update_dotsXY()
File "/usr/lib/python2.7/dist-packages/psychopy/visual/dot.py", line 362, in _update_dotsXY
self._verticesBase[:,0] += self.speed*numpy.reshape(numpy.cos(self._dotsDir),(self.nDots,))
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 218, in reshape
return reshape(newshape, order=order)
ValueError: total size of new array must be unchanged
```
The same is true of `psychopy.visual.ElementArrayStim` for which setting `stim.nElements = 5` similarly results in an error on the next draw.
A solution is of course to instantiate a whole new `DotStim` or `ElementArrayStim` every time the number of dots/elements should change but this seems too heavy.
|
2015/09/14
|
[
"https://Stackoverflow.com/questions/32566625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1297830/"
] |
It can be fixed for the `DotStim`:
```
dots.nDots = 5
dots._dotsDir = [0]*dots.nDots
dots._verticesBase = dots._newDotsXY(dots.nDots)
```
This sets the movement direction of all dots to 0, but you can change that value to whatever you like or specify it per dot. This is a hack which will likely break if you modify other aspects of the DotStim.
I haven't figured out a solution for `ElementArrayStim`.
|
Yes, it would be nice to be able to do this but I haven't gotten around to it. The code that has to be run when nDots/nElements changes is pretty close to starting from scratch with a new stimulus `__init__`, so adding this in 'correctly' probably means some refactoring (move a lot of the `__init__` code into setNDots() and then call it from `__init__`).
There's an additional potential issue, which is that the elements might have changed (e.g. the user has set the orientations) and then the user updates the number of elements. Which ones do we remove? And what orientation do we give those that we add? (This is less of an issue for DotStim though.)
Basically, the issue is a little thorny and hasn't been a priority for me.
|
48,279,419
|
How does the quality of my code get affected if I don't use the `__init__` method in Python? A simple example would be appreciated.
|
2018/01/16
|
[
"https://Stackoverflow.com/questions/48279419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7735772/"
] |
*Short answer*; nothing happens.
*Long answer*; if you have a class `B`, which inherits from a class `A`, and if `B` has no `__init__` method defined, then the parent's (in this case, `A`) `__init__` is invoked.
```
In [137]: class A:
     ...:     def __init__(self):
     ...:         print("In A")
     ...:
In [138]: class B(A): pass
In [139]: B()
In A
Out[139]: <__main__.B at 0x1230a5ac8>
```
If `A` has no predecessor, then the almighty superclass `object`'s `__init__` is invoked, which does nothing.
You can view class hierarchy by querying the dunder `__mro__`.
```
In [140]: B.__mro__
Out[140]: (__main__.B, __main__.A, object)
```
Also see [Why do we use \_\_init\_\_ in Python classes?](https://stackoverflow.com/questions/8609153/why-do-we-use-init-in-python-classes) for some context as to *when and where* its usage is appropriate.
|
It's not a matter of quality here. In object-oriented design, which Python supports, `__init__` is the way to supply data when an object is first created. In OOP, this is called a `constructor`. In other words, **a constructor is a method which prepares a valid object**.
There are design patterns that large projects are built on which rely on the constructor feature provided by Python; without it they would not function.
**For example** (a minimal sketch follows the list):
* You want to keep track of every object that is created for a class; now you need a method which is executed every time an object is created, hence the **constructor**.
* Another useful example for a constructor: let's say you want to create a `customer` object for a bank. A customer of a bank must have an `account number`, so basically you have to set a rule for a valid customer object for the bank, hence the constructor.
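A minimal sketch of both ideas (the class and field names are made up for illustration):

```
class Customer(object):
    instances_created = 0  # class-level counter, bumped by the constructor

    def __init__(self, account_number):
        # Rule for a valid customer: an account number is required
        if not account_number:
            raise ValueError("a customer must have an account number")
        self.account_number = account_number
        Customer.instances_created += 1

c = Customer("NL91-0001")
print(Customer.instances_created)  # -> 1
```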
|
11,577,619
|
I got two files each containing a column with "time" and one with "id" like this:
File 1:
```
time id
11.24 1
11.26 2
11.27 3
11.29 5
11.30 6
```
File 2:
```
time id
11.25 1
11.26 3
11.27 4
11.31 6
11.32 7
11.33 8
```
I'm trying to write a Python script which can subtract the times of the rows with matching ids from each other. The files are of different lengths.
I tried using `set(id's of file 1) & set(id's of file 2)` to get the matching ids, but now I'm stuck. Any help will be much appreciated, thank you.
|
2012/07/20
|
[
"https://Stackoverflow.com/questions/11577619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540477/"
] |
Python sets do not support ordering of their elements. I would store the data as a dictionary keyed by id:
```
file1 = {1:'11:24', 2:'11:26', ... etc}
file2 = {1:'11:25', 3:'11:26', ... etc}
```
Then loop over the intersection of the keys (or the union, based on your needs) to do the subtraction (time-based or purely numeric).
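A minimal sketch of that idea, using literal dictionaries as stand-ins for the parsed files:

```
file1 = {1: 11.24, 2: 11.26, 3: 11.27, 5: 11.29, 6: 11.30}
file2 = {1: 11.25, 3: 11.26, 4: 11.27, 6: 11.31, 7: 11.32, 8: 11.33}

# Only ids present in both files
common = set(file1) & set(file2)
diffs = dict((i, file1[i] - file2[i]) for i in common)
print(diffs)  # roughly {1: -0.01, 3: 0.01, 6: -0.01}
```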
|
This is a bit old school. Look at using a default dict from the `collections` module for a more elegant approach.
This will work for any number of files, I've named mine `f1`, `f2` etc. The general idea is to process each file and build up a list of time values for each id. After file processing, iterate over the dictionary subtracting each value as you go (via `reduce` on the values list).
```
from operator import sub

d = {}
for fname in ('f1','f2'):
    for l in open(fname):
        t, i = l.split()
        d[i] = d.get(i, []) + [float(t)]

results = {}
for k,v in d.items():
    results[k] = reduce(sub, v)

print results
{'1': -0.009999999999999787, '3': 0.009999999999999787, '2': 11.26, '5': 11.29, '4': 11.27, '7': 11.32, '6': -0.009999999999999787, '8': 11.33}
```
**Updated**
If you want to include only those ids with more than one value:
```
results = {}
for k,v in d.items():
    if len(v) > 1:
        results[k] = reduce(sub, v)
```
|
11,577,619
|
I got two files each containing a column with "time" and one with "id" like this:
File 1:
```
time id
11.24 1
11.26 2
11.27 3
11.29 5
11.30 6
```
File 2:
```
time id
11.25 1
11.26 3
11.27 4
11.31 6
11.32 7
11.33 8
```
I'm trying to write a Python script which can subtract the times of the rows with matching ids from each other. The files are of different lengths.
I tried using `set(id's of file 1) & set(id's of file 2)` to get the matching ids, but now I'm stuck. Any help will be much appreciated, thank you.
|
2012/07/20
|
[
"https://Stackoverflow.com/questions/11577619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540477/"
] |
Python sets do not support ordering of their elements. I would store the data as a dictionary keyed by id:
```
file1 = {1:'11:24', 2:'11:26', ... etc}
file2 = {1:'11:25', 3:'11:26', ... etc}
```
Then loop over the intersection of the keys (or the union, based on your needs) to do the subtraction (time-based or purely numeric).
|
You can use this as a base (instead of treating '11.24' as a float, I guess you want to adapt for hours/minutes or minutes/seconds)... you can effectively union and subtract matching keys using a `defaultdict`.
As long as you can get your data into a format like this:
```
f1 = [
[11.24, 1],
[11.26, 2],
[11.27, 3],
[11.29, 5],
[11.30, 6]
]
f2 = [
[11.25, 1],
[11.26, 3],
[11.27, 4],
[11.31, 6],
[11.32, 7],
[11.33, 8]
]
```
Then:
```
from collections import defaultdict
from itertools import chain

dd = defaultdict(float)
for k, v in chain(
        ((b, a) for a, b in f1),
        ((b, -a) for a, b in f2)):  # negate a
    dd[k] += v
```
Results in:
```
{1: -0.009999999999999787,
2: 11.26,
3: 0.009999999999999787,
4: -11.27,
5: 11.29,
6: -0.009999999999999787,
7: -11.32,
8: -11.33}
```
**For matches only**
```
matches = dict( (k, v) for v, k in f1 )
d2 = dict( (k, v) for v, k in f2 )

for k, v in matches.items():
    try:
        matches[k] = v - d2[k]
    except KeyError as e:
        del matches[k]

print matches
# {1: -0.009999999999999787, 3: 0.009999999999999787, 6: -0.009999999999999787}
```
|
11,577,619
|
I got two files each containing a column with "time" and one with "id" like this:
File 1:
```
time id
11.24 1
11.26 2
11.27 3
11.29 5
11.30 6
```
File 2:
```
time id
11.25 1
11.26 3
11.27 4
11.31 6
11.32 7
11.33 8
```
I'm trying to write a Python script which can subtract the times of the rows with matching ids from each other. The files are of different lengths.
I tried using `set(id's of file 1) & set(id's of file 2)` to get the matching ids, but now I'm stuck. Any help will be much appreciated, thank you.
|
2012/07/20
|
[
"https://Stackoverflow.com/questions/11577619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540477/"
] |
List comprehensions can do the trick very easily:
```
#read these from file if you want to, included in this form for brevity
F1 = {1: 11.24, 2: 11.26, 3:11.27, 5:11.29, 6:11.30}
F2 = {1:11.25, 3:11.26, 4:11.27, 6:11.31, 7:11.32, 8:11.33}
K1 = set(F1.keys())
K2 = set(F2.keys())
result = dict([ (k, F1[k] - F2[k]) for k in (K1 & K2)])
print result
```
This will output:
```
{1: -0.009999999999999787, 3: 0.009999999999999787, 6: -0.009999999999999787}
```
Edit: As mhawke points out, the last line could read:
```
result = {k: F1[k] - F2[k] for k in (K1 & K2)}
```
I had forgotten all about dict comprehensions.
|
This is a bit old school. Look at using a default dict from the `collections` module for a more elegant approach.
This will work for any number of files, I've named mine `f1`, `f2` etc. The general idea is to process each file and build up a list of time values for each id. After file processing, iterate over the dictionary subtracting each value as you go (via `reduce` on the values list).
```
from operator import sub

d = {}
for fname in ('f1','f2'):
    for l in open(fname):
        t, i = l.split()
        d[i] = d.get(i, []) + [float(t)]

results = {}
for k,v in d.items():
    results[k] = reduce(sub, v)

print results
{'1': -0.009999999999999787, '3': 0.009999999999999787, '2': 11.26, '5': 11.29, '4': 11.27, '7': 11.32, '6': -0.009999999999999787, '8': 11.33}
```
**Updated**
If you want to include only those ids with more than one value:
```
results = {}
for k,v in d.items():
    if len(v) > 1:
        results[k] = reduce(sub, v)
```
|
11,577,619
|
I got two files each containing a column with "time" and one with "id" like this:
File 1:
```
time id
11.24 1
11.26 2
11.27 3
11.29 5
11.30 6
```
File 2:
```
time id
11.25 1
11.26 3
11.27 4
11.31 6
11.32 7
11.33 8
```
I'm trying to write a Python script which can subtract the times of the rows with matching ids from each other. The files are of different lengths.
I tried using `set(id's of file 1) & set(id's of file 2)` to get the matching ids, but now I'm stuck. Any help will be much appreciated, thank you.
|
2012/07/20
|
[
"https://Stackoverflow.com/questions/11577619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1540477/"
] |
List comprehensions can do the trick very easily:
```
#read these from file if you want to, included in this form for brevity
F1 = {1: 11.24, 2: 11.26, 3:11.27, 5:11.29, 6:11.30}
F2 = {1:11.25, 3:11.26, 4:11.27, 6:11.31, 7:11.32, 8:11.33}
K1 = set(F1.keys())
K2 = set(F2.keys())
result = dict([ (k, F1[k] - F2[k]) for k in (K1 & K2)])
print result
```
This will output:
```
{1: -0.009999999999999787, 3: 0.009999999999999787, 6: -0.009999999999999787}
```
Edit: As mhawke points out, the last line could read:
```
result = {k: F1[k] - F2[k] for k in (K1 & K2)}
```
I had forgotten all about dict comprehensions.
|
You can use this as a base (instead of treating '11.24' as a float, I guess you want to adapt for hours/minutes or minutes/seconds)... you can effectively union and subtract matching keys using a `defaultdict`.
As long as you can get your data into a format like this:
```
f1 = [
[11.24, 1],
[11.26, 2],
[11.27, 3],
[11.29, 5],
[11.30, 6]
]
f2 = [
[11.25, 1],
[11.26, 3],
[11.27, 4],
[11.31, 6],
[11.32, 7],
[11.33, 8]
]
```
Then:
```
from collections import defaultdict
from itertools import chain

dd = defaultdict(float)
for k, v in chain(
        ((b, a) for a, b in f1),
        ((b, -a) for a, b in f2)):  # negate a
    dd[k] += v
```
Results in:
```
{1: -0.009999999999999787,
2: 11.26,
3: 0.009999999999999787,
4: -11.27,
5: 11.29,
6: -0.009999999999999787,
7: -11.32,
8: -11.33}
```
**For matches only**
```
matches = dict( (k, v) for v, k in f1 )
d2 = dict( (k, v) for v, k in f2 )

for k, v in matches.items():
    try:
        matches[k] = v - d2[k]
    except KeyError as e:
        del matches[k]

print matches
# {1: -0.009999999999999787, 3: 0.009999999999999787, 6: -0.009999999999999787}
```
|
23,221,577
|
I'm following [this tutorial on Embedding Python on C](https://docs.python.org/2.7/extending/embedding.html), but their [Pure Embedding](https://docs.python.org/2.7/extending/embedding.html#pure-embedding) example is not working for me.
I have the following files in the same folder (taken from the example):
**call.c**
```
#include <Python.h>
int
main(int argc, char *argv[])
{
PyObject *pName, *pModule, *pDict, *pFunc;
PyObject *pArgs, *pValue;
int i;
if (argc < 3) {
fprintf(stderr,"Usage: call pythonfile funcname [args]\n");
return 1;
}
Py_Initialize();
pName = PyString_FromString(argv[1]);
/* Error checking of pName left out */
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule != NULL) {
pFunc = PyObject_GetAttrString(pModule, argv[2]);
/* pFunc is a new reference */
if (pFunc && PyCallable_Check(pFunc)) {
pArgs = PyTuple_New(argc - 3);
for (i = 0; i < argc - 3; ++i) {
pValue = PyInt_FromLong(atoi(argv[i + 3]));
if (!pValue) {
Py_DECREF(pArgs);
Py_DECREF(pModule);
fprintf(stderr, "Cannot convert argument\n");
return 1;
}
/* pValue reference stolen here: */
PyTuple_SetItem(pArgs, i, pValue);
}
pValue = PyObject_CallObject(pFunc, pArgs);
Py_DECREF(pArgs);
if (pValue != NULL) {
printf("Result of call: %ld\n", PyInt_AsLong(pValue));
Py_DECREF(pValue);
}
else {
Py_DECREF(pFunc);
Py_DECREF(pModule);
PyErr_Print();
fprintf(stderr,"Call failed\n");
return 1;
}
}
else {
if (PyErr_Occurred())
PyErr_Print();
fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]);
}
Py_XDECREF(pFunc);
Py_DECREF(pModule);
}
else {
PyErr_Print();
fprintf(stderr, "Failed to load \"%s\"\n", argv[1]);
return 1;
}
Py_Finalize();
return 0;
}
```
**multiply.py**
```
def multiply(a,b):
    print "Will compute", a, "times", b
    c = 0
    for i in range(0, a):
        c = c + b
    return c
```
And I compile and run like this:
```
$ gcc $(python-config --cflags) call.c $(python-config --ldflags) -o call
call.c: In function ‘main’:
call.c:6:33: warning: unused variable ‘pDict’ [-Wunused-variable]
PyObject *pName, *pModule, *pDict, *pFunc;
^
# This seems OK because it's true that pDict is not used
$ ./call multiply multiply 3 2
ImportError: No module named multiply
Failed to load "multiply"
```
Why can't it load the `multiply` module?
The example doesn't show filenames or paths. Could it be a path problem?
Thanks a lot.
|
2014/04/22
|
[
"https://Stackoverflow.com/questions/23221577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1970845/"
] |
Try setting `PYTHONPATH`:
```
export PYTHONPATH=`pwd`
```
|
Really, the example should have `PySys_SetPath(".");` after initialization.
|
71,363,142
|
How do you call a PHP function from HTML?
I have this:
```
<input class="myButton" type="button" name="woonkameruit" value="UIT" onclick="kameroff('woonkamer');<?php woonkameruit()?>">
```
and
```
function woonkameruit() {
exec("sudo pkill python");
exec("sudo python3/home/pi/Documents/Programmas/WOONKAMERUit.py");
}
```
can you help?
|
2022/03/05
|
[
"https://Stackoverflow.com/questions/71363142",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17288855/"
] |
You have to send the function name via JavaScript and create some controller-like logic. It is similar to what Laravel Livewire does. For example:
frontend.php:
```
<!-- Frontend Logic -->
<input id="myButton" class="myButton" type="button" name="woonkameruit" value="UIT">
<!-- Put Before </body> -->
<script type="text/javascript">
function callBackend(f) {
fetch(`https://myapp.com/backend.php?f=${f}`)
.then(response => response.json())
.then(data => {
// Do what you have to do with response
console.log(data)
if(data.success) {
// Function Run Successfully
} else {
// Function Error
}
})
// Other Errors
.catch(err => console.error(err));
}
const myButton = document.getElementById('myButton')
myButton.onclick = (e) => {
e.preventDefault()
callBackend('woonkameruit')
}
</script>
```
backend.php:
```
<?php
// Backend logic
function send_json_to_client($status, $data) {
$cors = "https://myapp.com";
header("Access-Control-Allow-Origin: $cors");
header('Content-Type: application/json; charset=utf-8');
http_response_code($status);
echo json_encode($data);
exit(1);
}
function woonkameruit() {
$first_command = exec("sudo pkill python");
$second_command = exec("sudo python3/home/pi/Documents/Programmas/WOONKAMERUit.py");
$success = false;
$message = 'Something Went Wrong';
$status = 500;
// If everything went good. Put here your success logic
if($first_command !== false && $second_command !== false) {
$success = true;
$message = 'Everything Is Good';
$status = 200;
}
return [
'status' => $status,
'response' => [
'success' => $success,
'message' => $message,
],
];
}
$authorizedFunctions = [
'woonkameruit'
];
if(isset($_GET['f'])) {
$fn = filter_input(INPUT_GET, 'f', FILTER_SANITIZE_STRING);
if( in_array( $fn, $authorizedFunctions ) ) {
$response = $fn();
send_json_to_client($response['status'], $response['response']);
} else {
send_json_to_client(403, [
'success' => false,
'message' => 'Unauthorized'
]);
}
} else {
send_json_to_client(404, [
'success' => false,
'message' => 'Not Found'
]);
}
```
|
You cannot directly call PHP functions from HTML, as PHP runs in the backend.
However, you can send a request to the backend with JS or HTML and have the PHP handle it so that it executes a function.
There are multiple ways to do this:
HTML: a form
JS: Ajax
If you do not know how to implement this, I suggest going to websites such as w3schools or just looking on YouTube.
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable), but I didn't see anything there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods), to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
```
>>> # Printable
>>> s = 'test'
>>> len(s)+2 == len(repr(s))
True
>>> # Unprintable
>>> s = 'test\x00'
>>> len(s)+2 == len(repr(s))
False
```
|
```
import string

ctrlchar = "\n\r| "

# ------------------------------------------------------------------------
# This will let you control what you deem 'printable'
# Clean enough to display any binary
def isprint(chh):
    if ord(chh) > 127:
        return False
    if ord(chh) < 32:
        return False
    if chh in ctrlchar:
        return False
    if chh in string.printable:
        return True
    return False
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable), but I didn't see anything there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods), to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
In Python 3, strings have an `isprintable()` method:
```
>>> 'a, '.isprintable()
True
```
For Python 2.7, see Dave Webb's answer.
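Note that this matches the "control characters are not printable" requirement in the question, unlike `string.printable`, which includes whitespace control characters:

```
import string

print('\n' in string.printable)      # True  - string.printable includes \n
print('\n'.isprintable())            # False - control characters rejected
print('Hello World!'.isprintable())  # True
```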
|
```
# Here is the full routine to display an arbitrary binary string
# Python 2
import string

ctrlchar = "\n\r| "

# ------------------------------------------------------------------------
def isprint(chh):
    if ord(chh) > 127:
        return False
    if ord(chh) < 32:
        return False
    if chh in ctrlchar:
        return False
    if chh in string.printable:
        return True
    return False

# ------------------------------------------------------------------------
# Return a hex dump formatted string
def hexdump(strx, llen = 16):
    lenx = len(strx)
    outx = ""
    for aa in range(lenx/16):
        outx += " "
        for bb in range(16):
            outx += "%02x " % ord(strx[aa * 16 + bb])
        outx += " | "
        for cc in range(16):
            chh = strx[aa * 16 + cc]
            if isprint(chh):
                outx += "%c" % chh
            else:
                outx += "."
        outx += " | \n"
    # Print remainder on last line
    remn = lenx % 16 ; divi = lenx / 16
    if remn:
        outx += " "
        for dd in range(remn):
            outx += "%02x " % ord(strx[divi * 16 + dd])
        outx += " " * ((16 - remn) * 3)
        outx += " | "
        for cc in range(remn):
            chh = strx[divi * 16 + cc]
            if isprint(chh):
                outx += "%c" % chh
            else:
                outx += "."
        outx += " " * ((16 - remn))
        outx += " | \n"
    return(outx)
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
As you've said the [`string` module has `printable`](http://docs.python.org/library/string.html#string.printable) so it's just a case of checking if all the characters in your string are in `printable`:
```
>>> hello = 'Hello World!'
>>> bell = chr(7)
>>> import string
>>> all(c in string.printable for c in hello)
True
>>> all(c in string.printable for c in bell)
False
```
You could convert both strings to sets - so the set would contain each character in the string once - and check if the set created by your string [is a subset of](http://docs.python.org/library/stdtypes.html#set.issubset) the printable characters:
```
>>> printset = set(string.printable)
>>> helloset = set(hello)
>>> bellset = set(bell)
>>> helloset
set(['!', ' ', 'e', 'd', 'H', 'l', 'o', 'r', 'W'])
>>> helloset.issubset(printset)
True
>>> set(bell).issubset(printset)
False
```
So, in summary, you would probably want to do this:
```
import string
printset = set(string.printable)
isprintable = set(yourstring).issubset(printset)
```
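If the ASCII-only restriction from the question's edit also matters, the same idea can be packaged into a small helper (an illustrative sketch; the function name is my own):
```
import string

_PRINTABLE = set(string.printable)

def is_printable_ascii(text):
    # True only when every character is ASCII and in string.printable
    return all(ord(ch) < 127 and ch in _PRINTABLE for ch in text)

print(is_printable_ascii('Hello World!'))    # True
print(is_printable_ascii('Hello' + chr(7)))  # False (contains the BEL control character)
```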
|
```
import string

ctrlchar = "\n\r| "
# ------------------------------------------------------------------------
# This will let you control what you deem 'printable'
# Clean enough to display any binary
def isprint(chh):
if ord(chh) > 127:
return False
if ord(chh) < 32:
return False
if chh in ctrlchar:
return False
if chh in string.printable:
return True
return False
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
As you've said the [`string` module has `printable`](http://docs.python.org/library/string.html#string.printable) so it's just a case of checking if all the characters in your string are in `printable`:
```
>>> hello = 'Hello World!'
>>> bell = chr(7)
>>> import string
>>> all(c in string.printable for c in hello)
True
>>> all(c in string.printable for c in bell)
False
```
You could convert both strings to sets - so the set would contain each character in the string once - and check if the set created by your string [is a subset of](http://docs.python.org/library/stdtypes.html#set.issubset) the printable characters:
```
>>> printset = set(string.printable)
>>> helloset = set(hello)
>>> bellset = set(bell)
>>> helloset
set(['!', ' ', 'e', 'd', 'H', 'l', 'o', 'r', 'W'])
>>> helloset.issubset(printset)
True
>>> set(bell).issubset(printset)
False
```
So, in summary, you would probably want to do this:
```
import string
printset = set(string.printable)
isprintable = set(yourstring).issubset(printset)
```
|
```
>>> # Printable
>>> s = 'test'
>>> len(s)+2 == len(repr(s))
True
>>> # Unprintable
>>> s = 'test\x00'
>>> len(s)+2 == len(repr(s))
False
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
In Python 3, strings have an `isprintable()` method:
```
>>> 'a, '.isprintable()
True
```
For Python 2.7, see Dave Webb's answer.
|
The `category` function from the `unicodedata` [module](https://docs.python.org/2/library/unicodedata.html#unicodedata.category) might suit your needs. For instance, you can use this to check whether there are any control characters in a string while still allowing non-ASCII characters.
```
>>> import unicodedata
>>> def has_control_chars(s):
... return any(unicodedata.category(c) == 'Cc' for c in s)
>>> has_control_chars('Hello 世界')
False
>>> has_control_chars('Hello \x1f 世界')
True
```
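Building on the same idea, here is a hedged sketch of a stricter check that treats every character in a Unicode "C" category as non-printable (one possible convention, not the only one):
```
import unicodedata

def is_printable(s):
    # Cc = control, Cf = format, Cs = surrogate, Co = private use, Cn = unassigned
    return not any(unicodedata.category(ch).startswith('C') for ch in s)

print(is_printable('Hello 世界'))       # True
print(is_printable('Hello \x1f 世界'))  # False
```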
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
In the ASCII table, the range [\x20-\x7e] covers the printable characters.
You can use a regular expression to check whether the string contains any character outside this range.
That tells you whether the string is printable.
```
>>> import re
>>> # Printable
>>> print re.search(r'[^\x20-\x7e]', 'test')
None
>>> # Unprintable
>>> re.search(r'[^\x20-\x7e]', 'test\x00') != None
True
>>> # Optional expression
>>> pattern = r'[^\t-\r\x20-\x7e]'
```
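To reuse the check, the same character class can be precompiled into a small helper (an illustrative wrapper around the pattern shown above):
```
import re

# Anything outside the printable ASCII range \x20-\x7e counts as non-printable here.
_NON_PRINTABLE = re.compile(r'[^\x20-\x7e]')

def is_printable(s):
    return _NON_PRINTABLE.search(s) is None

print(is_printable('test'))      # True
print(is_printable('test\x00'))  # False
```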
|
Mine is a solution to get rid of any known set of characters. It might help.
```
non_printable_chars = set("\n\t\r ") # Space included intentionally
is_printable = lambda string:bool(set(string) - set(non_printable_chars))
...
...
if is_printable(string):
print("""do something""")
...
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
In Python 3, strings have an `isprintable()` method:
```
>>> 'a, '.isprintable()
True
```
For Python 2.7, see Dave Webb's answer.
|
Mine is a solution to get rid of any known set of characters. It might help.
```
non_printable_chars = set("\n\t\r ") # Space included intentionally
is_printable = lambda string:bool(set(string) - set(non_printable_chars))
...
...
if is_printable(string):
print("""do something""")
...
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
As you've said the [`string` module has `printable`](http://docs.python.org/library/string.html#string.printable) so it's just a case of checking if all the characters in your string are in `printable`:
```
>>> hello = 'Hello World!'
>>> bell = chr(7)
>>> import string
>>> all(c in string.printable for c in hello)
True
>>> all(c in string.printable for c in bell)
False
```
You could convert both strings to sets - so the set would contain each character in the string once - and check if the set created by your string [is a subset of](http://docs.python.org/library/stdtypes.html#set.issubset) the printable characters:
```
>>> printset = set(string.printable)
>>> helloset = set(hello)
>>> bellset = set(bell)
>>> helloset
set(['!', ' ', 'e', 'd', 'H', 'l', 'o', 'r', 'W'])
>>> helloset.issubset(printset)
True
>>> set(bell).issubset(printset)
False
```
So, in summary, you would probably want to do this:
```
import string
printset = set(string.printable)
isprintable = set(yourstring).issubset(printset)
```
|
```
# Here is the full routine to display an arbitrary binary string
# Python 2
import string

ctrlchar = "\n\r| "
# ------------------------------------------------------------------------
def isprint(chh):
if ord(chh) > 127:
return False
if ord(chh) < 32:
return False
if chh in ctrlchar:
return False
if chh in string.printable:
return True
return False
# ------------------------------------------------------------------------
# Return a hex dump formatted string
def hexdump(strx, llen = 16):
lenx = len(strx)
outx = ""
for aa in range(lenx/16):
outx += " "
for bb in range(16):
outx += "%02x " % ord(strx[aa * 16 + bb])
outx += " | "
for cc in range(16):
chh = strx[aa * 16 + cc]
if isprint(chh):
outx += "%c" % chh
else:
outx += "."
outx += " | \n"
# Print remainder on last line
remn = lenx % 16 ; divi = lenx / 16
if remn:
outx += " "
for dd in range(remn):
outx += "%02x " % ord(strx[divi * 16 + dd])
outx += " " * ((16 - remn) * 3)
outx += " | "
for cc in range(remn):
chh = strx[divi * 16 + cc]
if isprint(chh):
outx += "%c" % chh
else:
outx += "."
outx += " " * ((16 - remn))
outx += " | \n"
return(outx)
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
As you've said the [`string` module has `printable`](http://docs.python.org/library/string.html#string.printable) so it's just a case of checking if all the characters in your string are in `printable`:
```
>>> hello = 'Hello World!'
>>> bell = chr(7)
>>> import string
>>> all(c in string.printable for c in hello)
True
>>> all(c in string.printable for c in bell)
False
```
You could convert both strings to sets - so the set would contain each character in the string once - and check if the set created by your string [is a subset of](http://docs.python.org/library/stdtypes.html#set.issubset) the printable characters:
```
>>> printset = set(string.printable)
>>> helloset = set(hello)
>>> bellset = set(bell)
>>> helloset
set(['!', ' ', 'e', 'd', 'H', 'l', 'o', 'r', 'W'])
>>> helloset.issubset(printset)
True
>>> set(bell).issubset(printset)
False
```
So, in summary, you would probably want to do this:
```
import string
printset = set(string.printable)
isprintable = set(yourstring).issubset(printset)
```
|
In the ASCII table, the range [\x20-\x7e] covers the printable characters.
You can use a regular expression to check whether the string contains any character outside this range.
That tells you whether the string is printable.
```
>>> import re
>>> # Printable
>>> print re.search(r'[^\x20-\x7e]', 'test')
None
>>> # Unprintable
>>> re.search(r'[^\x20-\x7e]', 'test\x00') != None
True
>>> # Optional expression
>>> pattern = r'[^\t-\r\x20-\x7e]'
```
|
3,636,928
|
I have some code that pulls data from a com-port and I want to make sure that what I got really is a printable string (i.e. ASCII, maybe UTF-8) before printing it. Is there a function for doing this? The first half dozen places I looked didn't have anything that looks like what I want. ([string has printable](http://docs.python.org/library/string.html#string.printable) but I didn't see anything (there, or in [the string methods](http://docs.python.org/library/stdtypes.html#string-methods)) to check if every char in one string is in another.)
Note: control characters are **not** printable for my purposes.
---
Edit: I was/am looking for a single function, not a roll-your-own solution:
What I ended up with is:
```
all(ord(c) < 127 and c in string.printable for c in input_str)
```
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3636928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
] |
This Python 3 string contains all kinds of special characters:
```
s = 'abcd\x65\x66 äüöë\xf1 \u00a0\u00a1\u00a2 漢字 \a\b\r\t\n\v\\ \231\x9a \u2640\u2642\uffff'
```
If you try to show it in the console (or use `repr`), it does a pretty good job of escaping all non-printable characters from that string:
```
>>> s
'abcdef äüöëñ \xa0¡¢ 漢字 \x07\x08\r\t\n\x0b\\ \x99\x9a ♀♂\uffff'
```
It is smart enough to recognise e.g. horizontal tab (`\t`) as printable, but vertical tab (`\v`) as not printable (shows up as `\x0b` rather than `\v`).
Every other non-printable character also shows up as either `\xNN` or `\uNNNN` in the `repr`. Therefore, we can use that as the test:
```
def is_printable(s):
return not any(repr(ch).startswith("'\\x") or repr(ch).startswith("'\\u") for ch in s)
```
There may be some borderline characters, for example non-breaking white space (`\xa0`) is treated as non-printable here. Maybe it shouldn't be, but those special ones could then be hard-coded.
---
P.S.
You could do this to extract only printable characters from a string:
```
>>> ''.join(ch for ch in s if is_printable(ch))
'abcdef äüöëñ ¡¢ 漢字 \r\t\n\\ ♀♂'
```
|
The `category` function from the `unicodedata` [module](https://docs.python.org/2/library/unicodedata.html#unicodedata.category) might suit your needs. For instance, you can use this to check whether there are any control characters in a string while still allowing non-ASCII characters.
```
>>> import unicodedata
>>> def has_control_chars(s):
... return any(unicodedata.category(c) == 'Cc' for c in s)
>>> has_control_chars('Hello 世界')
False
>>> has_control_chars('Hello \x1f 世界')
True
```
|
53,727,275
|
I am trying to amend a group of files in a folder by adding F to the 4th line (which is line number 3 in Python, if I'm correct). With the code below, it just runs continuously and never makes the amendments. Anyone got any ideas?
```
import os
from glob import glob
list_of_files = glob('*.gjf') # get list of all .gjf files
for file in list_of_files:
# read file:
with open(file, 'r+') as f:
lines=f.readlines()
for line in lines:
lines.insert(3,'F')
```
|
2018/12/11
|
[
"https://Stackoverflow.com/questions/53727275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10750169/"
] |
The error is expected. The Function runtime is based on C#; when the response tries to add the `Allow` header, the underlying C# code checks its name. It's by design that `Allow` is a read-only header in [HttpContentHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpcontentheaders?view=netframework-4.7.2#properties), hence we can't add it in [HttpResponseHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpresponseheaders?view=netframework-4.7.2).
Here are two workarounds for your reference.
1. Use a custom header name like `Allow-Method`.
2. Create a new Function app; it uses Function runtime 2.0 by default, where we can set the `Allow` header.
|
Have you tried adding the allowed methods to the **function.json** files as noted in the [documentation](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook#trigger---configuration)?
Example below:
```
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"post",
"options"
]
```
|
53,727,275
|
I am trying to amend a group of files in a folder by adding F to the 4th line (which is line number 3 in Python, if I'm correct). With the code below, it just runs continuously and never makes the amendments. Anyone got any ideas?
```
import os
from glob import glob
list_of_files = glob('*.gjf') # get list of all .gjf files
for file in list_of_files:
# read file:
with open(file, 'r+') as f:
lines=f.readlines()
for line in lines:
lines.insert(3,'F')
```
|
2018/12/11
|
[
"https://Stackoverflow.com/questions/53727275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10750169/"
] |
The error is expected. The Function runtime is based on C#; when the response tries to add the `Allow` header, the underlying C# code checks its name. It's by design that `Allow` is a read-only header in [HttpContentHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpcontentheaders?view=netframework-4.7.2#properties), hence we can't add it in [HttpResponseHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpresponseheaders?view=netframework-4.7.2).
Here are two workarounds for your reference.
1. Use a custom header name like `Allow-Method`.
2. Create a new Function app; it uses Function runtime 2.0 by default, where we can set the `Allow` header.
|
If you are editing this through the portal, navigate to the `integrate` section of the function and select POST and OPTIONS methods.
|
53,727,275
|
I am trying to amend a group of files in a folder by adding F to the 4th line (which is line number 3 in Python, if I'm correct). With the code below, it just runs continuously and never makes the amendments. Anyone got any ideas?
```
import os
from glob import glob
list_of_files = glob('*.gjf') # get list of all .gjf files
for file in list_of_files:
# read file:
with open(file, 'r+') as f:
lines=f.readlines()
for line in lines:
lines.insert(3,'F')
```
|
2018/12/11
|
[
"https://Stackoverflow.com/questions/53727275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10750169/"
] |
The error is expected. The Function runtime is based on C#; when the response tries to add the `Allow` header, the underlying C# code checks its name. It's by design that `Allow` is a read-only header in [HttpContentHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpcontentheaders?view=netframework-4.7.2#properties), hence we can't add it in [HttpResponseHeaders](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpresponseheaders?view=netframework-4.7.2).
Here are two workarounds for your reference.
1. Use a custom header name like `Allow-Method`.
2. Create a new Function app; it uses Function runtime 2.0 by default, where we can set the `Allow` header.
|
Solved it by putting an APIM in front of the logic app. The APIM just returns a 200 for the OPTIONS request and calls the logic app for the subsequent HTTP POST requests.
|
71,044,523
|
For example, I have enabled mTLS for my Istio service in STRICT mode, and I have an authorization policy with a source.principals rule check.
Now I want to access those rule details, like source.principals and source.namespace, after the request is authenticated and authorized, so that I can do more business logic in my Flask (Python) service.
My Python code looks like this:
```
from flask import Flask, request

app = Flask(__name__)

@app.route("/mutate-hook", methods=['POST'])
def mutate():
    source = request.headers.get('source')
    # I am expecting source to be the source attribute from this: https://istio-releases.github.io/v0.1/docs/reference/config/mixer/attribute-vocabulary.html
    print(source)
    request_json = request.get_json()
    # I am expecting source to be the source attribute from this: https://istio-releases.github.io/v0.1/docs/reference/config/mixer/attribute-vocabulary.html
    request_source = request_json.get('source')
    print(request_source["principal"], request_source['namespace'])
```
|
2022/02/09
|
[
"https://Stackoverflow.com/questions/71044523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2090808/"
] |
Have a look at the [AuthConfig class](https://github.com/manfredsteyer/angular-oauth2-oidc/blob/master/projects/lib/src/auth.config.ts) and try setting `loginUrl` to the authorize endpoint and also set the token endpoint explicitly. See if this gives you a different error.
A good library should allow you to set endpoints explicitly. As an example, a couple of years ago an SPA of mine did not work with Azure AD since it did not allow CORS requests to the token endpoint. I worked around this by setting a token endpoint in the [oidc client library](https://github.com/IdentityModel/oidc-client-js) to point to an API, so that I could call the token endpoint from there.
|
Extending **auth.config** with an additional loginUrl and tokenUrl solved the issue.
This is the final version of my config file:
```
export const OAUTH_CONFIG: AuthConfig = {
issuer: environment.identityProviderBaseUrl,
loginUrl: environment.identityProviderLoginUrl,
logoutUrl: environment.identityProviderLogoutUrl,
redirectUri: environment.identityProvideRedirectUrl,
tokenEndpoint: environment.identityProviderTokenUrl,
clientId: environment.clientId,
dummyClientSecret: environment.clientSecret,
responseType: 'code',
requireHttps: false,
scope: 'write',
showDebugInformation: true,
oidc: false,
useIdTokenHintForSilentRefresh: false
};
```
Of course, we should then remember to remove `loadDiscoveryDocumentAndTryLogin()`.
```
public configureCodeFlow(): void {
this.authService.configure(OAUTH_CONFIG);
this.authService.setupAutomaticSilentRefresh(undefined, 'access_token');
this.authService.tryLoginCodeFlow();
}
```
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
**UPDATE** (with solution below)
I've been digging into the Django model code and it seems like there is a bug that creates a race condition when using "app.model"-based identifiers for the related field in a ForeignKey. When the application is running in production mode as opposed to DEBUG, the ForeignKey.get\_default method referenced in the exception above tries to check if the provided default value is an instance of the related field (self.rel.to):
```
def get_default(self):
"Here we check if the default value is an object and return the to_field if so."
field_default = super(ForeignKey, self).get_default()
if isinstance(field_default, self.rel.to):
return getattr(field_default, self.rel.get_related_field().attname)
return field_default
```
Initially when a ForeignKey is instantiated with a string-based related field, self.rel.to is set to the string-based identifier. There is a separate function in related.py called add\_lazy\_relation that, under normal circumstances, attempts to convert this string-based identifier into a model class reference. Because models are loaded in a lazy fashion, it's possible that this conversion can be deferred until the AppCache is fully loaded.
Therefore, if get\_default is called on a string-based ForeignKey relation before the AppCache is fully populated, it's possible for a TypeError exception to be raised. Apparently, putting my application into production mode was enough to shift the timing of model caching so that this error started occurring for me.
**SOLUTION**
It seems this is genuinely a bug in Django, but here's how to get around it if you do ever run into this problem. Add the following code snippet immediately before you instantiate a troublesome model:
```
from django.db.models.loading import cache as model_cache
if not model_cache.loaded:
model_cache._populate()
```
This checks the loaded flag on the AppCache to determine if the cache is fully populated. If it is not, we force the cache to fully populate now, and the problem is solved.
|
Another cause:
Make sure that all foreign keys reference models in the same application (app label). I was banging my head against the wall for a while on this one.
```
class Meta:
db_table = 'reservation'
app_label = 'admin'
```
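As a minimal illustration (hypothetical model names, assuming both models are meant to live under the same app label), keeping the ForeignKey and the model it points to under one app_label avoids the mismatch:
```
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        app_label = 'admin'      # same app label...

class Reservation(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)

    class Meta:
        db_table = 'reservation'
        app_label = 'admin'      # ...as the model the ForeignKey points to
```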
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
**UPDATE** (with solution below)
I've been digging into the Django model code and it seems like there is a bug that creates a race condition when using "app.model"-based identifiers for the related field in a ForeignKey. When the application is running in production mode as opposed to DEBUG, the ForeignKey.get\_default method referenced in the exception above tries to check if the provided default value is an instance of the related field (self.rel.to):
```
def get_default(self):
"Here we check if the default value is an object and return the to_field if so."
field_default = super(ForeignKey, self).get_default()
if isinstance(field_default, self.rel.to):
return getattr(field_default, self.rel.get_related_field().attname)
return field_default
```
Initially when a ForeignKey is instantiated with a string-based related field, self.rel.to is set to the string-based identifier. There is a separate function in related.py called add\_lazy\_relation that, under normal circumstances, attempts to convert this string-based identifier into a model class reference. Because models are loaded in a lazy fashion, it's possible that this conversion can be deferred until the AppCache is fully loaded.
Therefore, if get\_default is called on a string-based ForeignKey relation before the AppCache is fully populated, it's possible for a TypeError exception to be raised. Apparently, putting my application into production mode was enough to shift the timing of model caching so that this error started occurring for me.
**SOLUTION**
It seems this is genuinely a bug in Django, but here's how to get around it if you do ever run into this problem. Add the following code snippet immediately before you instantiate a troublesome model:
```
from django.db.models.loading import cache as model_cache
if not model_cache.loaded:
model_cache._populate()
```
This checks the loaded flag on the AppCache to determine if the cache is fully populated. If it is not, we force the cache to fully populate now, and the problem is solved.
|
In my case I had a model 'Entity' and a 'User' which inherited 'AbstractBaseUser'. The 'User' model had an entity field with a ForeignKey to Entity configured this way:
```
entity = m.ForeignKey('core.Entity', on_delete=m.SET_NULL, null=True)
```
In the end the workaround was to change it to
```
from core.models import Entity
entity = m.ForeignKey(Entity, on_delete=m.SET_NULL, null=True)
```
I don't know the cause of the problem, but it just works that way
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
**UPDATE** (with solution below)
I've been digging into the Django model code and it seems like there is a bug that creates a race condition when using "app.model"-based identifiers for the related field in a ForeignKey. When the application is running in production mode as opposed to DEBUG, the ForeignKey.get\_default method referenced in the exception above tries to check if the provided default value is an instance of the related field (self.rel.to):
```
def get_default(self):
"Here we check if the default value is an object and return the to_field if so."
field_default = super(ForeignKey, self).get_default()
if isinstance(field_default, self.rel.to):
return getattr(field_default, self.rel.get_related_field().attname)
return field_default
```
Initially when a ForeignKey is instantiated with a string-based related field, self.rel.to is set to the string-based identifier. There is a separate function in related.py called add\_lazy\_relation that, under normal circumstances, attempts to convert this string-based identifier into a model class reference. Because models are loaded in a lazy fashion, it's possible that this conversion can be deferred until the AppCache is fully loaded.
Therefore, if get\_default is called on a string-based ForeignKey relation before the AppCache is fully populated, it's possible for a TypeError exception to be raised. Apparently, putting my application into production mode was enough to shift the timing of model caching so that this error started occurring for me.
**SOLUTION**
It seems this is genuinely a bug in Django, but here's how to get around it if you do ever run into this problem. Add the following code snippet immediately before you instantiate a troublesome model:
```
from django.db.models.loading import cache as model_cache
if not model_cache.loaded:
model_cache._populate()
```
This checks the loaded flag on the AppCache to determine if the cache is fully populated. If it is not, we force the cache to fully populate now, and the problem is solved.
|
In my case, I made a mistake in the ForeignKey call by surrounding the class name with quotes. Removing them solved the error for me. See the wrong code below:
```
from django.db import models
from django.contrib.auth.models import AbstractUser
from account.models import Profile
class User(AbstractUser):
first_name = None;
last_name = None;
profile = models.ForeignKey(
'Profile',
on_delete=models.CASCADE,
);
```
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
**UPDATE** (with solution below)
I've been digging into the Django model code and it seems like there is a bug that creates a race condition when using "app.model"-based identifiers for the related field in a ForeignKey. When the application is running in production mode as opposed to DEBUG, the ForeignKey.get\_default method referenced in the exception above tries to check if the provided default value is an instance of the related field (self.rel.to):
```
def get_default(self):
"Here we check if the default value is an object and return the to_field if so."
field_default = super(ForeignKey, self).get_default()
if isinstance(field_default, self.rel.to):
return getattr(field_default, self.rel.get_related_field().attname)
return field_default
```
Initially when a ForeignKey is instantiated with a string-based related field, self.rel.to is set to the string-based identifier. There is a separate function in related.py called add\_lazy\_relation that, under normal circumstances, attempts to convert this string-based identifier into a model class reference. Because models are loaded in a lazy fashion, it's possible that this conversion can be deferred until the AppCache is fully loaded.
Therefore, if get\_default is called on a string-based ForeignKey relation before the AppCache is fully populated, it's possible for a TypeError exception to be raised. Apparently, putting my application into production mode was enough to shift the timing of model caching so that this error started occurring for me.
**SOLUTION**
It seems this is genuinely a bug in Django, but here's how to get around it if you do ever run into this problem. Add the following code snippet immediately before you instantiate a troublesome model:
```
from django.db.models.loading import cache as model_cache
if not model_cache.loaded:
model_cache._populate()
```
This checks the loaded flag on the AppCache to determine if the cache is fully populated. If it is not, we force the cache to fully populate now, and the problem is solved.
|
As it has already been highlighted in other answers, the most common cause seems to be referring to a model using the `'app.Model'` syntax.
However, it might still be quite a big challenge to figure out which model exactly is causing the trouble.
I found that a quick solution to that question is to go directly to the place where the exception is raised (`django.db.models.fields.related.ForeignKey.get_default`) and add the following print statement
```py
def get_default(self):
"""Return the to_field if the default value is an object."""
field_default = super().get_default()
# Add this line:
print(f'Here is our small little bug: {field_default}, {self.remote_field.model}')
if isinstance(field_default, self.remote_field.model):
return getattr(field_default, self.target_field.attname)
return field_default
```
Now the name of the problematic model is printed out and finding the relevant place in your code is no longer a big issue.
Tested on Django 2.2.
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
In my case I had a model 'Entity' and a 'User' which inherited 'AbstractBaseUser'. The 'User' model had an entity field with a ForeignKey to Entity configured this way:
```
entity = m.ForeignKey('core.Entity', on_delete=m.SET_NULL, null=True)
```
In the end the workaround was to change it to
```
from core.models import Entity
entity = m.ForeignKey(Entity, on_delete=m.SET_NULL, null=True)
```
I don't know the cause of the problem, but it just works that way
|
Another cause:
Make sure that all foreign keys reference models in the same application (app label). I was banging my head against the wall for a while on this one.
```
class Meta:
db_table = 'reservation'
app_label = 'admin'
```
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
In my case I had a model 'Entity' and a 'User' which inherited 'AbstractBaseUser'. The 'User' model had an entity field with a ForeignKey to Entity configured this way:
```
entity = m.ForeignKey('core.Entity', on_delete=m.SET_NULL, null=True)
```
In the end the workaround was to change it to
```
from core.models import Entity
entity = m.ForeignKey(Entity, on_delete=m.SET_NULL, null=True)
```
I don't know the cause of the problem, but it just works that way
|
In my case, I made a mistake in the ForeignKey call by surrounding the class name with quotes. Removing them solved the error for me. See the wrong code below:
```
from django.db import models
from django.contrib.auth.models import AbstractUser
from account.models import Profile
class User(AbstractUser):
first_name = None;
last_name = None;
profile = models.ForeignKey(
'Profile',
on_delete=models.CASCADE,
);
```
|
14,386,536
|
I have an existing, functional Django application that has been running in DEBUG mode for the last several months. When I change the site to run in production mode, I begin getting the following Exception emails sent to me when I hit a specific view that tries to create a new Referral model object.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/contrib/auth/decorators.py", line 20, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/var/django/acclaimd2/program/api.py", line 807, in put_interview_request
referral = Referral()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/base.py", line 349, in __init__
val = field.get_default()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4.2-py2.7.egg/django/db/models/fields/related.py", line 955, in get_default
if isinstance(field_default, self.rel.to):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
```
As you can see, merely trying to instantiate a Referral model object triggers this exception. Here is the model in question:
```
class Referral (models.Model):
opening = models.ForeignKey(Opening,related_name='referrals',null=False,blank=False)
origin_request = models.ForeignKey('common.request',related_name='referrals',null=True,default=None)
candidate = models.ForeignKey(User,related_name='referrals',null=False,blank=False)
intro = models.TextField(max_length=1000,null=False,blank=False)
experience = models.TextField(max_length=5000,null=False,blank=False)
email = models.CharField(max_length=255,null=False,blank=False)
phone = models.CharField(max_length=255,null=False,blank=True,default='')
def __unicode__(self):
return u"%s" % self.id
```
Is this a bug in Django or am I unknowingly doing something that I ought not? Anyone have any suggestions for fixes or workarounds?
|
2013/01/17
|
[
"https://Stackoverflow.com/questions/14386536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988262/"
] |
In my case I had a model 'Entity' and a 'User' which inherited 'AbstractBaseUser'. The 'User' model had an entity field with a ForeignKey to Entity configured this way:
```
entity = m.ForeignKey('core.Entity', on_delete=m.SET_NULL, null=True)
```
In the end the workaround was to change it to
```
from core.models import Entity
entity = m.ForeignKey(Entity, on_delete=m.SET_NULL, null=True)
```
I don't know the cause of the problem, but it just works that way
|
As it has already been highlighted in other answers, the most common cause seems to be referring to a model using the `'app.Model'` syntax.
However, it might still be quite a big challenge to figure out which model exactly is causing the trouble.
I found that a quick solution to that question is to go directly to the place where the exception is raised (`django.db.models.fields.related.ForeignKey.get_default`) and add the following print statement
```py
def get_default(self):
"""Return the to_field if the default value is an object."""
field_default = super().get_default()
# Add this line:
print(f'Here is our small little bug: {field_default}, {self.remote_field.model}')
if isinstance(field_default, self.remote_field.model):
return getattr(field_default, self.target_field.attname)
return field_default
```
Now the name of the problematic model is printed out and finding the relevant place in your code is no longer a big issue.
Tested on Django 2.2.
|
50,866,111
|
I'm working with the retrain Python file from this demo.
I'm getting different types of files:
[](https://i.stack.imgur.com/WLY0Y.png)
[](https://i.stack.imgur.com/htD6r.png)
[](https://i.stack.imgur.com/mC8aI.png)
I want to freeze the graph.pb with the checkpoints files, to optimize the frozen file then to convert optimized file to tflite file in order to use it in android app.
I've tried different ways to freeze the file but no luck,
>
> getting checkpoint file doesn't exist in Terminal
>
>
>
and
>
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position
> 1: invalid start byte
>
>
>
How do I complete all the steps to get the tflite file, and how do I combine the labels.txt file?
Note:
Here is the command I used in Terminal:
```
python freeze_graph.py \
--input_graph=/home/automator/Desktop/retrain/code/graph/graph.pb \
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model.ckpt \
--output_graph=/home/automator/Desktop/retrain/code/frozen.pb \
--output_node_names=output_node \
--input_saved_model_dir=/home/automator/Desktop/retrain/code/export/frozen.pb \ --output_node_names=outInput
```
The Error:
checkpoint '' doesn't exist!
**Tried:**
```
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model.ckpt
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/modelmodel.ckpt
....
```
Please help!
|
2018/06/14
|
[
"https://Stackoverflow.com/questions/50866111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4024038/"
] |
Here is a nice script to freeze a graph
```
import os
import argparse
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.platform import gfile
def load_graph_def(model_path, sess=None):
if os.path.isfile(model_path):
with gfile.FastGFile(model_path, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')
else:
sess = sess if sess is not None else tf.get_default_session()
saver = tf.train.import_meta_graph(model_path + '.meta')
saver.restore(sess, model_path)
def freeze_from_checkpoint(checkpoint_file, output_layer_name):
model_folder = os.path.basename(checkpoint_file)
output_graph = os.path.join(model_folder, checkpoint_file + '.pb')
with tf.Session() as sess:
load_graph_def(checkpoint_file)
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
print("Exporting graph...")
output_graph_def = graph_util.convert_variables_to_constants(
sess,
input_graph_def,
output_layer_name.split(","))
with tf.gfile.GFile(output_graph, "wb") as f:
f.write(output_graph_def.SerializeToString())
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('model_path')
parser.add_argument('output_layer')
args = parser.parse_args()
freeze_from_checkpoint(checkpoint_file=args.model_path, output_layer_name=args.output_layer)
```
Save it as freeze_graph.py.
Call it:
`python freeze_graph.py /home/automator/Desktop/retrain/code/tmp/model.data-000000-of-00001 "output_node_name"`
|
Given that you have a `meta graph` saved, try using the `input_meta_graph` argument:
```
python freeze_graph.py \
--input_meta_graph=/home/automator/Desktop/retrain/code/tmp/model.meta \
--input_checkpoint=/home/automator/Desktop/retrain/code/tmp/model.ckpt \
--input_binary=true \
--output_graph=/home/automator/Desktop/retrain/code/frozen.pb \
--output_node_names=output_node
```
The issue is that you are passing the `--input_saved_model_dir` argument that [overwrites](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py#L227) the `input_meta_graph` argument but you don't seem to have a [SavedModel](https://www.tensorflow.org/programmers_guide/saved_model#build_and_load_a_savedmodel).
|
60,763,948
|
These are my two models. When I try to open the City page in Django I get the error "column city.country\_id\_id does not exist". I don't know why an extra `_id` is added there.
```
class Country(models.Model):
country_id = models.CharField(primary_key=True,max_length=3)
country_name = models.CharField(max_length=30, blank=True, null=True)
class Meta:
managed = False
db_table = 'country'
class City(models.Model):
city_id=models.CharField(primary_key=True,max_length=3)
city_name=models.CharField(max_length=30, blank=True, null=True)
country_id = models.ForeignKey(Country, on_delete=models.CASCADE)
class Meta:
managed = False
db_table = 'city'
```

|
2020/03/19
|
[
"https://Stackoverflow.com/questions/60763948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13090788/"
] |
As the comments have begun to mention, and in reference to your final statement, this code absolutely *should not* work, for multiple reasons.
1) you open the file three times, for no apparent reason.
2) `outfile` isn't declared and doesn't do anything.
3) when you open a file with `w` it clears the contents of the aforementioned file.
First, fix these issues.
You understand the fundamentals, and your upper function is fine.
This is what you must do:
1) don't open the same file 3 times for no reason
2) define `outfile`
3) use `a` instead of `w` so you append rather than delete and write — see the sketch below
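Putting all three points together, a rough sketch (the file names and the per-line `capitalize()` step are assumptions borrowed from the other answer):
```
f_name = input("Input file name: ")
out_name = input("Output file name: ")

with open(f_name) as infile, open(out_name, "a") as outfile:  # each file opened exactly once; 'a' appends
    for line in infile:
        outfile.write(line.capitalize())  # capitalize each line and append it to the output file
```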
|
This will work.
```
f_name = input("Input file name: ")
with open(f_name, "r+") as f:
lines = f.read().splitlines() # get string, split lines
lines = [l.capitalize() for l in lines] # capitalize each line
f.seek(0) # move the cursor to the beginning
f.write('\n'.join(lines)) # join the lines and write to the same file
```
|
3,939,482
|
I have a stats app written in python that on a timer refreshes the ssh screen with stats. Right now it uses os.system('clear') to clear the screen and then outputs a multi line data with the stats.
I'd like to just do a \r instead of executing the clear, but that only works with one line. Is it possible to do this with multiple lines?
A classic example of what I want to do is when you execute the "top" command which lists the current processes it updates the screen without executing the "clear" and it's got many lines.
Anyone have any tips for this?
|
2010/10/15
|
[
"https://Stackoverflow.com/questions/3939482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388538/"
] |
This doesn't really answer your question, but there isn't really anything wrong with calling os.system to clear the terminal (other than portability across operating systems), in which case you could use:
`os.system('cls' if os.name=='nt' else 'clear')`
|
For simple applications you can use:
```
print('\n' * number_of_lines)
```
For more advanced there is [curses](http://docs.python.org/library/curses.html) module in standard library.
|
3,939,482
|
I have a stats app written in python that on a timer refreshes the ssh screen with stats. Right now it uses os.system('clear') to clear the screen and then outputs a multi line data with the stats.
I'd like to just do a \r instead of executing the clear, but that only works with one line. Is it possible to do this with multiple lines?
A classic example of what I want to do is when you execute the "top" command which lists the current processes it updates the screen without executing the "clear" and it's got many lines.
Anyone have any tips for this?
|
2010/10/15
|
[
"https://Stackoverflow.com/questions/3939482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388538/"
] |
Solved the issue with:
```
import curses
window = curses.initscr()
window.addstr(1, 0, "my text")
window.refresh()
curses.endwin()
```
|
For simple applications you can use:
```
print('\n' * number_of_lines)
```
For more advanced there is [curses](http://docs.python.org/library/curses.html) module in standard library.
|
40,119,361
|
I have a dictionary which has the following structure in a python program
`{'John':{'age': '12', 'height':'152', 'weight':'45'}}`, this is the result returned from a function.
My question is: how can I extract the sub-dictionary, so that I have the data in this form only: `{'age': '12', 'height':'152', 'weight':'45'}`?
I can think of a solution using a for loop to go through the dictionary (since there is only one item in it, I can then store it in a new variable), but I would like to learn an alternative, please.
Many thanks
|
2016/10/18
|
[
"https://Stackoverflow.com/questions/40119361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6797800/"
] |
To get a value from a dictionary, use dict[key]:
```
>>> d = {'John':{'age': '12', 'height':'152', 'weight':'45'}}
>>> d['John']
{'age': '12', 'height': '152', 'weight': '45'}
>>>
```
|
```
>>> d = {'John':{'age': '12', 'height':'152', 'weight':'45'}, 'Kim':{'age': '13', 'height': '113', 'weight': '30'}}
>>> for key in d:
... print(key, d[key])
...
John {'height': '152', 'weight': '45', 'age': '12'}
Kim {'height': '113', 'weight': '30', 'age': '13'}
```
Just access the subdictionary with `d[key]`. If you have multiple keys, something like the above loop will allow you to go through all of them.
|
45,103,348
|
I'm trying to load content from MySQL with a loading animation and a limit on how much content is fetched at a time. The content displays, but when I reach the end of the content the loading animation keeps spinning and I get this error.
My script:
```
<script>
$(document).ready(function(){
var limit = 7;
var start = 0;
var action = 'inactive';
function load_country_data(limit, start)
{
$.ajax({
url:"/select_index",
method:"POST",
data:{limit:limit, start:start},
cache:false,
success:function(data)
{
var obj2 = JSON.parse(data);
console.log(data);
console.log(obj2);
var i=0;
var j=0;
for(i in obj2){
//for(j in obj2[i]){
$('#load_data').prepend("<form><div class='panel panel-white post panel-shadow user-post'><div class='post-heading'><div class='pull-left image'><img src='static/"+obj2[i][4]+"' class='img-circle avatar' alt='user profile image'></div><div class='pull-left meta'> <div class='title h5'><a href='#'><b>"+obj2[i][1]+"</b></a> made a post.</div><h6 class='text-muted time'>1 minute ago</h6></div></div> <div class='post-description'> <p>"+obj2[i][0]+"</p><div class='stats'><a href='#' class='btn btn-default stat-item'> <i class='fa fa-thumbs-up icon'></i>2</a> <a href='#' class='btn btn-default stat-item'><i class='fa fa-share icon'></i>12</a></div></div><div class='post-footer'><div class='input-group'> <input class='form-control' placeholder='Add a comment' type='text'></div> </div></div></form>" );
// }
}
// $('#load_data').append(data);
if(data == '')
{
$('#load_data_message').html("No Data Found");
action = 'active';
}
else
{
$('#load_data_message').html("<div class='loadingC'><div class='loadingCcerc'</div></div>");
action = "inactive";
}
}
});
}
if(action == 'inactive')
{
action = 'active';
load_country_data(limit, start);
}
$(window).scroll(function(){
if($(window).scrollTop() + $(window).height() > $("#load_data").height() && action == 'inactive')
{
action = 'active';
start = start + limit;
setTimeout(function(){
load_country_data(limit, start);
}, 800);
}
});
});
</script>
```
I can see that the error comes from JSON.parse.
My python script:
```
@app.route('/select_index',methods=["POST","GET"])
def select_index():
limit=request.args['limit']
print(limit)
start=request.args['start']
print(start)
select="SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id FROM comments,register WHERE register.id=comments.post_id ORDER BY comments.id DESC LIMIT "+str(start)+","+str(limit)+" "
con.execute(select)
fetch=con.fetchall()
print(fetch)
json_fetch=str(jsonify(fetch))
print(len(json_fetch))
if json_fetch=="<Response 3 bytes [200 OK]>":
return ""
else:
return jsonify(fetch)
```
|
2017/07/14
|
[
"https://Stackoverflow.com/questions/45103348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7906290/"
] |
You are returning an empty string from your python code:
```
if json_fetch=="<Response 3 bytes [200 OK]>":
return ""
```
When you use `JSON.parse("")` on that empty response, you get the `Uncaught SyntaxError: Unexpected end of JSON input`.
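One way around it on the Flask side (a sketch reusing the question's own `fetch` and `jsonify`) is to always return valid JSON, e.g. an empty list, and have the JavaScript check the parsed array's length instead of comparing the raw response to `''`:
```
if not fetch:           # no more rows for this LIMIT/OFFSET window
    return jsonify([])  # '[]' is still valid JSON, so JSON.parse won't fail
return jsonify(fetch)
```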
|
I changed it, but now it returns an empty array "[]" and the animation doesn't stop, so the "No Data Found" text is never placed on the screen.
script:
```
<script>
$(document).ready(function(){
var limit = 7;
var start = 0;
var action = 'inactive';
function load_country_data(limit, start)
{
$.ajax({
url:"/select_index",
method:"POST",
data:{limit:limit, start:start},
cache:false,
success:function(data)
{
var obj2 = JSON.parse(data);
console.log(data);
console.log(obj2);
var i=0;
var j=0;
for(i in obj2){
//for(j in obj2[i]){
$('#load_data').prepend("<form><div class='panel panel-white post panel-shadow user-post'><div class='post-heading'><div class='pull-left image'><img src='static/"+obj2[i][4]+"' class='img-circle avatar' alt='user profile image'></div><div class='pull-left meta'> <div class='title h5'><a href='#'><b>"+obj2[i][1]+"</b></a> made a post.</div><h6 class='text-muted time'>1 minute ago</h6></div></div> <div class='post-description'> <p>"+obj2[i][0]+"</p><div class='stats'><a href='#' class='btn btn-default stat-item'> <i class='fa fa-thumbs-up icon'></i>2</a> <a href='#' class='btn btn-default stat-item'><i class='fa fa-share icon'></i>12</a></div></div><div class='post-footer'><div class='input-group'> <input class='form-control' placeholder='Add a comment' type='text'></div> </div></div></form>" );
// }
}
//SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id,abonari.id_user,abonari.id_abonat FROM comments,register,abonari WHERE register.id=comments.post_id AND register.id=abonari.id_abonat ORDER BY comments.id DESC
// $('#load_data').append(data);
if(data == '')
{
$('#load_data_message').html("No Data Found");
action = 'active';
}
else
{
$('#load_data_message').html("<div class='loadingC'><div class='loadingCcerc'</div></div>");
action = "inactive";
}
}
});
}
if(action == 'inactive')
{
action = 'active';
load_country_data(limit, start);
}
$(window).scroll(function(){
if($(window).scrollTop() + $(window).height() > $("#load_data").height() && action == 'inactive')
{
action = 'active';
start = start + limit;
setTimeout(function(){
load_country_data(limit, start);
}, 800);
}
});
});
</script>
```
and python:
```
@app.route('/select_index',methods=["POST","GET"])
def select_index():
limit=request.args['limit']
print(limit)
start=request.args['start']
print(start)
select="SELECT comments.post,comments.name,comments.post_id , register.id,register.profile_pic,comments.id FROM comments,register WHERE register.id=comments.post_id ORDER BY comments.id DESC LIMIT "+str(start)+","+str(limit)+" "
con.execute(select)
fetch=con.fetchall()
print(fetch)
json_fetch=str(jsonify(fetch))
print(len(json_fetch))
return jsonify(fetch)
```
|
29,037,501
|
I created an SSH agent to provide my key to the ssh/scp commands when connecting to my server.
I also scripted an ssh-add with the 'expect' command to enter my passphrase when it's needed.
This works perfectly with my user "user".
But I'm executing a python script that uses /dev/mem and therefore needs to be run as root through sudo. This python script calls another bash script with ssh and scp commands inside.
Therefore all these commands are executed as root and my agent/ssh-add doesn't work anymore; it keeps asking for the passphrase for each file.
How could I fix that? I don't want to log in as root and run an agent as root.
I tried sudo -u user ssh but it doesn't work (i.e. it still asks for my passphrase).
Any ideas?
Thanks in advance,
Mat
EDIT: my code:
The py script needing the sudo
```
#!/usr/bin/env python2.7
import RPi.GPIO as GPIO
import time
import subprocess
from subprocess import call
from datetime import datetime
import picamera
import os
import sys
GPIO.setmode(GPIO.BCM)
# GPIO 23 set up as input. It is pulled up to stop false signals
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
#set path and time to create the folder where the images will be saved
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript , dirfmt)
localdirname = dirpath % current_time[0:6] #dirname created with date and time
remotedirname = dirfmt % current_time[0:6] #remote-dirname created with date and time
os.mkdir(localdirname) #mkdir
pictureName = localdirname + "/image%02d.jpg" #path+name of pictures
var = 1
while var == 1:
try:
GPIO.wait_for_edge(23, GPIO.FALLING)
with picamera.PiCamera() as camera:
#camera.capture_sequence(["/home/pi/python-scripts/'dirname'/image%02d.jpg" % i for i in range(2)])
camera.capture_sequence([pictureName % i for i in range(19)])
camera.close()
cmd = '/home/pi/python-scripts/picturesToServer {0} &'.format(remotedirname)
call ([cmd], shell=True)
except KeyboardInterrupt:
GPIO.cleanup() # clean up GPIO on CTRL+C exit
GPIO.cleanup() # clean up GPIO on normal exit
```
---
the bash script:
```
#!/bin/bash
cd $1
ssh user@server mkdir /home/repulsion/picsToAnimate/"$1" >/dev/null 2>&1
ssh user@server cp "$1"/* /home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
for i in $( ls ); do
scp $i user@server:/home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
done
```
|
2015/03/13
|
[
"https://Stackoverflow.com/questions/29037501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542774/"
] |
There's no general way to get this behavior.
You can create an ImmutablePerson class with a constructor that accepts a Person and constructs an immutable version of that Person.
|
Sure, just have a boolean flag in the Person object which says if the object is locked for modifications. If it is locked just have all setters do nothing or have them throw exceptions.
When invoking `immutableObject(person)` just set the flag to true. Setting the flag will also lock/deny the ability to set/change the flag later.
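The original context looks like Java, but here is a rough Python sketch of the same lock-flag idea (class and attribute names are placeholders):
```
class Person:
    def __init__(self, name):
        self._frozen = False   # the lock flag
        self.name = name

    def __setattr__(self, attr, value):
        # once frozen, refuse every assignment, including un-freezing
        if getattr(self, '_frozen', False):
            raise AttributeError('Person is locked for modifications')
        super().__setattr__(attr, value)

    def freeze(self):
        object.__setattr__(self, '_frozen', True)  # bypass the guard exactly once
```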
|
29,037,501
|
I created an SSH agent to provide my key to the ssh/scp commands when connecting to my server.
I also scripted an ssh-add with the 'expect' command to enter my passphrase when it's needed.
This works perfectly with my user "user".
But I'm executing a python script that uses /dev/mem and therefore needs to be run as root through sudo. This python script calls another bash script with ssh and scp commands inside.
Therefore all these commands are executed as root and my agent/ssh-add doesn't work anymore; it keeps asking for the passphrase for each file.
How could I fix that? I don't want to log in as root and run an agent as root.
I tried sudo -u user ssh but it doesn't work (i.e. it still asks for my passphrase).
Any ideas?
Thanks in advance,
Mat
EDIT: my code:
The py script needing the sudo
```
#!/usr/bin/env python2.7
import RPi.GPIO as GPIO
import time
import subprocess
from subprocess import call
from datetime import datetime
import picamera
import os
import sys
GPIO.setmode(GPIO.BCM)
# GPIO 23 set up as input. It is pulled up to stop false signals
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
#set path and time to create the folder where the images will be saved
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript , dirfmt)
localdirname = dirpath % current_time[0:6] #dirname created with date and time
remotedirname = dirfmt % current_time[0:6] #remote-dirname created with date and time
os.mkdir(localdirname) #mkdir
pictureName = localdirname + "/image%02d.jpg" #path+name of pictures
var = 1
while var == 1:
try:
GPIO.wait_for_edge(23, GPIO.FALLING)
with picamera.PiCamera() as camera:
#camera.capture_sequence(["/home/pi/python-scripts/'dirname'/image%02d.jpg" % i for i in range(2)])
camera.capture_sequence([pictureName % i for i in range(19)])
camera.close()
cmd = '/home/pi/python-scripts/picturesToServer {0} &'.format(remotedirname)
call ([cmd], shell=True)
except KeyboardInterrupt:
GPIO.cleanup() # clean up GPIO on CTRL+C exit
GPIO.cleanup() # clean up GPIO on normal exit
```
---
the bash script:
```
#!/bin/bash
cd $1
ssh user@server mkdir /home/repulsion/picsToAnimate/"$1" >/dev/null 2>&1
ssh user@server cp "$1"/* /home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
for i in $( ls ); do
scp $i user@server:/home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
done
```
|
2015/03/13
|
[
"https://Stackoverflow.com/questions/29037501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542774/"
] |
There is no way to do it without doing some work.
You need to either get a compile-time check by creating a new immutable object from the mutable one as Eran suggests, add some code for a runtime check, or get a weaker compile-time check by using a split interface, e.g.:
```
interface ReadOnlyPerson {
int getX();
}
interface ModifiablePerson extends ReadOnlyPerson{
void setX();
}
class Person implements ModifiablePerson {
}
```
You can then pass out the immutable reference after construction.
However this pattern does not give a strong guarantee that the object will not be modified as the ReadOnlyPerson reference can be cast etc.
|
Sure, just have a boolean flag in the Person object which says if the object is locked for modifications. If it is locked just have all setters do nothing or have them throw exceptions.
When invoking `immutableObject(person)` just set the flag to true. Setting the flag will also lock/deny the ability to set/change the flag later.
|
29,037,501
|
I created an SSH agent to provide my key to the ssh/scp commands when connecting to my server.
I also scripted an ssh-add with the 'expect' command to enter my passphrase when it's needed.
This works perfectly with my user "user".
But I'm executing a python script that uses /dev/mem and therefore needs to be run as root through sudo. This python script calls another bash script with ssh and scp commands inside.
Therefore all these commands are executed as root and my agent/ssh-add doesn't work anymore; it keeps asking for the passphrase for each file.
How could I fix that? I don't want to log in as root and run an agent as root.
I tried sudo -u user ssh but it doesn't work (i.e. it still asks for my passphrase).
Any ideas?
Thanks in advance,
Mat
EDIT: my code:
The py script needing the sudo
```
#!/usr/bin/env python2.7
import RPi.GPIO as GPIO
import time
import subprocess
from subprocess import call
from datetime import datetime
import picamera
import os
import sys
GPIO.setmode(GPIO.BCM)
# GPIO 23 set up as input. It is pulled up to stop false signals
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
#set path and time to create the folder where the images will be saved
pathtoscript = "/home/pi/python-scripts"
current_time = time.localtime()[0:6]
dirfmt = "%4d-%02d-%02d-%02d-%02d-%02d"
dirpath = os.path.join(pathtoscript , dirfmt)
localdirname = dirpath % current_time[0:6] #dirname created with date and time
remotedirname = dirfmt % current_time[0:6] #remote-dirname created with date and time
os.mkdir(localdirname) #mkdir
pictureName = localdirname + "/image%02d.jpg" #path+name of pictures
var = 1
while var == 1:
try:
GPIO.wait_for_edge(23, GPIO.FALLING)
with picamera.PiCamera() as camera:
#camera.capture_sequence(["/home/pi/python-scripts/'dirname'/image%02d.jpg" % i for i in range(2)])
camera.capture_sequence([pictureName % i for i in range(19)])
camera.close()
cmd = '/home/pi/python-scripts/picturesToServer {0} &'.format(remotedirname)
call ([cmd], shell=True)
except KeyboardInterrupt:
GPIO.cleanup() # clean up GPIO on CTRL+C exit
GPIO.cleanup() # clean up GPIO on normal exit
```
---
the bash script:
```
#!/bin/bash
cd $1
ssh user@server mkdir /home/repulsion/picsToAnimate/"$1" >/dev/null 2>&1
ssh user@server cp "$1"/* /home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
for i in $( ls ); do
scp $i user@server:/home/repulsion/picsToAnimate/"$1"/ >/dev/null 2>&1
done
```
|
2015/03/13
|
[
"https://Stackoverflow.com/questions/29037501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4542774/"
] |
There's no general way to get this behavior.
You can create an ImmutablePerson class with a constructor that accepts a Person and constructs an immutable version of that Person.
|
There is no way to do it without doing some work.
You need to either get a compile-time check by creating a new immutable object from the mutable one as Eran suggests, add some code for a runtime check, or get a weaker compile-time check by using a split interface, e.g.:
```
interface ReadOnlyPerson {
int getX();
}
interface ModifiablePerson extends ReadOnlyPerson{
void setX();
}
class Person implements ModifiablePerson {
}
```
You can then pass out the immutable reference after construction.
However this pattern does not give a strong guarantee that the object will not be modified as the ReadOnlyPerson reference can be cast etc.
|
45,309,118
|
Trying to create a game called `hangman` in Python.
I've come a long way, but the 'core' functionality is failing me.
I've edited out all the parts which are irrelevant for this question.
Here it comes:
```
picked = 'yaaayyy'
length = len(picked)
dashed = "-" * length
guessed = picked.replace(picked, dashed)
while tries != -1:
input = raw_input("Try a letter: ")
if input in picked:
print "Correct letter!"
found = [i for i, x in enumerate(picked) if x == input]
for item in found:
guessed = guessed[:item] + input + guessed[i+1:]
print guessed
```
Upon calling this script, python creates a variable named `guessed` containing 7 dashes `-------`
It asks the user for a letter and if the letter is correct, it will replace the `-` with the correct letter. But not keeping the previous letters.
The word to be guessed is `yaaayyy`
Output of code:
```
Word is 7 characters:
-------
Try a letter: a
Correct letter!
-aaa
Try a letter: y
Correct letter!
yyyy
```
Goal:
```
Word is 7 characters:
-------
Try a letter: a
Correct letter!
-aaa---
Try a letter: y
Correct letter!
yaaayyy
```
|
2017/07/25
|
[
"https://Stackoverflow.com/questions/45309118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4834431/"
] |
This code seems to be slightly wrong:
```
found = [i for i, x in enumerate(picked) if x == input]
for item in found:
guessed = guessed[:item] + input + guessed[i+1:]
```
That last line should probably be:
```
guessed = guessed[:item] + input + guessed[item+1:]
```
**EDIT**
This seems simpler to me:
```
for i, x in enumerate(picked):
if x == input:
guessed = guessed[:i] + input + guessed[i+1:]
```
**EDIT 2**
I'm not sure if this is clearer or not, but it's probably a little more efficient:
```
guessed = ''.join(input if picked[i] == input else c for i, c in enumerate(guessed))
```
|
Assuming you're using python3 you can solve it by simply doing:
```
user_input = input()
guessed = ''.join(letter if user_input == letter else guessed[i] for i, letter in enumerate(picked))
```
|
51,954,369
|
```
[{"answerInfo":{"extraData":{"am_answer_type":"NN"}},"content":"MP3:not support.","messageId":"c4d6a2f4649d483a811fcce4b26ae9a1"}]
```
How can I extract "MP3:not support." from this string using a regular expression or Python code?
But an error was generated when I tried the suggested approach:
```
Traceback (most recent call last):
File "/Users/congminmin/PycharmProjects/Misc/csv/csvLoader.py", line 16, in <module>
print(question+ " " + json.loads(answer)[0]['content'])
File "/Users/congminmin/anaconda3/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/Users/congminmin/anaconda3/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/congminmin/anaconda3/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
|
2018/08/21
|
[
"https://Stackoverflow.com/questions/51954369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/697911/"
] |
In my code below, I create a dataframe then output it to a .csv file using `pandas.DataFrame.to_csv()` ([link to docs](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html)). In this example, I also add try and except clauses to raise an exception if the user enters an invalid Series ID (not the case with this example) or if the user exceeds the number of daily hits to the BLS API (Unregistered users may request up to 25 queries daily - [per documentation FAQs](https://www.bls.gov/developers/api_faqs.htm#signatures4)).
In this example, I add a column showing the SeriesID which would prove more useful if your list of SeriesIDs contained more than one item and would help distinguish between different series. I also do some extra manipulation to the footnotes column to make it more meaningful in the outputted dataframe.
```
import pandas as pd
import json
import requests
headers = {'Content-type': 'application/json'}
data = json.dumps({"seriesid": ['LAUMT421090000000005'],"startyear":"2011", "endyear":"2014"})
p = requests.post('https://api.bls.gov/publicAPI/v1/timeseries/data/', data=data, headers=headers)
json_data = json.loads(p.text)
try:
df = pd.DataFrame()
for series in json_data['Results']['series']:
df_initial = pd.DataFrame(series)
series_col = df_initial['seriesID'][0]
for i in range(0, len(df_initial) - 1):
df_row = pd.DataFrame(df_initial['data'][i])
df_row['seriesID'] = series_col
if 'code' not in str(df_row['footnotes']):
df_row['footnotes'] = ''
else:
df_row['footnotes'] = str(df_row['footnotes']).split("'code': '",1)[1][:1]
df = df.append(df_row, ignore_index=True)
df.to_csv('blsdata.csv', index=False)
except:
json_data['status'] == 'REQUEST_NOT_PROCESSED'
print('BLS API has given the following Response:', json_data['status'])
print('Reason:', json_data['message'])
```
If you would like to output a .xlsx file rather than a .csv, simply substitute `df.to_csv('blsdata.csv', index=False)` with `df.to_excel('blsdata.xlsx', index=False, engine='xlsxwriter')`.
Expected Outputted .csv file:
[](https://i.stack.imgur.com/FXhEz.png)
Expected Outputted .xlsx file if you use `pandas.DataFrame.to_excel()` rather than `pandas.DataFrame.to_csv()`:
[](https://i.stack.imgur.com/PVgOi.png)
|
```
import requests
import json
import csv
headers = {'Content-type': 'application/json'}
data = json.dumps({"seriesid": ['LAUMT421090000000005'],"startyear":"2011", "endyear":"2014"})
p = requests.post('https://api.bls.gov/publicAPI/v2/timeseries/data/', data=data, headers=headers)
json_data = json.loads(p.text)
```
After the above lines of code, use the steps described in the following Stack Overflow link for the next steps:
[How can I convert JSON to CSV?](https://stackoverflow.com/questions/1871524/how-can-i-convert-json-to-csv)
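For completeness, a minimal sketch of that JSON-to-CSV step for this particular response (the `year`, `period` and `value` field names are assumptions about the usual BLS payload, so check what `json_data` actually contains):
```
import csv

with open('blsdata.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['seriesID', 'year', 'period', 'value'])
    for series in json_data['Results']['series']:
        for row in series['data']:
            writer.writerow([series['seriesID'], row['year'], row['period'], row['value']])
```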
Hope it helps!
|
30,532,431
|
My program does mouse drawing: the drawn curves are simultaneously reproduced on a Toplevel window. My aim is to attach vertical and horizontal scrollbars to that Toplevel window.
The drawing works as I expected, except that I don't see the scrollbars and I get this error (which does not stop the program, however):
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1489, in __call__
return self.func(*args)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1523, in yview
res = self.tk.call(self._w, 'yview', *args)
TclError: unknown option "0.0": must be moveto or scroll
```
The program consists of these lines:
```
from Tkinter import *
import numpy as np
import cv2
import Image, ImageTk
class Test:
def __init__(self, parent):
self.parent = parent
self.b1="up"
self.xold=None
self.yold=None
self.liste=[]
self.top = TopLevelWindow()
self.s=400,400,3
self.im=np.zeros(self.s,dtype=np.uint8)
cv2.imshow("hello",self.im)
def test(self):
self.drawingArea=Canvas(self.parent,width=400,height=400)
self.drawingArea.pack()
self.drawingArea.bind("<Motion>",self.motion)
self.drawingArea.bind("<ButtonPress-1>",self.b1down)
self.drawingArea.bind("<ButtonRelease-1>",self.b1up)
def b1down(self,event):
self.b1="down"
def b1up(self,event):
self.b1="up"
self.xold=None
self.yold=None
self.liste.append((self.xold,self.yold))
def motion(self,event):
if self.b1=="down":
if self.xold is not None and self.yold is not None:
event.widget.create_line(self.xold,self.yold,event.x,event.y,fill="red",width=3,smooth=TRUE)
self.top.draw_line(self.xold,self.yold,event.x,event.y)
self.xold=event.x
self.yold=event.y
self.liste.append((self.xold,self.yold))
class TopLevelWindow(Frame):
def __init__(self):
Frame.__init__(self)
self.top=Toplevel()
self.top.wm_title("Second Window")
self.canvas=Canvas(self.top,width=400,height=400)
self.canvas.grid(row=0,column=0,sticky=N+E+S+W)
self.sbarv=Scrollbar(self,orient=VERTICAL)
self.sbarh=Scrollbar(self,orient=HORIZONTAL)
self.sbarv.config(command=self.canvas.yview)
self.sbarh.config(command=self.canvas.xview)
self.canvas.config(yscrollcommand=self.canvas.yview)
self.canvas.config(xscrollcommand=self.canvas.xview)
self.sbarv.grid(row=0,column=1,sticky=N+S)
self.sbarh.grid(row=1,column=0,sticky=W+E)
self.canvas.config(scrollregion=(0,0,400,400))
def draw_line(self, xold, yold, x, y):
self.canvas.create_line(xold,yold,x,y,fill="blue",width=3,smooth=TRUE)
if __name__=="__main__":
root = Tk()
root.wm_title("Main Window")
v = Test(root)
v.test()
root.mainloop()
```
|
2015/05/29
|
[
"https://Stackoverflow.com/questions/30532431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4949239/"
] |
These lines are incorrect:
```
self.canvas.config(yscrollcommand=self.canvas.yview)
self.canvas.config(xscrollcommand=self.canvas.xview)
```
You're telling the canvas to scroll the canvas when the canvas scrolls. The `yscrollcommand` and `xscrollcommand` options typically need to call the `set` method of a scrollbar:
```
self.canvas.config(yscrollcommand=self.sbarv.set)
self.canvas.config(xscrollcommand=self.sbarh.set)
```
|
I want to share the solution I found in case someone in the future encounters this problem:
I only had to put the two scrollbars into the same parent widget as the canvas itself. I mean:
```
self.sbarv=Scrollbar(self.top,orient=VERTICAL)
self.sbarh=Scrollbar(self.top,orient=HORIZONTAL)
```
|
29,583,102
|
I've been working locally on an AngularJS front-end, Django REST backend project and I want to deploy it to Heroku. It keeps giving me errors when I try to push, however,
I've followed the steps on [here](https://devcenter.heroku.com/articles/getting-started-with-django), which has helped me deploy Django-only apps before.
Below is the output from my `heroku logs`.
```
8" dyno= connect= service= status=503 bytes=
2015-04-11T20:51:24.680164+00:00 heroku[api]: Deploy efbd3fb by me@example.com
2015-04-11T20:51:24.680164+00:00 heroku[api]: Release v5 created by me@example.com
2015-04-11T20:51:25.010989+00:00 heroku[web.1]: State changed from crashed to starting
2015-04-11T20:51:30.109140+00:00 heroku[web.1]: Starting process with command `python manage.py collectstatic --noinput; gunicorn myApp.wsgi --log-file -`
2015-04-11T20:51:31.795813+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-04-11T20:51:31.795838+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-04-11T20:51:31.824251+00:00 app[web.1]: Traceback (most recent call last):
2015-04-11T20:51:31.824257+00:00 app[web.1]: File "manage.py", line 8, in <module>
2015-04-11T20:51:31.824267+00:00 app[web.1]: from django.core.management import execute_from_command_line
2015-04-11T20:51:31.824296+00:00 app[web.1]: ImportError: No module named django.core.management
2015-04-11T20:51:31.827843+00:00 app[web.1]: bash: gunicorn: command not found
2015-04-11T20:51:32.656017+00:00 heroku[web.1]: Process exited with status 127
2015-04-11T20:51:32.672886+00:00 heroku[web.1]: State changed from starting to crashed
2015-04-11T20:51:32.672886+00:00 heroku[web.1]: State changed from crashed to starting
2015-04-11T20:51:36.675087+00:00 heroku[web.1]: Starting process with command `python manage.py collectstatic --noinput; gunicorn myApp.wsgi --log-file -`
2015-04-11T20:51:38.446374+00:00 app[web.1]: bash: gunicorn: command not found
2015-04-11T20:51:38.420704+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-04-11T20:51:38.420729+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-04-11T20:51:38.442039+00:00 app[web.1]: File "manage.py", line 8, in <module>
2015-04-11T20:51:38.442033+00:00 app[web.1]: Traceback (most recent call last):
2015-04-11T20:51:38.443526+00:00 app[web.1]: ImportError: No module named django.core.management
2015-04-11T20:51:38.442047+00:00 app[web.1]: from django.core.management import execute_from_command_line
2015-04-11T20:51:39.265192+00:00 heroku[web.1]: Process exited with status 127
2015-04-11T20:51:39.287328+00:00 heroku[web.1]: State changed from starting to crashed
2015-04-11T20:52:10.960135+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=myapp.herokuapp.com request_id=afbb960d-eae4-4891-a885-d4a7e3880f1f fwd="64.247.79.248" dyno= connect= service= status=503 bytes=
2015-04-11T20:52:11.321003+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=myapp.herokuapp.com request_id=de3242f1-7dda-4cc5-8b35-6d7c77392151 fwd="64.247.79.248" dyno= connect= service= status=503 bytes=
```
It seems like Heroku is confused because of an ambiguity between which server to use (Node.js vs. Django) and can't resolve it. Django can't seem to be found by Heroku.
The basic outline of my project is based on this example: <https://github.com/brwr/thinkster-django-angular>.
My settings file is as follows:
```
"""
Django settings for my project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = SUPER_SECRET
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DEBUG', True)
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'debug_toolbar',
'rest_framework',
'compressor',
'authentication',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'myApp.urls'
WSGI_APPLICATION = 'myApp.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
import dj_database_url
DATABASES = {
'default': dj_database_url.config(
default='sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')
)
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'
STATICFILES_DIRS = (
# os.path.join(BASE_DIR, 'dist/static'),
os.path.join(BASE_DIR, 'static'),
)
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'compressor.finders.CompressorFinder',
)
COMPRESS_ENABLED = os.environ.get('COMPRESS_ENABLED', False)
TEMPLATE_DIRS = (
os.path.join(BASE_DIR, 'templates'),
)
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.SessionAuthentication',
)
}
# Honor the 'X-Forwarded-Proto' header for request.is_secure()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Allow all host headers
ALLOWED_HOSTS = ['*']
AUTH_USER_MODEL = 'authentication.Account'
# Parse database configuration from $DATABASE_URL
import dj_database_url
DATABASES['default'] = dj_database_url.config()
# Enable Connection Pooling
DATABASES['default']['ENGINE'] = 'django_postgrespool'
# Simplified static file serving.
# https://warehouse.python.org/project/whitenoise/
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
# Honor the 'X-Forwarded-Proto' header for request.is_secure()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```
If any more information is needed, I can add it.
Thanks,
erip
**EDIT**
I noticed this when pushing to Heroku:

It definitely is being recognized as a Node.js backend rather than Django and is, therefore, not installing from my `requirements.txt`. How can I change this? I tried `heroku config:set BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-python`, but it didn't seem to work.
|
2015/04/11
|
[
"https://Stackoverflow.com/questions/29583102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2883245/"
] |
I don't understand why you need Node at all in a Django project - it's not required for Angular or DRF - but the instructions in the linked project mention setting the buildpack to a custom "multi" one:
```
heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
```
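If I remember correctly, that multi buildpack looks for a `.buildpacks` file at the root of the repo with one buildpack URL per line, roughly like this (double-check the buildpack's README, since this is an assumption about your setup):
```
https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-python.git
```
Ordering typically matters: the last buildpack listed is the one whose defaults (e.g. process types) win, so putting Python last should keep the app treated as a Django app.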
|
It seems like you have not installed Django in your virtualenv.
Have you forgotten to run `pip install django-toolbelt`?
|
8,949,454
|
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16.
I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12).
If there were only 3 labels, the code would be something like this:
```
def update_grid(self):
if self.timer_id is not None:
self.__actual_choice = (self.__actual_choice % 16)+1
if self.__actual_choice == 1:
self.label1.modify_font(self.__font_big)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 2:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_big)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 3:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_big)
```
But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like:
```
self.(label+"i").modify_font(self.__font_small)
```
|
2012/01/21
|
[
"https://Stackoverflow.com/questions/8949454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090788/"
] |
You *could* use the built-in function, [`getattr()`](http://docs.python.org/library/functions.html#getattr) as others have suggested:
```
label = getattr(self, 'label%d' % i)
label.modify_font(self.__font_small)
```
But in reality, you'd be better off storing your 16 `label`s in a [`list`](http://docs.python.org/tutorial/datastructures.html#more-on-lists). Lots of variables with numbers at the end is a terrible [code smell](http://martinfowler.com/bliki/CodeSmell.html).
```
for index, label in enumerate(self.labels):
if index == self.__actual_choice:
label.modify_font(self.__font_big)
else:
label.modify_font(self.__font_small)
```
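If the numbered attributes already exist (e.g. they come from a UI builder), one way to build that list is a one-liner with `getattr` (a sketch; `label1` … `label16` are the names from the question):
```
self.labels = [getattr(self, 'label%d' % i) for i in range(1, 17)]
```
Better still, create the labels in a loop in the first place and append them to `self.labels` as you go.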
|
You should look into the `getattr` function.
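For example, a minimal sketch of what `getattr` buys you here (assuming the `label1` … `label16` attributes from the question):
```
for i in range(1, 17):
    label = getattr(self, 'label%d' % i)  # same object as self.label1 ... self.label16
    if i == self.__actual_choice:
        label.modify_font(self.__font_big)
    else:
        label.modify_font(self.__font_small)
```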
|
8,949,454
|
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16.
I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12).
If there were only 3 labels, the code would be something like this:
```
def update_grid(self):
if self.timer_id is not None:
self.__actual_choice = (self.__actual_choice % 16)+1
if self.__actual_choice == 1:
self.label1.modify_font(self.__font_big)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 2:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_big)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 3:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_big)
```
But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like:
```
self.(label+"i").modify_font(self.__font_small)
```
|
2012/01/21
|
[
"https://Stackoverflow.com/questions/8949454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090788/"
] |
You should look into the `getattr` function.
|
Why in the world would you write that as a bunch of `if` statements?
```
modify_fonts = [getattr(self, "label%s" % i).modify_font for i in xrange(1, 17)]
```
You now have a list of the methods you can call for each label. That is, `modify_fonts[0]` is `self.label1.modify_font` and so on. Then you call them in a loop.
```
actual_choice = (self.__actual_choice % 16) # no +1, we're zero-based
for index, modify_font in enumerate(modify_fonts):
modify_font(self.__font_big if index == actual_choice else self.__font_small)
```
|
8,949,454
|
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16.
I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12).
If there were only 3 labels, the code would be something like this:
```
def update_grid(self):
if self.timer_id is not None:
self.__actual_choice = (self.__actual_choice % 16)+1
if self.__actual_choice == 1:
self.label1.modify_font(self.__font_big)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 2:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_big)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 3:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_big)
```
But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like:
```
self.(label+"i").modify_font(self.__font_small)
```
|
2012/01/21
|
[
"https://Stackoverflow.com/questions/8949454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090788/"
] |
You *could* use the built-in function, [`getattr()`](http://docs.python.org/library/functions.html#getattr) as others have suggested:
```
label = getattr(self, 'label%d' % i)
label.modify_font(self.__font_small)
```
But in reality, you'd be better off storing your 16 `label`s in a [`list`](http://docs.python.org/tutorial/datastructures.html#more-on-lists). Lots of variables with numbers at the end is a terrible [code smell](http://martinfowler.com/bliki/CodeSmell.html).
```
for index, label in enumerate(self.labels):
if index == self.__actual_choice:
label.modify_font(self.__font_big)
else:
label.modify_font(self.__font_small)
```
|
Use [`getattr`](http://docs.python.org/library/functions.html#getattr):
```
self.__actual_choice = (self.__actual_choice % 16)+1
for i in range(1, 17):
if i == self.__actual_choice:
getattr(self, 'label' + str(i)).modify_font(self.__font_big)
else:
getattr(self, 'label' + str(i)).modify_font(self.__font_small)
```
|
8,949,454
|
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16.
I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12).
If there were only 3 labels, the code would be something like this:
```
def update_grid(self):
if self.timer_id is not None:
self.__actual_choice = (self.__actual_choice % 16)+1
if self.__actual_choice == 1:
self.label1.modify_font(self.__font_big)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 2:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_big)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 3:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_big)
```
But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like:
```
self.(label+"i").modify_font(self.__font_small)
```
|
2012/01/21
|
[
"https://Stackoverflow.com/questions/8949454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090788/"
] |
Use [`getattr`](http://docs.python.org/library/functions.html#getattr):
```
self.__actual_choice = (self.__actual_choice % 16)+1
for i in range(1, 17):
if i == self.__actual_choice:
getattr(self, 'label' + str(i)).modify_font(self.__font_big)
else:
getattr(self, 'label' + str(i)).modify_font(self.__font_small)
```
|
Why in the world would you write that as a bunch of `if` statements?
```
modify_fonts = [getattr(self, "label%s" % i).modify_font for i in xrange(1, 17)]
```
You now have a list of the methods you can call for each label. That is, `modify_fonts[0]` is `self.label1.modify_font` and so on. Then you call them in a loop.
```
actual_choice = (self.__actual_choice % 16) # no +1, we're zero-based
for index, modify_font in enumerate(modify_fonts):
modify_font(self.__font_big if index == actual_choice else self.__font_small)
```
|
8,949,454
|
I have a pygtk Table with 16 squares, each of them containing a label. Label names are: label1, label2, label3, ..., label16.
I also have a timer that fires every *n* seconds. When the timer is fired one of the squares is highlighted (just set the font size of it to 18 and the font size of the rest to 12).
If there were only 3 labels, the code would be something like this:
```
def update_grid(self):
if self.timer_id is not None:
self.__actual_choice = (self.__actual_choice % 16)+1
if self.__actual_choice == 1:
self.label1.modify_font(self.__font_big)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 2:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_big)
self.label3.modify_font(self.__font_small)
elif self.__actual_choice == 3:
self.label1.modify_font(self.__font_small)
self.label2.modify_font(self.__font_small)
self.label3.modify_font(self.__font_big)
```
But having 16 labels, the code would be huge. I wonder if there is a way in python of doing something like:
```
self.(label+"i").modify_font(self.__font_small)
```
|
2012/01/21
|
[
"https://Stackoverflow.com/questions/8949454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090788/"
] |
You *could* use the built-in function, [`getattr()`](http://docs.python.org/library/functions.html#getattr) as others have suggested:
```
label = getattr(self, 'label%d' % i)
label.modify_font(self.__font_small)
```
But in reality, you'd be better off storing your 16 `label`s in a [`list`](http://docs.python.org/tutorial/datastructures.html#more-on-lists). Lots of variables with numbers at the end is a terrible [code smell](http://martinfowler.com/bliki/CodeSmell.html).
```
for index, label in enumerate(self.labels):
if index == self.__actual_choice:
label.modify_font(self.__font_big)
else:
label.modify_font(self.__font_small)
```
|
Why in the world would you write that as a bunch of `if` statements?
```
modify_fonts = [getattr(self, "label%s" % i).modify_font for i in xrange(1, 17)]
```
You now have a list of the methods you can call for each label. That is, `modify_fonts[0]` is `self.label1.modify_font` and so on. Then you call them in a loop.
```
actual_choice = (self.__actual_choice % 16) # no +1, we're zero-based
for index, modify_font in enumerate(modify_fonts):
modify_font(self.__font_big if index == actual_choice else self.__font_small)
```
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in python and then get the resulting integer out of it?
For example I have the numbers 1,2,5,15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32 bit integers how should I do this in python? The part I am unsure about is how to move individual bits around in python and if there is anything funny about using a 32bit field as an integer after doing so.
PS This isn't homework, I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
Numpy indexing arrays provide concise notation for application of bit reversal permutations.
```
import numpy as np
def bitrev_map(nbits):
"""create bit reversal mapping
>>> bitrev_map(3)
array([0, 4, 2, 6, 1, 5, 3, 7], dtype=uint16)
>>> import numpy as np
>>> np.arange(8)[bitrev_map(3)]
array([0, 4, 2, 6, 1, 5, 3, 7])
>>> (np.arange(8)[bitrev_map(3)])[bitrev_map(3)]
array([0, 1, 2, 3, 4, 5, 6, 7])
"""
assert isinstance(nbits, int) and nbits > 0, 'bit size must be positive integer'
dtype = np.uint32 if nbits > 16 else np.uint16
brmap = np.empty(2**nbits, dtype=dtype)
int_, ifmt, fmtstr = int, int.__format__, ("0%db" % nbits)
for i in range(2**nbits):
brmap[i] = int_(ifmt(i, fmtstr)[::-1], base=2)
return brmap
```
When bit reversing many vectors one wants to store the mapping anyway.
The implementation reverses binary literal representations.
|
You can do something like this, where you define how many bits to use via the argument `l`.
```
def reverse(x, l):
r = x & 1
for i in range (1, l):
r = r << 1 | (x >> i) & 1
print(bin(r))
return r
```
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in python and then get the resulting integer out of it?
For example I have the numbers 1,2,5,15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32 bit integers how should I do this in python? The part I am unsure about is how to move individual bits around in python and if there is anything funny about using a 32bit field as an integer after doing so.
PS This isn't homework, I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
This is a fast way to do it:
I got it from [here](https://stackoverflow.com/questions/746171/best-algorithm-for-bit-reversal-from-msb-lsb-to-lsb-msb-in-c)
```
BitReverseTable256 = [0x00, 0x80, 0x40, 0xC0, 0x20, 0xA0, 0x60, 0xE0, 0x10, 0x90, 0x50, 0xD0, 0x30, 0xB0, 0x70, 0xF0,
0x08, 0x88, 0x48, 0xC8, 0x28, 0xA8, 0x68, 0xE8, 0x18, 0x98, 0x58, 0xD8, 0x38, 0xB8, 0x78, 0xF8,
0x04, 0x84, 0x44, 0xC4, 0x24, 0xA4, 0x64, 0xE4, 0x14, 0x94, 0x54, 0xD4, 0x34, 0xB4, 0x74, 0xF4,
0x0C, 0x8C, 0x4C, 0xCC, 0x2C, 0xAC, 0x6C, 0xEC, 0x1C, 0x9C, 0x5C, 0xDC, 0x3C, 0xBC, 0x7C, 0xFC,
0x02, 0x82, 0x42, 0xC2, 0x22, 0xA2, 0x62, 0xE2, 0x12, 0x92, 0x52, 0xD2, 0x32, 0xB2, 0x72, 0xF2,
0x0A, 0x8A, 0x4A, 0xCA, 0x2A, 0xAA, 0x6A, 0xEA, 0x1A, 0x9A, 0x5A, 0xDA, 0x3A, 0xBA, 0x7A, 0xFA,
0x06, 0x86, 0x46, 0xC6, 0x26, 0xA6, 0x66, 0xE6, 0x16, 0x96, 0x56, 0xD6, 0x36, 0xB6, 0x76, 0xF6,
0x0E, 0x8E, 0x4E, 0xCE, 0x2E, 0xAE, 0x6E, 0xEE, 0x1E, 0x9E, 0x5E, 0xDE, 0x3E, 0xBE, 0x7E, 0xFE,
0x01, 0x81, 0x41, 0xC1, 0x21, 0xA1, 0x61, 0xE1, 0x11, 0x91, 0x51, 0xD1, 0x31, 0xB1, 0x71, 0xF1,
0x09, 0x89, 0x49, 0xC9, 0x29, 0xA9, 0x69, 0xE9, 0x19, 0x99, 0x59, 0xD9, 0x39, 0xB9, 0x79, 0xF9,
0x05, 0x85, 0x45, 0xC5, 0x25, 0xA5, 0x65, 0xE5, 0x15, 0x95, 0x55, 0xD5, 0x35, 0xB5, 0x75, 0xF5,
0x0D, 0x8D, 0x4D, 0xCD, 0x2D, 0xAD, 0x6D, 0xED, 0x1D, 0x9D, 0x5D, 0xDD, 0x3D, 0xBD, 0x7D, 0xFD,
0x03, 0x83, 0x43, 0xC3, 0x23, 0xA3, 0x63, 0xE3, 0x13, 0x93, 0x53, 0xD3, 0x33, 0xB3, 0x73, 0xF3,
0x0B, 0x8B, 0x4B, 0xCB, 0x2B, 0xAB, 0x6B, 0xEB, 0x1B, 0x9B, 0x5B, 0xDB, 0x3B, 0xBB, 0x7B, 0xFB,
0x07, 0x87, 0x47, 0xC7, 0x27, 0xA7, 0x67, 0xE7, 0x17, 0x97, 0x57, 0xD7, 0x37, 0xB7, 0x77, 0xF7,
0x0F, 0x8F, 0x4F, 0xCF, 0x2F, 0xAF, 0x6F, 0xEF, 0x1F, 0x9F, 0x5F, 0xDF, 0x3F, 0xBF, 0x7F, 0xFF]
v = 32  # reverse any 32-bit int value, 8 bits at a time
c = (BitReverseTable256[v & 0xff] << 24 |
     BitReverseTable256[(v >> 8) & 0xff] << 16 |
     BitReverseTable256[(v >> 16) & 0xff] << 8 |
     BitReverseTable256[(v >> 24) & 0xff])
```
|
You can do something like this, where you define how many bits to use via the argument `l`.
```
def reverse(x, l):
r = x & 1
for i in range (1, l):
r = r << 1 | (x >> i) & 1
print(bin(r))
return r
```
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in python and then get the resulting integer out of it?
For example I have the numbers 1,2,5,15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32 bit integers how should I do this in python? The part I am unsure about is how to move individual bits around in python and if there is anything funny about using a 32bit field as an integer after doing so.
PS This isn't homework, I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
This is a fast way to do it:
I got it from [here](https://stackoverflow.com/questions/746171/best-algorithm-for-bit-reversal-from-msb-lsb-to-lsb-msb-in-c)
```
BitReverseTable256 = [0x00, 0x80, 0x40, 0xC0, 0x20, 0xA0, 0x60, 0xE0, 0x10, 0x90, 0x50, 0xD0, 0x30, 0xB0, 0x70, 0xF0,
0x08, 0x88, 0x48, 0xC8, 0x28, 0xA8, 0x68, 0xE8, 0x18, 0x98, 0x58, 0xD8, 0x38, 0xB8, 0x78, 0xF8,
0x04, 0x84, 0x44, 0xC4, 0x24, 0xA4, 0x64, 0xE4, 0x14, 0x94, 0x54, 0xD4, 0x34, 0xB4, 0x74, 0xF4,
0x0C, 0x8C, 0x4C, 0xCC, 0x2C, 0xAC, 0x6C, 0xEC, 0x1C, 0x9C, 0x5C, 0xDC, 0x3C, 0xBC, 0x7C, 0xFC,
0x02, 0x82, 0x42, 0xC2, 0x22, 0xA2, 0x62, 0xE2, 0x12, 0x92, 0x52, 0xD2, 0x32, 0xB2, 0x72, 0xF2,
0x0A, 0x8A, 0x4A, 0xCA, 0x2A, 0xAA, 0x6A, 0xEA, 0x1A, 0x9A, 0x5A, 0xDA, 0x3A, 0xBA, 0x7A, 0xFA,
0x06, 0x86, 0x46, 0xC6, 0x26, 0xA6, 0x66, 0xE6, 0x16, 0x96, 0x56, 0xD6, 0x36, 0xB6, 0x76, 0xF6,
0x0E, 0x8E, 0x4E, 0xCE, 0x2E, 0xAE, 0x6E, 0xEE, 0x1E, 0x9E, 0x5E, 0xDE, 0x3E, 0xBE, 0x7E, 0xFE,
0x01, 0x81, 0x41, 0xC1, 0x21, 0xA1, 0x61, 0xE1, 0x11, 0x91, 0x51, 0xD1, 0x31, 0xB1, 0x71, 0xF1,
0x09, 0x89, 0x49, 0xC9, 0x29, 0xA9, 0x69, 0xE9, 0x19, 0x99, 0x59, 0xD9, 0x39, 0xB9, 0x79, 0xF9,
0x05, 0x85, 0x45, 0xC5, 0x25, 0xA5, 0x65, 0xE5, 0x15, 0x95, 0x55, 0xD5, 0x35, 0xB5, 0x75, 0xF5,
0x0D, 0x8D, 0x4D, 0xCD, 0x2D, 0xAD, 0x6D, 0xED, 0x1D, 0x9D, 0x5D, 0xDD, 0x3D, 0xBD, 0x7D, 0xFD,
0x03, 0x83, 0x43, 0xC3, 0x23, 0xA3, 0x63, 0xE3, 0x13, 0x93, 0x53, 0xD3, 0x33, 0xB3, 0x73, 0xF3,
0x0B, 0x8B, 0x4B, 0xCB, 0x2B, 0xAB, 0x6B, 0xEB, 0x1B, 0x9B, 0x5B, 0xDB, 0x3B, 0xBB, 0x7B, 0xFB,
0x07, 0x87, 0x47, 0xC7, 0x27, 0xA7, 0x67, 0xE7, 0x17, 0x97, 0x57, 0xD7, 0x37, 0xB7, 0x77, 0xF7,
0x0F, 0x8F, 0x4F, 0xCF, 0x2F, 0xAF, 0x6F, 0xEF, 0x1F, 0x9F, 0x5F, 0xDF, 0x3F, 0xBF, 0x7F, 0xFF]
v = 32  # the 32-bit value to reverse; the table handles 8 bits at a time
c = (BitReverseTable256[v & 0xff] << 24 |
     BitReverseTable256[(v >> 8) & 0xff] << 16 |
     BitReverseTable256[(v >> 16) & 0xff] << 8 |
     BitReverseTable256[(v >> 24) & 0xff])
```
|
[This](http://wiki.python.org/moin/BitwiseOperators) is the first link I found googling for "python bitwise" and it has a pretty good summary of how to do bit-twiddling in Python.
If you don't care too much about speed, the most straightforward solution is to go through each bit-place of the original number and, if it is set, set the corresponding bit of your return value.
```
def reversed( x, num_bits ):
answer = 0
for i in range( num_bits ): # for each bit number
if (x & (1 << i)): # if it matches that bit
answer |= (1 << (num_bits - 1 - i)) # set the "opposite" bit in answer
return answer
```
I'm sure you could do this with a comprehension, too. If you really needed speed, you'd probably do it 8 bits at a time with a 256-value precomputed lookup table.
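For what it's worth, here is a sketch of that comprehension-style variant (the same logic as the loop above, just condensed; not part of the original answer):
```
def reversed_comp(x, num_bits):
    # collect the "opposite" bit for every bit that is set in x
    return sum(1 << (num_bits - 1 - i) for i in range(num_bits) if x & (1 << i))
```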
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it?
For example, I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32-bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so.
PS: This isn't homework; I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
If you truly want 32 bits rather than 4, then here is a function to do what you want:
```
def revbits(x):
return int(bin(x)[2:].zfill(32)[::-1], 2)
```
Basically I'm converting to binary, stripping off the '0b' in front, filling with zeros up to 32 chars, reversing, and converting back to an integer.
It might be faster to do the actual bit work as in gnibbler's answer.
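As a quick sanity check of the idea, a hypothetical 4-bit variant (zfill(4) instead of zfill(32), added here for illustration) reproduces the example values from the question:
```
def revbits4(x):
    return int(bin(x)[2:].zfill(4)[::-1], 2)

revbits4(1), revbits4(2), revbits4(5), revbits4(15)  # (8, 4, 10, 15)
```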
|
You can do something like this, where the argument l defines how many bits to reverse:
```
def reverse(x, l):
r = x & 1
for i in range (1, l):
r = r << 1 | (x >> i) & 1
print(bin(r))
return r
```
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it?
For example, I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32-bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so.
PS: This isn't homework; I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
This is a fast way to do it:
I got it from [here](https://stackoverflow.com/questions/746171/best-algorithm-for-bit-reversal-from-msb-lsb-to-lsb-msb-in-c)
```
BitReverseTable256 = [0x00, 0x80, 0x40, 0xC0, 0x20, 0xA0, 0x60, 0xE0, 0x10, 0x90, 0x50, 0xD0, 0x30, 0xB0, 0x70, 0xF0,
0x08, 0x88, 0x48, 0xC8, 0x28, 0xA8, 0x68, 0xE8, 0x18, 0x98, 0x58, 0xD8, 0x38, 0xB8, 0x78, 0xF8,
0x04, 0x84, 0x44, 0xC4, 0x24, 0xA4, 0x64, 0xE4, 0x14, 0x94, 0x54, 0xD4, 0x34, 0xB4, 0x74, 0xF4,
0x0C, 0x8C, 0x4C, 0xCC, 0x2C, 0xAC, 0x6C, 0xEC, 0x1C, 0x9C, 0x5C, 0xDC, 0x3C, 0xBC, 0x7C, 0xFC,
0x02, 0x82, 0x42, 0xC2, 0x22, 0xA2, 0x62, 0xE2, 0x12, 0x92, 0x52, 0xD2, 0x32, 0xB2, 0x72, 0xF2,
0x0A, 0x8A, 0x4A, 0xCA, 0x2A, 0xAA, 0x6A, 0xEA, 0x1A, 0x9A, 0x5A, 0xDA, 0x3A, 0xBA, 0x7A, 0xFA,
0x06, 0x86, 0x46, 0xC6, 0x26, 0xA6, 0x66, 0xE6, 0x16, 0x96, 0x56, 0xD6, 0x36, 0xB6, 0x76, 0xF6,
0x0E, 0x8E, 0x4E, 0xCE, 0x2E, 0xAE, 0x6E, 0xEE, 0x1E, 0x9E, 0x5E, 0xDE, 0x3E, 0xBE, 0x7E, 0xFE,
0x01, 0x81, 0x41, 0xC1, 0x21, 0xA1, 0x61, 0xE1, 0x11, 0x91, 0x51, 0xD1, 0x31, 0xB1, 0x71, 0xF1,
0x09, 0x89, 0x49, 0xC9, 0x29, 0xA9, 0x69, 0xE9, 0x19, 0x99, 0x59, 0xD9, 0x39, 0xB9, 0x79, 0xF9,
0x05, 0x85, 0x45, 0xC5, 0x25, 0xA5, 0x65, 0xE5, 0x15, 0x95, 0x55, 0xD5, 0x35, 0xB5, 0x75, 0xF5,
0x0D, 0x8D, 0x4D, 0xCD, 0x2D, 0xAD, 0x6D, 0xED, 0x1D, 0x9D, 0x5D, 0xDD, 0x3D, 0xBD, 0x7D, 0xFD,
0x03, 0x83, 0x43, 0xC3, 0x23, 0xA3, 0x63, 0xE3, 0x13, 0x93, 0x53, 0xD3, 0x33, 0xB3, 0x73, 0xF3,
0x0B, 0x8B, 0x4B, 0xCB, 0x2B, 0xAB, 0x6B, 0xEB, 0x1B, 0x9B, 0x5B, 0xDB, 0x3B, 0xBB, 0x7B, 0xFB,
0x07, 0x87, 0x47, 0xC7, 0x27, 0xA7, 0x67, 0xE7, 0x17, 0x97, 0x57, 0xD7, 0x37, 0xB7, 0x77, 0xF7,
0x0F, 0x8F, 0x4F, 0xCF, 0x2F, 0xAF, 0x6F, 0xEF, 0x1F, 0x9F, 0x5F, 0xDF, 0x3F, 0xBF, 0x7F, 0xFF]
v = 32  # the 32-bit value to reverse; the table handles 8 bits at a time
c = (BitReverseTable256[v & 0xff] << 24 |
     BitReverseTable256[(v >> 8) & 0xff] << 16 |
     BitReverseTable256[(v >> 16) & 0xff] << 8 |
     BitReverseTable256[(v >> 24) & 0xff])
```
|
NumPy index arrays provide a concise notation for applying bit-reversal permutations.
```
import numpy as np
def bitrev_map(nbits):
"""create bit reversal mapping
>>> bitrev_map(3)
array([0, 4, 2, 6, 1, 5, 3, 7], dtype=uint16)
>>> import numpy as np
>>> np.arange(8)[bitrev_map(3)]
array([0, 4, 2, 6, 1, 5, 3, 7])
>>> (np.arange(8)[bitrev_map(3)])[bitrev_map(3)]
array([0, 1, 2, 3, 4, 5, 6, 7])
"""
assert isinstance(nbits, int) and nbits > 0, 'bit size must be positive integer'
dtype = np.uint32 if nbits > 16 else np.uint16
brmap = np.empty(2**nbits, dtype=dtype)
int_, ifmt, fmtstr = int, int.__format__, ("0%db" % nbits)
for i in range(2**nbits):
brmap[i] = int_(ifmt(i, fmtstr)[::-1], base=2)
return brmap
```
When bit-reversing many vectors, you want to store the mapping anyway.
The implementation reverses the binary string representation of each index.
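For example (a small usage sketch, not part of the original answer), the mapping doubles as a per-integer bit reversal and as a permutation for whole arrays:
```
brmap = bitrev_map(4)
int(brmap[5])        # 10, i.e. 0b0101 reversed within 4 bits
data = np.arange(16)
data[brmap]          # data reordered into bit-reversed index order
```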
|
5,333,509
|
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it?
For example, I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:
```
original reversed
1 - 0001 - 1000 - 8
2 - 0010 - 0100 - 4
5 - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```
Given that these numbers are 32-bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so.
PS: This isn't homework; I am just trying to program the solution to a logic puzzle.
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5333509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/330013/"
] |
```
reversed_ = sum(1<<(numbits-1-i) for i in range(numbits) if original>>i&1)
```
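For context (an added note, not part of the original answer): this is the loop approach from the other answer collapsed into a generator expression. With the question's example values:
```
original, numbits = 5, 4
sum(1 << (numbits - 1 - i) for i in range(numbits) if original >> i & 1)  # 10
```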
|
[This](http://wiki.python.org/moin/BitwiseOperators) is the first link I found googling for "python bitwise" and it has a pretty good summary of how to do bit-twiddling in Python.
If you don't care too much about speed, the most straightforward solution is to go through each bit-place of the original number and, if it is set, set the corresponding bit of your return value.
```
def reversed( x, num_bits ):
answer = 0
for i in range( num_bits ): # for each bit number
if (x & (1 << i)): # if it matches that bit
answer |= (1 << (num_bits - 1 - i)) # set the "opposite" bit in answer
return answer
```
I'm sure you could do this with a comprehension, too. If you really needed speed, you'd probably do it 8 bits at a time with a 256-value precomputed lookup table.
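A minimal sketch of that lookup-table idea, building the 256-entry table from the function above rather than writing it out by hand (an added illustration, not the original answer's code):
```
table = [reversed(b, 8) for b in range(256)]   # precomputed byte-reversal table

def rev32(x):
    # reverse a 32-bit value one byte at a time
    return (table[x & 0xff] << 24 |
            table[(x >> 8) & 0xff] << 16 |
            table[(x >> 16) & 0xff] << 8 |
            table[(x >> 24) & 0xff])
```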
|