Dataset schema (22 columns per record; string ranges are min/max lengths):

Column                               Dtype    Min        Max
GUI and Desktop Applications         int64    0          1
A_Id                                 int64    5.3k       72.5M
Networking and APIs                  int64    0          1
Python Basics and Environment        int64    0          1
Other                                int64    0          1
Database and SQL                     int64    0          1
Available Count                      int64    1          13
is_accepted                          bool     2 classes
Q_Score                              int64    0          1.72k
CreationDate                         string   len 23     len 23
Users Score                          int64    -11        327
AnswerCount                          int64    1          31
System Administration and DevOps     int64    0          1
Title                                string   len 15     len 149
Q_Id                                 int64    5.14k      60M
Score                                float64  -1         1.2
Tags                                 string   len 6      len 90
Answer                               string   len 18     len 5.54k
Question                             string   len 49     len 9.42k
Web Development                      int64    0          1
Data Science and Machine Learning    int64    1          1
ViewCount                            int64    7          3.27M
Title: Sampling from a multivariate probability density function in python
Q_Id: 48,675,954 | A_Id: 48,676,209 | CreationDate: 2018-02-08T01:13:00.000
Tags: python,random,statistics,probability,probability-density
Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 2 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0.379949 | ViewCount: 965
Question: I have a multivariate probability density function P(x,y,z), and I want to sample from it. Normally, I would use numpy.random.choice() for this sort of task, but this function only works for 1-dimensional probability densities. Is there an equivalent function for multivariate PDFs?
Answer: There a few different paths one can follow here. (1) If P(x,y,z) factors as P(x,y,z) = P(x) P(y) P(z) (i.e., x, y, and z are independent) then you can sample each one separately. (2) If P(x,y,z) has a more general factorization, you can reduce the number of variables that need to be sampled to whatever's conditional on...
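The first path in the answer above (independent marginals) can be sketched with plain NumPy: each axis is sampled separately with Generator.choice. The grid values and probabilities below are made up for illustration; they are not from the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized marginals: support points and their probabilities.
x_vals, x_probs = np.array([0.0, 1.0, 2.0]), np.array([0.2, 0.5, 0.3])
y_vals, y_probs = np.array([-1.0, 1.0]), np.array([0.4, 0.6])
z_vals, z_probs = np.array([10.0, 20.0]), np.array([0.7, 0.3])

def sample_independent(n):
    """Draw n samples from P(x,y,z) = P(x) P(y) P(z) by sampling each axis separately."""
    x = rng.choice(x_vals, size=n, p=x_probs)
    y = rng.choice(y_vals, size=n, p=y_probs)
    z = rng.choice(z_vals, size=n, p=z_probs)
    return np.column_stack([x, y, z])

samples = sample_independent(10_000)
```

For a non-factorizing P(x,y,z), this per-axis trick no longer applies and one of the more general strategies in the answer (conditional sampling, rejection, MCMC) is needed.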
Title: RGB in OpenCV. What does it mean?
Q_Id: 48,683,621 | A_Id: 48,683,822 | CreationDate: 2018-02-08T10:47:00.000
Tags: python,opencv,image-processing
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 1 | Score: 0 | ViewCount: 135
Question: Assume we are reading and loading an image using OpenCV from a specific location on our drive and then we read some pixels values and colors, and lets assume that this is a scanned image. Usually if we open scanned image we will notice some differences between the printed image (before scanning) and the image if we op...
Answer: As long as you don't change the extension of the image file, the pixel values don't change because they're stored in memory and your display or printer are just the way you want to see the image and often you don't get the same thing because it depends on the technology and different filters applied to you image before...

Title: Get 3D coordinates in OpenCV having X,Y and distance to object
Q_Id: 48,693,266 | CreationDate: 2018-02-08T19:23:00.000 | Q_Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 630
Tags: python,c++,opencv
Topics: Data Science and Machine Learning
Question: I am trying to convert X,Y position of a tracked object in an image to 3D coordinates. I got the distance to the object based on the size of the tracked object (A marker) but now I need to convert all of this to a 3D coordinate in the space. I have been reading a lot about this but all of the methods I found require a ...
Answers:
- A_Id: 48,694,488 | is_accepted: false | Users Score: 0 | Score: 0
  The "without calibration" bit dooms you, sorry. Without knowing the focal length (or, equivalently, the field of view) you cannot "convert" a pixel into a ray. Note that you can sometimes get an approximate calibration directly from the camera - for example, it might write a focal length for its lens into the EXIF hea...
- A_Id: 48,694,562 | is_accepted: false | Users Score: 0 | Score: 0
  If you're using some sort of micro controller, it may be possible to point a sensor towards that object that's seen through the camera to get the distance. You would most likely have to have a complex algorithm to get multiple cameras to work together to return the distance. If there's no calibration, there would be no...

Title: pandas read csv ignore newline
Q_Id: 48,694,790 | A_Id: 49,092,434 | CreationDate: 2018-02-08T21:07:00.000
Tags: python,pandas,biopython
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 1 | Score: 0 | ViewCount: 9,964
Question: i have a dataset (for compbio people out there, it's a FASTA) that is littered with newlines, that don't act as a delimiter of the data. Is there a way for pandas to ignore newlines when importing, using any of the pandas read functions? sample data: >ERR899297.10000174 TGTAATATTGCCTGTAGCGGGAGTTGTTGTCTCAGGATCAGCATTA...
Answer: There is no good way to do this. BioPython alone seems to be sufficient, over a hybrid solution involving iterating through a BioPython object, and inserting into a dataframe

Title: Matching genes in string in Python
Q_Id: 48,696,556 | A_Id: 48,696,674 | CreationDate: 2018-02-08T23:36:00.000
Tags: python,regex,subset
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | Score: 0 | ViewCount: 271
Question: I'm trying to match text strings (gene names) in a column from one file to text strings in a column of another, in order to create a subset the second. For simplicity, the first will look more or less like this: hits = ["IL1", "NRC31", "AR", etc.] However, the column of interest in the second df looks like this: 68 ...
Answer: For character handling at such micro level the query will end up being clunky with high response time — if you're lucky to write a working one. This's more of a script kind of operation.

Title: What is the normalization method so that there is no negative value?
Q_Id: 48,697,967 | A_Id: 48,697,994 | CreationDate: 2018-02-09T02:41:00.000
Tags: python,normalization,image-preprocessing
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | Score: 1.2 | ViewCount: 424
Question: I am trying to normalize MR image. There is a negative value in the MR image. So the MR image was normalized using the Gaussian method, resulting in a negative area. But i don't want to get negative area. My question: What is the normalization method so that there is no negative value? Thanks in advance
Answer: Without more information about your source code and the packages you're using, this is really more of a data science question than a python question. To answer your question, a more than satisfactory method in most circumstances it min-max scaling. Simply normalize each coordinate of your images between 0 and 1. Whethe...
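The min-max scaling suggested in the answer above can be sketched in a few lines: map the array's minimum to 0 and maximum to 1, which by construction removes negative values. The small array below is a hypothetical stand-in for an MR slice.

```python
import numpy as np

def min_max_scale(img):
    """Rescale an array linearly so its minimum maps to 0 and its maximum to 1."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)  # assumes hi > lo, i.e. a non-constant image

image = np.array([[-3.0, 0.0], [1.5, 7.0]])  # hypothetical slice with negatives
scaled = min_max_scale(image)
```

Note the divide-by-zero caveat in the comment: a constant image has hi == lo and needs a special case.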
Title: Extending a trendline in a lmfit plot
Q_Id: 48,699,357 | A_Id: 48,707,014 | CreationDate: 2018-02-09T05:38:00.000
Tags: python,lmfit
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | Score: 1.2 | ViewCount: 213
Question: I have fitted a curve using lmfit but the trendline/curve is short. Please how do I extend the trendline/curve in both directions because the trendline/curve is hanging. Sample codes are warmly welcome my senior programmers. Thanks.
Answer: More detail about what you are actually doing would be helpful. That is, vague questions can really only get vague answers. Assuming you are doing curve fitting with lmfit's Model class, then once you have your Model and a set of Parameters (say, after a fit has refined them to best match some data), then you can use ...

Title: How to use RASA NLU with RASA CORE
Q_Id: 48,700,554 | A_Id: 49,782,296 | CreationDate: 2018-02-09T07:15:00.000
Tags: python,rasa-nlu,rasa-core
Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 14 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | Score: 1.2 | ViewCount: 1,115
Question: I am new to chatbot application and RASA as well, can anyone please help me to understand how should i use RASA NLU with RASA CORE.
Answer: RASA NLU is the natural language understanding piece, which is used for taking examples of natural language and translating them into "intents." For example: "yes", "yeah", "yep" and "for sure" would all be translated into the "yes" intent. RASA CORE on the other hand is the engine that processes the flow of conversat...

Title: multiplicative group using SymPy
Q_Id: 48,708,133 | A_Id: 48,713,292 | CreationDate: 2018-02-09T14:33:00.000
Tags: python,sympy
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 1 | Users Score: -1 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: -0.197375 | ViewCount: 463
Question: I'm trying to create a multiplicative group of order q. This code generates an additive cyclic group of order 5 from sympy.combinatorics.generators import cyclic list(cyclic(5)) [(4), (0 1 2 3 4), (0 2 4 1 3), (0 3 1 4 2), (0 4 3 2 1)] Any help ?
Answer: after reading more on the subject , Multiplicative group Z * p In classical cyclic group gryptography we usually use multiplicative group Z p * , where p is prime. Z p * = { 1, 2, .... , p - 1} combined with multiplication of integers mod p So it is simply G2= [ i for i in range(1, n-1 )] #G2 multiplicativ Group of ...
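The construction in the answer above can be written out explicitly; note that Z_p* = {1, ..., p-1} is range(1, p) in Python, not range(1, n-1) as the answer's snippet has it. A small sketch checking closure under multiplication mod p, with p = 5 chosen only for illustration:

```python
p = 5  # a prime modulus (illustrative choice)

# Multiplicative group Z_p* = {1, 2, ..., p-1} under multiplication mod p.
G = list(range(1, p))

# Closure: the product of any two elements, taken mod p, is again in the group.
closed = all((a * b) % p in G for a in G for b in G)

# The group is cyclic: here 2 is a generator, so its powers mod 5
# enumerate every element of the group.
powers_of_2 = sorted({pow(2, k, p) for k in range(1, p)})
```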
Title: support vector regression time series forecasting - python
Q_Id: 48,715,867 | A_Id: 48,715,905 | CreationDate: 2018-02-10T00:08:00.000
Tags: python,scikit-learn,time-series,svm
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | Score: 0 | ViewCount: 2,181
Question: I have a dataset of peak load for a year. Its a simple two column dataset with the date and load(kWh). I want to train it on the first 9 months and then let it predict the next three months . I can't get my head around how to implement SVR. I understand my 'y' would be predicted value in kWh but what about my X value...
Answer: given multi-variable regression, y = Regression is a multi-dimensional separation which can be hard to visualize in ones head since it is not 3D. The better question might be, which are consequential to the output value `y'. Since you have the code to the loadavg in the kernel source, you can use the input parameters...
Title: Could not find a version that satisfies the requirement tensorflow
Q_Id: 48,720,833 | CreationDate: 2018-02-10T12:35:00.000 | Q_Score: 305 | AnswerCount: 23 | Available Count: 13 | ViewCount: 663,737
Tags: python,python-3.x,python-2.7,tensorflow,pip
Topics: Data Science and Machine Learning
Question: I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message: Could not find a version that satisfies the requirement Ten...
Answers:
- A_Id: 48,735,246 | is_accepted: false | Users Score: 9 | Score: 1
  Uninstalling Python and then reinstalling solved my issue and I was able to successfully install TensorFlow.
- A_Id: 51,831,928 | is_accepted: true | Users Score: 228 | Score: 1.2
  As of October 2020: Tensorflow only supports the 64-bit version of Python Tensorflow only supports Python 3.5 to 3.8 So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version.
- A_Id: 65,537,792 | is_accepted: false | Users Score: 7 | Score: 1
  (as of Jan 1st, 2021) Any over version 3.9.x there is no support for TensorFlow 2. If you are installing packages via pip with 3.9, you simply get a "package doesn't exist" message. After reverting to the latest 3.8.x. Thought I would drop this here, I will update when 3.9.x is working with Tensorflow 2.x
- A_Id: 55,988,352 | is_accepted: false | Users Score: 71 | Score: 1
  I installed it successfully by pip install https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0-py3-none-any.whl
- A_Id: 59,836,416 | is_accepted: false | Users Score: 5 | Score: 0.043451
  Looks like the problem is with Python 3.8. Use Python 3.7 instead. Steps I took to solve this. Created a python 3.7 environment with conda List item Installed rasa using pip install rasa within the environment. Worked for me.
- A_Id: 69,111,030 | is_accepted: false | Users Score: 1 | Score: 0.008695
  using pip install tensorflow --user did it for me
- A_Id: 49,432,863 | is_accepted: false | Users Score: 36 | Score: 1
  I am giving it for Windows If you are using python-3 Upgrade pip to the latest version using py -m pip install --upgrade pip Install package using py -m pip install <package-name> If you are using python-2 Upgrade pip to the latest version using py -2 -m pip install --upgrade pip Install package using py -2 -m pip ...
- A_Id: 53,488,421 | is_accepted: false | Users Score: 42 | Score: 1
  if you are using anaconda, python 3.7 is installed by default, so you have to downgrade it to 3.6: conda install python=3.6 then: pip install tensorflow it worked for me in Ubuntu.
- A_Id: 67,496,288 | is_accepted: false | Users Score: 0 | Score: 0
  This issue also happens with other libraries such as matplotlib(which doesn't support Python > 3.9 for some functions) let's just use COLAB.
- A_Id: 60,302,029 | is_accepted: false | Users Score: 0 | Score: 0
  use python version 3.6 or 3.7 but the important thing is you should install the python version of 64-bit.
- A_Id: 61,057,983 | is_accepted: false | Users Score: -2 | Score: -0.01739
  I solved the same problem with python 3.7 by installing one by one all the packages required Here are the steps: Install the package See the error message: couldn't find a version that satisfies the requirement -- the name of the module required Install the module required. Very often, installation of the required ...
- A_Id: 62,932,939 | is_accepted: false | Users Score: 3 | Score: 0.026081
  For version TensorFlow 2.2: Make sure you have python 3.8 try: python --version or python3 --version or py --version Upgrade the pip of the python which has version 3.8 try: python3 -m pip install --upgrade pip or python -m pip install --upgrade pip or py -m pip install --upgrade pip Install TensorFlow: try: pyth...
- A_Id: 64,305,430 | is_accepted: false | Users Score: 0 | Score: 0
  In case you are using Docker, make sure you have FROM python:x.y.z instead of FROM python:x.y.z-alpine.
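Most of the answers above reduce to checking the interpreter's version and bitness before running pip. Both can be inspected from Python itself; the supported range below is the one quoted in the accepted answer (Python 3.5 to 3.8, 64-bit, as of October 2020) and has changed in later TensorFlow releases.

```python
import struct
import sys

bits = struct.calcsize("P") * 8          # pointer size in bits: 32 or 64
major, minor = sys.version_info[:2]

is_64bit = bits == 64
in_supported_range = (3, 5) <= (major, minor) <= (3, 8)  # range cited above

print(f"Python {major}.{minor}, {bits}-bit")
```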
Title: How do I install packages for python ML on ubuntu?
Q_Id: 48,729,174 | A_Id: 48,729,205 | CreationDate: 2018-02-11T07:03:00.000
Tags: python,numpy,ubuntu,matplotlib,installation
Topics: System Administration and DevOps, Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 1 | Score: 0 | ViewCount: 428
Question: I am having problems trying to install the following packages on Ubuntu: scipy numpy matplotlib pandas sklearn When I execute the command: sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose I get the following message: Reading package lis...
Answer: U cannot install it using apt-get. u need to install pip first. After you install pip, just google about how to install different packages using pip

Title: How to upgrade tensorflow with GPU on google colaboratory
Q_Id: 48,731,124 | A_Id: 48,739,426 | CreationDate: 2018-02-11T11:44:00.000
Tags: python,tensorflow,google-colaboratory
Topics: Data Science and Machine Learning
Q_Score: 5 | Users Score: 4 | is_accepted: true | AnswerCount: 4 | Available Count: 1 | Score: 1.2 | ViewCount: 15,191
Question: Currently google colaboratory uses tensorflow 1.4.1. I want to upgrade it to 1.5.0 version. Each time when i executed !pip install --upgrade tensorflow command, notebook instance succesfully upgrades the tensorflow version to 1.5.0. But after upgrade operation tensorflow instance only supports "CPU". When i have execut...
Answer: Even if you will install gpu version !pip install tensorflow-gpu==1.5.0 it will still fail to import it because of the cuda libraries. Currently I have not found a way to use 1.5 version with GPU. So I would rather use 1.4.1 with gpu than 1.5 without gpu. You can send them a feedback ( Home - Send Feedback ) and hope...

Title: OOM when training on GPU external server
Q_Id: 48,738,979 | A_Id: 51,146,080 | CreationDate: 2018-02-12T03:01:00.000
Tags: python-2.7,tensorflow,out-of-memory,gpu
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 102
Question: I am trying to train my deep learning code using Keras with tensorflow backend on a remote server with GPU. However, even the GPU server states OOM. This was the output: 2018-02-09 14:19:28.918619: I tensorflow/core/common_runtime/bfc_allocator.cc:685] Stats: Limit: 10658837300 InUse: 10314885120 MaxInUse: 1034931...
Answer: You can try using model.fit_generator instead.

Title: Import my python module to rstudio
Q_Id: 48,753,128 | A_Id: 49,017,753 | CreationDate: 2018-02-12T18:28:00.000
Tags: python,rstudio,r-markdown,python-import
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 378
Question: I have developed few modules in python and I want to import them to rstudio RMarkdown file. However, I am not sure how I can do it. For example, I can't do from code.extract_feat.cluster_blast import fill_df_by_blast as fill_df as I am used to do it in pycharm. Any hint? Thanks.
Answer: First I had to make a setup.py file for my project. activate the virtual environment corresponding to my project source activate, then run python setup.py develop Now, I can import my own python library from R as I installed it in my environment.

Title: Uploading CSV - 'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte
Q_Id: 48,754,469 | A_Id: 49,515,251 | CreationDate: 2018-02-12T19:52:00.000
Tags: python-3.x
Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 603
Question: I have been trying to upload a csv file using pandas .read() function. But as you can see from my title this is what I get "'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte" And it's weird because from the same folder I was able to upload a different csv file without problems. Something that mi...
Answer: Try the below: pd.read_csv("filepath",encoding='cp1252'). This one should work as it worked for me.
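The fix in the answer above can be reproduced end to end: byte 0x92 is cp1252's right single quote, which is not a valid UTF-8 start byte, so telling pandas the file is Windows-1252 makes the decode succeed. The in-memory file below stands in for the questioner's CSV.

```python
import io

import pandas as pd

# "it's" with a curly apostrophe (U+2019) encodes to byte 0x92 in cp1252,
# which is exactly the byte that trips the default UTF-8 decoder.
raw = "name,note\nalice,it\u2019s fine\n".encode("cp1252")

df = pd.read_csv(io.BytesIO(raw), encoding="cp1252")
```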
Title: Running tensorflow in ipython
Q_Id: 48,757,970 | A_Id: 48,777,636 | CreationDate: 2018-02-13T00:58:00.000
Tags: tensorflow,ipython
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 213
Question: tensorflow works using python in a virtualenv I created, but tensorflow doesn't work in the same virtualenv with ipython. This is the error I get: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the pac...
Answer: I think I figured out the problem. pip was pointing to /Library/Frameworks/Python.framework/Versions/3.4/bin/pip My ipython was pointing to /opt/local/bin/ipython I re-installed tensorflow within my virtual environment by calling /opt/local/bin/pip-2.7 install --upgrade tensorflow Now I can use tensorflow within ipytho...

Title: What does tensorflow nonmaximum suppression function's argument "score" do to this function?
Q_Id: 48,759,535 | A_Id: 48,761,331 | CreationDate: 2018-02-13T04:35:00.000
Tags: python,tensorflow,computer-vision
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | Score: 1.2 | ViewCount: 537
Question: I read the document about the function and I understood how NMS works. What I'm not clear is scores argument to this function. I think NMS first look at bottom right coordinate and sort according to it and calculate IoU then discard some boxes which have IoU greater than the threshold that you set. In this theory score...
Answer: The scores argument decides the sorting order. The method tf.image.non_max_suppression goes through (greedily, so all input entries are covered) input bounding boxes in order decided by this scores argument, selects only those bounding boxes from them which are not overlapping (more than iou_threshold) with boxes alrea...
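A plain-NumPy sketch of the greedy procedure the answer describes: boxes are visited in descending score order, and a box is kept only if its IoU with every already-kept box stays at or below the threshold. This mirrors the documented behavior of tf.image.non_max_suppression, not its actual implementation, and the boxes below are made up for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (y1, x1, y2, x2)."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: scores decide the visiting order, IoU decides suppression."""
    keep = []
    for i in np.argsort(scores)[::-1]:          # highest score first
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
selected = non_max_suppression(boxes, scores)
```

Here the second box overlaps the first heavily (IoU 0.81) and is suppressed; the third does not overlap at all and survives.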
Title: Image Classification using Tensorflow
Q_Id: 48,761,144 | A_Id: 48,762,415 | CreationDate: 2018-02-13T07:00:00.000
Tags: python-3.x,tensorflow,computer-vision,softmax,sigmoid
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | Score: 1.2 | ViewCount: 124
Question: I am doing transfer-learning/retraining using Tensorflow Inception V3 model. I have 6 labels. A given image can be one single type only, i.e, no multiple class detection is needed. I have three queries: Which activation function is best for my case? Presently retrain.py file provided by tensorflow uses softmax? What a...
Answer: Since you're doing single label classification, softmax is the best loss function for this, as it maps your final layer logit values to a probability distribution. Sigmoid is used when it's multilabel classification. It's always better to use a momentum based optimizer compared to vanilla gradient descent. There's a b...

Title: estimation of subpixel values from images in Python
Q_Id: 48,767,750 | A_Id: 48,771,960 | CreationDate: 2018-02-13T13:18:00.000
Tags: python,opencv,image-processing,python-imaging-library
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 1,342
Question: I have an image and and am transforming it with a nonlinear spatial transformation. I have a written a function that, for every pixel (i, j) in the destination image array, returns a coordinate (y, x) in the source array. The returned coordinate is a floating point value, meaning that it corresponds to a point that li...
Answer: There are two common methods: bilinear interpolation, bicubic interpolation. These evaluate an intermediate value, based on the values at four or sixteen neighboring pixels, using weighting functions based on the fractional parts of the coordinates. Lookup these expressions. From my experience, the bilinear quality i...

Title: Why would I need stacks and queues for Depth First Search?
Q_Id: 48,769,149 | A_Id: 48,769,576 | CreationDate: 2018-02-13T14:31:00.000
Tags: python,search,data-structures
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 86
Question: I'm working on a project from the Berkeley AI curriculum, and they require me to use stacks, queues, and priority queues in my Depth First Graph Search implementation. I stored my fringe in a priority queue and my already visited states in a set. What am I supposed to use stacks and queues for in this assignment? I'm n...
Answer: I realized that I misread the assignment. It said: "Important note: Make sure to use the Stack, Queue and PriorityQueue data structures provided to you in util.py! These data structure implementations have particular properties which are required for compatibility with the autograder." I had misread it as saying that ...
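The role of the stack in depth-first search can be sketched without the assignment's util.py: an explicit LIFO stack holds the fringe, and popping from it produces depth-first visiting order. The toy adjacency dict below is purely illustrative; swapping the stack for a FIFO queue turns the same loop into breadth-first search.

```python
def dfs(graph, start):
    """Iterative DFS with an explicit stack; returns nodes in visit order."""
    stack, visited, order = [start], set(), []
    while stack:
        node = stack.pop()                           # LIFO pop => depth-first
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        stack.extend(reversed(graph.get(node, [])))  # keep left-to-right order
    return order

# Toy graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
order = dfs(graph, "A")
```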
Title: What is the difference between save a pandas dataframe to pickle and to csv?
Q_Id: 48,770,542 | A_Id: 62,222,676 | CreationDate: 2018-02-13T15:46:00.000
Tags: python,pandas,csv,pickle
Topics: Data Science and Machine Learning
Q_Score: 31 | Users Score: 36 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | Score: 1 | ViewCount: 25,209
Question: I am learning python pandas. I see a tutorial which shows two ways to save a pandas dataframe. pd.to_csv('sub.csv') and to open pd.read_csv('sub.csv') pd.to_pickle('sub.pkl') and to open pd.read_pickle('sub.pkl') The tutorial says to_pickle is to save the dataframe to disk. I am confused about this. Because when I us...
Answer: csv ✅human readable ✅cross platform ⛔slower ⛔more disk space ⛔doesn't preserve types in some cases pickle ✅fast saving/loading ✅less disk space ⛔non human readable ⛔python only Also take a look at parquet format (to_parquet, read_parquet) ✅fast saving/loading ✅less disk space than pickle ✅supported by many platfor...
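The type-preservation point in the answer above is easy to demonstrate: a datetime column survives a pickle round trip with its dtype intact, but comes back from CSV as plain strings unless re-parsed. The paths below are temporary files created just for the demo.

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({
    "when": pd.to_datetime(["2018-02-13", "2018-02-14"]),
    "load": [1.5, 2.0],
})

with tempfile.TemporaryDirectory() as tmp:
    csv_path = os.path.join(tmp, "sub.csv")
    pkl_path = os.path.join(tmp, "sub.pkl")

    df.to_csv(csv_path, index=False)
    df.to_pickle(pkl_path)

    from_csv = pd.read_csv(csv_path)     # 'when' comes back as object/str
    from_pkl = pd.read_pickle(pkl_path)  # dtypes preserved exactly

csv_dtype = from_csv["when"].dtype
pkl_dtype = from_pkl["when"].dtype
```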
Title: Why linspace was named like that in numpy?
Q_Id: 48,770,786 | A_Id: 48,770,832 | CreationDate: 2018-02-13T15:59:00.000
Tags: python,numpy
Topics: Data Science and Machine Learning
Q_Score: 4 | Users Score: 4 | is_accepted: true | AnswerCount: 2 | Available Count: 1 | Score: 1.2 | ViewCount: 672
Question: I'm learning python and numpy. The docstring of numpy.linspace says Return evenly spaced numbers over a specified interval. Returns num evenly spaced samples, calculated over the interval [start, stop]. So I guess the "space" part of linspace means "space". But what does "lin" stand for?
Answer: A linear space. So in other words, from a straight line over an interval we take n samples.
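The "linearly spaced" reading is easy to check: numpy.linspace(start, stop, num) returns num samples whose consecutive differences are all equal.

```python
import numpy as np

samples = np.linspace(0.0, 1.0, 5)  # 5 evenly spaced points on [0, 1]
steps = np.diff(samples)            # consecutive gaps are all identical
```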
Title: seed=1, TensorFlor- Xavier_initializer
Q_Id: 48,787,340 | A_Id: 48,787,920 | CreationDate: 2018-02-14T12:33:00.000
Tags: python,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | Score: 0 | ViewCount: 160
Question: What does seed=1 is doing in the following code: W3 = tf.get_variable("W3", [L3, L2], initializer = tf.contrib.layers.xavier_initializer(seed=1))
Answer: It's to define the random seed. By this means, the weight values are always initialized by the same values. From Wiki: A random seed is a number (or vector) used to initialize a pseudo-random number generator.
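The reproducibility point can be illustrated without TensorFlow: two draws made from the same seed produce identical weight matrices. NumPy stands in for the Xavier initializer here; only the seeding behavior is being demonstrated.

```python
import numpy as np

def init_weights(shape, seed=None):
    """Draw a weight matrix; a fixed seed makes the draw deterministic."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=shape)

w1 = init_weights((3, 2), seed=1)
w2 = init_weights((3, 2), seed=1)  # same seed -> identical values
```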
0
48,790,015
0
0
0
0
1
false
0
2018-02-14T14:20:00.000
0
1
0
scipy.optimize.least_squares - limit number of jacobian evaluations
48,789,406
0
python,optimization,scipy,least-squares
According to the help of scipy.optimize.least_squares, max_nfev is the number of function evaluations before the program exits : max_nfev : None or int, optional Maximum number of function evaluations before the termination. If None (default), the value is chosen automatically: Again, according to the hel...
I am trying to use scipy.optimize.least_squares(fun= my_fun, jac=my_jac, max_nfev= 1000) with two callable functions: my_fun and my_jac both fuctions: my_fun and my_jac, use an external software to evaluate their value, this task is much time consuming, therefore I prefer to control the number of evaluations for both t...
0
1
675
0
49,395,441
0
0
0
0
1
true
0
2018-02-14T20:22:00.000
0
1
0
Translating entire coordinates of array to new origin
48,795,574
1.2
python,arrays,numpy,coordinates,translation
My initial question was very misleading - my apologies for the confusion. I've since solved the problem by translating my local array (data cube) within a global array. To accomplish this, I needed to first plot my data within a larger array (such as a Mayavi scene, which I did). Then, within this scene, I moved my da...
I have a 128-length (s) array cube with unique values held at each point inside. At the center of this cube is the meat of the data (representing an object), while on the inner borders of the cube, there are mostly zero values. I need to shift this entire array such that the meat of the data is actually at the origin ...
0
1
297
0
54,771,885
0
0
0
0
1
false
8
2018-02-14T20:50:00.000
3
2
0
[Tensorflow][Object detection] ValueError when try to train with --num_clones=2
48,795,950
0.291313
python,tensorflow,object-detection
You don't mention which type of model you are training - if like me you were using the default model from the TensorFlow Object Detection API example (Faster-RCNN-Inception-V2) then num_clones should equal the batch_size. I was using a GPU however, but when I went from one clone to two, I saw a similar error and settin...
I wanted to train on multiple CPU so i run this command C:\Users\solution\Desktop\Tensorflow\research>python object_detection/train.py --logtostderr --pipeline_config_path=C:\Users\solution\Desktop\Tensorflow\myFolder\power_drink.config --train_dir=C:\Users\solution\Desktop\Tensorflow\research\object_detection\tra...
0
1
2,859
0
48,822,919
0
0
0
0
1
true
0
2018-02-16T08:30:00.000
0
1
0
Probability for correct Image Classification in Tensorflow
48,822,796
1.2
python,image-processing,tensorflow
Single label classification is not something Neural Networks can do "off-the-shelf". How do you train it ? With only data relevant to your target domain ? Your model will only learn to output one. You have two strategies: you use the same strategy as in the "HotDog or Not HotDog app", you put the whole imagenet in tw...
I am using Tensorflow retraining model for Image Classification. I am doing single label classification. I want to set a threshold for correct classification. In other words, if the highest probability is less than a given threshold, I can say that the image is "unknown" i.e. if np.max(results) < 0.5 -> set label as ...
0
1
288
0
48,829,716
0
0
0
0
1
true
2
2018-02-16T10:24:00.000
2
1
0
create environment module to work with opencv-python on hpc nodes
48,824,675
1.2
python-2.7,opencv,hpc,torque,environment-modules
The Python module uses a system library (namely libSM.so.6 : library support for the freedesktop.org version of X) that is present on the head node, but not on the compute nodes (which is not very surprising) You can either: ask the administrators to have that library installed systemwide on the compute nodes through ...
I have a task to train neural networks using tensorflow and opencv-python on HPC nodes via Torque. I have made privatemodule with python virtualenv and installed tensorflow and opencv-python modules in it. In the node I can load my python module. But when I try to run training script I get following error: Traceback...
0
1
388
0
48,833,452
0
0
0
0
1
false
2
2018-02-16T10:55:00.000
0
1
0
Clustering Customers with Python (sklearn)
48,825,248
0
python,cluster-analysis,customer
Avoid comparing Silhouettes of different projections or scalings. Internal measures tend to be too sensitive. Do not use tSNE for clustering (Google for the discussion on stats.SE, feel free to edit the link into this answer). It will cause false separation and false adjacency; it is a visualization technique. PCA will...
I work at an ecommerce company and I'm responsible for clustering our customers based on their transactional behavior. I've never worked with clustering before, so I'm having a bit of a rough time. 1st) I've gathered data on customers and I've chosen 12 variables that specify very nicely how these customers behave. Eac...
0
1
214
0
61,806,979
0
0
0
0
1
false
10
2018-02-17T09:52:00.000
3
7
0
Heroku: deploying Deep Learning model
48,840,025
0.085505
python,tensorflow,heroku,keras,deep-learning
A lot of these answers are great for reducing slug size but if anyone still has problems with deploying a deep learning model to heroku it is important to note that for whatever reason tensorflow 2.0 is ~500MB whereas earlier versions are much smaller. Using an earlier version of tensorflow can greatly reduce your slug...
I have developed a rest API using Flask to expose a Python Keras Deep Learning model (CNN for text classification). I have a very simple script that loads the model into memory and outputs class probabilities for a given text input. The API works perfectly locally. However, when I git push heroku master, I get Compiled...
1
1
7,474
0
48,840,524
0
1
0
0
1
false
0
2018-02-17T10:21:00.000
0
3
0
numpy got installed in Python3.5 but not in Python3.6
48,840,282
0
python,python-3.5,python-3.6
Cannot comment since I don't have the rep. If your default python is 3.5 when you check python --version, the way to go would be to find the location of the python executable for the desired version (here 3.6). cd to that folder and then run the command given by Mike.
I have both Python 3.5 and Python 3.6 on my laptop. I am using Ubuntu 16.04. I used pip3 to install numpy. It is working with Python3.5 but not with Python3.6. Please help.
0
1
2,139
0
64,853,941
0
0
0
0
1
false
3
2018-02-18T20:57:00.000
0
2
0
What does np.polyfit do and return?
48,856,497
0
python,numpy
These are essentially the beta and the alpha values for the given data, where beta necessarily demonstrates the degree of volatility, i.e. the slope.
I went through the docs but I'm not able to interpret them correctly. In my code, I wanted to find a line that goes through 2 points (x1,y1), (x2,y2), so I've used np.polyfit((x1,x2),(y1,y2),1) since it's a degree-1 polynomial (a straight line). It returns me [ -1.04 727.2 ] Though my code (which is a much larger file) runs...
0
1
6,890
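A minimal sketch of what the answer above describes: for degree 1, np.polyfit returns the coefficients highest power first, i.e. [slope, intercept]. The two sample points here are made up for illustration.

```python
import numpy as np

# Fit a degree-1 polynomial (a straight line) through two points.
# np.polyfit returns coefficients highest power first: [slope, intercept].
x1, y1 = 1.0, 3.0
x2, y2 = 2.0, 5.0
slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
print(slope, intercept)  # approximately 2.0 and 1.0

# The fitted line can then be evaluated with np.polyval:
y_mid = np.polyval([slope, intercept], 1.5)
assert np.isclose(y_mid, 4.0)
```

So for the values in the question, -1.04 is the slope and 727.2 is the y-intercept of the fitted line.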
0
48,877,243
0
0
0
0
2
false
0
2018-02-19T06:47:00.000
1
2
0
how to manually give weight to features using python in machine learning
48,860,824
0.099668
python,regression,jupyter-notebook,decision-tree
The whole point of using machine learning is to let it decide on its own how much weight should be given to which predictor, based on its importance in predicting the label correctly. It just doesn't make any sense trying to do this on your own and then also use machine learning.
I have a data set with a continuous label ranging from one to five and nine different features. I wanted to give weight to each feature manually, because some of the features have very little dependency on the label, and I wanted to give more weight to those features which have more dependency on the label. How can I manu...
0
1
839
0
48,877,568
0
0
0
0
2
false
0
2018-02-19T06:47:00.000
0
2
0
how to manually give weight to features using python in machine learning
48,860,824
0
python,regression,jupyter-notebook,decision-tree
Don't assign weights manually, let the model learn the weights itself. It will automatically decide which features are more important.
I have a data set with a continuous label ranging from one to five and nine different features. I wanted to give weight to each feature manually, because some of the features have very little dependency on the label, and I wanted to give more weight to those features which have more dependency on the label. How can I manu...
0
1
839
0
48,870,492
0
0
0
0
1
true
2
2018-02-19T10:43:00.000
6
3
1
Convert hdf5 to netcdf4 in bash, R, python or NCL?
48,864,357
1.2
python,r,hdf5,netcdf4,ncl
With the netcdf-c library you can: $ nccopy in.h5 out.nc
Is there a quick and simple way to convert HDF5 files to netcdf(4) from the command line in bash? Alternatively a simple script that handle such a conversion automatically in R, NCL or python ?
0
1
5,482
0
49,065,611
0
0
0
0
1
false
8
2018-02-20T07:57:00.000
2
2
0
Classification: skewed data within a class
48,880,273
0.197375
python,tensorflow,neural-network,keras,multilabel-classification
You're on the right track. Usually, you would either balance your data set before training, i.e. reducing the over-represented class or generate artificial (augmented) data for the under-represented class to boost its occurrence. Reduce over-represented class This one is simpler, you would just randomly pick as many s...
I'm trying to build a multilabel-classifier to predict the probabilities of some input data being either 0 or 1. I'm using a neural network and Tensorflow + Keras (maybe a CNN later). The problem is the following: The data is highly skewed. There are a lot more negative examples than positive maybe 90:10. So my neural...
0
1
1,027
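The "reduce over-represented class" strategy from the answer above can be sketched in a few lines of numpy; the arrays here are random stand-ins for real features and labels with the question's 90:10 imbalance.

```python
import numpy as np

# Random undersampling of the majority class (a minimal sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # hypothetical features
y = np.array([0] * 900 + [1] * 100)     # 90:10 imbalance

pos_idx = np.where(y == 1)[0]
neg_idx = np.where(y == 0)[0]
# Randomly keep only as many negatives as there are positives.
keep_neg = rng.choice(neg_idx, size=len(pos_idx), replace=False)
keep = np.concatenate([pos_idx, keep_neg])
rng.shuffle(keep)
X_bal, y_bal = X[keep], y[keep]
print(np.bincount(y_bal))  # [100 100]
```

After this, each class contributes equally to every training batch; the price is discarding most of the majority-class samples.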
0
48,882,154
0
0
0
0
2
false
0
2018-02-20T09:41:00.000
0
2
0
import tensorflow with python 2.7.6
48,882,088
0
python,tensorflow
Your computer seems to be incompatible with this build of the tensorflow library. Your CPU needs to be able to use FMA instructions but can't.
Python terminal getting abort with following msg: /grid/common//pkgs/python/v2.7.6/bin/python Python 2.7.6 (default, Jan 17 2014, 04:05:53) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import tensorflow as tf 2018-02-20 01:40:11.268134...
0
1
230
0
48,882,437
0
0
0
0
2
false
0
2018-02-20T09:41:00.000
0
2
0
import tensorflow with python 2.7.6
48,882,088
0
python,tensorflow
You need to compile TensorFlow on the same computer.
Python terminal getting abort with following msg: /grid/common//pkgs/python/v2.7.6/bin/python Python 2.7.6 (default, Jan 17 2014, 04:05:53) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import tensorflow as tf 2018-02-20 01:40:11.268134...
0
1
230
0
48,884,112
0
0
0
0
1
false
0
2018-02-20T11:08:00.000
0
1
0
pytesseract - Read text from images with more accuracy
48,883,888
0
opencv,python-tesseract
Localize your detection by setting the rectangles where Tesseract has to look. You can then restrict, according to the rectangle, which type of data is present at that place, for example: numerical, alphabetic, etc. You can also make a dictionary file for Tesseract to improve accuracy (this can be used for detecting card holder name b...
I am working on pytesseract. I want to read data from a Driving License kind of thing. Presently I am converting a .jpg image to binary (grayscale) format using OpenCV, but I am not getting accurate results. How do you solve this? Is there any standard size of image?
0
1
424
1
50,014,778
0
0
0
0
1
false
0
2018-02-20T16:56:00.000
0
3
0
detecting when the camera view is blocked (black frame)
48,890,390
0
python,opencv,camera,background-subtraction,opencv-contour
A possible cause for this error could be mild jitters in the frame that occur due to mild shaking of the camera If your background subtraction algorithm isn't tolerant enough to low-value colour changes, then a tamper alert will be triggered even if you shake the camera a bit. I would suggest using MOG2 for background ...
I'm trying to detect camera tampering (lens being blocked, resulting in a black frame). The approach I have taken so far is to apply background subtraction and then finding contours post thresholding the foreground mask. Next, I find the area of each contour and if the contour area is higher than a threshold value (say...
0
1
1,009
0
64,678,289
0
0
0
0
1
false
3
2018-02-20T18:02:00.000
-1
3
0
One-hot-encoding with missing categories
48,891,538
-0.066568
python,scikit-learn,one-hot-encoding
Basically, first we need to apply fit_transform to the base data and then apply transform to the sample data, so the sample data will also get exactly the same number of columns as the base data.
I have a dataset with a category column. In order to use linear regression, I 1-hot encode this column. My set has 10 columns, including the category column. After dropping that column and appending the 1-hot encoded matrix, I end up with 14 columns (10 - 1 + 5). So I train (fit) my LinearRegression model with a matri...
0
1
2,335
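One way to get the column-alignment behavior the answer describes, sketched here with pandas get_dummies plus reindex rather than sklearn's encoder (the column names and data are made up for illustration):

```python
import pandas as pd

# Training data sees three categories; a new sample sees only one.
train = pd.DataFrame({"cat": ["a", "b", "c", "a"]})
train_dummies = pd.get_dummies(train["cat"], prefix="cat")

sample = pd.DataFrame({"cat": ["b"]})
sample_dummies = pd.get_dummies(sample["cat"], prefix="cat")
# Reindex to the training columns, filling absent categories with 0,
# so the encoded matrix keeps the same width as at fit time.
sample_dummies = sample_dummies.reindex(columns=train_dummies.columns, fill_value=0)
print(list(sample_dummies.columns))  # ['cat_a', 'cat_b', 'cat_c']
```

With sklearn, the equivalent is to fit OneHotEncoder once on the training data and call only transform on new samples.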
0
49,717,503
0
1
0
0
1
true
0
2018-02-20T23:31:00.000
0
1
0
How to add directory to a python running inside virtualenv
48,895,898
1.2
python-3.x,tensorflow,virtualenv,python-3.4,virtualenvwrapper
I had faced a similar issue for the same hardware. If I am guessing right and you are following the same set of install instructions, install the .whl for tensorflow without using sudo, as using sudo even from inside the virtual environment installs it in the place as seen by the root directory and not inside the ...
I have installed tensorflow and opencv on odroid xu4. Tensorflow was installed using a .whl file for raspberry pi and it built successfully. Opencv was built successfully inside virtualenv environment. I can import opencv as import cv2 from inside virtual environment for python but not tensorflow. Tensorflow is getting...
0
1
255
0
48,897,354
0
0
0
0
1
false
1
2018-02-21T02:33:00.000
1
2
0
Convert numpy array of a image into blocks
48,897,331
0.099668
python,numpy
Please provide your array structure. You can use img_array.reshape(8, 8); for this to work the total number of elements must be 64.
I have a numpy array of an image. I want to convert this image into 8*8 blocks using Python. How should I do this?
0
1
778
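For an image larger than 8x8, the usual way to split it into non-overlapping 8x8 blocks is a reshape/swapaxes trick; a minimal sketch (assuming the height and width are multiples of 8, with a made-up 32x32 array standing in for the real image):

```python
import numpy as np

# Split a 2-D image array into non-overlapping 8x8 blocks.
img = np.arange(32 * 32).reshape(32, 32)
h, w = img.shape
blocks = img.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
print(blocks.shape)  # (16, 8, 8)
```

blocks[0] is the top-left 8x8 tile (img[:8, :8]), blocks[1] the tile to its right, and so on in row-major order.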
0
51,193,611
0
0
0
0
1
false
1
2018-02-21T04:04:00.000
0
1
0
catboost cv producing log files
48,897,988
0
python,catboost
Try setting the training parameter allow_writing_files to False.
A number of TSV files and json files are being created when I used the cross validation CV object. I cannot find any way to prevent CV from not producing these in the documentation and end up deleting them manually. These files are obviously coming from CV (I have checked) and are named after the folds or general resul...
0
1
317
0
48,899,306
0
0
0
0
1
true
1
2018-02-21T06:13:00.000
1
1
0
How to build a decoder using dynamic rnn in Tensorflow?
48,899,234
1.2
python,tensorflow,recurrent-neural-network,sequence-to-sequence,encoder-decoder
If for example, you are using Tensorflow's attention_decoder method, pass a parameter "loop_function" to your decoder. Google search for "extract_argmax_and_embed", that is your loop function.
I know how to build an encoder using dynamic rnn in Tensorflow, but my question is how can we use it for decoder? Because in decoder at each time step we should feed the prediction of previous time step. Thanks in advance!
0
1
342
1
48,952,393
0
0
0
0
1
true
0
2018-02-21T16:58:00.000
0
1
0
PyOpenGL how to rotate a scene with the mouse
48,911,436
1.2
python,pygame,blender,pyopengl
OK, I think I have found what you should do. Just for the people that have trouble with this like I did, this is the way you should do it: to rotate around a cube with the camera in OpenGL, your x mouse value has to be added to the z rotator of your scene, and the cosine of your y mouse value has to be added to the x rota...
I am trying to create a simple scene in 3d (in python) where you have a cube in front of you, and you are able to rotate it around with the mouse. I understand that you should rotate the complete scene to mimic camera movement but i can't figure out how you should do this. Just to clarify I want the camera (or scene) ...
0
1
553
0
48,920,286
0
0
0
0
1
true
3
2018-02-21T17:53:00.000
5
2
0
Keras - how to set weights to a single layer
48,912,449
1.2
python,keras
Keras expects the layer weights to be a list of length 2. First element is the kernel weights and the second is the bias. You can always call get_weights() on the layer to see shape of weights of that layer. set_weights() would expect exactly the same.
I'm trying to set the weights of a hidden layer. I'm assuming layers[0] is the inputs, and I want to set the weights of the first hidden layer so set the index to 1. model.layers[1].set_weights(weights) However, when I try this I get an error: ValueError: You called `set_weights(weights)` on layer "dense_64" with a w...
0
1
8,132
0
60,953,415
0
0
0
0
2
false
13
2018-02-22T10:29:00.000
0
4
0
Choosing subset of farthest points in given set of points
48,925,086
0
python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling
Find the maximum extent of all points. Split into 7x7x7 voxels. For all points in a voxel find the point closest to its centre. Return these 7x7x7 points. Some voxels may contain no points, hopefully not too many.
Imagine you are given a set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to choose a subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points such that the min of all pairwise distances in...
0
1
3,420
0
48,925,457
0
0
0
0
2
false
13
2018-02-22T10:29:00.000
1
4
0
Choosing subset of farthest points in given set of points
48,925,086
0.049958
python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling
If you can afford to do ~ k*n distance calculations then you could Find the center of the distribution of points. Select the point furthest from the center. (and remove it from the set of un-selected points). Find the point furthest from all the currently selected points and select it. Repeat 3. until you end with k p...
Imagine you are given a set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to choose a subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points such that the min of all pairwise distances in...
0
1
3,420
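The greedy ~k*n procedure from the answer above can be sketched directly in numpy; the sample points are made up for illustration, and the result is a heuristic, not the provably optimal subset.

```python
import numpy as np

def farthest_points(points, k):
    """Greedy farthest-point sampling: start from the point farthest
    from the centroid, then repeatedly add the point whose minimum
    distance to the already-selected set is largest."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    first = int(np.argmax(np.linalg.norm(pts - center, axis=1)))
    selected = [first]
    min_dist = np.linalg.norm(pts - pts[first], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))  # farthest from all selected so far
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return selected

pts = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [5, 5, 0], [1, 1, 0]]
print(farthest_points(pts, 3))  # picks the three mutually distant corners
```

Each iteration only needs one new row of distances, so the whole run is O(k*n) distance evaluations.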
0
48,930,465
0
0
0
0
1
false
0
2018-02-22T14:47:00.000
2
1
0
Normalize 2D array given mean and std value
48,930,303
0.379949
python-3.x,numpy,scikit-learn,normalization
Normalization is: (X - Mean) / Deviation. So do just that: (2d_data - mean) / std
I have a dataset called 2d_data which has dimension=(44500,224,224), such that 44500 is the number of samples. I would like to normalize this data set using the following mean and std values: mean=0.485 and std=0.229. How can I do that? Thank you
0
1
716
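Because the mean and std are scalars, numpy broadcasting applies the formula in the answer above to every element at once; a small random array stands in for the real (44500, 224, 224) data here.

```python
import numpy as np

# Normalize an array with a fixed scalar mean/std via broadcasting.
mean, std = 0.485, 0.229
data = np.random.rand(4, 8, 8)  # small stand-in for the real 2d_data
normalized = (data - mean) / std

# Sanity check on one element: same formula applied pointwise.
assert np.isclose(normalized[0, 0, 0], (data[0, 0, 0] - mean) / std)
```

The operation is vectorized, so it works unchanged on the full 44500-sample array without any Python-level loop.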
0
48,954,577
0
0
0
0
1
true
1
2018-02-22T18:31:00.000
1
1
0
Converting NumPy floats to ints without loss of precision
48,934,830
1.2
python,numpy,opencv
"Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int." cv2.convexHull() also accepts numpy array with float32 number. Try using cv2.convexHull(numpy.array(a,dtype = 'float32')) where a is a list of dimension n*2 (n = no. of ...
I am working on a vision algorithm with OpenCV in Python. One of the components of it requires comparing points in color-space, where the x and y components are not integers. Our list of points is stored as ndarray with dtype = float64, and our numbers range from -10 to 10 give or take. Part of our algorithm involves r...
0
1
563
0
48,936,596
0
0
0
0
1
false
1
2018-02-22T20:22:00.000
1
3
0
gradient boosting- features contribution
48,936,542
0.066568
python,scikit-learn
Use the feature_importances_ property. Very easy.
Is there a way in python by which I can get contribution of each feature in probability predicted by my gradient boosting classification model for each test observation. Can anyone give actual mathematics behind probability prediction in gradient boosting classification model and how can it be implemented in Python.
0
1
1,153
0
48,942,976
0
0
0
0
1
false
0
2018-02-23T07:09:00.000
0
1
0
Calculate confidence score of document
48,942,865
0
python,machine-learning,deep-learning
How about adding up (or taking the mean of) your title scores (since they'd be on the same scale) and your content scores across all the methods, so that you have a single title score and a single content score. To get a single score for a document, you'll have to combine the title and content scores. To do that, you can take a weig...
Using different methods, I am scoring documents & it's title. Now I want to aggregate all these scores into single score(confidence score). I want to use unsupervised method. I want confidence score in terms of probability or percentage. Here , M= Method No, TS = document title score, CS = document content score eg 1 D...
0
1
189
0
55,395,113
0
0
0
0
1
true
0
2018-02-24T10:31:00.000
0
1
0
N grams for Sentiment Analysis
48,961,822
1.2
python,nltk,sentiment-analysis,n-gram
Use the textblob package. It offers a simple API to access its methods and perform basic NLP tasks. NLP is natural language processing, which processes your text via tokenization, noun extraction, lemmatization, word inflection, n-grams, etc. There are also some other packages like spacy and nltk. But textblob will be better for beg...
I am doing sentiment analysis on reviews of products from various retailers. I was wondering if there was an API that used n grams for sentiment analysis to classify a review as a positive or negative. I have a CSV file filled with reviews which I would like to run it in python and hence would like an API or a package ...
0
1
469
0
48,970,082
0
1
0
0
1
false
2
2018-02-24T13:11:00.000
-1
1
0
using NLTK to find the related verbs to a specific noun
48,963,243
-0.197375
python,nlp,nltk
Given a corpus of documents, you can apply part of speech tagging to get verb roots, nouns and mapping of those nouns to those verb roots. From there you should be able to deduce the most common 'relations' an 'entity' expresses, although you may want to describe your relations as something that occurs between two diff...
Is there any way to find the related verbs to a specific noun by using NLTK. For example for the word "University" I'd like to have the verbs "study" and "graduate" as an output. I mainly need this feature for relation extraction among some given entities.
0
1
277
0
48,977,717
0
0
0
0
2
false
0
2018-02-25T13:17:00.000
0
4
0
Installing tensorflow on GPU
48,973,883
0
python,tensorflow,gpu
First of all, if you want to see a performance gain, you should have a better GPU, and second of all, Tensorflow uses CUDA, which is only for NVidia GPUs which have CUDA Capability of 3.0 or higher. I recommend you use some cloud service such as AWS or Google Cloud if you really want to do deep learning.
I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version?
0
1
691
0
48,974,256
0
0
0
0
2
false
0
2018-02-25T13:17:00.000
-1
4
0
Installing tensorflow on GPU
48,973,883
-0.049958
python,tensorflow,gpu
It depends on your graphics card: it has to be NVIDIA, and you have to install the CUDA version corresponding to your system and OS. Then you have to install the cuDNN version corresponding to the CUDA version you installed. Steps: Install the NVIDIA 367 driver. Install CUDA 8.0. Install cuDNN 5.0. Reboot. Install tensorflow from source with...
I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version?
0
1
691
0
54,447,128
0
0
0
0
1
false
2
2018-02-26T03:28:00.000
0
2
0
what is the difference between tf.nn.convolution and tf.nn.conv2d?
48,981,022
0
python,tensorflow,machine-learning,neural-network,deep-learning
Functionally, the dilations argument in tf.nn.conv2d is the same as dilation_rate in tf.nn.convolution as well as rate in tf.nn.atrous_conv2d. They all represent the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. The dilation factor for each dimension of input specif...
I want to make dilated convolution on a feature. In tensorflow I found tf.nn.convolution and tf.nn.conv2d. But tf.nn.conv2d doesn't seem to support dilated convolution. So I tried using tf.nn.convolution. Do the 2 formulations below give the same result? tf.nn.conv2d(x, w, strides=[1, 1, 2, 2], padding='SAME',data_f...
0
1
2,014
0
49,021,501
0
0
0
0
1
false
0
2018-02-27T10:06:00.000
0
1
0
find anomalies in records of categorical data
49,006,013
0
python,machine-learning,statistics,data-science
Take a look at the nearest neighbors method and cluster analysis. The metric can be simple (like squared error) or even custom (with predefined weights for each category). Nearest neighbors will answer the question 'how different is the current row from the other rows' and cluster analysis will answer the question 'is i...
I have a dataset with m observations and p categorical variables (nominal), each variable X1,X2...Xp has several different classes (possible values). Ultimately I am looking for a way to find anomalies i.e to identify rows for which the combination of values seems incorrect with respect to the data I saw so far. So far...
0
1
826
0
49,011,315
0
0
0
0
1
false
0
2018-02-27T14:40:00.000
0
1
0
ImportError: numpy.core.multiarray failed to import on windows
49,011,268
0
python,numpy
Numpy 1.8.1 is very out of date - you should upgrade to the latest version (1.14.1 as of writing) and that error will be resolved. Out of interest, I've seen this question asked before - are you following a guide that is out of date or something?
I am using python 2.7 on windows 10 . I installed numpy-1.8.1-win32-superpack-python2.7 and extracted opencv-3.4.0-vc14_vc15. I copied cv2.pyd from opencv\build\python\2.7\x86 and pasted to C:\Python27\Lib\site-packages. I could import numpy without any error. While I run import cv2 it gives an error like RuntimeError...
0
1
4,894
0
49,017,149
0
0
0
0
1
true
1
2018-02-27T20:06:00.000
1
5
0
What is the best approach to let C# and Python communicate for this machine learning task?
49,017,084
1.2
c#,python,unity3d,tensorflow,machine-learning
You have a few options: Subprocess You can open the python script via Unity's C#, then send stdout and stdin data to and from the process. On the Python side it's as simple as input() and print(), and on the C# side it's basically reading and writing from a Stream object (as far as I remember). UDP/TCP sockets You ca...
I'm developing a simple game for a university project using Unity. This game makes use of machine learning, so I need TensorFlow in order to build a Neural Network (NN) to accomplish certain actions in the game depending on the prediction of the NN. In particular my learning approach is reinforcement learning. I need t...
0
1
2,512
0
50,096,071
0
0
0
0
1
false
6
2018-02-27T22:22:00.000
4
1
1
TypeError: can't pickle memoryview objects when running basic add.delay(1,2) test
49,018,923
0.664037
python-3.x,celery,typeerror,pickle,memoryview
After uninstalling librabbitmq, the problem was resolved.
Trying to run the most basic test of add.delay(1,2) using celery 4.1.0 with Python 3.6.4 and getting the following error: [2018-02-27 13:58:50,194: INFO/MainProcess] Received task: exb.tasks.test_tasks.add[52c3fb33-ce00-4165-ad18-15026eca55e9] [2018-02-27 13:58:50,194: CRITICAL/MainProcess] Unrecoverable error: ...
0
1
3,865
0
49,023,961
0
0
0
0
1
true
0
2018-02-28T06:35:00.000
1
1
0
set dimension of svd algorithm in python
49,023,337
1.2
python,numpy,scipy,svd
If A is a 3 x 5 matrix then it has rank at most 3. Therefore the SVD of A contains at most 3 singular values. Note that in your example above, the singular values are stored as a vector instead of a diagonal matrix. Trivially this means that you can pad your matrices with zeroes at the bottom. Since the full S matrix c...
SVD formula: A ≈ UΣV*. I use numpy.linalg.svd to run the SVD algorithm, and I want to set the dimensions of the matrices. For example: A is 3*5; after running numpy.linalg.svd, U is 3*3, Σ is 3*1, V* is 5*5. I need to set specific dimensions like U=3*64, V*=64*5. But it seems there is n...
0
1
646
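The shapes the answer above describes can be checked directly, including the zero-padding step that turns the singular-value vector into a full diagonal matrix for reconstruction:

```python
import numpy as np

# For a 3x5 matrix the rank is at most 3, so only 3 singular values exist.
A = np.random.rand(3, 5)
U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(U.shape, s.shape, Vt.shape)  # (3, 3) (3,) (5, 5)

# Embed s into a 3x5 diagonal matrix (zero padding) to reconstruct A.
S = np.zeros((3, 5))
S[:3, :3] = np.diag(s)
assert np.allclose(U @ S @ Vt, A)
```

Any "extra" dimensions beyond the rank (e.g. a 3x64 U) would only ever multiply against zero singular values, which is why padding with zeros is the consistent way to enlarge the factors.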
0
49,028,439
0
0
0
0
1
false
0
2018-02-28T11:00:00.000
-1
1
0
Keep sklearnt model in memory to speed up prediction
49,027,972
-0.197375
python,python-2.7,scikit-learn
Can't you store only the parameters of your SVM classifier with clf.get_params() instead of the whole object?
I have trained a SVM model with sklearn, I need to connect this to php. To do this I am using exec command to call in the console the python script, where I load the model with pickle and predict the results. The problem is that loading the model with pickle takes some time (a couple of seconds) and I would like it to ...
0
1
186
0
51,527,999
0
0
0
0
1
false
2
2018-02-28T20:37:00.000
2
1
0
How to find markov blanket for a node?
49,038,111
0.379949
python,weka,markov,rweka,markov-models
Find all parents of the node. Find all children of the node. Find all parents of the children of the node. These together give you the Markov blanket for a given node.
I want to do feature selection using the Markov blanket algorithm. I am wondering whether there is any API in Java/Weka or in Python to find the Markov blanket. Consider I have a dataset. The dataset has a number of variables and one target variable. I want to find the Markov blanket of the target variable. Any information woul...
0
1
1,298
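The three steps in the answer above (parents, children, and the children's other parents) are easy to sketch on a toy DAG; the graph below is a hypothetical example stored as a child-to-parents mapping.

```python
# Markov blanket of a node in a DAG: its parents, its children,
# and the other parents of its children (the "spouses").
parents = {
    "T": {"A"},        # A -> T
    "C": {"T", "S"},   # T -> C, S -> C
    "S": set(),
    "A": set(),
}

def markov_blanket(node):
    pa = parents.get(node, set())
    children = {n for n, ps in parents.items() if node in ps}
    spouses = set().union(*(parents[c] for c in children)) if children else set()
    return (pa | children | spouses) - {node}

print(sorted(markov_blanket("T")))  # ['A', 'C', 'S']
```

Given the blanket, every other variable is conditionally independent of the target, which is exactly what makes it useful for feature selection.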
0
49,043,299
0
0
0
0
1
false
0
2018-03-01T05:28:00.000
1
2
0
Custom Yaxis plot in matplotlib python
49,043,162
0.099668
python,matplotlib
This should work: matplotlib.pyplot.yticks(np.arange(start, stop+1, step))
Let's say if I have Height = [3, 12, 5, 18, 45] and plot my graph then the yaxis will have ticks starting 0 up to 45 with an interval of 5, which means 0, 5, 10, 15, 20 and so on up to 45. Is there a way to define the interval gap (or the step). For example I want the yaxis to be 0, 15, 30, 45 for the same data set.
0
1
40
0
49,045,428
0
0
0
0
1
false
0
2018-03-01T08:08:00.000
0
1
0
How to checkwhether an index in a tensorarray has been initialized?
49,045,210
0
tensorflow,python-3.5
The only option as I see it is creating an initialization loop where every index is set to 0. This eliminates the problem but may not be an ideal way.
Is it in any way possible to check whether an index in a TensorArray has been initialized? As I understand it, TensorArrays can't be initialized with default values. However, I need a way to increment the number at that index, which I try to do by reading it, adding one, and then writing it to the same index. If the index is n...
0
1
69
0
68,770,190
0
0
0
0
1
false
1
2018-03-01T11:02:00.000
0
2
0
Error indices[0] = 0 is not in [0, 0) while training an object-detection model with tensorflow
49,048,262
0
python,tensorflow,object-detection
I had the same issue using the centernet_mobilenetv2 model, but I just deleted the num_keypoints parameter in the pipeline.config file and then all was working fine. I don't know what is the problem with that parameter but I was able to run the training without it.
So I am currently attempting to train a custom object-detection model on tensorflow to recognize images of a raspberrypi2. Everything is already set up and running on my hardware,but due to limitations of my gpu I settled for the cloud. I have uploaded my data(train & test records ans csv-files) and my checkpoint model...
0
1
591
0
49,081,754
0
0
0
0
1
false
2
2018-03-01T21:37:00.000
1
1
0
Semantically weighted mean of word embeddings
49,059,089
0.197375
python,vector,semantics,word2vec,word-embedding
Actually, averaging of word vectors can be done in two ways: the mean of the word vectors without tf-idf weights, or the mean of the word vectors multiplied by their tf-idf weights. The latter will solve your problem of word importance.
Given a list of word embedding vectors I'm trying to calculate an average word embedding where some words are more meaningful than others. In other words, I want to calculate a semantically weighted word embedding. All the stuff I found is on just finding the mean vector (which is quite trivial of course) which represe...
0
1
937
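The tf-idf-weighted mean from the answer above is just a weighted average over the embedding matrix; the vectors and weights below are made-up stand-ins for real embeddings and tf-idf scores.

```python
import numpy as np

# Weighted mean of word vectors: multiply each vector by its weight,
# sum, and divide by the total weight.
vectors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # one row per word
weights = np.array([0.2, 0.3, 0.5])                        # e.g. tf-idf scores

weighted_mean = (vectors * weights[:, None]).sum(axis=0) / weights.sum()
print(weighted_mean)  # [0.7 0.8]
```

With uniform weights this reduces to the plain mean, so the weighted version strictly generalizes it.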
0
49,065,671
0
0
0
0
1
true
0
2018-03-02T04:45:00.000
0
1
0
Training a categorical classification example
49,062,970
1.2
python,machine-learning
For the first question, I would say you don't need to convert it, but it would make the evaluation on the test set easier. Your classifier will output one-hot encoded values, which you can convert back to strings and evaluate; however, I think having the test targets as 0-1s would help. For the...
I am new to Machine Learning. I am currently solving a classification problem which has strings as its target. I have split the test and training sets and I have dealt with the string attributes by converting them by OneHotEncoder and also, I am using StandardScaler to scale the numerical features of the training set. ...
0
1
86
0
49,074,441
0
0
0
0
1
false
0
2018-03-02T17:23:00.000
0
1
0
Read_CSV() with a non-constant file location
49,074,246
0
python,pandas,csv
The expression sorted(glob.glob("DailyDownload/*/*_YEH.csv"))[-1] will return one file from the most recent day's downloads. This might work for you if you are certain that only one file per day will be downloaded. A better solution might be to grab all the files (glob.glob("DailyDownload/*/*_YEH.csv") and then somehow...
Quick question here: I'd like to use pandas read_csv to bring in a file for my python script, but it is a daily drop and both the filename and file location change each day... My first thought is to get around this by prompting the user for the path? Or is there a more elegant solution that can be coded? The filepath (...
0
1
34
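A minimal, self-contained sketch of the `sorted(glob.glob(...))[-1]` idea from the answer above, using a temporary directory with two hypothetical daily folders; it works whenever the per-day names sort chronologically.

```python
import glob
import os
import tempfile

# Build a fake "DailyDownload" tree with two dated drops.
root = tempfile.mkdtemp()
for day in ("2018-03-01", "2018-03-02"):
    os.makedirs(os.path.join(root, day))
    open(os.path.join(root, day, "report_YEH.csv"), "w").close()

# Sorting the matches and taking the last one yields the newest drop.
matches = sorted(glob.glob(os.path.join(root, "*", "*_YEH.csv")))
latest = matches[-1]
print(os.path.basename(os.path.dirname(latest)))  # 2018-03-02
```

`pd.read_csv(latest)` can then load the file without any user prompt; if timestamps are more reliable than names, sort by `os.path.getmtime` instead.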
0
49,102,970
0
1
0
0
1
true
3
2018-03-03T02:18:00.000
1
3
0
Error while importing TensorFlow: Illegal instruction (core dumped)
49,079,990
1.2
python,linux,tensorflow
Compiling tensorflow from source solved the problem, so it seems my system wasn't supported.
I installed tensorflow following the instructions on their site, but when I try to run import tensorflow as tf I get the following error: Illegal instruction (core dumped). I tried this with the CPU and GPU versions, using Virtualenv and "native" pip, but the same error occurs in every case. The parameters of my PC: O...
0
1
3,188
0
49,102,029
0
0
0
0
1
true
0
2018-03-04T13:56:00.000
0
1
0
How can I find pairs of numbers in a matrix without using so many nested loops?
49,096,206
1.2
python,algorithm,performance
You may check if the algorithm outlined below is fast enough. Sort the numbers in the 3D array that are in the given range and keep track of the indexes. Now do a nested loop where the outer loop finds candidates for the smallest number and the inner for the largest. The inner loop starts with the next number in the li...
I have to write an algorithm that will find two numbers in a 3D array (nested lists) that: Are in a given range (min < num1, num2 < max). Do not overlap. Are as close in value as possible (abs(num1 - num2) is minimal). If there exist more pairs of numbers that satisfy 1), 2) and 3), pick the ones whose sum...
0
1
149
0
50,023,202
0
0
0
0
1
false
0
2018-03-05T01:51:00.000
0
1
0
Python Pandas , Date '9999dec31'd
49,102,424
0
python,pandas,sas
I have run into the same error, but with SQL server data, not SAS. I believe Python Pandas may be trying to store this as a pandas datetime, which stores its values in nanoseconds. 9999DEC31 has far too many nanoseconds, as you might expect, for it to handle. You could try reading it in as an integer of days since t...
I am doing something very simple but it seems that it does not work. I am importing a SAS table into pandas's dataframe. for the date column. I have NA which is actually using '9999dec31'd to represent it, which is 2936547 in numeric value. Python Pandas.read_sas() cant work with this value because it is too big. Any w...
0
1
305
0
49,103,642
0
0
0
0
1
false
0
2018-03-05T04:32:00.000
0
1
0
Tensorflow: Combining Loss Functions in LSTM Model for Domain Adaptation
49,103,531
0
python,tensorflow,deep-learning,keras,lstm
From an implementation perspective, the short answer would be yes. However, I believe your question could be more specific: maybe what you mean is whether you can do it with tf.estimator?
Can anyone please help me out? I am working on my thesis, which is about predicting Parkinson's disease, and I want to build an LSTM model that adapts independently of patients. Currently I have implemented it using TensorFlow with my own loss function. Since I am planning to introduce both labeled train and unlabeled trai...
0
1
297
0
50,983,852
0
0
0
0
1
false
3
2018-03-05T05:32:00.000
0
2
0
Compare frames from videos opencv python
49,104,023
0
python,opencv
I don't know why it doesn't work, but to solve your problem I would suggest implementing a new function that returns true even if there is a small difference in each pixel's color value. Using an appropriate threshold, you should be able to exclude false negatives.
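A minimal sketch of such a tolerant comparison (the function name, thresholds, and the "fraction of bad pixels" criterion are all made up for illustration; lossy video codecs rarely reproduce pixel values exactly, so an exact equality check almost always fails):

```python
import numpy as np

def frames_match(frame1, frame2, max_pixel_diff=5, max_fraction=0.01):
    """Treat two frames as equal when at most `max_fraction` of pixels
    differ by more than `max_pixel_diff`. Hypothetical sketch."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    bad = np.count_nonzero(diff > max_pixel_diff)
    return bad / diff.size <= max_fraction
```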
I've been trying to compare frames taken from a video using OpenCV's VideoCapture() in Python. I took the first frame from a video, let's call it frame1, and when I saved the video and took the same first frame again, let's call it frame2, comparing frame1 and frame2 returns false, when I expected true. I also saved...
0
1
1,712
0
49,108,001
0
0
0
0
1
false
3
2018-03-05T08:42:00.000
2
1
0
Building a dashboard in Dash
49,106,413
0.379949
python,plotly-dash
I have a similar experience. Many say Python is more readable, and while I agree, I don't find Dash on par with R or Shiny in their respective fields yet.
I have used Shiny for R and specifically the Shinydashboard package to build easily navigatable dashboards in the past year or so. I have recently started using the Python, pandas, etc ecosystem for doing data analysis. I now want to build a dashboard with a number of inputs and outputs. I can get the functionality up ...
0
1
836
0
49,135,392
0
0
0
0
1
true
1
2018-03-05T10:43:00.000
1
2
0
How do I read/convert an HDF file containing a pandas dataframe written in Python 2.7 in Python 3.6?
49,108,596
1.2
python,python-3.x,python-2.7,pandas
Not exactly a solution, but more of a workaround: I simply read the files in their corresponding Python versions and saved them as CSV files, which can then be read by any version of Python.
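The round trip could look roughly like this (file names are placeholders, and the `read_hdf` call is stood in by a constructed frame so the sketch is self-contained; in practice the first two lines run under Python 2.7 and the last under 3.6):

```python
import pandas as pd

# Under Python 2.7: dump the HDF store to CSV. Plain text survives the
# ascii/utf-8 pickle mismatch between Python versions.
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})  # stands in for pd.read_hdf("data_py27.h5", "df")
df.to_csv("data.csv", index=False)

# Under Python 3.6: read the CSV back.
df3 = pd.read_csv("data.csv")
assert df.equals(df3)
```

Note that CSV does not preserve dtypes or indexes as faithfully as HDF, so columns may need explicit `dtype`/`parse_dates` arguments on the way back in.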
I wrote a dataframe in Python 2.7 but now I need to open it in Python 3.6, and vice versa (I want to compare two dataframes written in both versions). If I open a Python2.7-generated HDF file using pandas in Python 3.6, this is the error produced: UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 1:...
0
1
348
0
49,116,554
0
0
0
0
1
false
1
2018-03-05T17:20:00.000
0
1
0
randomly split DataFrame by group?
49,116,070
0
python-3.x,pandas,numpy,random,scikit-learn
Oh, there is an easy way! Create a list/array of unique group_ids, create a random mask for this list, and use the mask to split the file.
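A sketch of that mask-on-unique-groups idea (the column name, split fraction, and function name are assumptions; masking the unique ids rather than the rows is what guarantees no group straddles the split):

```python
import numpy as np
import pandas as pd

def split_by_group(df, group_col="group_id", train_frac=0.8, seed=0):
    """Split rows into train/test so that no group_col value appears
    in both sets. Illustrative sketch."""
    rng = np.random.RandomState(seed)
    groups = df[group_col].unique()
    mask = rng.rand(len(groups)) < train_frac   # one coin flip per group
    train_groups = set(groups[mask])
    in_train = df[group_col].isin(train_groups)
    return df[in_train], df[~in_train]
```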
I have a DataFrame where multiple rows share group_id values (very large number of groups). Is there an elegant way to randomly split this data into training and test data in a way that the training and test sets do not share group_id? The best process I can come up with right now is - create mask from msk = np.rando...
0
1
324
0
49,120,540
0
0
0
0
2
false
0
2018-03-05T21:24:00.000
0
2
0
Error in prediction using sknn.mlp
49,119,718
0
python,windows,pyspark
I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors because one module has not yet been updated for the OS, frequently when another related module gets an update. I recall something similar happened to me with a djan...
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows package...
0
1
52
0
49,144,887
0
0
0
0
2
false
0
2018-03-05T21:24:00.000
0
2
0
Error in prediction using sknn.mlp
49,119,718
0
python,windows,pyspark
I finally solved the problem on windows. Here is the solution in case you face it. The Theano package was faulty. I installed the latest version from github and then it threw another error as below: RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment. In order to solve...
I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows package...
0
1
52
0
52,918,725
0
0
0
0
1
false
2
2018-03-06T08:04:00.000
0
3
0
Object center detection using Convnet is always returning center of image rather than center of object
49,126,007
0
python,computer-vision,deep-learning,keras,conv-neural-network
Since you haven't mentioned it in the details, the following suggestions (if you haven't implemented them already) could help: 1) Normalize the input data (e.g., if you are working on input images, do x_train = x_train / 255 before feeding the input to the first layer) 2) Try linear activation for the last output layer ...
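Suggestion (1) as a minimal sketch (the array name and shape are placeholders standing in for the real training images):

```python
import numpy as np

# Scale 8-bit image data into [0, 1] before feeding it to the network.
x_train = np.random.randint(0, 256, size=(4, 256, 256, 3)).astype("float32")
x_train /= 255.0
print(x_train.min(), x_train.max())
```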
I have a small dataset of ~150 images. Each image has an object (a rectangular box with white and black color) placed on the floor. The object is the same in all images but the pattern of the floor is different. The objective is to train the network to find the center of the image. Each image has dimensions 256x256x3. Train_X is ...
0
1
599
0
49,131,939
0
0
0
0
1
true
1
2018-03-06T12:30:00.000
0
1
0
Do I need to provide sentences for training Spacy NER or are paragraphs fine?
49,130,905
1.2
nlp,python-3.5,spacy
Paragraphs should be fine. Could you give an example input data point?
I am trying to train a new spaCy model to recognize references to law articles. I start with a blank model and train the ner pipe according to the example given in the documentation. The performance of the trained model is really poor, even with several thousand input points. I am trying to figure out why. One po...
0
1
444
0
49,181,090
0
0
0
0
1
false
3
2018-03-07T12:34:00.000
-2
3
0
Stop contourf interpolating values
49,152,116
-0.132549
python,matplotlib,matplotlib-basemap,contourf
Given that the question has not been updated to clarify the actual problem, I will simply answer the question as it is: no, there is no way to stop contourf from interpolating, because the whole concept of a contour plot is to interpolate the values.
I am trying to plot some 2D values on a Basemap with contourf (matplotlib). However, contourf by default interpolates intermediate values and gives a smoother image of the data. Is there any way to make contourf stop interpolating between values? I have tried adding the keyword argument interpolation='nearest' b...
0
1
6,384
0
49,273,055
0
0
0
0
1
true
0
2018-03-07T22:29:00.000
1
1
0
How to make dynamic hierarchical TensorFlow neural net?
49,162,276
1.2
python,tensorflow,neural-network,artificial-intelligence
I don't know if and how this would work in TF. But specific "dynamic" deep learning libraries exist that might be a better fit for your use case. PyTorch, for example.
I'm investigating using TensorFlow for an experimental AI algorithm with dynamic neural nets, allowing the system to scale (remove and add) layers and the width of layers. How should one go about this? And the follow-up: what if I also want to make the nets hierarchical so that they converge to two values (the classifier and ...
0
1
195
0
49,189,473
0
0
0
0
2
false
3
2018-03-08T06:22:00.000
0
3
0
How to find the optimal number of clusters using k-prototype in python
49,166,657
0
python,cluster-analysis
Most evaluation methods need a distance matrix. They will then work with mixed data, as long as you have a distance function suited to your problem. But they will not be very scalable.
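A toy sketch of such a precomputed distance matrix for mixed data (the Gower-style weighting, the features, and the within/between score are all illustrative assumptions; any evaluation method accepting a precomputed metric could consume `D`):

```python
import numpy as np

# One numeric feature (scaled to [0, 1]) and one categorical feature.
num = np.array([0.0, 0.1, 0.9, 1.0])
cat = np.array(["a", "a", "b", "b"])
labels = np.array([0, 0, 1, 1])          # a candidate clustering to score

n = len(num)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Average of a range-normalised numeric gap and a 0/1 mismatch.
        D[i, j] = (abs(num[i] - num[j]) + float(cat[i] != cat[j])) / 2.0

same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n, dtype=bool)
within = D[same & off_diag].mean()       # mean intra-cluster distance
between = D[~same].mean()                # mean inter-cluster distance
print(within, between)                   # a low within/between ratio favours this k
```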
I am trying to cluster some big data using the k-prototypes algorithm. I am unable to use the k-means algorithm as I have both categorical and numeric data. With the k-prototypes clustering method I have been able to create clusters if I define what k value I want. How do I find the appropriate number of clusters for this? ...
0
1
8,033