Dataset features (column name, dtype, observed range; for string columns the range is string length):

Column | Dtype | Min | Max
------ | ----- | --- | ---
GUI and Desktop Applications | int64 | 0 | 1
A_Id | int64 | 5.3k | 72.5M
Networking and APIs | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
Other | int64 | 0 | 1
Database and SQL | int64 | 0 | 1
Available Count | int64 | 1 | 13
is_accepted | bool | 2 classes |
Q_Score | int64 | 0 | 1.72k
CreationDate | string (length) | 23 | 23
Users Score | int64 | -11 | 327
AnswerCount | int64 | 1 | 31
System Administration and DevOps | int64 | 0 | 1
Title | string (length) | 15 | 149
Q_Id | int64 | 5.14k | 60M
Score | float64 | -1 | 1.2
Tags | string (length) | 6 | 90
Answer | string (length) | 18 | 5.54k
Question | string (length) | 49 | 9.42k
Web Development | int64 | 0 | 1
Data Science and Machine Learning | int64 | 1 | 1
ViewCount | int64 | 7 | 3.27M
Title: dynamically growing array in numba jitted functions
Q_Id: 43,869,734 · A_Id: 68,612,285 · CreationDate: 2017-05-09T12:21:00.000
Tags: python,numpy,dynamic-arrays,numba · Categories: Data Science and Machine Learning
Q_Score: 2 · Users Score: 1 · Score: 0.066568 · AnswerCount: 3 · Available Count: 2 · ViewCount: 2,018 · is_accepted: false
Question: It seems that numpy.resize is not supported in numba. What is the best way to use dynamically growing arrays with numba.jit in nopython mode? So far the best I could do is define and resize the arrays outside the jitted function, is there a better (and neater) option?
Answer: To dynamically increase the size of an existing array (and therefore do it in-place), numpy.ndarray.resize must be used instead of numpy.resize. This method is NOT implemented in Python, and is not available in Numba, so it just cannot be done.
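Since ndarray.resize is unavailable in nopython mode, a common workaround is the amortised-doubling pattern: keep a larger-than-needed buffer plus an element count, and copy into a doubled buffer only on overflow. A minimal sketch in plain NumPy follows; the assumption (not confirmed by the answer above) is that the same pattern, using only np.empty and slice assignment, also compiles under numba.njit.

```python
import numpy as np

def append_grow(buf, count, value):
    """Append value, doubling the buffer's capacity when it is full."""
    if count == len(buf):
        bigger = np.empty(2 * len(buf), buf.dtype)  # allocate double capacity
        bigger[:count] = buf                        # copy existing elements
        buf = bigger
    buf[count] = value
    return buf, count + 1

buf = np.empty(2, np.float64)   # small initial capacity
count = 0
for v in [1.0, 2.0, 3.0, 4.0, 5.0]:
    buf, count = append_grow(buf, count, v)
result = buf[:count]            # view of the valid prefix only
```

Appends are amortised O(1) because each element is copied at most O(log n) times across doublings.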
Title: Merging two DataFrames (CSV files) with different dates using Python
Q_Id: 43,871,444 · A_Id: 43,872,949 · CreationDate: 2017-05-09T13:39:00.000
Tags: python,csv,dataframe,merge · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 37 · is_accepted: false
Question: I would like to know how can I proceed in order to concatenate two csv files, here is the composition of this two files: The first one contains some datas related to water chemical parameters, these measurements are taken in different dates. The second one shows the different flow values of waste water, during a certa...
Answer: The first file is smth like: Timestamp ; Flow1 ; Flow 2 2017/02/17 00:05 ; 540 ; 0 2017/02/17 00:10 ; 535 ; 0 2017/02/17 00:15 ; 543 ; 0 2017/02/17 00:20 ; 539 ; 0 CSV file #2: Timestamp ; DOC ; Temperatute ; UV254; 2017/02/17 00:14 ; 668.9 ; 15,13 ; 239,23 2017/02/17 00:15 ; 669...
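For joining two time series whose timestamps do not line up exactly, pandas provides merge_asof, which matches each row to the nearest key in the other frame. A small sketch with made-up values in the shape of the files above:

```python
import pandas as pd

flow = pd.DataFrame({
    "Timestamp": pd.to_datetime(["2017-02-17 00:05", "2017-02-17 00:10",
                                 "2017-02-17 00:15"]),
    "Flow1": [540, 535, 543],
})
quality = pd.DataFrame({
    "Timestamp": pd.to_datetime(["2017-02-17 00:14", "2017-02-17 00:15"]),
    "DOC": [668.9, 669.0],
})

# Both frames must be sorted by the key; match each quality reading to the
# nearest flow reading within a 5-minute window.
merged = pd.merge_asof(quality, flow, on="Timestamp",
                       direction="nearest", tolerance=pd.Timedelta("5min"))
```

Rows with no flow reading inside the tolerance get NaN, which makes gaps in either file explicit rather than silently dropped.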
Title: How do I pass my input to keras?
Q_Id: 43,871,607 · A_Id: 43,871,677 · CreationDate: 2017-05-09T13:46:00.000
Tags: python,numpy,keras · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 2 · ViewCount: 218 · is_accepted: false
Question: I am currently aware that keras doesn't support list of list of numpys.. but I can't see other way to pass my input. My input to my neural network is each columns (total 45 columns) of 33 different images. The way I've currently stored it is as an list of list in which the outer list has length 45, and the inner has...
Answer: If you want to create a 'list of numpys' you can do np.array(yourlist). If you print result.shape you will see what the resulting shape is. Hope this helps!
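The np.array(yourlist) conversion the answer suggests stacks equal-length nested lists into one array whose shape reflects the nesting; a tiny illustration with hypothetical data:

```python
import numpy as np

# Stand-in for a "list of lists": 3 inner lists of equal length 4.
columns = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
arr = np.array(columns)   # outer length -> first axis, inner -> second
```

If the inner lists had unequal lengths, NumPy could not form a rectangular array, which is usually the real cause of "list of lists" input errors.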
Title: How do I pass my input to keras?
Q_Id: 43,871,607 · A_Id: 43,872,953 · CreationDate: 2017-05-09T13:46:00.000
Tags: python,numpy,keras · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 2 · ViewCount: 218 · is_accepted: false
Question: I am currently aware that keras doesn't support list of list of numpys.. but I can't see other way to pass my input. My input to my neural network is each columns (total 45 columns) of 33 different images. The way I've currently stored it is as an list of list in which the outer list has length 45, and the inner has...
Answer: You can use Input(batch_shape = (batch_size, height, width, channels)), where batch_size = 45, channels = 33 and use np.ndarray of shape (45, height, width, 33) if your backend is tensorflow
Title: extended and upright flags in SURF opencv c++ function
Q_Id: 43,878,271 · A_Id: 43,878,521 · CreationDate: 2017-05-09T19:23:00.000
Tags: python,c++,opencv,image-processing,surf · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 175 · is_accepted: true
Question: what are the equivalent flags of SURF in opencv C++ to python SURF flags extended and upright ? in python version upright flag decides whether to calculate orientation or not And extended flag gives option of whether to use 64 dim or 128 dim Is there a to do this similar operation in opencv C++ version of SURF funct...
Answer: got it ! C++: SURF::SURF(double hessianThreshold, int nOctaves=4, int nOctaveLayers=2, bool extended=true, bool upright=false )
Title: Error using Torch RNN
Q_Id: 43,881,941 · A_Id: 44,149,840 · CreationDate: 2017-05-10T00:51:00.000
Tags: python-2.7,lua,torch,luarocks · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 1 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 84 · is_accepted: true
Question: I'm following the instructions on github.com/jcjohnson/torch-rnn and have it working until the training section. When I use th train.lua -input_h5 my_data.h5 -input_json my_data.jsonI get the error Error: unable to locate HDF5 header file at /usr/local/Cellar/hdf5/1.10.0-patch1/include;/usr/include;/usr/local/opt/szip/...
Answer: Check that the header file exists and that you have the correct path. If the header file is missing you skipped the preprocess step. If the header file exists it's likely in your data directory and not in the same directory as the sample.lua code: th train.lua -input_h5 data/my_data.h5 -input_json data/my_data.json
Title: Embeddings vs text cleaning (NLP)
Q_Id: 43,896,369 · A_Id: 43,917,161 · CreationDate: 2017-05-10T15:07:00.000
Tags: python-3.x,text,nlp,embedding,data-cleaning · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 1 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 842 · is_accepted: true
Question: I am a graduate student focusing on ML and NLP. I have a lot of data (8 million lines) and the text is usually badly written and contains so many spelling mistakes. So i must go through some text cleaning and vectorizing. To do so, i considered two approaches: First one: cleaning text by replacing bad words using huns...
Answer: I post this here just to summarise the comments in a longer form and give you a bit more commentary. No sure it will answer your question. If anything, it should show you why you should reconsider it. Points about your question Before I talk about your question, let me point a few things about your approaches. Word emb...
Title: How extract vocabulary vectors from gensim's word2vec?
Q_Id: 43,904,029 · A_Id: 43,976,879 · CreationDate: 2017-05-10T23:09:00.000
Tags: python,machine-learning,gensim,word2vec,text-classification · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 2 · Score: 0.379949 · AnswerCount: 1 · Available Count: 1 · ViewCount: 1,519 · is_accepted: false
Question: I want to analyze the vectors looking for patterns and stuff, and use SVM on them to complete a classification task between class A and B, the task should be supervised. (I know it may sound odd but it's our homework.) so as a result I really need to know: 1- how to extract the coded vectors of a document using a train...
Answer: If you have trained word2vec model, you can get word-vector by __getitem__ method model = gensim.models.Word2Vec(sentences) print(model["some_word_from_dictionary"]) Unfortunately, embeddings from word2vec/doc2vec not interpreted by a person (in contrast to topic vectors from LdaModel) P/S If you have texts at the obj...
Title: Jupyter Notebook doesn't show in Dashboard (Windows 10)
Q_Id: 43,920,802 · A_Id: 43,920,803 · CreationDate: 2017-05-11T16:17:00.000
Tags: python,jupyter-notebook · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 368 · is_accepted: false
Question: My Jupyter Notebook doesn't show in the Jupyter Dashboard in Windows 10. Additionally, I get the following error in my Jupyter cmd line console: [W 00:19:39.638 NotebookApp] C:\Users\danie\Documents\Courses\Python-Data-Science-and-Machine-Learning-Bootcamp Jupyter Notebooks\Python-Data-Science-and-Machine-Learning-Boot...
Answer: The files path names are too long. Reducing the path length by reducing the number of folders and/or folder name lengths will solve the problem.
Title: IBM Watson nl-c training time
Q_Id: 43,920,923 · A_Id: 43,935,121 · CreationDate: 2017-05-11T16:23:00.000
Tags: python,ibm-cloud,ibm-watson,training-data,nl-classifier · Categories: Data Science and Machine Learning
Q_Score: 3 · Users Score: 2 · Score: 0.197375 · AnswerCount: 2 · Available Count: 2 · ViewCount: 222 · is_accepted: false
Question: I have a data-set that contains about 14,700 records. I wish to train it on ibm watson and currently i'm on trial version. What is the rough estimate about the time that the classifier will take to train? Each record of dataset contains a sentence and the second column contains the class-name.
Answer: For NLC it depends on the type of data, and quantity. There is no fixed time to when it completes, but I have seen a classifier run a training session for nearly a day. That said, normally anywhere from 30 minutes to a couple of hours. Watson conversation Intents is considerably faster (minutes). But both use differe...
Title: IBM Watson nl-c training time
Q_Id: 43,920,923 · A_Id: 43,921,011 · CreationDate: 2017-05-11T16:23:00.000
Tags: python,ibm-cloud,ibm-watson,training-data,nl-classifier · Categories: Data Science and Machine Learning
Q_Score: 3 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 2 · ViewCount: 222 · is_accepted: false
Question: I have a data-set that contains about 14,700 records. I wish to train it on ibm watson and currently i'm on trial version. What is the rough estimate about the time that the classifier will take to train? Each record of dataset contains a sentence and the second column contains the class-name.
Answer: If your operating system is UNIX, you can determine how long a query takes to complete and display results when executed using dbaccess. You can use the time command to report how much time is spent, from the beginning to the end of a query execution. Including the time to connect to the database, execute the query and...
Title: File write collisions on parallelized python
Q_Id: 43,939,316 · A_Id: 43,939,525 · CreationDate: 2017-05-12T13:44:00.000
Tags: python,file · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 678 · is_accepted: false
Question: I'm doing some research in neuroscience and I'm using python's tinydb library for keeping track of all my model training runs and the data they generate. One of the issues I realized might come up is when I try to train multiple models on a cluster. What could happen is that two threads might try to write to the tinyd...
Answer: Python processes, threads and coroutines offers synchronization primitives such as locks, rlocks, conditions and semaphores. If your threads access randomly one or more shared variables then every thread should acquire lock on this variable so that another thread couldn't access it.
Title: How to finding distance between camera and detected object using openCV in python?
Q_Id: 43,954,187 · A_Id: 43,954,917 · CreationDate: 2017-05-13T14:16:00.000
Tags: python,opencv,image-processing,distance · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: -2 · Score: -0.379949 · AnswerCount: 1 · Available Count: 1 · ViewCount: 2,354 · is_accepted: false
Question: I want to find out the distance between the camera and the people (detected using the HOG descriptor) in front of camera.I'm looking into more subtle approach rather than calibrating the camera and without knowing any distances before hand. This can fall under the scenario of an autonomous car finding the distance betw...
Answer: I am sorry but finding a distance is a metrology problem, so you need to calibrate your camera. Calibrating is a relatively easy process which is necessary for any measurements. Let's assume you only have one calibrated camera, if the orientation/position of this camera is fixed relatively to the ground plane, it is po...
Title: Google foo.bar Challenge Issue: Can't import libraries
Q_Id: 43,954,548 · A_Id: 61,691,668 · CreationDate: 2017-05-13T14:56:00.000
Tags: python,numpy · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 5 · Users Score: 3 · Score: 0.291313 · AnswerCount: 2 · Available Count: 1 · ViewCount: 4,751 · is_accepted: false
Question: I am working on a problem (doomsday_fuel) using python and I need to use matrices, so I would like to import numpy. I have solved the problem and it runs perfectly on my own computer, but Google returns the error: ImportError: No module named numpy [line 3]. The beginning of my code looks like: import fractions from fr...
Answer: Math is a standard library but still it is not working with the Foobar.
Title: An algorithm for grouping by trying without feedback
Q_Id: 43,967,808 · A_Id: 43,968,199 · CreationDate: 2017-05-14T19:07:00.000
Tags: python,algorithm,sorting,theory · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 0 · Users Score: 3 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 58 · is_accepted: true
Question: Tagged this as Python because is the most pseudo-y-code language in my opinion I'll explain graphically and the answer can be graphical/theorical too (maybe its the wrong site to post this?) Let's say I want to make an algorithm that solves a simple digital game for infants (this is not the actual context, its much mor...
Answer: Let's try the following: Suppose we have an m times n grid grid with p different colors. First we work row by row with the following algorithm: Column reduction Drag the piece at (1,1) to (1,2), then (1,2) to (1,3) and so on until you reach (1, n) Drag the piece at (1,1) the same way to (1,n-1). Continue till you reac...
Title: How to plot the histogram of image width in python?
Q_Id: 43,971,678 · A_Id: 43,971,827 · CreationDate: 2017-05-15T04:39:00.000
Tags: python,image-processing,computer-vision,ipython,opencv-python · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 194 · is_accepted: false
Question: I have some training images (more than 20 image with format .tif)that i want to plot their histogram of width in python. I will be more than happy if any one can helps.
Answer: ok i will give you the steps, but the coding has to be done by you assuming you have python installed and pip in you machine Install pillow using pip get the images in the script and calculate the width and store them in a list, you will get to know how to calculate width from the Pillow documentation Install matplotl...
Title: Music genre classification with sklearn: how to accurately evaluate different models
Q_Id: 43,972,059 · A_Id: 43,972,717 · CreationDate: 2017-05-15T05:25:00.000
Tags: python,machine-learning,scikit-learn,statistical-sampling · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 894 · is_accepted: false
Question: I'm working on a project to classify 30 second samples of audio from 5 different genres (rock, electronic, rap, country, jazz). My dataset consists of 600 songs, exactly 120 for each genre. The features are a 1D array of 13 mfccs for each song and the labels are the genres. Essentially I take the mean of each set of 13...
Answer: To evaluate a classifier's accuracy against another classifier, you need to randomly sample from the dataset for training and test. Use the test dataset to evaluate each classifier and compare the accuracy in one go. Given a dataset stored in a dataframe , split it into training and test (random sampling is better to ...
Title: How to maximize the area under -log(x) curve?
Q_Id: 43,977,734 · A_Id: 43,977,884 · CreationDate: 2017-05-15T10:58:00.000
Tags: python,r,python-2.7,equation,logarithm · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 161 · is_accepted: false
Question: I'm trying to get the x and y coordinates for which the area under the curve: y=-15.7log(x)+154.94 is maximum. I would like to compute this in R or Python. Can someone please help me to find it? Background: I have data points of sales (y) vs prices (x). I tried fitting a log curve in R: lm(formula = y ~ log(x)) which ...
Answer: That is no programming question but a mathematics question and if I get the function in your question right, the answer is "wherever the graph hits the x-axis". But I think that was not what you wanted. Maybe you want the rectangle between O(0,0) and P(x, y)? Than you still should simply use a cas and a-level mathemati...
Title: Edit a row value in datatable in spotfire
Q_Id: 43,978,775 · A_Id: 46,858,137 · CreationDate: 2017-05-15T11:50:00.000
Tags: ironpython,spotfire,rscript · Categories: Data Science and Machine Learning
Q_Score: 3 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 1,165 · is_accepted: false
Question: How do we edit a row in a datatable in spotfire? Can we do it using ironpython or R script? I have a requirement where I want to edit the values in spotfire datatable to see the effect in the respective visuals. The data table is populated using an information link (from a SQL database).
Answer: This can be done. 1-> Create function or packaged function which returns ref-cursor. 1.1-> In that update your value in table based on where clause. 2-> Once you have function ready, create informationlink on that object using parameter type single. 3-> Once you do that import information link to spotfire usi...
Title: Mattes Mutual Info basic doubts on 3D image registration
Q_Id: 43,985,976 · A_Id: 43,987,786 · CreationDate: 2017-05-15T17:59:00.000
Tags: python,image-processing,optimization,itk,image-registration · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 770 · is_accepted: false
Question: 1. Mattes Mutual Info Doubts In SimpleITK Mattes Mutual information is a similarity metric measure, is this a maximizing function or minimizing function? I have tried a 3D registration(image size : 480*480*60) with Metric Mattes Mutual Info metric and Gradient Descent Optimizer Output numofbins = 30 Optimizer stop con...
Answer: Similarity metrics in ITK usually give the cost, so the optimizers try to minimize them. Mutual information is an exception to this rule (higher MI is better), so in order to fit into the existing framework it has negative values - bigger negative number is better than small negative number, while still following the ...
Title: Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D?
Q_Id: 44,009,244 · A_Id: 44,009,737 · CreationDate: 2017-05-16T18:43:00.000
Tags: python,arrays,numpy,image-processing,tensorflow · Categories: Data Science and Machine Learning
Q_Score: 4 · Users Score: 8 · Score: 1.2 · AnswerCount: 2 · Available Count: 2 · ViewCount: 11,600 · is_accepted: true
Question: I have RGB images (32 x 32 x 3) saved as 3D numpy arrays which I use as input for my Neural Net (using tensorflow). In order to use them as an input I reshape them to a 1D np array (1 x 3072) using reshape(1,-1). When I finish training my Net I want to reshape the output back, but using reshape(32,32,3) doesn't seem to...
Answer: If you are looking to create a 1D array, use .reshape(-1), which will create a linear version of you array. If you the use .reshape(32,32,3), this will create an array of 32, 32-by-3, arrays, which is the original format described. Using '-1' creates a linear array of the same size as the number of elements in the comb...
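The flatten-and-restore round trip the answer describes is lossless as long as element order is untouched, because reshape only changes the view of the same underlying data. A minimal sketch with a synthetic image in place of real input:

```python
import numpy as np

img = np.arange(32 * 32 * 3).reshape(32, 32, 3)  # stand-in 32x32 RGB image
flat = img.reshape(1, -1)                        # (1, 3072) row vector
restored = flat.reshape(32, 32, 3)               # exact inverse of the flatten
```

Note that reshape(1, -1) yields a 2-D (1, 3072) array, not a 1-D one; reshape(-1) would give shape (3072,). Either round-trips back to (32, 32, 3) unchanged.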
Title: Python: How can I reshape 3D Images (np array) to 1D and then reshape them back correctly to 3D?
Q_Id: 44,009,244 · A_Id: 44,009,566 · CreationDate: 2017-05-16T18:43:00.000
Tags: python,arrays,numpy,image-processing,tensorflow · Categories: Data Science and Machine Learning
Q_Score: 4 · Users Score: 2 · Score: 0.197375 · AnswerCount: 2 · Available Count: 2 · ViewCount: 11,600 · is_accepted: false
Question: I have RGB images (32 x 32 x 3) saved as 3D numpy arrays which I use as input for my Neural Net (using tensorflow). In order to use them as an input I reshape them to a 1D np array (1 x 3072) using reshape(1,-1). When I finish training my Net I want to reshape the output back, but using reshape(32,32,3) doesn't seem to...
Answer: If M is (32 x 32 x 3), then .reshape(1,-1) will produce a 2d array (not 1d), of shape (1, 32*32*3). That can be reshaped back to (32,32,3) with the same sort of reshape statement. But that's reshaping the input to and from But you haven't told us what the output of your Net is like. What shape does it have? How are...
Title: Using Docker for Image training in Python (New to this)
Q_Id: 44,014,764 · A_Id: 44,054,062 · CreationDate: 2017-05-17T02:58:00.000
Tags: windows,python-3.x,docker,tensorflow · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: -1 · Score: -0.099668 · AnswerCount: 2 · Available Count: 1 · ViewCount: 237 · is_accepted: false
Question: All of my steps have worked very well up to this point. I am on a windows machine currently. I am in the root directory after using the command: docker run -it gcr.io/tensorflow/tensorflow:latest-devel then followed by a cd /tensorflow, I am now in the directory and it is time to train the images so i jused: /tensorflo...
Answer: If you're planning to use Python 3, I'd recommend docker run -it gcr.io/tensorflow/tensorflow:latest-devel-py3 (Numpy is installed for python3 in that container). Not sure why Python 3 is partially installed in the latest-devel package.
Title: How could I use TensorFlow in jupyter notebook? I install TensorFlow via python 3.5 pip already
Q_Id: 44,017,326 · A_Id: 44,020,731 · CreationDate: 2017-05-17T06:38:00.000
Tags: python-3.x,tensorflow,pip,installation,jupyter-notebook · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 1 · Users Score: 0 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 4,674 · is_accepted: true
Question: I installed tensorflow via python3.5 pip, it is in the python3.5 lib folder and I can use it perfectly on shell IDLE. I have anaconda(jupyter notebook) on my computer at the same time, however, I couldn't import tensorflow on notebook. I guess notebook was using the anaconda lib folder, not python3.5 libs. is there any...
Answer: There is a package called nb_conda that helps manage your anaconda kernels. However, when you launch Jupyter make sure that you have jupyter installed inside your conda environment and that you are launching Jupyter from that activated environment. So: Activate your conda environment that has Tensorflow installed. You...
Title: Tensorflow and Pycharm
Q_Id: 44,020,050 · A_Id: 44,022,536 · CreationDate: 2017-05-17T08:54:00.000
Tags: python,tensorflow,pycharm,cudnn · Categories: System Administration and DevOps; Data Science and Machine Learning
Q_Score: 1 · Users Score: 3 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 921 · is_accepted: true
Question: I have an issues with tensorflow on pycharm. Whenever I import tensorflow in the linux terminal, it works correctly. However, in PyCharm community 2017.1, it shows: ImportError: libcudnn.so.5: cannot open shared object file: No such file or directory Any hint on how to tackle the issue. Please note that I am using pyth...
Answer: The solution is: Run PyCharm from the console. OR add the environment variable to the IDE settings: LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Title: get pixel of image in tensorflow
Q_Id: 44,021,777 · A_Id: 44,079,737 · CreationDate: 2017-05-17T10:06:00.000
Tags: python,tensorflow,neural-network,pixel,convolution · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 594 · is_accepted: false
Question: I am new by tensorflow. I want to write a Neural network, that gets noisy images from a file and uncorrupted images from another file. then I want to correct noisy images based on the other images.
Answer: Actually, I'm trying to train a NN that get corrupted images and based on them the grand truth, remove noise from that images.It must be Network in Network, an another word pixels independent.
Title: Unpickling Error while using Word2Vec.load()
Q_Id: 44,022,180 · A_Id: 44,037,339 · CreationDate: 2017-05-17T10:23:00.000
Tags: python,gensim,word2vec · Categories: Data Science and Machine Learning
Q_Score: 2 · Users Score: 4 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 5,562 · is_accepted: true
Question: I am trying to load a binary file using gensim.Word2Vec.load(fname) but I get the error: File "file.py", line 24, in model = gensim.models.Word2Vec.load('ammendment_vectors.model.bin') File "/home/hp/anaconda3/lib/python3.6/site-packages/gensim/models/word2vec.py", line 1396, in load model = super(Word...
Answer: This would normally work, if the file was created by gensim's native .save(). Are you sure the file 'ammendment_vectors.model.bin' is complete and uncorrupted? Was it created using the same Python/gensim versions as in use where you're trying to load() it? Can you try re-creating the file?
Title: How to deal with exponent overflow of 64float precision in python?
Q_Id: 44,033,533 · A_Id: 44,035,007 · CreationDate: 2017-05-17T19:43:00.000
Tags: python,numpy · Categories: Data Science and Machine Learning
Q_Score: 2 · Users Score: 4 · Score: 1.2 · AnswerCount: 3 · Available Count: 1 · ViewCount: 2,564 · is_accepted: true
Question: I am a newbie in python sorry for the simple question. In the following code, I want to calculate the exponent and then take the log. Y=numpy.log(1+numpy.exp(1000)) The problem is that when I take the exponent of 710 or larger numbers the numpy.exp() function returns 'inf' even if I print it with 64float it prints 'inf...
Answer: You can use the function np.logaddexp() to do such operations. It computes logaddexp(x1, x2) == log(exp(x1) + exp(x2)) without explicitly computing the intermediate exp() values. This avoids the overflow. Since exp(0.0) == 1, you would compute np.logaddexp(0.0, 1000.0) and get the result of 1000.0, as expected.
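The accepted answer's trick can be checked directly: float64 overflows around exp(710), but np.logaddexp never forms the intermediate exponential, so log(1 + exp(1000)) comes out as the mathematically expected 1000.

```python
import numpy as np

# Safe only while exp(x) stays inside float64 range (x < ~710):
naive = np.log(1 + np.exp(np.float64(50.0)))

# log(exp(0) + exp(1000)) == log(1 + exp(1000)), computed without overflow:
stable = np.logaddexp(0.0, 1000.0)
```

For sums of many exponentials (e.g. log-likelihoods), the same idea generalises to scipy.special.logsumexp.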
Title: sklearn: Get Distance from Point to Nearest Cluster
Q_Id: 44,041,347 · A_Id: 44,055,484 · CreationDate: 2017-05-18T07:31:00.000
Tags: python,machine-learning,scikit-learn,cluster-analysis,data-mining · Categories: Data Science and Machine Learning
Q_Score: 2 · Users Score: 1 · Score: 0.099668 · AnswerCount: 2 · Available Count: 1 · ViewCount: 2,660 · is_accepted: false
Question: I'm using clustering algorithms like DBSCAN. It returns a 'cluster' called -1 which are points that are not part of any cluster. For these points I want to determine the distance from it to the nearest cluster to get something like a metric for how abnormal this point is. Is this possible? Or are there any alternatives...
Answer: To be closer to the intuition of DBSCAN you probably should only consider core points. Put the core points into a nearest neighbor searcher. Then search for all noise points, use the cluster label of the nearest point.
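The nearest-neighbour lookup the answer describes can be sketched with plain NumPy broadcasting (a real pipeline would use the clustered points returned by DBSCAN, and ideally only its core points; the coordinates and labels below are made up):

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [2.0, 9.0]])
labels = np.array([0, 0, 1, 1, -1])          # -1 marks DBSCAN noise

noise = X[labels == -1]
clustered = X[labels != -1]
clustered_labels = labels[labels != -1]

# Pairwise distances: (n_noise, 1, d) - (1, n_clustered, d) -> (n_noise, n_clustered)
d = np.linalg.norm(noise[:, None, :] - clustered[None, :, :], axis=2)
nearest = d.argmin(axis=1)
assigned = clustered_labels[nearest]          # nearest cluster's label
outlier_score = d.min(axis=1)                 # distance to that cluster
```

For large datasets, replace the dense distance matrix with sklearn.neighbors.NearestNeighbors or a KD-tree to avoid O(n^2) memory.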
Title: Filtering in tweepy - exact phrase
Q_Id: 44,044,773 · A_Id: 44,423,577 · CreationDate: 2017-05-18T10:06:00.000
Tags: python,tweepy · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 990 · is_accepted: false
Question: I can't get tweepy filtering to quite work how I want to. stream.filter(track=['one two' , 'three four']) I want to retweet based on a specific two word set i.e. "one two" but I'm getting retweets where the tweet has those two words, but not in order and separated i.e. "three two one" or "one three two" etc. I want twe...
Answer: the twitter api doesn't allow that. you'll have to check for each returned tweet whether or not it actually contains one of your exact phrases.
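The client-side check the answer recommends is a plain substring test applied to each incoming tweet before retweeting. A minimal sketch (the phrase list mirrors the question; in a real stream listener this would run inside the on_status callback):

```python
# The streaming track parameter matches words regardless of order, so the
# exact-phrase constraint must be enforced client-side.
phrases = ["one two", "three four"]

def contains_exact_phrase(text):
    """True only if the tweet text contains one of the phrases verbatim."""
    t = text.lower()
    return any(p in t for p in phrases)
```

Lower-casing both sides makes the match case-insensitive; for stricter matching (word boundaries, punctuation) a regex like r"\bone two\b" would be the next step.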
Title: Answering business questions with machine learning models (scikit or statsmodels)
Q_Id: 44,045,913 · A_Id: 44,049,390 · CreationDate: 2017-05-18T10:59:00.000
Tags: python,machine-learning,statistics,regression,data-science · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 200 · is_accepted: false
Question: Thanks for your help on this. This feels like a silly question, and I may be overcomplicating things. Some background information - I just recently learned some machine learning methodologies in Python (scikit and some statsmodels), such as linear regression, logistic regression, KNN, etc. I can work the steps of prep...
Answer: Why did customer service calls drop last month? It depends on what type and features of data you have to analyze and explore the data. One of the basic things is to look at correlation between features and target variable to check if you can identify any feature that can correlate with the drop of calls. So exploring ...
Title: Trying to import keras but got an error
Q_Id: 44,047,544 · A_Id: 44,048,354 · CreationDate: 2017-05-18T12:17:00.000
Tags: python · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 3 · Available Count: 1 · ViewCount: 1,406 · is_accepted: false
Question: Trying to import Keras 2.0.4 with Tensorflow 1.0.1 on Windows10 as backend, but I got the following message: AttributeError: module 'pandas' has no attribute 'computation' I've recently upgraded my pandas into version 0.20.1, is it the reason why I failed to import keras? There is a lot more information available on ...
Answer: I had a similar problem, solved it by installing an older pandas version pip install pandas==0.19.2
Title: Gensim save_word2vec_format() vs. model.save()
Q_Id: 44,051,051 · A_Id: 44,051,350 · CreationDate: 2017-05-18T14:48:00.000
Tags: python,nlp,gensim,word2vec · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 3,611 · is_accepted: false
Question: I am using gensim version 0.12.4 and have trained two separate word embeddings using the same text and same parameters. After training I am calculating the Pearsons correlation between the word occurrence-frequency and vector-length. One model I trained using save_word2vec_format(fname, binary=True) and then loaded us...
Answer: EDIT: this was intended as a comment. Don't know how to change it now, sorry correlation between the word occurrence-frequency and vector-length I don't quite follow - aren't all your vectors the same length? Or are you not referring to the embedding vectors?
Title: Python: Converting string to floats, reading floats into 2D array, if/then, reordering of rows?
Q_Id: 44,052,893 · A_Id: 44,053,256 · CreationDate: 2017-05-18T16:11:00.000
Tags: python,arrays,string · Categories: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 1 · Users Score: 1 · Score: 0.099668 · AnswerCount: 2 · Available Count: 1 · ViewCount: 352 · is_accepted: false
Question: Let me start by saying that I know nothing about Python, but I am trying to learn(mostly through struggling it seems). I've looked around this site and tried to cobble together code to do what I need it to, but I keep running into problems. Firstly, I need to convert a file of 2 columns and 512 rows of strings to float...
Answer: First Part: @njzk2 is exactly right. Simply removing the literal spaces to change from l.strip().split(' ') to l.strip().split() will correct the error, and you will see the following output for f_values: [['-91.', '0.444253325'], ['-90.', '0.883581936'], ['-89.', '-0.0912338793']] And the output for newarray shows...
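The split(' ') versus split() distinction the answer hinges on: with an explicit space argument, every single space is a separator and runs of spaces produce empty strings, which then break float(); with no argument, split() treats any run of whitespace as one separator. A sketch with one made-up data line:

```python
line = "-91.        0.444253325"

bad = line.strip().split(' ')   # runs of spaces yield empty-string fields
good = line.strip().split()     # any whitespace run is a single separator
values = [float(x) for x in good]
```

float('') raises ValueError, which is why the split(' ') version fails on multi-space-aligned columns.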
Title: ChatterBot Ubuntu Corpus Trainer
Q_Id: 44,062,679 · A_Id: 44,240,013 · CreationDate: 2017-05-19T06:15:00.000
Tags: python,chatbot · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 1 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 941 · is_accepted: true
Question: Looking to create a custom trainer for chatterbot, In the ubuntu corpus trainer, it looks as if the training is done based on all the conversation entries. I manually copy the ubuntu_dialogs.tgz to the 'data' folder. Trainer fails with error file could not be opened successfully https://github.com/gunthercox/ChatterBo...
Answer: Yes, we can , the data folder is ".\data" , which is path from where you are invoking the ubuntu_corpus_training_example.py. create a folder ubuntu_dialogs and unzip all the folders, the trainer.py looks at .\data\ubuntu_dialogs***.tsv files
Title: Finding closest value in a binary file
Q_Id: 44,075,041 · A_Id: 44,166,939 · CreationDate: 2017-05-19T16:39:00.000
Tags: python · Categories: Other; Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 40 · is_accepted: false
Question: I have a large binary file (~4 GB) containing a series of image and time stamp data. I want to find the image that most closely corresponds to a user-given time stamp. There are millions of time stamps in the file, though. In Python 2.7, using seek, read, struct.unpack, it took over 900 seconds just to read all the tim...
Answer: First attempt. It works, seemingly every time, but I don't know if it's the most efficient way: Take first and last time stamps and number of frames to calculate an average time step. Use average time step and difference between target and beginning timestamps to find approximate index. Check for approximate and 2 sur...
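The answer's estimate-then-check approach is an interpolation search. A simpler alternative with a guaranteed O(log n) bound, once the timestamps are known to be sorted, is a binary search via the stdlib bisect module; sketched here on a short hypothetical timestamp list (in the real file, each probe would seek to the frame's offset instead of indexing a list):

```python
import bisect

def closest_index(timestamps, target):
    """Index of the timestamp nearest to target; timestamps must be sorted."""
    i = bisect.bisect_left(timestamps, target)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # choose whichever neighbour is nearer to the target
    return i - 1 if target - timestamps[i - 1] <= timestamps[i] - target else i

stamps = [0.0, 1.0, 2.5, 4.0, 10.0]   # illustrative decoded timestamps
```

Interpolation search can beat this when timestamps are nearly uniform, but bisect never degrades on irregular spacing.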
Title: How-To Generate a 3D Numpy Array On-Demand for an LSTM
Q_Id: 44,076,649 · A_Id: 44,091,300 · CreationDate: 2017-05-19T18:22:00.000
Tags: python,numpy,keras,lstm,training-data · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 160 · is_accepted: false
Question: I am currently trying to use a "simple" LSTM network implemented through Keras for a summer project. Looking at the example code given, it appears the LSTM code wants a pre-generated 3D numpy array. As the dataset and the associated time interval I want to use are both rather large, it would be very prohibitive for me ...
Answer: I found the answer to this on the Keras slack from user rocketknight. Use the model.fit_generator function. Define a generator function somewhere within your main python script that "yields" a batch of data. Then call this function in the arguments of the model.fit_generator function.
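The generator fit_generator consumes yields (inputs, targets) tuples indefinitely, one batch at a time, so the full 3D array never has to exist in memory. A minimal sketch of such a generator (names and shapes are illustrative; Keras itself is not imported here):

```python
import numpy as np

def batch_generator(X, y, batch_size):
    """Yield (inputs, targets) batches forever, as fit_generator expects."""
    n = len(X)
    while True:                # Keras expects the generator to loop
        for start in range(0, n, batch_size):
            yield X[start:start + batch_size], y[start:start + batch_size]

# 10 samples, 5 timesteps, 3 features -- the (samples, timesteps, features)
# layout an LSTM layer takes.
gen = batch_generator(np.zeros((10, 5, 3)), np.zeros(10), batch_size=4)
xb, yb = next(gen)
```

In a true on-demand setting the generator body would read and assemble each batch from disk instead of slicing a preloaded array.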
Title: How to group sentences by edit distance?
Q_Id: 44,099,095 · A_Id: 44,106,903 · CreationDate: 2017-05-21T16:17:00.000
Tags: python,machine-learning,nlp,cluster-analysis,edit-distance · Categories: Data Science and Machine Learning
Q_Score: 2 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 520 · is_accepted: false
Question: I have a large set (36k sentence) of sentences (text list) and their POS tags (POS list), and I'd like to group/cluster the elements in the POS list using edit distance/Levenshtein: (e.g Sentx POS tags= [CC DT VBZ RB JJ], Senty POS tags= [CC DT VBZ RB JJ] ) are in cluster edit distance =0, while ([CC DT VBZ RB JJ], [C...
Answer: There is only a limited set of POS tags. Rather than using edit distance, compute a POS-POS similarity matrix just once. You may even want to edit this matrixes desired, e.g. to make two POS tags effectively the same, or to increase the difference of two tags. Store that in a numpy array, convert all your vectors to in...
Title: Does it make sense to talk about skip-gram and cbow when using The Glove method?
Q_Id: 44,113,128 · A_Id: 45,963,156 · CreationDate: 2017-05-22T12:40:00.000
Tags: python-3.x,word2vec,word-embedding · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 2 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 341 · is_accepted: true
Question: I'm trying different word embeddings methods, in order to pick the approache that works the best for me. I tried word2vec and FastText. Now, I would like to try Glove. In both word2vec and FastText, there is two versions: Skip-gram (predict context from word) and CBOW (predict word from context). But in Glove python pa...
Answer: Not really, skip-gram and CBOW are simply the names of the two Word2vec models. They are shallow neural networks that generate word embeddings by predicting a context from a word and vice versa, and then treating the output of the hidden layer as the vector/representation. GloVe uses a different approach, making use of...
Title: How to encode categorical with many levels on scikit-learn?
Q_Id: 44,124,471 · A_Id: 54,380,982 · CreationDate: 2017-05-23T01:40:00.000
Tags: python,machine-learning,scikit-learn · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 1,235 · is_accepted: false
Question: guys. I have a large data set (60k samples with 50 features). One of this features (which is really relevant for me) is job names. There are many jobs names that I'd like to encode to fit in some models, like linear regression or SVCs. However, I don't know how to handle them. I tried to use pandas dummy variables and ...
Answer: One another solution is that, you can do a bivariate analysis of the categorical variable with the target variable. What yo will get is a result of how each level affects the target. Once you get this you can combine those levels that have a similar effect on the data. This will help you reduce number of levels, as wel...
Title: Strip certain content of columns in multiple columns
Q_Id: 44,133,280 · A_Id: 44,134,880 · CreationDate: 2017-05-23T11:17:00.000
Tags: python,string,pandas,split · Categories: Data Science and Machine Learning
Q_Score: 0 · Users Score: 0 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 50 · is_accepted: true
Question: I am currently in the phase of data preparation and have a certain issue I would like to make easy. The content of my columns: 10 MW / color. All the columns which have this content are named with line nr. [int] or a [str] What I want to display and which is the data of interest is the color. What I did was following:...
Answer: Ok just solved the question: with df.shape I found out what the dimensions are and then started a for loop: for i in range(1,x): df[df.columns[i]]= df[df.columns[i]].str.split('/').[-1] If you have any more efficient ways let me know :)
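As written, the self-answer's `.str.split('/').[-1]` is a syntax error; the working pandas spelling is `.str[-1]` (or `.str.get(-1)`) after the split. A corrected sketch with made-up sample data in the "10 MW / color" shape described:

```python
import pandas as pd

df = pd.DataFrame({"line nr. 1": ["10 MW / color", "5 MW / blue"]})

# Take the text after the last '/' in every column, trimming whitespace.
for col in df.columns:
    df[col] = df[col].str.split("/").str[-1].str.strip()
```

Iterating over df.columns also avoids the original's hard-coded range(1, x) over positional indices.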
Title: Python pandas Several DataFrames Best Practice
Q_Id: 44,140,675 · A_Id: 44,140,770 · CreationDate: 2017-05-23T16:46:00.000
Tags: python,pandas · Categories: Data Science and Machine Learning
Q_Score: 1 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 320 · is_accepted: false
Question: I have a DataFrame with about 6 million rows of daily data that I will use to find how certain technical markers affected their respective stocks’ long term performance. I have 2 approaches, which one is recommended? Make 2 different tables, one of raw data and one (a filtered copy) containing the technical markers, t...
Answer: I think the simplest and most efficient path would be to have two tables. The reason being is that with the 1 big table your algorithm can take O(n^2) since you have to iterate n number of times for each element in your markers and then matching for each element n times for each performance. If you did the 2 table appr...
0
44,144,746
0
0
0
0
1
false
0
2017-05-23T17:08:00.000
0
1
0
How to extract cluster id from Dirichlet process in PyMC3 for grouped data?
44,141,059
0
python,process,cluster-computing,pymc3,dirichlet
If I understand you correctly, you're trying to extract which category (1 through k) a data point belongs to. However, a Dirichlet random variable only produces a probability vector. This should be used as a prior for a Categorical RV, and when that is sampled from, it will result in a numbered category.
I am using PyMC3 to cluster my grouped data. Basically, I have g vectors and would like to cluster the g vectors into m clusters. However, I have two problems. The first one is that, it seems PyMC3 could only deal with one-dimensional data but not vectors. The second problem is, I do not know how to extract the cluster...
0
1
95
0
44,207,434
0
0
0
0
1
true
0
2017-05-23T20:28:00.000
1
1
0
PySpark dataframe pipeline throws No plan for MetastoreRelation Error
44,144,421
1.2
python,apache-spark,machine-learning,pyspark,spark-dataframe
This error was due to the order of joining the 2 pyspark dataframes. I tried changing the order of join from say a.join(b) to b.join(a) and its working.
After preprocessing the pyspark dataframe , I am trying to apply pipeline to it but I am getting below error: java.lang.AssertionError: assertion failed: No plan for MetastoreRelation. What is the meaning of this and how to solve this. My code has become quite large, so I will explain the steps 1. I have 8000 colum...
0
1
1,261
0
44,147,287
0
1
0
0
1
false
3
2017-05-23T22:37:00.000
2
1
0
'_remove_dead_weakref' error when updating scikit-learn on Win10 machine
44,146,146
0.379949
python,scikit-learn
After spending a couple hours to no avail, deleted the python anaconda folder and reinstalled. Have the latest bits now and problem solved :)
I'm new to open source so appreciate any/all help. I've got notebook server 4.2.3 running on: Python 3.5.2 |Anaconda 4.2.0 (64-bit) on my Windows10 machine. When trying to update scikit-learn from 0.17 to 0.18, I get below error which I believe indicates one of the dependency files is outdated. I can't understand how...
0
1
3,110
0
44,366,690
0
0
0
0
2
true
1
2017-05-24T13:47:00.000
0
2
0
Using dummy variables for Machine Learning with more than one categorical variable
44,160,324
1.2
python,machine-learning,dummy-variable
In a case where more than one categorical variable needs to be replaced by dummies, the approach should be to encode each of the variables as dummies (as in the case of a single categorical variable) and then remove one instance of each dummy that exists for each variable in order to avoid collinearity....
I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one Categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid colinearity however...
0
1
1,175
0
44,163,726
0
0
0
0
2
false
1
2017-05-24T13:47:00.000
1
2
0
Using dummy variables for Machine Learning with more than one categorical variable
44,160,324
0.099668
python,machine-learning,dummy-variable
If there are many categorical variables and also in these variables, if there are many levels, using dummy variables might not be a good option. If the categorical variable has data in form of bins, for e.g, a variable age having data in form 10-18, 18-30, 31-50, ... you can either use Label Encoding or create a new nu...
I am looking to do either a multivariate Linear Regression or a Logistic Regression using Python on some data that has a large number of categorical variables. I understand that with one Categorical variable I would need to translate this into a dummy and then remove one type of dummy so as to avoid colinearity however...
0
1
1,175
0
44,213,304
0
0
0
0
1
false
7
2017-05-25T00:43:00.000
5
4
0
Warning from keras: "Update your Conv2D call to the Keras 2 API"
44,170,581
0.244919
python,keras
As it says, it's not an issue. It still works fine, although they might change it any day and then the code will not work. In Keras 2, Convolution2D has been replaced by Conv2D along with some changes in the parameters. Convolution* layers are renamed Conv*. Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3))
I am trying to use keras to create a CNN, but I keep getting this warning which I do not understand how to fix. Update your Conv2D call to the Keras 2 API: Conv2D(64, (3, 3), activation="relu") after removing the cwd from sys.path. Can anyone give any ideas about fixing this?
0
1
6,588
0
44,424,599
0
0
0
0
1
false
1
2017-05-25T08:20:00.000
0
2
0
attributeError:'module' object has no attribute 'MXIndexedRecordIO'
44,175,700
0
python-2.7,cpu,mxnet
@user3824903 I think to create a bin directory, you have to compile MXNet from source with option USE_OPENCV=1
I have used im2rec.py to convert "caltech101 images" into record io format: I have created "caltech.lst" succesfully using os.system('python %s/tools/im2rec.py --list=1 --recursive=1 --shuffle=1 data/caltech data/101_ObjectCategories'%MXNET_HOME) Then, when I run this : os.system("python %s/tools/im2rec.py --train-rati...
0
1
375
0
44,183,963
0
0
0
0
1
false
2
2017-05-25T15:11:00.000
0
2
0
Normalize IDs column
44,183,927
0
python,pandas,numpy,ipython,jupyter-notebook
I would go through and find the item with the smallest id in the list, set it to 1, then find the next smallest, set it to 2, and so on. edit: you are right. That would take way too long. I would just go through and set one of them to 1, the next one to 2, and so on. It doesn't matter what order the ids are in (I am gu...
I'm making a recommender system, and I'd like to have a matrix of ratings (User/Item). My problem is there are only 9066 unique items in the dataset, but their IDs range from 1 to 165201. So I need a way to map the IDs to be in the range of 1 to 9066, instead of 1 to 165201. How do I do that?
0
1
210
0
44,204,330
0
0
0
0
1
false
0
2017-05-26T14:27:00.000
0
2
0
SSAS connection from Python
44,204,086
0
python,ssas,olap
It seems Python does not support including .NET DLLs, but IronPython does. We had an MS BI automation project before with IronPython to connect to SSAS; it was a nice experience. www.mdx-helper.com
Does anyone know of a Python package to connect to SSAS multidimensional and/or SSAS tabular that supports MDX and/or DAX queries. I know of olap.xmla but that requires an HTTP connection. I am looking for a Python equivalent of olapR in R. Thanks
0
1
6,435
0
44,212,992
0
0
0
0
1
false
0
2017-05-27T01:30:00.000
1
2
0
How to save large Python numpy datasets?
44,212,063
0.099668
python,opencv,numpy,keras
As with anything regarding performance or efficiency, test it yourself. The problem with recommendations for the "best" of anything is that they might change from year to year. First, you should determine if this is even an issue you should be tackling. If you're not experiencing performance issues or storage issues, t...
I'm attempting to create an autonomous RC car and my Python program is supposed to query the live stream on a given interval and add it to a training dataset. The data I want to collect is the array of the current image from OpenCV and the current speed and angle of the car. I would then like it to be loaded into Keras...
0
1
744
0
44,216,018
0
1
0
0
1
false
4
2017-05-27T10:04:00.000
1
7
0
Random number generator that returns only one number each time
44,215,505
0.028564
python,python-3.x,random,generator
For a large number of non-repeating random numbers use an encryption. With a given key, encrypt the numbers: 0, 1, 2, 3, ... Since encryption is uniquely reversible then each encrypted number is guaranteed to be unique, provided you use the same key. For 64 bit numbers use DES. For 128 bit numbers use AES. For oth...
Does Python have a random number generator that returns only one random integer number each time when next() function is called? Numbers should not repeat and the generator should return random integers in the interval [1, 1 000 000] that are unique. I need to generate more than million different numbers and that sound...
0
1
5,066
0
45,108,482
0
0
0
0
1
true
10
2017-05-28T11:43:00.000
22
3
0
Difference between tf.nn_conv2d and tf.nn.depthwise_conv2d
44,226,932
1.2
python,tensorflow,deep-learning,conv-neural-network
I am no expert on this, but as far as I understand the difference is this: Lets say you have an input colour image with length 100, width 100. So the dimensions are 100x100x3. For both examples we use the same filter of width and height 5. Lets say we want the next layer to have a depth of 8. In tf.nn.conv2d you define...
What is the difference between tf.nn_conv2d and tf.nn.depthwise_conv2d in Tensorflow?
0
1
8,789
0
47,000,213
0
1
0
0
1
false
34
2017-05-28T13:18:00.000
8
5
0
removing newlines from messy strings in pandas dataframe cells?
44,227,748
1
python,string,pandas,split
In messy data it might be a good idea to remove all whitespace: df.replace(r'\s', '', regex=True, inplace=True).
I've used multiple ways of splitting and stripping the strings in my pandas dataframe to remove all the '\n'characters, but for some reason it simply doesn't want to delete the characters that are attached to other words, even though I split them. I have a pandas dataframe with a column that captures text from web page...
0
1
88,228
0
44,228,127
0
0
0
0
1
true
1
2017-05-28T13:37:00.000
4
1
0
Is it correct to compare score of different estimators?
44,227,908
1.2
python,scikit-learn,regression
If you have a similar pipeline to feed the same data into the models, then the metrics are comparable. You can choose the SVR Model without any doubt. By the way, it could be really interesting for you to "redevelop" this "R_squared" Metric, it could be a nice way to learn the underlying mechanic.
I am getting different score values from different estimators from scikit. SVR(kernel='rbf', C=1e5, gamma=0.1) 0.97368549023058548 Linear regression 0.80539997869990632 DecisionTreeRegressor(max_depth = 5) 0.83165426563946387 Since all regression estimators should use R-square score, I think they are comparable, i.e....
0
1
94
0
46,943,022
0
0
0
0
1
false
0
2017-05-28T14:58:00.000
0
1
0
Impute Missing Values Using K-Nearest Neighbors
44,228,698
0
python-3.x
I had seen this same exact error message, and it was because Python was confused about some other file names in the same folder that it was loading instead of library files. Try cleaning the folder, renaming your files, etc.
i'm trying to impute missing values with KNN in python so i have downloaded a package named fancyimpute that contain the methode KNN then when i want to import it i get this Error ImportError: cannot import name 'KNN' please help me
0
1
427
0
51,102,921
0
0
0
0
1
false
2
2017-05-28T23:46:00.000
2
1
0
Can I train a model in steps in Keras?
44,233,042
0.379949
python,memory-management,tensorflow,keras,theano
You can do this thing, but it will cause your training time to approach sizes that will only make the results useful for future generations. Let's consider what all we have in our memory when we train with a batch size of 1 (assuming you've only read in that one sample into memory): 1) that sample 2) the weights of you...
I've got a model in Keras that I need to train, but this model invariably blows up my little 8GB memory and freezes my computer. I've come to the limit of training just one single sample (batch size = 1) and still it blows up. Please assume my model has no mistakes or bugs and this question is not about "what is wrong...
0
1
957
0
44,246,824
0
0
0
0
1
false
0
2017-05-29T12:12:00.000
0
4
0
detect card symbol using opencv python
44,242,207
0
python,opencv
As the card symbol is at a fixed position, you may try the steps below (e.g. in OpenCV 3.2 Python): Crop the symbol at the top-left corner: image = image[h1:h2,w1:w2]. Threshold the symbol colors to black and the rest to white: thresh = mask = cv2.inRange(image,(0,0,0),(100,100,100)). Perform a contour detection: _, contours, hier...
I'm trying to detect the difference between a spade, club, diamond and hart. the number on the card is irrelevant, just the suit matters. i've tried color detection by looking at just the red or black colors, but that still leaves me with two results per color. how could i make sure i can detect each symbol individuall...
0
1
4,838
0
44,807,718
0
0
0
0
1
false
2
2017-05-29T12:40:00.000
-1
1
1
cv2.VideoCapture doesn't work within docker container
44,242,760
-0.197375
python,opencv,docker,video-capture
There might be 2 problems: 1) In your container OpenCV is not installed properly. To check that, do print(ret, frame). If they come back as (False, None), then OpenCV has not been installed properly. 2) The file you are using is corrupted. To check that, try to copy any image file (jpg) into the container and use cv2.imread to read ...
I am trying to use cv2.VideoCapture to capture image from a docker container. import cv2 vid = cv2.VideoCapture('path\to\video') ret, frame = vid.read() In terms of the video file, I have tried either mount the file with docker -v or docker cp to copy the video file into container, but both with no luck (ret returns F...
0
1
1,491
0
44,309,273
0
1
0
0
1
false
3
2017-05-29T12:42:00.000
0
1
0
Python: Finding period of a two column data
44,242,810
0
python,fft,dft
For calculating periods I would just find the peak of the Fourier-transformed data; to do that in Python, look into scipy.fft. It could be computationally intensive though.
This question seems so trivial but I didn't find any suitable answer so I am asking! Lets say I have a two column data(say, {x, sin(x)} ) X Y(X) 0.0 0.0 0.1 0.099 0.2 0.1986 How do I find the period of the function Y(X); I have some experience in Mathematica where(roughly) I just interpolate the data as a function ...
0
1
1,459
0
44,250,976
0
0
0
0
2
true
1
2017-05-29T14:17:00.000
1
3
0
Imbalanced data: undersampling or oversampling?
44,244,711
1.2
python,machine-learning,classification,random-forest,supervised-learning
Oversampling, undersampling, or oversampling the minority while undersampling the majority is a hyperparameter. Do cross-validation to see which one works best. But use a training/test/validation set.
I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class. If anybody have an advise? Thank y...
0
1
4,671
0
51,599,898
0
0
0
0
2
false
1
2017-05-29T14:17:00.000
0
3
0
Imbalanced data: undersampling or oversampling?
44,244,711
0
python,machine-learning,classification,random-forest,supervised-learning
Undersampling: Undersampling is typically performed when we have billions (lots) of data points and we don’t have sufficient compute or memory(RAM) resources to process the data. Undersampling may lead to worse performance as compared to training the data on full data or on oversampled data in some cases. In other case...
I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class. If anybody have an advise? Thank y...
0
1
4,671
0
44,291,881
0
0
0
0
1
false
0
2017-05-30T05:06:00.000
1
1
0
How to train doc2vec on AWS cluster using spark
44,253,840
0.197375
python-2.7,amazon-s3,aws-lambda,doc2vec
Gensim's Doc2Vec is not designed to distribute training over multiple-machines. It'd be a significant and complex project to adapt its initial bulk training to do that. Are you sure your dataset and goals require such distribution? You can get a lot done on a single machine with many cores & 128GB+ RAM. Note that you...
I'm using python Gensim to train doc2vec. Is there any possibility to allow this code to be distributed on AWS (s3). Thank you in advance
0
1
632
0
44,584,830
0
0
0
0
1
false
0
2017-05-30T11:07:00.000
0
1
0
CNN in keras using theano as backend
44,260,553
0
python-2.7
Create a folder dataset and then create two sub-folders train and test. Then inside Train if you wish create sub-folders with images labels (e.g. fish - holds all fish images, lion - holds lion images etc) and in test you can populate with some images. Finally train the model pointing to Dataset - > Train.
I am very new to the keras. currently working with CNN in keras using theano as backend. I would like to train my network with own images( around 25000 images),which all are in same folder and test it. How could I do that? (please help me, i am not familiar with deep learning)
0
1
70
0
44,267,916
0
0
0
0
1
false
3
2017-05-30T15:47:00.000
0
2
0
Machine Learning - test set with fewer features than the train set
44,266,677
0
python,machine-learning
The train set determines what features you can use for recognition. If you're lucky, your recognizer will just ignore unknown features (I believe NaiveBayes does), otherwise you'll get an error. So save the set of feature names you created during training, and use them during testing/recognition. Some recognizers will ...
guys. I was developing an ML model and I got a doubt. Let's assume that my train data has the following data: ID | Animal | Age | Habitat 0 | Fish | 2 | Sea 1 | Hawk | 1 | Mountain 2 | Fish | 3 | Sea 3 | Snake | 4 | Forest If I apply One-hot Encoding, it will generate the following matrix: ID | Anima...
0
1
5,636
0
44,274,041
0
0
0
0
1
false
2
2017-05-30T23:44:00.000
-1
2
0
Unable to handle NaN in pandas dataframe
44,273,555
-0.099668
python,pandas
Building on from piRSquared, a possible method of treating NaN values (if applicable to your problem) is to convert the NaN inputs to the mean of the column: df = df.fillna(df.mean())
I have a pandas dataframe with a variable, which, when I print it, shows up as mostly containing NaN. It is of dtype object. However, when I run the isnull function, it returns "FALSE" everywhere. I am wondering why the NaN values are not encoded as missing, and if there is any way of converting them to missing values ...
0
1
1,285
1
44,306,111
0
0
0
0
1
false
2
2017-05-31T06:10:00.000
0
2
0
What are the ximgproc_DisparityWLSFilter.filter() Arguments?
44,276,962
0
python,opencv,disparity-mapping
Unlike c++, Python doesn't work well with pointers. So the arguments are Filtered_disp = ximgproc_DisparityWLSFilter.filter(left_disp,left, None,right_disp) Note that it's no longer a void function in Python! I figured this out through trial and error though.
I get a ximgproc_DisparityWLSFilter from cv2.ximgproc.createDisparityWLSFilter(left_matcher), but I cannot get ximgproc_DisparityWLSFilter.filter() to work. The error I get is OpenCV Error: Assertion failed (!disparity_map_right.empty() && (disparity_map_right.depth() == CV_16S) && (disparity_map_right.channels() == ...
0
1
2,604
0
44,968,506
0
0
0
0
4
false
6
2017-05-31T17:01:00.000
5
8
0
Python Machine Learning Functions
44,290,736
0.124353
python,machine-learning
The question is really a vague one. Still, as you mentioned the machine-learning tag, I take it as a machine learning problem. In this case there is no specific model or algorithm available to decide which algorithm/function best suits your data; it's a trial-and-error process to decide which model should be best for your data. So ...
Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product? Thanks.
0
1
1,870
0
49,795,930
0
0
0
0
4
false
6
2017-05-31T17:01:00.000
1
8
0
Python Machine Learning Functions
44,290,736
0.024995
python,machine-learning
I don't think this is a perfect place to ask this kind of questions. There are some other websites where you can ask this kind of questions. For learning Machine Learning (ML), do a basic ML course and follow blogs.
Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product? Thanks.
0
1
1,870
0
52,089,251
0
0
0
0
4
false
6
2017-05-31T17:01:00.000
1
8
0
Python Machine Learning Functions
44,290,736
0.024995
python,machine-learning
If you have just started learning ML then you should first get the ideas about different scientific libraries which Python provides. Most important thing is that you have to start with basic understanding of machine learning modelling from various online material available or by doing ML course. FYI.. there is no such ...
Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product? Thanks.
0
1
1,870
0
56,123,050
0
0
0
0
4
false
6
2017-05-31T17:01:00.000
1
8
0
Python Machine Learning Functions
44,290,736
0.024995
python,machine-learning
From your question, I garner that you have a result and are trying to find the optimal algorithm to reach there. Unfortunately, as far as I'm aware, you have to compare the different algorithms in itself to understand which one has better performance. However if you only wish to obtain a suitable algorithm for your us...
Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product? Thanks.
0
1
1,870
0
44,316,105
0
0
0
0
1
true
4
2017-06-01T11:51:00.000
1
1
0
What does a tensorflow session exactly do?
44,306,765
1.2
python,machine-learning,tensorflow,gpu
TensorFlow sessions allocate ~all GPU memory on startup, so they can bypass the cuda allocator. Do not run more than one cuda-using library in the same process or weird things (like this stream executor error) will happen.
I have tensorflow's gpu version installed, as soon as I create a session, it shows me this log: I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: name: GeForce GTX TITAN Black major: 3 minor: 5 memoryClockRate (GHz) 0.98 pciBusID 0000:01:00.0 Total memory: 5.94GiB Free memo...
0
1
526
0
44,308,255
0
1
0
0
1
true
5
2017-06-01T12:58:00.000
7
1
0
matplotlib.figure.suptitle(), what does 'sup' stand for?
44,308,195
1.2
python,matplotlib
It is an abbreviation indicating a "super" title. It is a title which appears at the top of the figure, whereas a normal title only appears above a particular axes. If you only have one axes object, then there's unlikely an appreciable difference, but the difference happens when you have multiple subplots on the same f...
I understand that matplotlib.figure.suptitle() adds a title to a figure. But what does the "sup" stand for?
0
1
370
0
44,324,291
0
1
0
0
1
false
0
2017-06-02T08:05:00.000
0
1
0
word2vec - reduce RAM consumption when loading model
44,323,816
0
python,gensim,word2vec
I am not intimately familiar with the word2vec implementation in gensim but the model, once trained, should basically boil down to a dictionary of (word -> vector) pairs. This functionality is provided by the gensim.models.KeyedVectors class and is independent of the training algorithm used to derive the vectors. You c...
I have about 30 word2vec models. When loading them in a python script each consumes a few GB of RAM so it is impossible to use all of them at once. Is there any way to use the models without loading the complete model into RAM?
0
1
467
0
57,430,035
0
0
0
0
1
false
4
2017-06-02T13:06:00.000
-1
4
0
Filtering dataframe based on column value_counts (pandas)
44,329,734
-0.049958
python,pandas
l2 = ((df.val1.loc[df.val== 'Best'].value_counts().sort_index()/df.val1.loc[df.val.isin(l11)].value_counts().sort_index())).loc[lambda x : x>0.5].index.tolist()
I'm trying out pandas for the first time. I have a dataframe with two columns: user_id and string. Each user_id may have several strings, thus showing up in the dataframe multiple times. I want to derive another dataframe from this; one where only those user_ids are listed that have at least 2 or more strings associate...
0
1
9,412
0
44,433,959
0
0
0
0
1
false
0
2017-06-04T06:08:00.000
0
2
0
Specifying Multiple targets for regression in TFLearn
44,351,248
0
python,regression,tflearn
That's not how regression works. You must have only one column as a target. That's why the tensorflow API only allows one column to be the target of regression, specified with an integer.
How to specify multiple target_column in tflearn.data_utils.load_csv method. According to Tflearn docs load_csv takes target_column as integer. Tried passing my target_columns as a list in the load_csv method and as expected got a TypeError: 'list' object cannot be interpreted as an integer traceback. Any solutions for...
0
1
443
0
45,017,647
0
0
0
0
1
false
0
2017-06-04T14:27:00.000
1
1
0
Numpy array to string gives an output that if saved to a file results in a bigger file than the original image, why?
44,355,163
0.197375
python-3.x,numpy
This is likely due to the fact that typical image formats are compressed. If you open an image using e.g. scipy.ndimage.imread, the file will be decompressed and the result will be a numpy array of size (NxMx3), where N and M are the dimensions of the image and 3 represents the [R, G, B] channels. Transforming this to ...
The Operation : transforming a rgb image numpy array to string gives an output that if saved to a file also results in a bigger file than the original image, why?
0
1
15
0
44,362,389
0
0
0
0
1
false
3
2017-06-05T00:43:00.000
2
1
0
Tensorflow Slower on Python 3 vs. Python 2
44,360,273
0.379949
python,python-2.7,python-3.x,tensorflow
When operating Tensorflow from python most code to feed the computational engine with data resides in python domain. There are known differences between python 2/3 when it comes to performance on various tasks. Therefore, I'd guess that the python code you use to feed the net (or TF python layer, which is quite thick) ...
My tests show that Tensorflow GPU operations are ~6% slower on Python 3 compared to Python 2. Does anyone have any insight on this? Platform: Ubuntu 16.04.2 LTS Virtualenv 15.0.1 Python 2.7.12 Python 3.6.1 TensorFlow 1.1 CUDA Toolkit 8.0.44 CUDNN 5.1 GPU: GTX 980Ti CPU: i7 4 GHz RAM: 32 GB
0
1
1,834
0
44,379,161
0
0
1
0
1
true
0
2017-06-05T13:35:00.000
0
1
0
Does Scipy have techniques to import&export optimisation model files such as LP?
44,370,237
1.2
python,import,scipy,export,linear-programming
No, as Sascha mentioned in the comment. Use other alternatives such as cvxpy/cvxopt.
I am trying to manage problems from Scipy. So does Scipy provide techniques to import and export model files?
0
1
95
0
44,388,121
0
0
0
0
1
false
0
2017-06-06T10:43:00.000
0
1
0
Create Matrix with gaussian-distributed ellipsis in python
44,387,854
0
python,matrix,ellipse,gaussianblur
You need to draw samples from a multivariate Gaussian distribution. The function you can use is numpy.random.multivariate_normal. Your mean vector should be [40, 60]. The covariance matrix C should be 2x2. Regarding its values: C[1, 1], C[2, 2]: decide the width of the ellipse along each axis. Choose it so that 3...
I have a 100x100 Matrix with Zeros. I want to add a 10x20 ellipsis around a specific point in the Matrix - lets say at position 40,60. The Ellipsis should be filled with values from 0 to 1. (1 in the center - 0 at the edge) - The numbers should be gaussian-distributed. Maybe someone can give me a clue, how to start wit...
0
1
221
0
44,519,380
0
0
0
1
1
true
0
2017-06-06T14:24:00.000
1
1
0
pandas read_sql_query returns negative and incorrect values for Oracle Database number field containing positive values
44,392,676
1.2
python,sql,oracle,pandas,dataframe
Removing pandas and just using cx_Oracle still resulted in an integer overflow so in the SQL query I'm using: CAST(field AS NUMBER(19)) At this moment I can only guess that any field between NUMBER(11) and NUMBER(18) will require an explicit CAST to NUMBER(19) to avoid the overflow.
I'm running pandas read_sql_query and cx_Oracle 6.0b2 to retrieve data from an Oracle database I've inherited to a DataFrame. A field in many Oracle tables has data type NUMBER(15, 0) with unsigned values. When I retrieve data from this field the DataFrame reports the data as int64 but the DataFrame values have 9 or fe...
0
1
577
0
44,397,071
0
0
0
0
1
false
2
2017-06-06T18:13:00.000
2
2
0
Get a subset of data from one row of Dataframe
44,397,034
0.197375
python,pandas,dataframe,indexing
row_2 = df[['B', 'C']].iloc[1] OR # Convert column to 2xN vector, grab row 2 row_2 = list(df[['B', 'C']].apply(tuple, axis=1))[1]
Let's say I have a dataframe df with columns 'A', 'B', 'C' Now I just want to extract row 2 of df and only columns 'B' and 'C'. What is the most efficient way to do that? Can you please tell me why df.ix[2, ['B', 'C']] didn't work? Thank you!
0
1
52
0
44,403,857
0
0
0
0
1
false
4
2017-06-07T04:50:00.000
5
1
0
Deep learning using Caffe - Python
44,403,745
0.761594
python,machine-learning,neural-network,deep-learning,caffe
There is a fundamental difference between weights and input data: the training data is used to learn the weights (aka "trainable parameters") during training. Once the net is trained, the training data is no longer needed while the weights are kept as part of the model to be used for testing/deployment. Make sure this ...
I am studying deep learning and trying to implement it using CAFFE- Python. can anybody tell that how we can assign the weights to each node in input layer instead of using weight filler in caffe?
0
1
166
0
44,412,918
0
0
0
0
2
true
1
2017-06-07T10:24:00.000
4
2
0
How to convert between different color maps on OpenCV?
44,409,981
1.2
python,opencv
I think it is a little more complicated than what is suggested in the comments, since you are dealing with temperatures. You need to revert the color mapping to a temperature value image, then apply one colormap with OpenCV that you like. Going back to greyscale is not so straightforward as converting the image from BG...
I have a set of thermal images which are encoded with different types of color maps. I want to use a constant color map to make fault intelligence easier. Please guide me on how to go about this.
0
1
942
0
44,412,645
0
0
0
0
2
false
1
2017-06-07T10:24:00.000
0
2
0
How to convert between different color maps on OpenCV?
44,409,981
0
python,opencv
You can use cvtColor to convert to HSV, and then manually change the hue. After you change the hue, you can convert the color back to RGB with cvtColor.
I have a set of thermal images which are encoded with different types of color maps. I want to use a constant color map to make fault intelligence easier. Please guide me on how to go about this.
0
1
942
0
44,433,064
0
0
0
0
1
false
3
2017-06-07T17:17:00.000
0
3
0
Switching from tensorflow on python 3.6 to python 3.5
44,419,017
0
python,tensorflow,keras
I had some issues with my TensorFlow installation too. I personally used Anaconda to solve the problem. After installing Anaconda (maybe uninstall the old one if you already have one), launch an Anaconda prompt and input conda create -n tensorflow python=3.5; after that, you must activate it with activate tensorflow...
This is my first question on stackoverflow, please bear with me as I will do my best to provide as much info as possible. I have a windows 10, 6-bit processor. My end goal is to use keras within spyder. The first thing I did was update python to 3.6 and install tensorflow, which seemed to work. When I attempted to g...
0
1
2,643
0
44,421,256
0
0
0
0
1
false
1
2017-06-07T18:40:00.000
0
2
0
the difference between .bin file and .mat files
44,420,434
0
python,image-processing,tensorflow
A file-name suffix is just a suffix (which sometimes helps to get info about that file; e.g. Windows decides which tool is called when double-clicked). A suffix does not need to be correct. And of course, changing the suffix will not change the content. Every format needs its own decoder. JPG, PNG, MAT and co. To ...
Can tensorflow read a file containing normal images (for example JPG, ....), or can tensorflow only read a .bin file containing images? What is the difference between a .mat file and a .bin file? Also, when I rename the .bin file to .mat, does the data in the file change? Sorry, maybe my language is not clear, because I ...
0
1
574
0
50,867,766
0
0
0
0
1
false
0
2017-06-08T13:19:00.000
0
1
0
Is it normal to obtain different test results with different batch sizes with tensorflow
44,436,899
0
python,machine-learning,tensorflow,neural-network
This normally means that you did not set the phase_train parameter back to false when testing.
I am using tensorflow for a classification problem. I have some utility for saving and loading my network. When I restore the network, I can specify a different batch size than for training. My problem is that I am getting different test results when I restore with a different batch size. However, there is no differenc...
0
1
361
0
44,444,164
0
0
0
0
1
true
0
2017-06-08T13:38:00.000
1
1
0
What is label_keys parameter good for in a Classifier - Tensorflow?
44,437,307
1.2
python,tensorflow,tensorboard
Not in tensorboard, but the predict method can return the class names instead of numbers if you provide label_keys.
What is label_keys parameter good for in a Classifier. Can you visualize the labeled data on Tensorboard at the Embeddings section?
0
1
65
0
44,439,439
0
0
0
0
1
true
4
2017-06-08T15:05:00.000
3
2
0
Conditional summation in python
44,439,375
1.2
python,numpy
Your best bet is probably something like np.count_nonzero(x > threshold), where x is your 2-d array. As the name implies, count_nonzero counts the number of elements that aren't zero. By making use of the fact that True is 1-ish, you can use it to count the number of elements that are True.
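A minimal sketch of that one-liner on a small array (the shape and threshold here are illustrative, standing in for the 8000x7200 case):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)  # small stand-in for the 8000x7200 array
threshold = 6

# the boolean mask x > threshold is True where the condition holds;
# count_nonzero counts the True entries, replacing the double loop
count = np.count_nonzero(x > threshold)
print(count)  # values 7..11 exceed 6, so 5
```

Because True behaves like 1, `(x > threshold).sum()` gives the same result.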
I have a numpy 2d array (8000x7200). I want to count the number of cells having a value greater than a specified threshold. I tried to do this using a double loop, but it takes a lot of time. Is there a way to perform this calculation quickly?
0
1
1,618
0
44,441,810
0
0
0
0
1
false
1
2017-06-08T16:24:00.000
0
1
0
How to choose parameters for svm in sklearn
44,441,002
0
python,machine-learning,scikit-learn,svm
Yes, this is mostly a matter of experimentation -- especially as you've told us very little about your data set: separability, linearity, density, connectivity, ... all the characteristics that affect classification algorithms. Try the linear and Gaussian kernels for starters. If linear doesn't work well and Gaussian ...
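As a hedged sketch of the systematic version of that experimentation, scikit-learn's GridSearchCV can try kernels, C, and gamma jointly with cross-validation (the dataset below is synthetic, standing in for the sparse 50K-row data, and the grid values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# tiny synthetic binary dataset standing in for the real one
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# search kernel, C and gamma jointly; each combination is scored
# with 3-fold cross-validation
param_grid = {
    "kernel": ["linear", "rbf"],
    "C": [0.1, 1, 10],
    "gamma": ["scale", 0.01],
}
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Note that gamma is ignored by the linear kernel, but leaving it in the grid is harmless.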
I'm trying to use SVM from sklearn for a classification problem. I got a highly sparse dataset with more than 50K rows and binary outputs. The problem is I don't know quite well how to efficiently choose the parameters, mainly the kernel, gamma and C. For the kernels, for example, am I supposed to try all kernels and...
0
1
726
0
44,444,173
0
0
0
0
1
false
0
2017-06-08T19:23:00.000
0
2
0
Faster calculation histogram of a set of images
44,443,999
0
python-3.x,numpy,histogram
Python is among the slowest production-ready languages you can use. As you haven't posted any code, I can only provide general suggestions. They are listed in order of practicality below: Use a faster Python implementation or compiler, such as PyPy or Cython. Use existing software with your desired functionality. There's nothing wr...
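A minimal NumPy sketch of per-image histograms on a small synthetic batch (the batch size and image shape are illustrative; np.bincount on the flattened pixels avoids a Python double loop over pixels):

```python
import numpy as np

rng = np.random.default_rng(0)
# a small batch of 8-bit greyscale images, shape (N, H, W)
images = rng.integers(0, 256, size=(4, 32, 32), dtype=np.uint8)

# one 256-bin histogram per image; bincount counts pixel values directly
hists = np.stack([np.bincount(img.ravel(), minlength=256) for img in images])
print(hists.shape)        # (4, 256)
print(hists.sum(axis=1))  # each histogram sums to 32*32 = 1024 pixels
```

For 3 million images, the remaining loop over images parallelizes trivially, e.g. with multiprocessing.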
I have about 3 million images and need to calculate a histogram for each one. Right now I am using python but it is taking of lot of time. Is there any way to process the images in batches? I have NVIDIA 1080 Ti GPU cards, so maybe if there is a way to process on the GPU? I can't find any code or library to process th...
0
1
528
0
45,195,580
0
1
0
0
1
false
1
2017-06-08T20:04:00.000
0
2
0
Cannot run keras
44,444,634
0
python,machine-learning,virtualenv,keras,mnist
Do you get an error message if you just import keras? I was getting a similar error in the command line, then ran it in Spyder (using Anaconda) and it worked fine.
I want to run keras on anaconda for a convolutional neural network using mnist handwriting recognition. A day before, everything worked fine, but as I try to run the same program, I get the following error in the first line: from keras.datasets import mnist (first line of code) ModuleNotFoundError: No module named 'keras.da...
0
1
1,591
0
44,454,289
0
0
0
0
2
false
2
2017-06-09T06:55:00.000
1
3
0
How to reshape a 3D numpy array?
44,451,227
0.066568
python,numpy,deep-learning,conv-neural-network
The standard way is to resize the image such that the smaller side is equal to 224 and then crop the image to 224x224. Resizing the image to 224x224 may distort the image and can lead to erroneous training. For example, a circle might become an ellipse if the image is not a square. It is important to maintain the origi...
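A pure-NumPy sketch of the resize-then-crop preprocessing described above, using nearest-neighbour indexing for the resize (a real pipeline would typically use cv2.resize or PIL; the function name is illustrative):

```python
import numpy as np

def resize_then_crop(img, size=224):
    """Nearest-neighbour resize so the shorter side equals `size`,
    then center-crop to (size, size)."""
    h, w = img.shape[:2]
    scale = size / min(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize via integer index maps
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[rows][:, cols]
    # center crop to the target square
    top = (new_h - size) // 2
    left = (new_w - size) // 2
    return resized[top:top + size, left:left + size]

img = np.zeros((300, 400, 3), dtype=np.uint8)
print(resize_then_crop(img).shape)  # (224, 224, 3)
```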
I have a list of numpy arrays which are actually input images to my CNN. However, the size of each of my images is not consistent, and my CNN takes only images which are of dimension 224x224. How do I reshape each of my images into the given dimension? print(train_images[key].reshape(224, 224,3)) gives me an output ValueError:...
0
1
2,121
0
44,451,381
0
0
0
0
2
false
2
2017-06-09T06:55:00.000
1
3
0
How to reshape a 3D numpy array?
44,451,227
0.066568
python,numpy,deep-learning,conv-neural-network
Here are a few ways I know to achieve this: Since you're using python, you can use cv2.resize(), to resize the image to 224x224. The problem here is going to be distortions. Scale the image to adjust to one of the required sizes (W=224 or H=224) and trim off whatever is extra. There is a loss of information here. If y...
I have a list of numpy arrays which are actually input images to my CNN. However, the size of each of my images is not consistent, and my CNN takes only images which are of dimension 224x224. How do I reshape each of my images into the given dimension? print(train_images[key].reshape(224, 224,3)) gives me an output ValueError:...
0
1
2,121
0
49,260,218
0
0
0
0
1
false
0
2017-06-09T11:44:00.000
1
2
0
Finding contour in using opencv in python
44,456,932
0.099668
python-3.x,opencv,contour
The mode and method parameters of findContours() are enums with integer values. One can use either the keywords or the integer values assigned to them. This detail can be viewed via IntelliSense in Visual Studio when opencv is included in a project. Below are the values associated with each enum. MODES CV_RETR_EXTERNAL...
I think I understood the function cv2.findContours(image, mode, method) well. But I came across contours,hierarchy = cv2.findContours(thresh,2,1) in one of the OpenCV documents. I do not understand what the 2 and 1 mean here and why they have been used. Someone please explain it.
0
1
777
0
44,464,380
0
0
0
0
1
false
0
2017-06-09T18:12:00.000
1
1
0
Determining the orientation of a file in memory
44,464,315
0.197375
python,file,io,bit-manipulation
Endianness is a problem of binary files. A CSV file is a text file. The numbers are not binary numbers but ASCII characters. There is no endianness in it.
Say I want to process a CSV file. I know in Python I can call the read() function to open the file and read it in a byte at a time, from the first field in the file (i.e. the field in the top left of the file) to the last field (the field in the bottom right). My question is how I can determine the orientation of a fil...
0
1
29
0
59,876,018
0
1
0
0
1
false
4
2017-06-10T03:50:00.000
0
15
0
Price column object to int in pandas
44,469,313
0
python,pandas,ipython-notebook
This will also work, after stripping the thousands separators too and converting to float (the values have decimals, so astype(int) would fail): dframe.amount.str.replace('[$,]', '', regex=True).astype(float)
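A hedged sketch of that approach on sample data: since the values contain a dollar sign, thousands separators, and decimals, strip "[$,]" with a regex and parse as float (the column values below are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"amount": ["$3,092.44", "$15.00", "$1,200.50"]})

# remove the dollar sign and thousands separators, then parse as float;
# astype(int) would fail on the decimal part
df["amount_num"] = df["amount"].str.replace(r"[$,]", "", regex=True).astype(float)
print(df["amount_num"].tolist())  # [3092.44, 15.0, 1200.5]
```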
I have a column called amount which holds values that look like this: $3,092.44. When I do dataframe.dtypes() it returns this column as an object. How can I convert this column to type int?
0
1
12,214
0
44,471,880
0
0
0
0
1
true
0
2017-06-10T09:42:00.000
3
2
0
Creating 1D zero array in Octave
44,471,853
1.2
python,numpy,matrix,octave
zeros(n,1) works well for me in Octave.
How can we create an array with n elements? The zeros function can create only arrays of dimensions greater than or equal to 2: zeros(4), zeros([4]) and zeros([4 4]) all create a 2D zero matrix of dimensions 4x4. I have code in Python where I have used numpy.zeros(n). I wish to do something similar in Octave.
0
1
7,005