Dataset columns (listed in the order the rows below use):

  GUI and Desktop Applications       int64    0 .. 1
  A_Id                               int64    5.3k .. 72.5M
  Networking and APIs                int64    0 .. 1
  Python Basics and Environment      int64    0 .. 1
  Other                              int64    0 .. 1
  Database and SQL                   int64    0 .. 1
  Available Count                    int64    1 .. 13
  is_accepted                        bool     2 classes
  Q_Score                            int64    0 .. 1.72k
  CreationDate                       string   lengths 23 .. 23
  Users Score                        int64    -11 .. 327
  AnswerCount                        int64    1 .. 31
  System Administration and DevOps   int64    0 .. 1
  Title                              string   lengths 15 .. 149
  Q_Id                               int64    5.14k .. 60M
  Score                              float64  -1 .. 1.2
  Tags                               string   lengths 6 .. 90
  Answer                             string   lengths 18 .. 5.54k
  Question                           string   lengths 49 .. 9.42k
  Web Development                    int64    0 .. 1
  Data Science and Machine Learning  int64    1 .. 1
  ViewCount                          int64    7 .. 3.27M

The records below group each question with its answers; the Categories line lists only the 0/1 topic flags that equal 1 for that row.
Q_Id 36,275,005 | Title: Stratified sampling for Random forest -Python | CreationDate: 2016-03-29T03:43:00.000
Tags: python,scikit-learn,classification,random-forest | Categories: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | ViewCount: 1,888 | Available Count: 2
Question: I'm building a random forest classification model with the response variable split being 98%(False)-2%(True). I'm using Scikit Learn's RandomForest classifier for this. What is the best way to handle this unbalanced data and avoid oversampling?
Answer (A_Id 36,281,252, is_accepted: false, Score: 0, Users Score: 0): You can use parameter class_weight . Weights associated with classes in the form {class_label: weight} You can give more weight to your small class and find best weight using cross-validation. For example class_weight={1: 10, 0:1}. Gives more weight to class labeled 1.
Answer (A_Id 54,748,830, is_accepted: false, Score: 0, Users Score: 0): In newer versions of sklearn's random forest classifier, you can simply set class_weight="balanced".
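Both answers above point at the same knob; a minimal sketch of how class_weight is passed to scikit-learn's RandomForestClassifier (the synthetic data and the exact weights are illustrative, not from the original posts):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic 98%/2% imbalanced data, mirroring the question's setup
    X, y = make_classification(n_samples=2000, weights=[0.98, 0.02], random_state=0)

    # Manual weights, as in the first answer...
    clf = RandomForestClassifier(class_weight={0: 1, 1: 10}, random_state=0).fit(X, y)

    # ...or let sklearn derive weights from class frequencies, as in the second answer
    clf_balanced = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)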
Q_Id 36,288,578 | Title: Sklearn multi-task: Input data not 3-dimensional? | CreationDate: 2016-03-29T15:25:00.000
Tags: python,machine-learning,scikit-learn | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 511 | Available Count: 1
Question: I have one huge data matrix X, of which subsets of rows correspond to different tasks that are related but also have different idiosyncratic properties. Thus I want to train a Multi-Task model with some regularization and chose sklearn's linear_model MultiTaskElasticNet function. I am confused with the inputs of fitti...
Answer (A_Id 36,298,305, is_accepted: false, Score: 0, Users Score: 0): I don't see why you'd want X to vary for each task: the point of multitask learning is that the same feature space is used to represent instances for multiple tasks which can be mutually informative. I get that you may not have ground truth y for all instances for all tasks, though this is currently assumed in the scik...
Q_Id 36,291,392 | Title: How may I calculate Accuracy in NLTK KMeans Clustering | CreationDate: 2016-03-29T17:42:00.000
Tags: python,machine-learning,nltk,cluster-analysis,k-means | Categories: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | ViewCount: 2,455 | Available Count: 1
Question: I am trying to use NLTK's KMeans Clustering Algorithm. It is generally going fine. I want to use the Metrics package of NLTK to determine precision,recall and f measure. I searched for some examples in web and in other references but may be without a clue. If any one may kindly cite an example or reference. Thanks ...
Answer (A_Id 36,304,549, is_accepted: false, Score: 0, Users Score: 0): Precision, Recall, and thus the F-measure are inappropriate for cluster analysis. Clustering is not classification, and clusters are not classes! Common measures for clustering (if you are trying to compare with existing labels, which does not make a whole lot of sense - if you already know the classes, then use classi...
Q_Id 36,292,230 | Title: I need to find the angle between two sets of Roll and Yaw angles | CreationDate: 2016-03-29T18:25:00.000
Tags: python,math,trigonometry,angle,euler-angles | Categories: Other; Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | ViewCount: 914 | Available Count: 1
Question: I have a sensor attached to a drill. The sensor outputs orientation in heading roll and pitch. From what I can tell these are intrinsic rotations in that order. The Y axis of the sensor is parallel to the longitudinal axis of the drill bit. I want to take a set of outputs from the sensor and find the maximum change in ...
Answer (A_Id 36,292,344, is_accepted: false, Score: 0, Users Score: 0): Suppose u(1), u(2), ..., u(m), v are all unit vectors. You want to determine i such that the angle between u(i) and v is maximized. This is equivalent to finding the i such that np.dot(u(i), v) is minimized. So if you have a matrix U where the rows are the u(i), you can simply do i = np.argmin(np.dot(U, v)) to find th...
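A sketch of the dot-product trick from the answer above, assuming U holds unit vectors as rows and v is a unit vector (the random data is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    U = rng.normal(size=(5, 3))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # normalize rows to unit length
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)

    # For unit vectors, the angle is largest where the dot product is smallest
    i = np.argmin(U @ v)
    angle = np.degrees(np.arccos(np.clip(U[i] @ v, -1.0, 1.0)))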
Q_Id 36,351,403 | Title: Querying a dataframe if column and values are of different types | CreationDate: 2016-04-01T08:19:00.000
Tags: python,pandas | Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 54 | Available Count: 1
Question: I am writing a function which takes a pandas df, column name and a list of values and gives the filtered df. This function uses df.query() internally. In one specific case, I have a dataframe which has a column in which both integers and strings are present. My function should filter this df on a list whose elements ar...
Answer (A_Id 36,353,073, is_accepted: false, Score: 0, Users Score: 0): You have many otions but I think they can be summarized. I couldn't tell which one would make more sense to you without more context. Convert numeric strings to numbers If you are afraid of issues with floats, convert only integers. If you want to keep your data as is, store the converted values in a different column...
Q_Id 36,362,190 | Title: Slice error when using MultiRNNCell | CreationDate: 2016-04-01T17:17:00.000
Tags: python,tensorflow | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 4 | ViewCount: 1,345 | Available Count: 3
Question: I am using MultiRNNCell from tensorflow.models.rnn.rnn_cell. This is how declare my MultiRNNCell Code: e_cell = rnn_cell.GRUCell(self.rnn_size) e_cell = rnn_cell.MultiRNNCell([e_cell] * 2) Later on I use it from inside seq2seq.embedding_rnn_decoder as follows ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_i...
Answer (A_Id 36,371,050, is_accepted: false, Score: 0, Users Score: 0): Seem like an invalid argument into embedding_rnn_decoder. Maybe try to change enc_state: ouputs, mem_states = seq2seq.embedding_rnn_decoder(decoder_inputs, enc_state[-1], e_cell, vocab_size, output_projection=(W, b), feed_previous=False)
Answer (A_Id 36,794,050, is_accepted: false, Score: 0, Users Score: 0): I've got a problem similar to yours.{tensorflow.python.framework.errors.InvalidArgumentError: Expected size[1] in [0, 0], but got 40} I also use rnn_cell.GRUCell(self.rnn_size) I want to share my experience ,maybe it's helpful. Here is how I fixed it. I want to use gru cell and basic rnn cell ,so I adapt the programme ...
Answer (A_Id 38,411,187, is_accepted: false, Score: 0, Users Score: 0): This problem is occurring because you have doubled your GRU cell but your initial vector is not doubled. If your initial_vector size is [batch_size,50]. Then initial_vector = tf.concat(1,[initial_vector,initial_vector]) Now input this to decoder as initial vector.
Q_Id 36,363,502 | Title: spark python product top 5 numbers from a file | CreationDate: 2016-04-01T18:34:00.000
Tags: python,apache-spark | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 54 | Available Count: 1
Question: total noob question. I have a file that contains a number on each line, there are approximately 5 millions rows, each row has a different number, how do i find the top 5 values in the file using spark and python.
Answer (A_Id 36,363,565, is_accepted: false, Score: 0.197375, Users Score: 1): You distribute the data you read among nodes. Every node finds it's 5 local maximums. You combine all the local maximums and you keep the 5 max of them, which is the answer.
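PySpark's RDD API ships a helper that performs exactly this local-maxima-then-merge reduction; a minimal sketch, assuming a local Spark installation and a hypothetical file numbers.txt with one integer per line:

    from pyspark import SparkContext

    sc = SparkContext(appName="top5")
    # Each partition finds its own candidates; top() merges them on the driver.
    top5 = sc.textFile("numbers.txt").map(int).top(5)
    print(top5)
    sc.stop()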
Q_Id 36,365,990 | Title: How to delete an object from a numpy array without knowing the index | CreationDate: 2016-04-01T21:18:00.000
Tags: python,list,numpy | Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 32 | AnswerCount: 5 | ViewCount: 51,262 | Available Count: 1
Question: Is it possible to delete an object from a numpy array without knowing the index of the object but instead knowing the object itself? I have seen that it is possible using the index of the object using the np.delete function, but I'm looking for a way to do it having the object but not its index. Example: [a,b,c,d,e,f]...
Answer (A_Id 53,074,632, is_accepted: false, Score: 0, Users Score: 0): arr = np.array(['a','b','c','d','e','f']) Then arr = [x for x in arr if arr != 'e']
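Note that the comprehension in the answer above compares the whole array instead of each element (arr != 'e' would need to be x != 'e', and it also returns a list rather than an array); the usual NumPy idiom is a boolean mask, sketched here:

    import numpy as np

    arr = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
    arr = arr[arr != 'e']   # boolean mask keeps every element except 'e'
    # np.delete(arr, np.where(arr == 'e')) is an equivalent spelling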
Q_Id 36,380,183 | Title: Sklearn PCA is pca.components_ the loadings? | CreationDate: 2016-04-03T00:16:00.000
Tags: python,scikit-learn,pca | Categories: Data Science and Machine Learning | Q_Score: 8 | AnswerCount: 2 | ViewCount: 9,712 | Available Count: 2
Question: Sklearn PCA is pca.components_ the loadings? I am pretty sure it is, but I am trying to follow along a research paper and I am getting different results from their loadings. I can't find it within the sklearn documentation.
Answer (A_Id 61,442,480, is_accepted: false, Score: 0, Users Score: 0): This previous answer is mostly correct except about the loadings. components_ is in fact the loadings, as the question asker originally stated. The result of the fit_transform function will give you the principal components (the transformed/reduced matrix).
Answer (A_Id 36,386,315, is_accepted: true, Score: 1.2, Users Score: 13): pca.components_ is the orthogonal basis of the space your projecting the data into. It has shape (n_components, n_features). If you want to keep the only the first 3 components (for instance to do a 3D scatter plot) of a datasets with 100 samples and 50 dimensions (also named features), pca.components_ will have shape ...
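A sketch of the shapes the accepted answer describes, with random data standing in for the asker's matrix:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.default_rng(0).normal(size=(100, 50))  # 100 samples, 50 features

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)    # transformed/reduced data: shape (100, 3)
    print(pca.components_.shape)     # basis of the projection space: (3, 50)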
Q_Id 36,380,696 | Title: How to cancel the huge negative effect of my training data distribution on subsequent neural network classification function? | CreationDate: 2016-04-03T01:36:00.000
Tags: python,machine-learning,neural-network | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 82 | Available Count: 1
Question: I need to train my network on a data that has a normal distribution, I've noticed that my neural net has a very high tendency to only predict the most occurring class label in a csv file I exported (comparing its prediction with the actual label). What are some suggestions (except cleaning the data to produce an even...
Answer (A_Id 36,381,235, is_accepted: true, Score: 1.2, Users Score: 0): Assuming the NN is trained using mini-batches, it is possible to simulate (instead of generate) an evenly distributed training data by making sure each mini-batch is evenly distributed. For example, assuming a 3-class classification problem and a minibatch size=30, construct each mini-batch by randomly selecting 10 sa...
Q_Id 36,394,194 | Title: Pandas indexing confusion | CreationDate: 2016-04-04T03:38:00.000
Tags: python,pandas | Categories: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | ViewCount: 432 | Available Count: 1
Question: While looking at indexing in pandas, I had some questions which should be simple enough. If df is a sufficiently long DataFrame, then df[1:2] gives the second row, however, df[1] gives an error and df[[1]] gives the second column. Why is that?
Answer (A_Id 36,395,078, is_accepted: false, Score: 0.099668, Users Score: 1): Use df.iloc[1] to select the second row of the dataframe (it uses zero based indexing). To select the second column, use df.iloc[:, 1] (the : is slice notation to select all rows).
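A sketch of the positional indexing the answer recommends, on a small illustrative frame:

    import pandas as pd

    df = pd.DataFrame({"a": [10, 20, 30], "b": [1, 2, 3]})

    second_row = df.iloc[1]      # second row (positions are zero-based)
    second_col = df.iloc[:, 1]   # all rows, second column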
Q_Id 36,415,572 | Title: Can the model object for a learner be exported with joblib? | CreationDate: 2016-04-04T19:00:00.000
Tags: python,orange | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 61 | Available Count: 1
Question: I'm evaluating orange as a potential solution to helping new entrants into data science to get started. I would like to have them save out model objects created from different algorithms as pkl files similar to how it is done in scikit-learn with joblib or pickle.
Answer (A_Id 36,516,264, is_accepted: false, Score: 0, Users Score: 0): I don't understand what "exported with joblib" refers to, but you can save trained Orange models by pickling them, or with Save Classifier widget if you are using the GUI.
Q_Id 36,431,659 | Title: filter numpy array of datetimes by frequency of occurance | CreationDate: 2016-04-05T16:09:00.000
Tags: python,datetime,numpy,pandas,filtering | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 5 | ViewCount: 964 | Available Count: 2
Question: I have an array of over 2 million records, each record has a 10 minutes resolution timestamp in datetime.datetime format, as well as several other values in other columns. I only want to retain the records which have timestamps that occur 20 or more times in the array. What's the fastest way to do this? I've got plen...
Answer (A_Id 36,431,809, is_accepted: false, Score: 0, Users Score: 0): Sort your array Count contiguous occurrences by going through it once, & filter for frequency >= 20 The running time is O(nlog(n)) whereas your list comprehension was probably O(n**2)... that makes quite a difference on 2 million entries. Depending on how your data is structured, you might be able to sort only the ax...
Answer (A_Id 36,454,679, is_accepted: false, Score: 0, Users Score: 0): Thanks for all of your suggestions. I ended up doing something completely different with dictionaries in the end and found it much faster for the processing that I required. I created a dictionary with a unique set of timestamps as the keys and empty lists as the values and then looped once through the unordered list (...
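Both the sort-and-count answer and the asker's dictionary workaround amount to counting duplicates in one pass; a NumPy spelling of the same idea, sketched on fabricated data (integer codes stand in for the 10-minute timestamps, and the threshold in the original question is 20):

    import numpy as np

    stamps = np.array([1, 1, 2, 3, 3, 3, 2, 1])

    values, inverse, counts = np.unique(stamps, return_inverse=True, return_counts=True)
    kept = stamps[counts[inverse] >= 2]   # keep entries whose timestamp occurs often enough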
Q_Id 36,470,440 | Title: Indexing a Numpy Array (4 dimensions) | CreationDate: 2016-04-07T08:17:00.000
Tags: python,arrays,numpy,matrix,indexing | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 96 | Available Count: 1
Question: I have a numpy array which is (1, 2048, 1, 1). I need to assign the first two dimensions to another numpy array which is (1, 2048), but I am confused on how to index it correctly. Hope you can help!
Answer (A_Id 36,471,599, is_accepted: true, Score: 1.2, Users Score: 0): I solved it by using np.squeeze(x) to remove the singleton dimensions.
Q_Id 36,480,233 | Title: State Estimation of Steady Kalman Filter | CreationDate: 2016-04-07T15:08:00.000
Tags: python,math,simulation,kalman-filter | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 764 | Available Count: 2
Question: I am working with discrete Kalman Filter on a system. x(k+1)=A_k x(k)+B_k u(k) y(k)=C_k x(k) I have estimated the state from the available noised y(k), which one is generated from the same system state equations with Reference Trajectory of the state. Then I have tested it with wrong initial state x0 and a big initia...
Answer (A_Id 36,484,420, is_accepted: false, Score: 0, Users Score: 0): Steady state KF requires the initial state matches the steady state covariance. Otherwise, the KF could diverge. You can start using the steady state KF when the filter enters the steady state. The steady state Kalman filter can be used for systems with multiple dimension state.
Answer (A_Id 36,666,450, is_accepted: true, Score: 1.2, Users Score: 1): Let me first simplify the discussion to a filter with a fixed transition matrix, A rather then A_k above. When the Kalman filter reaches steady-state in this case, one can extract the gains and make a fixed-gain filter that utilizes the steady-state Kalman gains. That filter is not a Kalman filter, it is a fixed-gain...
Q_Id 36,482,154 | Title: Ignore all mouse clicks on a matplotlib plot | CreationDate: 2016-04-07T16:32:00.000
Tags: python,matplotlib,event-handling,mouseevent | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 175 | Available Count: 1
Question: I've recently built a python script that interacts with an Arduino and a piece of hardware that uses LIDAR to map out a room. Everything works great, but anytime you click on the plot that is generated with maptotlib, the computer freaks out and crashes the script that is running. This is partly because I was given a $...
Answer (A_Id 36,489,509, is_accepted: false, Score: 0.099668, Users Score: 1): I feel that this might be more easily resolved by altering the hardware - can you temporarily unplug the mouse, or tape over the track pad to stop people fiddling with it? I suggest this because your crashing script will always process mouse-clicks in some way, and if you don't know what's causing the crashes then you ...
Q_Id 36,519,225 | Title: Already trained HMM model for word recognition | CreationDate: 2016-04-09T16:01:00.000
Tags: python,speech-recognition,cmusphinx,htk,autoencoder | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 354 | Available Count: 1
Question: I've implemented a phoneme classifier using an autoencoder (Given an audio file array it returns all the recognized phonemes). I want to extend this project so that word recognition is possible. Does there exist an already trained HMM model (in English) that will recognize a word given a list of phonemes? Thanks everyb...
Answer (A_Id 36,650,716, is_accepted: true, Score: 1.2, Users Score: 0): I am not aware of any decoder that could help you. Speech recognition software does not work this way. Usually such thing requires custom implementation for dynamic beam search. That is not a huge task, maybe 100 lines of code. It also depends on what your phonetic decoder produces. Is it phonetic lattice (ideally) or ...
Q_Id 36,532,089 | Title: pixel value change after image rotate | CreationDate: 2016-04-10T16:05:00.000
Tags: python,opencv,rotation | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 1,412 | Available Count: 2
Question: Is it possible the value pixel of image is change after image rotate? I rotate an image, ex, I rotate image 13 degree, so I pick a random pixel before the image rotate and say it X, then I brute force in image has been rotate, and I not found pixel value as same as X. so is it possible the value pixel can change after ...
Answer (A_Id 36,532,299, is_accepted: false, Score: 0.197375, Users Score: 2): Yes, it is possible for the initial pixel value not to be found in the transformed image. To understand why this would happen, remember that pixels are not infinitely small dots, but they are rectangles with horizontal and vertical sides, with small but non-zero width and height. After a 13 degrees rotation, these rect...
Answer (A_Id 36,532,213, is_accepted: true, Score: 1.2, Users Score: -1): If you just rotate the same image plane the image pixels will remain same. Simple maths
Q_Id 36,561,231 | Title: de-Bazel-ing TensorFlow Serving | CreationDate: 2016-04-11T23:47:00.000
Tags: python,tensorflow,tensorflow-serving | Categories: System Administration and DevOps; Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 1 | ViewCount: 443 | Available Count: 1
Question: While I admire, and am somewhat baffled by, the documentation's commitment to mediating everything related to TensorFlow Serving through Bazel, my understanding of it is tenuous at best. I'd like to minimize my interaction with it. I'm implementing my own TF Serving server by adapting code from the Inception + TF Servi...
Answer (A_Id 37,579,943, is_accepted: false, Score: 0, Users Score: 0): You are close, you need to update the environment as they do in this script .../serving/bazel-bin/tensorflow_serving/example/mnist_export I printed out the environment update, did it manually export PYTHONPATH=... then I was able to import tensorflow_serving
Q_Id 36,575,776 | Title: Data Preprocessing Python | CreationDate: 2016-04-12T14:09:00.000
Tags: python,pandas,machine-learning | Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 1,399 | Available Count: 1
Question: I have a DataFrame in Python and I need to preprocess my data. Which is the best method to preprocess data?, knowing that some variables have huge scale and others doesn't. Data hasn't huge deviance either. I tried with preprocessing.Scale function and it works, but I'm not sure at all if is the best method to proceed ...
Answer (A_Id 70,480,217, is_accepted: false, Score: 0, Users Score: 0): always split your data to train and test split to prevent overfiting. if some of your features has big scale and some doesnt you should standard the data.make sure to sandard the data only on the train set not to couse overfiting. you also have to look for missing datas and replace or remove them. if less than 0.5% o...
Q_Id 36,582,318 | Title: Numpy group scalars into arrays | CreationDate: 2016-04-12T19:26:00.000
Tags: python,arrays,numpy | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 71 | Available Count: 1
Question: I have a numpy array U with shape (20, 50): 20 spatial points, in a space of 50 dimensions. How can I transform it into a (20, 1, 50) array, i.e. 20 rows, 1 column, and each element is a 50 dimension point? Kind of encapsulating each row as a numpy array. Context The point is that I want to expand the array along the c...
Answer (A_Id 36,582,371, is_accepted: true, Score: 1.2, Users Score: 1): You can do U[:, None, :] to add a new dimension to the array.
Q_Id 36,604,460 | Title: Python function such as max() doesn't work in pyspark application | CreationDate: 2016-04-13T16:32:00.000
Tags: python,pyspark | Categories: Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 2 | ViewCount: 2,764 | Available Count: 1
Question: Python function max(3,6) works under pyspark shell. But if it is put in an application and submit, it will throw an error: TypeError: _() takes exactly 1 argument (2 given)
Answer (A_Id 59,394,797, is_accepted: false, Score: 0.099668, Users Score: 1): If you get this error even after verifying that you have NOT used from pyspark.sql.functions import *, then try the following: Use import builtins as py_builtin And then correspondingly call it with the same prefix. Eg: py_builtin.max() *Adding David Arenburg's and user3610141's comments as an answer, as that is what...
Q_Id 36,606,390 | Title: In python, If I perform an fft on complex data, then irfft only the positive frequencies, how does that affect the data? | CreationDate: 2016-04-13T18:15:00.000
Tags: python,numpy,fft,ifft | Categories: Other; Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 581 | Available Count: 1
Question: So I am trying to perform a frequency shift on a set of real valued points. In order to achieve a frequency shift, one has to multiply the data by a complex exponential, making the resulting data complex. If I multiply by just a cosine I get results at both the sum and difference frequencies. I want just the sum or th...
Answer (A_Id 36,609,298, is_accepted: true, Score: 1.2, Users Score: 1): What you are doing is perfectly fine. You are generating the analytic signal to accommodate the negative frequencies in the same way a discrete Hilbert transform would. You will have some scaling issues - you need to double all the non-DC and non-Nyquist signals in the real frequency portion of the FFT results. Some ...
Q_Id 36,606,931 | Title: How to set in pandas the first column and row as index? | CreationDate: 2016-04-13T18:43:00.000
Tags: python,python-3.x,pandas | Categories: Data Science and Machine Learning | Q_Score: 63 | AnswerCount: 4 | ViewCount: 160,606 | Available Count: 1
Question: When I read in a CSV, I can say pd.read_csv('my.csv', index_col=3) and it sets the third column as index. How can I do the same if I have a pandas dataframe in memory? And how can I say to use the first row also as an index? The first column and row are strings, rest of the matrix is integer.
Answer (A_Id 69,174,336, is_accepted: false, Score: 0, Users Score: 0): Maybe try df = pd.read_csv(header = 0)
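The quoted reply is incomplete (read_csv also needs a file path); for a frame already in memory, the usual tool is set_index, and at read time the same is spelled with index_col and header. A sketch on an illustrative frame:

    import pandas as pd

    df = pd.DataFrame({"name": ["x", "y"], "v1": [1, 2], "v2": [3, 4]})

    # Promote the first column to the row index
    df = df.set_index(df.columns[0])

    # At read time the equivalent is: pd.read_csv("my.csv", index_col=0, header=0)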
Q_Id 36,615,987 | Title: Suggestions on Feature selection techniques? | CreationDate: 2016-04-14T07:05:00.000
Tags: python-3.x,machine-learning,data-analysis,feature-selection,data-science | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 3 | ViewCount: 541 | Available Count: 1
Question: Blockquote I am a student and beginner in Machine Learning. I want to do feature selection of columns. My dataset is 50000 X 370 and it is a binary classification problem. First i removed the columns with std.deviation = 0, then i removed duplicate columns, After that i checked out top 20 features with highest RO...
Answer (A_Id 36,617,627, is_accepted: false, Score: 0, Users Score: 0): You are already doing a lot of preprocessing. The only additional step I recommend is to normalize the values after PCA. Then your data should be ready to be fed into your learning algorithm. Or do you want to avoid PCA? If the correlation between your features is not too strong, this might be ok. Then skip PCA and jus...
Q_Id 36,627,362 | Title: Converting a matrix into an image in Python | CreationDate: 2016-04-14T15:17:00.000
Tags: python,image,numpy,matrix,matplotlib | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 617 | Available Count: 1
Question: I would like to save a numpy matrix as a .png image, but I cannot do so using the matplotlib since I would like to retain its original size (which apperently the matplotlib doesn't do so since it adds the scale and white background etc). Anyone knows how I can go around this problem using numpy or the PIL please? Thank...
Answer (A_Id 36,628,839, is_accepted: true, Score: 1.2, Users Score: 1): Solved using scipy library import scipy.misc ...(code) scipy.misc.imsave(name,array,format) or scipy.misc.imsave('name.ext',array) where ext is the extension and hence determines the format at which the image will be stored.
Q_Id 36,630,260 | Title: RDD from label Array and data Array in python/spark | CreationDate: 2016-04-14T17:40:00.000
Tags: python,apache-spark,pyspark | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 1,437 | Available Count: 1
Question: I have two python arrays of the same length. They are generated from reading two separate text files. One represents labels; let it be called "labelArray". The other is an array of data arrays; let it be called "dataArray". I want to turn them into an RDD object of LabeledPoint. How can I do this?
Answer (A_Id 36,644,976, is_accepted: false, Score: 0.099668, Users Score: 1): Spark have a function takeSample which can merge two RDD in to an RDD.
Q_Id 36,639,002 | Title: G++ not detected | CreationDate: 2016-04-15T05:37:00.000
Tags: python,machine-learning,g++,theano | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 112 | Available Count: 1
Question: I am working on some neural networks. But my dataset is same 95 features and about 120 datasets. So while importing theano i get warning g++ not detected and it will degrade the performance. Do this will effect even a small dataset? I will have a 2-3 hidden layers. My shape of neural network will be (95, 200,200, 4) ...
Answer (A_Id 38,653,850, is_accepted: false, Score: 0, Users Score: 0): On Windows, you need to install mingw to support g++. Usually, it is advisable to use Anaconda distribution to install Python. Theano works with Python3.4 or older versions. You can use conda install command to install mingw.
Q_Id 36,647,169 | Title: Python: Deep neural networks | CreationDate: 2016-04-15T12:27:00.000
Tags: python,machine-learning,neural-network,deep-learning,nolearn | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 473 | Available Count: 1
Question: I am currently working on some project related to machine learning. I extracted some features from the object. So I train and test that features with NB, SVM and other classification algorithms and got result about 70 to 80 % When I train the same features with neural networks using nolearn.dbn and then test it I got a...
Answer (A_Id 36,647,234, is_accepted: false, Score: 0.197375, Users Score: 1): Try increasing the number of hidden units and the learning rate. The power of neural networks comes from the hidden layers. Depending on the size of your dataset, the number of hidden layers can go upto a few thousands. Also, please elaborate on the kind, and number of features you're using. If the feature set is small...
Q_Id 36,667,548 | Title: How to create a series of numbers using Pandas in Python | CreationDate: 2016-04-16T17:45:00.000
Tags: python,python-3.x,pandas,range,series | Categories: Data Science and Machine Learning | Q_Score: 27 | AnswerCount: 5 | ViewCount: 69,375 | Available Count: 1
Question: I am new to python and have recently learnt to create a series in python using Pandas. I can define a series eg: x = pd.Series([1, 2, 3, 4, 5]) but how to define the series for a range, say 1 to 100 rather than typing all elements from 1 to 100?
Answer (A_Id 68,226,844, is_accepted: false, Score: 0.039979, Users Score: 1): try pd.Series([0 for i in range(20)]). It will create a pd series with 20 rows
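The quoted answer builds 20 zeros; for the asker's actual 1-to-100 case a range feeds the constructor directly. A minimal sketch:

    import pandas as pd

    s = pd.Series(range(1, 101))   # values 1..100 without typing them out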
Q_Id 36,668,467 | Title: Change default GPU in TensorFlow | CreationDate: 2016-04-16T19:06:00.000
Tags: python,tensorflow | Categories: Data Science and Machine Learning | Q_Score: 17 | AnswerCount: 4 | ViewCount: 32,954 | Available Count: 1
Question: Based on the documentation, the default GPU is the one with the lowest id: If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default. Is it possible to change this default from command line or one line of code?
Answer (A_Id 60,068,593, is_accepted: false, Score: 0, Users Score: 0): If you want to run your code on the second GPU,it assumes that your machine has two GPUs, You can do the following trick. open Terminal open tmux by typing tmux (you can install it by sudo apt-get install tmux) run this line of code in tmux: CUDA_VISIBLE_DEVICES=1 python YourScript.py Note: By default, tensorflow us...
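The same device masking can be done from one line of Python instead of the shell, as long as it runs before TensorFlow touches the GPUs; a sketch:

    import os

    os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # expose only the second GPU

    import tensorflow as tf   # must come after the environment variable is set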
Q_Id 36,685,347 | Title: Ignoring non-numerical string values in pandas dataframe | CreationDate: 2016-04-18T04:11:00.000
Tags: python,pandas | Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 9 | AnswerCount: 3 | ViewCount: 17,034 | Available Count: 1
Question: I have a DataFrame in which a column might have three kinds of values, integers (12331), integers as strings ('345') or some other string ('text'). Is there a way to drop all rows with the last kind of string from the dataframe, and convert the first kind of string into integers? Or at least some way to ignore the rows...
Answer (A_Id 36,686,758, is_accepted: false, Score: 0.066568, Users Score: 1): you can use df._get_numeric_data() directly.
Q_Id 36,687,929 | Title: similarity measure scikit-learn document classification | CreationDate: 2016-04-18T07:35:00.000
Tags: python-2.7,scikit-learn,text-classification | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 350 | Available Count: 1
Question: I am doing some work in document classification with scikit-learn. For this purpose, I represent my documents in a tf-idf matrix and feed a Random Forest classifier with this information, works perfectly well. I was just wondering which similarity measure is used by the classifier (cosine, euclidean, etc.) and how I ca...
Answer (A_Id 36,693,072, is_accepted: true, Score: 1.2, Users Score: 0): As with most supervised learning algorithms, Random Forest Classifiers do not use a similarity measure, they work directly on the feature supplied to them. So decision trees are built based on the terms in your tf-idf vectors. If you want to use similarity then you will have to compute a similarity matrix for your docu...
Q_Id 36,721,348 | Title: ./build/tools/caffe: No such file or directory | CreationDate: 2016-04-19T14:27:00.000
Tags: bash,python-2.7,machine-learning,neural-network,deep-learning | Categories: System Administration and DevOps; Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | ViewCount: 7,193 | Available Count: 2
Question: I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset ./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt But I received this error bas...
Answer (A_Id 36,728,171, is_accepted: false, Score: 0.379949, Users Score: 4): Follow the below instructions and see if it works: Open a terminal cd to caffe root directory Make sure the file caffe exists by listing them using ls ./build/tools If the file is not present, type make. Running step 3 will list the file now. Type ./build/tools/caffe, No such file error shouldn't get triggered this ti...
Answer (A_Id 36,724,914, is_accepted: true, Score: 1.2, Users Score: 2): You should specify absolute paths to all your files and commands, to be on the safer side. If /home/user/build/tools/caffe train still doesn't work, check if you have a build directory in your caffe root. If not, then use /home/user/tools/caffe train instead.
Q_Id 36,725,361 | Title: scanning plot through a large data file using python | CreationDate: 2016-04-19T17:25:00.000
Tags: python-2.7,matplotlib,plot | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 373 | Available Count: 1
Question: I have a large (10-100GB) data file of 16-bit integer data, which represents a time series from a data acquisition device. I would like to write a piece of python code that scans through it, plotting a moving window of a few seconds of this data. Ideally, I would like this to be as continuous as possible. The data is s...
Answer (A_Id 44,015,767, is_accepted: false, Score: 0, Users Score: 0): Something that has worked for me in a similar problem (time varying heat-maps) was to run a batch job of producing several thousands such plots over night, saving each as a separate image. At 10s a figure, you can produce 3600 in 10h. You can then simply scan through the images which could provide you with the insight ...
Q_Id 36,738,514 | Title: Implementations of algorithms without imputation of missing values | CreationDate: 2016-04-20T08:50:00.000
Tags: algorithm,python-2.7,machine-learning,scikit-learn,missing-data | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 76 | Available Count: 1
Question: I would like to know if there are any implementations of machine learning algorithms in python which can work even if there are missing values in the dataset. Please note that I don't want algorithms imputing the missing values first.(I could have done that using the Imputer package ). I would like to know about the im...
Answer (A_Id 36,814,562, is_accepted: false, Score: 0, Users Score: 0): AFAIK scikit-learn doesn't have ML algorithms that can work with missing values without preprocessing them first. R does though.
Q_Id 36,746,071 | Title: Building your own NLP API | CreationDate: 2016-04-20T13:54:00.000
Tags: python,node.js,nlp,chatbot | Categories: Data Science and Machine Learning | Q_Score: 6 | AnswerCount: 3 | ViewCount: 2,286 | Available Count: 1
Question: I'm building a chatbot and I'm new to NLP. (api.ai & AlchemyAPI are too expensive for my use case. And wit.ai seems to be buggy and constantly changing at the moment.) For the NLP experts, how easily can I replicate their services locally? My vision so far (with node, but open to Python): entity extraction via Stanfor...
Answer (A_Id 54,928,184, is_accepted: false, Score: 0, Users Score: 0): Two things to think about are: How are you planning on handling the generation side of things? Entity extraction and classification are going to be useful for the Natural language understanding (NLU) side of things, but generation can be tricky in itself. Another thing to think about is that the training and developmen...
Q_Id 36,746,446 | Title: Use error message with numpy.testing.assert_raises() | CreationDate: 2016-04-20T14:09:00.000
Tags: python,unit-testing,numpy | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 173 | Available Count: 1
Question: Contrary to np.testing.assert_equal(), np.testing.assert_raises() does not accept an err_msg parameter. Is there a clean way to display an error message when this assert fails? More generally, why do some assert_* methods accept this parameter, while some others don't?
Answer (A_Id 36,749,915, is_accepted: false, Score: 0, Users Score: 0): These functions are implemented in numpy/testing/utils.py. Studying that code may be your best option. I see that assert_raises passes the task on to nose.tools.assert_raises(*args,**kwargs). So it depends on what that does. And if I recall use of this in other modules correctly, you are usually more interested in t...
Q_Id 36,748,120 | Title: 1 producer, 1 consumer, only 1 piece of data to communicate, is queue an overkill? | CreationDate: 2016-04-20T15:13:00.000
Tags: python,python-3.x,pandas,multiprocessing,interprocess | Categories: Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 1 | ViewCount: 300 | Available Count: 1
Question: This question is related to Python Multiprocessing. I am asking for a suitable interprocess communication data-structure for my specific scenario: My scenario I have one producer and one consumer. The producer produces a single fairly small panda Dataframe every 10-ish secs, then the producer puts it on a python.multi...
Answer (A_Id 36,754,207, is_accepted: true, Score: 1.2, Users Score: 3): To me, the most important thing you mentioned is this: It is VERY CRITICAL that the consumer catches every single DataFrame the producer produces. So, let's suppose you used a variable to store the DataFrame. The producer would set it to the produced value, and the consumer would just read it. That would work very fi...
Q_Id 36,749,105 | Title: Pandas: datareader unable to get historical stock data | CreationDate: 2016-04-20T15:52:00.000
Tags: python-2.7,pandas,datareader,google-finance,pandas-datareader | Categories: Networking and APIs; Web Development; Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 850 | Available Count: 1
Question: I found that some of the stock exchanges is not supported for datareader. Example, Singapore. Any workaround? query = web.DataReader(("SGX:BLA"), 'google', start, now) return such error` IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=SGX%3ABLA&startdate=Jan+01%2C...
Answer (A_Id 36,783,492, is_accepted: false, Score: 0, Users Score: 0): That URL is a 404 - pandas isn't at fault, maybe just check the URL? Perhaps they're on different exchanges with different google finance support.
Q_Id 36,757,158 | Title: Can you use counts in sklearn logistic regression input? | CreationDate: 2016-04-20T23:47:00.000
Tags: python,scikit-learn,logistic-regression,bernoulli-probability | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 308 | Available Count: 1
Question: So, I know that in R you can provide data for a logistic regression in this form: model <- glm( cbind(count_1, count_0) ~ [features] ..., family = 'binomial' ) Is there a way to do something like cbind(count_1, count_0) with sklearn.linear_model.LogisticRegression? Or do I actually have to provide all those duplicate r...
Answer (A_Id 36,758,704, is_accepted: false, Score: 0, Users Score: 0): If they are categorical - you should provide binarized version of it. I don't know how that code in R works, but you should binarize your categorical feature always. Because you have to emphasize that each value of your feature is not related to other one, i.e. for feature "blood_type" with possible values 1,2,3,4 your...
Q_Id 36,759,037 | Title: tensorflow sequence to sequence without softmax | CreationDate: 2016-04-21T03:24:00.000
Tags: python,tensorflow | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 389 | Available Count: 1
Question: I was using Tensorflow sequence to sequence example code. for some reason, I don't want to add softmax to output. instead, I want to get the raw output of decoder without softmax. I was wondering if anyone know how to do it based on sequence to sequence example code? Or I need to create it from scratch or modify the th...
Answer (A_Id 36,870,610, is_accepted: true, Score: 1.2, Users Score: 1): The model_with_buckets() function in seq2seq.py returns 2 tensors: the output and the losses. The outputs variable contains the raw output of the decoder that you're looking for (that would normally be fed to the softmax).
Q_Id 36,779,522 | Title: remove known exact row in huge csv | CreationDate: 2016-04-21T20:12:00.000
Tags: python,r,csv | Categories: Data Science and Machine Learning | Q_Score: 6 | AnswerCount: 3 | ViewCount: 1,468 | Available Count: 1
Question: I have a ~220 million row, 7 column csv file. I need to remove row 2636759. This file is 7.7GB, more than will fit in memory. I'm most familiar with R, but could also do this in python or bash. I can't read or write this file in one operation. What is the best way to build this file incrementally on disk, instead of tr...
Answer (A_Id 36,780,531, is_accepted: false, Score: 0.132549, Users Score: 2): use sed '2636759d' file.csv > fixedfile.csv As a test for a 40,001 line 1.3G csv, removing line 40,000 this way takes 0m35.710s. The guts of the python solution from @en_Knight (just stripping the line and writing to a temp file) is ~ 2 seconds faster for this same file. edit OK sed (or some implementations) may not wo...
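A constant-memory Python equivalent of the sed command above, streaming line by line (the file names mirror the answer and are illustrative):

    # Copy every line except line 2,636,759 (1-based), never holding the file in memory.
    with open("file.csv") as src, open("fixedfile.csv", "w") as dst:
        for lineno, line in enumerate(src, start=1):
            if lineno != 2636759:
                dst.write(line)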
Q_Id 36,817,217 | Title: How does Support Vector Machine deal with confusing feature vectors? | CreationDate: 2016-04-23T22:32:00.000
Tags: python,machine-learning,svm,feature-extraction | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 62 | Available Count: 1
Question: Imagine I have the following feature vectors: Training vectors: Class 1: [ 3, 5, 4, 2, 0, 3, 2], [ 33, 50, 44, 22, 0, 33, 20] Class 2: [ 1, 2, 3, 1, 0, 0, 4], [ 11, 22, 33, 11, 0, 0, 44] Testing vectors: Class 1: [ 330, 550, 440, 220, 0, 330, 200] Class 2: [ 110, 220, 333, 111, ...
Answer (A_Id 36,822,368, is_accepted: true, Score: 1.2, Users Score: 3): Yes, it will affect the performance of the SVM. It seems your test vectors are just scaled versions of your training vectors. The SVM has no way of knowing that the scaling is irrelevant in your case (unless you present it alot of differently scaled training vectors) A common practice for feature vectors where the scal...
Q_Id 36,827,155 | Title: Python/Numpy fast way to selecting every nth chunk in list | CreationDate: 2016-04-24T18:24:00.000
Tags: python,arrays,list,numpy,slice | Categories: Data Science and Machine Learning | Q_Score: 3 | AnswerCount: 6 | ViewCount: 1,225 | Available Count: 1
Question: Edited for the confusion in the problem, thanks for the answers! My original problem was that I have a list [1,2,3,4,5,6,7,8], and I want to select every chunk of size x with gap of one. So if I want to select select every other chunk of size 2, the outcome would be [1,2,4,5,7,8]. A chunk size of three would give me [1...
Answer (A_Id 36,827,834, is_accepted: false, Score: 0, Users Score: 0): A simple list comprehension can do the job: [ L[i] for i in range(len(L)) if i%3 != 2 ] For chunks of size n [ L[i] for i in range(len(L)) if i%(n+1) != n ]
Q_Id 36,847,022 | Title: What numbers that I can put in numpy.random.seed()? | CreationDate: 2016-04-25T17:17:00.000
Tags: python,numpy-random | Categories: Data Science and Machine Learning | Q_Score: 18 | AnswerCount: 7 | ViewCount: 20,730 | Available Count: 2
Question: I have noticed that you can put various numbers inside of numpy.random.seed(), for example numpy.random.seed(1), numpy.random.seed(101). What do the different numbers mean? How do you choose the numbers?
Answer (A_Id 44,995,504, is_accepted: false, Score: 0.141893, Users Score: 5): What is normally called a random number sequence in reality is a "pseudo-random" number sequence because the values are computed using a deterministic algorithm and probability plays no real role. The "seed" is a starting point for the sequence and the guarantee is that if you start from the same seed you will get the ...
Answer (A_Id 60,006,676, is_accepted: false, Score: 0, Users Score: 0): One very specific answer: np.random.seed can take values from 0 and 2**32 - 1, which interestingly differs from random.seed which can take any hashable object.
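A sketch of the one guarantee both answers describe: reseeding with the same value replays the same sequence:

    import numpy as np

    np.random.seed(101)
    a = np.random.rand(3)

    np.random.seed(101)      # reseeding with the same value...
    b = np.random.rand(3)    # ...reproduces the identical draws

    assert (a == b).all()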
Q_Id 36,856,532 | Title: OpenCV to recognize image using python | CreationDate: 2016-04-26T06:09:00.000
Tags: python,opencv | Categories: Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 3 | ViewCount: 1,017 | Available Count: 1
Question: I am new to OpenCV. I am using OpenCV 3.1 and python 2.7. I have 5 images of bikes and 5 images of cars. I want to find out given any image is it a car or a bike . On the internet I found out that using haar cascade we can train, but most of the examples contain only one trained data means, the user will train only car...
Answer (A_Id 36,857,440, is_accepted: true, Score: 1.2, Users Score: 1): Regarding your question about the haar cascades. You can use them to classify the images the way you want: Train two haar cascades, one for cars and one for bikes. Both cascades will return a value of how certain they are, that the image contains the object they were trained for. If both are uncertain, the image proba...
Q_Id 36,859,840 | Title: How to extract the frequencies associated with fft2 values in numpy? | CreationDate: 2016-04-26T08:49:00.000
Tags: python,numpy,fft | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 343 | Available Count: 1
Question: I know that for fft.fft, I can use fft.fftfreq. But there seems to be no such thing as fft.fftfreq2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way?
Answer (A_Id 36,976,133, is_accepted: true, Score: 1.2, Users Score: 0): Yes. Apply fftfreq to each spatial vector (x and y) separately. Then create a meshgrid from those frequency vectors. Note that you need to use fftshift if you want the typical representation (zero frequencies in center of spatial spectrum) to both the output and your new spatial frequencies (before using meshgrid).
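A sketch of the accepted answer's recipe for a 2-D grid of spatial frequencies (the array size is illustrative):

    import numpy as np

    img = np.random.default_rng(0).normal(size=(64, 128))
    spectrum = np.fft.fftshift(np.fft.fft2(img))

    # One fftfreq per axis, shifted to match the centered spectrum, then meshgrid
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    FX, FY = np.meshgrid(fx, fy)   # FX and FY have the same shape as spectrum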
Q_Id 36,864,116 | Title: sklearn's feature importances_ | CreationDate: 2016-04-26T11:49:00.000
Tags: python,scikit-learn | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 213 | Available Count: 1
Question: I am just curious on the interpretation of sklearn's feature_importances_ attribute. I know that the features with highests coefficients are the features that would highly predict the outcome. My question is - Are these the features strongly predictive to return a 1 (or yes) or not necessarily? (Supervised Learning - B...
Answer (A_Id 36,866,111, is_accepted: false, Score: 0, Users Score: 0): It means those words are "strongly associated" with one of the responses, in your case probably illegal(1). Depending on your classifier, the exact technical definition of strongly associated will vary. It could be the joint probability of the word and response, P(X='theft', Y='illegal'), or it could be the conditional...
Q_Id 36,864,863 | Title: Defining dtype of df.to_sparse() result | CreationDate: 2016-04-26T12:23:00.000
Tags: python,pandas | Categories: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | ViewCount: 141 | Available Count: 1
Question: I have a dataframe df which is sparse and for memory efficiency I wish to convert it using to_sparse() However it seems that the new representation ends up with the dtype=float64, even when my df is dtype=int8. Is there a way specify the data type/ prevent auto conversion to dtype=float64 when using to_sparse() ?
Answer (A_Id 36,866,111, is_accepted: false, Score: 0.099668, Users Score: 1): In short. No. You see, dtypes is not a pandas controlled entity. Dtypes is typically a numpy thing. Dtypes are not controllable in any way, they are automagically asserted by numpy and can only change when you change the data inside the dataframe or numpy array. That being said, the typical reason for ending up with a ...
Q_Id 36,867,924 | Title: How to use agglometative clustering with 1 dimensional array valueset? | CreationDate: 2016-04-26T14:29:00.000
Tags: python,arrays,scikit-learn | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 28 | Available Count: 1
Question: I have around 7k samples and 11 features which I concentrated into one. This concentrated value I call ResVal and is a weighted sum of previous features. Then I gathered these ResVals into 1D array. Now I want to cluster this results with AgglomerativeClustering but console complains about 1D array. How can I fix it an...
Answer (A_Id 36,892,239, is_accepted: true, Score: 1.2, Users Score: 1): Make the array into a column: use x[:, np.newaxis] instead of x
Q_Id 36,876,389 | Title: Using cluster analysis as alternative to point in polygon assignment | CreationDate: 2016-04-26T21:48:00.000
Tags: python,scikit-learn | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 152 | Available Count: 1
Question: I'm interested to approach the confirming point in polygon problem from another direction. I have a dataframe containing series of coordinates, known to be in certain polygon (administrative area). I have other dataframes with coordinates not assigned to any admin area. Would using SciKit offer an alternate means to as...
Answer (A_Id 36,876,832, is_accepted: true, Score: 1.2, Users Score: 1): Discriminant analisys (a.k.a. supervised classification) is the way to go. You adjust the model by using the coordinates of the points and the information on the node they belong to. As a result, you obtain a model you can use to predict the node for new points as they are known. Linear discriminant analysis is one of...
Q_Id 36,878,089 | Title: Python: Add a column to numpy 2d array | CreationDate: 2016-04-27T00:19:00.000
Tags: python,arrays,numpy | Categories: Data Science and Machine Learning | Q_Score: 24 | AnswerCount: 5 | ViewCount: 64,625 | Available Count: 1
Question: I have a 60000 by 200 numpy array. I want to make it 60000 by 201 by adding a column of 1's to the right. (so every row is [prev, 1]) Concatenate with axis = 1 doesn't work because it seems like concatenate requires all input arrays to have the same dimension. How should I do this? I can't find any existing useful answ...
Answer (A_Id 36,878,371, is_accepted: false, Score: 0.039979, Users Score: 1): I think the numpy method column_stack is more interesting because you do not need to create a column numpy array to stack it in the matrix of interest. With the column_stack you just need to create a normal numpy array.
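A sketch of the column_stack approach the answer recommends, with a small array standing in for the 60000 by 200 one:

    import numpy as np

    X = np.zeros((4, 3))                          # stand-in for the 60000 x 200 array
    X1 = np.column_stack((X, np.ones(len(X))))   # appends a column of 1's -> shape (4, 4)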
Q_Id 36,894,191 | Title: How to get a normal distribution within a range in numpy? | CreationDate: 2016-04-27T15:24:00.000
Tags: python,numpy,random,machine-learning,normal-distribution | Categories: Data Science and Machine Learning | Q_Score: 44 | AnswerCount: 4 | ViewCount: 43,386 | Available Count: 1
Question: In machine learning task. We should get a group of random w.r.t normal distribution with bound. We can get a normal distribution number with np.random.normal() but it does't offer any bound parameter. I want to know how to do that?
Answer (A_Id 60,298,484, is_accepted: false, Score: 0, Users Score: 0): You can subdivide your targeted range (by convention) to equal partitions and then calculate the integration of each and all area, then call uniform method on each partition according to the surface. It's implemented in python: quad_vec(eval('scipy.stats.norm.pdf'), 1, 4,points=[0.5,2.5,3,4],full_output=True)
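Besides the piecewise integration the answer sketches, SciPy ships a truncated normal directly; this is a different technique, sketched with an illustrative bound of two standard deviations:

    from scipy.stats import truncnorm

    mu, sigma, low, high = 0.0, 1.0, -2.0, 2.0
    a, b = (low - mu) / sigma, (high - mu) / sigma   # bounds in standard-deviation units
    samples = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=1000)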
Q_Id 36,894,358 | Title: What is a good way to extract dominant colors from image without the shadow? | CreationDate: 2016-04-27T15:31:00.000
Tags: python,opencv,image-processing,machine-learning,computer-vision | Categories: Data Science and Machine Learning | Q_Score: 5 | AnswerCount: 2 | ViewCount: 383 | Available Count: 1
Question: Is it possible to extract the 'true' color of building façade from a photo/ a set of similar photos and removing the distraction of shadow? Currently, I'm using K-means clustering to get the dominant colors, however, it extracts darker colors (if the building is red, then the 1st color would be dark red) as there are l...
Answer (A_Id 36,908,550, is_accepted: false, Score: 0.099668, Users Score: 1): If the shadows cover a significant part of the image then this problem is non-trivial. If the shadow is a small fraction of the area you're interested though you could try using k-medoids instead of k-means and as Piglet mentioned using a different color space with separate chromaticity and luminance channels may help.
Q_Id 36,901,311 | Title: obtaining the min value of time_diff for a Pandas Dataframe | CreationDate: 2016-04-27T21:32:00.000
Tags: python-2.7,pandas | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | ViewCount: 110 | Available Count: 1
Question: I have a Pandas Dataframe which has a field txn['time_diff'] Send_Agent Pay_Agent Send_Time Pay_Time score \ 0 AKC383903 AXX100000 2014-08-19 18:52:35 2015-05-01 22:08:39 1 1 AWA280699 AXX100000 2014-08-19 19:32:18 2015-05-01 17:12:32 1 2 ALI030170 ALI030170 2014-08...
Answer (A_Id 36,946,734, is_accepted: false, Score: 0.099668, Users Score: 1): Sorry for the garbled question. in order to do a groupby for a timedelta value the best way is to do a pd.numeric on the 'timedelta value' and once the results are obtained we can again do a pd.to_timedelta on it.
Q_Id 36,920,262 | Title: Cosine distance of vector to matrix | CreationDate: 2016-04-28T16:19:00.000
Tags: python,vectorization,cosine-similarity | Categories: Data Science and Machine Learning | Q_Score: 5 | AnswerCount: 5 | ViewCount: 3,904 | Available Count: 1
Question: In python, is there a vectorized efficient way to calculate the cosine distance of a sparse array u to a sparse matrix v, resulting in an array of elements [1, 2, ..., n] corresponding to cosine(u,v[0]), cosine(u,v[1]), ..., cosine(u, v[n])?
Answer (A_Id 65,979,818, is_accepted: false, Score: 0, Users Score: 0): Below worked for me, have to provide correct signature from scipy.spatial.distance import cosine def cosine_distances(embedding_matrix, extracted_embedding): return cosine(embedding_matrix, extracted_embedding) cosine_distances = np.vectorize(cosine_distances, signature='(m),(d)->()') cosine_distances(corpus_embed...
Q_Id 36,926,819 | Title: Can I get "inertia" for sklearn Birch clusters? | CreationDate: 2016-04-28T22:54:00.000
Tags: python,scikit-learn,cluster-analysis | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 886 | Available Count: 1
Question: Scikit-learn MiniBatchKMeans has an inertia field that can be used to see how tight clusters are. Does the Birch clustering algorithm have an equivalent? There does not seem to be in the documentation. If there is no built in way to check this measurement, does it make sense to find the average euclidian distance for e...
Answer (A_Id 36,930,521, is_accepted: false, Score: 0.197375, Users Score: 1): Nothing is free, and you don't want algorithms to perform unnecessary computations. Inertia is only sensible for k-means (and even then, do not compare different values of k), and it's simply the variance sum of the data. I.e. compute the mean of every cluster, then the squared deviations from it. Don't compute distanc...
Q_Id 36,927,432 | Title: Scipy zoom with complex values | CreationDate: 2016-04-29T00:05:00.000
Tags: python,numpy,scipy,complex-numbers,zooming | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 215 | Available Count: 1
Question: I have a numpy array of values and I wanted to scale (zoom) it. With floats I was able to use scipy.ndimage.zoom but now my array contains complex values which are not supported by scipy.ndimage.zoom. My workaround was to separate the array into two parts (real and imaginary) and scale them independently. After that I ...
Answer (A_Id 36,928,073, is_accepted: false, Score: 0, Users Score: 0): This is not a good answer but it seems to work quite well. Instead of using the default parameters for the zoom method, I'm using order=0. I then proceed to deal with the real and imaginary part separately, as described in my question. This seems to reduce the artifacts although some smaller artifacts remain. It is by ...
Q_Id 36,943,283 | Title: scipy.test() results in errors | CreationDate: 2016-04-29T16:25:00.000
Tags: python-2.7,scipy | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 82 | Available Count: 1
Question: Having some problems with scipy. Installed latest version using pip (0.17.0). Run scipy.test() and I'm getting the following errors. Are they okay to ignore? I'm using python 2.7.6. Thanks for your help. ====================================================================== ERROR: test_add_function_ordered (test_cata...
Answer (A_Id 36,951,224, is_accepted: true, Score: 1.2, Users Score: 1): All these are in weave, which is not used anywhere else in scipy itself. So unless you're using weave directly, you're likely OK. And there is likely no reason to use weave in new code anyway.
Q_Id 36,959,589 | Title: Efficient Matrix-Vector Multiplication: Multithreading directly in Python vs. using ctypes to bind a multithreaded C function | CreationDate: 2016-04-30T19:59:00.000
Tags: python,c,multithreading,linear-algebra,hdf5 | Categories: Other; Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | ViewCount: 342 | Available Count: 1
Question: I have a simple problem: multiply a matrix by a vector. However, the implementation of the multiplication is complicated because the matrix is 18 gb (3000^2 by 500). Some info: The matrix is stored in HDF5 format. It's Matlab output. It's dense so no sparsity savings there. I have to do this matrix multiplication rou...
Answer (A_Id 36,959,985, is_accepted: true, Score: 1.2, Users Score: 3): Hardware As Sven Marnach wrote in the comments, your problem is most likely I/O bound since disk access is orders of magnitude slower than RAM access. So the fastest way is probably to have a machine with enough memory to keep the whole matrix multiplication and the result in RAM. It would save lots of time if you read...
Q_Id 37,001,538 | Title: How to interface Python with Qlikview for data visualization? | CreationDate: 2016-05-03T10:20:00.000
Tags: python,scikit-learn,tableau-api,qlikview | Categories: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | ViewCount: 8,577 | Available Count: 1
Question: I am using Scikit-Learn and Pandas libraries of Python for Data Analysis. How to interface Python with data visualization tools such as Qlikview?
Answer (A_Id 37,810,021, is_accepted: true, Score: 1.2, Users Score: 5): There's no straightforward route to calling Python from QlikView. I have used this: Create a Python program that outputs CSV (or any file format that QlikView can read) Invoke your Python program from the QlikView script: EXEC python3 my_program.py > my_output.csv Read the output into QlikView: LOAD * FROM my_output.c...
Q_Id 37,005,545 | Title: How to close pandas.scatter_matrix() figure | CreationDate: 2016-05-03T13:28:00.000
Tags: python,pandas,matplotlib | Categories: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | ViewCount: 521 | Available Count: 1
Question: I'm hitting MemoryError: In RendererAgg: Out of memory when I plot several pandas.scatter_matrix() figures. Normally I use: plt.close(fig) to close matplotlib figures, so that I release the memory used, but pandas.scatter_matrix() does not return a matplotlib figure, rather it returns the axes object. For example: i...
Answer (A_Id 37,005,546, is_accepted: false, Score: 0, Users Score: 0): After a bit of investigation, I realized that I could just use: plt.close() with no argument to close the current figure, or: plt.close('all') to close all of the opened figures.
Q_Id 37,056,608 | Title: How can I create an AI for tic tac toe in Python using ANN and genetic algorithm? | CreationDate: 2016-05-05T17:23:00.000
Tags: python,neural-network,artificial-intelligence,genetic-algorithm,tic-tac-toe | Categories: Data Science and Machine Learning | Q_Score: 6 | AnswerCount: 1 | ViewCount: 700 | Available Count: 1
Question: I'm very interested in the field of machine learning and recently I got the idea for a project for the next few weeks. Basically I want to create an AI that can beat every human at Tic Tac Toe. The algorithm must be scalable for every n*n board size, and maybe even for other dimensions (for a 3D analogue of the game, ...
Answer (A_Id 37,056,824, is_accepted: false, Score: 0.379949, Users Score: 2): Yes, this is possible. But you have to tell your AI the rules of the game, beforehand (well, that's debatable, but it's ostensibly better if you do so - it'll define your search space a little better). Now, the vanilla tic-tac-toe game is far too simple - a minmax search will more than suffice. Scaling up the dimension...
Q_Id 37,061,089 | Title: Trouble with TensorFlow in Jupyter Notebook | CreationDate: 2016-05-05T22:08:00.000
Tags: python,tensorflow,jupyter | Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 34 | AnswerCount: 14 | ViewCount: 87,389 | Available Count: 2
Question: I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. T...
Answer (A_Id 46,785,026, is_accepted: false, Score: 0.057081, Users Score: 4): Here is what I did to enable tensorflow in Anaconda -> Jupyter. Install Tensorflow using the instructions provided at Go to /Users/username/anaconda/env and ensure Tensorflow is installed Open the Anaconda navigator and go to "Environments" (located in the left navigation) Select "All" in teh first drop down and sea...
Answer (A_Id 67,094,115, is_accepted: false, Score: -0.014285, Users Score: -1): Open an Anaconda Prompt screen: (base) C:\Users\YOU>conda create -n tf tensorflow After the environment is created type: conda activate tf Prompt moves to (tf) environment, that is: (tf) C:\Users\YOU> then install Jupyter Notebook in this (tf) environment: conda install -c conda-forge jupyterlab - jupyter notebook St...
Q_Id 37,074,244 | Title: Run model in reverse in Keras | CreationDate: 2016-05-06T13:55:00.000
Tags: python,machine-learning,neural-network,keras | Categories: Data Science and Machine Learning | Q_Score: 8 | AnswerCount: 4 | ViewCount: 4,095 | Available Count: 2
Question: I'm currently playing around with the Keras framework. And have done some simple classification tests, etc. I'd like to find a way to run the network in reverse, using the outputs as inputs and vice versa. Any way to do this?
Answer (A_Id 37,104,201, is_accepted: true, Score: 1.2, Users Score: 3): There is no such thing as "running a neural net in reverse", as a generic architecture of neural net does not define any not-forward data processing. There is, however, a subclass of models which do - the generative models, which are not a part of keras right now. The only thing you can do is to create a network which ...
Answer (A_Id 51,939,867, is_accepted: false, Score: 0, Users Score: 0): What you are looking for, I think, is the "Auto-Associative" neural network. it has an input of n dimensions, several layers, one of which is the "middle layer" of m dimensions, and then several more layers leading to an output layer which has the same number of dimensions as the input layer, n. The key here is that m ...
0
37,092,686
0
0
0
0
1
true
0
2016-05-07T11:31:00.000
0
1
0
Method to combine multiple svm classifiers (or "any ML classifier" by using scikit-learn. "decision-feature classifiers"
37,087,996
1.2
python,machine-learning,computer-vision,scikit-learn
First of all - the idea of training separate models is rather bad. Unless you have very good reasons to do so (some external limitations you cannot ignore) you should not do so. Why? Because you are efficiently loosing information, you are unable to model complex dependencies between signals from two classifiers. Train...
I extract multiple Feature vectors from different sensors, and I trained these features by using SVM individually. My question is there any method to combine these classifiers in a way to obtain a better result. thanks in advance
0
1
662
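A hedged sketch of the "train one model on the joint feature space" advice, using scikit-learn (the sensor matrices and labels are placeholder assumptions):

    import numpy as np
    from sklearn.svm import SVC

    X_sensor1 = np.random.rand(200, 8)     # placeholder features from sensor 1
    X_sensor2 = np.random.rand(200, 5)     # placeholder features from sensor 2
    y = np.random.randint(0, 2, 200)       # placeholder labels

    X = np.hstack([X_sensor1, X_sensor2])  # one joint feature space
    clf = SVC(kernel='rbf')
    clf.fit(X, y)                          # a single model sees both signals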
0
37,092,622
0
0
0
0
1
true
1
2016-05-07T17:27:00.000
3
1
0
Additive Smoothing for Dataframe Pandas
37,091,587
1.2
python,pandas,machine-learning,smoothing,naivebayes
Additive smoothing is just a basic mathematical operation, requiring only a few additions and a division - there is no "special" function for that; you simply write a one-liner operating on the particular columns of your dataframe.
I have a large dataframe in Pandas with lots of zeros. I want to apply additive smoothing but instead of writing it from scratch, I am wondering if there is any better way of producing a "smoothed" dataframe in Pandas. Thanks!
0
1
1,278
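The one-liner might look like this (a sketch; the smoothing constant alpha and the per-column normalisation are assumptions about how the counts are laid out):

    import pandas as pd

    df = pd.DataFrame({'a': [3, 0, 1], 'b': [0, 2, 0]})   # toy count data
    alpha = 1.0                                            # smoothing constant
    # additive (Laplace) smoothing per column: (count + alpha) / (N + alpha * K)
    smoothed = (df + alpha) / (df.sum() + alpha * len(df))
    print(smoothed)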
0
37,101,713
0
0
0
0
1
true
21
2016-05-08T14:49:00.000
39
4
0
What to download in order to make nltk.tokenize.word_tokenize work?
37,101,114
1.2
python,nltk
You are right. You need the Punkt Tokenizer Models. They are about 13 MB, and nltk.download('punkt') should do the trick.
I am going to use nltk.tokenize.word_tokenize on a cluster where my account is very limited by space quota. At home, I downloaded all nltk resources by nltk.download() but, as I found out, it takes ~2.5GB. This seems a bit overkill to me. Could you suggest what are the minimal (or almost minimal) dependencies for nltk....
0
1
57,023
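For reference, a minimal snippet using the standard NLTK calls:

    import nltk
    nltk.download('punkt')                        # the ~13 MB Punkt models only

    from nltk.tokenize import word_tokenize
    print(word_tokenize("This is a test sentence."))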
0
37,102,874
0
0
0
0
1
false
0
2016-05-08T15:12:00.000
0
1
0
predicting new non-standardized data with classifier trained on standardized data
37,101,361
0
python,scikit-learn
To solve this problem you should use a pipeline. The first stage there is scaling, and the second one is your model. Then you can pickle the whole pipeline and have fun with your new data.
I have some data with say, L features. I have standardized them using StandardScaler() by doing a fit_transform on X_train. Now while predicting, i did clf.predict(scaler.transform(X_test)). So far so good... now if I want to pickle the model for later reuse, how would I go about predicting on the new data in future wi...
0
1
51
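A minimal sketch of the suggested pipeline (the SVC classifier is an illustrative choice, and X_train, y_train, X_new are assumed to be your arrays):

    import pickle
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    pipe = Pipeline([('scale', StandardScaler()), ('clf', SVC())])
    pipe.fit(X_train, y_train)            # the scaler is fit inside the pipeline

    with open('model.pkl', 'wb') as f:
        pickle.dump(pipe, f)              # scaler and model travel together

    # later, on raw (unscaled) new data:
    with open('model.pkl', 'rb') as f:
        pipe = pickle.load(f)
    predictions = pipe.predict(X_new)     # scaling is applied automatically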
0
51,513,887
0
0
0
0
1
false
95
2016-05-09T03:04:00.000
4
10
0
How to add regularizations in TensorFlow?
37,107,223
0.07983
python,neural-network,tensorflow,deep-learning
I tested tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) and tf.losses.get_regularization_loss() with one l2_regularizer in the graph, and found that they return the same value. Judging by the magnitude of that value, I suspect reg_constant has already been applied to it via the parameter of tf.contrib.layer...
I found in many available neural network code implemented using TensorFlow that regularization terms are often implemented by manually adding an additional term to loss value. My questions are: Is there a more elegant or recommended way of regularization than doing it manually? I also find that get_variable has an arg...
0
1
80,148
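A sketch of the collection-based pattern discussed here, using the TF 1.x-era API (the variable shape, the 0.01 constant, and data_loss are illustrative assumptions):

    import tensorflow as tf

    # attach an L2 regularizer when the variable is created
    w = tf.get_variable('w', shape=[10, 1],
                        regularizer=tf.contrib.layers.l2_regularizer(0.01))

    # the penalty terms accumulate in a collection; add them to the data loss
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    total_loss = data_loss + tf.add_n(reg_losses)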
0
37,126,992
0
0
0
0
1
false
0
2016-05-09T22:41:00.000
0
1
0
Need to append bias term when using `sklearn` models?
37,126,576
0
python,machine-learning,scikit-learn
No, you do not add any biases; models define biases in their own way. What you learned during the course is a generic, although not perfect, solution. It matters for models such as SVM, which should never have "1"s appended, as then this bias would get regularized, which is simply wrong for SVMs. Thus, while this is nice...
In my machine learning class, we have learned about appending a 1 to each sample's feature vector when using many machine learning models to account for bias. For example, if we are doing linear regression and a sample has features f_1, f_2, ..., f_d, we need to add a "fake" feature value of 1 to allow for the regressi...
0
1
1,556
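For instance, scikit-learn's linear models handle the bias internally via fit_intercept, so no column of ones is appended (toy data below is illustrative):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.random.rand(50, 3)                       # features, no column of 1s
    y = X.dot(np.array([1.0, 2.0, 3.0])) + 0.5      # true bias of 0.5

    model = LinearRegression(fit_intercept=True)    # the default
    model.fit(X, y)
    print(model.intercept_)                         # ~0.5, learned internally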
0
37,155,400
0
0
0
0
1
true
6
2016-05-10T03:40:00.000
1
2
0
Keras, best way to save state when optimizing
37,128,886
1.2
python,keras
You could create a tar archive containing the weights and the architecture, as well as a pickle file containing the optimizer state returned by model.optimizer.get_state().
I was just wondering what is the best way to save the state of a model while it it optimizing. I want to do this so I can run it for a while, save it, and come back to it some time later. I know there is a function to save the weights and another function to save the model as JSON. During learning I would need to sa...
0
1
4,558
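A hedged sketch of the tar/pickle approach (`model` is your compiled model, an assumption; model.optimizer.get_state() existed in old Keras versions, so treat this as era-specific):

    import pickle
    import tarfile

    model.save_weights('weights.h5', overwrite=True)
    with open('architecture.json', 'w') as f:
        f.write(model.to_json())
    with open('optimizer_state.pkl', 'wb') as f:
        pickle.dump(model.optimizer.get_state(), f)   # old-Keras API

    with tarfile.open('checkpoint.tar', 'w') as tar:
        for name in ('weights.h5', 'architecture.json', 'optimizer_state.pkl'):
            tar.add(name)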
0
37,142,380
0
0
0
0
1
false
0
2016-05-10T12:36:00.000
0
1
0
Manually update ratings in recomender system
37,138,777
0
python,machine-learning,recommendation-engine,rating,collaborative-filtering
Based on your comment above, I would manipulate the Number of times they purchased the product field. You need to basically transform the Number of times they purchased the product field into an implicit rating field. I would maybe scale the product rating system to 1-5. If they press the don't like the product butt...
I developed a recommender system using Matrix Factorization in Python. The ratings are in the range [1-5]. It works very well. This system is made for client advisors rather than clients themselves. Hence, the system recommends some products to the client advisor and then this one decides which products he's gonna reco...
0
1
62
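One hedged way to turn purchase counts into a 1-5 implicit rating, per the suggestion above (column names and the toy data are assumptions):

    import pandas as pd

    df = pd.DataFrame({'purchases': [0, 1, 3, 10, 25]})   # toy counts
    p = df['purchases']
    df['rating'] = 1 + 4 * (p - p.min()) / (p.max() - p.min())  # scale to [1, 5]
    # a "don't like" click could then clamp the implicit rating down, e.g.
    # df.loc[disliked_index, 'rating'] = 1.0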
0
37,176,185
0
0
0
0
1
false
0
2016-05-11T03:42:00.000
2
4
0
How to auto-discover a lagging of time-series data in scikit-learn and classify using time-series data
37,152,723
0.099668
python,scikit-learn,time-series,quantitative-finance
No, there is not a way, in Python, using scikit-learn, to automatically lag all of these time-series to find which time-series (if any) tend to lag other data. You'll have to write some code.
I currently have a giant time-series array with times-series data of multiple securities and economic statistics. I've already written a function to classify the data, using sci-kit learn, but the function only uses non-lagged time-series data. Is there a way, in Python, using sci-kit, to automatically lag all of these...
0
1
2,999
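The hand-written part could be as simple as building lagged copies with pandas (the lag range 1..5 and the toy columns are arbitrary choices):

    import pandas as pd
    import numpy as np

    df = pd.DataFrame(np.random.rand(100, 3),
                      columns=['sec_a', 'sec_b', 'stat_c'])   # toy series

    lags = {lag: df.shift(lag) for lag in range(1, 6)}        # lags 1..5
    lagged = pd.concat(lags, axis=1).dropna()                 # feature matrix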
0
37,247,216
0
0
0
0
1
false
0
2016-05-11T18:32:00.000
2
1
0
How to rotate images in Caffe on-the-fly for training set augmentation?
37,170,740
0.379949
python,deep-learning,caffe
You can make use of a Python Layer to do this. The usage of a Python Layer is demonstrated in caffe_master/examples/py_caffe/. Here you could make use of a python script as the input layer to your network. You could describe the behavior of the rotations in this layer.
I know that there is a "mirror" parameter in the default data layer, but is there a way to do arbitrary rotations (really, I would just like to do multiples of 90 degrees), preferably in Python?
0
1
792
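A hedged sketch of such a Python layer that applies a random multiple-of-90-degree rotation (the skeleton follows Caffe's Python-layer interface; untested, and it assumes square spatial dimensions so the blob shape is unchanged):

    import random
    import numpy as np
    import caffe

    class RandomRot90Layer(caffe.Layer):
        """Rotates each image in the batch by a random multiple of 90 degrees."""

        def setup(self, bottom, top):
            pass

        def reshape(self, bottom, top):
            top[0].reshape(*bottom[0].data.shape)

        def forward(self, bottom, top):
            for i in range(bottom[0].data.shape[0]):
                k = random.randint(0, 3)      # 0, 90, 180 or 270 degrees
                # rotate over the spatial axes of an NCHW blob;
                # the `axes` argument needs NumPy >= 1.12
                top[0].data[i] = np.rot90(bottom[0].data[i], k, axes=(1, 2))

        def backward(self, top, propagate_down, bottom):
            pass                              # augmentation only, no gradient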
0
39,967,880
0
0
0
0
1
true
0
2016-05-11T23:20:00.000
1
1
0
module object has no attribute 'fblas' error when running theano.test() in Canopy Python
37,174,808
1.2
python-2.7,theano,enthought,canopy
That's possibly because you installed an old version of the Theano package. Try upgrading it or install the newest version with pip install theano.
I could not get Theano running in my system in Enthought canopy Python. When I give import theano and test run, I get the following error. import blas File "/Users/rajesh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site- packages/theano/tensor/blas.py", line 135, in numpy.dtype('float32'):scipy.linalg....
0
1
342
0
37,185,292
0
0
0
0
1
false
33
2016-05-12T10:44:00.000
4
3
0
Find out if/which BLAS library is used by Numpy
37,184,618
0.26052
python,c++,macos,numpy,blas
numpy.show_config() just says that the info is not available on my Debian Linux. However /usr/lib/python3/dist-packages/scipy/lib has a subdirectory for blas which may tell you what you want. There are a couple of test programs for BLAS in the subdirectory tests. Hope this helps.
I use numpy and scipy in different environments (MacOS, Ubuntu, RedHat). Usually I install numpy by using the package manager that is available (e.g., mac ports, apt, yum). However, if you don't compile Numpy manually, how can you be sure that it uses a BLAS library? Using mac ports, ATLAS is installed as a dependency....
0
1
28,680
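For reference, the check mentioned above:

    import numpy as np
    np.show_config()    # prints the BLAS/LAPACK libraries numpy was built with
    # On Linux you can also run `ldd` on numpy's compiled core module
    # to see which shared BLAS library is actually linked in.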
0
37,212,468
0
0
0
0
1
true
3
2016-05-13T10:25:00.000
2
2
0
Smart algorithm for finding the divisors of a binomial coefficient
37,207,589
1.2
algorithm,python-3.x,discrete-mathematics,binomial-coefficients
First you could start with the fact that C(n,k) = (n/k) C(n-1,k-1). You can prove that C(n,k) is divisible by n/gcd(n,k). If n is prime then n divides C(n,k). Check Kummer's theorem: if p is a prime number, n a positive number, and k a positive number with 0 < k < n, then the greatest exponent r for which p^r divides ...
I'm interested in tips for my algorithm that I use to find out the divisors of a very large number, more specifically "n over k" or C(n, k). The number itself can range very high, so it really needs to take time complexity into the 'equation' so to say. The formula for n over k is n! / (k!(n-k)!) and I understand that...
0
1
846
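A sketch combining these facts: the exponent of a prime p in C(n, k) can be computed with Legendre's formula (exponent of p in n! minus those in k! and (n-k)!), which is equivalent to Kummer's carry count:

    def legendre(n, p):
        """Exponent of prime p in n! (Legendre's formula)."""
        e = 0
        while n:
            n //= p
            e += n
        return e

    def prime_exponent_in_binom(n, k, p):
        """Greatest r such that p**r divides C(n, k)."""
        return legendre(n, p) - legendre(k, p) - legendre(n - k, p)

    # Example: C(10, 4) = 210 = 2 * 3 * 5 * 7, so the exponent of 2 is 1
    print(prime_exponent_in_binom(10, 4, 2))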
0
37,321,401
0
1
0
0
1
false
4
2016-05-14T00:24:00.000
8
1
0
Using Python multiprocessing on an HPC cluster
37,221,133
1
python-3.x,multiprocessing,distributed-computing,hpc
Unfortunately I wasn't able to find an answer in the community. However, through experimentation, I was able to better isolate the problem and find a workable solution. The problem arises from the nature of Python's multiprocessing implementation. When a Pool object is created (i.e. the manager class that controls the ...
I am running a Python script on a Windows HPC cluster. A function in the script uses starmap from the multiprocessing package to parallelize a certain computationally intensive process. When I run the script on a single non-cluster machine, I obtain the expected speed boost. When I log into a node and run the script lo...
0
1
1,975
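A minimal pattern that avoids re-spawning issues when workers import the script (standard multiprocessing practice; the worker function is illustrative):

    from multiprocessing import Pool

    def work(a, b):
        return a * b                      # placeholder for the heavy computation

    if __name__ == '__main__':            # crucial: only the parent builds the Pool
        with Pool() as pool:
            results = pool.starmap(work, [(1, 2), (3, 4), (5, 6)])
        print(results)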
0
37,228,094
0
0
0
0
1
false
0
2016-05-14T14:37:00.000
0
1
0
Proper way of loading large amounts of image data
37,227,938
0
python,deep-learning
Why not add a preprocessing step, where you would either (a) physically move the images to folders associated with their bucket and/or rename them, or (b) first scan through all images (headers only) to build an in-memory table of image filenames and their sizes/buckets; then the random sampling step would be quite simp...
For a Deep Learning application I am building, I have a dataset of about 50k grayscale images, ranging from about 300*2k to 300*10k pixels. Loading all this data into memory is not possible, so I am looking for a proper way to handle reading in random batches of data. One extra complication with this is, I need to know...
0
1
811
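A hedged sketch of the header-scan step: PIL reads image dimensions lazily, so building the size/bucket table is cheap (the 'images' directory name is an assumption):

    import os
    from PIL import Image

    table = []
    for name in os.listdir('images'):           # assumed image directory
        path = os.path.join('images', name)
        with Image.open(path) as im:             # reads the header only
            w, h = im.size
        table.append((name, w, h))               # bucket by (w, h) later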
0
37,234,653
0
0
0
0
1
true
0
2016-05-15T02:59:00.000
1
1
0
How to do gradient descent for not all variables in tensorflow
37,234,114
1.2
python-2.7,tensorflow
To lock the ones that you don't want to train you can use tf.Variable(..., trainable=False)
In tensorflow, tf.train.GradientDescentOptimizer does gradient descent for all variables in default. Can i just do gradient descent for only a few of my variables and 'lock' the others?
0
1
206
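Two equivalent sketches in the TF 1.x API (the names frozen, loss, w1, w2 are illustrative assumptions):

    import tensorflow as tf

    # Option 1: mark the variable as not trainable at creation time
    frozen = tf.Variable(tf.zeros([10]), trainable=False)

    # Option 2: pass an explicit var_list to the optimizer
    opt = tf.train.GradientDescentOptimizer(0.01)
    train_op = opt.minimize(loss, var_list=[w1, w2])  # only w1, w2 get updated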
0
40,741,488
0
1
0
0
1
false
1
2016-05-15T09:36:00.000
0
1
0
Use GPU installation of tensorflow/cuda in spyder under ubuntu 14.04
37,236,677
0
python,anaconda,tensorflow,spyder
To solve your issue you have 3 options here: (1) just start Spyder from a terminal; (2) move the PATH variable definition from .bash_profile to the session init scripts; (3) duplicate your PATH in Spyder's run configuration.
I am running ubuntu 14.04 with an anaconda2 installation and would like to use tensorflow in combination with CUDA. So far the steps I performed are: Installed CUDA 7.5 and cudnn Installed tensorflow (GPU version) through a DEB package. Note that I don't want to use the conda package of tensorflow since that one is no...
0
1
1,863
0
37,246,380
0
0
0
0
2
false
4
2016-05-16T02:22:00.000
0
3
0
How to calculate even distance along an interpolated path (Python2.7)?
37,245,832
0
python,path,geometry
I'm sure there's an elegant way to do this with pandas, but until then, here's a simple idea if you can live with some error. You can do this a few different ways but here's the gist of it: Treat each tuple as a node in a linked list. Define the desired length, D, between each point. As you move through the list, if th...
I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities). My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guara...
0
1
159
0
37,246,950
0
0
0
0
2
true
4
2016-05-16T02:22:00.000
1
3
0
How to calculate even distance along an interpolated path (Python2.7)?
37,245,832
1.2
python,path,geometry
If you use Numpy arrays to represent your data then you can vectorise the computation. That's as efficient as you're going to get.
I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities). My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guara...
0
1
159
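A vectorised NumPy sketch of that idea: compute the cumulative arc length, then interpolate x and y at evenly spaced distances (the toy path and sample count are assumptions):

    import numpy as np

    path = [(0, 0), (1, 0), (1, 2), (3, 2)]          # toy (x, y) samples
    pts = np.asarray(path, dtype=float)

    seg = np.diff(pts, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])

    even = np.linspace(0, dist[-1], num=50)          # 50 evenly spaced distances
    x = np.interp(even, dist, pts[:, 0])
    y = np.interp(even, dist, pts[:, 1])             # resampled path points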
0
37,246,905
0
1
0
1
1
false
1
2016-05-16T03:38:00.000
1
3
0
Python, computationally efficient data storage methods
37,246,342
0.066568
python,sql,arrays,mongodb,database
Have you considered HDF5? It's very efficient for numerical data, and is supported by both Python and Matlab.
I am retrieving structured numerical data (float 2-3 decimal spaces) via http requests from a server. The data comes in as sets of numbers which are then converted into an array/list. I want to then store each set of data locally on my computer so that I can further operate on it. Since there are very many of these da...
0
1
989
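A minimal h5py sketch of the suggested storage (the dataset name and toy data are illustrative):

    import h5py
    import numpy as np

    data = np.random.rand(1000, 4).round(3)          # toy floats, 3 decimals

    with h5py.File('store.h5', 'w') as f:
        f.create_dataset('series/run1', data=data, compression='gzip')

    with h5py.File('store.h5', 'r') as f:
        arr = f['series/run1'][:]                    # back as a numpy array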
0
37,435,858
0
0
0
0
1
false
3
2016-05-16T13:22:00.000
1
1
0
Theano : MissingGxx , g++ not available
37,255,003
0.197375
python,iis,gcc,g++,theano
I've solved the problem. I had two g++ executables in my WinPython environment at the following paths: WinPythonDir\python-2.7.10.amd64\Scripts\g++.exe WinPythonDir\python-2.7.10.amd64\share\mingwpy\bin\g++.exe Spyder used the correct one (2) and IIS seems to use the one mentioned in (1). I explicitly added the path to (2) in my...
I've been trying to deploy my Python Flask App that uses Conv nets using Theano on local IIS. When I try to load a pickled Neural Network , I get following Errors Unable to Create compiledir. I solved this by changing compiledir path in configdefaults.py and giving read/write rights to IIS on that directory. Now comp...
0
1
1,208
0
37,767,201
0
1
0
0
1
false
0
2016-05-16T16:38:00.000
2
2
0
How slicing and ellipsis works in numpy?
37,258,911
0.197375
python,numpy,slice,ellipsis
arr[:,:,1] is NumPy's multidimensional slicing: it selects index 1 along the last axis of arr (the second element, since indexing starts at 0). This kind of indexing can only be used on NumPy arrays and not on Python's traditional lists. Also, as pointed out in the comments, a[,:,:,] is a syntax error. It is helpful because you can select columns easily
I have been reading a very old documentation of Numpy and found out a weird notation which eludes my understanding. The documentation says a[i:...] is a shortcut for a[i,:,:,:]. The documentation being old is very vague and I would welcome any comments. Thanks, Prerit
0
1
1,494
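A small demonstration of the ellipsis shortcut (shapes are arbitrary):

    import numpy as np

    a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
    i = 1
    print(np.array_equal(a[i, ...], a[i, :, :, :]))  # True: ... fills the rest
    print(a[..., 1].shape)                           # (2, 3, 4): index 1 on last axis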
0
37,280,915
0
0
0
0
1
false
3
2016-05-17T13:33:00.000
1
4
0
Dynamic Programming: Mission Per Day, Scheduling for Maximum Profit
37,277,713
0.049958
python,algorithm,dynamic-programming
Dynamically build a table of dimensions 6 x n. The entry table[w_i][d_j] will denote the maximal reachable value when Bob has worked for the last i days consecutively (including today) and it is day j. The first column is easy to fill in: table[1][0] = x_0 if Bob decides to work on the first day, all other values are 0...
The problem: There is a set of n days that Bob is planning to work, and on each day i there is a mission; each mission lasts exactly one day, must be done on day i in which it is given, and pays bob x_i dollars. Bob cannot work more than 5 consecutive missions at a time. That is, he must take at least 1 rest day every ...
0
1
766
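A hedged sketch of this DP in Python (the indexing convention, where row i means "day j ends a streak of i worked days" and row 0 means a rest day, is one possible choice):

    def max_pay(x):
        """x[j] = pay on day j; at most 5 consecutive working days."""
        n = len(x)
        NEG = float('-inf')
        table = [[NEG] * n for _ in range(6)]
        table[0][0] = 0          # rest on the first day
        table[1][0] = x[0]       # work on the first day
        for j in range(1, n):
            # rest today: best of any streak length yesterday
            table[0][j] = max(table[i][j - 1] for i in range(6))
            # work today after a rest yesterday
            table[1][j] = table[0][j - 1] + x[j]
            # extend yesterday's streak of i-1 days to i days
            for i in range(2, 6):
                if table[i - 1][j - 1] > NEG:
                    table[i][j] = table[i - 1][j - 1] + x[j]
        return max(table[i][n - 1] for i in range(6))

    print(max_pay([5, 5, 5, 5, 5, 5, 5]))   # must rest one of the 7 days -> 30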