Dataset schema (22 columns per record, listed in the order the flattened values appear below):

  GUI and Desktop Applications       int64    0 - 1
  A_Id                               int64    5.3k - 72.5M
  Networking and APIs                int64    0 - 1
  Python Basics and Environment      int64    0 - 1
  Other                              int64    0 - 1
  Database and SQL                   int64    0 - 1
  Available Count                    int64    1 - 13
  is_accepted                        bool     2 classes
  Q_Score                            int64    0 - 1.72k
  CreationDate                       string   lengths 23 - 23
  Users Score                        int64    -11 - 327
  AnswerCount                        int64    1 - 31
  System Administration and DevOps   int64    0 - 1
  Title                              string   lengths 15 - 149
  Q_Id                               int64    5.14k - 60M
  Score                              float64  -1 - 1.2
  Tags                               string   lengths 6 - 90
  Answer                             string   lengths 18 - 5.54k
  Question                           string   lengths 49 - 9.42k
  Web Development                    int64    0 - 1
  Data Science and Machine Learning  int64    1 - 1
  ViewCount                          int64    7 - 3.27M

The topic columns (GUI and Desktop Applications, Networking and APIs, Python Basics and Environment, Other, Database and SQL, System Administration and DevOps, Web Development, Data Science and Machine Learning) are 0/1 flags; per the schema range, Data Science and Machine Learning is 1 for every record in this slice.
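Since the records below are flattened to one value per line in the same column order as the schema above, a record can be rebuilt by zipping the column names with each 22-value slice. A minimal sketch, assuming that column order holds for every record; the sample values are taken from the first record below, with the long Question/Answer strings elided:

```python
# Column order in which the flattened dump emits values for each record.
COLUMNS = [
    "GUI and Desktop Applications", "A_Id", "Networking and APIs",
    "Python Basics and Environment", "Other", "Database and SQL",
    "Available Count", "is_accepted", "Q_Score", "CreationDate",
    "Users Score", "AnswerCount", "System Administration and DevOps",
    "Title", "Q_Id", "Score", "Tags", "Answer", "Question",
    "Web Development", "Data Science and Machine Learning", "ViewCount",
]

def parse_record(values):
    """Turn one 22-value slice of the flat dump into a labeled record."""
    assert len(values) == len(COLUMNS), "expected exactly 22 values"
    return dict(zip(COLUMNS, values))

# First record of this dump (Question/Answer elided as "...").
flat = [0, 53186754, 0, 0, 0, 0, 2, False, 0, "2018-11-07T02:17:00.000",
        0, 2, 0, "Pit in LSTM programming by python", 53182773, 0.0,
        "python-3.x,tensorflow,keras,lstm,rnn", "...", "...", 0, 1, 74]
record = parse_record(flat)
print(record["Title"], record["ViewCount"])
```

The same zip can be applied repeatedly over the value stream (22 lines at a time) to recover the whole table.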

Record 1
Title: Pit in LSTM programming by python
Q_Id: 53,182,773 | A_Id: 53,186,754 | CreationDate: 2018-11-07T02:17:00.000
Tags: python-3.x,tensorflow,keras,lstm,rnn
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 74
Question: As we all Know, if we want to train a LSTM network, we must reshape the train dataset by the function numpy.reshape(), and reshaping result is like [samples,time_steps,features]. However, the new shape is influenced by the original one. I have seen some blogs teaching LSTM programming taking 1 as time_steps, and if tim...
Answer: No. Samples are not equal to batch size. Samples means the number of rows in your data-set. Your training data-set is divided into number of batches and pass it to the network to train. In simple words, Imagine your data-set has 30 samples, and you define your batch_size as 3. That means the 30 samples divided into 1...

Record 2
Title: Tensorflow training, how to prevent training node deletion
Q_Id: 53,195,482 | A_Id: 53,204,495 | CreationDate: 2018-11-07T18:20:00.000
Tags: python,tensorflow,machine-learning
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 2 | ViewCount: 69
Question: I am using Tensorflow with python for object detection. I want to start training and leave it for a while and keep all training nodes (model-cpk). Standard Tensorflow training seems to delete nodes and only keep the last few nodes. How do I prevent that? Please excuse me if this is the wrong place to ask such questions...
Answer: You can use the keep_checkpoint_max flag to tf.estimator.RunConfig in model_main.py. You can set it to a very large number to practically save all checkpoints. You should be warned though that depending on the model size and saving frequency, it might fill up your disk (and therefore crash during training). You can cha...

Record 3
Title: Tensorflow training, how to prevent training node deletion
Q_Id: 53,195,482 | A_Id: 53,205,258 | CreationDate: 2018-11-07T18:20:00.000
Tags: python,tensorflow,machine-learning
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 69
Question: I am using Tensorflow with python for object detection. I want to start training and leave it for a while and keep all training nodes (model-cpk). Standard Tensorflow training seems to delete nodes and only keep the last few nodes. How do I prevent that? Please excuse me if this is the wrong place to ask such questions...
Answer: You can save modelcheckpoints as .hdf5 files are load them again when wanting to predict on test data. Hope that helps.

Record 4
Title: Can you post process results from Cloud ML's prediction output?
Q_Id: 53,196,467 | A_Id: 53,202,686 | CreationDate: 2018-11-07T19:29:00.000
Tags: python,tensorflow,google-cloud-ml
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 81
Question: I have a model for object detection (Faster RCNN from Tensorflow's Object Detection API) running on Google Cloud ML. I also have some code to filter the resulting bounding boxes based on size, aspect ratio etc. Is it possible to run this code as part of the prediction process so I don't need to run a separate process ...
Answer: I'll answer (1): we have an Alpha API that will permit this. Please contact cloudml-feedback@google.com for more information.

Record 5
Title: scipy stats distributions documentation
Q_Id: 53,198,927 | A_Id: 53,199,041 | CreationDate: 2018-11-07T22:35:00.000
Tags: python,scipy
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 70
Question: I'm trying to track down the docs for the various distributions in scipy.stats. It is easy enough to google around for them, but I like to use the built-in help function for kicks sometimes. Through a series of help calls, can find that scipy has a stats module and that scipy.stats has a binom distribution. However,...
Answer: If you use ipython then I believe scipy.stats.binom? achieves this.

Record 6
Title: Importing tensorflow makes python 3.6.5 error
Q_Id: 53,199,675 | A_Id: 53,286,280 | CreationDate: 2018-11-07T23:57:00.000
Tags: python,python-3.x,tensorflow
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 12 | Users Score: 9 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,939
Question: Tensorflow used to work on my computer. But now when I try to import tensorflow python itself errors out. I am not given a traceback call to tell me what the error is. I get a window's prompt that says "Python has stopped working". When I click "debug" all I get is "An unhandled win32 exception occurred in python.exe"...
Answer: I have solved the issue. The following procedure was used to find and fix the problem: I used the faulthandler module to force python to print out a stack trace and recieved a Windows fatal exception: access violation error which seems to suggest the problem was indeed a segfault caused by some module used by tensorflo...

Record 7
Title: nltk bags of words showing emotions
Q_Id: 53,200,934 | A_Id: 53,211,031 | CreationDate: 2018-11-08T02:59:00.000
Tags: python,nlp,nltk
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 822
Question: i am working on NLP using python and nltk. I was wondering whether is there any dataset which have bags of words which shows keywords relating to emotions such as happy, joy, anger, sadness and etc from what i dug up in the nltk corpus, i see there are some sentiment analysis corpus which contain positive and negative...
Answer: I'm not aware of any dataset that associates sentiments to keywords, but you can easily built one starting from a generic sentiment analysis dataset. 1) Clean the datasets from the stopwords and all the terms that you don't want to associate to a sentiment. 2)Compute the count of each words in the two sentiment classe...

Record 8
Title: How to save a text file to a .mat file?
Q_Id: 53,203,507 | A_Id: 53,203,802 | CreationDate: 2018-11-08T08:00:00.000
Tags: python-2.7,matlab,text-files,mat-file
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: -3 | Score: -0.197375 | is_accepted: false | AnswerCount: 3 | Available Count: 1 | ViewCount: 880
Question: How do I save a '.txt' file as a '.mat' file, using either MATLAB or Python? I tried using textscan() (in MATLAB), and scipy.io.savemat() (in Python). Both didn't help. My text file is of the format: value1,value2,value3,valu4 (each row) and has over 1000 rows. Appreciate any help is appreciated. Thanks in advance.
Answer: if what you need is to change file format: mv example.mat example.txt

Record 9
Title: relation in between a categorical dependent variable and combination of independent variables
Q_Id: 53,204,674 | A_Id: 53,205,313 | CreationDate: 2018-11-08T09:18:00.000
Tags: python,machine-learning,statistics,analysis
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 97
Question: I am looking for a technique which could help us find the relation in between a categorical dependent variable and combination of independent variables (Y ~ X1*X2+X2*X3+X3*X4), here among X1 to X4 we have few categorical columns and few continuous columns. I am working on a classification problem and I want to check w...
Answer: I am not sure if I correctly understand your question, but from what I understand: You can try to convert your continuous columns to buckets, which means effectively converting them as categorical as well and then find correlation between them.

Record 10
Title: pip install hypothesis[pandas] says hypothesis3.82.1 does not provide the extra 'pandas'
Q_Id: 53,221,061 | A_Id: 53,235,950 | CreationDate: 2018-11-09T06:50:00.000
Tags: python-hypothesis
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 2 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 432
Question: When I ran pip install hypothesis[pandas] I got the following: Collecting hypothesis[pandas] Using cached https://files.pythonhosted.org/packages/36/58/222aafec5064d12c2b6123c69e512933b1e82a55ce49015371089d216f89/hypothesis-3.82.1-py3-none-any.whl hypothesis 3.82.1 does not provide the extra 'pandas' pip install h...
Answer: fixed my problem with pip install hypothesis[all] and also realizing that hypothesis.extra tab completion only showed django, and that pandas and numpy extras seem to need to be imported explicitly.

Record 11
Title: Unable to import tensorflow, error for importing pywrap_tensorflow
Q_Id: 53,227,954 | A_Id: 53,377,498 | CreationDate: 2018-11-09T14:45:00.000
Tags: python,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,593
Question: I am trying to use Keras Sequential, however, my jupyter notebook is flooded with error as it's not able to import tensorflow in the backend (i think). Later I found that, its not with Keras, but I am not able to do 'import tensorflow as tf' as well. Any suggestions, please? I am using python 3.5.6 tensorflow 1.12 I di...
Answer: Well, I am answering my own question since the error seems to have multiple causes. I am not sure what was the cause, however, after downgrading python to 3.5 and installing tensorflow with pip (pip install tensorflow), resolved the issue. Note: I uninstalled everything before installing Anaconda again.

Record 12
Title: TypeError: non_max_suppression() got an unexpected keyword argument 'score_threshold'
Q_Id: 53,250,360 | A_Id: 54,779,304 | CreationDate: 2018-11-11T15:43:00.000
Tags: python,python-3.x,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 764
Question: Hi I am using win 7 64 bit and tensorflow version 1.5 I've tried 1.9 and higher but isnt work and I've tried tensorflow-gpu version but again isnt work all the error this
Answer: I encountered same issue using tf 1.8. Tensorflow versions < 1.9 did not support the score_threshold param. Need to be sure you're using version 1.9 or newer.

Record 13
Title: Parsing a CSV into a database for an API using Python?
Q_Id: 53,253,610 | A_Id: 53,253,685 | CreationDate: 2018-11-11T21:51:00.000
Tags: python,sql,database,pandas,sqlite
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 270
Question: I'm gonna use data from a .csv to train a model to predict user activity on google ads (impressions, clicks) in relation to the weather for a given day. And I have a .csv that contains 6000+ recordings of this info and want to parse it into a database using Python. I tried making a df in pandas but for some reason the...
Answer: If you used pd.read_csv() i can assure you all of the info is there, it's just not displaying it. You can check by doing something like print(df['Column_name_you_are_interested_in'].tolist()) just to make sure though. You can also use the various count type methods in pandas to make sure all of your lines are there. Pa...

Record 14
Title: Input numerical arrays instead of images into Keras/TF CNN
Q_Id: 53,266,491 | A_Id: 53,294,036 | CreationDate: 2018-11-12T16:39:00.000
Tags: python,tensorflow,keras,conv-neural-network,mnist
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 440
Question: I have been building some variations of CNN's off of Keras/Tensorflow examples that use the MNIST data images (ubyte files) for feature extraction. My eventual goal is to do a similar thing but with a collection (~10000) 2D FFT arrays of signal data that I have made (n x m ~ 1000 x 50)(32 bite float data) I have been l...
Answer: Yes, you can use CNN for data other than images like sequential/time-series data(1D convolution but you can use 2D convolution as well). CNN does its job pretty good for these types of data. You should provide your input as an image matrix i.e a window on which CNN can perform convolution on. And you can store those in...

Record 15
Title: inception v3 using tf.data?
Q_Id: 53,272,508 | A_Id: 53,452,436 | CreationDate: 2018-11-13T01:35:00.000
Tags: python,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 58
Question: I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff. U...
Answer: Well, I eventually got this working. The various documents referenced in the comment on my question had what I needed, and I gradually figured out which parameters passed to queuerunners corresponded to which parameters in the tf.data stuff. There was one gotcha that took a while for me to sort out. In the inception ...

Record 16
Title: An algorithm that efficiently computes the distance of one labeled pixel to its nearest differently labeled pixel
Q_Id: 53,292,326 | A_Id: 53,301,201 | CreationDate: 2018-11-14T02:26:00.000
Tags: python,algorithm,distance,distance-matrix
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 80
Question: I apologize for my lengthy title name. I have two questions, where the second question is based on the first one. (1). Suppose I have a matrix, whose entries are either 0 or 1. Now, I pick an arbitrary 0 entry. Is there an efficient algorithm that searches the nearest entry with label 1 or calculates the distance betw...
Answer: I think that if you have a matrix, you can run a BFS version where the matrix A will be your graph G and the vertex v will be the arbitrary pixel you chose. There is an edge between any two adjacent cells in the matrix.

Record 17
Title: What is the difference between tensorflow serving Dockerfile and Dockerfile.devel?
Q_Id: 53,294,415 | A_Id: 53,308,093 | CreationDate: 2018-11-14T06:38:00.000
Tags: python,docker,tensorflow,dockerfile,tensorflow-serving
Topics: System Administration and DevOps, Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 125
Question: Why are there two different docker files for tensorflow serving - Dockerfile & Dockerfile.devel - for both CPU and GPUs? Which one is necessary for deploying and testing?
Answer: a Dockerfile is a file where your write the configurations to create a docker image. The tensorflow/serving cpu and gpu are docker images which means they are already configured to work with tensorflow, tensorflow_model_server and, in the case of gpu, with CUDA. If you have a GPU, then you can use a tensorflow/servin...

Record 18
Title: Python/Gensim - What is the meaning of syn0 and syn0norm?
Q_Id: 53,301,916 | A_Id: 53,333,072 | CreationDate: 2018-11-14T13:56:00.000
Tags: python,deep-learning,nlp,gensim,word-embedding
Topics: Data Science and Machine Learning
Q_Score: 6 | Users Score: 7 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 6,897
Question: I know that in gensims KeyedVectors-model, one can access the embedding matrix by the attribute model.syn0. There is also a syn0norm, which doesn't seem to work for the glove model I recently loaded. I think I also have seen syn1 somewhere previously. I haven't found a doc-string for this and I'm just wondering what's...
Answer: These names were inherited from the original Google word2vec.c implementation, upon which the gensim Word2Vec class was based. (I believe syn0 only exists in recent versions for backward-compatbility.) The syn0 array essentially holds raw word-vectors. From the perspective of the neural-network used to train word-vecto...

Record 19
Title: Reverse engineer scikit-learn serialized model
Q_Id: 53,304,675 | A_Id: 53,306,686 | CreationDate: 2018-11-14T16:25:00.000
Tags: python,machine-learning,scikit-learn,pickle,joblib
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: -1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 450
Question: I am trying to understand the security implications of serializing a scikit-learn/keras fitted model (using pickle/joblib etc). Specifically, if I work on data that I don't want to be revealed, would there be anyway for someone to reverse engineer what data a model was fitted on? Or is the data, just a way for the alg...
Answer: No, you cant (in principle, anyway) reverse engineer the data based on a model. You can obviously derive the trained model weights/etc and start to get a good understanding of what it might have been trained over, but directly deriving the data, I'm not aware of any possible way of doing that, providing you're pickling...

Record 20
Title: Why is pd.unique() faster than np.unique()?
Q_Id: 53,310,547 | A_Id: 62,402,653 | CreationDate: 2018-11-14T23:57:00.000
Tags: python,pandas,numpy,data-science,data-analysis
Topics: Data Science and Machine Learning
Q_Score: 7 | Users Score: 3 | Score: 0.53705 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,666
Question: I tried to compare the two, one is pandas.unique() and another one is numpy.unique(), and I found out that the latter actually surpass the first one. I am not sure whether the excellency is linear or not. Can anyone please tell me why such a difference exists, with regards to the code implementation? In what case shoul...
Answer: np.unique() is treating the data as an array, so it goes through every value individually then identifies the unique fields. whereas, pandas has pre-built metadata which contains this information and pd.unique() is simply calling on the metadata which contains 'unique' info, so it doesn't have to calculate it again.

Record 21
Title: Compare stock indices of different sizes Python
Q_Id: 53,312,182 | A_Id: 53,312,488 | CreationDate: 2018-11-15T03:51:00.000
Tags: python,plot,statistics,correlation
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1 | ViewCount: 318
Question: I am using Python to try and do some macroeconomic analysis of different stock markets. I was wondering about how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russel 2000 is only around 1,500. I know that the website tradingview makes it possible to...
Answer: I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. These websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if Dow is 25000 poi...

Record 22
Title: How to convert 2D matrix to 3D tensor without blending corresponding entries?
Q_Id: 53,313,913 | A_Id: 53,314,237 | CreationDate: 2018-11-15T06:53:00.000
Tags: python,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 626
Question: I have data with the shape of (3000, 4), the features are (product, store, week, quantity). Quantity is the target. So I want to reconstruct this matrix to a tensor, without blending the corresponding quantities. For example, if there are 30 product, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30)...
Answer: You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if product is 'X...

Record 23
Title: Python - No module found
Q_Id: 53,319,860 | A_Id: 53,320,092 | CreationDate: 2018-11-15T12:49:00.000
Tags: python,macos
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 53
Question: I'm new to Python and because I couldn't find a solution for my problem after some researches in google, I'm creating a new question, where I'm sure someone for 100% asked for it already. I have installed miniconda with numpy and pandas, which I want to use. It's located at ~/miniconda. I've created new python file in ...
Answer: In Anaconda Navigator, I have already installed those libraries. What I have done was to delete them and install once again. Now it works for me from both: console and Jupyter Notebook.

Record 24
Title: Python - No module found
Q_Id: 53,319,860 | A_Id: 53,320,029 | CreationDate: 2018-11-15T12:49:00.000
Tags: python,macos
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 53
Question: I'm new to Python and because I couldn't find a solution for my problem after some researches in google, I'm creating a new question, where I'm sure someone for 100% asked for it already. I have installed miniconda with numpy and pandas, which I want to use. It's located at ~/miniconda. I've created new python file in ...
Answer: conda has its own version of the Python interpreter. It is located in the Miniconda directory (It's called "Python.exe"). If you are using an IDE you need to switch the interpreter to use this version of Python rather than the default one you may have installed on the internet from the Python website itself.

Record 25
Title: xgboost feature importance of categorical variable
Q_Id: 53,327,334 | A_Id: 53,327,378 | CreationDate: 2018-11-15T20:21:00.000
Tags: python,xgboost,categorical-data
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,226
Question: I am using XGBClassifier to train in python and there are a handful of categorical variables in my training dataset. Originally, I planed to convert each of them into a few dummies before I throw in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I al...
Answer: You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance/importance measures.

Record 26
Title: Running python script on external hard disk (training neural network)
Q_Id: 53,332,468 | A_Id: 53,332,661 | CreationDate: 2018-11-16T06:20:00.000
Tags: python,neural-network
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,158
Question: I have a dataset that is too large to store locally and I want to train a neural network. Which would be faster? or are they the same? 1) All files are stored on the external hard drive. The python file is run in the directory of the hard drive that loads the data and trains the network 2) Python files are saved local...
Answer: It depends on the read speed of your Hard drive and External hard drive. Is your hard drive a SSD? If it is, then It sure gonna be way faster than your external hard drive. If the read speed of your hard disk drive and external is same or similar, then its doesn't matter where you store your dataset. 1) Your python fil...

Record 27
Title: How to use dask to populate DataFrame in parallelized task?
Q_Id: 53,333,644 | A_Id: 53,363,572 | CreationDate: 2018-11-16T07:58:00.000
Tags: python,pandas,python-multiprocessing,python-multithreading,dask
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 48
Question: I would like to use dask to parallelize a numbercrunching task. This task utilizes only one of the cores in my computer. As a result of that task I would like to add an entry to a DataFrame via shared_df.loc[len(shared_df)] = [x, 'y']. This DataFrame should be populized by all the (four) paralllel workers / threads in...
Answer: The right way to do something like this, in rough outline: make a function that, for a given argument, returns a data-frame of some part of the total data wrap this function in dask.delayed, make a list of calls for each input argument, and make a dask-dataframe with dd.from_delayed if you really need the index to be ...

Record 28
Title: Multidimensional gradient descent in Tensorflow
Q_Id: 53,333,794 | A_Id: 53,416,420 | CreationDate: 2018-11-16T08:10:00.000
Tags: python,tensorflow,gradient-descent
Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 358
Question: What does Tensorflow really do when the Gradient descent optimizer is applied to a "loss" placeholder that is not a number (a tensor of size 1) but rather a vector (a 1-dimensional tensor of size 2, 3, 4, or more)? Is it like doing the descent on the sum of the components?
Answer: Tensorflow first reduces your loss to a scalar and then optimizes that.

Record 29
Title: How to use F-score as error function to train neural networks?
Q_Id: 53,354,176 | A_Id: 61,325,048 | CreationDate: 2018-11-17T18:20:00.000
Tags: python,tensorflow,loss-function,precision-recall
Topics: Data Science and Machine Learning
Q_Score: 6 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 1 | ViewCount: 8,429
Question: I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much much less than negative examples in my dataset (it is a medical dataset). So, I know that F-score calculated from precision and recall is a good measure of how well the model is trained. I have used ...
Answer: the loss value and accuracy is a different concept. The loss value is used for training the NN. However, accuracy or other metrics is to value the training result.

Record 30
Title: Is there an algorithm to calculate a numerical rating of the degree of abstractness of a word in NLP?
Q_Id: 53,364,314 | A_Id: 53,365,423 | CreationDate: 2018-11-18T18:45:00.000
Tags: python,nlp,wordnet
Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 77
Question: Is there an algorithm that can automatically calculate a numerical rating of the degree of abstractness of a word. For example, the algorithm rates purvey as 1, donut as 0, and immodestly as 0.5 ..(these are example values) Abstract words in the sense words that refer to ideas and concepts that are distant from immedia...
Answer: There's no definition of abstractness that I know of, neither any algorithm to calculate it. However, there are several directions I would use as proxies Frequency - Abstract concepts are likely to be pretty rare in a common speech, so a simple idf should help identify rare words. Etymology - Common words in English, ...

Record 31
Title: Return distribution over set of action space from Neural Network
Q_Id: 53,378,284 | A_Id: 53,378,514 | CreationDate: 2018-11-19T15:52:00.000
Tags: python,tensorflow,neural-network,probability,reinforcement-learning
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 57
Question: I am trying to build a neural network to output a probabilistic distribution over set of whole action space. My action space is a vector of 3 individual actions : [a,b,c] a can have 3 possible actions within itself a1,a2,a3 and similarly b has b1,b2,b3, c has c1,c2,c3. So In total i can have 27 different combinations...
Answer: Is there any reason you want it to return a matrix of these actions? Why not just map each of the 27 combinations to integers 0-26? So your architecture could look like [Linear(5, n), ReLU, Linear(n, .) ... Softmax(Linear(., 27))]. Then when you need to evaluate, you can just map it back to the action sequence. This is...

Record 32
Title: Dask how to avoid recomputing things
Q_Id: 53,392,067 | A_Id: 53,711,345 | CreationDate: 2018-11-20T11:30:00.000
Tags: python,dask,dask-distributed
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 210
Question: Using dask I have defined a long pipeline of computations; at some point given constraints in apis and version I need to compute some small result (not lazy) and feed it in the lazy operations. My problem is that at this point the whole computation graph will be executed so that I can produce an intermediate results. I...
Answer: Yes, this is the usecase that persist is for. The trick is figuring out where to apply it - this decision is usually influenced by: The size of your intermediate results. These will be kept in memory until all references to them are deleted (e.g. foo in foo = intermediate.persist()). The shape of your graph. It's bett...

Record 33
Title: Feature selection in K means clustering
Q_Id: 53,396,792 | A_Id: 53,396,837 | CreationDate: 2018-11-20T15:53:00.000
Tags: python-3.x,cluster-analysis,k-means
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 635
Question: I have a dataset with more than 20 columns. I want to find out which two variables contributes towards highest importance. How to do it?
Answer: The brute force approach is to try all different 380 possibilities. The non brute force approach could be try to do your clustering with 19 features (all 20 solutions) and keeping the best one, then dropping one more, selecting the best of the 19... up to two classes.

Record 34
Title: How do I train the Convolutional Neural Network with negative and positive elements as the input of the first layer?
Q_Id: 53,404,679 | A_Id: 53,404,984 | CreationDate: 2018-11-21T03:00:00.000
Tags: python,tensorflow,conv-neural-network
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 2 | ViewCount: 245
Question: Just I am curious why I have to scale the testing set on the testing set, and not on the training set when I’m training a model on, for example, CNN?! Or am I wrong? And I still have to scale it on the training set. Also, can I train a dataset in the CNN that contents positive and negative elements as the first input o...
Answer: We usually have 3 types of datasets for getting a model trained, Training Dataset Validation Dataset Test Dataset Training Dataset This should be an evenly distributed data set which covers all varieties of data. If your train with more epochs, the model will get used to the training dataset and will only give proper...

Record 35
Title: How do I train the Convolutional Neural Network with negative and positive elements as the input of the first layer?
Q_Id: 53,404,679 | A_Id: 53,404,823 | CreationDate: 2018-11-21T03:00:00.000
Tags: python,tensorflow,conv-neural-network
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 2 | ViewCount: 245
Question: Just I am curious why I have to scale the testing set on the testing set, and not on the training set when I’m training a model on, for example, CNN?! Or am I wrong? And I still have to scale it on the training set. Also, can I train a dataset in the CNN that contents positive and negative elements as the first input o...
Answer: Scaling data depends upon the requirement as well the feed/data you got. Test data gets scaled with Test data only, because Test data don't have the Target variable (one less feature in Test data). If we scale our Training data with new Test data, our model will not be able to correlate with any target variable and thu...

Record 36
Title: Mixing questions between Qualtrics Blocks
Q_Id: 53,404,770 | A_Id: 53,413,823 | CreationDate: 2018-11-21T03:13:00.000
Tags: python,machine-learning,deep-learning,qualtrics
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 30
Question: I am creating a questionnaire on Qualtrics and I have 3 different blocks of questions. Let's call them A, B, and C. Each of the blocks has 100 questions each. I want to randomly pick 15 questions from each of the blocks. That part is easy. I have used Randomization options available for each block. However, I want to ...
Answer: There isn't any easy way to do this. You could put all 300 questions in the same block. Then in a block before the 300 question block have 3 multiple choice questions (QA, QB, QC) where you have placeholders for the 100 questions as choices (QA: A1, A2, ..., A100; QB: B1, B2, ... , B100; QC: C1, C2, ..., C100). For e...

Record 37
Title: what is workers parameter in word2vec in NLP
Q_Id: 53,417,258 | A_Id: 53,417,517 | CreationDate: 2018-11-21T17:07:00.000
Tags: python,machine-learning,nlp,word2vec
Topics: Python Basics and Environment, Data Science and Machine Learning
Q_Score: 0 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 3 | Available Count: 1 | ViewCount: 5,032
Question: in below code . i didn't understand the meaning of workers parameter . model = Word2Vec(sentences, size=300000, window=2, min_count=5, workers=4)
Answer: workers = use this many worker threads to train the model (=faster training with multicore machines). If your system is having 2 cores, and if you specify workers=2, then data will be trained in two parallel ways. By default , worker = 1 i.e, no parallelization

Record 38
Title: How to process a voice input in Keras that is not one of the two speaker outputs?
Q_Id: 53,430,205 | A_Id: 53,430,651 | CreationDate: 2018-11-22T11:40:00.000
Tags: python,machine-learning,keras
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 2 | ViewCount: 74
Question: I'm trying to create a speaker recognition with a neural network using Keras and also the Fourier transformation to process the voice samples. The voice samples are me and my friend saying 'eeeee' for 3 seconds. Now the problem is if we give the neural network an input of someone else doing that ('ee' for 3 seconds), i...
Answer: A neural-net is - in essence - nothing more than a fancy feature-extractor and interpolator. There is no reason to expect anything specific for data that it's never seen, and this doesn't have much to do with working with the DTFT, MFCC, or I-Vectors, it's a basic principle of data-driven algorithms. Just as a methodol...

Record 39
Title: How to process a voice input in Keras that is not one of the two speaker outputs?
Q_Id: 53,430,205 | A_Id: 53,430,299 | CreationDate: 2018-11-22T11:40:00.000
Tags: python,machine-learning,keras
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 74
Question: I'm trying to create a speaker recognition with a neural network using Keras and also the Fourier transformation to process the voice samples. The voice samples are me and my friend saying 'eeeee' for 3 seconds. Now the problem is if we give the neural network an input of someone else doing that ('ee' for 3 seconds), i...
Answer: It is not a simple matter. You need a lot more training examples and to do some tests. You COULD try to train something to be you-and-your-friend vs all, but it won't be that easy and (again) you will need lots of examples. It's very broad as a question, there are a few different approaches and i'm not sure Keras and n...

Record 40
Title: ImportError: cannot import name 'convert_kernel'
Q_Id: 53,456,874 | A_Id: 54,192,123 | CreationDate: 2018-11-24T09:33:00.000
Tags: python,tensorflow
Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,678
Question: When i try to use tensorflow to train model, i get this error message. File "/Users/ABC/anaconda3/lib/python3.6/site-packages/keras/utils/layer_utils.py", line 7, in from .conv_utils import convert_kernel ImportError: cannot import name 'convert_kernel' i have already install Keras
Answer: I got the same issue. The filename of my python code was "tensorflow.py". After I changed the name to "test.py". The issue was resolved. I guess there is already a "tensorflow.py" in the tensorflow package. If anyone uses the same name, it may lead to the conflict. If your python code is also called "tensorflow.py", yo...

Record 41
Title: propagation model using neural network (I am beginner)
Q_Id: 53,464,452 | A_Id: 53,636,929 | CreationDate: 2018-11-25T03:39:00.000
Tags: python,networking,perceptron,propagation
Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 33
Question: Propagation model: P = 10 * n * log10 (d/do) P = path loss (dB) n = the path loss distance exponent d = distance (m) do = reference distance (m) The initial idea is to make the loss measurements 'P' with respect to a distance 'd', and to determine the value of 'n' my question: is this implementation possible using mul...
Answer: I believe you need a data for the response (PL) and data for the independent variables in order to find n. you can find n using that data in SPSS, excel, Matlab etc. Good luck.
0
53,465,899
0
0
0
0
1
false
0
2018-11-25T05:33:00.000
0
1
0
Is it Normal for a Neural Network Loss to Increase after being trained on an example?
53,464,933
0
python,machine-learning,neural-network,lstm,recurrent-neural-network
Is your dataset shuffled? Otherwise it could be the case that it was predicting one class for the first 99 examples. If not then LSTM can be tricky to train. Try changing hyper parameters and also I would recommend starting with SimpleRNN, GRU and then LSTM as sometimes a simple network might just do the trick.
I am currently testing an LSTM network. I print the loss of its prediction on a training example before back-propagation and after back-propagation. It would make sense that the after loss should always be less than the before loss because the network was just trained on that example. However, I am noticing that aroun...
0
1
28
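A minimal sketch of the shuffling the answer above suggests, using toy NumPy arrays (the data and seed are assumptions for illustration): one permutation is applied to both features and labels so each (X, y) pair stays aligned.

```python
import numpy as np

# Toy data standing in for the real training set: the labels are sorted,
# so without shuffling the network sees one class first.
rng = np.random.default_rng(seed=0)
X = np.arange(10).reshape(5, 2)   # 5 samples, 2 features
y = np.array([0, 0, 0, 1, 1])

perm = rng.permutation(len(X))    # one permutation, reused for both arrays
X_shuffled, y_shuffled = X[perm], y[perm]
```

Indexing both arrays with the same permutation is what keeps sample i paired with its label after the shuffle.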
0
54,864,247
0
0
0
0
1
true
1
2018-11-26T08:15:00.000
2
3
0
Tensorflow r1.12: TypeError: Type already registered for SparseTensorValue when running a 2nd script
53,477,005
1.2
python-3.x,tensorflow,spyder
(Spyder maintainer here) This error was fixed in Spyder 3.3.3, released on February/2019.
I have just built Tensorflow r1.12 from source in Ubuntu 16.04. The installation is successful. When I run a certain script in Spyder at the 1st time, everything flows smoothly. However, when I continue to run another script, following errors occur (which didn't happen previously): File "/home/haohua/tf_env/lib/python...
0
1
2,669
0
53,508,471
0
1
0
0
1
false
4
2018-11-26T20:13:00.000
5
1
0
Is it possible to append to an existing Feathers format file?
53,488,351
0.761594
python,pandas,feather
Feather files are intended to be written at once. Thus appending to them is not a supported use case. Instead I would recommend to you for such a large dataset to write the data into individual Apache Parquet files using pyarrow.parquet.write_table or pandas.DataFrame.to_parquet and read the data also back into Pandas ...
I am working on a very huge dataset with 20 million+ records. I am trying to save all that data into a feathers format for faster access and also append as I proceed with me analysis. Is there a way to append pandas dataframe to an existing feathers format file?
0
1
1,519
0
53,493,633
0
0
0
0
1
true
1
2018-11-27T05:53:00.000
1
2
0
frozen frames detection openCV python
53,493,527
1.2
python,opencv,ubuntu-16.04
This was my approach to solve this issue. Frozen frames: calculate the absolute difference over HSV/RGB for every pixel in two consecutive frame np.arrays, and determine the max allowed diff that is valid for detecting frozen frames. Black frames naturally have a very low (or zero) V-value sum over the frame. Determine the max V-su...
I'm trying to detect whether a camera is capturing frozen frames or black frames. Suppose a camera is capturing video frames and suddenly the same frame is captured again and again. I spent a long time trying to get an idea about this problem but failed. So how do we detect it, or what ideas/steps/procedures would work for this problem?
0
1
805
0
53,504,769
0
0
0
0
1
false
0
2018-11-27T17:01:00.000
1
1
0
Ignore edges when calculating betweenness or closeness of the graph
53,504,630
0.197375
python,networkx,igraph
You could simply remove the edges you want to ignore before running the computations, and keep a record of what edges you have to put back when you're done.
I want to do calculation on my graph neglecting some edges (as if they don't exist). Like calculation of degree, closeness, or betweenness. any ideas ! Python
0
1
32
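A small sketch of the approach in the answer above, assuming NetworkX and a made-up toy graph: copy the graph, remove the edges to ignore, and run the centrality computations on the copy so the original is untouched.

```python
import networkx as nx

# Hypothetical example graph (a 4-cycle plus one chord).
G = nx.Graph([(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)])

ignored = [(2, 4)]           # edges to treat as nonexistent

H = G.copy()                 # work on a copy so G keeps its edges
H.remove_edges_from(ignored)

deg_before = dict(G.degree())
deg_after = dict(H.degree())
bc = nx.betweenness_centrality(H)   # computed as if (2, 4) never existed
```

The copy doubles as the "record of what edges to put back": the original graph G still holds them.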
0
53,509,981
0
0
0
0
1
false
0
2018-11-27T23:39:00.000
-1
2
0
How to create a column of (x,y) pairs in a data frame
53,509,923
-0.099668
python,pandas
Implementing (x,y) coordinates in 1 column would be unnecessarily complex and hacky. I strongly recommend you make two columns, for example pair1_x and pair1_y. Is there a particular reason you need one column?
I am trying to get a column of (x,y) coordinate pairs in my pandas data frame. I want to be able to access each part of the coordinate. For example, if the title of the column is 'pair1' I want to be able to call pair1[0] and pair1[1] to access the x and y integers respectively. Ultimately, I'd be passing these into a ...
0
1
854
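A short sketch of both options from the exchange above, with assumed column names (pair1_x, pair1_y): separate columns for vectorized access, and a tuple column if a single column is really required.

```python
import pandas as pd

# Recommended: one column per coordinate.
df = pd.DataFrame({"pair1_x": [1, 3, 5], "pair1_y": [2, 4, 6]})
x0, y0 = df.loc[0, "pair1_x"], df.loc[0, "pair1_y"]

# If a single column is really needed, tuples work but lose vectorized access:
df["pair1"] = list(zip(df["pair1_x"], df["pair1_y"]))
first_pair = df["pair1"].iloc[0]   # first_pair[0] is x, first_pair[1] is y
```

The two-column layout lets pandas operate on x and y as numeric Series; the tuple column stores Python objects, so most operations on it fall back to slow per-row loops.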
0
54,064,392
0
0
0
0
1
false
3
2018-11-28T14:13:00.000
3
1
0
XGBoost (Python) Prediction for Survival Model
53,521,427
0.53705
python,xgboost
No, I think not. A workaround would be to fit the baseline hazard in another package e.g. from sksurv.linear_model import CoxPHSurvivalAnalysis or in R by require(survival). Then you can use the predicted output from XGBoost as multiplyers to the fitted baseline. Just remember that if the baseline is on the log scale t...
The docs for Xgboost imply that the output of a model trained using the Cox PH loss will be exponentiation of the individual persons predicted multiplier (against the baseline hazard). Is there no way to extract from this model the baseline hazard in order to predict the entire survival curve per person? survival:cox:...
0
1
2,074
0
53,529,130
0
0
0
0
2
false
0
2018-11-28T20:06:00.000
0
2
0
NetworkX plotting: different units/scale between node positions and sizes?
53,527,307
0
python,matplotlib,networkx
Networkx uses matplotlib to plot things. It does not use pixels for its coordinates, and for good reason. If you have coordinates whose values range from -0.01 to 0.01, it will produce a plot that scales the upper and lower bounds on the coordinates to be large enough to hold this, but not so large that everything...
I'm working on a graph with (x,y) node coordinates randomly picked from 0-100. If I simply plot the graph using nx.draw() and passing the original coordinates it looks ok, but if I try to plot some node sizes in a way it relates to coordinates it looks clearly inconsistent. Looks like the nodes position parameter in dr...
0
1
307
0
53,529,069
0
0
0
0
2
true
0
2018-11-28T20:06:00.000
1
2
0
NetworkX plotting: different units/scale between node positions and sizes?
53,527,307
1.2
python,matplotlib,networkx
Ok, I figured it out... Position parameter for nodes are relative, from 0.0 to 1.0 times whatever your plot size is, while size parameter is absolute, in pixels
I'm working on a graph with (x,y) node coordinates randomly picked from 0-100. If I simply plot the graph using nx.draw() and passing the original coordinates it looks ok, but if I try to plot some node sizes in a way it relates to coordinates it looks clearly inconsistent. Looks like the nodes position parameter in dr...
0
1
307
0
53,528,742
0
1
0
0
1
false
0
2018-11-28T21:44:00.000
1
3
0
Pandas read and parse Excel data that shows as a datetime, but shouldn't be a datetime
53,528,583
0.066568
python,pandas
Pandas is not at fault; it's Excel that is interpreting the data wrongly. Set the data to text in that column and Excel won't interpret it as a date. Then save the file, open it through pandas, and it should work fine. Otherwise, export as CSV and try to open that in pandas.
I have a system I am reading from that implemented a time tracking function in a pretty poor way - It shows the tracked working time as [hh]:mm in the cell. Now this is problematic when attempting to read this data because when you click that cell the data bar shows 11:00:00 PM, but what that 23:00 actually represents...
0
1
470
0
53,535,742
0
0
0
0
1
false
15
2018-11-29T00:38:00.000
11
2
0
The loss function and evaluation metric of XGBoost
53,530,189
1
python,machine-learning,xgboost,xgbclassifier
'binary:logistic' uses -(y*log(y_pred) + (1-y)*log(1-y_pred)) 'reg:logistic' uses (y - y_pred)^2 To get a total estimate of the error, we sum all errors and divide by the number of samples. You can find this in the basics when comparing linear regression vs. logistic regression. Linear regression uses (y - y_pred)^2 as th...
I am confused now about the loss functions used in XGBoost. Here is how I feel confused: we have objective, which is the loss function needs to be minimized; eval_metric: the metric used to represent the learning result. These two are totally unrelated (if we don't consider such as for classification only logloss and ...
0
1
19,483
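The two loss formulas quoted in the answer above can be computed directly; a toy sketch with made-up labels and predictions (not real XGBoost output):

```python
import numpy as np

# Binary log loss: -(y*log(p) + (1-y)*log(1-p)), averaged over samples.
y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.8, 0.6])

logloss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Squared error (y - p)^2, averaged, for comparison:
mse = np.mean((y - p) ** 2)
```

Averaging over the samples is the "sum all errors and divide by number of samples" step the answer describes.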
0
53,540,809
0
0
0
0
1
true
1
2018-11-29T13:43:00.000
0
1
0
How to use Tensorflow tf.nn.Conv2d simultaneously for training and prediction?
53,540,424
1.2
python,tensorflow,neural-network,deep-learning,reinforcement-learning
You need to define your placeholder as follows: tf.placeholder(shape=(None, 160, 128, 3), ...). With None in the first dimension, your placeholder will be flexible to any batch size you feed, whether 1 or 100.
I am currently diving deeper into tensorflow and I am a bit puzzled with the proper use of tf.nn.Conv2d(input, filter, strides, padding). Although it looks simple at first glance I cannot get my hear around the following issue: The use of filter, strides, padding is clear to me. However what is not clear is the correct...
0
1
221
0
53,543,239
0
0
0
0
1
true
0
2018-11-29T15:58:00.000
2
1
0
Using regular python code on a Spark cluster
53,542,955
1.2
python,apache-spark,distributed-computing
Spark uses RDDs (Resilient Distributed Datasets) to distribute work among workers or slaves. I don't think you can use your existing Python code without dramatically adapting it to the Spark model. For TensorFlow, there are many options to distribute computing over multiple GPUs.
Can I run a normal python code using regular ML libraries (e.g., Tensorflow or sci-kit learn) in a Spark cluster? If yes, can spark distribute my data and computation across the cluster? if no, why?
0
1
238
0
53,543,608
0
0
0
0
1
false
1
2018-11-29T15:59:00.000
1
2
0
Pytorch: Normalize Image data set
53,542,974
0.099668
python,deep-learning,computer-vision,conv-neural-network,pytorch
What normalization tries to do is maintain the overall information in your dataset even when there are differences in the values. In the case of images, it tries to set apart issues like brightness and contrast that in certain cases do not contribute to the general information the image carries. There are seve...
I want to normalize custom dataset of images. For that i need to compute mean and standard deviation by iterating over the dataset. How can I normalize my entire dataset before creating the data set?
0
1
3,508
0
53,544,202
0
1
1
0
1
false
3
2018-11-29T16:03:00.000
1
1
0
Algorithm to find minimum number of resistors from a set of resistor values. (C++ or Python)
53,543,066
0.197375
python,c++,algorithm
This is actually quite hard, best I can do is propose an idea for an algorithm for solving the first part, the concept of including parallels looks harder as well, but maybe the algorithm can be extended. If you define a function "best", which takes a target resistance as input and outputs the minimal set of resistors ...
I'm trying to design an algorithm that takes in a resistance value and outputs the minimum number of resistors and the values associated with those resistors. I would like the algorithm to iterate through a set of resistor values and the values in the set can be used no more than n times. I would like some direction on...
0
1
241
0
53,709,777
0
0
0
0
1
false
1
2018-11-29T23:14:00.000
1
2
0
is there a way to download the frozen graph ( .pb file ) of PoseNet?
53,548,976
0.099668
python,tensorflow,neural-network,deep-learning,tensorflow.js
We currently do not have the frozen graph for inference publicly, however you could download the assets and run them in a Node.js environment.
I intend to use posenet in python and not in browser, for that I need the model as a frozen graph to do inference on. Is there a way to do that?
0
1
2,638
0
53,557,733
0
0
0
0
1
false
0
2018-11-30T12:32:00.000
0
2
0
not having to load a dataset over and over
53,557,674
0
python,global-variables,spyder
It depends how large your data set is. For relatively smaller datasets you could look at installing Anaconda Python Jupyter notebooks. Really great for working with data and visualisation once the dataset is loaded. For larger datasets you can write some functions / generators to iterate efficiently through the datase...
Currently in R, once you load a dataset (for example with read.csv), Rstudio saves it as a variable in the global environment. This ensures you don't have to load the dataset every single time you do a particular test or change. With Python, I do not know which text editor/IDE will allow me to do this. E.G - I want to...
0
1
119
0
53,558,970
0
0
0
0
1
false
0
2018-11-30T13:52:00.000
0
1
0
IndexError: index 43462904 is out of bounds for size 43462904
53,558,877
0
python-3.x,group-by,compiler-errors,index-error
An array of length N can be indexed with 0 ... N-1: arr = [0,1,2] arr[0]: 0 arr[1]: 1 arr[2]: 2 len(arr): 3 In this example you try to access arr[3] which is invalid as it's the N+1st entry in the array.
I have a data set that has 43,462,904 records. I try to do a group by with two variables and take an average of the third one. The function is: df1 = df.groupby(["var1", pd.Grouper(key="var2", freq="MS")]).mean() The error that comes out is the following: IndexError: index 43462904 is out of bounds for size 43462904 T...
0
1
162
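A tiny sketch of the indexing rule the answer above states, using an assumed three-element list: valid indices run from 0 to N-1, and accessing index N raises the IndexError.

```python
# Valid indices for a sequence of length N run from 0 to N-1.
arr = [10, 20, 30]          # len(arr) == 3
last = arr[len(arr) - 1]    # arr[2] -> 30, the last valid access

try:
    arr[len(arr)]           # arr[3] is one past the end
except IndexError as e:
    msg = str(e)            # "list index out of range"
```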
0
53,559,275
0
0
0
0
1
false
0
2018-11-30T14:10:00.000
0
2
0
Does the test set is used to update weight in a deep learning model with keras?
53,559,147
0
python,keras,deep-learning
No , you souldn't use your test set for training to prevent overfitting , if you use cross-validation principles you need exactly to split your data into three datasets a train set which you'll use to train your model , a validation set to test different value of your hyperparameters , and a test set to finally tes...
I'm wondering if the result of the test set is used to make the optimization of model's weights. I'm trying to make a model but the issue I have is I don't have many data because they are medical research patients. The number of patient is limited in my case (61) and I have 5 feature vectors per patient. What I tried i...
0
1
204
0
53,560,972
0
0
0
0
1
true
0
2018-11-30T15:43:00.000
2
1
0
Unusual order of dimensions of an image matrix in python
53,560,638
1.2
python-3.x,matlab
You misunderstood. You do not want to reshape it; you want to transpose it. In MATLAB, arrays are A(x,y,z), while in Python they are P[z,y,x]. Make sure that once you load the entire matrix, you swap the first and last dimensions. You can do this with the swapaxes function, but beware! It does not make a copy nor change...
I downloaded a dataset which contains a MATLAB file called 'depths.mat' which contains a 3-dimensional matrix with the dimensions 480 x 640 x 1449. These are actually 1449 images, each with the dimension 640 x 480. I successfully loaded it into python using the scipy library but the problem is the unusual order of the ...
0
1
42
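A sketch of the axis swap described in the accepted answer, using a small stand-in array (the real volume is 480 x 640 x 1449, too large for a quick demo): swapaxes returns a view with the first and last dimensions exchanged, and transpose gives full control over the axis order.

```python
import numpy as np

# Small stand-in for the (480, 640, 1449) depth volume: rows x cols x images.
depths = np.zeros((4, 6, 9))

swapped = np.swapaxes(depths, 0, 2)        # shape (9, 6, 4); a view, no copy
images = np.transpose(depths, (2, 0, 1))   # shape (9, 4, 6): image index first
```

Because swapaxes returns a view, writing into `swapped` also modifies `depths`; call `.copy()` if independent storage is needed.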
0
53,564,337
0
0
0
0
1
false
1
2018-11-30T20:12:00.000
0
2
0
Pandas memory error when saving DataFrame to file
53,564,278
0
python,pandas
There are several options. You can pickle the dataframe or you can use the HDF5 format. These will occupy less disk space, and when you load the frame next time it will be quicker than with other formats.
I finnally managed to join two big DataFrames on a big machine of my school (512G memory). At the moment we re two people using the same machine, the other one is using about 120G of the memory, after I called the garbage collecter we get to 420G. I want to save the DataFrame to memory so I then I can reuse it easily a...
0
1
1,372
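A minimal round-trip through pickle, as the answer above suggests, on an assumed toy frame (the real one is hundreds of gigabytes, so only the pattern carries over):

```python
import os
import tempfile

import pandas as pd

# Toy frame standing in for the big joined DataFrame.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "big_join.pkl")
    df.to_pickle(path)                 # compact binary dump of the frame
    restored = pd.read_pickle(path)    # fast reload, dtypes preserved

same = df.equals(restored)
```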
0
53,566,219
0
0
0
0
2
false
1
2018-11-30T21:40:00.000
0
2
0
word2vec: user-level, document-level embeddings with pre-trained model
53,565,271
0
python,twitter,nlp,word2vec,word-embedding
You are on the right track with averaging the word vectors in a tweet to get a "tweet vector" and then averaging the tweet vectors for each user to get a "user vector". Whether these average vectors will be useful or not depends on your learning task. Hard to say if this average method will work or not without trying s...
I am currently developing a Twitter content-based recommender system and have a word2vec model pre-trained on 400 million tweets. How would I go about using those word embeddings to create a document/tweet-level embedding and then get the user embedding based on the tweets they had posted? I was initially intending on...
0
1
312
0
53,612,424
0
0
0
0
2
true
1
2018-11-30T21:40:00.000
2
2
0
word2vec: user-level, document-level embeddings with pre-trained model
53,565,271
1.2
python,twitter,nlp,word2vec,word-embedding
Averaging the vectors of all the words in a short text is one way to get a summary vector for the text. It often works OK as a quick baseline. (And, if all you have, is word-vectors, may be your main option.) Such a representation might sometimes improve if you did a weighted average based on some other measure of rel...
I am currently developing a Twitter content-based recommender system and have a word2vec model pre-trained on 400 million tweets. How would I go about using those word embeddings to create a document/tweet-level embedding and then get the user embedding based on the tweets they had posted? I was initially intending on...
0
1
312
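A sketch of the two-level averaging both answers describe, with hypothetical 4-dimensional word vectors standing in for the pre-trained word2vec embeddings: average word vectors per tweet, then average tweet vectors per user.

```python
import numpy as np

# Hypothetical word vectors (real ones would come from the trained model).
word_vecs = {
    "good": np.array([1.0, 0.0, 0.0, 0.0]),
    "game": np.array([0.0, 1.0, 0.0, 0.0]),
    "bad":  np.array([0.0, 0.0, 1.0, 0.0]),
}

def tweet_vector(tokens):
    """Average the vectors of known words; zeros if none are known."""
    known = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(known, axis=0) if known else np.zeros(4)

# User vector = average of that user's tweet vectors.
tweets = [["good", "game"], ["bad", "game"]]
user_vec = np.mean([tweet_vector(t) for t in tweets], axis=0)
```

A weighted average (e.g. by TF-IDF, as the accepted answer hints) would replace np.mean with np.average and a weights argument.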
0
53,572,155
0
1
0
0
1
true
4
2018-12-01T14:48:00.000
4
1
0
How to see updated Dataframe after I run the code again in Spyder (without doubleclicking from Variable explorer after EVERY run)?
53,571,920
1.2
python,pandas,dataframe,spyder
(Spyder maintainer here) Unfortunately this is not possible as of September 2020. However, we'll try to implement this functionality in Spyder 5, to be released in 2021.
Is there a way to see an updated version of my Dataframe every time I run the code in Spyder? I can see the name and size of the Dataframe in "Variable explorer" but I don't like that I have to double click it to open it. Or is there a way to have the Dataframe (that I have already earlier opened by double clicking it)...
0
1
219
0
53,577,656
0
0
0
0
1
false
0
2018-12-02T04:56:00.000
0
3
0
How do I alternate the color of a scatter plot in matplotlib?
53,577,532
0
python,matplotlib
The simplest way to do this is probably to implement the logic outside the plot, by assigning a different group to each point defined by your circle-crossing concept. Once you have these group-indexes it's a simple plt.scatter, using the c (stands for "color") input. Good luck!
I have an array of data points of dimension n by 2, where the second dimension $2$ corresponds to the real part and imaginary part of a complex number. Now I know the data points will intersect the unit circle on the plane for a couple of times. What I want to implement is: suppose the path starts will some color, it ...
0
1
1,989
0
53,586,551
0
0
0
0
1
true
0
2018-12-03T01:15:00.000
1
1
0
Different sized vectors in word2vec
53,586,254
1.2
python,vector,word2vec
You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization.
I am trying to generate three different sized output vectors namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to...
0
1
25
0
53,609,235
0
1
0
0
1
false
1
2018-12-03T03:16:00.000
1
2
0
Missing required dependencies ['numpy'] for anaconda ver5.3.1 on pycharm
53,587,001
0.099668
python,python-3.x,pycharm,anaconda,conda
Your PyCharm created a new environment for your project, I suspect. Maybe it copied across the Anaconda python.exe but not all the global packages. In PyCharm you can go to the project properties, where you can see a list of all the packages available, and add additional packages. There you can install NumPy. File --> Set...
I just installed anaconda ver5.3.1 which uses python 3.7. I encountered the following error; "Missing required dependencies {0}".format(missing_dependencies)) ImportError: Missing required dependencies ['numpy'] I have upgraded numpy, pandas to the latest version using conda but the same error appears. To fix this...
0
1
5,309
0
53,601,635
0
0
0
0
1
true
2
2018-12-03T05:29:00.000
4
1
0
What is the meaning of "size" of word2vec vectors [gensim library]?
53,587,960
1.2
python,gensim,word2vec,word-embedding
It is not the case that "[word2vec] aims to represent each word in the dictionary by a vector where each element represents the similarity of that word with the remaining words in the dictionary". Rather, given a certain target dimensionality, like say 100, the Word2Vec algorithm gradually trains word-vectors of 100-d...
Assume that we have 1000 words (A1, A2,..., A1000) in a dictionary. As fa as I understand, in words embedding or word2vec method, it aims to represent each word in the dictionary by a vector where each element represents the similarity of that word with the remaining words in the dictionary. Is it correct to say there ...
0
1
2,261
0
53,592,746
0
0
1
0
2
true
0
2018-12-03T11:15:00.000
0
2
0
How to embed machine learning python codes in hardware platform like raspberry pi?
53,592,665
1.2
python,machine-learning,raspberry-pi,artificial-intelligence,robotics
Python supports a wide range of platforms, including ARM-based ones. Your Raspberry Pi supports Linux distros; just install Python and go on.
I want to implement machine learning on hardware platform s which can learning by itself Is there any way to by which machine learning on hardware works seamlessly?
0
1
68
0
67,797,110
0
0
1
0
2
false
0
2018-12-03T11:15:00.000
0
2
0
How to embed machine learning python codes in hardware platform like raspberry pi?
53,592,665
0
python,machine-learning,raspberry-pi,artificial-intelligence,robotics
First, you may want to be clear on hardware - there is a wide range of hardware with various capabilities. For example, a Raspberry Pi is considered powerful hardware, while the ESP-EYE and Arduino Nano 33 BLE are considered low-end platforms. It also depends which ML solution you are deploying. I think the most widely deployed method is...
I want to implement machine learning on hardware platform s which can learning by itself Is there any way to by which machine learning on hardware works seamlessly?
0
1
68
0
53,610,813
0
0
0
0
1
false
1
2018-12-03T11:56:00.000
0
2
0
Train SqueezeNet model using MNIST dataset Pytorch
53,593,363
0
python,neural-network,pytorch,mnist,torchvision
Initializing from the pretrained weights is possible, but you'll run into trouble with the strides and kernel sizes, since MNIST images are 28x28 pixels. Most probably the reduction will lead to (batch_size x 1 x 1 x channels) feature maps before the net reaches its inference layer, which will then cause an error.
I want to train SqueezeNet 1.1 model using MNIST dataset instead of ImageNet dataset. Can i have the same model as torchvision.models.squeezenet? Thanks!
0
1
1,212
0
53,806,861
0
0
0
0
1
false
0
2018-12-03T21:09:00.000
0
3
0
Using MFCC's for voice recognition
53,601,892
0
python,keras,neural-network,voice-recognition,mfcc
You can use MFCCs with dense layers / multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data.
I'm currently using the Fourier transformation in conjunction with Keras for voice recogition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it. I am using librosa in python (3) to extract 20 MFCC features. My question is: which MFCC features should I ...
0
1
1,526
0
53,613,988
0
0
0
0
1
true
1
2018-12-04T13:06:00.000
2
1
0
Increasing accuracy by changing batch-size and input image size
53,613,681
1.2
python,keras,image-segmentation
If your model is a regular convolutional network (without any weird hacks), the samples in a batch will not be connected to each other. Depending on which loss function you use, the batch size might be important too. For regular functions (available 'mse', 'binary_crossentropy', 'categorical_crossentropy', etc.), they...
I am extracting a road network from satellite imagery. Herein the pixel classification is binary ( 0 = non-road, 1 = road). Hence, the mask of the complete satellite image which is 6400 x 6400 pixels shows one large road network where each road is connected to another road. For the implementation of the U-net I divided...
0
1
955
0
53,616,756
0
0
0
0
1
false
3
2018-12-04T14:50:00.000
0
2
0
Pandas read_excel removes columns under empty header
53,615,545
0
python-3.x,pandas
A quick fix would be to pass header=None to pandas' read_excel() function, manually insert the missing values into the first row (it now will contain the column names), then assign that row to df.columns and drop it after. Not the most elegant way, but I don't know of a builtin solution to your problem EDIT: by "manual...
I have an Excel file where A1,A2,A3 are empty but A4:A53 contains column names. In "R" when you were to read that data, the columns names for A1,A2,A3 would be "X_1,X_2,X_3" but when using pandas.read_excel it simply skips the first three columns, thus ignoring them. The problem is that the number of columns in each fi...
0
1
6,260
0
53,633,844
0
0
0
0
1
true
5
2018-12-05T08:13:00.000
5
1
0
DataFrame view in PyCharm when using pyspark
53,627,818
1.2
python,pyspark,pycharm
Pycharm does not support spark dataframes, you should call the toPandas() method on the dataframe. As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data, you should first limit() the number of rows returned.
I create a pyspark dataframe and i want to see it in the SciView tab in PyCharm when i debug my code (like I used to do when i have worked with pandas). It says "Nothing to show" (the dataframe exists, I can see it when I use the show() command). someone knows how to do it or maybe there is no integration between pycha...
0
1
1,310
0
53,630,217
0
0
0
0
1
true
0
2018-12-05T10:22:00.000
2
1
0
batch size for LSTM
53,630,041
1.2
python,tensorflow,keras,lstm
If you provide your data as numpy arrays to model.fit() then yes, Keras will take care of feeding the model with the batch size you specified. If your dataset size is not divisible by the batch size, Keras will have the final batch be smaller and equal to dataset_size mod batch_size.
I've been trying to set up an LSTM model but I'm a bit confused about batch_size. I'm using the Keras module in Tensorflow. I have 50,000 samples, each has 200 time steps and each time step has three features. So I've shaped my training data as (50000, 200, 3). I set up my model with four LSTM layers, each having 100 ...
0
1
1,170
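The batching arithmetic from the accepted answer can be checked directly; a small sketch using the question's numbers (50,000 samples, batch size 64 as an assumed example):

```python
import math

n_samples, batch_size = 50_000, 64

# Keras-style batching: full batches, plus one smaller final batch
# when the dataset size is not divisible by the batch size.
n_batches = math.ceil(n_samples / batch_size)
last_batch = n_samples % batch_size or batch_size   # dataset_size mod batch_size
```

Here 50,000 / 64 gives 781 full batches of 64 plus a final batch of 16, for 782 batches per epoch.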
0
53,749,834
0
0
0
0
3
false
2
2018-12-05T14:43:00.000
1
3
0
How to improve accuracy of random forest multiclass classification model?
53,634,808
0.066568
python,machine-learning,random-forest
Try doing feature selection first using PCA or random forest, and then fit a chained classifier: first a one-vs-all classifier, then a random forest or a decision tree. You should get slightly better accuracy.
I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accu...
0
1
4,038
0
53,755,232
0
0
0
0
3
true
2
2018-12-05T14:43:00.000
2
3
0
How to improve accuracy of random forest multiclass classification model?
53,634,808
1.2
python,machine-learning,random-forest
Try to tune the parameters below. n_estimators: this is the number of trees you want to build before taking the maximum vote or the average of predictions. A higher number of trees gives you better performance but makes your code slower. You should choose as high a value as your processor can handle, because this makes your pred...
I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accu...
0
1
4,038
0
53,635,051
0
0
0
0
3
false
2
2018-12-05T14:43:00.000
1
3
0
How to improve accuracy of random forest multiclass classification model?
53,634,808
0.066568
python,machine-learning,random-forest
How is your training accuracy? I assume the accuracy you quote is your validation accuracy. If your training accuracy is way too high, some ordinary overfitting might be the case. Random forest normally handles overfitting very well. What you could try is PCA of your data, and then classify on that. This gives you the features which account for ...
I am working on a multi class classification for segmenting customers into 3 different classes based on their purchasing behavior and demographics. I cannot disclose the data set completely but in general it contains around 300 features and 50000 rows. I have tried the following methods but I am unable to achieve accu...
0
1
4,038
0
53,637,348
0
0
0
0
1
true
0
2018-12-05T17:01:00.000
1
1
0
How to use a good use of nb_train_samples in keras?
53,637,278
1.2
python,tensorflow,keras,deep-learning
Yes, they have to be the same. These are the parameters you use to tell the process how many you have of each type of image. For instance, if you tell it that you have 5_000 validation samples, but there are only 3_000 in the data set, you will crash the run.
I'm using Keras with the Tensorflow-gpu backend in Python. I'm trying to put the correct numbers for nb_train_samples, nb_validation_samples and epochs. I am using the fit_generator method. Does nb_train_samples have to be the same as the number of images I have for training? Can it be higher? Does nb_validation_samples have to be the sa...
0
1
454
0
53,842,312
0
1
0
0
1
true
0
2018-12-05T18:46:00.000
0
2
0
Datetime in the format "M1,D1,H1" (January 1st, 1.00 am)
53,638,858
1.2
python-3.x,pandas
df.insert(0, 'time', [dt.datetime(self.MODEL_YEAR, 1, 1, 0) + dt.timedelta(hours=int(x)) for x in df['hour'].values])
I want a column in a dataframe including datetime in the format "M1,D1,H1" (January 1st, 1.00 am). I have a dataframe of the size with 8760 elements. How do I populate it?
0
1
73
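A self-contained version of the accepted answer's one-liner, with MODEL_YEAR as an assumed stand-in for the original self.MODEL_YEAR: build 8760 hourly timestamps (one non-leap model year) and insert them as the first column.

```python
import datetime as dt

import pandas as pd

# One timestamp per hour of a (non-leap) model year: 365 * 24 = 8760 rows.
MODEL_YEAR = 2018                      # assumed value for illustration
start = dt.datetime(MODEL_YEAR, 1, 1, 0)

df = pd.DataFrame({"hour": range(8760)})
df.insert(0, "time", [start + dt.timedelta(hours=h) for h in df["hour"]])
```

The first row is January 1st at 00:00 and the last is December 31st at 23:00; shift `start` by one hour if the year should begin at 1:00 am instead.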
0
53,644,466
0
1
0
0
1
false
0
2018-12-06T03:24:00.000
1
2
0
Csv Python reader
53,644,166
0.099668
python,csv
What I suggest is: read the CSV file into a two-dimensional array, where the columns are touchdowns, sacks, passing yards, etc. and the rows are the values for each player. To determine, for example, which player got the most touchdowns, go through the touchdown column and compare the maximum v...
Hi I am trying to write a python code for reading NFL Stats I converted stats into a excel Csv file, I am wondering if anyone could help me plan out my code Like how would I go about getting whos got the most touchdowns, sacks, passing yards, and etc. I know this is kinda of beginner stuff but much help would be appre...
0
1
92
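A minimal sketch of the approach in the answer above, with an assumed inline CSV (the column names and players are invented): read the file with the csv module and take the row with the maximum in the stat column.

```python
import csv
import io

# Stand-in for the exported stats file; pass open("stats.csv") for a real file.
data = """player,touchdowns,sacks
Smith,12,0
Jones,9,3
Brown,15,1
"""

rows = list(csv.DictReader(io.StringIO(data)))
# Compare as integers: CSV fields come in as strings.
most_tds = max(rows, key=lambda r: int(r["touchdowns"]))
```

The same max-with-key pattern answers "who leads in sacks" or "passing yards" by swapping the column name.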
0
58,120,534
0
0
0
0
1
false
5
2018-12-06T16:55:00.000
3
2
0
How to add recurrent dropout to CuDNNGRU or CuDNNLSTM in Keras
53,656,220
0.291313
python,tensorflow,keras,lstm
You can use kernel_regularizer and recurrent_regularizer to prevent overfitting; I am using L2 regularizers and getting good results.
One can apply recurrent dropout onto basic LSTM or GRU layers in Keras by passing its value as a parameter of the layer. CuDNNLSTM and CuDNNGRU are LSTM and GRU layers that are compatible with CUDA. The main advantage is that they are 10 times faster during training. However they lack some of the beauty of the LSTM or ...
0
1
4,528
0
54,430,444
0
0
0
0
1
true
1
2018-12-07T03:51:00.000
1
1
0
TensorFlow tf.data.Dataset API for medical imaging
53,662,978
1.2
python,tensorflow
Finally, I found a method to solve my problem. I first crop a subject's image without applying the actual crop. I only measure the slices I need to crop the volume to only the brain. I then serialize all the data set images into one TFRecord file, each training example being an image modality, original image's shape an...
I'm a student in medical imaging. I have to construct a neural network for image segmentation. I have a data set of 285 subjects, each with 4 modalities (T1, T2, T1ce, FLAIR) + their respective segmentation ground truth. Everything is in 3D with resolution of 240x240x155 voxels (this is BraTS data set). As we know, I c...
0
1
680
0
53,672,476
0
0
0
0
1
false
2
2018-12-07T10:55:00.000
1
2
0
MutiLabel classification
53,668,101
0.099668
python,machine-learning,deep-learning,text-classification
It is not entirely clear what your main idea is, but articles typically do have tags or categories, and you can use those as classification labels. Humans are pretty good at tagging articles.
I have some 1000 news articles related to science and technology. I need to train a classifier which will predict, say, 3 (computer science, electronics, electrical) confidence scores for each article. Each score represents how much the article belongs to each field. The confidence score will be a value between zero and ...
0
1
68
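One way to get per-field confidence scores, sketched with scikit-learn's one-vs-rest strategy over TF-IDF features. The toy articles and labels below are made up; per the answer, in practice the labels would come from the articles' existing tags or categories.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy corpus; real articles could of course carry several tags each.
texts = [
    "new cpu architecture speeds up compilers",
    "transistor design improves amplifier circuits",
    "power grid transformer efficiency study",
    "machine learning algorithm for sorting",
    "semiconductor diode breakthrough",
    "high voltage transmission line upgrade",
]
labels = ["cs", "electronics", "electrical", "cs", "electronics", "electrical"]

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(texts, labels)

# predict_proba gives one confidence score per field for each article.
scores = clf.predict_proba(["faster neural network training"])
print(dict(zip(clf.classes_, scores[0])))
```

Each article thus gets one score per field in [0, 1], which matches the "confidence per field" requirement better than a hard single-label prediction.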
0
53,673,661
0
0
0
0
1
false
0
2018-12-07T12:12:00.000
0
1
0
How to convert a dbf file to a dask dataframe?
53,669,318
0
python,dataframe,dask,dbf
Dask does not have a dbf loading method. As far as I can tell, dbf files do not support random access to the data, so it is not possible to read sections of the file in separate workers in parallel. I may be wrong about this, but dbfreader certainly makes no mention of jumping to an arbitrary record. Ther...
I have a big dbf file, converting it to a pandas dataframe is taking a lot of time. Is there a way to convert the file into a dask dataframe?
0
1
235
0
53,670,039
0
0
0
0
1
false
1
2018-12-07T12:49:00.000
0
1
1
Jupyter notebook : kernel died msg when loading big CSV file
53,669,913
0
python,python-3.x,machine-learning,jupyter-notebook,turi-create
53 MB is not a big file! You should try loading it in an IPython terminal, just as you would in Jupyter, to see whether you get the same issue. If there is no issue, this could be a bad Jupyter installation. Note: the kernel dies mostly when you're out of RAM, but 53 MB is not that big, assuming you have at least 2...
I am using a Jupyter Notebook to build a machine learning model using turicreate. Whenever I load a big .csv file, I get the message: kernel died. Since I am new to Python, is there another way to batch-load the file, or does anyone know how to fix this issue? The CSV file is 52.9 MB. Thanks
0
1
3,050
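If memory really is the culprit, pandas can stream a CSV in chunks instead of loading it whole. The column names and chunk size below are arbitrary; with a real file on disk, pass the path instead of the StringIO stand-in.

```python
import io
import pandas as pd

# Stand-in for a large file on disk.
csv_text = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(1000))

total = 0
# chunksize makes read_csv return an iterator of small DataFrames
# instead of materializing the whole file at once.
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=250):
    total += chunk["b"].sum()

print(total)  # sum of 2*i for i in range(1000) == 999000
```

Aggregating per chunk like this keeps peak memory proportional to the chunk size rather than the file size.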
0
53,677,838
1
0
0
0
1
false
0
2018-12-07T17:35:00.000
1
1
0
Descending order of shortest paths in networkx
53,674,393
0.197375
python-3.x,networkx,dijkstra
Try networkx's shortest_simple_paths.
I have a weighted Graph using networkx and the topology is highly meshed. I would like to extract a number of paths between two nodes with distance minimization. To clarify, the dijkstra_path function finds the weighted shortest path between two nodes, I would like to get that as well as the second and third best optio...
0
1
270
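The suggested shortest_simple_paths yields simple paths in ascending order of total weight, so the first few items are exactly the best, second-best, third-best routes. The small graph below is made up for illustration.

```python
from itertools import islice

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "D", 1),   # A-B-D: total weight 2
    ("A", "C", 1), ("C", "D", 2),   # A-C-D: total weight 3
    ("A", "D", 4),                  # direct edge: weight 4
])

# Paths come out in ascending order of total weight; take the best three.
best_three = list(islice(nx.shortest_simple_paths(G, "A", "D", weight="weight"), 3))
print(best_three)  # [['A', 'B', 'D'], ['A', 'C', 'D'], ['A', 'D']]
```

Using islice matters on a highly meshed graph: the generator is lazy, so you never pay for enumerating every simple path.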
0
53,688,653
0
1
0
0
1
false
1
2018-12-09T01:25:00.000
0
1
0
processing data in parallel python
53,688,617
0
python-3.x,python-multiprocessing,python-multithreading
cPickle.load() will release the GIL, so you can easily use it in multiple threads. But cPickle.loads() will not, so don't use that. Basically, put your data from Redis into a StringIO, then cPickle.load() from there. Do this in multiple threads using concurrent.futures.ThreadPoolExecutor.
I have a script, parts of which are at some point able to run in parallel (Python 3.6.6). The goal is to decrease execution time as much as possible. One of the parts connects to Redis, gets the data for two keys, calls pickle.loads for each, and returns the processed objects. What’s the best solution for such a task? I’ve tried Que...
0
1
95
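A Python 3 sketch of the pattern the answer describes (pickle is the Python 3 name for cPickle, BytesIO the binary counterpart of StringIO). The GIL-release claim is the answer's, and the payloads below are synthetic stand-ins for the bytes fetched from Redis.

```python
import io
import pickle
from concurrent.futures import ThreadPoolExecutor

# Stand-in for bytes fetched from Redis under two keys.
payloads = [pickle.dumps(list(range(5))), pickle.dumps({"x": 1})]

def load(raw: bytes):
    # Per the answer: load() from a file object releases the GIL,
    # loads() from bytes does not -- so wrap the bytes in BytesIO.
    return pickle.load(io.BytesIO(raw))

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(load, payloads))

print(results)  # [[0, 1, 2, 3, 4], {'x': 1}]
```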
0
54,559,052
0
1
0
0
1
false
1
2018-12-09T05:34:00.000
0
2
0
Issue with installing Keras library from Anaconda
53,689,684
0
python,tensorflow,keras,anaconda,theano
I guess the issue is that tensorflow is not yet released for Python 3.7 (you have mentioned the latest version of Anaconda). To overcome this, you may create a new environment with Python 3.6 and install Keras at the same time: conda create -n p360 python=3.6 anaconda tensorflow keras Here p360 is the name of the environment I...
I have been trying to install 'Keras' library from Anaconda on my laptop. I have the latest version of Anaconda. After that, I tried conda update conda conda update --all The above two succeeds. After that I tried conda install -c conda-forge keras conda install keras Both of the above fails with the below error....
0
1
1,214
0
61,193,906
0
0
0
0
2
false
1
2018-12-09T14:40:00.000
0
3
0
Pandas sort_index numerically, not lexicographically
53,693,429
0
python,pandas,dataframe
You might have the index as a string type. I was having this issue after using the groupby() function. I fixed the problem by changing the column that later became my index to an int with: df['col_name'] = df['col_name'].astype(int).
I'm having some issues with sorting a pandas dataframe. sort_index(axis=0) results in the dataframe sorting the index as 1 10 11 12 13... etc. While sort_index(axis=1) seems to work for the first couple of rows and then it gets completely disordered. I simply cannot wrap my head around what is going on. I want a simply...
0
1
719
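A small demonstration of the lexicographic-vs-numeric sorting the answer diagnoses, and the astype(int) fix. The column name and values are made up.

```python
import pandas as pd

df = pd.DataFrame({"val": [10, 20, 30, 40]}, index=["1", "10", "2", "11"])

# With a string index, sorting is lexicographic: "1" < "10" < "11" < "2".
lexicographic = df.sort_index().index.tolist()
print(lexicographic)  # ['1', '10', '11', '2']

# Cast the index to int and the sort becomes numeric.
df.index = df.index.astype(int)
numeric = df.sort_index().index.tolist()
print(numeric)  # [1, 2, 10, 11]
```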
0
53,693,470
0
0
0
0
2
false
1
2018-12-09T14:40:00.000
2
3
0
Pandas sort_index numerically, not lexicographically
53,693,429
0.132549
python,pandas,dataframe
You have two types of index: the row index (axis=0) and the column index (axis=1). When you use axis=1, you are just arranging columns by name; it does not reorder each row by its values. Check your column names after sort_index(axis=1) and you will understand.
I'm having some issues with sorting a pandas dataframe. sort_index(axis=0) results in the dataframe sorting the index as 1 10 11 12 13... etc. While sort_index(axis=1) seems to work for the first couple of rows and then it gets completely disordered. I simply cannot wrap my head around what is going on. I want a simply...
0
1
719
0
56,693,911
0
1
0
0
1
false
1
2018-12-09T16:39:00.000
0
3
0
Convert astropy table to list of dictionaries
53,694,408
0
python,list,dictionary,astropy
I ended up iterating, slicing, and copying as a list, which worked fine on the relatively small dataset.
I have an astropy.table.table object holding stars data. One row per star with columns holding data such as Star Name, Max Magnitude, etc. I understand an astropy table's internal representation is a dict for each column, with the rows being returned on the fly as slices across the dict objects. I need to convert the a...
0
1
885
0
53,696,362
0
0
0
0
1
false
0
2018-12-09T19:51:00.000
0
1
0
Solving TSP with GA: Should a distance matrix speed up run-time?
53,696,084
0
python,list,dictionary,time-complexity,sqrt
The short answer: yes, a dictionary would make it faster. The long answer: let's say you preprocess and calculate all distances once. Great! Now, say I want to find the distance between A and B. All I have to do now is find that distance where I put it: it is in the list! What is the time complexity to...
I am trying to write a GA in Python to solve TSP. I would like to speed it up. Because right now, it takes 24 seconds to run 200 generations with a population size of 200. I am using a map with 29 cities. Each city has an id and (x,y) coordinates. I tried implementing a distance matrix, which calculates all the distanc...
0
1
177
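The precomputation the answer recommends can be sketched as a dict keyed by city-id pairs, so each lookup inside the GA's fitness function is O(1) with no sqrt calls. The coordinates below are made up.

```python
from math import hypot

cities = {0: (0, 0), 1: (3, 4), 2: (6, 8)}  # id -> (x, y), made-up coordinates

# Precompute every pairwise distance once, keyed by an (id, id) tuple.
dist = {
    (a, b): hypot(xa - xb, ya - yb)
    for a, (xa, ya) in cities.items()
    for b, (xb, yb) in cities.items()
}

# Inside the GA's fitness function each lookup is now O(1).
print(dist[(0, 1)])  # 5.0
print(dist[(0, 2)])  # 10.0
```

For 29 cities that is only 841 entries, so the memory cost is negligible next to the saved per-generation recomputation.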
0
53,709,894
0
1
0
0
1
true
0
2018-12-09T20:32:00.000
1
1
0
How to display same detailed help in Pycharm as in Jupyter?
53,696,405
1.2
python,pycharm,jupyter-notebook
What you want here is help(pandas.DataFrame). Prints the same information as shift+TAB does in Jupyter.
When in Jupyter I Shift+TAB on pandas.DataFrame, it displays e.g. Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pan...
0
1
62
0
53,714,251
0
0
0
0
1
false
0
2018-12-09T22:47:00.000
2
1
0
Error for word2vec with GoogleNews-vectors-negative300.bin
53,697,450
0.379949
python,gensim,word2vec
This is just a warning, not a fatal error. Your code likely still works. "Deprecation" means a function's use has been marked by the authors as no longer encouraged. The function typically still works, but may not for much longer, becoming unreliable or unavailable in some future library release. Often, there's a new...
The Python version is 3.6. I tried to execute my code, but there are still some errors, as below: Traceback (most recent call last): File "C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py", line 55, in , binary=True) File "E:\Program Files\Python\Python35-32\lib\site-packages\gensim...
0
1
1,129
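A stdlib illustration of the answer's point that a DeprecationWarning does not stop execution. The old_api function is a hypothetical stand-in, not the actual gensim call from the traceback.

```python
import warnings

def old_api():
    warnings.warn("old_api() is deprecated; use new_api()", DeprecationWarning)
    return 42  # the function still works despite the warning

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # make sure the warning is not suppressed
    result = old_api()

print(result)                       # 42 -- the call succeeded anyway
print(caught[0].category.__name__)  # DeprecationWarning
```

The practical advice remains: keep the result, but switch to whatever replacement the library's deprecation message points at before the old path is removed.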
0
53,714,817
0
1
0
0
1
true
2
2018-12-10T22:20:00.000
0
3
0
physical dimensions and array dimensions
53,714,557
1.2
python,arrays
I believe that the rainfall value shouldn't be a dimension. Therefore, you could use 2D array[lat][lon] = rainfall_value or 3D array[time][lat][lon] = rainfall_value respectively. If you want to reduce number of dimensions further, you can combine latitude and longitude into one dimension as you suggested, which would ...
If I have a rainfall map which has three dimensions (latitude, longitude and rainfall value), and I put it in an array, do I need a 2D or 3D array? What would the array look like? If I have a series of daily rainfall maps with four dimensions (lat, long, rainfall value and time), and I put it in an array, do I need a ...
0
1
77
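The answer's point, sketched with numpy: the rainfall value is the array's *content*, not a dimension, so one map is 2-D and a daily series is 3-D. Grid and time sizes below are made up.

```python
import numpy as np

n_lat, n_lon, n_days = 4, 5, 3  # made-up grid and time sizes

# Single map: 2-D, indexed by [lat, lon]; rainfall is the stored value.
rain_map = np.zeros((n_lat, n_lon))
rain_map[2, 3] = 12.5  # 12.5 mm at grid cell (lat=2, lon=3)

# Daily series: 3-D, indexed by [time, lat, lon].
rain_series = np.zeros((n_days, n_lat, n_lon))
rain_series[1, 2, 3] = 7.0  # 7 mm at the same cell on day 1

print(rain_map.ndim, rain_series.ndim)  # 2 3
```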
0
53,730,003
0
0
0
0
1
false
0
2018-12-11T18:03:00.000
0
2
0
How to save image as binary compressed .tiff python?
53,729,889
0
python,tiff
You can try using libtiff. Install it with pip install libtiff.
Is there any library that can save images as binary (1 bit per pixel), compressed .tiff files? opencv and pillow cannot do that
0
1
1,104
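For what it's worth, newer Pillow versions can often do this too, contrary to the question's claim: a mode-"1" image saved as TIFF with CCITT Group 4 compression is a compressed 1-bit-per-pixel file. This is a hedged sketch; it depends on Pillow being built with libtiff support.

```python
import io

from PIL import Image

# A tiny 1-bit (mode "1") image -- one bit per pixel.
img = Image.new("1", (64, 64), 0)
img.paste(1, (16, 16, 48, 48))  # white square on black background

buf = io.BytesIO()
# group4 is a bilevel (1-bit) CCITT fax compression scheme;
# availability depends on Pillow's libtiff build.
img.save(buf, format="TIFF", compression="group4")

buf.seek(0)
reloaded = Image.open(buf)
print(reloaded.mode, reloaded.size)  # 1 (64, 64)
```

If this raises on a libtiff-less build, the standalone libtiff package from the answer remains the fallback.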
0
55,491,090
0
0
0
0
1
true
0
2018-12-11T21:55:00.000
1
1
0
How to perform cross validation on NMF Python
53,732,904
1.2
python,scikit-learn,nmf
A property of NMF is that it is an unsupervised (machine learning) method. This generally means there is no labeled data that can serve as a 'gold standard'. In the case of NMF you cannot define the 'desired' outcome beforehand. The cross-validation in sklearn is designed for supervised machine learning, i...
I am trying to perform cross-validation on NMF to find the best parameters to use. I tried using the sklearn cross-validation but get an error that states the NMF does not have a scoring method. Could anyone here help me with that? Thank you all
0
1
657
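Since sklearn's cross_val_score expects a scoring method NMF lacks, one hedged workaround is a manual K-fold loop that scores held-out reconstruction error for each candidate rank. The data below is synthetic, and reconstruction error is just one possible unsupervised criterion, not a canonical one.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((60, 12))  # synthetic non-negative data

def heldout_error(n_components):
    """Mean reconstruction error of held-out rows over 3 folds."""
    errors = []
    for train, test in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
        model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        model.fit(X[train])
        W = model.transform(X[test])            # encode held-out rows
        reconstruction = W @ model.components_  # decode them again
        errors.append(np.linalg.norm(X[test] - reconstruction))
    return float(np.mean(errors))

# Compare candidate ranks by their average held-out reconstruction error.
scores = {k: heldout_error(k) for k in (2, 4, 6)}
print(min(scores, key=scores.get))
```

Note the caveat from the answer still applies: without labels this only ranks models by how well they compress unseen rows, not by any ground-truth notion of correctness.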