Dataset columns (one row per answer):

Column                              Type      Values / lengths
GUI and Desktop Applications        int64     0 to 1
A_Id                                int64     5.3k to 72.5M
Networking and APIs                 int64     0 to 1
Python Basics and Environment       int64     0 to 1
Other                               int64     0 to 1
Database and SQL                    int64     0 to 1
Available Count                     int64     1 to 13
is_accepted                         bool      2 classes
Q_Score                             int64     0 to 1.72k
CreationDate                        string    lengths 23 to 23
Users Score                         int64     -11 to 327
AnswerCount                         int64     1 to 31
System Administration and DevOps    int64     0 to 1
Title                               string    lengths 15 to 149
Q_Id                                int64     5.14k to 60M
Score                               float64   -1 to 1.2
Tags                                string    lengths 6 to 90
Answer                              string    lengths 18 to 5.54k
Question                            string    lengths 49 to 9.42k
Web Development                     int64     0 to 1
Data Science and Machine Learning   int64     1 to 1
ViewCount                           int64     7 to 3.27M

Title: Stanford NLP Output Formatting
Q_Id: 44,487,269 | A_Id: 44,512,629 | Created: 2017-06-11T18:29:00.000 | Tags: java,python,stanford-nlp
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 784 | Topics: Data Science and Machine Learning
Question: Using the Stanford NLP, I want my text to go through lemmatization and coreference resolution. So for an input.txt: "Stanford is located in California. It is a great University, founded in 1891." I would want the output.txt: "Stanford be located in California. Stanford be a great University, found in 1891." I...
Answer: If you are using the command line you can use -outputFormat text to get a human readable version or -outputFormat json to get a json version. In Java code you can use edu.stanford.nlp.pipeline.StanfordCoreNLP.prettyPrint() or edu.stanford.nlp.pipeline.StanfordCoreNLP.jsonPrint() to print out an Annotation.

Title: How to loop over all but last column in pandas dataframe + indexing?
Q_Id: 44,491,067 | A_Id: 50,835,528 | Created: 2017-06-12T03:35:00.000 | Tags: python,pandas,dataframe,indexing
is_accepted: false | Score: 0 | Q_Score: 4 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 5,300 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: Let's day I have a pandas dataframe df where the column names are the corresponding indices, so 1, 2, 3,....len(df.columns). How do I loop through all but the last column, so one before len(df.columns). My goal is to ultimately compare the corresponding element in each row for each of the columns with that of the last ...
Answer: A simple way would be to use slicing with iloc all but last column would be: df.iloc[:,:-1] all but first column would be: df.iloc[:,1:]

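A runnable sketch of the slicing this answer describes; the small frame with integer column labels is a hypothetical stand-in for the question's data.

```python
import pandas as pd

# Hypothetical frame whose column names are positional indices, as in the question.
df = pd.DataFrame({1: [10, 20], 2: [30, 40], 3: [50, 60]})

all_but_last = df.iloc[:, :-1]    # every column except the last
all_but_first = df.iloc[:, 1:]    # every column except the first

print(list(all_but_last.columns))   # [1, 2]
print(list(all_but_first.columns))  # [2, 3]
```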
Title: Tensorflow major difference in loss between machines
Q_Id: 44,500,526 | A_Id: 44,530,760 | Created: 2017-06-12T13:20:00.000 | Tags: python,machine-learning,tensorflow,keras,autoencoder
is_accepted: false | Score: 0 | Q_Score: 2 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 203 | Topics: Data Science and Machine Learning
Question: I have written a Variational Auto-Encoder in Keras using Tensorflow as backend. As optimizer I use Adam, with a learning rate of 1e-4 and batch size 16. When I train the net on my Macbook's CPU (Intel Core i7), the loss value after one epoch (~5000 minibatches) is a factor 2 smaller than after the first epoch on a diff...
Answer: The dataset I used was a single .mat file, created by using scipy's savemat and loaded with loadmat. It was created on my Macbook and distributed via scp to the other machines. It turned out that the issue was with this .mat file (I do not know exactly what though). I have switched away from the .mat file and everythin...

Title: CNTK & Python: How to do reflect or symmetric padding instead of zero padding?
Q_Id: 44,504,140 | A_Id: 45,828,029 | Created: 2017-06-12T16:16:00.000 | Tags: python,padding,cntk
is_accepted: false | Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 177 | Topics: Data Science and Machine Learning
Question: In the cntk.layers package we have the option to do zero padding: pad (bool or tuple of bools, defaults to False) – if False, then the filter will be shifted over the “valid” area of input, that is, no value outside the area is used. If pad=True on the other hand, the filter will be applied to all input positions, and ...
Answer: There is a new pad operation (in master; will be released with CNTK 2.2) that supports reflect and symmetric padding.

Title: How to tokenize and tag those tokenized strings from my own custom dictionary using python nltk?
Q_Id: 44,514,898 | A_Id: 44,608,170 | Created: 2017-06-13T07:25:00.000 | Tags: python-3.x,dictionary,nltk
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 203 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I am new to python. I have to build a chatbot using python nltk -- my use case and expected output is this: I have a custom dictionary of some categories (shampoo,hair,lipstick,face wash), some brands (lakme,l'oreal,matrix), some entities ((hair concern: dandruff, hair falling out), (hair type: oily hair, dry hair), (s...
Answer: I hope this is what you are looking for https://github.com/sujitpal/nltk-examples/tree/master/src/cener

Title: Tensorflow resize_image_with_crop_or_pad
Q_Id: 44,515,532 | A_Id: 44,518,795 | Created: 2017-06-13T07:58:00.000 | Tags: python,tensorflow
is_accepted: true | Score: 1.2 | Q_Score: 1 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 2,981 | Topics: Data Science and Machine Learning
Question: I want to call tf.image.resize_image_with_crop_or_pad(images,height,width) to resize my input images. As my input images are all in form as 2-d numpy array of pixels, while the image input of resize_image_with_crop_or_pad must be 3-d or 4-d tensor, it will cause an error. What should I do?
Answer: Let's suppose that you got images that's a [n, W, H] numpy nd-array, in which n is the number of images and W and H are the width and the height of the images. Convert images to a tensor, in order to be able to use tensorflow functions: tf_images = tf.constant(images) Convert tf_images to the image data format used by...

Title: CNTK: The new clone do not match the cloned inputs of the clonee Block Function
Q_Id: 44,517,122 | A_Id: 44,535,536 | Created: 2017-06-13T09:13:00.000 | Tags: python,runtime-error,cntk
is_accepted: false | Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 115 | Topics: Data Science and Machine Learning
Question: I have trained a model in CNTK. Then I clone it and change some parameters; when I try to test the quantized model, I get RuntimeError: Block Function 'softplus: -> Unknown': Inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Parameter('alpha', [], []), Constant('Constant70740', [], [])' of...
Answer: This line cloneModel.parameters[0] = cloneModel.parameters[0]*4 tries to replace the first parameter with an expression (a CNTK graph) that multiplies the parameter by 4. I don't think that's the intent here. Rather, you want to do the above on the .value attribute of the parameter. Try this instead: cloneModel.para...

Title: Generate a random nonlinear function going through given points in python
Q_Id: 44,526,642 | A_Id: 44,526,860 | Created: 2017-06-13T16:10:00.000 | Tags: python-2.7,nonlinear-functions
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 591 | Topics: Data Science and Machine Learning
Question: I have two given points (3.0, 3.2) and (7.0, 4.59) . My job here is very simple but I don't even know how to start. I just need to plot 4 nonlinear functions that go through these two points. Did somebody have a similar problem before? How does one even start?
Answer: It looks more like a math problem to me here, since you ask "how to start". you know that a function's plot is just a lot of points (x, y) where y=f(x). And I know that for any two pairs of points (not vertically aligned), I have an infinity of second-degree functions (parabolas) going through these two points. they ar...

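One way to realize the answer's "infinity of parabolas" idea is to pick an arbitrary third point and exactly fit a degree-2 polynomial through all three; this is a sketch, not the answer's own code, and the choice of the middle x-coordinate is arbitrary.

```python
import numpy as np

# The two fixed points from the question.
x0, y0 = 3.0, 3.2
x1, y1 = 7.0, 4.59

rng = np.random.default_rng(0)
curves = []
for _ in range(4):
    # An arbitrary third point between the two; each random choice of ym
    # yields a different parabola through the two fixed points.
    xm, ym = 5.0, rng.uniform(0.0, 10.0)
    coeffs = np.polyfit([x0, xm, x1], [y0, ym, y1], deg=2)
    curves.append(np.poly1d(coeffs))

# Every generated parabola passes through both given points.
for p in curves:
    assert abs(p(x0) - y0) < 1e-6 and abs(p(x1) - y1) < 1e-6
```

Each `np.poly1d` object can then be evaluated on a dense grid and plotted.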
Title: Find 'modern' nltk words corpus
Q_Id: 44,550,004 | A_Id: 44,553,514 | Created: 2017-06-14T16:21:00.000 | Tags: python,nltk,corpus
is_accepted: false | Score: 0.099668 | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 507 | Topics: Data Science and Machine Learning
Question: I'm building a text classifier that will classify text into topics. In the first phase of my program as a part of cleaning the data, I remove all the non-English words. For this I'm using the nltk.corpus.words.words() corpus. The problem with this corpus is that it removes 'modern' English words such as Facebook, Insta...
Answer: Rethink your approach. Any collection of English texts will have a "long tail" of words that you have not seen before. No matter how large a dictionary you amass, you'll be removing words that are not "non-English". And to what purpose? Leave them in, they won't spoil your classification. If your goal is to remove non-...

Title: Import a column from excel into python and run autocorrelation on it
Q_Id: 44,554,135 | A_Id: 44,554,231 | Created: 2017-06-14T20:25:00.000 | Tags: python
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 57 | Topics: Data Science and Machine Learning
Question: I have a 1 column excel file. I want to import all the values it has in a variable x (something like x=[1,2,3,4.5,-6.....]), then use this variable to run numpy.correlate(x,x,mode='full') to get autocorrelation, after I import numpy. When I manually enter x=[1,2,3...], it does the job fine, but when I try to copy paste...
Answer: You can use Pandas to import a CSV file with the pd.read_csv function.

Title: Python multiprocessing tool vs Py(Spark)
Q_Id: 44,555,485 | A_Id: 55,951,963 | Created: 2017-06-14T22:06:00.000 | Tags: python,scikit-learn,multiprocessing,pyspark,cluster-computing
is_accepted: false | Score: 0.197375 | Q_Score: 6 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 2,667 | Topics: Data Science and Machine Learning
Question: A newbie question, as I get increasingly confused with pyspark. I want to scale an existing python data preprocessing and data analysis pipeline. I realize if I partition my data with pyspark, I can't treat each partition as a standalone pandas data frame anymore, and need to learn to manipulate with pyspark.sql row/co...
Answer: True, Spark does have the limitations you have mentioned, that is you are bounded in the functional spark world (spark mllib, dataframes etc). However, what it provides vs other multiprocessing tools/libraries is the automatic distribution, partition and rescaling of parallel tasks. Scaling and scheduling spark code be...

Title: Equivalent method in Java for np.random.uniform()
Q_Id: 44,559,717 | A_Id: 44,561,811 | Created: 2017-06-15T06:02:00.000 | Tags: java,python,random,distribution,uniform-distribution
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 674 | Topics: Web Development; Data Science and Machine Learning
Question: I'm trying port a python code into java and am stuck at one place. Is there any method in java that is equivalent to numpy.random.uniform() in python?
Answer: Just use Random rnd = new Random(); rnd.nextInt(int boundary); rnd.nextDouble(double boundary); rnd.next(); If you want a list of randoms the best way is probably ti write your own little method, just use an array and fill it with a for loop.

Title: Problems using Tensorflow in PyCharm-keep getting ImportError
Q_Id: 44,569,033 | A_Id: 45,875,329 | Created: 2017-06-15T13:37:00.000 | Tags: python-3.x,tensorflow,pycharm,importerror
is_accepted: false | Score: 0.197375 | Q_Score: 1 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 223 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. Process finished with exit code 1
Answer: Since you have in the log Library not loaded: @rpath/libcublas.8.0.dylib I would say you've installed TF with CUDA support but didn't install CUDA libraries properly. Try to install TF CPU only.

Title: Memory Issues Using Keras Convolutional Network
Q_Id: 44,569,938 | A_Id: 44,624,518 | Created: 2017-06-15T14:18:00.000 | Tags: python,memory,computer-vision,keras,convolution
is_accepted: false | Score: 0 | Q_Score: 2 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 841 | Topics: Data Science and Machine Learning
Question: I am very new to ML using Big Data and I have played with Keras generic convolutional examples for the dog/cat classification before, however when applying a similar approach to my set of images, I run into memory issues. My dataset consists of very long images that are 10048 x1687 pixels in size. To circumvent the mem...
Answer: While using train_generator(), you should also set the max_q_size parameter. It's set at 10 by default, which means you're loading in 10 batches while using only 1 (since train_generator() was designed to stream data from outside sources that can be delayed like network, not to save memory). I'd recommend setting max_q...

Title: How to Save Plotted Graph Data into Output Data File in Python
Q_Id: 44,582,210 | A_Id: 44,589,585 | Created: 2017-06-16T06:25:00.000 | Tags: python,csv,matplotlib,output
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,703 | Topics: Data Science and Machine Learning
Question: Using matplotlib.pyplot, I plotted multiple wave functions w.r.t time series, showing the waves in multiple vertical axes, and output the graph in jpg using savefig. I want to know the easiest way in which I can output all wave functions into a single output data file maybe in CSV or DAT in rows and columns.
Answer: If you plotted the data using numpy array, you can use numpy.savetxt.

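A minimal sketch of the `numpy.savetxt` approach the answer suggests; the time series and wave functions here are hypothetical stand-ins for the plotted data, and an in-memory buffer is used in place of a file on disk.

```python
import io
import numpy as np

# Hypothetical time axis and two wave functions.
t = np.linspace(0.0, 1.0, 5)
w1 = np.sin(2 * np.pi * t)
w2 = np.cos(2 * np.pi * t)

# Stack time and waves as columns and write CSV (a filename works equally well).
buf = io.StringIO()
np.savetxt(buf, np.column_stack([t, w1, w2]),
           delimiter=",", header="t,w1,w2", comments="")

# Round-trip check: read it back, skipping the header row.
buf.seek(0)
loaded = np.loadtxt(buf, delimiter=",", skiprows=1)
print(loaded.shape)  # (5, 3)
```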
Title: Accessing Input Layer data in Tensorflow/Keras
Q_Id: 44,587,813 | A_Id: 44,651,045 | Created: 2017-06-16T11:10:00.000 | Tags: python,tensorflow,deep-learning,keras,keras-layer
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 486 | Topics: Data Science and Machine Learning
Question: I am trying to replicate a neural network for depth estimation. The original authors have taken a pre-trained network and added between the fully connected layer and the convolutional layer a 'Superpixel Pooling Layer'. In this layer, the convolutional feature maps are upsampled and the features per superpixel are aver...
Answer: At the time it seems to be impossible to actually access the data within the symbolic tensor. It also seems unlikely that such functionality will be added in the future since in the Tensorflow page it says: A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of ...

Title: How to use model.fit_generator in keras
Q_Id: 44,597,555 | A_Id: 44,609,082 | Created: 2017-06-16T20:33:00.000 | Tags: python,deep-learning
is_accepted: false | Score: 0.197375 | Q_Score: 4 | Users Score: 2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 3,009 | Topics: Data Science and Machine Learning
Question: When and how should I use fit_generator? What is the difference between fit and fit_generator?
Answer: They are useful for on-the-fly augmentations, which the previous poster mentioned. This however is not neccessarily restricted to generators, because you can fit for one epoch and then augment your data and fit again. What does not work with fit is using too much data per epoch though. This means that if you have a dat...

Title: tf-idf : should I do normalization of documents length
Q_Id: 44,600,170 | A_Id: 44,600,391 | Created: 2017-06-17T02:15:00.000 | Tags: python,normalization,word,tf-idf
is_accepted: true | Score: 1.2 | Q_Score: 2 | Users Score: 4 | AnswerCount: 1 | Available Count: 1 | ViewCount: 2,290 | Topics: Data Science and Machine Learning
Question: When using TF-IDF to compare Document A, B I know that length of document is not important. But compared to A-B, A-C in this case, I think the length of document B, C should be the same length. for example Log : 100 words Document A : 20 words Document B : 30 words Log - A 's TF-IDF score : 0.xx Log - B 's TF-IDF scor...
Answer: Generally you want to do whatever gives you the best cross validated results on your data. If all you are doing to compare them is taking cosine similarity then you have to normalize the vectors as part of the calculation but it won't affect the score on account of varying document lengths. Many general document ret...

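The answer's point that cosine similarity already discounts document length can be demonstrated directly: scaling one vector (as a proportionally longer document would) leaves the score unchanged. The tf-idf vectors below are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: dot product of the length-normalized vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical tf-idf vectors for a log entry and a document.
log_vec = np.array([0.2, 0.0, 0.5, 0.1])
doc_vec = np.array([0.1, 0.3, 0.4, 0.0])

# A 3x "longer" document with proportionally scaled weights scores identically.
assert abs(cosine(log_vec, doc_vec) - cosine(log_vec, 3.0 * doc_vec)) < 1e-12
print(round(cosine(log_vec, doc_vec), 4))
```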
Title: Is Torch7 defined-by-run like Pytorch?
Q_Id: 44,614,977 | A_Id: 44,638,348 | Created: 2017-06-18T12:29:00.000 | Tags: python,lua,torch,pytorch
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 79 | Topics: Data Science and Machine Learning
Question: Pytorch have Dynamic Neural Networks (defined-by-run) as opposed to Tensorflow which have to compile the computation graph before run. I see that both Torch7 and PyTorch depend on TH, THC, THNN, THCUNN (C library). Does Torch7 have Dynamic Neural Networks (defined-by-run) feature ?
Answer: No, Torch7 use static computational graphs, as in Tensorflow. It is one of the major differences between PyTorch and Torch7.

Title: Python vertical stack not working
Q_Id: 44,617,331 | A_Id: 44,617,764 | Created: 2017-06-18T16:51:00.000 | Tags: python,numpy
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 203 | Topics: Data Science and Machine Learning
Question: I have a matrix X which has len(X) equal to 13934 and len(X[i]), for all i, equal to 74, and I have an array Y which has len(Y) equal to 13934 and len(Y[i]) equal to TypeError: object of type 'numpy.int64' has no len() for all i. When I try np.vstack((X,Y)) or result = np.concatenate((X, Y.T), axis=1) I get ValueError:...
Answer: Since X is a numpy array, you can do X.shape instead of the repeated len. I expect it to show (13934, 74). I expect Y.shape to be (13934,). It's a 1d array, which is why Y[0] is a number, numpy.int64. And since it is 1d, transpose (swapping axes) doesn't do anything. (this isn't MATLAB where everything has at least ...

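A small sketch of the fix the answer implies: since transposing a 1-D array is a no-op, reshape Y into a column before concatenating. The shapes here are small stand-ins for the question's (13934, 74) and (13934,).

```python
import numpy as np

# Stand-ins for the question's X (2-D) and Y (1-D) arrays.
X = np.arange(12).reshape(4, 3)   # shape (4, 3)
Y = np.array([0, 1, 0, 1])        # shape (4,); Y.T would change nothing

# Reshape Y to a (4, 1) column so axis-1 concatenation lines up.
result = np.concatenate((X, Y.reshape(-1, 1)), axis=1)
print(result.shape)  # (4, 4)
```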
Title: Keras: Is there any way to "pop()" the top layers?
Q_Id: 44,620,403 | A_Id: 44,623,114 | Created: 2017-06-18T23:14:00.000 | Tags: python,tensorflow,keras
is_accepted: false | Score: 0.099668 | Q_Score: 2 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,067 | Topics: Data Science and Machine Learning
Question: In Keras there is a feature called pop() that lets you remove the bottom layer of a model. Is there any way to remove the top layer of a model? I have a fully saved pre-trained Variational Autoencoder and am trying to only load the decoder (the bottom four layers). I am using Keras with a Tensorflow backend.
Answer: Keras pop() removes the last (aka top) layer, not the bottom one. I suggest you use model.summary() to print out the list of layers and than subsequently use pop() until only the necessary layers are left.

Title: Tensorflow - Euclidean Distance of Points in Matrix
Q_Id: 44,635,695 | A_Id: 70,358,084 | Created: 2017-06-19T16:34:00.000 | Tags: python,tensorflow
is_accepted: false | Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 2,501 | Topics: Data Science and Machine Learning
Question: I have a n*m tensor that basically represents m points in n dimensional euclidean space. I wanted calculate the pairwise euclidean distance between each consecutive point. That is, if my column vectors are the points a, b, c, etc., I want to calculate euc(a, b), euc(b, c), etc. The result would be an m-1 length 1D-tens...
Answer: Define a function to calculate distances calc_distance = lambda f, g: tf.norm(f-g, axis=1, ord='euclidean') Pass your n*m vector to the function, example: P = tf.constant([[1, 2], [3, 4], [2, 1], [0, 2], [2, 3]], dtype=tf.float32) distances = calc_distance(P[:-1:], P[1::]) print(distances) <tf.Tensor: shape=(4,), dtype...

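The same slice-and-norm idea as the TensorFlow answer, sketched in NumPy (an assumption on my part, to keep the example dependency-free): subtract each point from its successor and take row-wise norms.

```python
import numpy as np

# Points as rows (the answer's example points), m points in n-D space.
P = np.array([[1, 2], [3, 4], [2, 1], [0, 2], [2, 3]], dtype=float)

# Distance between each consecutive pair: an (m-1)-length vector,
# equivalent to tf.norm(P[:-1] - P[1:], axis=1, ord='euclidean').
distances = np.linalg.norm(P[:-1] - P[1:], axis=1)
print(distances)
```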
Title: Training and validating on images with different resolution in Keras
Q_Id: 44,636,877 | A_Id: 49,830,684 | Created: 2017-06-19T17:47:00.000 | Tags: python,validation,machine-learning,neural-network,keras
is_accepted: false | Score: 0 | Q_Score: 5 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 888 | Topics: Data Science and Machine Learning
Question: I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validatio...
Answer: You need to make sure that your network input is of shape (None,None,3), which means your network accepts an input color image of arbitrary size.

Title: Training and validating on images with different resolution in Keras
Q_Id: 44,636,877 | A_Id: 45,889,288 | Created: 2017-06-19T17:47:00.000 | Tags: python,validation,machine-learning,neural-network,keras
is_accepted: false | Score: 0 | Q_Score: 5 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 888 | Topics: Data Science and Machine Learning
Question: I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validatio...
Answer: You could use fit generator instead of fit and provide a different generator for validation set. As long as the rest of your network is agnostic to the image size, (e.g, fully convolutional layers), you should be fine.

Title: TensorFlow extracting columns
Q_Id: 44,639,106 | A_Id: 44,937,787 | Created: 2017-06-19T20:10:00.000 | Tags: python,tensorflow
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 78 | Topics: Data Science and Machine Learning
Question: I have a tensor of shape (10, 100, 20, 3). Basically, it can be thought of as a batch of images. So the image height is 100 and width is 20 and channel depth is 3. I have run some computations to generate a set of 10*50 indices corresponding to 50 columns I would like to keep per image in the batch. The indices are sto...
Answer: I can't comment on the question because of low rep, so using an answer instead. Can you clarify your question a bit, maybe with a small concrete example using very small tensors? What are the "columns" you are referring to? You say that you want to keep 50 columns (presumably 50 numbers) per image. If so, the (10, 50) ...

Title: Pip and/or installing the .pyd of library to site-packages leads "import" of library to DLL load faliure
Q_Id: 44,662,278 | A_Id: 44,662,310 | Created: 2017-06-20T20:18:00.000 | Tags: python,opencv,dll,pip
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 4,066 | Topics: Data Science and Machine Learning
Question: I attempted to install Opencv for python two ways, A) Downloading the opencv zip, then copying cv2.pyd to /Python36/lib/site-packages. B) undoing that, and using "pip install opencv-python" /lib/site-packages is definitly the place where python is loading my modules, as tensorflow and numpy are there, but any attempt ...
Answer: Use the zip, extract it, and run sudo python3 setup.py install if you are on Mac or Linux. If on Windows, open cmd or Powershell in Admin mode and then run py -3.6 setup.py install, after cding to the path of the zip. If on Linux, you also have to run sudo apt-get install python-opencv. Maybe on Mac you have to use Hom...

Title: What are the limits of vectorization?
Q_Id: 44,698,632 | A_Id: 44,698,955 | Created: 2017-06-22T11:51:00.000 | Tags: python,arrays,numpy,vectorization
is_accepted: true | Score: 1.2 | Q_Score: 1 | Users Score: 4 | AnswerCount: 1 | Available Count: 1 | ViewCount: 115 | Topics: Data Science and Machine Learning
Question: I'm working with multidimensional matrices (~100 dimensions or so, see below why). My matrix are NumPy arrays and I mainly multiply them with each other. Does NumPy care (with respect to speed or accuracy) in what form I ask it to multiply these matrices? I.e. would it make sense to reshape them into a linear array bef...
Answer: For elementwise multiplication it does not matter, and flattening the array does not change a thing. Remember: Arrays, no matter their dimension, are saved linearly in RAM. If you flatten the array before multiplication, you are only changing the way NumPy presents the data to you, the data in RAM is never touched. Mul...

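The answer's claim that flattening changes nothing for elementwise multiplication can be checked in a couple of lines; the shapes below are arbitrary stand-ins for the question's high-dimensional arrays.

```python
import numpy as np

# Two arbitrary same-shaped arrays; elementwise multiply touches each
# element exactly once regardless of how the shape is presented.
a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(24.0, 48.0).reshape(2, 3, 4)

# Multiplying flattened views and reshaping back gives the identical result.
flat = (a.ravel() * b.ravel()).reshape(a.shape)
assert np.array_equal(a * b, flat)
```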
Title: XGBoost - Feature selection using XGBRegressor
Q_Id: 44,699,889 | A_Id: 45,310,833 | Created: 2017-06-22T12:49:00.000 | Tags: python,xgboost
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,264 | Topics: Data Science and Machine Learning
Question: I am trying to perform features selection (for regression tasks) by XGBRegressor(). More precisely, I would like to know: If there is something like the method feature_importances_, utilized with XGBClassifier, which I could use for regression. If the XGBoost's method plot_importance() is reliable when it is used wit...
Answer: Finally I have solved this issue by: model.booster().get_score(importance_type='weight')

Title: Uninstall/upgrade tensorflow failed: __init__.cpython-35.pyc not found
Q_Id: 44,711,726 | A_Id: 61,737,353 | Created: 2017-06-23T01:31:00.000 | Tags: python,tensorflow,installation
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 508 | Topics: Data Science and Machine Learning
Question: I previously installed tensorflow-gpu 0.12.0rc0 with Winpython-3.5.2, and when I tried to upgrade or uninstall it to install the newer version using both the Winpython Control Panel and pip, I got the following error: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'c:\\users\\moliang\\downlo...
Answer: Posting my own answer (alternative) here in case someone overlooked comment: I forced to delete the package in the python/lib/site-packages/ and reinstalled the tensorflow-gpu, and it seems working well. Though I solve this problem via such alternate I would still like to know the root cause and long-term fix for this...

Title: KernelPCA produces NaNs
Q_Id: 44,716,368 | A_Id: 44,736,370 | Created: 2017-06-23T08:14:00.000 | Tags: python,machine-learning,scikit-learn
is_accepted: true | Score: 1.2 | Q_Score: 2 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 622 | Topics: Data Science and Machine Learning
Question: After applying KernelPCA to my data and passing it to a classifier (SVC) I'm getting the following error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). and this warning while performing KernelPCA: RuntimeWarning: invalid value encountered in sqrt X_transformed = self.alphas_...
Answer: The NaNs are produced because the eigenvalues (self.lambdas_) of the input matrix are negative which provoke the ValueError as the square root does not operate with negative values. The issue might be overcome by setting KernelPCA(remove_zero_eig=True, ...) but such action would not preserve the original dimensionality...

Title: VGG16 Training new dataset: Why VGG16 needs label to have shape (None,2,2,10) and how do I train mnist dataset with this network?
Q_Id: 44,723,464 | A_Id: 45,011,256 | Created: 2017-06-23T14:08:00.000 | Tags: python,machine-learning,deep-learning,keras
is_accepted: true | Score: 1.2 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 212 | Topics: Data Science and Machine Learning
Question: I was trying to train CIFAR10 and MNIST dataset on VGG16 network. In my first attempt, I got an error which says shape of input_2 (labels) must be (None,2,2,10). What information does this structure hold in 2x2x10 array because I expect input_2 to have shape (None, 10) (There are 10 classes in both my datasets). I trie...
Answer: Low accuracy is caused by the problem in layers. I just modified my network and obtained .7496 accuracy.

Title: how to preserve number of records in word2vec?
Q_Id: 44,740,161 | A_Id: 44,740,700 | Created: 2017-06-24T19:25:00.000 | Tags: python-3.x,nlp,word2vec
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 277 | Topics: Data Science and Machine Learning
Question: I have 45000 text records in my dataframe. I wanted to convert those 45000 records into word vectors so that I can train a classifier on the word vector. I am not tokenizing the sentences. I just split the each entry into list of words. After training word2vec model with 300 features, the shape of the model resulted in...
Answer: If you are splitting each entry into a list of words, that's essentially 'tokenization'. Word2Vec just learns vectors for each word, not for each text example ('record') – so there's nothing to 'preserve', no vectors for the 45,000 records are ever created. But if there are 26,000 unique words among the records (after...

Title: pandas timestamp series to string?
Q_Id: 44,741,587 | A_Id: 51,706,173 | Created: 2017-06-24T22:39:00.000 | Tags: python,arrays,pandas,vector
is_accepted: false | Score: 0.26052 | Q_Score: 32 | Users Score: 4 | AnswerCount: 3 | Available Count: 1 | ViewCount: 102,326 | Topics: Data Science and Machine Learning
Question: I am new to python (coming from R), and I am trying to understand how I can convert a timestamp series in a pandas dataframe (in my case this is called df['timestamp']) into what I would call a string vector in R. is this possible? How would this be done? I tried df['timestamp'].apply('str'), but this seems to simply...
Answer: Following on from VinceP's answer, to convert a datetime Series in-place do the following: df['Column_name']=df['Column_name'].astype(str)

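A runnable version of the in-place `astype(str)` conversion from the answer; the timestamps below are hypothetical, and the column is named `timestamp` as in the question.

```python
import pandas as pd

# Hypothetical datetime column, as in the question's df['timestamp'].
df = pd.DataFrame({"timestamp": pd.to_datetime(["2017-06-24 22:39:00",
                                                "2017-06-25 10:00:00"])})

# In-place conversion of the whole column from datetime64 to plain strings.
df["timestamp"] = df["timestamp"].astype(str)
print(df["timestamp"].tolist())
```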
Title: supervised tag suggestion for documents
Q_Id: 44,763,743 | A_Id: 44,765,840 | Created: 2017-06-26T15:55:00.000 | Tags: python,machine-learning,nlp,text-classification
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 461 | Topics: Data Science and Machine Learning
Question: I have thousands of documents with associated tag information. However i also have many documents without tags. I want to train a model on the documents WITH tags and then apply the trained classifier to the UNTAGGED documents; the classifier will then suggest the most appropriate tags for each UNTAGGED document. I hav...
Answer: I'm currently working on something similar, besides what @Joonatan Samuel suggested I would encourage you to do careful preprocessing and considerations. If you want two or more tags for documents you could train several model : one model per tag. You need to consider if there will be enough cases for each model (tag)...

Title: sklearn::TypeError: Wrong type for parameter `n_values`. Expected 'auto', int or array of ints, got
Q_Id: 44,776,786 | A_Id: 44,777,825 | Created: 2017-06-27T09:30:00.000 | Tags: python,arrays,numpy,scikit-learn
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,561 | Topics: Data Science and Machine Learning
Question: I am passing in a hardcoded list / tuple (tried both) when initialising the OneHotEncoder and I get this error during fit_transform , not using numpy types anywhere (well except for the data matrix itself). The only thing is that some of the values in that array are None because I am also using categorical_features to ...
Answer: n_values should only contain domain sizes for categorical values completely skipping out the non-categorical columns in the data matrix. Therefore if [True, False, True] format is used, the size should correspond to the number of True values in the array or if indices are used the two arrays should be of the same size....

Title: Generating np.einsum evaluation graph
Q_Id: 44,780,195 | A_Id: 45,102,844 | Created: 2017-06-27T12:21:00.000 | Tags: python,numpy,scipy,numpy-einsum
is_accepted: false | Score: 0.099668 | Q_Score: 1 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 193 | Topics: Data Science and Machine Learning
Question: I was planning to teach np.einsum to colleagues, by hoping to show how it would be reduced to multiplications and summations. So, instead of numerical data, I thought to use alphabet chars. in the arrays. Say, we have A (2X2) as [['a', 'b'], ['c', 'd']] and B (2X1) as [['e'], ['f']] We could use einsum to create a matr...
Answer: First, why do you need B to be 2-dim? Why not just np.einsum('ab , b -> a', A, B)? Now the actual question: It's not exactly what you want, but by using smart choices for A and B you can make this visible. e.g. A = [[1,10],[100,1000]] and B = [1,2], which gives np.einsum('ab , b -> a', A, B) = [21,2100] and it's quite ...

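The answer's trick can be run directly: powers of ten in A keep each product visible as a separate digit group, so you can read the multiply-and-sum structure off the result.

```python
import numpy as np

# The answer's concrete choice of A and B.
A = np.array([[1, 10], [100, 1000]])
B = np.array([1, 2])

# 'ab,b->a': multiply along b and sum it out, leaving one value per row a.
out = np.einsum('ab,b->a', A, B)
print(out)  # [  21 2100]  i.e. row 0: 1*1 + 10*2, row 1: 100*1 + 1000*2
```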
Title: sklearn warning message whenever I run tensorflow on terminal
Q_Id: 44,782,916 | A_Id: 44,783,384 | Created: 2017-06-27T14:29:00.000 | Tags: python,scikit-learn
is_accepted: false | Score: 0.197375 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 187 | Topics: Data Science and Machine Learning
Question: Every time I run a tensorflow file on terminal, this warning pops up before the file runs. I have checked my version of sklearn and it is 0.18.1. How do you make this message to not appear? Thank you. anaconda2/envs/tensorflow/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module w...
Answer: It is not an error message, it is simply a warning that a module cross_validation has been transmitted from sklearn.cross_validation to sklearn.model_selection.. It is not a problem at all. If you are still eager to fix it, then you should find out what snippet of code tries to import sklearn.cross_validation and alter...

Title: Installing seaborn on Pyspark
Q_Id: 44,794,347 | A_Id: 46,108,215 | Created: 2017-06-28T05:34:00.000 | Tags: python-2.7,pyspark,seaborn
is_accepted: false | Score: 0.53705 | Q_Score: 0 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 824 | Topics: Data Science and Machine Learning
Question: I am using Apache Pyspark with Jupyter notebook. In one of the machine learning tutorials, the instructors were using seaborn with pyspark. How can we install and use third party libraries like Seaborn on the Apache Spark (rather Pyspark)?
Answer: Generally, for plotting, you need to move all the data points to the master node (using functions like collect() ) before you can plot. PLotting is not possible while the data is still distributed in memory.

Title: When run a tensorflow session in iPython, GPU memory usage remain high when exiting iPython
Q_Id: 44,799,200 | A_Id: 44,799,700 | Created: 2017-06-28T09:53:00.000 | Tags: python,tensorflow,ipython
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 449 | Topics: Data Science and Machine Learning
Question: I think it's some sort of bug. The problem is quite simple: launch ipython import Tensorflow and run whatever session type nvidia-smi in bash (see really high gpu memory usage, related process name, etc) control+z quit ipython type nvidia-smi in bash (still! really high GPU memory usage, and the same process name, str...
Answer: Control+Z doesn't quit a process, it stops it (use fg to bring it back up). If some computation is running in a forked process, it may not stop with the main process (I'm no OS guy, this is just my intuition). In any case, properly quitting iPython (e.g. by Control+D or by running exit()) should solve the problem. If y...

Title: Python/Pandas/BigQuery: How to efficiently update existing tables with a lot of new time series data?
Q_Id: 44,804,051 | A_Id: 44,814,853 | Created: 2017-06-28T13:34:00.000 | Tags: python,pandas,google-bigquery,google-cloud-platform,gsutil
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 642 | Topics: Database and SQL; Data Science and Machine Learning
Question: I have one program that downloads time series (ts) data from a remote database and saves the data as csv files. New ts data is appended to old ts data. My local folder continues to grow and grow and grow as more data is downloaded. After downloading new ts data and saving it, I want to upload it to a Google BigQuery ta...
Answer: Consider breaking up your data into daily tables (or partitions). Then you only need to upload the CVS from the current day. The script you have currently defined otherwise seems reasonable. Extract your new day of CSVs from your source of timeline data. Gzip them for fast transfer. Copy them to GCS. Load the new CVSs...

Title: Pandas: Reading CSV files with different delimiters - merge error
Q_Id: 44,804,235 | A_Id: 44,804,374 | Created: 2017-06-28T13:42:00.000 | Tags: python,csv,pandas,merge,delimiter
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 891 | Topics: Data Science and Machine Learning
Question: I have 4 separate CSV files that I wish to read into Pandas. I want to merge these CSV files into one dataframe. The problem is that the columns within the CSV files contain the following: , ; | and spaces. Therefore I have to use different delimiters when reading the different CSV files and do some transformations to ...
Answer: In short : no, you do not need similar delimiters within your files to merge pandas Dataframes - in fact, once data has been imported (which requires setting the right delimiter for each of your files), the data is placed in memory and does not keep track of the initial delimiter (you can see this by writing down your ...

Title: numpy: "size" vs. "shape" in function arguments?
Q_Id: 44,804,965 | A_Id: 44,805,286 | Created: 2017-06-28T14:11:00.000 | Tags: python,numpy
is_accepted: false | Score: 0.099668 | Q_Score: 19 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 17,837 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I noticed that some numpy operations take an argument called shape, such as np.zeros, whereas some others take an argument called size, such as np.random.randint. To me, those arguments have the same function and the fact that they have different names is a bit confusing. Actually, size seems a bit off since it really ...
Answer: Because you are working with a numpy array, which was seen as a C array, size refers to how big your array will be. Moreover, if you can pass np.zeros(10) or np.zeros((10)). While the difference is subtle, size passed this way will create you a 1D array. You can give size=(n1, n2, ..., nn) which will create an nD array...

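A quick sketch of the naming difference the question asks about: `np.zeros` calls its first argument `shape`, `np.random.randint` calls the requested output shape `size`, but both accept the same tuple (or a bare int, which yields a 1-D array).

```python
import numpy as np

z = np.zeros((3, 4))                        # parameter is named `shape`
r = np.random.randint(0, 10, size=(3, 4))   # parameter is named `size`

# Both requests produce arrays of the same shape.
assert z.shape == r.shape == (3, 4)

# A bare int is accepted by both, and gives a 1-D array.
assert np.zeros(5).shape == (5,)
```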
0
44,830,094
0
0
0
0
1
false
2
2017-06-29T15:09:00.000
0
2
0
find non-monotonical rows in dataframe
44,828,905
0
python,pandas
"Quick" in terms of what resource? If you want programming ease, then simply make a new frame resulting from subtracting adjacent columns. Any entry of zero or negative value is your target. If you need execution speed, do note that adjacent differences are still necessary: all you can save is the overhead of finding...
I have a pandas dataframe with Datetime as index. The index is generally monotonically increasing; however, there seem to be a few rows that don't follow this trend. Any quick way to identify these unusual rows?
0
1
759
0
44,835,396
0
0
0
0
2
true
1
2017-06-29T21:16:00.000
2
3
0
Reading file with huge number of columns in python
44,835,126
1.2
python,file-handling
csv is very inefficient for storing large datasets. You should convert your csv file into a better suited format. Try hdf5 (h5py.org or pytables.org), it is very fast and allows you to read parts of the dataset without fully loading it into memory.
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large ...
0
1
1,206
0
44,835,474
0
0
0
0
2
false
1
2017-06-29T21:16:00.000
3
3
0
Reading file with huge number of columns in python
44,835,126
0.197375
python,file-handling
Pandas/numpy should be able to handle that volume of data no problem. I hope you have at least 8GB of RAM on that machine. To import a CSV file with Numpy, try something like data = np.loadtxt('test.csv', dtype=np.uint8, delimiter=',') If there is missing data, np.genfromtxt might work instead. If none of these meet y...
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large ...
0
1
1,206
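A self-contained version of the np.loadtxt suggestion above, with an in-memory buffer standing in for the real file (a real call would pass the filename instead):

```python
import io

import numpy as np

# Simulated CSV input; replace with 'test.csv' in practice
csv_data = io.StringIO("1,2,3\n4,5,6")

# dtype=np.uint8 keeps memory low for small integer-valued data
data = np.loadtxt(csv_data, dtype=np.uint8, delimiter=",")
print(data.shape, data.dtype)
```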
0
60,381,721
0
1
0
0
1
false
3
2017-06-29T21:35:00.000
1
2
0
Pandas - List of Dataframe Names?
44,835,358
0.099668
python,list,pandas
%who_ls DataFrame lists all dataframes loaded in memory. To capture the result as a list: all_df_in_mem = %who_ls DataFrame
I've done a lot of searching and can't find anything related. Is there a built-in function to automatically generate a list of Pandas dataframes that I've created? For example, I've created three dataframes: df1 df2 df3 Now I want a list like: df_list = [df1, df2, df3] so I can iterate through it.
0
1
5,562
0
44,935,654
0
0
0
0
2
true
2
2017-06-29T22:49:00.000
1
2
0
Installing rpy2 to work with R 3.4.0 on OSX
44,836,123
1.2
r,conda,python-3.6,rpy2,libiconv
I uninstalled rpy2 and reinstalled with --verbose. I then found ld: warning: ignoring file /opt/local/lib/libpcre.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libpcre.dylib ld: warning: ignoring file /opt/local/lib/liblzma.dylib, file was built for x86_64 ...
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Librar...
0
1
1,095
0
53,839,320
0
0
0
0
2
false
2
2017-06-29T22:49:00.000
0
2
0
Installing rpy2 to work with R 3.4.0 on OSX
44,836,123
0
r,conda,python-3.6,rpy2,libiconv
I had to uninstall the version pip installed and install from source with python setup.py install on the download from https://bitbucket.org/rpy2/rpy2/downloads/. FWIW, I am not using Anaconda at all either.
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Librar...
0
1
1,095
0
44,852,474
0
0
0
0
1
false
0
2017-06-30T09:30:00.000
0
1
0
How to get the random forest threshold from an h2o random forest object
44,843,175
0
python,random-forest,h2o
you could download and take a look at the POJO which lists all the thresholds used for the model h2o.download_pojo(model, path=u'', get_jar=True, jar_name=u'')
I have an h2o random forest in Python. How to extract for each tree the threshold of each features ? My aim is to implement this random forest in c++ Thanks !
0
1
235
0
44,855,284
0
0
0
0
1
false
1
2017-06-30T17:37:00.000
1
2
0
using tensorflow fill method to create a tensor of certain datatype
44,852,137
0.099668
python,numpy,tensorflow,tensor
You can either provide a fill value of the datatype you want your resulting tensor to be, or cast the tensor afterwards. tf.fill((3, 3), 0.0) # will be float32 tf.cast(tf.fill((3, 3), 0), tf.float32) # also float32 The first one is better because you use fewer operations in the graph
I am trying to use the tf.fill() method to create a tensor of different data types(float16,float32,float64) similar to what you can do with numpy.full(). would tf.constant() be a suitable substitution? or should I create my fill values to be of the data type I want them to be then plug it into the value holder inside t...
0
1
2,486
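Since the question itself points at numpy.full as the model, here is the same dtype point in numpy terms (np.full takes an explicit dtype argument, whereas tf.fill infers the dtype from the fill value, as the answer above shows):

```python
import numpy as np

# np.full accepts an explicit dtype, covering float16/float32/float64
a16 = np.full((3, 3), 0, dtype=np.float16)
a64 = np.full((3, 3), 0, dtype=np.float64)
print(a16.dtype, a64.dtype)
```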
0
44,857,779
0
0
0
0
1
true
0
2017-07-01T02:56:00.000
1
1
0
Using a custom threshold value with tf.contrib.learn.DNNClassifier?
44,856,964
1.2
python,machine-learning,tensorflow,neural-network
The tf.contrib.learn.DNNClassifier class has a method called predict_proba which returns the probabilities belonging to each class for the given inputs. Then you can use something like, tf.round(prob+thres) for binary thresholding with the custom parameter thres.
I'm working on a binary classification problem and I'm using the tf.contrib.learn.DNNClassifier class within TensorFlow. When invoking this estimator for only 2 classes, it uses a threshold value of 0.5 as the cutoff between the 2 classes. I'd like to know if there's a way to use a custom threshold value since this mig...
0
1
476
0
44,858,027
0
0
0
0
1
false
4
2017-07-01T06:19:00.000
3
4
0
how to get random pixel index from binary image with value 1 in python?
44,857,970
0.148885
python,random,pixel
I'd suggest making a list of coordinates of all non-zero pixels (by checking all pixels in the image), then using random.shuffle on the list and taking the first 100 elements.
I have a binary image of large size (2000x2000). In this image most of the pixel values are zero and some of them are 1. I need to get only 100 randomly chosen pixel coordinates with value 1 from image. I am beginner in python, so please answer.
0
1
3,545
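The answer above (collect non-zero coordinates, then sample) can be sketched as follows, using np.argwhere instead of a manual scan; the toy image stands in for the 2000x2000 one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary image: mostly zeros with some ones scattered in
img = np.zeros((50, 50), dtype=np.uint8)
img[rng.integers(0, 50, 300), rng.integers(0, 50, 300)] = 1

# Coordinates of all pixels with value 1, then 100 chosen at random
coords = np.argwhere(img == 1)
sample = coords[rng.choice(len(coords), size=100, replace=False)]
print(sample.shape)
```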
0
44,862,175
0
0
0
0
1
false
5
2017-07-01T14:24:00.000
1
2
0
Reading an excel with pandas basing on columns' colors
44,861,989
0.099668
python,excel,pandas
This cannot be done in pandas alone. You will need to use another library to read the xlsx file and determine which columns are white. I'd suggest using the openpyxl library. Then your script will follow these steps: Open the xlsx file Read and filter the data (you can access the cell color) and save the results Create pandas datafra...
I have an xlsx file, with columns with various coloring. I want to read only the white columns of this excel in python using pandas, but I have no clues on how to do this. I am able to read the full excel into a dataframe, but then I miss the information about the coloring of the columns and I don't know which columns...
0
1
7,328
0
44,864,201
0
0
0
0
1
true
0
2017-07-01T17:18:00.000
1
1
0
Matplotlib is incorrectly rendering axis labels in sans-serif when using LaTeX
44,863,610
1.2
python,matplotlib,latex
I was importing the seaborn package after setting the matplotlib rcParams, which overwrote values such as the font family. Calling rcParams.update(params) after importing seaborn fixes the problem.
I am using matplotlib.rc('text', usetex=True); matplotlib.rc('font', family='serif') to set my font to serif with LaTeX. This works for the tick labels, however the plot title and axis lables are typeset using the sans-serif CMS S 12 computer modern variant. From what I have found on the web, most people seem to have t...
0
1
184
0
44,871,723
0
1
0
0
1
false
0
2017-07-02T13:26:00.000
1
1
0
In spyder how to get back default view of running a code in Ipython console
44,871,312
0.197375
python-3.x,anaconda,spyder
(Spyder developer here) Please use the Variable Explorer to visualize Numpy arrays and Pandas DataFrames. That's its main purpose.
Hi on running a code in the console I am getting the display as: runfile('C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing/praCTICE.py', wdir='C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing') and on viewing a small matrix it is showing up as array([['France', 44.0, 72000.0], ['Spa...
0
1
272
0
44,875,134
0
0
0
0
1
true
0
2017-07-02T16:56:00.000
1
1
0
How can the perplexity of a language model be between 0 and 1?
44,873,156
1.2
python,tensorflow,language-model,sequence-to-sequence,perplexity
This does not make a lot of sense to me. Perplexity is calculated as 2^entropy, and cross-entropy is non-negative, so perplexity should never fall below 1. Your results, which are < 1, therefore do not make sense. I would suggest you take a look at how your model calculates the perplexity, because I suspect there might be an error.
In Tensorflow, I'm getting outputs like 0.602129 or 0.663941. It appears that values closer to 0 imply a better model, but it seems like perplexity is supposed to be calculated as 2^loss, which implies that loss is negative. This doesn't make any sense.
0
1
298
0
44,884,458
0
0
0
0
1
false
0
2017-07-03T10:19:00.000
0
1
0
python multiprocessing (using pytable) misses some results from the queue in the final output
44,883,116
0
python,queue,multiprocessing,pytables
It's really hard to help you without code. But I just think if you want to find "thin" places in your code you have to write log of it. As I understood one iteration of your worker has to create 268 Series that are made as columns in final dataframe. If these Series are the same shape, then it seems that the issue in q...
Before I state my question, let me put my constraint - I can't post the code as it is related to my job and they don't allow it. So this is just a survey query to see if somebody has seen similar issues. I have a python multiprocessing set up where the workers do the work and put the result in a queue. A special writer...
0
1
54
0
44,894,389
0
0
0
0
1
true
2
2017-07-03T21:47:00.000
2
1
0
Measure Volatility or Stability Of Lists of Floating Point Numbers
44,894,250
1.2
python,math,statistics,volatility
You could use the standard deviation of the list divided by the mean of the list. Those measures have the same units so their quotient will be a pure number, without a unit. This scales the variability (standard deviation) to the size of the numbers (mean). The main difficulty with this is for lists that have both posi...
Wonder if anyone can help. I have a set of lists of numbers, around 300 lists in the set, each list of around 200 numbers. What I wish to calculate is the "relative stability" of each list. For example: List A: 100,101,103,99,98 - the range is v small - so stable. List B: 0.3, 0.1, -0.2, 0.1 - again, v small range, so...
0
1
361
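The std-over-mean suggestion above is the coefficient of variation; a small helper makes the scale-free comparison concrete (the function name is mine, and it assumes the list mean is not near zero, per the caveat in the answer):

```python
from statistics import mean, stdev

def relative_stability(xs):
    """Coefficient of variation: stdev scaled by the magnitude of the mean.

    Assumes mean(xs) is not (near) zero; lists mixing positive and
    negative values need a different reference point.
    """
    return stdev(xs) / abs(mean(xs))

list_a = [100, 101, 103, 99, 98]   # small spread relative to its level
list_b = [55, 2, 99, 14, 70]       # wild swings relative to its level
print(relative_stability(list_a), relative_stability(list_b))
```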
0
44,899,478
0
0
0
0
1
false
1
2017-07-04T07:04:00.000
0
3
0
Merging 2 dataframes on Pandas
44,899,119
0
python,pandas
Instead of df1.merge(...) try: pd.merge(left=df1, right=df2, on ='e', how='inner')
Sorry I have a very simple question. So I have two dataframes that look like Dataframe 1: columns: a b c d e f g h Dataframe 2: columns: e ef I'm trying to join Dataframe 2 on Dataframe 1 at column e, which should yield columns: a b c d e ef g h or columns: a b c d e f g h ef However: df1.merge(df2, how = 'inner', on ...
0
1
87
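The module-level form suggested above can be checked on a tiny pair of frames; the column order of the result keeps the left frame's columns and appends the right frame's non-key columns:

```python
import pandas as pd

# Small stand-ins for the two dataframes in the question
df1 = pd.DataFrame({"a": [1, 2], "e": [10, 20], "h": [7, 8]})
df2 = pd.DataFrame({"e": [10, 20], "ef": [0.1, 0.2]})

# Inner join on column 'e' via the module-level pd.merge
out = pd.merge(left=df1, right=df2, on="e", how="inner")
print(list(out.columns))
```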
0
44,903,656
0
1
0
0
1
false
1
2017-07-04T10:02:00.000
0
3
1
how to install python package on azure hdinsight pyspark3 kernel?
44,902,885
0
python,azure,pyspark,jupyter-notebook,azure-hdinsight
Have you tried installing using pip? In some cases where you have both Python 2 and Python 3, you have to run pip3 instead of just pip to invoke pip for Python 3.
I would like to install python 3.5 packages so they would be available in Jupyter notebook with pyspark3 kernel. I've tried to run the following script action: #!/bin/bash source /usr/bin/anaconda/envs/py35/bin/activate py35 sudo /usr/bin/anaconda/envs/py35/bin/conda install -y keras tensorflow theano gensim but the p...
0
1
2,656
0
47,966,038
0
0
0
0
1
false
1
2017-07-05T00:27:00.000
1
1
0
Python - Fitting a polynomial (multi-dimension) through X points
44,915,500
0.197375
python,scikit-learn,regression,polynomials
Your question is ill defined. If you want, say, 14 features of 34 possible, which 14 should that be? In your place, I would generate a redundant number of features and then would use a feature selection algorithm. That could be a sparse model (like Lasso) or a feature elimination algorithm (like RFE).
I've been using scikit learn, very neat. Fitting a polynomial curve through points (X-Y) is pretty easy. If I have N points, I can chose a polynomial of order N-1 and the curve will fit the points perfectly. If my X vector has several dimension, in scikit-learn I can build a pipeline with a PolynomialFeatures and a Lin...
0
1
300
0
44,964,036
0
0
0
0
1
true
0
2017-07-05T10:42:00.000
0
1
0
Tensorflow GPU cuDNN: How do I load cuDNN libraries?
44,923,993
1.2
python,python-3.x,tensorflow,gpu
Fresh install is the key, but there are some important points: 1. Install the CUDA 8.0 toolkit 2. Install cuDNN version 5.1 (not 6.0) 3. Install from source (bazel) and configure TensorFlow to use CUDA Above steps worked for me! Hope it helps anyone.
I am trying to use tensorflow with gpu and installed CUDA 8.0 toolkit and cuDNN v5.1 libraries as described on the nvidia website. But when I try to import tensorflow as a module in python3.5, it does not load the cuDNN libraries (outputs nothing, just loads the tensorflow module). And I do not observe any speedup in processing (same spee...
0
1
784
0
44,937,815
0
0
0
0
1
true
1
2017-07-05T23:35:00.000
3
1
0
What are use cases for *not* resetting a groupby index in pandas
44,937,573
1.2
python,pandas
When you perform a groupby/agg operation, it is natural to think of the result as a mapping from the groupby keys to the aggregated scalar values. If we were using plain Python, a dict would be the natural data structure to hold such a mapping from keys to values. Since we are using Pandas, a Series is the natural data...
When working with groupby on a pandas DataFrame instance, I have never not used either as_index=False or reset_index(). I cannot actually think of any reason why I wouldn't do so. Because my behavior is not the pandas default (indeed, because the groupby index exists at all), I suspect that there is some functionality ...
0
1
40
0
46,657,731
0
0
0
0
1
false
0
2017-07-06T02:12:00.000
2
2
0
Is there a python implementation of a Face shape detector?
44,938,737
0.197375
python-3.x,computer-vision,opencv3.0,dlib
Dear past version of self. With Deep learning; Convolutional Neural Networks, This becomes a trivial problem. You can retrain Google's inception V3 Model to classify faces into the the 5 face shapes you mentioned in your question. With Just 500 training images you can attain an accuracy of 98%.
Is there an OpenCV-python, dlib or any python 3 implementation of a face shape detector (diamond, oblong, square)?
0
1
2,005
0
44,950,403
0
0
0
0
1
false
1
2017-07-06T03:16:00.000
0
2
0
TensorFlow RandomForest vs Deep learning
44,939,210
0
python,machine-learning,tensorflow,neural-network,random-forest
A useful rule when beginning to train models is not to start with the more complex methods. A linear model, for example, is something you will be able to understand and debug more easily. In case you continue with the current methods, some ideas: Check the initial weight values (init them with a normal distribution) As a pre...
I am using TensorFlow for training model which has 1 output for the 4 inputs. The problem is of regression. I found that when I use RandomForest to train the model, it quickly converges and also runs well on the test data. But when I use a simple Neural network for the same problem, the loss(Random square error) does n...
0
1
2,520
0
44,947,979
0
1
0
0
1
false
0
2017-07-06T10:05:00.000
0
2
0
Installing data science packages to vanilla python
44,945,850
0
python,machine-learning,data-science
as suggested by @DavidG, the following solution worked: Download the whl file use cmd window and go to the download folder and then install like below: C:\Users\XXXXXXXX>cd C:\Users\XXXXXXXX\Documents\Python Packages C:\Users\XXXXXXXX\Documents\Python Packages>pip install numpy-1.13.0+mkl-cp36-cp36m-win32.whl Process...
How to download necessary python packages for data analysis (e.g. pandas,scipy,numpy etc) and machine learning packages (sci-kit learn for starter, tensorflow for deeplearning if possible etc) without using github or anaconda? Our client has permitted us to install python 3.6 and above (32-bit) in our terminals for dat...
0
1
931
0
45,007,258
0
0
0
0
1
true
0
2017-07-06T10:44:00.000
1
1
0
Installing Tensorflow for Python 2.7 for Keras and CoreML conversion on Windows 10
44,946,737
1.2
python-2.7,tensorflow,windows-10,keras,coreml
A non-optimal solution (the only one I found), in my opinion, is to install a Linux virtual machine. I used VirtualBox for it. Then it is very easy to download Anaconda and Python 2, as well as the right versions of the packages. For example, you can download Tensorflow 1.1.0 using the following command $ pip install -I...
I am currently working on an artificial neural network model with Keras for image recognition and I want to convert it using CoreML. Unfortunately, I have been working with Python3 and CoreML only works with Python 2.7 at the moment. Moreover, Tensorflow for Python 2.7 does not seem to be supported by Windows... So my ...
0
1
1,130
0
44,966,279
0
0
0
0
1
false
0
2017-07-06T22:07:00.000
2
1
0
Many to one LSTM input shape
44,959,636
0.379949
python-3.x,keras,lstm
In the first layer of the model you should define input_shape=(n_timesteps,n_features). So in your case input_shape = (25,10). Your actual input to the model will have shape (1000,25,10). You should also use keras.utils.to_categorical to convert your labels to one-hot-encoded vectors, so that they will become vecto...
My input data has 10 features and it is taken at 25 different timestamps. My output data consists of class labels. So, basically, I am having a many to one classification problem. I want to implement an LSTM for this problem. Total training data consists of 10000 data points. How should the input and output format (sh...
0
1
248
0
44,965,968
0
0
0
0
2
false
0
2017-07-07T04:11:00.000
0
2
0
Image preprocessing of finetune in ResNet
44,962,433
0
python,deep-learning
In my honest opinion people overstate the impact of image preprocessing. The only truly important thing is that the test data is similar in value scale to the training data. There are some theoretical benefits of having a pre normalized dataset, with the usage of batch normalization, but in practice it never made much ...
I want to finetune ResNet50 ImageNet pretrained model, and I have a few question about image preprocessing of finetune. In ImageNet preprocessing, we need to subtract the mean of pixel ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract mean of ImageNet or subtract the mean of my data...
0
1
2,606
0
44,983,413
0
0
0
0
2
false
0
2017-07-07T04:11:00.000
0
2
0
Image preprocessing of finetune in ResNet
44,962,433
0
python,deep-learning
I would try both. Subtracting your mean makes sense because generally one tries to get mean 0. Subtracting image net mean makes sense because you want the network as a feature extractor. If you change something that early in the feature extractor it could be that it doesn't work at all. Just like the mean 0 thing, it ...
I want to finetune ResNet50 ImageNet pretrained model, and I have a few question about image preprocessing of finetune. In ImageNet preprocessing, we need to subtract the mean of pixel ([103.939, 116.779, 123.68]). When I use my dataset to finetune, should I subtract mean of ImageNet or subtract the mean of my data...
0
1
2,606
0
44,968,146
0
0
0
0
1
false
0
2017-07-07T09:49:00.000
0
1
0
Tensorflow:Using a trained model in C++
44,967,751
0
python,c++,tensorflow
Why would you want to train the model in C++? TensorFlow's core libraries are in C++. I think you mean use the trained model in C++? Once you've trained a model and exported it (assuming you have the .pb file), you use the model for predicting. There's no way to retrain an exported model.
I have a model build in Python using keras and tensorflow. I want to export the model and use it for training in C++. I am using TF1.2 and used the tf.train.export_metagraph to export my graph. I am not exactly sure on how to proceed in using the model in C++ for training. Thanks :)
0
1
192
0
44,982,417
0
0
0
0
1
false
0
2017-07-08T04:05:00.000
0
1
0
Numpy Float128 Polyfit
44,982,378
0
python,arrays,numpy
You may use logarithmic versions of your variables (np.log10), so when dealing with something like 1e-200 you will work with -200 instead, gaining numerical range and efficiency.
I'm using numpy's polyfit to find a best fit curve for a set of data. However, numpy's polyfit returns an array of float64 and because the calculated coefficients are so large/small (i.e. 1e-200), it's returning an overflow error that's encountered in multiply : RuntimeWarning: overflow encountered in multiply scale ...
0
1
281
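The log-transform idea above can be sketched on hypothetical data spanning extreme magnitudes: fitting log10(y) instead of y keeps the polyfit coefficients comfortably within float64 range.

```python
import numpy as np

# Hypothetical data reaching values as small as 1e-200
x = np.arange(1.0, 6.0)
y = 10.0 ** (-40 * x)

# Fit in log space: log10(y) is a well-scaled linear function of x
coeffs = np.polyfit(x, np.log10(y), 1)
print(coeffs)  # slope near -40, intercept near 0
```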
0
44,983,209
0
0
0
0
1
false
11
2017-07-08T06:16:00.000
22
1
0
importing numpy in hackerrank competitions
44,983,165
1
python,numpy
I have run into the same issue on HackerRank. A number of their challenges do support NumPy--indeed, a handful require it. Either import numpy or the idiomatic import numpy as np will work just fine on those. I believe you're simply trying to use numpy where they don't want you to. Because it's not part of the standard...
I want to use numpy module for solving problems on hackerrank. But, when I imported numpy, it gave me the following error. ImportError: No module named 'numpy'. I understand that this might be a very trivial question. But, I am a beginner in programming. Any help is highly appreciated.
0
1
19,903
0
44,986,475
0
0
0
0
1
false
0
2017-07-08T12:46:00.000
0
1
0
NLTK: No value returned when searching the CMU dictionary based on syllable value
44,986,375
0
python-3.x,nltk
My bad, I realized the mistake. I had swapped the position of "pron" with "word", thereby causing this problem. The corrected code is: p3 = [(pron[0] + '-' + pron[2], word) for word, pron in entries if pron[0] == 'P' and len(pron) == 3]
I am practicing the nltk examples from the "Natural language processing in Python" book. While trying to get the words that start with syllable "p" and of syllable length 3 from cmu dictionary (one of the examples provided in chapter 2), I am not getting any values returned. I am using Python 3. Below is the code: e...
0
1
414
0
45,001,380
0
0
0
0
1
false
2
2017-07-09T18:45:00.000
2
1
0
Best practice for groupby on Parquet file
44,999,814
0.379949
python,pyspark,parquet,dask
If you are doing a groupby-aggregation with a known aggregation like count or mean then your partitioning won't make that much of a difference. This should be relatively fast regardless. If you are doing a groupby-apply with a non-trivial apply function (like running an sklearn model on each group) then you will have ...
We have a 1.5BM records spread out in several csv files. We need to groupby on several columns in order to generate a count aggregate. Our current strategy is to: Load them into a dataframe (using Dask or pyspark) Aggregate columns in order to generate 2 columns as key:value (we are not sure if this is worthwhile...
0
1
1,447
0
66,233,233
0
0
0
1
1
false
15
2017-07-10T03:15:00.000
1
2
0
How to use matplotlib to plot pyspark sql results
45,003,301
0.099668
python,pandas,matplotlib,pyspark-sql
For small data, you can use .select() and .collect() on the pyspark DataFrame. collect will give a python list of pyspark.sql.types.Row, which can be indexed. From there you can plot using matplotlib without Pandas, however using Pandas dataframes with df.toPandas() is probably easier.
I am new to pyspark. I want to plot the result using matplotlib, but not sure which function to use. I searched for a way to convert sql result to pandas and then use plot.
0
1
30,940
0
45,005,490
0
0
0
0
1
false
0
2017-07-10T05:46:00.000
0
2
0
How to classify both sentiment and genres from movie reviews using CNN Tensorflow
45,004,514
0
python-3.x,tensorflow,neural-network,deep-learning,data-science
You can treat this as a multi-label problem, and append the sentiment and the tone labels together. Now since the network has to predict multiple outputs (2 in this case) you need to use an activation function like sigmoid and not softmax. And your prediction can be made using tf.round(tf.sigmoid(logits)).
I am trying to classify sentiment on movie review and predict the genres of that movie based on the review itself. Now Sentiment is a Binary Classification problem where as Genres can be Multi-Label Classification problem. Another example to clarify the problem is classifying Sentiment of a sentence and also predicting...
0
1
254
0
45,031,046
0
0
0
0
1
false
0
2017-07-11T09:39:00.000
0
2
0
Reverse a matrix with tensorflow
45,030,827
0
python,tensorflow,matrix-inverse,bigdata
You mean you need to swap rows and columns? If that's the case then you might use tf.transpose.
I'm a beginner in big data. I have learned Python. I want to reverse a matrix with tensorflow (an n*n matrix as input), but my boss wants it done with tensorflow, so I want to do it without the adjugate matrix. Help me, please. Thank you in advance. <3
0
1
435
0
45,034,487
0
1
0
0
1
false
7
2017-07-11T12:11:00.000
0
3
0
Override `import` for more sophisticated module import
45,034,266
0
python,import
Short answer is NO... But you could and should catch ImportError for when the module is not there, and handle it then. Otherwise replacing all import statements with something else is the clever thing to do.
Is it possible to somehow override import so that I can do some more sophisticated operations on a module before it gets imported? As an example: I have a larger application that uses matplotlib for secondary features that are not vital for the overall functionality of the application. In case that matplotlib is not in...
0
1
2,180
0
45,047,193
0
0
0
0
1
false
1
2017-07-11T21:51:00.000
0
1
0
pandas - reading dynamically named file within a .zip
45,045,043
0
python,pandas
pandas.read_excel has no compression option, so open the archive and look up the member name at run time instead: with zipfile.ZipFile('archive.zip') as z: df = pd.read_excel(z.open(z.namelist()[0]), header=1, names=cols)
I am creating a new dataframe in pandas as below: df = pd.read_excel(zipfile.open('zipfilename 2017-06-28.xlsx'), header=1, names=cols) The single .xlsx within the .zip is dynamically named (so changes based on the date). This means I need to change the name of the .xlsx in my code each time I open the .zip to account ...
0
1
71
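A self-contained sketch of the name-lookup pattern for a dynamically named member. It builds a toy zip in memory and uses a CSV member to keep the example dependency-free; the same zipfile.namelist() lookup works for an .xlsx read with pd.read_excel. The member name and pattern are illustrative.

```python
import fnmatch
import io
import zipfile

import pandas as pd

# Build a toy archive in memory with a date-stamped member name
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("report 2017-06-28.csv", "a,b\n1,2")

with zipfile.ZipFile(buf) as zf:
    # Find the member without hard-coding the date in its name
    name = next(n for n in zf.namelist() if fnmatch.fnmatch(n, "report *.csv"))
    df = pd.read_csv(zf.open(name))
print(name, df.shape)
```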
0
45,052,125
0
0
0
0
1
false
1
2017-07-12T07:15:00.000
0
1
0
how to re-train Saved linear regression ML model in pyspark when new data is coming
45,050,839
0
python,machine-learning,pyspark
I don't think so. You use pyspark.ml.regression.GeneralizedLinearRegression to train, and then you get a pyspark.ml.regression.GeneralizedLinearRegressionModel, which is what you have saved. AFAIK, the model can't be refitted; you have to run the regression fit again to get a new model.
I trained a linear regression model using pyspark ml and saved it. Now I want to re-train it on the basis of new data batches. Is it possible?
0
1
126
0
45,056,341
0
1
0
0
1
true
0
2017-07-12T11:10:00.000
1
1
0
Installing miniconda for theano with gpuarray: as root or as user?
45,056,037
1.2
python,conda
Anaconda and miniconda are designed to be installed by each user individually, into each users $HOME/miniconda directory. If you installed it as a shared install as root, all users would need to access /root/miniconda. Also, environments will be created in $HOME/miniconda/envs, so environments of several people will in...
I've always used virtualenv(wrapper) for my python needs, but now I'm considering trying conda for new projects, mainly because theano docs "strongly" recommend it, and hoping that it will save me some hassle with pygpu config. I'm on linux mint 16( I guess, kernel in uname is from ubuntu 14.04) and there are no system...
0
1
94
0
45,061,095
0
0
0
0
1
false
0
2017-07-12T14:24:00.000
0
1
0
is the Matlab radon() function a "circular" radon transform?
45,060,419
0
python,matlab
Matlab's radon() function is not circular. This was the problem. Although the output image sizes do still differ, I am getting essentially the result I want.
I am trying to translate some matlab code to python. In the matlab code, I have a radon transform function. I start with a 146x146 image, feed it into the radon() function, and get a 211x90 image. When I feed the same image into my python radon() function, I get a 146x90 image. The documentation for the python radon ()...
0
1
269
0
45,395,282
0
0
0
1
1
false
0
2017-07-12T15:00:00.000
0
3
0
Is Google Cloud Datastore or Google BigQuery better suited for analytical queries?
45,061,306
0
python,pandas,google-cloud-datastore,google-bigquery,google-cloud-platform
As far as I can tell there is no support for Datastore in Pandas. This might affect your decision.
Currently we are uploading the data retrieved from vendor APIs into Google Datastore. Wanted to know what is the best approach with data storage and querying the data. I will be need to query millions of rows of data and will be extracting custom engineered features from the data. So wondering whether I should load th...
0
1
577
0
45,063,449
0
0
0
0
1
false
0
2017-07-12T16:45:00.000
2
1
0
Using pandas, how can I return the number of times an element appears in a column?
45,063,425
0.379949
python,pandas
Use df['your column name'].value_counts()['your value name'].
I have a pandas df with 5 columns, one of them being State. I want to find the number of times each state appears in the State column. I'm guessing I might need to use groupby, but I haven't been able to figure out the exact command.
0
1
354
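The one-liner above in context, on a toy version of the State column: value_counts() gives the count for every state, and indexing it returns the count for one state.

```python
import pandas as pd

# Toy stand-in for the dataframe's State column
df = pd.DataFrame({"State": ["CA", "NY", "CA", "TX", "CA"]})

# Count of every state, then the count for one specific state
counts = df["State"].value_counts()
print(counts["CA"])
```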
0
45,070,245
0
0
0
0
1
false
2
2017-07-13T01:49:00.000
0
2
0
Machine learning to classify company names to their industries
45,070,186
0
python,machine-learning,text-classification,multilabel-classification
Not sure what you want. If the point is to use just company names, maybe break names into syllables/phonemes, and train on that data. If the point is to use Word2Vec, I'd recommend pulling the Wikipedia page for each company (easier to automate than an 'about me').
What I'm trying to do is to ask the user to input a company name, for example Microsoft, and be able to predict that it is in the Computer Software industry. I have around 150 000 names and 60+ industries. Some of the names are not English company names. I have tried training a Word2Vec model using Gensim based on comp...
0
1
2,136
0
45,084,990
0
0
0
0
1
false
4
2017-07-13T08:40:00.000
0
1
0
what the differences between tf.train.Saver().restore() and tf.saved_model.loader
45,075,568
0
python,tensorflow
During training, restoring from checkpoints, etc, you want to use the saver. You only want to use the saved model if you're loading your exported model for inference.
I'd like to know the differences between tf.train.Saver().restore() and tf.saved_model.loader(). As far as I know, tf.train.Saver().restore() restores the previously saved variables from the checkpoint file; and tf.saved_model.loader() loads the graph def from the pb file. But I have no idea about when I should choose...
0
1
323
0
45,105,205
0
0
0
0
1
false
0
2017-07-13T20:43:00.000
0
1
0
Visual properties of unselected glyphs in Bokeh based on what is selected
45,090,562
0
python,bokeh,glyph
In order to avoid tripling (or quintupling) memory usage in the browser, Bokeh only supports setting "single values" for non-selection colors and alphas. That is, non-selection properties can't be vectorized by pointing them at a ColumnDataSource column. So there's only two options I can think of: Split the glyphs int...
I have a glyph that's a series of Circles. I want to click on one point and change the colour / alpha of the unselected glyphs such that each unselected glyph has a custom colour based on its relationship with the selected point. For example, I'd want the closest points to the selected point to have alpha near to 1 a...
0
1
195
0
45,106,390
0
0
0
0
1
true
0
2017-07-14T15:06:00.000
1
1
0
Best way to differentiate an array of indices vs a boolean mask
45,106,240
1.2
python,numpy
You can check the dtype, or iterate through and check if the values are not in the set {True, False} as well as checking if the values are not in the set {0,1} Boolean masks must be the same shape as the array they are intended to index into, so that's another check. But there's no hard and fast way to distinguish a p...
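The checks named above (dtype, value set, and shape) can be sketched as a small heuristic in NumPy; the function name is hypothetical, and as the answer notes, an integer array containing only 0s and 1s remains genuinely ambiguous.

```python
import numpy as np

def looks_like_boolean_mask(indexer, target):
    """Heuristically decide whether `indexer` is a boolean mask for `target`.

    A true boolean dtype is unambiguous; otherwise we can only guess from
    the value set and the shape.
    """
    indexer = np.asarray(indexer)
    if indexer.dtype == np.bool_:
        return True
    # A mask must match the target's shape exactly.
    if indexer.shape != np.asarray(target).shape:
        return False
    # Values outside {0, 1} rule out a mask; values inside are ambiguous.
    return bool(np.isin(indexer, [0, 1]).all())

arr = np.array([10, 20, 30])
print(looks_like_boolean_mask(np.array([True, False, True]), arr))  # True
print(looks_like_boolean_mask(np.array([0, 2]), arr))               # False
```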
If I am given an array of indices but I don't know whether it is a regular index array or a boolean mask, what is the best way to determine which it is?
0
1
57
0
69,032,680
0
0
0
0
2
false
17
2017-07-14T15:15:00.000
1
2
0
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
45,106,431
0.099668
python,pycharm
If you put a cursor on the text field just below the displayed dataframe and hit Enter, it'll update itself.
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a ...
0
1
835
0
66,154,095
0
0
0
0
2
false
17
2017-07-14T15:15:00.000
1
2
0
PyCharm: Is there a way to make the "data view" auto update when dataframe is changed?
45,106,431
0.099668
python,pycharm
Unfortunately, no. The only thing you can do is use 'watches' to watch the variable and open when you want it. It requires a lot of background processing and memory usage to display the dataframe.
When you select "View as DataFrame" in the variables pane it has a nice spreadsheet like view of the DataFrame. That said, as the DataFrame itself changes, the Data View does not auto update and you need to reclick the View as DataFrame to see it again. Is there a way to make PyCharm autoupdate this? Seems like such a ...
0
1
835
0
45,115,173
0
0
0
0
1
false
0
2017-07-14T21:10:00.000
1
1
0
statsmodel fractional logit model
45,111,640
0.197375
python,python-2.7,statistics
I assume fractional Logit in the question refers to using the Logit model to obtain the quasi-maximum likelihood estimates for continuous data within the interval (0, 1) or [0, 1]. The models in statsmodels like GLM and GEE, and the discrete models Logit, Probit, Poisson and similar in statsmodels.discrete, do not impose an integer condition ...
can anyone let me know what is the method of estimating the parameters in fractional logit model in statsmodel package of python? And can anyone refer me the specific part of the source code of fractional logit model?
0
1
1,645
0
46,335,988
0
0
0
0
1
true
10
2017-07-15T07:16:00.000
2
1
0
difference in predictions between model.predict() and model.predict_generator() in keras
45,115,582
1.2
python,keras,prediction
@petezurich Thanks for your comment. Calling generator.reset() before model.predict_generator() and creating the test generator with shuffle=False fixed the problem
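The underlying pitfall can be illustrated without Keras: if the generator shuffles, predictions come back in a different order than the stored labels, so any per-sample comparison misreports accuracy unless shuffling is off and the generator is reset to the start. A toy sketch (the generator and model here are stand-ins, not the Keras API):

```python
import random

samples = list(range(10))          # stand-ins for images
labels = [s % 2 for s in samples]  # the true classes, stored in file order

def batches(data, shuffle, seed=0):
    """A toy generator: yields data, optionally in shuffled order."""
    order = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(order)
    for i in order:
        yield data[i]

model = lambda x: x % 2  # a "perfect" classifier stand-in

# shuffle=True: each prediction is correct for the sample it saw, but the
# results come back in a different order than `labels`.
shuffled_preds = [model(x) for x in batches(samples, shuffle=True)]
# shuffle=False (plus resetting the generator in Keras): order is preserved.
ordered_preds = [model(x) for x in batches(samples, shuffle=False)]
print(ordered_preds == labels)  # True
```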
When I use model.predict_generator() on my test_set (images) I am getting a different prediction and when I use mode.predict() on the same test_Set I am getting a different set of predictions. For using model.predict_generator I followed the below steps to create a generator: Imagedatagenerator(no arguments here) and...
0
1
5,757
0
45,118,234
0
0
0
0
1
false
1
2017-07-15T11:49:00.000
6
1
0
Can Django work well with pandas and numpy?
45,117,857
1
python,django,pandas,numpy
You can use any framework to do so. If you have worked with Python before, I can recommend Django, since you keep the same (plain Python) syntax throughout your project. That consistency is good because you keep the same logic everywhere, but it should not be your major concern when it comes to choosing the right framework for your needs. S...
I am trying to build a web application that requires intensive mathematical calculations. Can I use Django to populate python charts and pandas dataframe?
1
1
7,691
0
45,171,965
0
0
0
0
1
true
0
2017-07-16T06:56:00.000
1
1
0
MXNet - what is python equivalent of getting scala's mxnet networkExecutor.gradDict("data")
45,125,919
1.2
python,scala,mxnet
How about grad_dict in executor? It returns a dictionary representation of the gradient arrays.
Trying to understand some Scala's code on network training in MXNet. I believe you can access gradient on the executor in Scala by calling networkExecutor.gradDict("data"), what would be equivalent of it in Python MXNet? Thanks!
0
1
64
0
45,154,694
0
0
0
0
2
false
8
2017-07-17T21:47:00.000
0
3
0
How to dynamically freeze weights after compiling model in Keras?
45,154,180
0
python,tensorflow,neural-network,keras,theano
Can you use tf.stop_gradient to conditionally freeze weights?
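Framework aside, a hedged NumPy sketch of what conditional freezing amounts to: masking the gradient of the frozen weights before the update. This emulates the *effect* that tf.stop_gradient is used for in GAN training, not its API.

```python
import numpy as np

def sgd_step(weights, grads, lr, frozen_mask):
    """One SGD step where entries with frozen_mask=True receive no update."""
    grads = np.where(frozen_mask, 0.0, grads)  # "stop" the gradient
    return weights - lr * grads

w = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5, 0.5])

# Discriminator phase: freeze the generator's weight (index 0, say).
w_d = sgd_step(w, g, lr=0.1, frozen_mask=np.array([True, False, False]))
print(w_d)  # [1.   1.95 2.95]

# Generator phase: freeze the discriminator's weights (indices 1-2) instead.
w_g = sgd_step(w, g, lr=0.1, frozen_mask=np.array([False, True, True]))
print(w_g)  # [0.95 2.   3.  ]
```

The same idea alternates per training phase in a GAN, which is why freezing must be switchable after compilation rather than fixed once.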
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfree...
0
1
7,178
0
47,122,897
0
0
0
0
2
false
8
2017-07-17T21:47:00.000
0
3
0
How to dynamically freeze weights after compiling model in Keras?
45,154,180
0
python,tensorflow,neural-network,keras,theano
Maybe your adversarial net(generator plus discriminator) are wrote in 'Model'. However, even you set the d.trainable=False, the independent d net are set non-trainable, but the d in the whole adversarial net is still trainable. You can use the d_on_g.summary() before then after set d.trainable=False and you would know...
I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding how to freeze weights properly is necessary here and that's what I'm struggling with. During the generator training time the discriminator weights might not be updated. I would like to freeze and unfree...
0
1
7,178
0
54,628,350
0
0
0
0
1
false
4
2017-07-17T22:41:00.000
0
1
0
Scikit learn API xgboost allow for online training?
45,154,751
0
python,machine-learning,scikit-learn,xgboost
I don't think the sklearn wrapper has an option to incrementally train a model. In scikit-learn estimators this can be achieved to some extent with the warm_start parameter, but the sklearn wrapper for XGBoost doesn't expose it. So, if you want incremental training you might have to switch to the official API version of...
According to the API, it seems like the normal xgboost interface allows for this option: xgboost.train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None, maximize=False, early_stopping_rounds=None, evals_result=None, verbose_eval=True, xgb_model=None, callbacks=None, learning_rates=None). In this opti...
0
1
571
0
45,161,655
0
0
0
0
1
false
0
2017-07-17T22:53:00.000
0
1
0
OpenCV Python, filter edges to only include those connected to a specific pixel
45,154,854
0
python,opencv,edge-detection
The result of the standard Hough line transform is an array of (rho, theta) parameter pairs. The equation of the line represented by such a pair is x·cos(theta) + y·sin(theta) - rho = 0. You can check whether the (x, y) coordinates of the point satisfy this condition, to find lines that pass through the point (practically, use a small...
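A small sketch of that membership test for the (rho, theta) output described above, using OpenCV's convention rho = x·cos(theta) + y·sin(theta); pure math, no cv2 needed. Note the probabilistic variant (HoughLinesP) returns segment endpoints instead, so this applies to the standard transform.

```python
import math

def line_passes_through(rho, theta, x, y, tol=1.0):
    """True if the Hough line (rho, theta) passes within `tol` pixels of
    (x, y). The distance from a point to the line x·cosθ + y·sinθ = rho
    is |x·cosθ + y·sinθ - rho|, since (cosθ, sinθ) is a unit normal."""
    return abs(x * math.cos(theta) + y * math.sin(theta) - rho) <= tol

# A vertical line x = 5 has theta = 0, rho = 5.
print(line_passes_through(5.0, 0.0, 5, 123))   # True
print(line_passes_through(5.0, 0.0, 20, 123))  # False
```

Filtering is then a list comprehension keeping only the (rho, theta) pairs for which this returns True at the pixel of interest.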
I've got a script that uses the Canny method and probabilistic Hough transform to identify line segments in an image. I need to be able to filter out all line segments that are NOT connected to a specific pixel. How would one tackle this problem?
0
1
276
0
45,156,763
0
0
0
1
1
false
0
2017-07-17T23:23:00.000
2
1
0
How to import CSV to an existing table on BigQuery using columns names from first row?
45,155,117
0.379949
python,google-bigquery,import-from-csv
When you import a CSV into BigQuery the columns will be mapped in the order the CSV presents them; the first row (the titles) has no effect on how the subsequent rows are read. Note that if you were importing JSON files instead, BigQuery would match by the name of each column, ignoring the order.
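Since BigQuery maps CSV columns by position, one hedged workaround is to reorder the columns to match the table schema before uploading. A pandas sketch; the schema list and sample data here are hypothetical:

```python
import io
import pandas as pd

# Hypothetical target-table column order in BigQuery.
table_schema_order = ["id", "name", "state"]

csv_data = io.StringIO("name,state,id\nAcme,CA,1\nGlobex,NY,2\n")
df = pd.read_csv(csv_data)

# Reorder to the table's positional schema, then write without a header,
# since the header row carries no meaning for a positional CSV load.
df = df[table_schema_order]
out = df.to_csv(index=False, header=False)
print(out.splitlines())  # ['1,Acme,CA', '2,Globex,NY']
```

The rewritten file can then be uploaded back to Cloud Storage and loaded with skipLeadingRows unset (the header is already gone).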
I have a python script that execute a gbq job to import a csv file from Google cloud storage to an existing table on BigQuery. How can I set the job properties to import to the right columns provided in the first row of the csv file? I set parameter 'allowJaggedRows' to TRUE, but it import columns in order regardless ...
0
1
3,297