Dataset schema (one row per answer; columns, dtypes, and value ranges or string lengths):

Column                              Type      Range / length
GUI and Desktop Applications        int64     0–1
A_Id                                int64     5.3k–72.5M
Networking and APIs                 int64     0–1
Python Basics and Environment       int64     0–1
Other                               int64     0–1
Database and SQL                    int64     0–1
Available Count                     int64     1–13
is_accepted                         bool      2 classes
Q_Score                             int64     0–1.72k
CreationDate                        string    length 23
Users Score                         int64     -11–327
AnswerCount                         int64     1–31
System Administration and DevOps    int64     0–1
Title                               string    length 15–149
Q_Id                                int64     5.14k–60M
Score                               float64   -1–1.2
Tags                                string    length 6–90
Answer                              string    length 18–5.54k
Question                            string    length 49–9.42k
Web Development                     int64     0–1
Data Science and Machine Learning   int64     1–1
ViewCount                           int64     7–3.27M

Records:
---
Title: Why the following operands could not be broadcasted together?
Q_Id 50,758,165 | A_Id 50,758,844 | CreationDate 2018-06-08T10:03:00.000 | Tags: python,python-3.x,numpy,array-broadcasting | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 6 | Users Score 6 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 13,229
Answer:
It's to do with NumPy's broadcasting rules. Quoting the NumPy manual: When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1 The first statement throws an error...
Question: The arrays are of following dimensions: dists: (500,5000) train: (5000,) test:(500,) Why does the first two statements throw an error whereas the third one works fine? dists += train + test Error: ValueError: operands could not be broadcast together with shapes (5000,) (500,) dists += train.reshape(-1,1) + test.res...
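A minimal sketch of the rule the answer quotes, using the shapes from the question; the reshape call is one way to make the trailing dimensions compatible:

```python
import numpy as np

dists = np.zeros((500, 5000))
train = np.zeros(5000)   # shape (5000,)
test = np.zeros(500)     # shape (500,)

# train + test fails: trailing dims 5000 vs 500 are unequal and neither is 1.
# Reshaping test to a column vector (500, 1) makes the pair broadcastable:
dists += test.reshape(-1, 1) + train   # (500, 1) + (5000,) -> (500, 5000)
```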
---
Title: How to fix import error 'nvcuda.dll' in spyder for python?
Q_Id 50,758,472 | A_Id 57,155,250 | CreationDate 2018-06-08T10:20:00.000 | Tags: python-3.x,tensorflow,spyder,dllimport | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 2,669
Answer:
As the comment mentioned, you need to ensure the time you import the tensorflow, the path environment points to c:\windows\system32 and as you said you have nvcuda.dll, ensure the file is there too. There is no need to set the libraries.
Question: 1.I already have nvcuda.dll in system32. 2.I have path C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin. 3.The program already upgraded tensorflow and GPU tensorflow. I check import still have the error. ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directo...
---
Title: Error: OOM when allocating tensor with shape
Q_Id 50,760,543 | A_Id 50,764,934 | CreationDate 2018-06-08T12:20:00.000 | Tags: python-3.x,tensorflow,gpu,gunicorn | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 16 | Users Score 19 | Score 1 | AnswerCount 2 | Available Count 1 | ViewCount 34,640
Answer:
OOM stands for Out Of Memory. That means that your GPU has run out of space, presumably because you've allocated other tensors which are too large. You can fix this by making your model smaller or reducing your batch size. By the looks of it, you're feeding in a large image (800x1280) you may want to consider downsa...
Question: i am facing issue with my inception model during the performance testing with Apache JMeter. Error: OOM when allocating tensor with shape[800,1280,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: Cast = CastDstT=DT_FLOAT, SrcT=DT_UINT8, _device="/job:localhost...
---
Title: "Solving Environment" during `conda install -c tensorflow` takes 3+ min but changing the name a bit reduces the time significantly
Q_Id 50,765,892 | A_Id 53,469,361 | CreationDate 2018-06-08T17:40:00.000 | Tags: python,tensorflow,anaconda,conda | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 14 | Users Score 6 | Score 1 | AnswerCount 1 | Available Count 1 | ViewCount 21,062
Answer:
I solved it by doing this: open the Anaconda Navigator application, select Environment from the menu, choose the environment you want to use (the base environment if you don't use multiple environments), update the index, then click on channels and remove every channel but default. Now it takes a reasonable amount of time to install ...
Question: I am writing a custom conda package for tensorflow. When I name the package "tensorflow" it takes it more than 3 minutes to get past the "solving environment" part but if I change the package name even a little bit, to "tensorflowp3" it loads in around 10 seconds. I am using the commands - conda install -c <my_channe...
---
Title: Rasa-core, dealing with dates
Q_Id 50,776,518 | A_Id 54,032,632 | CreationDate 2018-06-09T16:51:00.000 | Tags: python,date,rasa-nlu,rasa-core | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 568
Answer:
I think you could have validation in the custom form, where it validates the time and performs the next action based on the decision about the time. Your story will have to be trained to handle the different action paths.
Question: I have a problem with rasa core, let's suppose that I have a rasa-nlu able to detect time eg "let's start tomorrow" would get the entity time: 2018-06-10:T18:39:155Z Ok, now I want next branches, or decisions to be conditioned by: time is in the past time before one month from now time is beyond 1 month I do n...
---
Title: tensorflow save and restore autoencoder
Q_Id 50,778,593 | A_Id 50,781,250 | CreationDate 2018-06-09T21:13:00.000 | Tags: python,tensorflow | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 341
Answer:
If you don't care about memory space the easiest way is by saving the whole graph (encoder and decoder) and when using it for prediction, you can pass the last layer of the encoder as the fetch argument. Tensorflow will only calculate to this point and you don't have any computational difference compared to only saving...
Question: I used tf.layers.dense to build a fully connected autoencoder. and I want to save it and restore only the encoder to get the embedding output. How to use tf.train.saver to restore only the encoder? Because I want to set different batch size of the restored model, to input only one data into it. I saw many tutorials bu...
---
Title: Multi criteria alternative ranking based on mixed data types
Q_Id 50,784,441 | A_Id 50,991,998 | CreationDate 2018-06-10T13:57:00.000 | Tags: python,statistics,ranking,recommendation-engine,economics | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score -2 | Score -0.197375 | AnswerCount 2 | Available Count 1 | ViewCount 972
Answer:
I am happy to see that you are willing to use multiple criteria decision making tool. You can use Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR etc. Please refer relevant papers. You can also refer my papers. Krishnendu Mukherjee
Question: I am building a recommender system which does Multi Criteria based ranking of car alternatives. I just need to do ranking of the alternatives in a meaningful way. I have ways of asking user questions via a form. Each car will be judged on the following criteria: price, size, electric/non electric, distance etc. As you...
---
Title: mpi4py or multiprocessing in Python?
Q_Id 50,787,392 | A_Id 58,026,950 | CreationDate 2018-06-10T19:38:00.000 | Tags: mpi,python-multiprocessing | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 8 | Users Score 2 | Score 0.379949 | AnswerCount 1 | Available Count 1 | ViewCount 3,083
Answer:
By using mpi4py you can divide the task into multiple threads, but with a single computer with limited performance or number of cores the usability will be limited. However you might find it handy during training. mpi4py is constructed on top of the MPI-1/2 specifications and provides an object oriented interface which...
Question: I am writing a machine learning toolkit to run algorithm with different settings in parallel (each process run the algorithm for one setting). I am thinking about either to use mpi4py or python's build-in multiprocessing ? There are a few pros and cons I am considering about. Easy-to-use: mpi4py: It seems more conc...
---
Title: How to best flatten NDJson data in Python
Q_Id 50,787,438 | A_Id 63,611,068 | CreationDate 2018-06-10T19:43:00.000 | Tags: python,ndjson | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 2 | Score 0.379949 | AnswerCount 1 | Available Count 1 | ViewCount 521
Answer:
pandas read_json has a bool param lines, set this to True to read ndjsons data_frame = pd.read_json('ndjson_file.json', lines=True)
Question: I have a huge file (>400MB) of NDJson formatted data and like to flatten it into a table format for further analysis. I started iterate through the various objects manually but some are rather deep and might even change over time, so I was hoping for a more general approach. I was certain pandas lib would offer someth...
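A short sketch of the approach, with a hypothetical file name; lines=True is the pandas hook for newline-delimited JSON, and json_normalize (top-level in pandas 1.0+, earlier under pandas.io.json) flattens nested objects:

```python
import json
import pandas as pd

# One JSON object per line ("NDJSON"): lines=True parses it directly.
df = pd.read_json("records.ndjson", lines=True)  # hypothetical file name

# For deeply nested objects, flatten each record into underscore-joined columns:
with open("records.ndjson") as fh:
    records = [json.loads(line) for line in fh]
flat = pd.json_normalize(records, sep="_")
```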
---
Title: No module named tensorflow found in Windows 10 64bit version
Q_Id 50,796,923 | A_Id 50,798,156 | CreationDate 2018-06-11T11:42:00.000 | Tags: python-3.x | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 163
Answer:
Pls, explain a little more where you're getting that error also it's quite possible that code you're using for verification is using codes which involve GPU. Hit me back on this one.
Question: After installing Tensorflow with cpu support, I am getting some problems in the verification of Tensorflow. I don't have any GPU in my laptop and used pip3 for installation
---
Title: How to return 'faiss' unique vector id on 'add_with_ids' trained index?
Q_Id 50,798,515 | A_Id 51,878,585 | CreationDate 2018-06-11T13:08:00.000 | Tags: python,knn | Topics: Web Development, Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 2 | Score 0.379949 | AnswerCount 1 | Available Count 1 | ViewCount 1,900
Answer:
Since you provided the actual vectors, you presumably know how to map ids to vectors. Most Faiss indexes in do not store the vectors, because they need to be compressed to fit in RAM.
Question: I'm using Facebook's faiss index with custom indexes using the add_with_ids method. In inference time I use distance, ID = model.search() which returns the custom ID it was trained with. Is it possible to return also a unique id without retraining? Or just return the actual closest vector? Thank you!
---
Title: Is there an Anderson-Darling implementation for python that returns p-value?
Q_Id 50,811,061 | A_Id 50,819,970 | CreationDate 2018-06-12T07:00:00.000 | Tags: python,statistics,p-value,hypothesis-test,goodness-of-fit | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 4 | Users Score 2 | Score 0.132549 | AnswerCount 3 | Available Count 1 | ViewCount 2,819
Answer:
I would just rank distributions by the goodness-of-fit statistic and not by p-values. We can use the Anderson-Darling, Kolmogorov-Smirnov or similar statistic just as distance measure to rank how well different distributions fit. background: p-values for Anderson-Darling or Kolmogorov-Smirnov depend on whether the para...
Question: I want to find the distribution that best fit some data. This would typically be some sort of measurement data, for instance force or torque. Ideally I want to run Anderson-Darling with multiple distributions and select the distribution with the highest p-value. This would be similar to the 'Goodness of fit' test in Mi...
---
Title: Data extraction from HEC-RAS
Q_Id 50,813,889 | A_Id 58,000,976 | CreationDate 2018-06-12T09:36:00.000 | Tags: python,data-modeling,data-extraction | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 319
Answer:
Maybe you can use vector data: first classify it to get the different values between areas, then run a statistical analysis.
Question: I'm using Hec-Ras for 2D unsteady modeling of a river delta. My model is simulated for one year. I need to extract the velocities and/or discharges and compare them with the velocities from a already done 1DSA model. I wanted to do it in python but I'm new in programming and I wanted to see if anyone has experience wit...
---
Title: How to perform a pickling so that it is robust against crashing?
Q_Id 50,822,127 | A_Id 50,822,329 | CreationDate 2018-06-12T16:53:00.000 | Tags: python,pickle | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: true | Q_Score 1 | Users Score 0 | Score 1.2 | AnswerCount 2 | Available Count 1 | ViewCount 175
Answer:
You're effectively doing backups, as your goal is the same: disaster recovery, lose as little work as possible. In backups, there are these standard practices, so choose whatever fits you best: backing up full backup (save everything each time) incremental backup (save only what changed since the last backup) diffe...
Question: I routinely use pickle.dump() to save large files in Python 2.7. In my code, I have one .pickle file that I continually update with each iteration of my code, overwriting the same file each time. However, I occasionally encounter crashes (e.g. from server issues). This may happen in the middle of the pickle dump, rende...
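A sketch of the "full backup, written atomically" variant, assuming Python 3 (the question uses 2.7, where os.rename plays the same role on POSIX); the helper name is made up:

```python
import os
import pickle
import tempfile

def atomic_pickle_dump(obj, path):
    """Dump to a temp file in the same directory, then swap it into place.

    A crash during pickle.dump() leaves the previous file untouched,
    because the target is only replaced after the dump has finished.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as fh:
            pickle.dump(obj, fh, protocol=pickle.HIGHEST_PROTOCOL)
        os.replace(tmp_path, path)  # atomic rename on the same filesystem
    except BaseException:
        os.remove(tmp_path)
        raise
```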
---
Title: Does keras.backend.clear_session() deletes sessions in a process or globally?
Q_Id 50,823,233 | A_Id 53,845,674 | CreationDate 2018-06-12T18:05:00.000 | Tags: python,tensorflow,keras,multiprocessing | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 5 | Users Score 3 | Score 0.53705 | AnswerCount 1 | Available Count 1 | ViewCount 1,899
Answer:
I faced similar kind of issue but I am not running models in parallel but alternatively i;e; either of the models (in different folders but same model file names) will run. When I run the models directly without clear_session it was conflicting with the previously loaded model and cannot switch to other model. After i...
Question: I create up to 100 keras models in separated script an save them localy with model.save(). For Training them, I use multiprocessing.pool. In those processes I load each model separately. Because of occuring Memory Errors I used keras.backend.clear_session(). This seems to work but I have also read that it deletes the w...
---
Title: Re-standardise data after excluding outliers?
Q_Id 50,824,847 | A_Id 50,825,599 | CreationDate 2018-06-12T19:51:00.000 | Tags: python,data-visualization,data-analysis | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 29
Answer:
As part of the Exploratory Data Analysis(EDA) Process, you'll want to visualize your data with all data points, identify outliers and then further investigate those outliers to figure out what to do with them. Are these outliers inaccurate values that need to be corrected? Perhaps erroneous entries in the raw data? Or ...
Question: I am experimenting with python and data analytics. I collected tweets, counted the distinct users, and summed them ,grouped by their locations. Then i have calculated the percentage of users per country population. To make my graphs look better i have standardised my data using the z-score formula. Now i observe that i...
---
Title: Collapsing consecutive linear layers
Q_Id 50,835,327 | A_Id 50,840,894 | CreationDate 2018-06-13T10:49:00.000 | Tags: python,machine-learning,neural-network,convolution,convolutional-neural-network | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 1 | Users Score 1 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 223
Answer:
Permute the dimensions of the first-layer kernels such that input channels are in the "mini-batch" dimension and output channels are in the "channels" dimension. Apply the second layer to that as if that were an image. Then apply the third layer to the result of that. The final result are kernels of the "collapsed" lay...
Question: I have a neural network with 3 consecutive linear layers (convolution), with no activation functions in between. After training the network and obtaining the weights, I would like to collapse all 3 layers into one layer. How can this be done in practice, when each layer has different kernel size and stride? The layers...
---
Title: Segmentation fault (core dumped) when training more than one Keras NN models
Q_Id 50,840,749 | A_Id 51,275,732 | CreationDate 2018-06-13T15:15:00.000 | Tags: python-3.x,tensorflow,segmentation-fault,keras,nvidia-jetson | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 2 | Users Score 2 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 1,251
Answer:
If you run K.clearsession() on a GPU with Keras 2, you may get a segmentation fault. If you have this in your code, try removing it!
Question: I am optimizing the hyper-parameters of my neural-network, for which I am recursively training the network using different hyper-parameters. It works as expected until after some iterations, when creating a new network for training, it dies with the error "Segmentation fault (core dumped)". Furthermore, I am using GPU ...
---
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Q_Id 50,843,757 | A_Id 64,878,625 | CreationDate 2018-06-13T18:17:00.000 | Tags: python-3.x,scikit-learn,anaconda | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 12 | Users Score 0 | Score 0 | AnswerCount 7 | Available Count 4 | ViewCount 17,981
Answer:
Another option is SimpleImputer, it works fine: from sklearn.impute import SimpleImputer
Question: I am doing a Kaggle competition which requires imputing some missing data. I have installed latest Anaconda(4.5.4) with all relevant dependencies (i.e scikit-learn (0.19.1)). When I try to import the modules I am getting the following error: ModuleNotFoundError: No module named 'sklearn.impute' I have tried to import...
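A minimal usage sketch of the import the answer names, assuming scikit-learn >= 0.20 where sklearn.impute exists:

```python
import numpy as np
from sklearn.impute import SimpleImputer  # added in scikit-learn 0.20

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

imputer = SimpleImputer(strategy="mean")  # fill NaNs with column means
X_filled = imputer.fit_transform(X)
```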
---
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Q_Id 50,843,757 | A_Id 50,844,299 | CreationDate 2018-06-13T18:17:00.000 | Tags: python-3.x,scikit-learn,anaconda | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 12 | Users Score 10 | Score 1.2 | AnswerCount 7 | Available Count 4 | ViewCount 17,981
Answer:
As BallpointBen pointed out, sklearn.impute is not yet released in the latest stable release (0.19.1). Currently it's supported only in 0.20.dev0.
Question: same as the previous entry (Q_Id 50,843,757).
---
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Q_Id 50,843,757 | A_Id 54,895,196 | CreationDate 2018-06-13T18:17:00.000 | Tags: python-3.x,scikit-learn,anaconda | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 12 | Users Score 1 | Score 0.028564 | AnswerCount 7 | Available Count 4 | ViewCount 17,981
Answer:
It's a version error. Here's a fix that worked for me while working in Jupyter Notebook. From your Terminal: conda update anaconda conda update scikit-learn Then restart your jupyter kernal
Question: same as the previous entry (Q_Id 50,843,757).
---
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Q_Id 50,843,757 | A_Id 57,499,632 | CreationDate 2018-06-13T18:17:00.000 | Tags: python-3.x,scikit-learn,anaconda | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 12 | Users Score 0 | Score 0 | AnswerCount 7 | Available Count 4 | ViewCount 17,981
Answer:
you can use from sklearn.preprocessing import Imputer it works.
Question: same as the previous entry (Q_Id 50,843,757).
---
Title: How to predict word using trained skipgram model?
Q_Id 50,860,649 | A_Id 50,868,421 | CreationDate 2018-06-14T15:07:00.000 | Tags: python,c++,nlp,word2vec,gensim | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 342
Answer:
I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that: the predict_output_word() method that (sort-of) simulates a negative-sampling prediction do...
Question: I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word? For instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in...
---
Title: Defining label in confusion matrix with highly imbalanced dataset
Q_Id 50,864,233 | A_Id 50,864,355 | CreationDate 2018-06-14T19:02:00.000 | Tags: python,neural-network | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 39
Answer:
It is a binary classification problem. Usually the classes are labeled as positive = 1 and negative = 0.
Question: I am currently working on building a neural net model that targets to predict success/failure of server update. However, the existing data is highly imbalanced. I.e. only 3 % of the records are failures, the rest is all success record. I am now trying to do some data exploration using confusion matrix. In this case, s...
---
Title: Lazy version of numpy.unpackbits
Q_Id 50,865,421 | A_Id 50,865,504 | CreationDate 2018-06-14T20:26:00.000 | Tags: python,numpy,boolean,mmap,numpy-memmap | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 2 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 163
Answer:
Not possible. The memory layout of a bit-packed array is incompatible with what you're looking for. The NumPy shape-and-strides model of array layout does not have sub-byte resolution. Even if you were to create a class that emulated the view you want, trying to use it with normal NumPy operations would require materia...
Question: I use numpy.memmap to load only the parts of arrays into memory that I need, instead of loading an entire huge array. I would like to do the same with bool arrays. Unfortunately, bool memmap arrays aren't stored economically: according to ls, a bool memmap file requires as much space as a uint8 memmap file of the same ...
---
Title: Python VADER lexicon Structure for sentiment analysis
Q_Id 50,882,838 | A_Id 54,122,776 | CreationDate 2018-06-15T21:23:00.000 | Tags: python,nltk,lexicon,vader | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 3 | Score 0.291313 | AnswerCount 2 | Available Count 1 | ViewCount 1,281
Answer:
The vader_lexicon.txt file has four tab delimited columns as you said. Column 1: The Token Column 2: It is the Mean of the human Sentiment ratings Column 3: It is the Standard Deviation of the token assuming it follows Normal Distribution Column 4: It is the list of 10 human ratings taken during experiments The actua...
Question: I am using the VADER sentiment lexicon in Python's nltk library to analyze text sentiment. This lexicon does not suit my domain well, and so I wanted to add my own sentiment scores to various words. So, I got my hands on the lexicon text file (vader_lexicon.txt) to do just that. However, I do not understand the arch...
---
Title: Python faster way to take logarithm of N-dimensional array
Q_Id 50,892,030 | A_Id 50,897,886 | CreationDate 2018-06-16T21:47:00.000 | Tags: python,arrays | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 3 | ViewCount 101
Answer:
The numpy log function is implemented in C and optimised for handling arrays, so although you may be able to scrape a bit of overhead off by writing your own custom log function in a lower-level language, this will still remain the bottleneck. If you want to see a big speed increase, you'll need to implement your algor...
Question: My question is trivial, nevertheless I need your help. It's not a problem to take a np.log(x) of an array. But in my case this array could be N-dimensional/Tensor (N=2..1024 and 100 samples in each dimension). For N=4 calculation of element-wise np.log(x) takes 10 seconds. I need to take this log(x) in a cost function...
---
Title: Python faster way to take logarithm of N-dimensional array
Q_Id 50,892,030 | A_Id 51,058,038 | CreationDate 2018-06-16T21:47:00.000 | Tags: python,arrays | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 3 | ViewCount 101
Answer:
Thanks guys, the problem was a big amount of entries that I had to go through for processing. I just found another cost function for my optimization. But, to speed up exactly this code - I think the idea with self made log table exactly for my type of signal can make it work.
Question: same as the previous entry (Q_Id 50,892,030).
---
Title: Python faster way to take logarithm of N-dimensional array
Q_Id 50,892,030 | A_Id 50,892,434 | CreationDate 2018-06-16T21:47:00.000 | Tags: python,arrays | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 3 | ViewCount 101
Answer:
Maybe multiprocessing can help you on this situation
Question: same as the previous entry (Q_Id 50,892,030).
---
Title: Feature extraction from multiple images in python using SIFT
Q_Id 50,904,849 | A_Id 50,905,356 | CreationDate 2018-06-18T07:48:00.000 | Tags: python-3.x,image-processing | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 520
Answer:
Key-points extracted from SIFT describe numerous features. If you wish to compare all 400 frames from a video to an image that you have, you will have to make a loop over your process and run SIFT iteratively. This will be computationally expensive. One method to make this fast would be to read all key-points of these ...
Question: In feature extraction and detection using SIFT, I could extract features from 2 image. But I have 400 frames in video and want to have features from all 400 images in python. Can someone help me out with this? Thank you.
---
Title: Gensim Word2Vec select minor set of word vectors from pretrained model
Q_Id 50,914,729 | A_Id 50,916,669 | CreationDate 2018-06-18T17:32:00.000 | Tags: python,keras,word2vec,gensim,word-embedding | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 8 | Users Score 0 | Score 0 | AnswerCount 3 | Available Count 1 | ViewCount 2,182
Answer:
There's no built-in feature that does exactly that, but it shouldn't require much code, and could be modeled on existing gensim code. A few possible alternative strategies: Load the full vectors, then save in an easy-to-parse format - such as via .save_word2vec_format(..., binary=False). This format is nearly self-exp...
Question: I have a large pretrained Word2Vec model in gensim from which I want to use the pretrained word vectors for an embedding layer in my Keras model. The problem is that the embedding size is enormous and I don't need most of the word vectors (because I know which words can occure as Input). So I want to get rid of them t...
---
Title: Keras vs TensorFlow - does Keras have any actual benefits?
Q_Id 50,920,425 | A_Id 50,926,356 | CreationDate 2018-06-19T03:56:00.000 | Tags: python,tensorflow,keras | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 1 | Users Score 3 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 423
Answer:
Keras used to have an upper hand on TensorFlow in the past but ever since the author is now affiliated with Google all the features that made it attractive are being implemented into TensorFlow you can check version 1.8, like you rightfully pointed out tf.layers is one such example.
Question: I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some limitations (for example: setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, whi...
---
Title: How to choose number of perceptron in fine-tuning FC layer?
Q_Id 50,922,606 | A_Id 50,923,449 | CreationDate 2018-06-19T07:17:00.000 | Tags: python,tensorflow | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 50
Answer:
The easiest way to adapt your network is to add another FC layer on top of the VGG (with weight kernel of size 1000x3). Alternatively, replace the last FC layer (of size 4096x1000) with an FC layer of size 4096x3. Don't forget to properly initialize your newly added layers.
Question: I use VGG-16 pre-trained model and fine-tune the last 3 FC layers. But in my case, I only use 3 classes as my classification. I want to ask how to choose the perceptron of FC layers. Should I visualize the Conv5_3 layer, then making a decision? BTW, VGG-16 official model is 4096, 4096, 1000 perceptron in FC layers.
---
Title: Undo a change that was performed using Pandas
Q_Id 50,931,697 | A_Id 64,109,376 | CreationDate 2018-06-19T15:10:00.000 | Tags: python,pandas | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 6 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | ViewCount 4,868
Answer:
Yes, there is a way to do this. If you're using the newest iteration of python and pandas you could do it this way: df.replace(to_replace='and', value='&', inplace=true) This is the way I learned it!
Question: I would like to know if there's a technique to simply undo a change that was done using Pandas. For example, I did a string replacement on a few thousand rows of Pandas Dataframe, where, every occurrence of "&" in its string be replaced with "and". However after performing the replacement, I found out that I've made a...
---
Title: Undo a change that was performed using Pandas
Q_Id 50,931,697 | A_Id 67,360,151 | CreationDate 2018-06-19T15:10:00.000 | Tags: python,pandas | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 6 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | ViewCount 4,868
Answer:
If you have cells structured in step, and the mess is because of running a couple of cells that have affected the dataset, you can stop the kernel and run all the cells from the beginning.
Question: same as the previous entry (Q_Id 50,931,697).
---
Title: How to use trained neural network in different platform/technology?
Q_Id 50,933,736 | A_Id 50,947,050 | CreationDate 2018-06-19T17:17:00.000 | Tags: python,c++,tensorflow,neural-network | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 1 | Score 1.2 | AnswerCount 2 | Available Count 1 | ViewCount 117
Answer:
Technically you don't need a framework at all. A conventional fully connected neural network is simple enough that you can implement it in straight C++. It's about 100 lines of code for the matrix multiplication and a dozen or so for the non-linear activation function. The biggest part is figuring out how to parse a s...
Question: Given I trained a simple neural network using Tensorflow and Python on my laptop and I want to use this model on my phone in C++ app. Is there any compatibility format I can use? What is the minimal framework to run neural networks (not to train)? UDP. I'm also interested in Tensorflow to NOT-Tensorflow compatibility. ...
---
Title: dataframe from underlying script not updating
Q_Id 50,934,946 | A_Id 50,935,236 | CreationDate 2018-06-19T18:36:00.000 | Tags: python,dataframe,reference | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 1 | Score 0.197375 | AnswerCount 1 | Available Count 1 | ViewCount 21
Answer:
I figured it out, sorry for the confusion. I did not save the risktemplate that I updated the dataframe to in the same folder that the other reference script was looking at! Newbie!
Question: I have a script called "RiskTemplate.py" which generates a pandas dataframe consisting of 156 columns. I created two additional columns which gives me a total count of 158 columns. However, when I run this "RiskTemplate.py" script in another script using the below code, the dataframe only pulls the original 156 colum...
---
Title: Pandas dataframe merge by function on column names
Q_Id 50,941,528 | A_Id 50,941,849 | CreationDate 2018-06-20T06:23:00.000 | Tags: python-3.x,pandas,dataframe,merge,concat | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 4 | Available Count 1 | ViewCount 200
Answer:
A simple concatenation will do: pd.concat([df_A, df_B], join='outer')[['A', 'B']].copy(), or pd.concat([df_A, df_B], join='inner').
Question: I have two dataframes. df_A has columns A__a, B__b, C. (shape 5,3) df_B has columns A_a, B_b, D. (shape 4,3) How can I unify them (without having to iterate over all columns) to get one df with columns A,B ? (shape 9,2) - meaning A__a and A_a should be unified to the same column. I need to use merge with applying the fun...
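A sketch of the rename-then-concat idea from the answer; the column normalizer is a hypothetical stand-in for whatever function maps A__a and A_a to A:

```python
import pandas as pd

df_A = pd.DataFrame([[1, 2, 3]], columns=["A__a", "B__b", "C"])
df_B = pd.DataFrame([[4, 5, 6]], columns=["A_a", "B_b", "D"])

normalize = lambda name: name.split("_")[0]  # hypothetical: "A__a" -> "A"

unified = pd.concat(
    [df_A.rename(columns=normalize), df_B.rename(columns=normalize)],
    join="inner",       # keep only the shared columns, here A and B
    ignore_index=True,
)
```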
---
Title: ImportError: libnvidia-fatbinaryloader.so.396.24.02: cannot open shared object file: No such file or directory
Q_Id 50,955,454 | A_Id 50,955,970 | CreationDate 2018-06-20T19:12:00.000 | Tags: python,tensorflow | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 2 | Available Count 1 | ViewCount 798
Answer:
Running export LD_LIBRARY_PATH=/usr/lib/nvidia-396 fixed it; now I have another error.
Question: I just updated my nvidia GPU driver and got this error when i import tensorflow like that: import tensorflow as tf Config: Ubuntu 16.04 NVIDIA Corporation GM204M [GeForce GTX 970M] 16GB RAM i7 6700HQ Python 3.5.2 GCC 5.4.0 Cuda 9.0.176 Tensorflow 1.8 CudNN 7 This error had no result on Google ... Maybe i should downgra...
---
Title: Image classification: Best approach to training the model
Q_Id 50,964,401 | A_Id 50,964,829 | CreationDate 2018-06-21T08:56:00.000 | Tags: python,machine-learning,classification,conv-neural-network | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 2 | Users Score 1 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 140
Answer:
The strategy that you will choose depends mainly on the structure of the CNN that you are going to create. If you train a model that is able to recognize if an image contains a spoon or a fork, you will not be able to test on a table with several table-cloth items (e.g. both a fork and a spoon) because the network will...
Question: Given a model that has to classify 10 table-cloth items (spoons, forks, cups, plate etc,) and must be tested on an image of a table with all the table-cloth items in it (test_model_accuracy,) which is the best approach for training: A: Train the model on individual items then test on test_model_accuracy B: Train the ...
---
Title: Sklearn custom transformers: difference between using FunctionTransformer and subclassing TransformerMixin
Q_Id 50,965,004 | A_Id 62,225,211 | CreationDate 2018-06-21T09:24:00.000 | Tags: python,machine-learning,scikit-learn,cross-validation | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 14 | Users Score 11 | Score 1 | AnswerCount 2 | Available Count 1 | ViewCount 10,493
Answer:
The key difference between FunctionTransformer and a subclass of TransformerMixin is that with the latter, you have the possibility that your custom transformer can learn by applying the fit method. E.g. the StandardScaler learns the means and standard deviations of the columns during the fit method, and in the transfo...
Question: In order to do proper CV it is advisable to use pipelines so that same transformations can be applied to each fold in the CV. I can define custom transformations by using either sklearn.preprocessing.FunctionTrasformer or by subclassing sklearn.base.TransformerMixin. Which one is the recommended approach? Why?
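A side-by-side sketch of the two styles; the centering transformer is a made-up example of a transformer that has to learn state in fit():

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import FunctionTransformer

# Stateless transform: nothing is learned, FunctionTransformer is enough.
log_transformer = FunctionTransformer(np.log1p)

# Stateful transform: fit() learns the column means, transform() reuses them,
# so CV folds are transformed with statistics from the training fold only.
class MeanCenterer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        self.means_ = np.asarray(X).mean(axis=0)
        return self

    def transform(self, X):
        return np.asarray(X) - self.means_
```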
---
Title: convert images from [-1; 1] to [0; 255]
Q_Id 50,966,204 | A_Id 50,966,711 | CreationDate 2018-06-21T10:25:00.000 | Tags: python,numpy,opencv | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 11 | Users Score 12 | Score 1.2 | AnswerCount 3 | Available Count 1 | ViewCount 30,202
Answer:
As you have found, img * 255 gives you a resulting range of [-255:255], and (img + 1) * 255 gives you a result of [0:510]. You're on the right track. What you need is either: int((img + 1) * 255 / 2) or round((img + 1) * 255 / 2). This shifts the input from [-1:1] to [0:2] then multiplies by 127.5 to get [0.0:255.0]. ...
Question: I know that question is really simple, but I didn't find how to bypass the issue: I'm processing images, the output pixels are float32, and values are in range [-1; 1]. The thing is, when saving using openCV, all negative data and float values are lost (I only get images with 0 or 1 values) So I need to convert those i...
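A sketch of the accepted shift-and-scale, with rounding before the cast so values land on the nearest integer:

```python
import numpy as np

img = np.random.uniform(-1.0, 1.0, size=(8, 8)).astype(np.float32)

# [-1, 1] -> [0, 2] -> [0.0, 255.0], rounded, then cast for image output.
out = np.rint((img + 1.0) * 127.5).astype(np.uint8)
```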
---
Title: How to get two or more maximum indexes values set to 1 from tf.softmax's output
Q_Id 50,973,687 | A_Id 50,974,355 | CreationDate 2018-06-21T16:43:00.000 | Tags: python,tensorflow,softmax | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | ViewCount 445
Answer:
tf.nn.softmax forces everything to add up to 1.0 to make a valid probability distribution. If you want multiple values in the vector to be ones then you should use tf.nn.sigmoid instead. If you want to retrieve the maximum numbers in the vector use tf.nn.top_k.
Question: I want to get the maximum (2 or more) indexes set to 1 from the output of tf.nn.softmax(). given tf.nn.softmax's outputs as [0.1, 0.4, 0.2, 0.1, 0.8] I want to get something like [0,1,0,0,1] since those indexes have the maximum numbers (in this case I chose just the maximum 2). Thank you in advance!
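The answer names tf.nn.top_k for retrieving the indices; here is a NumPy sketch of the same "top-k to multi-hot" step, easier to check by hand:

```python
import numpy as np

probs = np.array([0.1, 0.4, 0.2, 0.1, 0.8])
k = 2

multi_hot = np.zeros_like(probs, dtype=int)
multi_hot[np.argsort(probs)[-k:]] = 1  # set the k largest positions to 1
# multi_hot -> array([0, 1, 0, 0, 1])
```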
---
Title: matplotlib colormap without normalization
Q_Id 50,975,047 | A_Id 50,975,222 | CreationDate 2018-06-21T18:06:00.000 | Tags: python,matplotlib | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 1 | Score 1.2 | AnswerCount 2 | Available Count 1 | ViewCount 174
Answer:
Create a list of colors, say colors = ['blue', 'red', 'green', 'purple'], that has as many colors as you have different targets. Then, set c=colors[target] with target being the integer your model popped out. This means you will need to plot each point one at a time unless you sort all the targets and plot at the end.
Question: I am creating scatterplots of data with integer targets. Naturally, I represent the targets as color in the scatterplot. However, sometimes my models, because of the nature of the model, predict targets that are not in the original set. I.e., my training targets are chosen from [0,1,2], and my model occasionally pre...
---
Title: matplotlib modified color map with white as zero
Q_Id 50,976,177 | A_Id 50,977,040 | CreationDate 2018-06-21T19:25:00.000 | Tags: python,matplotlib,colormap | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 3 | Users Score 2 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 4,611
Answer:
You would rather mask zero out of your data, e.g. setting those values to nan or use a masked array. Then you can just set_bad("white") for your colormap.
Question: Some of the standard matplotlib cmaps, such as viridis or jet show dark colors in small values. While this is what I need, I like them to show nothing, i.e. white background if the value is exactly zero. For non zero values the usual colors of that color map are fine. Is it possible to do this?
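A sketch of the mask-plus-set_bad approach from the answer; the .copy() is needed on newer matplotlib where registered colormaps are immutable:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10, 10)
data[data < 0.3] = 0.0   # stand-in for the "exactly zero" cells

masked = np.ma.masked_equal(data, 0.0)   # hide zeros from the colormap
cmap = plt.get_cmap("viridis").copy()
cmap.set_bad("white")                    # masked cells render as white

plt.imshow(masked, cmap=cmap)
plt.colorbar()
plt.show()
```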
---
Title: Locking of HDF files using h5py
Q_Id 50,977,839 | A_Id 57,412,703 | CreationDate 2018-06-21T21:46:00.000 | Tags: python-2.7,hdf5,h5py,hdf | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 0 | Score 0 | AnswerCount 5 | Available Count 4 | ViewCount 7,784
Answer:
with h5py.File(), the same .h5 file can be open for read ("r") multiple times. But h5py doesn't support more than a single thread. You can experience bad data with multiple concurrent readers.
Question: I have a whole bunch of code interacting with hdf files through h5py. The code has been working for years. Recently, with a change in python environments, I am receiving this new error message. IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') What is ...
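A small sketch of the usual fix on the reader side: a context manager guarantees the handle (and its lock) is released; file and dataset names here are hypothetical:

```python
import h5py

with h5py.File("results.h5", "r") as f:   # hypothetical file
    data = f["measurements"][()]          # hypothetical dataset, read while open
# handle closed here, so the file lock is released for other processes
```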
---
Title: Locking of HDF files using h5py
Q_Id 50,977,839 | A_Id 71,661,095 | CreationDate 2018-06-21T21:46:00.000 | Tags: python-2.7,hdf5,h5py,hdf | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 0 | Score 0 | AnswerCount 5 | Available Count 4 | ViewCount 7,784
Answer:
Similar as with the other answers I had already opened the file, but for me it was in a separate HDF5 viewer.
Question: same as the previous entry (Q_Id 50,977,839).
---
Title: Locking of HDF files using h5py
Q_Id 50,977,839 | A_Id 51,071,193 | CreationDate 2018-06-21T21:46:00.000 | Tags: python-2.7,hdf5,h5py,hdf | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 2 | Score 0.07983 | AnswerCount 5 | Available Count 4 | ViewCount 7,784
Answer:
My issue! Failed to close the file in an obscure method. Interesting thing is that unlocking the file in some cases just took a restart of ipython, other times took a full reboot.
Question: same as the previous entry (Q_Id 50,977,839).
---
Title: Locking of HDF files using h5py
Q_Id 50,977,839 | A_Id 65,687,853 | CreationDate 2018-06-21T21:46:00.000 | Tags: python-2.7,hdf5,h5py,hdf | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 1 | Score 0.039979 | AnswerCount 5 | Available Count 4 | ViewCount 7,784
Answer:
I had another process running that I did not realize. How I solved my problem: used ps aux | grep myapp.py to find the process that was running myapp.py, killed that process using the kill command, then ran it again.
Question: same as the previous entry (Q_Id 50,977,839).
---
Title: How to cluster some object in HAC but they have same value of Cosine Similarity
Q_Id 50,980,199 | A_Id 50,995,396 | CreationDate 2018-06-22T03:23:00.000 | Tags: python-2.7,cluster-analysis,hierarchical-clustering,cosine-similarity | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 18
Answer:
With Cosine similarity, you'll probably want to stop at 0... But of course the problem of ties can arise with any distance function, too. But there obviously is no mathematical answer. They are all equally good. Usually, one hopes that the order does not matter. For a it doesn't, but for all other it does. Don't forget...
Question: I want to cluster Object A with Object B or Object C. But value of Cosine Similarity Object A with Object B is 0 and Cosine Similarity Object A with Object C is 0. Before it directly clustered, I need to cluster those object step by stem, which one should be combined first Object A with B or Object A with C?
---
Title: Tensor flow package installation in PyCharm
Q_Id 50,993,189 | A_Id 51,001,967 | CreationDate 2018-06-22T17:44:00.000 | Tags: python-3.x,tensorflow,pycharm | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | ViewCount 182
Answer:
Please run pip install tensorflow in command line and post the output here. Tensorflow can be installed on Windows but the process is often annoying.
Question: I have been successfully using PyCharm for my python work.All the packages can be easily installed by going to settings and then project interpreter but tensorflow installation is showing error.In suggestions it asked me to upgrade pip module.But even after that it shows error with following message: " Could not fin...
---
Title: Tensor flow package installation in PyCharm
Q_Id 50,993,189 | A_Id 51,021,687 | CreationDate 2018-06-22T17:44:00.000 | Tags: python-3.x,tensorflow,pycharm | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | ViewCount 182
Answer:
You could also try Anaconda. It has a very nice UI and you can switch between different versions.
Question: same as the previous entry (Q_Id 50,993,189).
---
Title: Getting IDs from t-SNE plot?
Q_Id 50,993,934 | A_Id 50,994,909 | CreationDate 2018-06-22T18:41:00.000 | Tags: python,mapping | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 3 | Users Score 1 | Score 0.197375 | AnswerCount 1 | Available Count 1 | ViewCount 660
Answer:
If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs match the ordering of the outputs. So if you do y=TSNE(n_components=n).fit_transform(x) then y and x will be in the same order so y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.
Question: Quite simple, If I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point. But how do I map these to the original IDs? One way that I can think of is if the indices are kept fixed the entire time, then I can do: Pick a point in t-SNE See what row it was in t-SNE...
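A minimal sketch of the row-order guarantee the answer describes, with hypothetical IDs:

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(100, 50)
ids = ["item_%d" % i for i in range(100)]   # hypothetical identifiers

emb = TSNE(n_components=2).fit_transform(X)
# fit_transform preserves row order: emb[i] embeds X[i], i.e. ids[i].
coords_by_id = dict(zip(ids, emb))
```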
---
Title: Ignoring visible gpu device with compute capability 3.0. The minimum required Cuda capability is 3.5
Q_Id 50,995,707 | A_Id 56,987,465 | CreationDate 2018-06-22T21:16:00.000 | Tags: python,docker,tensorflow | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 4 | Users Score 1 | Score 0.066568 | AnswerCount 3 | Available Count 1 | ViewCount 9,306
Answer:
I just spent a day trying to build this thing from source and what worked for me finally is quite surprising: the pre-built wheel for TF 1.5.0 does not complain about this anymore, while pre-built wheel for TF 1.14.0 does complain. It seems you have used the same version, so it's quite interesting, but I thought I woul...
Question: I am running Tensorflow 1.5.0 on a docker container because i need to use a version that doesn't use the AVX bytecodes because the hardware i am running on is too old to support it. I finally got tensorflow-gpu to import correctly (after downgrading the docker image to tf 1.5.0) but now when i run any code to detect th...
---
Title: Unable to import cv2 in 64 bit version of python 3.6.5
Q_Id 51,001,356 | A_Id 51,002,024 | CreationDate 2018-06-23T12:54:00.000 | Tags: python,opencv | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 433
Answer:
You may uninstall previous opencv installation via pip uninstall opencv-python then do the pip install opencv-python.
Question: I had installed opencv in python 3.6 32 bit version using the command 'pip install opencv-python', which I successfully used. Later when I upgraded my version to 64 bit as to use tensorflow as well, and ran the same command 'pip install opencv-python', opencv was already present, yet when I tried to import cv2 module, ...
---
Title: importing library Jupyter Notebook vs Canopy
Q_Id 51,006,130 | A_Id 51,012,003 | CreationDate 2018-06-24T00:45:00.000 | Tags: python,jupyter,canopy | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 1 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 327
Answer:
Each Python environment is independent. Installing a package into an anaconda Python environment does not install it into a Canopy Python environment (nor into a different anaconda Python environment). This is a feature, not a bug; it allows different Python environments to be configured differently, even incompatibly....
Question: I just installed Canopy because I had some issues running code in Jupyter Notebook. I have an Anaconda distribution installed. I installed OpenCV through anaconda and can easily import cv2 in Jupyter Notebook. However, when I import cv2 in Canopy IDE it says "No module named cv2". How can I safely fix this?
---
Title: numpy concatenate multiple arrays
Q_Id 51,017,203 | A_Id 51,192,080 | CreationDate 2018-06-25T06:08:00.000 | Tags: python,numpy,concatenation | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 1,321
Answer:
So, the main problem here was with the one of the arrays of shape (0,) instead of (0,227,227,3). np.concatenate(alist,axis=0) works.
Question: I have many numpy arrays of shape (Ni,227,227,3), where Ni of each array is different. I want to join them and make array of shape (N1+N2+..+Nk,227,227,3) where k is the number of arrays. I tried numpy.concatenate and numpy.append but they ask for same dimension in axis 0. I am also confused on what is axis 1 and axis ...
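A sketch of the fix described: drop the stray (0,)-shaped placeholder, then concatenate along axis 0:

```python
import numpy as np

arrays = [np.zeros((3, 227, 227, 3)),
          np.zeros((0,)),               # the offending placeholder
          np.zeros((5, 227, 227, 3))]

arrays = [a for a in arrays if a.ndim == 4]   # keep only real batches
batch = np.concatenate(arrays, axis=0)        # shape (8, 227, 227, 3)
```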
---
Title: How to add rows to pandas dataframe with reasonable performance
Q_Id 51,018,628 | A_Id 51,018,770 | CreationDate 2018-06-25T07:56:00.000 | Tags: python,pandas,dataframe | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 4 | Available Count 2 | ViewCount 88
Answer:
The fastest way would be to load the dataframe directly via pd.read_csv(). Try separating the logic that turns the unstructured data into structured data, and then use pd.read_csv to load the dataframe. If you share a sample unstructured line and the logic that extracts the structured data, that might offer some insight.
Question: I have an empty data frame with about 120 columns, I want to fill it using data I have in a file. I'm iterating over a file that has about 1.8 million lines. (The lines are unstructured, I can't load them to a dataframe directly) For each line in the file I do the following: Extract the data I need from the current li...
---
Title: How to add rows to pandas dataframe with reasonable performance
Q_Id 51,018,628 | A_Id 51,018,824 | CreationDate 2018-06-25T07:56:00.000 | Tags: python,pandas,dataframe | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 4 | Available Count 2 | ViewCount 88
Answer:
Where you use append you end up copying the dataframe which is inefficient. Try this whole thing again but avoiding this line: df = df.append(df.iloc[-1]) You could do something like this to copy the last row to a new row (only do this if the last row contains information that you want in the new row): df.iloc[...ca...
Question: same as the previous entry (Q_Id 51,018,628).
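A sketch of the append-free pattern both answers point toward: accumulate plain dicts and build the frame once; parse_line is a hypothetical stand-in for the extraction logic:

```python
import pandas as pd

def parse_line(line):
    # hypothetical: turn one unstructured line into a {column: value} dict
    return {"col_a": line[:10], "col_b": len(line)}

rows = []
with open("input.txt") as fh:            # hypothetical input file
    for line in fh:
        rows.append(parse_line(line))

df = pd.DataFrame(rows)   # one allocation instead of a copy per appended row
```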
---
Title: subsample, colsample_bytree, colsample_bylevel in XGBClassifier() Python 3.x
Q_Id 51,022,822 | A_Id 56,691,091 | CreationDate 2018-06-25T11:50:00.000 | Tags: python-3.x,xgboost | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 8 | Users Score 14 | Score 1 | AnswerCount 3 | Available Count 1 | ViewCount 13,307
Answer:
The idea of "subsample", "colsample_by_tree", and "colsample_bylevel" comes from Random Forests. In it, you build an ensemble of many trees and then group them together when making a prediction. The "random" part happens through random sampling of the training samples for each tree (bootstrapping), and building each ...
Question: I've spent a good deal of time trying to find out what these "subsample", "colsample_by_tree", and "colsample_bylevel" actually did in XGBClassifier() but I can't exactly find out what they do. Can someone please explain briefly what it is they do? Thanks!
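The three knobs as they appear on the estimator; a minimal sketch with illustrative values:

```python
from xgboost import XGBClassifier

clf = XGBClassifier(
    subsample=0.8,          # fraction of training rows sampled for each tree
    colsample_bytree=0.8,   # fraction of features sampled once per tree
    colsample_bylevel=0.8,  # fraction of those features resampled per depth level
)
```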
---
Title: Python pandas dataframe transaction
Q_Id 51,023,642 | A_Id 51,035,810 | CreationDate 2018-06-25T12:36:00.000 | Tags: python,pandas,dataframe,transactions,sqlalchemy | Topics: Database and SQL, Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 1,173
Answer:
After further investigation I realized that it is possible to do only with sqlite3, because to_sql supports both a sqlalchemy engine and a plain connection object as the conn parameter, but as a connection it is supported only for a sqlite3 database. In other words you have no influence on the connection which will be created by to...
Question: Please suggest a way to execute SQL statement and pandas dataframe .to_sql() in one transaction I have the dataframe and want to delete some rows on the database side before insertion So basically I need to delete and then insert in one transaction using .to_sql of dataframe I use sqlalchemy engine with pandas.df.to_sq...
---
Title: Maximising prediction accuracy of the majority class in an imbalanced dataset
Q_Id 51,025,178 | A_Id 51,275,886 | CreationDate 2018-06-25T13:53:00.000 | Tags: python,optimization,classification,data-science | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 129
Answer:
You are thinking about this the wrong way. If all you cared about was the majority class, you could just predict everything as belonging to the majority class. You'd get 100% of them right. You would have lots of false positives, but you don't care about those right? Ah, if you do care about the false positives, then ...
Question: When talking about imbalanced datasets, most articles would refer to maximising the prediction of the minority class (e.g. for fraud detection). I have an imbalanced dataset (ratio approximately 1:20). where I am interested to achieve the highest prediction accuracy for the majority class. My work is in Python. Possibl...
---
Title: Install Keras/Tensorflow on Mac with cpu python2.7
Q_Id 51,026,983 | A_Id 51,027,093 | CreationDate 2018-06-25T15:25:00.000 | Tags: python,macos,python-2.7,tensorflow,cpu | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | ViewCount 784
Answer:
I think what you read meant that tensorflow programs work much faster if your computer has a GPU. You need a Nvidia GPU in your computer to install tensorflow with GPU support on your Mac and as far as I know, after version 1.2 tensorflow no longer provides GPU support for MacOS
Question: I recently found an article that indicates that the conventional methods for downloading python machine learning modules such as tensorflow and keras are not optimized for computers with a cpu. How can I configure tensorflow and keras to make it most compatible with my processor on MacOSX in python 2.7? If it helps,...
---
Title: How to restart dlib's correlation tracker
Q_Id 51,037,016 | A_Id 51,041,372 | CreationDate 2018-06-26T07:17:00.000 | Tags: python,tracking,dlib | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 213
Answer:
Call the correlation_tracker's start_track() member function.
Question: I'm using dlib's correlation tracker and would like to restart it on some cue. When I pass None as the image it crashes. How can I tell the tracker a new video is starting? I'm using multiple threads and would not like to open a new tracker every time. Thank you!
---
Title: Linear Regression vs Random Forest performance accuracy
Q_Id 51,037,363 | A_Id 51,039,032 | CreationDate 2018-06-26T07:35:00.000 | Tags: python,data-science | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 5 | Users Score 3 | Score 0.291313 | AnswerCount 2 | Available Count 2 | ViewCount 9,961
Answer:
There for sure have to be situations where Linear Regression outperforms Random Forests, but I think the more important thing to consider is the complexity of the model. Linear Models have very few parameters, Random Forests a lot more. That means that Random Forests will overfit more easily than a Linear Regression.
Question: If the dataset contains features some of which are Categorical Variables and some of the others are continuous variable Decision Tree is better than Linear Regression,since Trees can accurately divide the data based on Categorical Variables. Is there any situation where Linear regression outperforms Random Forest?
---
Title: Linear Regression vs Random Forest performance accuracy
Q_Id 51,037,363 | A_Id 51,062,800 | CreationDate 2018-06-26T07:35:00.000 | Tags: python,data-science | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 5 | Users Score 2 | Score 0.197375 | AnswerCount 2 | Available Count 2 | ViewCount 9,961
Answer:
Key advantages of linear models over tree-based ones are: they can extrapolate (e.g. if labels are between 1-5 in train set, tree based model will never predict 10, but linear will) could be used for anomaly detection because of extrapolation interpretability (yes, tree based models have feature importance, but it's o...
Question: same as the previous entry (Q_Id 51,037,363).
---
Title: how does Keras flow_from_directory affect computer storage?
Q_Id 51,048,421 | A_Id 51,051,880 | CreationDate 2018-06-26T17:21:00.000 | Tags: python-3.x,tensorflow,machine-learning,neural-network,keras | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 0 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 369
Answer:
By default, ImageDataGenerator does data augmentation on the fly and does not store the augmented images anywhere. As you mention, doing so would require too much space. So you should only worry about having enough RAM to fit a certain number of augmented batches, not the whole dataset.
Question: I'm trying to figure out the minimum amount of room I will need to train neural networks on my machine. Often times (image) data sets are relatively small in their raw forms, but when we transform them (in keras w/ flow_from_dir) we augment the images and kind of multiply the size of the data set to our desire. My que...
---
Title: Evaluation of Forecasting performance Metric on original or transformed Dependent Variable
Q_Id 51,057,993 | A_Id 51,058,681 | CreationDate 2018-06-27T08:24:00.000 | Tags: python,logging,machine-learning,transformation,metrics | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 514
Answer:
I would say that in your circumstance it is necessary to scale back to true prices. This is not an absolute statement, but really depends on the setup of your problem: if you have a true price that is "1", then its log will be "0" and, whatever you predict for that single point, you'll get undefined / infinite MAPE. So...
Question: I am building a machine learning model to forecast future prices in scikit-learn. The dependent variable price is not normally distributed, thus, I will perform log transformation on only dependent variable price using np.log(price). After this, I will split complete data-set into train and test sets. Thus y_train and...
---
Title: TimeoutError import error when using matlab engine with python 3.5
Q_Id 51,064,328 | A_Id 55,392,343 | CreationDate 2018-06-27T13:40:00.000 | Tags: python,matlab,matlab-engine | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 429
Answer:
I get a similar error. After trying several times, I found that for the same *.py script, the lines import matlab.engine and eng = matlab.engine.start_matlab() should be run only once. I commented them out, and by doing this I can re-run the *.py script again. Otherwise, it will raise the error ImportE...
Question: I am trying to run a function written in matlab in a python script using matlab.engine. The first time I run the script everything works fine, but when I try to run the script again I get the error "ImportError: cannot import name 'TimeoutError'" on importing the matlab engine. Restarting the kernel allows me to run th...
---
Title: the best way to conduct fft using GPU acceleration with cuda
Q_Id 51,066,245 | A_Id 51,066,356 | CreationDate 2018-06-27T15:12:00.000 | Tags: python,cuda,cufft | Topics: Other, Data Science and Machine Learning | is_accepted: false | Q_Score 1 | Users Score 1 | Score 0.099668 | AnswerCount 2 | Available Count 1 | ViewCount 1,084
Answer:
I want to use pycuda to accelerate the fft You can't. PyCUDA has no built-in FFT support of any kind.
Question: In python, what is the best way to run fft using cuda gpu computation? I am using pyfftw to accelerate the fftn, which is about 5x faster than numpy.fftn. I want to use pycuda to accelerate the fft. I know there is a library called pyculib, but I always failed to install it using conda install pyculib. Is there any suggest...
---
Title: Separate a list based on values in a second list
Q_Id 51,071,467 | A_Id 51,071,525 | CreationDate 2018-06-27T21:11:00.000 | Tags: python | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 4 | Available Count 1 | ViewCount 123
Answer:
list(itertools.compress(X, Y)) will get you the list of good lists. list(itertools.compress(X, [not a for a in Y])) will get you the list of bad lists.
Question: X: a list of lists, where each list element corresponds to a label in Y Y: a binary list of labels (values are either 1 or 0) I want to extract the elements in X according to the value at the corresponding index in Y, as follows: good = values of X where the label/value in Y is 1 bad = values of X where the label/value...
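A runnable sketch of the two itertools.compress calls from the answer, on data shaped like the question's:

```python
from itertools import compress

X = [["a", 1], ["b", 2], ["c", 3], ["d", 4]]
Y = [1, 0, 1, 0]

good = list(compress(X, Y))                    # [['a', 1], ['c', 3]]
bad = list(compress(X, [not y for y in Y]))    # [['b', 2], ['d', 4]]
```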
---
Title: how to plot different stats for every year
Q_Id 51,076,577 | A_Id 51,078,286 | CreationDate 2018-06-28T07:01:00.000 | Tags: python,matplotlib,time,statistics | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 1 | Available Count 1 | ViewCount 28
Answer:
A line chart for each stat with year on the x-axis, percentage on the y-axis and a line for each user.
Question: I have a dataset where over 5 years, for each person, i have 3 stats (for, against, neutral) which are represented as percentage. Do you have any ideas on how to plot this over time for each person ? I tought of a pie chart for each year, is it good idea ? year|x|y|z|uniq_key 2011|0.005835365238989241|0.776126314927817...
---
Title: Why doesn't Dask dataframe have a shape attribute?
Q_Id 51,080,681 | A_Id 51,085,292 | CreationDate 2018-06-28T10:33:00.000 | Tags: python,dataframe,dask | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 0 | Users Score 2 | Score 1.2 | AnswerCount 1 | Available Count 1 | ViewCount 660
Answer:
This has been discussed in dask. First I'll point out that in the python spec, len() is always supposed to return a concrete integer. Dask respects this, and so len(df) blocks, unlike most operations on a data-frame. There is no such constraint on .size, which is therefore lazy. The metadata of the dataframe is immedia...
Question: Just out of curiosity, if dask enables both len() and size, why is there not shape as well?
---
Title: Is the usage of on-line data augmentation a fair comparison between CNN models
Q_Id 51,081,439 | A_Id 51,082,005 | CreationDate 2018-06-28T11:11:00.000 | Tags: python,tensorflow,machine-learning,keras,convolutional-neural-network | Topics: Data Science and Machine Learning | is_accepted: true | Q_Score 2 | Users Score 2 | Score 1.2 | AnswerCount 2 | Available Count 2 | ViewCount 424
Answer:
If I understand you correct you are wondering whether the randomness caused by the data augmentation affects the result? The randomness of the augmentation does not affect the result (at least not to a degree that makes a difference anyway) if you train long enough. The other options you have are (as I think about it):...
Question: I am using on-line data augmentation of images I feed into my Convolutional Neural Network. I am using the Keras ImageDataGenerator for this. The images are augmented in each batch and then the model is trained on these images. I am comparing different models, but since the images are augmented on the fly, is this real...
---
Title: Is the usage of on-line data augmentation a fair comparison between CNN models
Q_Id 51,081,439 | A_Id 51,081,966 | CreationDate 2018-06-28T11:11:00.000 | Tags: python,tensorflow,machine-learning,keras,convolutional-neural-network | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 2 | Users Score 1 | Score 0.099668 | AnswerCount 2 | Available Count 2 | ViewCount 424
Answer:
In my opinion, you already give part of the answer within your question: images are augmented on the fly, is this really fair, since each models is getting slightly different images? For Evaluation / Validation I usually try to provide situations as similar as possible over the different architectures - otherwise yo...
Question: same as the previous entry (Q_Id 51,081,439).
---
Title: OpenCV install on Python 3.6: ModuleNotFoundError
Q_Id 51,082,504 | A_Id 51,082,709 | CreationDate 2018-06-28T12:06:00.000 | Tags: python,module,pip | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 2 | ViewCount 1,891
Answer:
Use pip install -U opencv-python
Question: I am getting the following error when using import cv2: ModuleNotFoundError: No module named 'cv2' My version of Python is 3.6 64 bit. I have downloaded the whl file to install it via pip manually, and have also installed it with pip install opencv-python however I still get ModuleNotFoundError. pip outputs Requiremen...
---
Title: OpenCV install on Python 3.6: ModuleNotFoundError
Q_Id 51,082,504 | A_Id 51,082,582 | CreationDate 2018-06-28T12:06:00.000 | Tags: python,module,pip | Topics: Python Basics and Environment, Data Science and Machine Learning | is_accepted: false | Q_Score 0 | Users Score 2 | Score 0.197375 | AnswerCount 2 | Available Count 2 | ViewCount 1,891
Answer:
Your download must have been corrupted. It happened to me too. Simply uninstall the package and use sudo apt-get install python open-cv
Question: same as the previous entry (Q_Id 51,082,504).
---
Title: Error by running script with python3
Q_Id 51,105,431 | A_Id 51,105,559 | CreationDate 2018-06-29T15:50:00.000 | Tags: python-3.x,pandas | Topics: Data Science and Machine Learning | is_accepted: false | Q_Score 2 | Users Score 0 | Score 0 | AnswerCount 2 | Available Count 1 | ViewCount 23
Answer:
Maybe you’re running python2 instead of python3, but the immediate problem is the syntax itself: from pandas as pd is invalid in any Python version - the correct form is import pandas as pd. Also make sure pandas is installed for the interpreter you use (pip install pandas).
I have tried to run a script with pandas in python3, but it appears to have an terrible error I even don't understand. Note: my script is "gh.py" and has an error in line 1, i can't import pandas. File "gh.py", line 1 from pandas as pd ^ SyntaxError: invalid syntax
0
1
23
0
51,106,833
0
0
0
0
1
true
0
2018-06-29T16:57:00.000
0
1
0
Tensorflow 1.9 DNNClassifier unequal output labels optimization
51,106,402
1.2
python,tensorflow,machine-learning,artificial-intelligence
What you said is considered one approach to solving this issue, as this prevents your computed gradient from being dominated by Classes 1 and 2. Depending on your data, it may be effective to add some Gaussian noise to the underrepresented samples to create similar samples. Another good idea in general to avoid overfit...
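One way to act on the class-imbalance point is to oversample the rare classes before training; a rough numpy sketch (X and y are assumed, with y holding the 5 integer labels):

    import numpy as np

    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        np.random.choice(np.where(y == c)[0], size=target, replace=True)
        for c in classes
    ])
    X_balanced, y_balanced = X[idx], y[idx]  # every class now equally represented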
I have a classification problem which requires some optimization, as my results are not quite adequate. I'm using the DNNClassifier for a huge dataset in order to classify items in 5 different classes (labels). I have over 2000 distinct items (in a hashbucket column with size 2000 and dims 6 - is this adequate?) and mu...
0
1
84
0
51,121,201
0
1
0
0
1
false
5
2018-07-01T07:10:00.000
0
2
0
How to replace all string in all columns using pandas?
51,121,170
0
python,python-3.x,pandas
Try this: df['Title'] = df['Title'].str.replace("&amp;", "&"). Note that plain Series.replace("&amp;", "&") only matches whole cell values, so it would miss 'Good &amp; bad'; str.replace (or replace(..., regex=True)) does the substring substitution you want.
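Since the question asks about all columns, a minimal sketch for the whole dataframe (the sample data is made up):

    import pandas as pd

    df = pd.DataFrame({"Title": ["Good &amp; bad"], "Crew": ["salt &amp; pepper"]})
    df = df.replace("&amp;", "&", regex=True)  # substring replacement in every column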
In pandas, how do I replace &amp; with '&' from all columns where &amp could be in any position in a string? For example, in column Title if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?
0
1
11,751
0
51,134,741
0
0
0
0
1
false
2
2018-07-02T10:22:00.000
1
2
0
How can i scale a thickness of a character in image using python OpenCV?
51,133,962
0.099668
python,image,opencv,computer-vision
One possible solution that I can think of is to alternate erosion and finding contours until you have only one contour left (that should be the thickest). This could work if the difference in thickness is enough, but I can also foresee many particular cases that can prevent a correct identification, so it depends very much ...
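A rough OpenCV sketch of that idea (the file name is a placeholder; findContours uses the OpenCV 4 two-value signature here):

    import cv2
    import numpy as np

    img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    while True:
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) <= 1:
            break  # the last surviving blob is the thickest digit
        bw = cv2.erode(bw, kernel)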
I created one task, where I have white background and black digits. I need to take the largest by thickness digit. I have made my picture bw, recognized all symbols, but I don't understand, how to scale thickness. I have tried arcLength(contours), but it gave me the largest by size. I have tried morphological operatio...
0
1
2,238
0
51,139,393
0
0
0
0
1
false
0
2018-07-02T11:26:00.000
1
1
0
Gensim Word2vec Freeze some wordvectors and Update others
51,135,118
0.197375
python,word2vec,gensim
There is! But it's an experimental feature with little documentation – you'd need to read the source to fully understand it, and directly mutate your model to make use of it. Look through the word2vec.py source for properties ending _lockf – specifically in the latest code, one named vectors_lockf. It's a sort of mask ...
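A heavily hedged sketch of how that mutation might look - the attribute name and location differ across gensim versions (syn0_lockf in older releases, trainables.vectors_lockf around gensim 3.x), so treat every name below as an assumption to verify against your installed source:

    # model: an already-trained gensim Word2Vec
    # new_sentences / new_words: your update corpus and its new tokens
    model.build_vocab(new_sentences, update=True)

    lockf = model.trainables.vectors_lockf      # name varies by gensim version
    lockf[:] = 0.0                              # freeze all existing vectors
    for w in new_words:
        lockf[model.wv.vocab[w].index] = 1.0    # keep only the new words trainable

    model.train(new_sentences, total_examples=len(new_sentences), epochs=model.epochs)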
Regarding word2vec with gensim, Suppose you already trained a model on a big corpus, and you want to update it with new words from new sentences, but not update the words which already have a vector. Is it possible to freeze the vectors of some words and update only some chosen words (like the new words) when calling m...
0
1
436
0
51,140,808
0
0
0
0
1
false
0
2018-07-02T17:04:00.000
0
2
0
Count Specific Values in Dataframe
51,140,765
0
python,python-3.x,pandas
You could use len([x for x in df["Sex"] if x == "Male"]). This iterates through the Sex column of your dataframe and determines whether an element is "Male" or not. If it is, it is appended to a list via list comprehension. The length of that list is the number of Males in your dataframe.
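For reference, the built-in pandas way also works; a tiny sketch with made-up data:

    import pandas as pd

    df = pd.DataFrame({"Sex": ["Male", "Female", "Male"]})
    print(df["Sex"].value_counts())      # Male 2, Female 1
    print((df["Sex"] == "Male").sum())   # count of a single value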
If I had a column in a dataframe, and that column contained two possible categorical variables, how do I count how many times each variable appeared? So e.g, how do I count how many of the participants in the study were male or female? I've tried value_counts, groupby, len etc, but seem to be getting it wrong. Thanks
0
1
2,617
0
71,822,982
0
0
0
0
1
false
0
2018-07-02T18:32:00.000
0
1
1
Kafka Messages are split in consumer
51,141,942
0
python-3.x,apache-kafka,streaming
Try setting fetch.message.max.bytes=2000000 on the consumer side.
I am new to Kafka. I am sending a request to REST server and send the response of the request to kafka server as messages. When i consume the data from consumer the message is split into multiple smaller messages. How do i avoid this. The response is a JSON row. I want each json row to be one message. Any help would be...
0
1
556
0
52,422,681
0
0
0
0
1
false
1
2018-07-02T19:59:00.000
1
1
0
Error while importing Keras
51,142,979
0.197375
python,tensorflow,keras
pip install --upgrade pip setuptools worked for me.
I am facing error while importing Keras. Below is the error trace: Using TensorFlow backend. Traceback (most recent call last): File "recognize.py", line 8, in <module> import keras File "/home/pi/.local/lib/python2.7/site-packages/keras/__init__.py", line 3, in <module> from . import utils File "/home/pi...
0
1
588
0
66,550,315
0
0
0
0
1
false
27
2018-07-02T20:18:00.000
2
4
0
Difference between tensor.permute and tensor.view in PyTorch?
51,143,206
0.099668
python,multidimensional-array,deep-learning,pytorch,tensor
tensor.permute() reorders the axes of a tensor, so the data is traversed in a different order. tensor.view() reshapes the tensor (analogous to numpy.reshape) while keeping the flat element order: if one dimension grows, the others must shrink so the total number of elements stays the same.
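A tiny sketch that makes the difference visible:

    import torch

    x = torch.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
    print(x.permute(1, 0))  # [[0, 3], [1, 4], [2, 5]] -- axes swapped, strides change
    print(x.view(3, 2))     # [[0, 1], [2, 3], [4, 5]] -- same flat order, new shape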
What is the difference between tensor.permute() and tensor.view()? They seem to do the same thing.
0
1
25,972
0
51,153,467
0
0
0
0
1
false
0
2018-07-03T08:31:00.000
3
1
0
What is the difference between sklearn.cross_validation and sklearn.model_estimation?
51,149,995
0.53705
python,machine-learning,scikit-learn
cross_validation is the older module previously used in scikit-learn. model_selection is its newer replacement (it replaced some other modules too). It has some structural changes in the classes defined in it, so a class which was previously in cross_validation is now present in model_selection but with changed beha...
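In code, the migration is just the import path; for example:

    # old, removed in later scikit-learn releases:
    # from sklearn.cross_validation import train_test_split

    # new location of the same functionality:
    from sklearn.model_selection import train_test_split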
I want to know the difference between importing sklearn.model_estimation and sklearn.cross_validation when I run Python code for linear regression. I found out that sklearn.model_estimation calls a method called next(ShuffleSplit().split(X, y)) and sklearn.cross_validation calls a method called next(iter(ShuffleSplit(n...
0
1
819
0
51,152,822
0
0
0
1
1
true
0
2018-07-03T09:37:00.000
0
2
0
Creating star schema from csv files using Python
51,151,263
1.2
python,csv,star-schema
Reading certain blogs, it looks like handling such cases in memory in Python is not ideal, but if the post below makes sense you can use it. Fact loading: the first step in DW loading is dimensional conformance. With a little cleverness the above processing can all be done in parallel, hogging a lot of CPU time. T...
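A minimal pandas sketch of the merge step, assuming hypothetical CSVs fact.csv and dim_product.csv that share a product_id key:

    import pandas as pd

    fact = pd.read_csv("fact.csv")                 # quantities plus foreign keys
    dim_product = pd.read_csv("dim_product.csv")   # product_id plus attributes

    # conform the dimension, then attach it to the fact rows on the shared key
    fact_table = fact.merge(dim_product[["product_id"]], on="product_id", how="left")
    fact_table.to_csv("fact_table.csv", index=False)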
I have 6 dimension tables, all in the form of csv files. I have to form a star schema using Python. I'm not sure how to create the fact table using Python. The fact table (theoretically) has at least one column that is common with a dimension table. How can I create the fact table, keeping in mind that quantities from...
0
1
1,539
0
51,211,614
0
1
0
0
1
false
0
2018-07-03T13:11:00.000
0
1
0
How is dask implemented on multiple systems?
51,155,513
0
python-2.7,parallel-processing,dask,dask-distributed
Dask dataframes are chunked, so in general you have one big dataframe made up of smaller dataframes spread across your cluster. Computations apply to each chunk individually with shuffling of results where required (such as groupby, sum and other aggregate tasks).
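A short sketch of what that looks like from the user's side (the file pattern and column names are assumptions):

    import dask.dataframe as dd

    df = dd.read_csv("data-*.csv")   # each block becomes one pandas chunk, spread over workers
    result = df.groupby("key")["value"].sum().compute()  # per-chunk sums are shuffled and combined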
I am new to Dask library.I wanted to know if we implement parallel computation using dask on two systems ,then is the data frame on which we apply the computation stored on both the systems ? How actually does the parallel computation takes place,it is not clear from the documentation.
0
1
52
0
51,195,993
0
0
0
0
1
false
2
2018-07-03T17:27:00.000
0
2
0
Which newline character is in my CSV?
51,160,071
0
python,csv,ssis,delimiter,eol
Seeing that you have EmEditor, you can use EmEditor to find the eol character in two ways:

- Use View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position.
- Go to View > Marks and turn on Newline Characters and CR and LF with Different Mark...
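If you prefer to check from Python instead, a small sketch that inspects the raw bytes (the file name is an assumption):

    with open("extract.csv", "rb") as f:
        sample = f.read(4096)
    print("CRLF" if b"\r\n" in sample else "LF" if b"\n" in sample else "CR or other")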
We receive a .tar.gz file from a client every day and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file which I achieve via a Python script. After unzipping we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading...
0
1
2,081
0
51,160,557
0
0
0
0
1
false
1
2018-07-03T17:43:00.000
1
1
0
CNTK evaluation for image classification
51,160,298
0.197375
python,image,classification,cntk
Reshaping your input would not affect the output of the model. If it is only predicting one class for every image, it is an issue with model training. I would suggest you try predicting on your training data to see if it only predicts one class on the training data. If that is the case, it is definitely a model trai...
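On the immediate shape error, adding the channel axis is a one-liner; a sketch assuming img holds the loaded 64x64 grayscale array:

    import numpy as np

    img = np.zeros((64, 64), dtype=np.float32)  # stand-in for the real image
    img = img.reshape(1, 64, 64)                # (channels, height, width) as the model expects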
I built an image classifier using CNTK. The images are grayscale. Therefore, I entered the number of channels as 1. So, the model requires (1x64x64) data (64 being the image height and width). The problem is, when I try to predict the class of a new image, it is seen as (64x64) only. So, the code errors out due to data...
0
1
72
0
51,180,855
0
0
0
0
1
true
0
2018-07-04T01:22:00.000
2
2
0
How to crop a circular image to inscribed square, then crop to inscribed circle, and finally crop to inscribed square?
51,164,645
1.2
python,image,opencv,image-processing,python-imaging-library
The image you want to crop to is, geometrically, a square centered on the input image, half as large. This is because you're inscribing twice, each time the square shrinks by the square root of two, and dividing by SQRT(2) twice is the same thing as dividing by 2. So if you have an input square of side D (or a circular...
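Put differently, the three steps collapse into a single center crop of side D/2; a minimal Pillow sketch (the file name is a placeholder, and the canvas is assumed square):

    from PIL import Image

    img = Image.open("circle.png")
    d = img.size[0]
    side = d // 2                      # D / sqrt(2) / sqrt(2) == D / 2
    offset = (d - side) // 2
    result = img.crop((offset, offset, offset + side, offset + side))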
I would like to crop the circular image attached below according to the following: crop input circular image to the unique square inscribed in the circle. crop the square image down to the circle inscribed in the square crop the circular image from the previous step down to square that inscribes image. I am using pyt...
0
1
577
0
51,167,856
0
0
0
0
1
true
3
2018-07-04T03:22:00.000
1
1
0
How to train HMM with audio senteces dataset for speech recognition?
51,165,305
1.2
python,tensorflow,speech-recognition,mfcc,hmmlearn
Do I need to cut my sentences into words, or just use sentences to train HMM models? Theoretically you just need sentences and phonemes, but having isolated words may be useful for your model (it increases the size of your training data). Do I need a phoneme dataset for training? If yes, do I need to train it using HMM too ...
I have read some journals and paper of HMM and MFCC but i still got confused on how it works step by step with my dataset (audio of sentences dataset). My data set Example (Audio Form) : hello good morning good luck for you exam etc about 343 audio data and 20 speaker (6800 audio data) All i know : My sentences data...
0
1
795
0
51,165,703
0
0
0
0
1
true
0
2018-07-04T04:04:00.000
1
1
0
Data analytics using Python
51,165,589
1.2
python,data-analysis
I feel you can leave the fact tables as-is and combine the rest of the data; that way you reduce the amount of data you're dealing with and keep the star schema intact.
I have multiple csv files in the form of a star schema. To perform analytics using Python, is it better to combine all these csv files into one csv file, or to extract data from each csv file and then do analytics? People online have almost always combined all files into one and have then performed analytics. However, ...
0
1
143
0
51,173,697
0
1
0
0
1
true
1
2018-07-04T12:26:00.000
1
1
0
How do I install and run pytorch in MSVS2017 (to avoid "module not found" error on "import torch" statement)?
51,173,695
1.2
python,installation,anaconda,pytorch
Probably, at the date of our MSVS2017 installation (esp. if prior to April 2018), there were no official .whl files for Windows pytorch (this has since changed). Also, given the default installation pathway, permissions on Windows (or file lock access) may be a problem (for example, when attempting to install to the "...
I'm trying to use pytorch in MSVS2017. I started a pytorch project, have anaconda environment set using python3.6, but when I run the debugger, I get a "module not found" error on the first import statement "import torch". I've tried various methods for installing pytorch in a way that allows MSVS2017 to use it, incl...
0
1
92
0
51,180,929
0
0
0
0
1
false
0
2018-07-04T21:19:00.000
0
1
0
operation on blocks of a matrix efficiently in python
51,180,854
0
python,vectorization
You can iterate over the array and call numpy.average on each 5x5 block.
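Since the question explicitly asks to avoid loops, here is a different, fully vectorized technique based on reshape (standard numpy practice, not the iteration suggested above):

    import numpy as np

    a = np.random.rand(100, 200)
    # split each axis into (n_blocks, block_size), then average over both block-size axes
    block_means = a.reshape(20, 5, 40, 5).mean(axis=(1, 3))  # shape (20, 40): all 800 block means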
Let's say I have 100x200 numpy array of random numbers and wish to average blocks of 5x5 in the this array, that is I need the operation to be done on all 800 distinct blocks that are 5x5. I wonder if there is an efficient way to do this without nested loop and possibly without any loop.
0
1
26
0
51,184,234
0
1
0
0
1
false
0
2018-07-05T05:46:00.000
0
3
0
Import chainer in python throws error
51,184,116
0
python,anaconda,chainer
Try restarting your text editor and running the import again; sometimes the editor needs a restart for newly installed packages to take effect.
I get the error: module 'matplotlib.colors' has no attribute 'to_rgba', when i import chainer in ipynb. I am using python 2 ,anaconda 4.1.1 ,chainer 4 and matplotlib 1.5.1.could anyone asses the problem
0
1
322
0
51,226,434
0
0
0
0
1
false
1
2018-07-05T07:15:00.000
0
1
0
Fast way to determine the optimal number of topics for a large corpus using LDA
51,185,358
0
python,r,lda,topic-modeling
Start with a guess in the middle, then decrease and increase the number of topics by, say, 50 or 100 instead of 1, and check in which direction the coherence score improves. I am sure it will converge.
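The question is about R's lda package, but to illustrate the coarse search with a coherence score, a gensim sketch (corpus, dictionary and texts are assumed to be prepared):

    from gensim.models import LdaModel
    from gensim.models.coherencemodel import CoherenceModel

    for k in range(25, 201, 50):  # coarse steps first; refine around the best k afterwards
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k)
        score = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                               coherence="c_v").get_coherence()
        print(k, score)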
I have a corpus consisting of around 160,000 documents. I want to do a topic modeling on it using LDA in R (specifically the function lda.collapsed.gibbs.sampler in lda package). I want to determine the optimal number of topics. It seems the common procedure is to have a vector of topic numbers, e.g., from 1 to 100, t...
0
1
826
0
63,504,519
0
0
0
0
1
false
2
2018-07-05T21:37:00.000
2
3
0
How to create sequential number column in pyspark dataframe?
51,200,217
0.132549
python,dataframe,pyspark,sequential-number
Three simple steps:

    from pyspark.sql.window import Window
    from pyspark.sql.functions import monotonically_increasing_id, row_number

    df = df.withColumn("row_idx", row_number().over(Window.orderBy(monotonically_increasing_id())))
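Note that row_number() starts at 1; the question asks to start at 5, so add a constant offset on top of the snippet above (the column name A is taken from the question):

    df = df.withColumn("A", row_number().over(Window.orderBy(monotonically_increasing_id())) + 4)  # 5, 6, 7, ...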
I would like to create column with sequential numbers in pyspark dataframe starting from specified number. For instance, I want to add column A to my dataframe df which will start from 5 to the length of my dataframe, incrementing by one, so 5, 6, 7, ..., length(df). Some simple solution using pyspark methods?
0
1
11,579
0
51,209,553
0
0
0
0
1
true
4
2018-07-05T23:32:00.000
4
2
1
Python 3.6 airflow with a Operator that requires 2.7
51,201,188
1.2
python,tensorflow,google-cloud-dataflow,airflow
There isn't a way to specify the Python version dynamically on a worker. However, if you are using the Celery executor, you can run multiple workers either on different servers/VMs or in different virtual environments. You can have one worker running Python 3, and one running 2.7, and have each listening to differen...
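A minimal sketch of routing tasks to version-specific workers via Celery queues (the queue names, callables and dag object are all assumptions; each worker would be started with e.g. airflow worker -q py36):

    from airflow.operators.python_operator import PythonOperator

    dataflow_task = PythonOperator(task_id="preprocess", python_callable=preprocess_fn,
                                   queue="py27", dag=dag)   # picked up by the 2.7 worker
    train_task = PythonOperator(task_id="tf_training", python_callable=train_fn,
                                queue="py36", dag=dag)      # picked up by the 3.6 worker
    dataflow_task >> train_task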
I'm currently running an airflow (1.9.0) instance on python 3.6.5. I have a manual workflow that I'd like to move to a DAG. This manual workflow now requires code written in python 2 and 3. Let's simplify my DAG to 3 steps: Dataflow job that processes data and sets up data for Machine Learning Training Tensorflow ML ...
0
1
4,405