Schema (22 columns; numeric columns show min/max, string columns show min/max length):

- GUI and Desktop Applications: int64 (0 to 1)
- A_Id: int64 (5.3k to 72.5M)
- Networking and APIs: int64 (0 to 1)
- Python Basics and Environment: int64 (0 to 1)
- Other: int64 (0 to 1)
- Database and SQL: int64 (0 to 1)
- Available Count: int64 (1 to 13)
- is_accepted: bool (2 classes)
- Q_Score: int64 (0 to 1.72k)
- CreationDate: string (lengths 23 to 23)
- Users Score: int64 (-11 to 327)
- AnswerCount: int64 (1 to 31)
- System Administration and DevOps: int64 (0 to 1)
- Title: string (lengths 15 to 149)
- Q_Id: int64 (5.14k to 60M)
- Score: float64 (-1 to 1.2)
- Tags: string (lengths 6 to 90)
- Answer: string (lengths 18 to 5.54k)
- Question: string (lengths 49 to 9.42k)
- Web Development: int64 (0 to 1)
- Data Science and Machine Learning: int64 (1 to 1)
- ViewCount: int64 (7 to 3.27M)
Title: How to find the optimal number of clusters using k-prototype in python
Q_Id 49,166,657 | A_Id 49,523,832 | CreationDate 2018-03-08T06:22:00.000 | Tags: python,cluster-analysis
Topics: Data Science and Machine Learning | Q_Score 3 | Users Score 1 | Score 0.066568 | is_accepted: false | AnswerCount 3 | Available Count 2 | ViewCount 8,033
Question: I am trying to cluster some big data by using the k-prototypes algorithm. I am unable to use the K-Means algorithm as I have both categorical and numeric data. Via the k-prototypes clustering method I have been able to create clusters if I define what k value I want. How do I find the appropriate number of clusters for this? ...
Answer: Yeah, the elbow method is good enough to get the number of clusters, because it is based on the total sum of squares.
Title: Parallelize Pandas CSV Writing
Q_Id 49,175,681 | A_Id 49,182,785 | CreationDate 2018-03-08T14:36:00.000 | Tags: python,pandas
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 0 | Users Score 1 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 61
Question: Is it possible to write multiple CSVs out simultaneously? At the moment, I do a listdir() on an outputs directory, and iterate one-by-one through a list of files. I would ideally like to write them all at the same time. Has anyone had any experience in this before?
Answer: If you have only one HDD (not even an SSD drive), then the disk IO is your bottleneck and you'd better write to it sequentially instead of writing in parallel. The disk head needs to be positioned before writing, so trying to write in parallel will most probably be slower compared to one writer process. It would make s...
Title: Keras "Tanh Activation" function -- edit: hidden layers
Q_Id 49,188,928 | A_Id 49,190,269 | CreationDate 2018-03-09T07:46:00.000 | Tags: python-3.x,neural-network,keras,multiclass-classification,activation-function
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 641
Question: Tanh activation functions bound the output to [-1,1]. I wonder how this works if the input (features & target class) is given in one-hot-encoded form? How does Keras internally manage the negative output of the activation function to compare it with the class labels (which are in one-hot-encoded form) -- means only...
Answer: First of all, you simply shouldn't use them in your output layer. Depending on your loss function you may even get an error. A loss function like mse should be able to take the output of tanh, but it won't make much sense. But if we're talking about hidden layers, you're perfectly fine. Also keep in mind that there are bi...

Title: Install tensorflow1.2 with CUDA8.0 and cuDNN5.1 shows 'ImportError: libcublas.so.9.0'
Q_Id 49,193,808 | A_Id 49,312,396 | CreationDate 2018-03-09T12:25:00.000 | Tags: python,tensorflow,cuda,ubuntu-16.04,cudnn
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 341
Question: I want to install tensorflow1.2 on Ubuntu 16.04 LTS. After installing with pip, I test it with import tensorflow as tf in the terminal; the error shows that ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory. It seems that tensorflow needs a higher CUDA version, but the version of my ten...
Answer: Thanks to @Robert Crovella, you gave me the helpful solution to my question! When I tried a different way, pip install tensorflow-gpu==1.4, to install again, it found my older tensorflow 1.5 and uninstalled it to install the new tensorflow, but pip install --ignore-installed --upgrade https://URL... couldn'...

Title: Preprocessing machine learning data
Q_Id 49,195,008 | A_Id 49,195,249 | CreationDate 2018-03-09T13:34:00.000 | Tags: python,python-3.x,algorithm,machine-learning
Topics: Data Science and Machine Learning | Q_Score 3 | Users Score 1 | Score 0.197375 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 92
Question: This may be a stupid question, but I am new to ML and can't seem to find a clear answer. I have implemented a ML algorithm on a Python web app. Right now I am storing the data that the algorithm uses in an offline CSV file, and every time the algorithm is run, it analyzes all of the data (one new piece of data gets add...
Answer: The data isn't stored in a CSV (Do I simply store it in a database like I would with any other type of data?) You can store it in whatever format you like. Some form of preprocessing is used so that the ML algorithm doesn't have to analyze the same data repeatedly each time it is used (or does it have to, given that one ...

Title: Importing the multiarray numpy extension module failed (Just with Anaconda)
Q_Id 49,199,818 | A_Id 57,648,777 | CreationDate 2018-03-09T18:16:00.000 | Tags: python,numpy,anaconda
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 9 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 3 | Available Count 2 | ViewCount 9,555
Question: I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or by googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt, this error message appears: ImportError: Importing the mu...
Answer: Kindly perform "Invalidate Caches and Restart" if you are using PyCharm. No need to uninstall numpy or run any command.

Title: Importing the multiarray numpy extension module failed (Just with Anaconda)
Q_Id 49,199,818 | A_Id 49,199,982 | CreationDate 2018-03-09T18:16:00.000 | Tags: python,numpy,anaconda
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 9 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 3 | Available Count 2 | ViewCount 9,555
Question: I'm quite new to Python/Anaconda, and I'm facing an issue that I couldn't solve on my own or by googling. When I'm running Python on cmd I can import and use numpy. Working fine. When I'm running scripts on Spyder, or just trying to import numpy on Anaconda Prompt, this error message appears: ImportError: Importing the mu...
Answer: I feel like I would have to know a little more, but it seems that you need to reinstall numpy and check whether the complete install was successful. Keep in mind that Anaconda is a closed environment, so you don't have as much control. With regards to the permissions issue, you may have installed it with a superuser/adm...

Title: Should I drop a variable that has the same value in the whole column for building machine learning models?
Q_Id 49,200,518 | A_Id 49,200,765 | CreationDate 2018-03-09T19:05:00.000 | Tags: python,r,pandas,machine-learning,data-science
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 474
Question: For instance, column x has 50 values and all of these values are the same. Is it a good idea to delete variables like these for building machine learning models? If so, how can I spot these variables in a large data set? I guess a formula/function might be required to do so. I am thinking of using nunique that can ta...
Answer: You should be deleting such columns because they will provide no extra information about how each data point is different from another. It's fine to leave the column for some machine learning models (due to the nature of how the algorithms work), like random forest, because this column will actually not be selected to sp...
Title: Accessing '.pickle' file in Google Colab
Q_Id 49,206,488 | A_Id 55,458,337 | CreationDate 2018-03-10T07:25:00.000 | Tags: python,tensorflow,google-data-api,google-colaboratory
Topics: Data Science and Machine Learning | Q_Score 10 | Users Score 2 | Score 1.2 | is_accepted: true | AnswerCount 4 | Available Count 1 | ViewCount 27,735
Question: I am fairly new to using Google's Colab as my go-to tool for ML. In my experiments, I have to use the 'notMNIST' dataset, and I have set the 'notMNIST' data as notMNIST.pickle in my Google Drive under a folder called Data. Having said this, I want to access this '.pickle' file in my Google Colab so that I can use th...
Answer: Thanks, guys, for your answers. Google Colab has quickly grown into a more mature development environment, and my favorite feature is the 'Files' tab. We can easily upload the model to the folder we want and access it as if it were on a local machine. This solves the issue. Thanks.

Title: Does Intel vs. AMD matter for running python?
Q_Id 49,207,112 | A_Id 49,372,377 | CreationDate 2018-03-10T08:41:00.000 | Tags: python,intel,amd
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 4 | Users Score 3 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 11,208
Question: I do a lot of coding in Python (Anaconda install v. 3.6). I don't compile anything, I just run machine learning models (mainly scikit-learn and TensorFlow). Are there any issues with running these on a workstation with an AMD chipset? I've only used Intel before and want to make sure I don't buy wrong. If it matters, it is ...
Answer: Are you asking about compatibility or performance? Both AMD and Intel market CPU products compatible with the x86(_64) architecture and are functionally compatible with all software written for it. That is, they will run it with high probability (there always may be issues when changing hardware, even while staying with t...

Title: How can I copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting
Q_Id 49,222,299 | A_Id 50,358,133 | CreationDate 2018-03-11T16:30:00.000 | Tags: python,pandas,dataframe,jupyter-notebook,powerpoint
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 1 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 1,613
Question: I am trying to copy styled pandas dataframes from Jupyter Notebooks to PowerPoint without loss of formatting. I currently just take a screenshot to preserve formatting, but this is not ideal. Does anyone know of a better way? I searched for an extension that maybe has a screenshot button, but had no luck.
Answer: One way seems to be to copy the styled pandas table from the Jupyter notebook to Excel. It will keep a lot of the formatting. Then you can copy it to PowerPoint and it will maintain its style.

Title: Categorical Data yes/no to 0/1 python - is it a right approach?
Q_Id 49,227,490 | A_Id 49,227,672 | CreationDate 2018-03-12T02:48:00.000 | Tags: python,python-3.x,pandas,neural-network,decision-tree
Topics: Data Science and Machine Learning | Q_Score 1 | Users Score 2 | Score 0.379949 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 1,491
Question: My dataset has a few features with yes/no (categorical data). A few of the machine learning algorithms that I am using, in Python, do not handle categorical data directly. I know how to convert yes/no to 0/1, but my question is: is this the right approach to go about it? Can these values of no/yes to 0/1 be misinterpreted...
Answer: Yes, in my opinion, encoding yes/no to 1/0 would be the right approach for you. Python's sklearn requires features in numerical arrays. There are various ways of encoding: Label Encoder; One Hot Encoder; etc. However, since your variable only has 2 levels of categories, it wouldn't make much difference if you go for Lab...

Title: Difference between numpy.round and numpy.around
Q_Id 49,229,610 | A_Id 49,229,831 | CreationDate 2018-03-12T06:55:00.000 | Tags: python,arrays,numpy,rounding
Topics: Data Science and Machine Learning | Q_Score 16 | Users Score 9 | Score 1 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 11,776
Question: So, I was searching for ways to round off all the numbers in a numpy array. I found 2 similar functions, numpy.round and numpy.around. Both take seemingly the same arguments for a beginner like me. So what is the difference between these two in terms of: general difference, speed, accuracy, being used in practice?
Answer: The main difference is that round is a ufunc of the ndarray class, while np.around is a module-level function. Functionally, both of them are equivalent as they do the same thing - evenly round floats to the nearest integer. ndarray.round calls around from within its source code.
Title: Installing python packages in a different location than default by pip or conda
Q_Id 49,231,322 | A_Id 49,231,482 | CreationDate 2018-03-12T08:57:00.000 | Tags: python,pip,packages,conda
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 1 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 1,385
Question: How do you use a Python package such as Tensorflow or Keras if you cannot install the package on the drive on which pip always saves the packages? I'm a student at a university and we don't have permission to write to the C drive, which is where pip works out of (I get a "you don't have write permission" error when insta...
Answer: Install conda; create a new environment (conda create --name foobar python=3.x plus the list of packages); use Anaconda to activate foobar (activate foobar); check the pip location by typing 'where pip' in cmd to be sure you use the pip from within the foobar environment and not the default Python installed in your s...

Title: Normalization of input data to Qnetwork
Q_Id 49,234,736 | A_Id 49,244,770 | CreationDate 2018-03-12T12:01:00.000 | Tags: python,scikit-learn,reinforcement-learning,q-learning
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 333
Question: I am well aware that a "normal" neural network should use normalized input data so one variable does not have a bigger influence on the weights in the NN than others. But what if you have a Q-network where your training data and test data can differ a lot and can change over time in a continuous problem? My idea was...
Answer: Normalizing the input can lead to faster convergence. It is highly recommended to normalize the inputs. As the data progresses through the different layers of the network, due to the use of non-linearities the data flowing between the layers will not be normalized anymore; therefore, for faster convergence, we often use...
Title: Can the parent nodes of clusters formed using disjoint set forest be used as cluster representative?
Q_Id 49,241,733 | A_Id 49,294,793 | CreationDate 2018-03-12T18:03:00.000 | Tags: python,algorithm,machine-learning,cluster-analysis,data-mining
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 27
Question: The intention is to merge clusters which have similarity higher than the Jaccard similarity based on pairwise comparison of cluster representatives. My logic here is that because the child nodes are all under the parent node for a cluster, the parent node is somewhat like a representative of the cluster.
Answer: The parent node is the aggregated cluster. It's not a single point, so you can't just use it as representative. But you can use the medoids, for example.
Title: ModuleNotFoundError in Spyder with Python
Q_Id 49,245,779 | A_Id 49,248,423 | CreationDate 2018-03-12T23:06:00.000 | Tags: python,ubuntu,spyder
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 1,203
Question: Hi, I'm using Ubuntu and have created a conda environment to build a project. I'm using Python 2.7 and Pytorch plus some other libraries. When I try to run my code in Spyder I receive a ModuleNotFoundError telling me that the torch module hasn't been installed. However, when I type conda list into a terminal I can clearly s...
Answer: Your PATH may be pointing to the wrong Python environment. Depending on which one is conflicting, you may have to do some exploring to find the culprit. My guess is that Spyder is not using your created conda environment where Pytorch is installed. To change the path in Spyder, open the Preferences window. Within this ...

Title: Process Large (10gb) Time Series CSV file into daily files
Q_Id 49,247,108 | A_Id 49,266,730 | CreationDate 2018-03-13T01:52:00.000 | Tags: python,python-3.x,pandas
Topics: Data Science and Machine Learning | Q_Score 1 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 626
Question: I am new to Python 3, coming over from R. I have a very large time series file (10gb) which spans 6 months. It is a csv file where each row contains 6 fields: Date, Time, Data1, Data2, Data3, Data4. "Data" fields are numeric. I would like to iterate through the file and create & write individual files which contain o...
Answer: I was getting thrown off in that open(...) actually gets a line. I was doing a separate readline(...) after the open(...) and so unwittingly advancing the iterator and getting bad results. There is a small problem with csv write which I'll post as a new question.

Title: K Means Cluster with Specified Intra Cluster Distance
Q_Id 49,247,626 | A_Id 49,248,106 | CreationDate 2018-03-13T03:00:00.000 | Tags: python,machine-learning,k-means
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 1 | Score 0.197375 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 233
Question: I often come across a situation where I have a bunch of different addresses (input data in lat/long) mapped all over the city. What I need to do is cluster these locations in a way that allows me to specify the "maximum distance between any two points within a cluster". In other words, specify the maximum intra-cluster dista...
Answer: I think this method would work: Run KMeans. Mark all clusters exceeding the intra-cluster distance threshold. For each marked cluster, run KMeans with K=2 on the cluster's data. Repeat step 2 until no clusters are marked. Each cluster is split in two until the intra-cluster distance is not violated. Another option: Run KMean...

Title: What is a simple way to extract NDVI average from polygon [Sentinel 2 L2A]
Q_Id 49,257,867 | A_Id 62,542,674 | CreationDate 2018-03-13T13:47:00.000 | Tags: python,r,gis,satellite
Topics: Data Science and Machine Learning | Q_Score 2 | Users Score -1 | Score -0.197375 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 355
Question: Currently, I am working on a project for a non-profit organization. Therefore, I need the average NDVI values for certain polygons. Input for my search: a group of coördinates (polygon), a range of dates (e.g. 01-31-2017 and 02-31-2017). What I now want is: the average NDVI value of the most recent picture in that give...
Answer: You can give Google Engine a try. That would be the easiest way to obtain access to image series. If your research applies only to that period, you may work less by downloading by hand and processing in QGIS. If programming is a must, use Google Engine. They have much of the problem resolved. Otherwise you will have to...

Title: In Matlab Runtime Python3.6 installer not found in order to install matlab python support for ubuntu 16.04
Q_Id 49,270,176 | A_Id 49,274,707 | CreationDate 2018-03-14T05:14:00.000 | Tags: python,matlab,computer-vision,ubuntu-16.04
Topics: Other, System Administration and DevOps, Data Science and Machine Learning | Q_Score 1 | Users Score 1 | Score 0.197375 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 88
Question: I tried to install the MATLAB R2017b runtime Python 3.6 support on my Ubuntu 16.04. As per the instructions given in the MATLAB community, the Python installer (setup.py) should be in the ../../v93/extern/engines/python location. When I go there I couldn't see that setup.py file in the location. I have tried so many times reinstalling th...
Answer: The Python installer should be in /{matlab_root}/extern/engines/python. Then run python setup.py install. Hope it helps.

Title: Python KMeans Clustering - Handling nan Values
Q_Id 49,273,536 | A_Id 49,294,559 | CreationDate 2018-03-14T09:08:00.000 | Tags: python,cluster-analysis,k-means
Topics: Data Science and Machine Learning | Q_Score 2 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 2,184
Question: I am trying to cluster a number of words using the KMeans algorithm from scikit-learn. In particular, I use pre-trained word embeddings (300-dimensional vectors) to map each word to a number vector and then I feed these vectors to KMeans and provide the number of clusters. My issue is that there are certain words in...
Answer: If you don't have data on a word, then skip it. You could try to compute a word vector on the fly based on the context, but that essentially is the same as just skipping it.

Title: Decision Tree Sklearn -Depth Of tree and accuracy
Q_Id 49,289,187 | A_Id 49,289,462 | CreationDate 2018-03-14T23:27:00.000 | Tags: python,scikit-learn,decision-tree
Topics: Data Science and Machine Learning | Q_Score 2 | Users Score 11 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 22,441
Question: I am applying a Decision Tree to a data set, using sklearn. In sklearn there is a parameter to select the depth of the tree - dtree = DecisionTreeClassifier(max_depth=10). My question is how the max_depth parameter helps the model. How does a high/low max_depth help in predicting the test data more accurately?
Answer: max_depth is what the name suggests: the maximum depth that you allow the tree to grow to. The deeper you allow it, the more complex your model will become. For training error, it is easy to see what will happen. If you increase max_depth, training error will always go down (or at least not go up). For testing error, ...

Title: replace numbers with token if numbers have whitespace on both sides
Q_Id 49,289,969 | A_Id 49,290,127 | CreationDate 2018-03-15T01:00:00.000 | Tags: python,string,replace,whitespace
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 1 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 415
Question: The code below replaces numbers with the token NUMB: raw_corpus.loc[:,'constructed_recipe']=raw_corpus['constructed_recipe'].str.replace('\d+','NUMB') It works fine if the numbers have a space before and a space after, but creates a problem if the numbers are included in another string. How do I modify the code so tha...
Answer: I also tried ' \d+ ' and that works! Probably not "pythonic" though...
Title: Tying weights in neural machine translation
Q_Id 49,299,609 | A_Id 49,598,792 | CreationDate 2018-03-15T12:33:00.000 | Tags: python,deep-learning,recurrent-neural-network,pytorch,seq2seq
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 3 | Available Count 2 | ViewCount 5,192
Question: I want to tie the weights of the embedding layer and the next_word prediction layer of the decoder. The embedding dimension is set to 300 and the hidden size of the decoder is set to 600. The vocabulary size of the target language in NMT is 50000, so the embedding weight dimension is 50000 x 300 and the weight of the linear layer whic...
Answer: Did you check the code that kmario23 shared? Because it is written that if the hidden size and the embedding size are not equal, an exception is raised. So this means that if you really want to tie the weights, you should decrease the hidden size of your decoder to 300. On the other hand, if you rethink your idea, w...

Title: Tying weights in neural machine translation
Q_Id 49,299,609 | A_Id 54,236,136 | CreationDate 2018-03-15T12:33:00.000 | Tags: python,deep-learning,recurrent-neural-network,pytorch,seq2seq
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 3 | Score 1.2 | is_accepted: true | AnswerCount 3 | Available Count 2 | ViewCount 5,192
Question: I want to tie the weights of the embedding layer and the next_word prediction layer of the decoder. The embedding dimension is set to 300 and the hidden size of the decoder is set to 600. The vocabulary size of the target language in NMT is 50000, so the embedding weight dimension is 50000 x 300 and the weight of the linear layer whic...
Answer: You could use a linear layer to project the 600-dimensional space down to 300 before you apply the shared projection. This way you still get the advantage that the entire embedding (possibly) has a non-zero gradient for each mini-batch, but at the risk of increasing the capacity of the network slightly.

Title: Binary mask for output vector in Tensorflow
Q_Id 49,299,761 | A_Id 49,303,974 | CreationDate 2018-03-15T12:42:00.000 | Tags: python,tensorflow,machine-learning,lstm,bitmask
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 376
Question: I want to recommend products by clickstream with an LSTM in TensorFlow. I have historical user behaviour data which I want to use to train a model to recommend products (represented as classes on the output), but I need to consider whether a product was active at that moment on the webpage (so as not to recommend inactive deals). Sinc...
Answer: You could use tf.boolean_mask on the softmax prediction output to remove the probabilities for inactive deals, then get the maximum probabilities without them.
Title: is there any way to not install packages on Google Colab every time?
Q_Id 49,308,803 | A_Id 49,313,978 | CreationDate 2018-03-15T20:40:00.000 | Tags: python,pip,google-colaboratory
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 1 | Users Score 3 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 936
Question: Some packages like numpy are installed by default in Google Colab. Is there any way to avoid installing new packages every time and make them default, just like numpy?
Answer: No, there's currently no way for users to choose additional packages to install by default.

Title: Dask Dataframe: Get row count?
Q_Id 49,309,523 | A_Id 58,624,383 | CreationDate 2018-03-15T21:27:00.000 | Tags: python,dataframe,dask
Topics: Data Science and Machine Learning | Q_Score 12 | Users Score 1 | Score 0.099668 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 12,792
Question: Simple question: I have a dataframe in dask containing about 300 mln records. I need to know the exact number of rows that the dataframe contains. Is there an easy way to do this? When I try to run dataframe.x.count().compute() it looks like it tries to load the entire data into RAM, for which there is no space and it ...
Answer: If you only need the number of rows, you can load a subset of the columns while selecting the columns with lower memory usage (such as category/integers rather than string/object); thereafter you can run len(df.index).

Title: How to add your files across cluster on pyspark AWS
Q_Id 49,311,592 | A_Id 49,311,725 | CreationDate 2018-03-16T00:51:00.000 | Tags: python,apache-spark,amazon-ec2,pyspark
Topics: System Administration and DevOps, Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 1 | Available Count 1 | ViewCount 96
Question: I am new to spark. I am trying to read a file from my master instance but I am getting this error. After research I found out you either need to load data to hdfs or copy it across clusters. I am unable to find the commands for doing either of these. ...
Answer: Since you are in AWS already, it may be easier to just store your data files in S3 and open them directly from there.

Title: Tracking cycles while adding random edges to a sparse graph
Q_Id 49,339,575 | A_Id 49,339,891 | CreationDate 2018-03-17T17:05:00.000 | Tags: python,graph,graph-algorithm,traversal,graph-traversal
Topics: Data Science and Machine Learning | Q_Score 1 | Users Score 1 | Score 0.099668 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 59
Question: Scenario: I have a graph, represented as a collection of nodes (0...n). There are no edges in this graph. To this graph, I connect nodes at random, one at a time. An alternative way of saying this would be that I add random edges to the graph, one at a time. I do not want to create simple cycles in this graph. Is th...
Answer: A possible solution I came up with while in the shower. What I will do is maintain a list of size n, representing how many times each node has been on an edge. When I add an edge (i,j), I will increment list[i] and list[j]. If after an edge addition list[i] > 1 and list[j] > 1, I will do a DFS starting from that edge. ...
Title: traffic density visualization in Python
Q_Id 49,349,788 | A_Id 49,357,921 | CreationDate 2018-03-18T15:41:00.000 | Tags: python-3.x,data-visualization
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 253
Question: I have a csv file with traffic density data per road segment of a certain highway, measured in Annual Average Daily Traffic (AADT). Now I want to visualize this data. Since I have the locations (lat and lon) of the road segments, my idea is to create lines between these points and give each a color which relates to the ...
Answer: It is difficult to say without any information about the structure of the data. Is it just points? Is it a shapefile? Probably you should start with geopandas...

Title: Can we train a Keras Model in Stages?
Q_Id 49,354,178 | A_Id 49,357,236 | CreationDate 2018-03-19T00:19:00.000 | Tags: python,numpy,tensorflow,deep-learning,keras
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 1 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 61
Question: I have a huge NumPy matrix of dimension (1919090, 140, 37). It is not easy to fit something that large in memory, locally or on a server. So I was thinking of splitting the NumPy matrix into smaller parts, say of (19,000, 140, 37), and then training a Keras model on each part. I store the model, then load it ag...
Answer: Yes, you can, but the concept is not called "stages" but batches, and it is the most common method to train neural networks. You just need to make a generator function that loads batches of your data one at a time and use model.fit_generator to start training it.
Title: Sentiment Lexicon for stock market prediction
Q_Id 49,360,828 | A_Id 49,361,454 | CreationDate 2018-03-19T10:35:00.000 | Tags: python,machine-learning,nlp,nltk,sentiment-analysis
Topics: Data Science and Machine Learning | Q_Score 2 | Users Score 2 | Score 0.132549 | is_accepted: false | AnswerCount 3 | Available Count 1 | ViewCount 2,572
Question: I am making a Stock Market Predictor machine learning application that will try to predict the price of a certain stock. It will take news articles/tweets regarding that particular company and the company's historical data for this reason. My issue is that I need to first construct a sentiment analyser for the headlin...
Answer: Not readily available, but trivial to build on your own. Simply download a sentiment-annotated twitter dataset, construct a dictionary of words from it, iterate over the entries and add +1 (or -1) to positive (or negative) words. Finally, divide each word's value by its occurrence count and you'll have a naive sen...
Title: I got a message when importing tensorflow in python
Q_Id 49,363,172 | A_Id 49,890,401 | CreationDate 2018-03-19T12:37:00.000 | Tags: python,tensorflow,anaconda
Topics: Data Science and Machine Learning | Q_Score 1 | Users Score 4 | Score 0.379949 | is_accepted: false | AnswerCount 2 | Available Count 2 | ViewCount 1,143
Question: When I import tensorflow in Python I get this error: C:\Users\Sathsara\Anaconda3\envs\tensorflow\Lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. fro...
Answer: You could upgrade h5py to a more recent version. It worked for me: sudo pip3 install h5py==2.8.0rc1

Title: I got a message when importing tensorflow in python
Q_Id 49,363,172 | A_Id 49,363,229 | CreationDate 2018-03-19T12:37:00.000 | Tags: python,tensorflow,anaconda
Topics: Data Science and Machine Learning | Q_Score 1 | Users Score 4 | Score 0.379949 | is_accepted: false | AnswerCount 2 | Available Count 2 | ViewCount 1,143
Question: When I import tensorflow in Python I get this error: C:\Users\Sathsara\Anaconda3\envs\tensorflow\Lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. fro...
Answer: It's not an error; it's just informing you that in future releases this feature or behaviour is going to change or no longer be available. This is important if you plan to reuse this code with different versions of Python and tensorflow.

Title: Dependencies and packages conflicts in Anaconda?
Q_Id 49,374,217 | A_Id 70,389,655 | CreationDate 2018-03-19T23:47:00.000 | Tags: python-3.x,anaconda,packages
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 25 | Users Score -4 | Score -1 | is_accepted: false | AnswerCount 4 | Available Count 2 | ViewCount 49,725
Question: I'm using Anaconda 5.1 and Python 3.6 on a Windows 10 machine. I'm having quite a few problems; I tried to add some useful tools such as lightGBM, tensorflow, keras, bokeh,... to my conda environment, but once I've used conda install -c conda-forge packagename on all of these, I end up having downgrading and upgrading ...
Answer: You can try using different conda environments. For example: conda create -n myenv. Then you can activate your environment with conda activate myenv and deactivate it with conda deactivate.

Title: Dependencies and packages conflicts in Anaconda?
Q_Id 49,374,217 | A_Id 49,374,371 | CreationDate 2018-03-19T23:47:00.000 | Tags: python-3.x,anaconda,packages
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score 25 | Users Score 6 | Score 1 | is_accepted: false | AnswerCount 4 | Available Count 2 | ViewCount 49,725
Question: I'm using Anaconda 5.1 and Python 3.6 on a Windows 10 machine. I'm having quite a few problems; I tried to add some useful tools such as lightGBM, tensorflow, keras, bokeh,... to my conda environment, but once I've used conda install -c conda-forge packagename on all of these, I end up having downgrading and upgrading ...
Answer: You could try disabling transitive dependency updates by passing --no-update-dependencies or --no-update-deps to the conda install command. E.g.: conda install --no-update-deps pandas.

Title: Activation Function in Machine learning
Q_Id 49,391,576 | A_Id 62,830,161 | CreationDate 2018-03-20T18:21:00.000 | Tags: python,math,machine-learning,calculus,sigmoid
Topics: Data Science and Machine Learning | Q_Score 2 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 3 | Available Count 1 | ViewCount 317
Question: What is meant by "activation function" in machine learning? I have gone through most of the articles and videos; everyone states it or compares it with a neural network. I'm a newbie to machine learning and not that familiar with deep learning and neural networks. So, can anyone explain to me what exactly an activation fun...
Answer: Simply put, an activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. Comparing with a neuron-based model like the one in our brains, the activation function is at the end, deciding what is to be fired to the next neuron. That is...

Title: I installed tensorflow on mac and now I can't open Anaconda
Q_Id 49,393,300 | A_Id 49,401,566 | CreationDate 2018-03-20T20:11:00.000 | Tags: python,macos,tensorflow,terminal,anaconda
Topics: Python Basics and Environment, System Administration and DevOps, Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 0 | is_accepted: false | AnswerCount 2 | Available Count 1 | ViewCount 155
Question: I installed tensorflow on my mac and now I can't seem to open anaconda-navigator. When I launch the app, it appears in the dock but disappears quickly. When I launch anaconda-navigator from the terminal I get the following error(s): KeyError: 'pip._vendor.urllib3.contrib'
Answer: I fixed the issue by downgrading to pip version 9.0.1. It appears Anaconda doesn't like pip version 9.0.2. I ran: pip install pip==9.0.1

Title: Portfolio Performance Attribution Metrics
Q_Id 49,394,773 | A_Id 49,500,435 | CreationDate 2018-03-20T21:56:00.000 | Tags: python,performance,portfolio,metric,attribution
Topics: Data Science and Machine Learning | Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted: true | AnswerCount 1 | Available Count 1 | ViewCount 595
Question: I would like to incorporate a particular performance metric into my portfolio managing software. This metric should be one where I can measure "how much of the potential gains from the selected assets have been captured by the selected portfolio composition". Consider the following table reporting a portfolio's perfo...
Answer: A reasonable intensive measure of how much a market instrument within a portfolio captured its potential would be the geometric difference between its IRR for the period and its annualized market return for the same period. For this you would need the cash flow amounts and dates into and out of the instrument, its actu...

Title: Providing user defined sample weights for knn classifier in scikit-learn
Q_Id 49,420,191 | A_Id 53,602,315 | CreationDate 2018-03-22T03:47:00.000 | Tags: python,scikit-learn,knn,nearest-neighbor
Topics: Data Science and Machine Learning | Q_Score 4 | Users Score 2 | Score 0.197375 | is_accepted: false | AnswerCount 2 | Available Count 2 | ViewCount 1,542
Question: I am using the scikit-learn KNeighborsClassifier for classification on a dataset with 4 output classes. The following is the code that I am using: knn = neighbors.KNeighborsClassifier(n_neighbors=7, weights='distance', algorithm='auto', leaf_size=30, p=1, metric='minkowski') The model works correctly. However, I would ...
Answer: KNN in sklearn doesn't have sample weights, unlike other estimators, e.g. DecisionTree. Personally speaking, I think it is a disappointment. It is not hard to make KNN support sample weights, since the predicted label is the majority vote of its neighbours. A crude workaround is to generate samples yourself based on...
0
63,655,345
0
0
0
0
2
false
4
2018-03-22T03:47:00.000
-1
2
0
Providing user defined sample weights for knn classifier in scikit-learn
49,420,191
-0.099668
python,scikit-learn,knn,nearest-neighbor
sklearn.neighbors.KNeighborsClassifier.score() has a sample_weight parameter. Is that what you're looking for?
I am using the scikit-learn KNeighborsClassifier for classification on a dataset with 4 output classes. The following is the code that I am using: knn = neighbors.KNeighborsClassifier(n_neighbors=7, weights='distance', algorithm='auto', leaf_size=30, p=1, metric='minkowski') The model works correctly. However, I would ...
0
1
1,542
0
49,432,623
0
1
0
0
1
true
0
2018-03-22T10:08:00.000
1
1
0
Python 3.6 matplotlib.pyplot shows graph immediately without letting me apply other functions?
49,425,805
1.2
python,matplotlib,spyder
try typing %matplotlib or %matplotlib qt before doing plt.hist(df.amount, bins=30). This will switch the console out of "inline" mode.
I am trying to build a plot using matplotlib.pyplot as plt. For example plt.hist(df.amount, bins = 30) But when I hit enter in the console it generates the graph. I want to apply xlim, ylim and title functions of plt but can't do this. Anyone familiar with this behavior? Should I change Spyder settings? Same behavior w...
0
1
44
0
49,430,423
0
0
0
0
1
false
1
2018-03-22T13:38:00.000
0
2
0
Is it possible to force Tensorflow to generate orthogonal matrix?
49,430,178
0
python,tensorflow
It should be possible. I see two solutions. If you don't care that the transformation is a perfect rotation, you can take the matrix, adjust it to what you think it's a good matrix (make it a perfect rotation) then compute the difference between the one you like and the original and add it as a loss. With this approac...
I'm using Tensorflow to generate a transformation matrix for a set of input vectors (X) to target vectors (Y). To minimize the error between the transformed input and the target vector samples I'm using a gradient descent algorithm. Later on I want to use the generated matrix to transform vectors coming from the same s...
0
1
729
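The first suggestion above, adjusting the learned matrix toward a perfect rotation, relies on the fact that the nearest orthogonal matrix to M is U·Vᵀ from its SVD (the orthogonal Procrustes solution). A NumPy sketch, independent of TensorFlow:

```python
import numpy as np

def nearest_orthogonal(m):
    """Project a square matrix onto the nearest orthogonal matrix via SVD.

    Note: the result is orthogonal but may include a reflection; a true
    rotation additionally requires determinant +1.
    """
    u, _, vt = np.linalg.svd(m)
    return u @ vt

m = np.array([[1.0, 0.1], [0.1, 0.9]])  # almost-orthogonal input (illustrative)
r = nearest_orthogonal(m)
```

The difference between m and r could then be added as a penalty term to the training loss, as the answer suggests.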
0
50,856,659
0
0
0
0
1
false
0
2018-03-22T15:10:00.000
0
1
0
Sharing class objects in Tensorlfow
49,432,175
0
python,tensorflow,lstm,rnn
You can share the final state, which is one of the outputs of, for example, dynamic_rnn: you get outputs, cell_final_state, and the latter can be shared. You probably know about cell_final_state.c and cell_final_state.h. You can set that final state as the initial state. Let me know whether this answers your question. Just noticed it'...
TF has tf.variable_scope() that allows users to access tf.Variable() anywhere in the code. Basically every variable in TF is a global variable. Is there a similar way to access class objects like tf.nn.rnn_cell.LSTMCell() or tf.layers.Dense()? To be more specific, can i create a new class object, let's say lstm_cell_2 ...
0
1
27
0
49,451,732
0
0
0
0
2
false
7
2018-03-23T13:59:00.000
1
2
0
Q Learning Applied To a Two Player Game
49,451,366
0.099668
python,tic-tac-toe,reinforcement-learning,q-learning
Q-Learning is an algorithm from the MDP (Markov Decision Process) field, i.e. learning while practically facing a world that is being acted upon, where each action changes the state of the agent (with some probability). The algorithm is built on the basis that for any action, the world gives feedback (a reaction). Q-Learnin...
I am trying to implement a Q Learning agent to learn an optimal policy for playing against a random agent in a game of Tic Tac Toe. I have created a plan that I believe will work. There is just one part that I cannot get my head around. And this comes from the fact that there are two players within the environment. No...
0
1
2,280
0
49,451,735
0
0
0
0
2
true
7
2018-03-23T13:59:00.000
7
2
0
Q Learning Applied To a Two Player Game
49,451,366
1.2
python,tic-tac-toe,reinforcement-learning,q-learning
In general, directly applying Q-learning to a two-player game (or other kind of multi-agent environment) isn't likely to lead to very good results if you assume that the opponent can also learn. However, you specifically mentioned for playing against a random agent and that means it actually can work, because this me...
I am trying to implement a Q Learning agent to learn an optimal policy for playing against a random agent in a game of Tic Tac Toe. I have created a plan that I believe will work. There is just one part that I cannot get my head around. And this comes from the fact that there are two players within the environment. No...
0
1
2,280
0
68,944,912
0
0
0
0
1
false
5
2018-03-23T17:59:00.000
2
2
0
GridSearchCV final model
49,455,806
0.197375
python,machine-learning,scikit-learn
This is given in sklearn: “The refitted estimator is made available at the best_estimator_ attribute and permits using predict directly on this GridSearchCV instance.” So, you don’t need to fit the model again. You can directly get the best model from best_estimator_ attribute
If I use GridSearchCV in scikit-learn library to find the best model, what will be the final model it returns? That said, for each set of hyper-parameters, we train the number of CV (say 3) models. In this way, will the function return the best model in those 3 models for the best setting of parameters?
0
1
5,388
0
49,622,289
0
0
0
0
1
false
1
2018-03-23T23:29:00.000
1
3
0
Tm1 to python to R
49,459,591
0.066568
python,r
It seems as if you only want to read data from TM1. Therefore a "simple" MDX query should be fine. Have a look at the "httr" package for how to send POST requests. Then it's pretty straightforward to port the relevant parts from TM1py to R.
I would like to create a dashboard using R. However, all the data that I need to connect is from TM1. The easiest way that I found is using an python library called TM1py to connect to tm1 data. I would like to know what is the easist to access to access TM1py library from R ? Thanks
0
1
412
0
49,461,632
0
0
0
0
1
false
0
2018-03-24T03:43:00.000
0
3
0
How to change values in certain columns according to certain rule in pandas dataframe
49,460,990
0
python,pandas
You may need to use isnull(): df['col2'] = df['col2'].apply(lambda x: str(x)[0] if not pd.isnull(x) else x)
Suppose I have a pandas dataframe looks like this: col1 col2 0 A A60 1 B B23 2 C NaN The data from is read from a csv file. Suppose I want to change each non-missing value of 'col2' to its prefix (i.e. 'A' or 'B'). How could I do this without writing a for loop? The expected output is ...
0
1
489
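The snippet in the answer above, made runnable on a small frame mirroring the question's data:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'B', 'C'],
                   'col2': ['A60', 'B23', None]})

# Keep missing values as-is; otherwise take the first character as the prefix.
df['col2'] = df['col2'].apply(lambda x: str(x)[0] if not pd.isnull(x) else x)
```

The isnull check matters because str(NaN) would otherwise turn missing values into the literal string 'n' (the first character of 'nan').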
0
49,472,921
0
0
0
0
1
true
1
2018-03-24T19:29:00.000
0
1
0
Tensoflow: Providing jacobians and hessians to ScipyOptimizerInterface
49,468,976
1.2
python,tensorflow
This is not directly supported by TensorFlow's ScipyOptimizerInterface but you should be able to build a hessian function that will be passed through the interface and work over its head. scipy.optimize.minimize expects a function that recieves a candidate solution p (in the form of a 1D numpy vector) and returns the ...
I am trying out the different optimization methods of tf.contrib.opt.ScipyOptimizerInterface and some of them (e.g. trust-exact) require the hessian of the objective function. How can I use tf.hessians as hessian for tf.contrib.opt.ScipyOptimizerInterface? I tried to provide it with hess=tf.hessians(loss,variable) (wh...
0
1
182
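Outside TensorFlow, the shapes scipy.optimize.minimize expects for jac and hess are easiest to see on a plain quadratic; this is what any hessian-building wrapper around tf.hessians would ultimately have to produce. A minimal sketch:

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    # Objective over a 1-D candidate vector p.
    return p[0] ** 2 + 2.0 * p[1] ** 2

def jac(p):
    # Gradient: 1-D array of length n.
    return np.array([2.0 * p[0], 4.0 * p[1]])

def hess(p):
    # Hessian: (n, n) array for the same candidate vector.
    return np.array([[2.0, 0.0], [0.0, 4.0]])

res = minimize(f, x0=np.array([1.0, 1.0]),
               method='trust-exact', jac=jac, hess=hess)
```

The trust-exact method requires both jac and hess; for this convex quadratic it converges to the minimum at the origin.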
0
49,477,081
0
0
0
0
1
true
0
2018-03-25T07:54:00.000
0
1
0
How to choose a group of columns in a Dask Dataframe?
49,473,649
1.2
python,dataframe,dask
It was my mistake. I passed to the slicing operator a list of strings in a numpy array receiving a "not implemented error", passing a python list instead works correctly.
Is there a way to choose a group of columns in a dask dataframe? The slice df [['col_1', 'col_2']] does not seem to work.
0
1
1,478
0
52,206,018
0
0
0
0
1
false
0
2018-03-25T15:41:00.000
0
2
0
TensorFlow: ImportError: cannot import name 'dragon4_positional'
49,477,640
0
python,python-3.x,tensorflow
This looks like it is an issue with Numpy, which is a dependency of Tensorflow. Did you try upgrading your version of numpy using pip or conda? Like such: pip install --ignore-installed --upgrade numpy
I get the following error when trying to use tensorflow importError Traceback (most recent call last) in () ----> 1 import tensorflow as tf ~\Anaconda3\lib\site-packages\tensorflow__init__.py in () 22 23 # pylint: disable=wildcard-import ---> 24 from tensorflow.pytho...
0
1
1,449
0
51,741,148
0
1
0
0
1
false
0
2018-03-25T18:07:00.000
1
3
0
Unable to import cv2 OpenCV 2.4.13 in python 3.6
49,479,145
0.066568
python,opencv,computer-vision,anaconda
Try pip install opencv-python instead of pip install cv2. Although the package name differs, you can still import it as import cv2; it will work.
import cv2 On executing the above code, it shows the following error. Error: Traceback (most recent call last) in () ----> 1 import cv2 ImportError: DLL load failed: The specified module could not be found. Unable to import cv2 in python I have installed OpenCV 2.4.13 and Anaconda3 with python 3.6.4. OpenCV loc...
0
1
2,644
0
49,957,782
0
0
0
0
1
true
0
2018-03-25T21:26:00.000
1
1
0
What is returned by scipy.io.wavefile.read()?
49,481,114
1.2
python-3.x,audio,scipy
wavfile.read() returns two things: data: This is the data from your wav file which is the amplitude of the audio taken at even intervals of time. sample rate: How many of those intervals make up one second of audio.
I have never worked with audio before. For a monophonic wav file read() returns an 1-D array of integers. What do these integers represent? Are they the frequencies? If not how do I use them to get the frequencies?
0
1
152
0
49,485,045
0
0
0
0
1
true
0
2018-03-26T05:57:00.000
0
1
0
Text Categorization Test NLTK python
49,484,820
1.2
python,nltk,text-mining,naivebayes
Just saving the model will not help. You should also save your vectorizer model (like TfidfVectorizer or CountVectorizer, whatever you used to fit the training data). You can save those the same way using pickle. Also save all the models you used for pre-processing the training data, like normalization/scaling models, ...
I have using nltk packages and train a model using Naive Bayes. I have save the model to a file using pickle package. Now i wonder how can i use this model to test like a random text not in the dataset and the model will tell if the sentence belong to which categorize? Like my idea is i have a sentence : " Ronaldo have...
0
1
147
0
49,493,806
0
0
0
0
1
true
2
2018-03-26T12:57:00.000
4
1
0
Spacy training multithread CPU usage
49,492,038
1.2
python,multithreading,nlp,spacy
The only things that are multi-threaded are the matrix multiplications, which in v2.0.8 are done via numpy, which delegates them to a BLAS library. Everything else is single-threaded. You should check what BLAS library your numpy is linked to, and also make sure that the library has been compiled appropriately for your...
I'm training some models with my own NER pipe. I need to run spacy in lxc container so I can run it with python3.6 (which allow multi thread on training). But.. on my 7 core authorized to run on my container only 1 run at 100% others run at 40-60% (actually they start at 100% but decrease after fews minutes). I would r...
0
1
2,172
0
51,396,901
0
0
0
0
1
false
10
2018-03-26T14:00:00.000
0
3
0
Random Forest Regressor using a custom objective/ loss function (Python/ Sklearn)
49,493,331
0
python-3.x,scikit-learn,random-forest,statsmodels,poisson
If the problem is that the counts c_i arise from different exposure times t_i, then indeed one cannot fit the counts, but one can still fit the rates r_i = c_i/t_i using MSE loss function, where one should, however, use weights proportional to the exposures, w_i = t_i. For a true Random Forest Poisson regression, I've ...
I want to build a Random Forest Regressor to model count data (Poisson distribution). The default 'mse' loss function is not suited to this problem. Is there a way to define a custom loss function and pass it to the random forest regressor in Python (Sklearn, etc..)? Is there any implementation to fit count data in Py...
0
1
10,061
0
49,507,909
0
0
0
1
1
false
3
2018-03-26T22:15:00.000
2
1
0
Is there a function in xlsxwriter that lets you sort a column?
49,501,501
0.379949
python,xlsxwriter
Sorting isn't a feature of the xlsx file format. It is something Excel does at runtime. So it isn't something XlsxWriter can replicate. A workaround would be to sort your data using Python before you write it.
I was wondering if there was a function in xlsxwriter that lets you sort the contents in the column from greatest to least or least to greatest? thanks!
0
1
847
0
52,691,745
0
0
0
0
1
false
22
2018-03-27T03:06:00.000
2
6
0
Save and load model optimizer state
49,503,748
0.066568
python,tensorflow,machine-learning,keras
Upgrading Keras to 2.2.4 and using pickle solved this issue for me. As of Keras release 2.2.3, Keras models can be safely pickled.
I have a set of fairly complicated models that I am training and I am looking for a way to save and load the model optimizer states. The "trainer models" consist of different combinations of several other "weight models", of which some have shared weights, some have frozen weights depending on the trainer, etc. It is a...
0
1
21,170
0
49,528,355
0
0
0
0
2
false
0
2018-03-27T04:15:00.000
1
2
0
clustering in python without number of clusters or threshold
49,504,271
0.099668
python,cluster-analysis
Clustering is an explorative technique. This means it must always be able to produce different results, as desired by the user. Having many parameters is a feature. It means the method can be adapted easily to very different data, and to user preferences. There will never be a generally useful parameter-free technique....
Is it possible to do clustering without providing any input apart from the data? The clustering method/algorithm should decide from the data on how many logical groups the data can be divided, even it doesn't require me to input the threshold eucledian distance on which the clusters are built, this also needs to be lea...
0
1
231
0
49,504,632
0
0
0
0
2
true
0
2018-03-27T04:15:00.000
1
2
0
clustering in python without number of clusters or threshold
49,504,271
1.2
python,cluster-analysis
Why not code your algorithm to create a list of clusters ranging from size 1 to n (which could be defined in a config file so that you can avoid hard coding and just fix it once). Once that is done, compute the clusters of size 1 to n. Choose the value which gives you the smallest Mean Square Error. This would requir...
Is it possible to do clustering without providing any input apart from the data? The clustering method/algorithm should decide from the data on how many logical groups the data can be divided, even it doesn't require me to input the threshold eucledian distance on which the clusters are built, this also needs to be lea...
0
1
231
0
49,514,082
0
0
0
0
1
true
1
2018-03-27T12:31:00.000
6
1
0
Is Tensorflow worth using for simple optimization problems?
49,512,935
1.2
python,tensorflow
This is somewhat opinion based, but Tensorflow and similar frameworks such as PyTorch are useful when you want to optimize an arbitrary, parameter-rich non-linear function (e.g., a deep neural network). For a 'standard' statistical model, I would use code that was already tailored to it instead of reinventing the wheel...
I have started learning Tensorflow recently and I am wondering if it is worth using in simple optimization problems (least squares, maximum likelihood estimation, ...) instead of more traditional libraries (scikit-learn, statsmodel)? I have implemented a basic AR model estimator using Tensorflow with MLE and the AdamOp...
0
1
343
0
49,514,624
0
1
0
0
1
false
1
2018-03-27T13:39:00.000
1
2
0
Excel function IFERROR(value, value_if_error). Does it have a python equivalent?
49,514,486
0.099668
python,pandas
Not sure what you mean by an error in data in Python. Do you mean NA? Then try the fillna function in pandas.
Does python have a function similar to the excel function IFERROR(value, value_if_error) Can I use np.where? Many Thanks
0
1
4,595
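A minimal sketch of the fillna suggestion: for missing values, Excel's IFERROR(value, fallback) maps directly onto pandas:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# Replace anything missing with a fallback, like IFERROR(value, 0) in Excel.
result = s.fillna(0)
```

For errors other than missing values (e.g. a failing computation per element), a try/except inside an apply-style function would be the closer analogue.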
0
67,007,173
0
0
0
0
1
false
1
2018-03-27T13:42:00.000
0
1
0
Calling R function with se.fit parameter with rpy2 from Python
49,514,535
0
python,r,prediction,rpy2
In Python you cannot use "." as part of a keyword argument name. Replace "." with "_", so se_fit should work.
I need to call the R function predict(fit_hs, type="quantile", se.fit=True, p=0.5) where predict refers to survreg in library survival. It gives an error about the se.fit parameter saying it's a keyword that can't be used. Could you please help finding a way to call this R function from Python?
0
1
54
0
55,154,657
0
0
0
0
1
false
14
2018-03-28T11:02:00.000
-1
2
0
Dask dataframe split partitions based on a column or function
49,532,824
-0.099668
python,pandas,dataframe,dask,dask-distributed
Setting the index to the required column and using map_partitions works much more efficiently compared to groupby.
I have recently begun looking at Dask for big data. I have a question on efficiently applying operations in parallel. Say I have some sales data like this: customerKey productKey transactionKey grossSales netSales unitVolume volume transactionDate ----------- -------------- ---------------- ------...
0
1
9,214
0
49,542,244
0
0
0
1
1
false
2
2018-03-28T17:49:00.000
0
3
0
(Sql + Python) df array to string with single quotes?
49,541,070
0
python,sql,arrays,string,quote
Please try this and verify if it helps: sql = "SELECT * FROM database WHERE list IN (%s)" % ",".join("'%s'" % v for v in List1)
Goal: how to convert (111, 222, 333) to ('111', '222', '333') for an sql query in Python? What I have done so far: I am calling a csv file to a df: dataset = pd.read_csv('simple.csv') print(dataset) LIST 0 111 1 222 2 333 List11 = dataset.LIST.apply(str) print(List1) 0 111 1 222 2 333 Name: OPERAT...
0
1
1,305
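The quoting step the question asks about can be sketched as plain string building (the table and column names are the question's placeholders):

```python
values = [111, 222, 333]

# Wrap each value in single quotes: (111, 222, 333) -> ('111', '222', '333')
in_clause = ", ".join("'%s'" % v for v in values)
sql = "SELECT * FROM database WHERE list IN (%s)" % in_clause
```

Note that for untrusted input, real code should use the database driver's parameter substitution rather than string formatting, to avoid SQL injection.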
0
51,157,225
0
0
0
0
2
false
27
2018-03-29T07:20:00.000
-4
9
0
Keras rename model and layers
49,550,182
-1
python,keras
For 1), I think you may build another model with the right name and the same structure as the existing one, then set the weights from the layers of the existing model on the layers of the new model.
1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. Class Model seem to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the Problem here? 2) Additionally, I am using sequential API and I want to give ...
0
1
40,090
0
63,853,924
0
0
0
0
2
false
27
2018-03-29T07:20:00.000
10
9
0
Keras rename model and layers
49,550,182
1
python,keras
To rename a keras model in TF2.2.0: model._name = "newname" I have no idea if this is a bad idea - they don't seem to want you to do it, but it does work. To confirm, call model.summary() and you should see the new name.
1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. Class Model seem to have the property model.name, but when changing it I get "AttributeError: can't set attribute". What is the Problem here? 2) Additionally, I am using sequential API and I want to give ...
0
1
40,090
0
49,551,180
0
0
0
0
1
false
1
2018-03-29T07:51:00.000
0
1
0
When is it safe to cache tf.Tensors?
49,550,723
0
python,tensorflow
TL;DR: TF already caches what it needs to, don't bother with it yourself. Every time you call sess.run([some_tensors]), TF's engine finds the minimum subgraph needed to compute all tensors in [some_tensors] and runs it from top to bottom (possibly on new data, if you're not feeding it the same data). That means caching ...
Let's say we have some method foo we call during graph construction time that returns some tf.Tensors or a nested structure of them every time is called, and multiple other methods that make use of foo's result. For efficiency and to avoid spamming the TF graph with unnecessary repeated operations, it might be tempting...
0
1
321
0
49,553,542
0
0
0
0
1
false
1
2018-03-29T10:09:00.000
0
3
0
Python Pandas DataFrames Sorting, Summing and Fetching Max Data
49,553,357
0
python,python-3.x,pandas
The best way to do when you are learning is to try it. It's very unlikely your data will be too large (there aren't millions of car models), but in any case, you can use df.head(N) to take the top N rows to try your method and see if it's slow. Other useful functions include df.groupby, df.nlargest, df.sort_values
I have just started learning Python, Pandas and NumPy and I want to find out what is the cleanest and most efficient way to solve the following problem. I have data which holds CarManufacturer, Car, TotalCarSales, bearing in mind that the data is not small: CarManufacturer Car TotalCarSales Volkswagen Polo 100 Volkswa...
0
1
91
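The functions named above fit together like this on a toy version of the sales table (the numbers are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'CarManufacturer': ['Volkswagen', 'Volkswagen', 'Toyota', 'Toyota'],
    'Car':             ['Polo',       'Golf',       'Yaris',  'Corolla'],
    'TotalCarSales':   [100,          150,          90,       200],
})

# Total sales per manufacturer, then the single best-selling manufacturer.
per_maker = df.groupby('CarManufacturer')['TotalCarSales'].sum()
top = per_maker.nlargest(1)
```

df.sort_values('TotalCarSales', ascending=False) would similarly rank individual cars rather than manufacturers.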
0
49,565,369
0
0
0
0
1
true
1
2018-03-29T15:54:00.000
0
1
0
Random crop and bounding boxes in tensorflow
49,560,347
1.2
python,tensorflow
You can get the shape, but only at runtime - when you call sess.run and actually pass in the data - that's when the shape is actually defined. So do the random crop manually in tesorflow, basically, you want to reimplement tf.random_crop so you can handle the manipulations to the bounding boxes. First, to get the shape...
I want to add a data augmentation on the WiderFace dataset and I would like to know, how is it possible to random crop an image and only keep the bouding box of faces with the center inside the crop using tensorflow ? I have already try to implement a solution but I use TFRecords and the TfExampleDecoder and the shape ...
0
1
972
0
49,562,017
0
0
0
0
1
false
5
2018-03-29T17:21:00.000
1
2
0
How to choose RandomState in train_test_split?
49,561,882
0.099668
python,pandas,machine-learning,scikit-learn,svm
For me personally, I set random_state to a specific number (usually 42) so that if I see variation in my program's accuracy I know it was not caused by how the data was split. However, this can lead to my network overfitting on that specific split, i.e. I tune my network so it works well with that split, but not necessaril...
I understand how random state is used to randomly split data into training and test set. As Expected, my algorithm gives different accuracy each time I change it. Now I have to submit a report in my university and I am unable to understand the final accuracy to mention there. Should I choose the maximum accuracy I get?...
0
1
5,696
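The reproducibility point can be checked directly: two splits with the same random_state are identical, which is exactly why fixing it makes accuracy comparisons fair. A small sketch on synthetic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Same random_state -> identical train/test partitions on every call.
X_tr1, X_te1, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)
X_tr2, X_te2, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)
```

For reporting, the usual practice is the mean (and standard deviation) of accuracy over several random states or cross-validation folds, not the maximum.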
0
49,571,075
0
1
0
0
1
false
1
2018-03-29T20:31:00.000
0
1
0
ImportError with dask.distributed
49,564,542
0
python-3.x,importerror,dask-distributed
I resolved the error, there were some outdated packages including imageio which needed upgrade to work with dask.distributed and dask.dataframe.
I am trying to import dask.distributed package, but I keep getting this error: ImportError: cannot import name 'collections_to_dsk'. Help is appreciated.
0
1
311
0
49,902,116
0
0
0
0
2
false
0
2018-03-30T07:13:00.000
0
2
0
CountVectorizer in Python
49,570,046
0
python,tf-idf,text-classification,countvectorizer,tfidfvectorizer
You could easily just concatenate these matrices and other feature columns to build one very large matrix. However, be aware that concatenating the matrix from email body and email subject will probably create an incredibly sparse matrix. When you then add other features you might risk to "water down" your other featur...
I am working on a problem in which I have to predict whether a sent email from a company is opened or not and if it is opened, I have to predict whether the recipient clicked on the given link or not. I have a data set with the following features: Total links inside the emai` Total internal links inside the email Num...
0
1
82
0
49,936,932
0
0
0
0
2
false
0
2018-03-30T07:13:00.000
0
2
0
CountVectorizer in Python
49,570,046
0
python,tf-idf,text-classification,countvectorizer,tfidfvectorizer
Your problem is you have two large sparse feature vectors (email body and subject) and also small dense feature vectors. Here is my simple suggestion: (Jerome's idea) Reduce the dimension of email body and subject (via PCA, AutoEncoder, CBOW, Doc2Vec, PLSA, or LDA) so that you will end up with a dense feature vector. ...
I am working on a problem in which I have to predict whether a sent email from a company is opened or not and if it is opened, I have to predict whether the recipient clicked on the given link or not. I have a data set with the following features: Total links inside the emai` Total internal links inside the email Num...
0
1
82
0
49,577,047
0
1
0
0
1
true
1
2018-03-30T14:45:00.000
1
2
0
How to compare date (yyyy-mm-dd) with year-Quarter (yyyyQQ) in python
49,576,487
1.2
sql,python-3.x,pandas
Once you have the month in a variable mon, you can use the following code to get the quarter: for mon in range(1, 13): print((mon - 1)//3 + 1) which returns 1 for months 1-3, 2 for months 4-6, 3 for months 7-9 and 4 for months 10-12.
I am writing a sql query using pandas within python. In the where clause I need to compare a date column (say review date 2016-10-21) with this value '2016Q4'. In other words if the review dates fall in or after Q4 in 2016 then they will be selected. Now how do I convert the review date to something comparable to 'yyyy...
0
1
798
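As a runnable check of the month-to-quarter arithmetic, combined with the 'yyyyQq' label the question wants to compare against:

```python
from datetime import date

def to_quarter_label(d):
    """Format a date as 'YYYYQq', e.g. 2016-10-21 -> '2016Q4'."""
    quarter = (d.month - 1) // 3 + 1
    return "%dQ%d" % (d.year, quarter)

label = to_quarter_label(date(2016, 10, 21))
```

Because the year comes first and both parts are fixed-width, these labels also compare correctly as plain strings ('2017Q1' > '2016Q4'), which makes the where-clause comparison straightforward.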
0
49,577,580
0
0
1
0
1
false
0
2018-03-30T14:47:00.000
2
1
0
Quick way to classify if an image contains text or not
49,576,528
0.379949
python,classification,ocr,tesseract,text-extraction
Unfortunately there is no way to tell if an image has text in it, without performing OCR of some kind on it. You could build a machine learning model that handles this, however keep in mind it would still need to process the image as well.
I have millions of images, and I am able to use OCR with pytesseract to perform descent text extraction, but it takes too long to process all of the images. Thus I would like to determine if an image simply contains text or not, and if it doesn't, i wouldn't have to perform OCR on it. Ideally this method would have a...
0
1
285
0
49,597,211
0
0
1
0
2
true
3
2018-03-31T04:10:00.000
0
2
0
Measurement for intersection of 2 irregular shaped 3d object
49,584,153
1.2
python,3d,computational-geometry,bin-packing
A sample-based approach is what I'd try first. Generate a bunch of points in the unioned bounding AABB, and divide the number of points in A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a gi...
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there a...
0
1
1,361
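The sample-based approach above can be sketched on shapes where membership tests are cheap; here two axis-aligned boxes stand in for the meshes (a real mesh would need a point-in-mesh test such as ray casting, as the answer notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_a(p):  # box [0, 1]^3 (stand-in for object A)
    return np.all((p >= 0.0) & (p <= 1.0), axis=1)

def in_b(p):  # box [0.5, 1.5]^3, so the true overlap volume is 0.5**3
    return np.all((p >= 0.5) & (p <= 1.5), axis=1)

# Sample the union's bounding box [0, 1.5]^3 and estimate |A and B| / |A or B|.
pts = rng.uniform(0.0, 1.5, size=(200_000, 3))
a, b = in_a(pts), in_b(pts)
overlap_ratio = np.sum(a & b) / np.sum(a | b)
```

The exact value here is (0.5**3) / (1 + 1 - 0.5**3) = 0.0667, so the estimate's error shrinks with the sample count.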
0
49,688,037
0
0
1
0
2
false
3
2018-03-31T04:10:00.000
0
2
0
Measurement for intersection of 2 irregular shaped 3d object
49,584,153
0
python,3d,computational-geometry,bin-packing
By straight voxelization: If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel. Then for every vertex of the mesh, set the bit of the cell it is included in (this ju...
I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. I am wondering if there a...
0
1
1,361
0
49,594,057
0
0
0
0
1
true
0
2018-04-01T01:28:00.000
1
1
0
Neural Network - Input Normalization
49,593,985
1.2
python,tensorflow,machine-learning,neural-network,deep-learning
A large number of features makes it easier to parallelize the normalization of the dataset. This is not really an issue. Normalization on large datasets would be easily GPU accelerated, and it would be quite fast. Even for large datasets like you are describing. One of my frameworks that I have written can normalize th...
It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales. In its theory, normalization is easy to understand. But I wonder how this is done if the training data set is very large, say for 1 million training examples..? If # f...
0
1
1,045
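For a training set too large to hold in memory, the per-feature mean and variance can be accumulated chunk by chunk with running sums, so normalization never needs the whole dataset at once. A NumPy sketch (the chunking here simulates reading from disk):

```python
import numpy as np

def streaming_mean_std(chunks):
    """Accumulate per-feature mean/std over an iterable of 2-D chunks."""
    n, s, ss = 0, 0.0, 0.0
    for chunk in chunks:
        n += chunk.shape[0]
        s = s + chunk.sum(axis=0)          # running sum per feature
        ss = ss + (chunk ** 2).sum(axis=0)  # running sum of squares
    mean = s / n
    std = np.sqrt(ss / n - mean ** 2)
    return mean, std

rng = np.random.default_rng(1)
data = rng.normal(5.0, 2.0, size=(10_000, 3))
mean, std = streaming_mean_std(np.array_split(data, 10))
```

The sum-of-squares formula can lose precision when the mean is huge relative to the spread; Welford's online algorithm is the numerically safer variant in that case.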
0
50,748,457
0
0
0
0
1
true
1
2018-04-02T00:11:00.000
0
1
0
pyspark read csv file multiLine option not working for records which has newline spark2.3 and spark2.2
49,603,834
1.2
python-3.x,apache-spark,pyspark,spark-dataframe
I created my own Hadoop custom record reader and was able to read the file by invoking the API: spark.sparkContext.newAPIHadoopFile(file_path,'com.test.multi.reader.CustomFileFormat','org.apache.hadoop.io.LongWritable','org.apache.hadoop.io.Text',conf=conf) And in the custom record reader I implemented the logic to handle t...
I am trying to read the dat file using pyspark csv reader and it contains newline character ("\n") as part of the data. Spark is unable to read this file as single column, rather treating it as new row. I tried using the "multiLine" option while reading , but still its not working. spark.read.csv(file_path, schema=sc...
0
1
648
0
49,605,405
0
0
0
0
1
true
52
2018-04-02T04:14:00.000
50
4
0
Does Numpy automatically detect and use GPU?
49,605,231
1.2
python,numpy,gpu
Does Numpy/Python automatically detect the presence of GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? No. Or do I have code in a specific way to exploit the GPU for fast computation? Yes. Search for Numba, CuPy, Theano, PyTorch or PyCUDA for different paradigms fo...
I have a few basic questions about using Numpy with GPU (nvidia GTX 1080 Ti). I'm new to GPU, and would like to make sure I'm properly using the GPU to accelerate Numpy/Python. I searched on the internet for a while, but didn't find a simple tutorial that addressed my questions. I'd appreciate it if someone can give...
0
1
49,124
0
49,608,006
0
0
0
0
1
false
0
2018-04-02T08:27:00.000
0
1
0
Casting a numpy array into a (different) pre-allocated array
49,607,824
0
python,numpy
As Paul Panzer commented, this can be done simply by B[...] = A.
Suppose I have an array A of dtype int32, and I want to cast it into float64. The standard way to do this (that I know) is A.astype('float64'). But this allocates a new array for the result. If I run this command repeatedly (with different arrays of the same shape), each time using the result and discarding it shortly...
0
1
76
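The in-place assignment mentioned above, spelled out: NumPy casts on assignment, so writing into a pre-allocated float64 array avoids allocating a new result on every call:

```python
import numpy as np

a = np.arange(5, dtype=np.int32)
b = np.empty_like(a, dtype=np.float64)  # allocated once, reused thereafter

b[...] = a  # casts int32 -> float64 in place; no new array is created
```

In a loop, only the b[...] = a line repeats; the allocation cost is paid a single time up front.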
0
49,613,345
0
0
0
0
1
false
1
2018-04-02T14:12:00.000
0
1
0
How to use scikit-learn metrics in CNTK?
49,612,908
0
python,neural-network,cntk
Unless this metric is already implemented in CNTK, implement your own custom "metric" function in whatever format CNTK requires, and have it pass the inputs on to scikit-learn's metric function.
I wish to use classification metrics like matthews_corrcoef as a metric to a neural network built with CNTK. The way I could find as of now was to evaluate the value by passing the predictions and label as shown matthews_corrcoef(cntk.argmax(y_true, axis=-1).eval(), cntk.argmax(y_pred, axis=-1).eval()) Ideally I'd like...
0
1
68
0
49,623,125
0
1
0
0
1
false
0
2018-04-02T22:28:00.000
0
3
0
python pair multiple field entries from csv
49,619,655
0
python,csv,text
First, get a distinct list of all breakfast items. Pseudocode like the following: iterate through each line, collecting items and persons into two separate lists; take the set of each, giving persons and items; then: counter = 1; for person in persons: for item in items: print "breakfast_", counter; print person, item
Trying to take data from a csv like this: col1 col2 eggs sara bacon john ham betty The number of items in each column can vary and may not be the same. Col1 may have 25 and col2 may have 3. Or the reverse, more or less. And loop through each entry so its output into a text file like this breakfast_1 breakfast_i...
0
1
50
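The pseudocode in the answer above can be turned into a runnable sketch (the column data below is hypothetical; the original CSV layout may differ):

```python
# Cross-pair every distinct person with every distinct item,
# numbering each pairing as breakfast_1, breakfast_2, ...
items = ["eggs", "bacon", "ham"]       # distinct values from col1
persons = ["sara", "john", "betty"]    # distinct values from col2

lines = []
counter = 1
for person in persons:
    for item in items:
        lines.append("breakfast_%d" % counter)
        lines.append("%s %s" % (person, item))
        counter += 1

print("\n".join(lines))
```

With unequal column lengths (25 vs. 3, say) the nested loops simply produce 25 × 3 pairings; no padding is needed.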
0
49,630,619
0
0
0
0
1
false
1
2018-04-03T10:30:00.000
0
1
0
Distance metric for n binary vectors
49,627,823
0
python,machine-learning,similarity,cosine-similarity
There are two concepts relevant to your question, which you should consider separately. Similarity Measure: Independent of your scoring mechanism, you should find a similarity measure which suits your data best. It can be an Euclidean distance (not suitable for a 1500 dimensional space), a cosine (dot product based) di...
I have n and m binary vectors(of length 1500) from set A and B respectively. I need a metric that can say how similar (kind of distance metric) all those n vectors and m vectors are. The output should be total_distance_of_n_vectors and total_distance_of_m_vectors. And if total_distance_of_n_vectors > total_distance_of_...
0
1
1,262
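As one concrete choice of similarity measure for binary vectors, cosine similarity can be computed without external libraries (a minimal sketch; aggregating per-vector scores into a per-set total is one possible reading of the question):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length binary (0/1) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(u))  # for 0/1 vectors, sum(u) equals the squared norm
    norm_v = math.sqrt(sum(v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

x = [1, 0, 1, 1]
y = [1, 1, 1, 0]
print(cosine_similarity(x, y))  # 2 / (sqrt(3) * sqrt(3)) = 2/3
```

For 1500-dimensional binary vectors, cosine (or Jaccard) is generally a better fit than Euclidean distance, as the answer notes.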
0
49,713,880
0
1
0
0
1
false
0
2018-04-03T19:58:00.000
3
2
0
numpy version creating issue. python 2.7 already installed
49,638,201
0.291313
macos,numpy,matplotlib,ipython,homebrew
I just hit the same problem. It's an issue with the numpy preinstalled with Python having a conflicting version number (matplotlib requires >=1.5, but 1.8.0rc1 was found). Try running brew install python2 to upgrade your Python, which may solve this issue.
Getting few "package missing" errors while installing ipython on High Sierra. matplotlib 1.3.1 has requirement numpy>=1.5, but you'll have numpy 1.8.0rc1 which is incompatible.
0
1
1,097
0
55,976,519
0
0
0
0
2
false
173
2018-04-04T05:09:00.000
16
5
0
What's the difference between reshape and view in pytorch?
49,643,225
1
python,pytorch
Tensor.reshape() is more robust. It will work on any tensor, while Tensor.view() works only on a tensor t where t.is_contiguous()==True. Explaining non-contiguous vs. contiguous tensors is another story, but you can always make the tensor t contiguous by calling t.contiguous(), and then you can call view() without the erro...
In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in pytorch, people use torch.view(...) for the same purpose, but at the same time, there is also a torch.reshape(...) existing. So I am wondering what the differences are between them and when I should use either of them?
0
1
88,597
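The contiguity issue behind view() can be illustrated with a NumPy analogy (a sketch only; PyTorch semantics differ in that view() raises an error where NumPy's reshape silently copies):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # contiguous
b = a.T                          # transpose: same data, non-contiguous

print(a.flags['C_CONTIGUOUS'])   # True
print(b.flags['C_CONTIGUOUS'])   # False

# NumPy's reshape on the non-contiguous array succeeds by copying;
# torch.Tensor.view() would refuse here, while torch.reshape()
# copies like NumPy does.
c = b.reshape(6)
print(np.shares_memory(a, c))    # False: a copy was made
```

This mirrors the rule stated above: view() never copies, so it demands contiguous memory; reshape() falls back to a copy when it must.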
0
69,676,338
0
0
0
0
2
false
173
2018-04-04T05:09:00.000
0
5
0
What's the difference between reshape and view in pytorch?
49,643,225
0
python,pytorch
I would say the answers here are technically correct, but there's another reason for the existence of reshape. PyTorch is usually considered more convenient than other frameworks because it is closer to Python and NumPy. It's interesting that the question involves numpy. Let's look into size and shape in PyTorch. size is a func...
In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in pytorch, people use torch.view(...) for the same purpose, but at the same time, there is also a torch.reshape(...) existing. So I am wondering what the differences are between them and when I should use either of them?
0
1
88,597
0
49,651,837
0
0
0
1
1
false
1
2018-04-04T12:47:00.000
0
2
0
Pandas Dataframe.to_sql wrongly inserting into more than one table (postgresql)
49,651,442
0
python-3.x,postgresql,pandas
Removing INHERITS (tablename) on the slave table (creating it again without INHERITS) seems to have done the trick. Just out of curiosity: why did it matter? I thought inheritance only propagated columns and dtypes, not the actual data. (In PostgreSQL, a SELECT on a parent table also returns the rows of its child tables by default, which is why data inserted into the inheriting table showed up in both.)
df.to_sql(name='hourly', con=engine, if_exists='append', index=False) It inserts data not only to table 'hourly', but also to table 'margin' - I execute this particular line only. It's Postgresql 10. While Creating table 'hourly', I inherited column names and dtypes from table 'margin'. Is it something wrong with the ...
0
1
588
0
49,656,081
0
0
0
1
1
false
0
2018-04-04T13:46:00.000
1
2
0
how to read text from excel file in python pandas?
49,652,693
0.099668
excel,python-3.x,pandas,import
Try converting the file from .xlsx to .csv. I had the same problem with text columns, so I tried converting to CSV (comma delimited) and it worked. Not very helpful, but worth a try.
I am working on a excel file with large text data. 2 columns have lot of text data. Like descriptions, job duties. When i import my file in python df=pd.read_excel("form1.xlsx"). It shows the columns with text data as NaN. How do I import all the text in the columns ? I want to do analysis on job title , description ...
0
1
2,378
0
49,678,287
0
0
0
0
1
false
0
2018-04-04T20:30:00.000
0
1
0
Ensemble (Combine) multiple deep learning regression models which already have dropout layers
49,659,892
0
python,tensorflow,regression,prediction,robust
I think the presence of dropout is irrelevant to what you want to do. Ensembling should work just fine with dropout.
Currently I have multiple trained models for regression task, each model is of the same architecture but while training, I have dropout layer, to improve the performance, is that still possible for me to combine those trained models and calculate the mean of the weights as the combined, new model? I just heard that the...
0
1
137
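A common way to combine such models is to average their predictions rather than their weights (a minimal NumPy sketch; the prediction arrays below are made-up stand-ins for the models' outputs):

```python
import numpy as np

# Hypothetical regression predictions from three separately trained
# models (each with dropout during training) on the same five inputs.
preds = [
    np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    np.array([1.2, 1.8, 3.1, 3.9, 5.2]),
    np.array([0.8, 2.2, 2.9, 4.1, 4.8]),
]

# Ensemble prediction: element-wise mean across models.
ensemble = np.mean(preds, axis=0)
print(ensemble.tolist())
```

Averaging predictions sidesteps the weight-averaging problem entirely: dropout (and any other training-time regularizer) is irrelevant at this stage, since only each model's final outputs are combined.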
0
49,662,938
0
0
0
0
1
false
1
2018-04-05T01:37:00.000
1
3
0
In Keras, how to send each item in a batch through a model?
49,662,869
0.066568
python,tensorflow,keras
At the moment you are returning a 3D array. Add a Flatten() layer to convert the array to 2D, and then add a Dense(1). This should output (batch_size, 1).
I have a model that starts with a Conv2D layer and so it must take input of shape (samples, rows, cols, channels) (and the model must ultimately output a shape of (1)). However, for my purposes one full unit of input needs to be some (fixed) number of samples, so the overall input shape sent into this model when given ...
0
1
587
0
64,231,036
0
0
0
0
1
false
27
2018-04-05T06:45:00.000
2
4
0
How to add report_tensor_allocations_upon_oom to RunOptions in Keras
49,665,757
0.099668
python,tensorflow,keras,gpu
OOM means out of memory; maybe it is using more memory at that time. Decrease batch_size significantly. I set it to 16, then it worked fine.
I'm trying to train a neural net on a GPU using Keras and am getting a "Resource exhausted: OOM when allocating tensor" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM. The error message comes with a hint that suggests this: Hint: If yo...
0
1
26,528
0
49,669,130
0
0
0
0
1
false
0
2018-04-05T08:57:00.000
0
2
0
Feed the output of a CNN in a LSTM
49,668,169
0
python,tensorflow,deep-learning,lstm
If you merge several small sequences from different videos to form a batch, the output of the last layer of your model (the RNN) should already be [batch_size, window_size, num_classes]. Basically, you want to wrap your CNN with reshape layers which will concatenate the frames from each batch: input -> [batch_size, wi...
It is the first time that I am working with the LSTM networks. I have a video with a frame rate of 30 fps. I have a CNN network (AlexNet based) and I want to feed the last layer of my CNN network into the recurrent network (I am using tensorflow). Supposing that my batch_size=30, so equal to the fps, and I want to have...
0
1
2,027
0
49,686,565
0
0
0
0
1
false
5
2018-04-05T13:50:00.000
8
1
0
How is the output h_n of an RNN (nn.LSTM, nn.GRU, etc.) in PyTorch structured?
49,674,079
1
python,neural-network,deep-learning,lstm,pytorch
The implementation of LSTM and GRU in pytorch automatically includes the possibility of stacked layers of LSTMs and GRUs. You give this with the keyword argument nn.LSTM(num_layers=num_layers). num_layers is the number of stacked LSTMs (or GRUs) that you have. The default value is 1, which gives you the basic LSTM. nu...
The docs say h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len Now, the batch and hidden_size dimensions are pretty much self-explanatory. The first dimension remains a mystery, though. I assume, that the hidden states of all "last cells" of all layers ...
0
1
1,829
0
49,676,768
0
0
0
0
1
false
0
2018-04-05T15:58:00.000
0
1
0
Exception: Python in worker has different version 2.7 than that in driver 2.6, PySpark cannot run with different minor versions
49,676,701
0
python,apache-spark,pyspark
Install at least Python 2.7 on each node and set the PYSPARK_PYTHON environment variable to point to the required installation. Spark doesn't support mixed environments and no longer supports Python 2.6.
We have a hadoop cluster of 625 nodes. But some of them are centos 6 (python 2.6) and some are centos 7 (python 2.7). So how can I resolve this as I am getting this error constantly.
0
1
562
0
49,817,588
0
1
0
0
1
false
0
2018-04-06T05:00:00.000
0
1
0
Using Jupyter Notebook to plot data from rosbag files
49,685,635
0
python,jupyter-notebook,ros
Found this answer in one of the older .ipynb files. Plots can be obtained with the Plotly library. It really doesn't matter how many bag files are scanned for topics and plotted. Each bag file can be converted separately using the 'bag_to_dataframe' function from the "rosbag_pandas" package. Even in case of sim...
I have multiple rosbag files with a ton of data, and what I would like to do is analyze these bag files using Jupyter Notebook, but the problem is that each bag has a different set of data parameters. So I have created msg files to subscribe to data from each bag file. Some msg files have the same variables since those...
0
1
690