Dataset schema (column name, dtype, and observed value range):

| Column | Dtype | Range |
| --- | --- | --- |
| GUI and Desktop Applications | int64 | 0 to 1 |
| A_Id | int64 | 5.3k to 72.5M |
| Networking and APIs | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| Available Count | int64 | 1 to 13 |
| is_accepted | bool | 2 classes |
| Q_Score | int64 | 0 to 1.72k |
| CreationDate | string | length 23 to 23 |
| Users Score | int64 | -11 to 327 |
| AnswerCount | int64 | 1 to 31 |
| System Administration and DevOps | int64 | 0 to 1 |
| Title | string | length 15 to 149 |
| Q_Id | int64 | 5.14k to 60M |
| Score | float64 | -1 to 1.2 |
| Tags | string | length 6 to 90 |
| Answer | string | length 18 to 5.54k |
| Question | string | length 49 to 9.42k |
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 1 to 1 |
| ViewCount | int64 | 7 to 3.27M |
Title: Spyder cannot find module named 'pandas_datareader'
Q_Id: 52,175,718 | A_Id: 71,438,963 | CreationDate: 2018-09-05T01:01:00.000 | ViewCount: 2,865
Tags: python,pandas,module,spyder,datareader
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: First off I would like to say that I am aware that this question has been asked before, however, none of the other posts have offered a solution that resolves the problem. I am trying to use pandas-datareader to grab stock prices from the internet. I am using windows with python version 3.6. I first installed pandas-da...
Answer: I tried conda install pandas-datareader in Anaconda Prompt. It was installed and after my computer restarted, pandas-datareader worked in spyder 3.6.

Title: Could Keras prefetch data like tensorflow Dataset?
Q_Id: 52,176,792 | A_Id: 56,251,858 | CreationDate: 2018-09-05T03:51:00.000 | ViewCount: 2,301
Tags: python,tensorflow,keras,dataset
Categories: Data Science and Machine Learning | Q_Score: 13 | Users Score: 6 | Score: 1 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: In TensorFlow's Dataset API, we can use dataset.prefetch(buffer_size=xxx) to preload other batches' data while GPU is processing the current batch's data, therefore, I can make full use of GPU. I'm going to use Keras, and wonder if keras has a similar API for me to make full use of GPU, instead of serial execution: rea...
Answer: If you call fit_generator with workers > 1, use_multiprocessing=True, it will prefetch queue_size batches. From docs: max_queue_size: Integer. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
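The fit_generator arguments the answer mentions can be sketched end to end. A minimal sketch, assuming the standalone Keras 2.x API of the question's era (fit_generator was later folded into fit); the toy generator stands in for real CPU-side preprocessing:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def batch_generator(batch_size=32):
    # Toy generator standing in for real CPU-side preprocessing work.
    while True:
        x = np.random.rand(batch_size, 10)
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='adam', loss='binary_crossentropy')

# workers > 1 prefetches batches in the background while the GPU trains;
# max_queue_size bounds how many batches wait in the queue (default 10).
model.fit_generator(batch_generator(), steps_per_epoch=100, epochs=2,
                    workers=4, use_multiprocessing=True, max_queue_size=10)
```

A keras.utils.Sequence subclass is the safer choice with use_multiprocessing=True, since plain generators cannot be shared across processes.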
Title: Multi-dimensional tensors as input to rnn in tensorflow (tf.contrib.rnn.RNNCell)
Q_Id: 52,180,502 | A_Id: 52,183,726 | CreationDate: 2018-09-05T08:25:00.000 | ViewCount: 155
Tags: python,tensorflow,deep-learning,computer-vision,rnn
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: From tensorflow documentation about tf.contrib.rnn.RNNCell: "This definition of cell differs from the definition used in the literature. In the literature, 'cell' refers to an object with a single scalar output. This definition refers to a horizontal array of such units." It seems, that rnn cell only accepts vectors as...
Answer: As you have said RNN only accept as input a Tensor like [batch_size, sequence_lentgh, features]. In order to use RNN from tensorflow you will have to extract the features with a CNN for each frame and convert your CNN output data to a tensor that follows [batch_size, sequence_lentgh, features] shape in order to feed it...

Title: Is it good idea to repartition 50 million records data in dataframe? If yes then someone please tell me the appropriate way of doing this
Q_Id: 52,191,056 | A_Id: 52,200,967 | CreationDate: 2018-09-05T18:03:00.000 | ViewCount: 64
Tags: python,database,dataframe,pyspark,hadoop2
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: We are going to handle Big Data (~50 million records) in our organization. We are partitioning data on the basis of date and other some parameters, but data is not equally partitioned. Can we do repartition on it for good performance?
Answer: Usually, partitioning is a good idea and as @Karthik already said, often the date is not the best idea. In my experience it always made sense to partition your data based on the amount of workers you have. So ideally your partition size is a multiple of your workers. We normally use 120 partitions, as we have 24 worker...

Title: Is it good idea to repartition 50 million records data in dataframe? If yes then someone please tell me the appropriate way of doing this
Q_Id: 52,191,056 | A_Id: 52,194,219 | CreationDate: 2018-09-05T18:03:00.000 | ViewCount: 64
Tags: python,database,dataframe,pyspark,hadoop2
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: (same as the previous entry)
Answer: Depending on your machine try maintaining a fixed number of partitions. It is always a good idea to partition but in most cases, it's not a good idea to partition based on date(Not sure because I don't know the nature of your data).

Title: H2O Word2Vec inconsistent vectors
Q_Id: 52,210,521 | A_Id: 52,212,217 | CreationDate: 2018-09-06T18:54:00.000 | ViewCount: 140
Tags: python,word2vec,h2o
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I have a general question on a specific topic. I am using the vectors generated by Word2Vec to feed as features into my Distributed Random Forest model for classifying some records. I have millions of records and am receiving new records on a daily basis. Because of the new records coming in I want the new records to ...
Answer: word2vec in h2o-3 uses hogwild implementation - the model parameters are updated concurrently from multiple threads and it is not possible to guarantee the reproducibility in this implementation. How big is your text corpus? At the cost of a slowdown of the model training you could get reproducible result with limiting...

Title: why do i get nan loss value in training discriminator and generator of GAN?
Q_Id: 52,211,665 | A_Id: 52,211,776 | CreationDate: 2018-09-06T20:24:00.000 | ViewCount: 5,113
Tags: python,tensorflow,generative-adversarial-network
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 4 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I have saved my text vectors by using gensim library which consists of some negative numbers. will it effect the training? If not then why am i getting nan loss value first for discriminator and then for both discriminator and generator after certain steps of training?
Answer: There are several reasons for a NaN loss and why models diverge. Most common ones I've seen are: Your learning rate is too high. If this is the case, the loss increases and then diverges to infinity. You are getting a division by zero error. If this is the case, you can add a small number like 1e-8 to your output prob...

Title: Python Pandas reading UTF-8 characters
Q_Id: 52,220,676 | A_Id: 52,221,695 | CreationDate: 2018-09-07T10:33:00.000 | ViewCount: 865
Tags: python-2.7,pandas
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I am trying to read an Excel file containing the Swedish characters åäö. I am importing the Excel file with pd.read_excel(path, sheetname, encoding='utf8') Works fine to import it and I can see the åäö characters, but when I work with the data for example creating a new variable df['East'] = df['Öst'] + 50 I receive a...
Answer: Double Check if Excel is saved as UTF-8 In Excel 2016 When saving as: click More Options > Tools > Web Options > Encoding > Save this document as ... (pick UTF-8 from the list) Saving Excel as csv or even txt helps in many cases too. If csv or txt exported from Excel also doesn't open/work properly open it in notepad a...

Title: How to use C++ to implement SimilarityTransform in scikit-image without using estimateRigidTransform in OpenCV?
Q_Id: 52,222,864 | A_Id: 52,454,922 | CreationDate: 2018-09-07T12:44:00.000 | ViewCount: 748
Tags: python,c++,image,opencv
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: When I try to translate a project written in python to C++. I have to implement the function SimilarityTransform in the package scikit-image. I find estimateRigidTransform in OpenCV will do the same thing. But estimateRigidTransform will return empty matrix somtimes. So, Is there some method that which will works bette...
Answer: I found a function in eigen3, can do the same thing as the python code does.

Title: Merge multiple dataframes based on a common column
Q_Id: 52,223,045 | A_Id: 52,223,093 | CreationDate: 2018-09-07T12:55:00.000 | ViewCount: 29,412
Tags: python,pandas,dataframe,merge,concat
Categories: Data Science and Machine Learning | Q_Score: 21 | Users Score: 4 | Score: 0.197375 | is_accepted: false | AnswerCount: 4 | Available Count: 1
Question: I have Three dataframes. All of them have a common column and I need to merge them based on the common column without missing any data Input >>>df1 0 Col1 Col2 Col3 1 data1 3 4 2 data2 4 3 3 data3 2 3 4 data4 2 4 5 data5 1 4 >>>df2 0 Col1 Col4 Col5 1 data1 7 4 2 data2 6 9...
Answer: You can do df1.merge(df2, how='left', left_on='Col1', right_on='Col1').merge(df3, how='left', left_on='Col1', right_on='Col1')
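The chained-merge one-liner in the answer can be made concrete with small frames mirroring the question's layout. A minimal sketch; since the key column has the same name in every frame, on='Col1' is equivalent to the left_on/right_on pair in the answer:

```python
import pandas as pd

df1 = pd.DataFrame({'Col1': ['data1', 'data2'], 'Col2': [3, 4], 'Col3': [4, 3]})
df2 = pd.DataFrame({'Col1': ['data1', 'data2'], 'Col4': [7, 6], 'Col5': [4, 9]})
df3 = pd.DataFrame({'Col1': ['data1', 'data2'], 'Col6': [1, 2]})

# Chain left merges on the shared key, as the answer suggests.
merged = df1.merge(df2, how='left', on='Col1').merge(df3, how='left', on='Col1')
print(merged)
```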
Title: Get classification score of hypothetical detection box
Q_Id: 52,231,108 | A_Id: 52,245,736 | CreationDate: 2018-09-08T00:53:00.000 | ViewCount: 32
Tags: python,tensorflow,deep-learning
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: Is there anyway to assert the presence of a detection box in an image and obtain the classification score of said hypothetical box? I am working with a tensorflow object detection graph and want to refine it's accuracy with a little trickery; by making the claim that there are more (N) objects in a given image than it...
Answer: With tensorflow you cannot do that. What you are saying is almost like a region proposal and rest of the pipeline on which different platforms like tensorflow, yolo are built to arrive at object detection. You are proposing to a built a different platform by asking what you are asking.

Title: Graph traversal, maybe another type of mathematics?
Q_Id: 52,231,442 | A_Id: 52,236,009 | CreationDate: 2018-09-08T02:24:00.000 | ViewCount: 81
Tags: python,algorithm,set,graph-algorithm,graph-traversal
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs: (1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My ...
Answer: If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all. I guess you meant to write "maximal amount of numbers". If I understand your problem correctly, it sounds like we can translated it to the following problem: Given a set of n numbers (1,..,n), what is t...

Title: Graph traversal, maybe another type of mathematics?
Q_Id: 52,231,442 | A_Id: 52,249,740 | CreationDate: 2018-09-08T02:24:00.000 | ViewCount: 81
Tags: python,algorithm,set,graph-algorithm,graph-traversal
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: (same as the previous entry)
Answer: In case anyone cares in the future, the solution is called a blossom algorithm.

Title: How to change channel dimension of an image?
Q_Id: 52,239,164 | A_Id: 52,291,702 | CreationDate: 2018-09-08T20:30:00.000 | ViewCount: 355
Tags: python-3.x,numpy
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: Suppose I have numpy array with shape = [y,z,21] where y = image_width, z= image_height. The above array represents image with 21 channels. How should I convert it to size = [ y,z,3 ] ?
Answer: You should set a score threshold to map every pixel in the image to one class, and every class has a color (which has RGB channels), so every pixel is a RGB value for its class.

Title: KMeans clustering unbalanced data
Q_Id: 52,253,787 | A_Id: 52,258,663 | CreationDate: 2018-09-10T08:26:00.000 | ViewCount: 3,624
Tags: python,cluster-analysis,k-means,data-science,feature-engineering
Categories: Data Science and Machine Learning | Q_Score: 3 | Users Score: 5 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I have a set of data with 50 features (c1, c2, c3 ...), with over 80k rows. Each row contains normalised numerical values (ranging 0-1). It is actually a normalised dummy variable, whereby some rows have only few features, 3-4 (i.e. 0 is assigned if there is no value). Most rows have about 10-20 features. I used KMeans...
Answer: It is not part of the k-means objective to produce balanced clusters. In fact, solutions with balanced clusters can be arbitrarily bad (just consider a dataset with duplicates). K-means minimizes the sum-of-squares, and putting these objects into one cluster seems to be beneficial. What you see is the typical effect of...

Title: How to apply avg function to DataFrame series monthly?
Q_Id: 52,254,994 | A_Id: 52,255,420 | CreationDate: 2018-09-10T09:41:00.000 | ViewCount: 63
Tags: python,pandas,dataframe
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I have a DataFrame series with day resolution. I want to transform the series to a series of monthly averages. Ofcourse I can apply rolling mean and select only every 30th of means but it would not precise. I want to get series which contains mean value from the previous month on every first day of a month. For exampl...
Answer: data.resample('M', how='mean')
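The answer's resample('M', how='mean') uses the old pandas signature; in current pandas the aggregation is a chained method call. A minimal sketch with a synthetic daily series:

```python
import pandas as pd
import numpy as np

# Daily series covering three months.
idx = pd.date_range('2018-01-01', '2018-03-31', freq='D')
data = pd.Series(np.arange(len(idx), dtype=float), index=idx)

# Modern spelling of the answer's resample('M', how='mean').
monthly = data.resample('M').mean()
print(monthly)
```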
Title: how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding?
Q_Id: 52,256,809 | A_Id: 55,653,445 | CreationDate: 2018-09-10T11:25:00.000 | ViewCount: 267
Tags: python,tensorflow
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I am trying to decode one character (represented as c-dimensional one hot vectors) at a time with tensorflow seq2seq model implementations. I am not using any embedding in my case. Now I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires "embedding: A callable that takes a vector tensor of ids (argmax...
Answer: embedding = tf.Variable(tf.random_uniform([c-dimensional , EMBEDDING_DIM])) here you can create the embedding for you own model. and this will be trained during your training process to give a vector for your own input. if you don't want to use it you just can create a matrix where is every column of it is one hot vect...

Title: How to extract only ID photo from CV with pdfimages
Q_Id: 52,271,908 | A_Id: 52,279,819 | CreationDate: 2018-09-11T08:36:00.000 | ViewCount: 194
Tags: python,image,pdf,extract,pypdf
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: Hi I tried to use pdfimages to extract ID images from my pdf resume files. However for some files they return also the icon, table lines, border images which are totally irrelevant. Is there anyway I can limit it to only extract person photo? I am thinking if we can define a certain size constraints on the output?
Answer: You need a way of differentiating images found in the PDF in order to extract the ones of interest. I believe you have the options of considering: Image characteristics such as Width, Height, Bits Per Component, ColorSpace Metadata information about the image (e.g. a XMP tag of interest) Facial recognition of the pers...

Title: Write binary numpy array of zeros and ones to file using cv2 or Pillow
Q_Id: 52,273,313 | A_Id: 52,287,805 | CreationDate: 2018-09-11T09:54:00.000 | ViewCount: 486
Tags: python-3.x,python-imaging-library,cv2
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: Is it possible to write binary numpy array containing 0 and 1 to file using opencv (cv2) or Pillow? I was using scipy.misc.imsave and it worked well, but i read it's depreciated so i wanted to switch to other modules, but when trying to write such an array i see only black image. I need to have 0/1 values, and not 0/25...
Answer: Since you are writing pixels with values (0, 0, 0) or (1, 1, 1) to the image you are seeing an image that is entirely black and almost-black, so it looks black. You can multiply your array by 255 to get an array of { (0, 0, 0), (255, 255, 255) } which would be black and white. When you read the image you can convert ba...
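The scaling trick from the accepted answer, sketched with Pillow on a synthetic mask (the file name is illustrative):

```python
import numpy as np
from PIL import Image

# Binary mask of zeros and ones.
mask = np.random.randint(0, 2, size=(64, 64), dtype=np.uint8)

# Scale {0, 1} to {0, 255} before writing, as the answer suggests;
# otherwise the saved image looks entirely black.
Image.fromarray(mask * 255).save('mask.png')

# Read back and convert to {0, 1} again.
restored = (np.array(Image.open('mask.png')) > 127).astype(np.uint8)
assert np.array_equal(restored, mask)
```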
Title: Can neuroevolution of augmenting topologies (NEAT) neural networks be built in TensorFlow?
Q_Id: 52,287,254 | A_Id: 53,675,625 | CreationDate: 2018-09-12T03:53:00.000 | ViewCount: 2,641
Tags: python,tensorflow,pytorch,neat
Categories: Data Science and Machine Learning | Q_Score: 6 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I am making a machine learning program for time series data analysis and using NEAT could help the work. I started to learn TensorFlow not long ago but it seems that the computational graphs in TensorFlow are usually fixed. Is there tools in TensorFlow to help build a dynamically evolving neural network? Or something l...
Answer: One way to make an evolving tensorflow network would be to use either hyperneat or the es-hyperneat algorithms instead of running the evolution on the individual networks in the species this instead evolves a "genome" that is actually cppn that encodes the phenotype neural nets. For the cppn you can use a feed forward ...

Title: sklearn learning_curve and StandardScaler
Q_Id: 52,302,047 | A_Id: 52,302,920 | CreationDate: 2018-09-12T19:22:00.000 | ViewCount: 374
Tags: python,scikit-learn
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I want to know if the sklearn.model_selection learning_curve can use or does use sklearn.preprocessing StandardScaler. I've looked over the implementation, but my skill level isn't up to par to come to a conclusion on my own. All tutorials on using learning_curve have you pass the entire data set to the learning_curv...
Answer: learning_curve does not implement StandardScaler on its own. You could create a Pipeline as your estimator where the first step is StandardScaler then whatever your estimator you're using as the next step. This way when you call learning_curve during each cv iteration you are training both the scaler and estimator on...
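A minimal sketch of the Pipeline approach the answer describes, on a built-in dataset; wrapping the scaler together with the estimator means each cross-validation split fits the scaler on its training fold only, avoiding leakage into the validation fold:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# The pipeline itself is the estimator handed to learning_curve.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
train_sizes, train_scores, valid_scores = learning_curve(pipe, X, y, cv=5)
print(train_sizes, valid_scores.mean(axis=1))
```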
Title: TensorFlow estimators vs manual/session approach
Q_Id: 52,302,352 | A_Id: 52,302,923 | CreationDate: 2018-09-12T19:44:00.000 | ViewCount: 471
Tags: python,tensorflow,deep-learning,tensorflow-estimator
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I am fairly new to deep learning and TensorFlow, and in the set of lectures from the course I am taking they go over two methods of employing TensorFlow: using estimators and using sessions. It seems like the estimators method is much easier to understand and simpler as it is similar to what I have done using the sklea...
Answer: A simple answer would be: Estimator hides some TensorFlow concepts, such as Graph and Session, from the user. This is best for newbies since it makes new learners be able to get started much easier (this is nothing to do with the type of dataset, just use tf.dataset API to write an input_fn is sufficient to provide inp...

Title: problem installing and importing modules in python
Q_Id: 52,316,354 | A_Id: 52,318,766 | CreationDate: 2018-09-13T14:52:00.000 | ViewCount: 3,551
Tags: python,numpy,opencv
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.049958 | is_accepted: false | AnswerCount: 4 | Available Count: 2
Question: I am installing python on windows10 and trying to install the opencv and numpy extentions in the command window. I get no error installing them and it says it is successfully installed. But when I try to check the installation and import cv2 it does not recognize it and give me the error: no module named cv2. can anybo...
Answer: I removed the Anaconda version on my machine, so I just have python 3.7 installed. I removed the python interpreter(Pycharm) and installed it again and the problem got fixed somehow!

Title: problem installing and importing modules in python
Q_Id: 52,316,354 | A_Id: 52,316,565 | CreationDate: 2018-09-13T14:52:00.000 | ViewCount: 3,551
Tags: python,numpy,opencv
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.049958 | is_accepted: false | AnswerCount: 4 | Available Count: 2
Question: (same as the previous entry)
Answer: Is it possible that you have 2 versions of python on your machine and your native pip is pointing to the other one? (e.g. you pip install opencv which installs opencv for python 2, but you are using python 3). If this is so, then use pip3 install opencv

Title: Some modules can be imported in python previously but now can only be imported in ipython2
Q_Id: 52,323,907 | A_Id: 52,332,078 | CreationDate: 2018-09-14T01:42:00.000 | ViewCount: 62
Tags: python,linux,python-2.7,numpy
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: Previously I installed pytorch,PIL,numpy... using pip. After that I installed python3. Thus ipython switched from python2 to python3. I have to use ipython2 to start python2 kernel. These modules still works well in ipython2, but when I run a python script using python, python2, python2.7, they all raise ImportError: ...
Answer: Make sure the python path that you given in bashrc is correct. Also it will be good to use conda environment to try out the same since there is confusion in python environments. For that you can follow the below steps: Create the environment and activate it using following commands: conda create -n test_env python=2.7 ...

Title: pip install face_recognition giving error
Q_Id: 52,332,268 | A_Id: 52,564,257 | CreationDate: 2018-09-14T12:41:00.000 | ViewCount: 35,770
Tags: python,face-recognition
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 10 | Score: 1 | is_accepted: false | AnswerCount: 4 | Available Count: 1
Question: RuntimeError: CMake must be installed to build the following extensions: dlib Failed building wheel for dlib Running setup.py clean for dlib Failed to build dlib
Answer: I ran into this issue as well. I am using windows and have a python environment that I am installing the requirements to. I ran pip install cmake , and then pip install dlib. I no longer received the error and successfully installed dlib.

Title: Matplotlib - How to strip extra whitespaces from a plot without needing to save it?
Q_Id: 52,334,185 | A_Id: 52,334,301 | CreationDate: 2018-09-14T14:37:00.000 | ViewCount: 76
Tags: python,matplotlib,whitespace
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I have made a plot, and I don't want extra whitespaces in my plot; the question is: How can I strip extra whitespaces from a plot? I know you can strip extra whitespaces from a plot when you save it; Then you just do this: plt.savefig('file_name.png', bbox_inches='tight') But I can't find any similar arguments you can...
Answer: The easiest way imho is to click on the button "configure subplots" and adjust the sliders because you see the result immediately. You could although call the tight_layout() function directly on plt bevor show()
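The tight_layout() call the answer mentions, in context; unlike bbox_inches='tight', it trims the padding in the displayed figure itself:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_xlabel('x')
ax.set_ylabel('y')

# Shrink margins before showing, rather than only at save time.
plt.tight_layout()
plt.show()
```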
Title: Python/SQL/Excel I have 12 datasets and I want to combine them to one representative set
Q_Id: 52,334,490 | A_Id: 52,335,074 | CreationDate: 2018-09-14T14:55:00.000 | ViewCount: 23
Tags: python,sql,excel,statistics
Categories: Database and SQL; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I'm trying to create a predictive curve using 12 different datasets of empirical data. Essentially I want to write a function that passes 2 variables (Number of Applications, Days) and generates a predictive curve based on the 12 datasets that i have. The datasets all have 60 days and have Number of Applications from 5...
Answer: It sounds like you want to break it all out into (60*12) rows with 3 columns: one recording the application number, another recording the time, and another recording the location. Then a model could dummy out each location as a predictor, and you could generate 12 simulated predictions, with uncertainty. Then, to get y...

Title: Is it tensorflow session running in parallel to the rest of my code?
Q_Id: 52,335,065 | A_Id: 52,596,253 | CreationDate: 2018-09-14T15:29:00.000 | ViewCount: 354
Tags: python,multithreading,tensorflow,parallel-processing,batch-processing
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 2
Question: I'm running my session on a GPU and I'm wondering if the 'session.run()' piece of code is running in parallel to my other code in my script. I use batch processing on the CPU prior to running 'session.run()' in a loop and would like to pipeline this processing with the execution on the GPU. Is this already satisfied i...
Answer: After some research I found out that 'session.run' is not running concurrently to your other code. Indeed, as Ujjwal suggested, the 'tf.data.Dataset' API is the best choice for pipelining batch preprocessing and GPU execution.

Title: Is tensorflow session running in parallel to the rest of my code?
Q_Id: 52,335,065 | A_Id: 52,335,207 | CreationDate: 2018-09-14T15:29:00.000 | ViewCount: 354
Tags: python,multithreading,tensorflow,parallel-processing,batch-processing
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: (same as the previous entry)
Answer: It entirely depends upon how you have written your code. This should be trivial to check, by checking out your CPU and GPU utilization simultanously I normally make use of tf.data.Dataset API. I use the get_next() method of an iterator to feed data to a network. CPU and GPU work in parallel in this case.

Title: AWS Glue - read from a sql server table and write to S3 as a custom CSV file
Q_Id: 52,336,996 | A_Id: 66,705,862 | CreationDate: 2018-09-14T17:50:00.000 | ViewCount: 1,571
Tags: python,python-2.7,amazon-web-services,amazon-s3,aws-glue
Categories: Database and SQL; Data Science and Machine Learning | Q_Score: 3 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I am working on Glue since january, and have worked multiple POC, production data lakes using AWS Glue / Databricks / EMR, etc. I have used AWS Glue to read data from S3 and perform ETL before loading to Redshift, Aurora, etc. I have a need now to read data from a source table which is on SQL SERVER, and fetch data, w...
Answer: This task fits AWS DMS (Data Migration Service) use case. DMS is designed to either migrate data from one data storage to another or keep them in sync. It can certainly keep in sync as well as transform your source (i.e., MSSQL) to your target (i.e., S3). There is one non-negligible constraint in your case thought. Ong...

Title: Pythonic way to cut specific elements from numpy array
Q_Id: 52,342,187 | A_Id: 52,344,172 | CreationDate: 2018-09-15T06:18:00.000 | ViewCount: 178
Tags: python,python-2.7,list,numpy
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 1 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I have a Python list (numpy array) and another list which contains the indices for the location of values from the first array which I want to keep. Is there a Pythonic way to do this? I know numpy.delete, but I want to keep the elements and not delete them.
Answer: Why don't you use just c=a[b] as this is the Python way to take the values from array a.
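The a[b] idiom from the answer in context: fancy indexing keeps exactly the listed positions, the "keep" counterpart of numpy.delete:

```python
import numpy as np

a = np.array([10, 20, 30, 40, 50])
b = [0, 2, 4]          # indices of the elements to keep

# Fancy indexing returns a new array with just the selected elements.
c = a[b]
print(c)               # [10 30 50]
```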
Title: Finding local min/max of a cubic function
Q_Id: 52,360,672 | A_Id: 52,360,778 | CreationDate: 2018-09-17T04:15:00.000 | ViewCount: 1,730
Tags: python,math,scientific-computing
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I'm looking to program a Python function that takes in 6 variables, a, b, c, d, e, f, where a, b is the interval to compute on (e.g. [1, 3], all real numbers), and c, d, e, f are the coefficients of the cubic polynomial, i.e. f(x) = cx^3 + dx^2 + ex + f, and returns the local min/max on the interval [a, b]. I have a ro...
Answer: For cubic function you can find positions of potential minumum/maximums without optimization but using differentiation: get the first and the second derivatives find zeros of the first derivative (solve quadratic equation) check the second derivative in found points - sign tells whether that point is min, max or saddl...
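A minimal sketch of the derivative-based recipe in the accepted answer; the function name is illustrative, with the a, b interval and c..f coefficients taken from the question:

```python
import numpy as np

def cubic_extrema(a, b, c, d, e, f):
    """Local min/max of f(x) = c*x^3 + d*x^2 + e*x + f on [a, b],
    found from the roots of the first derivative."""
    # f'(x) = 3c*x^2 + 2d*x + e;  f''(x) = 6c*x + 2d
    out = []
    for x in np.roots([3 * c, 2 * d, e]):
        if np.isreal(x) and a <= x.real <= b:
            x = x.real
            # Sign of the second derivative classifies the critical point.
            kind = 'min' if 6 * c * x + 2 * d > 0 else 'max'
            out.append((kind, x, c * x**3 + d * x**2 + e * x + f))
    return out

# Example: f(x) = x^3 - 3x on [-2, 2] has a max at x = -1 and a min at x = 1.
print(cubic_extrema(-2, 2, 1, 0, -3, 0))
```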
Title: How to find center points of DBSCAN clusrering in sklearn
Q_Id: 52,364,959 | A_Id: 52,438,930 | CreationDate: 2018-09-17T09:49:00.000 | ViewCount: 6,728
Tags: python-3.x,scikit-learn,dbscan
Categories: Data Science and Machine Learning | Q_Score: 3 | Users Score: 10 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: How to find the centre point of clusters of DBSCAN clustering algorithm in sklearn.
Answer: DBSCAN doesn't have centers. You can compute then yourself, but they may be outside of the cluster if it is not convex.

Title: Why will GPU usage run low in NN training?
Q_Id: 52,374,287 | A_Id: 52,387,459 | CreationDate: 2018-09-17T19:08:00.000 | ViewCount: 795
Tags: python,machine-learning,pytorch
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I'm running a NN training on my GPU with pytorch. But the GPU usage is strangely "limited" at about 50-60%. That's a waste of computing resources but I can't make it a bit higher. I'm sure that the hardware is fine because running 2 of my process at the same time,or training a simple NN (DCGAN,for instance) can both oc...
Answer: I can only guess without further research but it could be that your network is small in terms of layer-size (not number of layers) so each step of the training is not enough to occupy all the GPU resources. Or at least the ratio between the data size and the transfer speed (to the gpu memory) is bad and the GPU stays i...

Title: Y_train values for symbolicRegressor
Q_Id: 52,381,949 | A_Id: 55,842,872 | CreationDate: 2018-09-18T08:14:00.000 | ViewCount: 302
Tags: python-3.x,genetic,gplearn
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2
Question: I split my dataset in X_train, Y_train, X_test and Y_test, and then I used the symbolicRegressor... I've already convert the string values from Dataframe in float values. But by applying the symbolicRegressor I get this error: ValueError: could not convert string to float: 'd' Where 'd' is a value from Y. Since all ...
Answer: Sorry for the late replay. gplearn supports regression (numeric y) with the SymbolicRegressor estimator, and with the newly released gplearn 0.4.0 we also support binary classification (two labels in y) using the SymbolicClassifier. From the sounds of things though, you have a multi-label problem which gplearn does not...

Title: Y_train values for symbolicRegressor
Q_Id: 52,381,949 | A_Id: 52,389,432 | CreationDate: 2018-09-18T08:14:00.000 | ViewCount: 302
Tags: python-3.x,genetic,gplearn
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 2
Question: (same as the previous entry)
Answer: According to the https://gplearn.readthedocs.io/en/stable/index.html - "Symbolic regression is a machine learning technique that aims to identify an underlying mathematical expression that best describes a relationship". Pay attention to mathematical. I am not good at the topic of the question and gplearn's description...

Title: MemoryError with numpy arange
Q_Id: 52,383,129 | A_Id: 52,383,425 | CreationDate: 2018-09-18T09:18:00.000 | ViewCount: 911
Tags: python,numpy,matplotlib,out-of-memory
Categories: Data Science and Machine Learning | Q_Score: 5 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 4 | Available Count: 1
Question: I want to create an array of powers of 10 as a label for the y axis of a plot. I am using the plt.yticks() with matplotlib imported as plt but this does not matter here anyway. I have plots where as the y axis is varying from 1e3 to 1e15. Those are log plots. Matplotlib is automatically displaying those with ticks with...
Answer: In this case the function logspace from numpy is more suitable. The answer to the example is np.logspace(3,15,num=15-3+1, endpoint=True)
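The logspace call from the answer in context; because it works in exponents, it never materializes a huge linear-space arange:

```python
import numpy as np
import matplotlib.pyplot as plt

# Powers of ten from 1e3 to 1e15; num = 15 - 3 + 1 as in the answer.
ticks = np.logspace(3, 15, num=13, endpoint=True)

fig, ax = plt.subplots()
ax.set_yscale('log')
ax.plot([1, 2, 3], [1e4, 1e9, 1e14])
ax.set_yticks(ticks)
plt.show()
```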
Title: Reading large CSV file with Pandas freezes computer
Q_Id: 52,387,191 | A_Id: 52,387,276 | CreationDate: 2018-09-18T13:00:00.000 | ViewCount: 593
Tags: python,pandas,csv
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I am working with a relatively large CSV file in Python. I am using the pandas read_csv function to import it. The data is on a shared folder at work and around 25 GB. I have 2x8 GB RAM and an Intel Core i5 processor and using the juypter notebook. While loading the file the RAM Monitoring goes up to 100%. It stays at ...
Answer: You're probably loading all of the data in your RAM, thus allocating all memory available, forcing your system to rely on swap memory (writing temporary data to the disk, which is MUCH slower). It should solve the issue if you split the data into chunks that fit in your memory. Maybe 1 GB each?
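A minimal sketch of chunked reading, which is how the answer's suggestion is usually done in pandas; 'big.csv' and the 'value' column are placeholders for the real file:

```python
import pandas as pd

# Only `chunksize` rows are resident at a time, bounding memory use;
# aggregate per chunk instead of keeping everything in RAM.
total, count = 0.0, 0
for chunk in pd.read_csv('big.csv', chunksize=1_000_000):
    total += chunk['value'].sum()
    count += len(chunk)
print(total / count)
```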
Title: How do I plot for Multiple Linear Regression Model using matplotlib
Q_Id: 52,404,857 | A_Id: 65,549,840 | CreationDate: 2018-09-19T11:29:00.000 | ViewCount: 24,001
Tags: python,matplotlib,machine-learning,regression,linear-regression
Categories: Data Science and Machine Learning | Q_Score: 5 | Users Score: 2 | Score: 0.132549 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I try to Fit Multiple Linear Regression Model Y= c + a1.X1 + a2.X2 + a3.X3 + a4.X4 +a5X5 +a6X6 Had my model had only 3 variable I would have used 3D plot to plot. How can I plot this . I basically want to see how the best fit line looks like or should I plot multiple scatter plot and see the effect of individual varia...
Answer: You can use Seaborn's regplot function, and use the predicted and actual data for comparison. It is not the same as plotting a best fit line, but it shows you how well the model works. sns.regplot(x=y_test, y=y_predict, ci=None, color="b")

Title: Get a list of categories of categorical variable (Python Pandas)
Q_Id: 52,404,971 | A_Id: 59,574,232 | CreationDate: 2018-09-19T11:36:00.000 | ViewCount: 67,005
Tags: python,pandas,categorical-data
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 28 | Users Score: 3 | Score: 0.119427 | is_accepted: false | AnswerCount: 5 | Available Count: 2
Question: I have a pandas DataFrame with a column representing a categorical variable. How can I get a list of the categories? I tried .values on the column but that does not return the unique levels. Thanks!
Answer: Try executing the below code. List_Of_Categories_In_Column=list(df['Categorical Column Name'].value_counts().index)
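The value_counts().index idiom from the answer, shown next to two related idioms that are not in the answer but are standard pandas (.cat.categories for a true categorical dtype, and .unique() for any dtype); a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'fruit': pd.Categorical(['apple', 'pear', 'apple', 'plum'])})

# As in the answer, ordered by frequency:
cats = list(df['fruit'].value_counts().index)
print(cats)                            # e.g. ['apple', 'pear', 'plum']

# Declared levels of a categorical dtype:
print(list(df['fruit'].cat.categories))
# Unique values of any column:
print(df['fruit'].unique())
```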
Title: Get a list of categories of categorical variable (Python Pandas)
Q_Id: 52,404,971 | A_Id: 67,443,900 | CreationDate: 2018-09-19T11:36:00.000 | ViewCount: 67,005
Tags: python,pandas,categorical-data
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 28 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 2
Question: (same as the previous entry)
Answer: df['column name'].value_counts() # to see the total number of values for each category in a column; df['column name'].value_counts().index # to see only the category names; df['column name'].value_counts().count() # to see how many categories are in a column (only the number)

Title: RandomForestClassifiers sklearn apply(X)
Q_Id: 52,408,980 | A_Id: 52,409,256 | CreationDate: 2018-09-19T15:06:00.000 | ViewCount: 33
Tags: python-3.x,scikit-learn,random-forest,sklearn-pandas
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 2 | Score: 0.197375 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: Apply returns indices of leafs. Could anyone explain which indices does it return? Related fucntion in Matlab? Thanks
Answer: It gives you the indices of the leaf your data point is for every tree of your forest. This is what is then used to predict the class of your point.

Title: Is it possible to customize Plotly x-axis hoverinfo in Python?
Q_Id: 52,409,641 | A_Id: 52,435,565 | CreationDate: 2018-09-19T15:42:00.000 | ViewCount: 130
Tags: python,plotly
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: When I set the hoverinfo of a Plotly object to "x+text", I can modify what is shown in the hover tooltip using the hovertext attribute. I haven't found a way to modify the hover text along the x-axis though. I would like to modify it to be more than the default x-axis value at that location.
Answer: I think maybe you want to change the ticklabels of x-axis instead of the hoverinfo. The meaning of x in hoverinfo is the x-coordinated of the points. So if you truly want to revise the x, maybe you should change the x-coordinated of the points. Maybe changed into string or some special case. Of course, you could also u...

Title: Move array of doubles from Python to Java
Q_Id: 52,421,822 | A_Id: 52,422,031 | CreationDate: 2018-09-20T09:29:00.000 | ViewCount: 57
Tags: java,python,arrays
Categories: Web Development; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I have a Python script that calculates a few, quite small, NumPy arrays and I have a Java service on a separate machine that needs to use these arrays. These arrays sometimes need to be recalculated and then used afterwards by the Java service. What is the best way to dump a NumPy array to the disk and load it in Java...
Answer: Saving a file to disk to exchange the data between different applications sounds like a hacky approach to me. Depending on your structure and complexity, I would consider implementing a messaging queue (i.e. redis) or a document database (i.e. mongo or prefered alternative) with respective clients to do the data excha...

Title: Open CV to Capture Unique Objects from a Video
Q_Id: 52,422,060 | A_Id: 52,422,304 | CreationDate: 2018-09-20T09:40:00.000 | ViewCount: 810
Tags: python,opencv
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 4 | Available Count: 1
Question: I was doing a frame slicing from the OpenCV Library in Python, and I am successfully able to create frames from the video being tested on. I am doing it on a CCTV Camera installed at a parking entry gateway where the video plays 24x7, and at times the car is standing still for good number of minutes, leading to having...
Answer: Do you need to detect license plates, etc? Or just notice if something happens? For the latter, you could use a very simple approach. Take an average of say the frames of the last 30 seconds and subtract that from a current frame. If the mean absolute average of the delta image is above a threshold, that could be the c...

Title: numpy Broadcasting for user functions
Q_Id: 52,436,499 | A_Id: 52,436,519 | CreationDate: 2018-09-21T04:11:00.000 | ViewCount: 134
Tags: python,python-3.x,numpy
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 2 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1
Question: In numpy, if a is an ndarray, then, something like np.sin(a) takes sin of all the entries of ndarray. What if I need to define my own function (for a stupid example, f(x) = sin(x) if x<1 else cos(x)) with broadcasting behavior?
Answer: You could define your own function f = lambda x: sin(x) if x<1 else cos(x) and then use numpy's builtin vectorizer f_broadcasting = np.vectorize(f). This doesn't offer any speed improvements (and the additional overhead can slow down small problems), but it gives you the desired broadcasting behavior.
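The np.vectorize recipe from the accepted answer, with np.where added as a genuinely vectorized alternative for this piecewise case (not mentioned in the answer):

```python
import numpy as np

# Element-wise wrapper: convenient, but a Python-level loop underneath.
f = np.vectorize(lambda x: np.sin(x) if x < 1 else np.cos(x))

a = np.array([0.0, 0.5, 1.0, 2.0])
print(f(a))

# np.where evaluates both branches on the whole array and selects,
# which is typically much faster for large inputs.
print(np.where(a < 1, np.sin(a), np.cos(a)))
```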
Title: Solve 3D least squares in numpy/scipy
Q_Id: 52,439,564 | A_Id: 52,504,486 | CreationDate: 2018-09-21T08:22:00.000 | ViewCount: 834
Tags: python,numpy,scipy,linear-algebra,least-squares
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 3 | Available Count: 1
Question: For some integer K around 100, I have 2 * K (n, n) arrays: X_1, ..., X_K and Y_1, ..., Y_K. I would like to perform K least squares simultaneously, i.e. find the n by n matrix A minimizing the sum of squares over k: \sum_k norm(Y_k - A.dot(X_k), ord='fro') ** 2 (A must not depend on k). I am looking for an easy way to ...
Answer: In fact the answer was simple, I just needed to create bigger matrices Y and X by horizontally stacking the Y_k (to create Y) and the X_k (to create X). Then I can just solve a regular 2d least squares problem: minimize norm(Y - A.dot(X))
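A runnable sketch of the stacking idea in the accepted answer: hstack the X_k and Y_k, then solve one ordinary least-squares problem for the shared A (lstsq works on the transposed system); the synthetic data is illustrative:

```python
import numpy as np

# K systems Y_k ≈ A @ X_k sharing one A become min ||Y - A @ X||_F.
rng = np.random.default_rng(0)
n, K = 4, 100
A_true = rng.normal(size=(n, n))
X_list = [rng.normal(size=(n, n)) for _ in range(K)]
Y_list = [A_true @ Xk + 0.01 * rng.normal(size=(n, n)) for Xk in X_list]

X = np.hstack(X_list)            # shape (n, n*K)
Y = np.hstack(Y_list)            # shape (n, n*K)

# A @ X = Y  <=>  X.T @ A.T = Y.T, so solve for A.T and transpose back.
A_hat = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
print(np.allclose(A_hat, A_true, atol=0.05))
```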
Title: NVIDIA K-80 GPU does not run with Deep Learning Image Tensorflow
Q_Id: 52,440,606 | A_Id: 52,460,262 | CreationDate: 2018-09-21T09:24:00.000 | ViewCount: 985
Tags: python,tensorflow,keras,google-compute-engine
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1
Question: I have created a virtual machine in Google Compute us-east-1c region with the following specifications: n1-standard-2 (2 vCPU, 7.5 GB memory), 1 NVIDIA Tesla K80 GPU, boot disk: Deep Learning Image Tensorflow 1.10.1 m7 CUDA 9.2. When I first logged in to the machine, it asked me to install the drivers and I agreed. It...
Answer: The problem was with my use of requirements.txt. I created it on my laptop with pip freeze, uploaded it to the VM and used pip to install all the requirements. In this way my requirements.txt included tensorflow. As the result, pip installed the repository version that did not include GPU support, replacing pre-instal...

Title: Keras floods Jupyter cell output during fit (verbose=1)
Q_Id: 52,443,200 | A_Id: 52,443,366 | CreationDate: 2018-09-21T11:53:00.000 | ViewCount: 2,733
Tags: python,keras,jupyter-notebook,jupyter,tqdm
Categories: Data Science and Machine Learning | Q_Score: 7 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 3
Question: When running keras model inside Jupyter notebook with "verbose=1" option, I started getting not single line progress status updates as before, but a flood of status lines updated at batch. See attached picture. Restarting jupyter or the browser is not helping. Jupyter notebook server is: 5.6.0, keras is 2.2.2, Python...
Answer: verbose=2 should be used for interactive outputs.

Title: Keras floods Jupyter cell output during fit (verbose=1)
Q_Id: 52,443,200 | A_Id: 52,445,923 | CreationDate: 2018-09-21T11:53:00.000 | ViewCount: 2,733
Tags: python,keras,jupyter-notebook,jupyter,tqdm
Categories: Data Science and Machine Learning | Q_Score: 7 | Users Score: 1 | Score: 0.066568 | is_accepted: false | AnswerCount: 3 | Available Count: 3
Question: (same as the previous entry)
Answer: Two things I would recommend: Try restarting Jupyter Notebook server. Try different browser other than what you're using; perhaps your browser got some update and it's breaking stuff! (usually, chrome is bad with notebooks!)

Title: Keras floods Jupyter cell output during fit (verbose=1)
Q_Id: 52,443,200 | A_Id: 52,505,253 | CreationDate: 2018-09-21T11:53:00.000 | ViewCount: 2,733
Tags: python,keras,jupyter-notebook,jupyter,tqdm
Categories: Data Science and Machine Learning | Q_Score: 7 | Users Score: 4 | Score: 1.2 | is_accepted: true | AnswerCount: 3 | Available Count: 3
Question: (same as the previous entry)
Answer: After a few tests I found that the error is related to tqdm import. Tqdm was used in a piece of code which was later rewritten withoout it. Even though I was not using tqdm in this notebook, just having it imported affected the keras output. To fix it I just commented out this line: from tqdm import tqdm and everything...

Title: Autoencoder with Transfer Learning?
Q_Id: 52,454,090 | A_Id: 52,463,724 | CreationDate: 2018-09-22T06:21:00.000 | ViewCount: 1,464
Tags: python-3.x,keras,computer-vision,autoencoder,resnet
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: Is there a way I can train an autoencoder model using a pre-trained model like ResNet? I'm trying to train an autoencoder model with input as an image and output as a masked version of that image. Is it possible to use weights from a pretrained model here?
Answer: From what I know, there is no proven method to do this. I'd train the autoencoder from scratch. In theory, if you find a pre-trained CNN which does not use max pooling, you can use those weights and architecture for the encoder stage in your autoencoder. You can also extract features from a pre-trained model and conca...

Title: image multi classification with keras
Q_Id: 52,459,748 | A_Id: 52,461,491 | CreationDate: 2018-09-22T18:39:00.000 | ViewCount: 67
Tags: python,neural-network,keras,multilabel-classification
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: so if I have two labels "dogs" and "cats" and I want to create multi classification neural network. now if I provided a new random image which is not a dog or a cat, is there a way I can teach the classifier to tell me that this image is not a dog or a cat instead of saying how much percent it maybe cat or dog?
Answer: The best way to accomplish this is to create a new class in addition to dog and cat to handle images you have no interest in. So now, your labels would be ["dogs", "cats", "other"]. In your current architecture, your model is forced to predict a random image as either a dog or cat as those are the only two options it h...

Title: Efficient Intersection of pandas dataframe with remote mongodb?
Q_Id: 52,460,327 | A_Id: 52,462,006 | CreationDate: 2018-09-22T19:51:00.000 | ViewCount: 112
Tags: python,mongodb,pandas,pymongo
Categories: Database and SQL; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I have a python pandas dataframe on my local machine, and have access to a remote mongodb server that has additional data that I can query via pymongo. If my local dataframe is large, say 40k rows with 3 columns in each row, what's the most efficient way to check for the intersection of my local dataframe's features an...
Answer: As you already explained that you won't be able to insert data. So only thing is possible is first take the unique values to a list.df['column_name'].unique(). Then you can use the $in operator in .find() method and pass your list as a parameter. If it takes time or it is too much. Then break your list in equal chunks,...

Title: Available options in the spark.read.option()
Q_Id: 52,472,993 | A_Id: 64,562,864 | CreationDate: 2018-09-24T05:11:00.000 | ViewCount: 38,801
Tags: python,python-3.x,apache-spark
Categories: Data Science and Machine Learning | Q_Score: 26 | Users Score: 8 | Score: 1 | is_accepted: false | AnswerCount: 4 | Available Count: 1
Question: When I read other people's python code, like, spark.read.option("mergeSchema", "true"), it seems that the coder has already known what the parameters to use. But for a starter, is there a place to look up those available parameters? I look up the apche documents and it shows parameter undocumented. Thanks.
Answer: Annoyingly, the documentation for the option method is in the docs for the json method. The docs on that method say the options are as follows (key -- value -- description): primitivesAsString -- true/false (default false) -- infers all primitive values as a string type prefersDecimal -- true/false (default false) --...

Title: Prediction problem- Build model using 6 months data and predict on one month data?
Q_Id: 52,487,832 | A_Id: 52,604,844 | CreationDate: 2018-09-24T21:49:00.000 | ViewCount: 421
Tags: python,machine-learning,logic,data-science,prediction
Categories: Web Development; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I have a data set which contains site usage behavior of users over a period of six months. It contains data about: Number of pages viewed Number of unique cookies associated with each user Different number of OS, Browsers used Different number of cities visited Everything over here is collected on a six month time ...
Answer: Is your minimum unit of mesurement 6 months ? I hope not, but if yes, then I would sugges that you dont try to predict the next 1 month. Seasonality within a year aside, you would need daily volume measurements.. I would be very worried to build anything on monthly or even weekly numbers. In terms of modelling techniqu...

Title: Why pandas has its own datetime object Timestamp?
Q_Id: 52,492,996 | A_Id: 52,493,234 | CreationDate: 2018-09-25T07:39:00.000 | ViewCount: 416
Tags: python,pandas,timestamp
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 7 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: The documentation of pandas.Timestamp states a concept well-known to every pandas user: Timestamp is the pandas equivalent of python’s Datetime and is interchangeable with it in most cases. But I don't understand why are pandas.Timestamps needed at all. Why is, or was, it useful to have a different object than python...
Answer: You can go through Pandas documentation for the details: "pandas.Timestamp" is a replacement for python datetime.datetime for Padas usage. Timestamp is the pandas equivalent of python’s Datetime and is interchangeable with it in most cases. It’s the type used for the entries that make up a DatetimeIndex, and oth...

Title: Filled 3D numpy mask
Q_Id: 52,497,995 | A_Id: 52,499,867 | CreationDate: 2018-09-25T12:13:00.000 | ViewCount: 792
Tags: python-2.7,numpy,geometry
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I have a binary (0-1) 3D numpy array, which I plan to use for masking a 3D image. The mask at the moment consists in the area of a cylinder. The two centres of the faces are two arbitrary points, and the axis is not parallel to x, y or z. How can I fill the cylinder with a pure numpy solution?
Answer: If you have surface cells marked and there si no additional information, then scan array layer by layer to get the first marked cell (or get some surface cell if they are known). When you have marked A[z,y,x] surface cell, fill line in the last dimension (x) 1d array until new marked cell is met. Then find neighbor mar...

Title: ImportError: No module named detector_classifier
Q_Id: 52,499,573 | A_Id: 52,499,750 | CreationDate: 2018-09-25T13:36:00.000 | ViewCount: 908
Tags: python,ubuntu
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I'm working with Concept Drift, but when trying to run my code i get this error "ImportError: No module named detector_classifier" been trying to install the module with pip install, but all i get is no match found. Anyone had this problem before?
Answer: I think a little more information might help. Which python version and which pip version are you using? I just googled "detector_classifier" and couldn't find anything. What library does "detector_classifier" belong to? Without much background to go off of, I would recommended making sure you have updated pip. Dependi...

Title: Is it possible to keep all the images in one folder for tensorflow object detection API
Q_Id: 52,500,185 | A_Id: 52,628,707 | CreationDate: 2018-09-25T14:07:00.000 | ViewCount: 192
Tags: python,tensorflow
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I am new to tensorflow and it’s object detection API. In its tutorial, it’s said that the images must be separated into train/ and test/ folders. Actually I am working on a server where my entire data is kept in a folder called ‘images’ and I don’t want to either change it’s structure or create another copy of it. How...
Answer: In case you already have separate record files for train and eval (validation/test), then it's okay. You simply put the pathes of the corresponding records in tf_record_input_reader { input_path: "/path/to/record/record_name.record" } once for train_input_reader and once for eval_input_reader. In case the reco...

Title: how can i check all the values of dataframe whether have null values in them without a loop
Q_Id: 52,529,669 | A_Id: 52,529,791 | CreationDate: 2018-09-27T04:54:00.000 | ViewCount: 77
Tags: python,pandas
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: if all(data_Window['CI']!=np.nan): I have used the all() function with if so that if column CI has no NA values, then it will do some operation. But i got syntax error.
Answer: This gives you all the columns and how many null values they have. df = pd.DataFrame({0: [1, 2, None], 1: [2, 3, None]}) df.isnull().sum()
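The isnull().sum() idiom from the answer, together with two common whole-frame variants; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({0: [1, 2, None], 1: [2, 3, None]})

print(df.isnull().sum())             # null count per column
print(df.isnull().values.any())      # True if any null anywhere
print(df.isnull().any(axis=1).sum()) # number of rows containing a null
```

Note that the question's all(data_Window['CI'] != np.nan) cannot work: NaN compares unequal to everything, including itself, so the comparison is always True; isnull()/isna() is the correct test.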
Title: Understanding the output of scipy.stats.multivariate_normal
Q_Id: 52,536,206 | A_Id: 52,536,297 | CreationDate: 2018-09-27T11:42:00.000 | ViewCount: 1,012
Tags: python,scipy
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I am trying to build a multidimensional gaussian model using scipy.stats.multivariate_normal. I am trying to use the output of scipy.stats.multivariate_normal.pdf() to figure out if a test value fits reasonable well in the observed distribution. From what I understand, high values indicate a better fit to the given mod...
Answer: This is fine. The probability density function can be larger than 1 at a specific point. It's the integral than must be equal to 1. The idea that pdf < 1 is correct for discrete variables. However, for continuous ones, the pdf is not a probability. It's a value that is integrated to a probability. That is, the integra...

Title: Holoviews - network graph - change edge color
Q_Id: 52,539,639 | A_Id: 52,539,914 | CreationDate: 2018-09-27T14:44:00.000 | ViewCount: 284
Tags: python,networkx,bokeh,holoviews
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I am using holoviews and bokeh with python 3 to create an interactive network graph fro mNetworkx. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that?
Answer: Problem solved, the option to change edges color is edge_line_color and not edge_color.

Title: How can we parse DataFrame.describe()?
Q_Id: 52,544,301 | A_Id: 52,544,455 | CreationDate: 2018-09-27T19:59:00.000 | ViewCount: 110
Tags: python,pandas,dataframe,sklearn-pandas
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: How can we parse the output from DataFrame.describe()? When we print the result of DataFrame.describe() as shown in examples, it is in string format, which is why it is difficult to parse it. I understand that the print function might be converting the output into a displayable and readable form. However, it is not eas...
Answer: print always prints in string format. But if you check type(df.describe()) then you'll see that it is a dataframe. So you can treat it like one. :)

Title: In Python DataFrame how to find out number of rows that have valid values of columns
Q_Id: 52,544,340 | A_Id: 52,544,615 | CreationDate: 2018-09-27T20:02:00.000 | ViewCount: 875
Tags: python,pandas,dataframe,sklearn-pandas
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 2 | Score: 0.132549 | is_accepted: false | AnswerCount: 3 | Available Count: 1
Question: I want to find the number of rows that have certain values such as None or "" or NaN (basically empty values) in all columns of a DataFrame object. How can I do this?
Answer: Use df.isnull().sum() to get number of rows with None and NaN value. Use df.eq(value).sum() for any kind of values including empty string "".
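Combining the two idioms in the answer to count rows with any empty value (None, NaN, or ""); a minimal sketch going slightly beyond the answer's literal code:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, None, 3], 'b': ['', 'x', 'y']})

# A cell is "empty" if it is null or an empty string.
bad = df.isnull() | df.eq('')
print(bad.any(axis=1).sum())   # 2 rows contain at least one empty value
```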
Title: In which deployment mode can we "Not" add nodes to a cluster in Apache Spark 2.3.1
Q_Id: 52,544,955 | A_Id: 52,555,122 | CreationDate: 2018-09-27T20:52:00.000 | ViewCount: 45
Tags: python-2.7,apache-spark,cluster-computing,worker
Categories: System Administration and DevOps; Data Science and Machine Learning | Q_Score: 1 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: In which deployment mode can we Not add Nodes/workers to a cluster in Apache Spark 2.3.1 1.Spark Standalone 2.Mesos 3.Kubernetes 4.Yarn 5.Local Mode i have installed Apache Spark 2.3.1 on my machine and have run it in Local Mode in Local Mode can we add Nodes/workers to Apache Spark?
Answer: When master is Local, your program will run on single machine that is your edge node. To run it in distributed environment i.e. on cluster you need to select master as "Yarn". When deployment mode is "client" (default) your edge node will become the master (where driver program will run). When deployment mode is "clust...

Title: Algorithm used in Excel Fuzzy Lookup
Q_Id: 52,553,735 | A_Id: 57,938,416 | CreationDate: 2018-09-28T10:49:00.000 | ViewCount: 2,872
Tags: python,excel,levenshtein-distance,fuzzy-logic
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 2 | Score: 0.197375 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I was working on matching company names of two sets. I was trying to code it in Python with Levenstien's distance. I was having issues with short names of companies, and their trailing part like Pvt,Ltd. I have ran the same set with Excel Fuzzy lookup and was getting good results. I there a way that i can see how excel...
Answer: The following is an excerpt from Microsoft Fuzzy Lookup Add-In for Excel, Readme.docx. I hope that helps. Advanced Concepts Fuzzy Lookup technology is based upon a very simple, yet flexible measure of similarity between two records. Jaccard similarity Fuzzy Lookup uses Jaccard similarity, which is defined as the ...

Title: unable to read the mongodb data (json) in pyspark
Q_Id: 52,559,131 | A_Id: 63,618,206 | CreationDate: 2018-09-28T16:10:00.000 | ViewCount: 208
Tags: python,mongodb,hive,pymongo,pyspark-sql
Categories: Database and SQL; Data Science and Machine Learning | Q_Score: 1 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I am connecting the mongodb database via pymongo and achieved the expected result of fetching it outside the db in json format . but my task is that i need to create a hive table via pyspark , I found that mongodb provided json (RF719) which spark is not supporting .when i tried to load the data in pyspark (dataframe) ...
Answer: import json with open('D:/json/aaa.json') as f: d = f.read() da = ''.join(d.split()) print(type(da)) print(da) daa = da.replace("u'", "'") daaa = json.loads(daa) print(daaa) Satisfied with the answer. Hence closing this question.

Title: Predicting python script in Jupyter Lab
Q_Id: 52,568,135 | A_Id: 53,070,860 | CreationDate: 2018-09-29T11:57:00.000 | ViewCount: 200
Tags: python,rstudio,jupyter,jupyter-lab
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I am an R user currently learning python. In RStudio, when I type a piece of code, it automatically gives me predictions of the functions I am looking for - like an autocomplete. I would like to have something similar in Jupyter Lab. Is it possible?
Answer: Auto-completion is supported in Jupyter already. You could try type enu and then hit tab. This enumerate will prompt out automatically.

Title: how can I use Transfer Learning for LSTM?
Q_Id: 52,568,209 | A_Id: 52,700,217 | CreationDate: 2018-09-29T12:08:00.000 | ViewCount: 289
Tags: python-3.x,conv-neural-network,lstm
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: -1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1
Question: I intent to implement image captioning. Would it be possible to transfer learning for LSTM? I have used pretrained VGG16(transfer learning) to Extract features as input of the LSTM.
Answer: As I have discovered, we can't use Transfer learning on the LSTM weights. I think the causation is infra-structure of LSTM networks.

Title: How to save Numpy 4D array to CSV?
Q_Id: 52,576,617 | A_Id: 52,579,026 | CreationDate: 2018-09-30T09:32:00.000 | ViewCount: 8,759
Tags: python,csv,numpy,keras
Categories: Data Science and Machine Learning | Q_Score: 3 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I am trying to save a series of images to CSV file so they can be used as a training dataset for Alexnet Keras machine learning. The shape is (15,224,224,3). So far I am having issue doing this. I have managed to put all data into a numpy array but now I cannot save it to a file. Please help.
Answer: You can try using pickle to save the data. It is much more diverse and easy to handle compare to np.save.

Title: difference between datashader and other plotting libraries
Q_Id: 52,584,339 | A_Id: 52,630,545 | CreationDate: 2018-10-01T03:49:00.000 | ViewCount: 901
Tags: python,matplotlib,plotly,datashader
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1
Question: I want to understand the clear difference between Datashader and other graphing libraries eg plotly/matplotlib etc. I understand that in order to plot millions/billions of data points, we need datashader as other plotting libraries will hung up the browser. But what exactly is the reason which makes datashader fast an...
Answer: It may be helpful to first think of Datashader not in comparison to Matplotlib or Plotly, but in comparison to numpy.histogram2d. By default, Datashader will turn a long list of (x,y) points into a 2D histogram, just like histogram2d. Doing so only requires a simple increment of a grid cell for each new point, which i...

Title: Can we ensemble fastText along with SVM?
Q_Id: 52,585,975 | A_Id: 55,257,524 | CreationDate: 2018-10-01T06:56:00.000 | ViewCount: 780
Tags: python,machine-learning,scikit-learn,ensemble-learning
Categories: Data Science and Machine Learning | Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I'm trying to ensemble the three different models (FastText, SVM, NaiveBayes). I thought of using python to do this. I'm sure that we can ensemble NaiveBayes as well as SVM models. But, can we ensemble fastText using python ? Can anyone please suggest me regarding the same ...
Answer: In your use case you can as you're dealing with 3 models you should keep in mind that: The models have different mechanics to use the predict() method: FastText uses an internal file (serialized model with .bin extension, for example) with all embeddings and wordNGrams and you can pass raw text directly; SVM and Nai...

Title: Possible ways to embed python matplotlib into my presentation interactively
Q_Id: 52,586,506 | A_Id: 52,589,887 | CreationDate: 2018-10-01T07:38:00.000 | ViewCount: 8,150
Tags: python,matplotlib,powerpoint,jupyter,rise
Categories: Data Science and Machine Learning | Q_Score: 2 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 3 | Available Count: 1
Question: I need to present my data in various graphs. Usually what I do is to take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint. Unfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and some...
Answer: When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. If you decide to link to the picture, you would be free to change it outside of powerpoint. This opens up the possibility for the following workflow: Next to your presentation you have a Python IDE or Juypter notebook open w...

Title: Should I use the dictionary or the series to hold a bunch of dataframe?
Q_Id: 52,591,696 | A_Id: 52,592,108 | CreationDate: 2018-10-01T12:57:00.000 | ViewCount: 117
Tags: python,pandas,dataframe,panel
Categories: Python Basics and Environment; Data Science and Machine Learning | Q_Score: 3 | Users Score: 2 | Score: 0.197375 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: Suppose I have several dataframes: df1, df2, df3, etc. The label with each dataframes is A1, A2, A3 etc. I want to use this information as a whole, so that I can pass them. Three methods came into my mind: method 1 use a label list: labels=["A1", "A2", "A3"...] and a list of dataframes dfs=[df1, df2, df3...]. method 2 ...
Answer: Method 2 also works. Since Python 3.6 it remembers the order it is created too.

Title: Most efficient datatype for iteratively adding to?
Q_Id: 52,592,761 | A_Id: 52,592,997 | CreationDate: 2018-10-01T13:56:00.000 | ViewCount: 16
Tags: python-3.x,data-science
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1
Question: I have a web scraper which iteratively retrieves data from web pages, and I would like to add the attributes pulled to a pandas dataframe (eventually) for running simple statistics and analysis. The current script returns a dictionary every time a new page is scraped. I understand adding a new row or column to an exist...
Answer: In your question you say when the data collection is finished (possibly months from now). It is enormous amount of time in comparison with efficiency of python or pandas or any other programming tool I can imagine. I just created 100k random dictionaries of length 18 containing floats, saved them into text file (csv fo...

Title: How to cluster *features* based on their correlations to each other with sklearn k-means clustering
Q_Id: 52,612,841 | A_Id: 67,233,384 | CreationDate: 2018-10-02T16:45:00.000 | ViewCount: 1,423
Tags: python,machine-learning,scikit-learn,k-means,sklearn-pandas
Categories: Data Science and Machine Learning | Q_Score: 1 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1
Question: I have a pandas dataframe with rows as records (patients) and 105 columns as features.(properties of each patient) I would like to cluster, not the patients, not the rows as is customary, but the columns so I can see which features are similar or correlated to which other features. I can already calculate the correlati...
Answer: Create a new matrix by taking the correlations of all the features df.corr(), now use this new matrix as your dataset for the k-means algorithm. This will give you clusters of features which have similar correlations.
0
52,775,839
0
1
0
0
1
true
0
2018-10-03T03:22:00.000
0
1
0
Have a keyword parser function return variables into local namespace
52,619,262
1.2
python
A dict is a good way to package a variable number of named values. If the parser returns a dict, then there is a single object that can be queried to get those names and values, avoiding the problem of needing to know the number and names ahead of time. Another possibility would be to put the parser into a class, eith...
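A minimal sketch of the dict-returning parser, with a SimpleNamespace added for attribute-style access (the argument names and the mutual-exclusion rule are illustrative):

from types import SimpleNamespace

def parse_inputs(**kwargs):
    # illustrative check for mutually exclusive inputs
    if "array" in kwargs and "frame" in kwargs:
        raise ValueError("pass either 'array' or 'frame', not both")
    return kwargs                     # one queryable object, names included

def my_function(**kwargs):
    opts = SimpleNamespace(**parse_inputs(**kwargs))
    return getattr(opts, "array", None)   # attribute access, no locals() tricks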
This may be a straight-up unwise idea so I'd best explain the context. I am finding that some of my functions have multiple and sometimes mutually exclusive or interdependent keyword arguments - ie, they offer the user the ability to input a certain piece of data as (say) a numpy array or a dataframe. And then if a num...
0
1
30
0
52,642,522
0
1
0
0
1
true
1
2018-10-04T08:18:00.000
1
1
0
pip install not working for pandas + numpy
52,642,130
1.2
python,pandas,numpy
You should not use pip inside the Python interpreter. You must run pip from your system CLI, such as Windows PowerShell. Use the command below to install packages: pip install package-name — for example: pip install numpy scipy matplotlib pandas. Or you can do this one by one, with each package on its own pip install line.
I am new to Python and am trying to pip install the pandas, numpy and a few other libraries, but it won't work. My method is: go to command prompt and type python -m pip install pandas --user - I have also tried every other way like pip install etc. Each time i do it it just says syntax error. Solutions? Thank you.
0
1
721
0
52,675,950
0
0
0
0
3
true
7
2018-10-04T11:22:00.000
7
3
0
How to evaluate Word2Vec model
52,645,459
1.2
python,nlp,word2vec,embedding,word-embedding
There's no generic way to assess token-vector quality, if you're not even using real words against which other tasks (like the popular analogy-solving) can be tried. If you have a custom ultimate task, you have to devise your own repeatable scoring method. That will likely either be some subset of your actual final ta...
Hi have my own corpus and I train several Word2Vec models on it. What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures). It worth noting that the embedding is for items and not word, therefore I can't use any existing benchmarks...
0
1
6,758
0
55,913,014
0
0
0
0
3
false
7
2018-10-04T11:22:00.000
3
3
0
How to evaluate Word2Vec model
52,645,459
0.197375
python,nlp,word2vec,embedding,word-embedding
One way to evaluate the word2vec model is to develop a "ground truth" set of words. Ground truth will represent words that should ideally be closest together in vector space. For example if your corpus is related to customer service, perhaps the vectors for "dissatisfied" and "disappointed" will ideally have the sma...
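A minimal scoring sketch along those lines, assuming gensim Word2Vec models and a hand-made list of pairs that should be close (the pair names and the `models` list are illustrative placeholders):

ground_truth = [("dissatisfied", "disappointed"), ("refund", "return")]

def score(model):                     # mean similarity over the gold pairs
    return sum(model.wv.similarity(a, b)
               for a, b in ground_truth) / len(ground_truth)

best_model = max(models, key=score)   # models: your list of trained models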
Hi have my own corpus and I train several Word2Vec models on it. What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures). It worth noting that the embedding is for items and not word, therefore I can't use any existing benchmarks...
0
1
6,758
0
58,868,796
0
0
0
0
3
false
7
2018-10-04T11:22:00.000
1
3
0
How to evaluate Word2Vec model
52,645,459
0.066568
python,nlp,word2vec,embedding,word-embedding
One of the ways of evaluating the Word2Vec model would be to apply the K-Means algorithm on the features generated by the Word2Vec. Along with that create your own manual labels/ground truth representing the instances/records. You can calculate the accuracy of the model by comparing the clustered result tags with the g...
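A minimal sketch of that evaluation, assuming gensim 4.x attribute names and hand-made ground-truth tags aligned with the vocabulary order (`model` and `manual_labels` are placeholders):

from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

items = model.wv.index_to_key         # vocabulary in a fixed order
X = model.wv[items]                   # one embedding row per item

pred = KMeans(n_clusters=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(manual_labels, pred))   # compare to your tags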
Hi have my own corpus and I train several Word2Vec models on it. What is the best way to evaluate them one against each-other and choose the best one? (Not manually obviously - I am looking for various measures). It worth noting that the embedding is for items and not word, therefore I can't use any existing benchmarks...
0
1
6,758
0
52,648,696
0
1
0
0
2
false
13
2018-10-04T13:53:00.000
-1
4
0
Why doesn't a new Conda environment come with packages like numpy?
52,648,520
-0.049958
python,package,anaconda,conda
You can check the packages you have in your environment with the command: conda list. If packages are not listed, you just have to add them with the command: conda install numpy
I am going through the painful process of learning how to manage packages/ different (virtual) environments in Python/Anaconda. I was told that Anaconda is basically a python installation with all the packages I need (e.g. numpy, scipy, sci-kit learn etc). However, when I create a new environment, none of these packag...
0
1
12,607
0
52,648,738
0
1
0
0
2
false
13
2018-10-04T13:53:00.000
3
4
0
Why doesn't a new Conda environment come with packages like numpy?
52,648,520
0.148885
python,package,anaconda,conda
I don't know about "conda" environments but in general virtual environments are used to provide you a "unique" environment. This might include different packages, different environment variables etc. The whole point of making a new virtual environment is to have a separate place where you can install all the binaries ...
I am going through the painful process of learning how to manage packages/ different (virtual) environments in Python/Anaconda. I was told that Anaconda is basically a python installation with all the packages I need (e.g. numpy, scipy, sci-kit learn etc). However, when I create a new environment, none of these packag...
0
1
12,607
0
53,033,952
0
1
0
0
1
false
0
2018-10-04T16:23:00.000
0
2
0
Facing issues while installing rasa_core
52,651,437
0
python,installation,chatbot,rasa-nlu,rasa-core
I faced the same issue and was able to install rasa_core after resolving the dependencies first. Please try the following: first install Twisted (pip install Twisted), then install rasa_core (pip install rasa_core).
I am trying to install rasa_core in my python by using !pip install rasa_core; command. But i am getting an error : Below is the error : Failed building wheel for Twisted The scripts freeze_graph.exe, saved_model_cli.exe, tensorboard.exe, tflite_convert.exe, toco.exe and toco_from_protos.exe are installed in 'C:\Use...
0
1
868
0
52,665,360
0
0
0
0
1
true
0
2018-10-05T08:09:00.000
1
1
0
How to choose coefficients with scikit LinearRegression
52,661,107
1.2
python,scikit-learn,autoregressive-models
Just make a dataset X with 11 columns of lagged values [x0-97, x0-10, x0-9, ..., x0-1]. The series x0 itself will then be your target Y.
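A minimal sketch with a stand-in series, building exactly the lag columns named above:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

s = pd.Series(np.sin(np.arange(2000) * 2 * np.pi / 96))   # stand-in irradiance

lags = [97] + list(range(10, 0, -1))          # lag 97 plus lags 10..1
X = pd.DataFrame({f"lag_{k}": s.shift(k) for k in lags})
data = X.assign(y=s).dropna()                 # drop rows without full history

model = LinearRegression().fit(data.drop(columns="y"), data["y"])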
I want to find an autoregressive model on some data stored in a dataframe and I have 96 data points per day. The data is the value of solar irradiance in some region and I know it has a 1-day seasonality. I want to obtain a simple linear model using scikit LinearRegression and I want to specify which lagged data points...
0
1
38
0
52,665,472
0
1
0
0
1
true
1
2018-10-05T09:42:00.000
2
1
0
Tensor shape modification using slicing and None
52,662,727
1.2
python,tensorflow
Indeed, None adds a new dimension. You can also use tf.newaxis for this which is a bit more explicit IMHO. The new dimension is added in axis 1 because that's where it appears in the index. E.g. input[:, :, None] should result in shape (19, 4, 1, 64, 64, 3) and so on. It might get clearer if we write all the dimension...
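A quick demonstration (TF 2.x eager mode assumed):

import tensorflow as tf

x = tf.zeros((19, 4, 64, 64, 3))
print(x[:, None].shape)            # (19, 1, 4, 64, 64, 3)
print(x[:, :, None].shape)         # (19, 4, 1, 64, 64, 3)
print(x[..., tf.newaxis].shape)    # (19, 4, 64, 64, 3, 1), same idea spelled out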
I am bit puzzled by how to read and understand a simple line of code: I have a tensor input of shape (19,4,64,64,3). The line of code input[:, None] returns a tensor of shape (19, 1, 4, 64, 64, 3). How should I understand the behavior of that line? It seems that None is adding a dimension, with a size of 1. But why i...
0
1
59
0
52,670,172
0
0
0
0
1
true
4
2018-10-05T15:23:00.000
1
1
0
Tensorflow Object Detection API: How to ignore regions during training?
52,668,857
1.2
python,tensorflow,object-detection,tensorflow-serving,object-detection-api
If those regions to ignore remain static, as in, the contents of the region doesn't change throughout the dataset, then the model can be learnt to ignore those regions. If you really want the model to ignore them during training, then mask them with a constant value.
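A minimal masking sketch in NumPy, applied to frames before they reach the model (the frame size and box coordinates are illustrative):

import numpy as np

def mask_regions(image, boxes, value=0):
    # boxes: iterable of (ymin, xmin, ymax, xmax) in pixel coordinates
    out = image.copy()
    for y0, x0, y1, x1 in boxes:
        out[y0:y1, x0:x1, :] = value      # overwrite the ignore-region
    return out

frame = np.zeros((540, 960, 3), dtype=np.uint8)       # stand-in video frame
masked = mask_regions(frame, [(0, 0, 100, 960)], value=128)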
I'm using the object detection API from the models/research python repo on Ubuntu 16.04, and I wanted to fine-tune a pre-trained model (at the moment I'm interested in SSD with MobileNet or Inception backbones) on the UA-DETRAC dataset. The problem is that there are specific regions, with their bounding boxes, which ar...
0
1
811
0
59,511,989
0
1
0
0
2
false
1
2018-10-06T07:18:00.000
-1
5
0
No module named 'prompt_toolkit.formatted_text'
52,676,660
-0.039979
python,jupyter-notebook,jupyter
Check your Path environment variable! In the system variable Path, add the following entry: C:\Users\\AppData\Roaming\Python\Python37\Scripts
I am totally new to Jupyter Notebook. Currently, I am using the notebook with R and it is working well. Now, I tried to use it with Python and I receive the following error. [I 09:00:52.947 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports Traceback (most recent call last): File "/usr/lib/pytho...
0
1
13,255
0
52,676,845
0
1
0
0
2
false
1
2018-10-06T07:18:00.000
1
5
0
No module named 'prompt_toolkit.formatted_text'
52,676,660
0.039979
python,jupyter-notebook,jupyter
It's more stable to create a kernel with an Anaconda virtualenv. Follow these steps. Execute Anaconda prompt. Type conda create --name $ENVIRONMENT_NAME R -y Type conda activate $ENVIRONMENT_NAME Type python -m ipykernel install Type ipython kernel install --user --name $ENVIRONMENT_NAME Then, you'll have a new jupyt...
I am totally new to Jupyter Notebook. Currently, I am using the notebook with R and it is working well. Now, I tried to use it with Python and I receive the following error. [I 09:00:52.947 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports Traceback (most recent call last): File "/usr/lib/pytho...
0
1
13,255
0
52,677,839
0
0
0
0
3
false
0
2018-10-06T09:34:00.000
0
5
0
AttributeError("module 'pandas' has no attribute 'read_csv'")
52,677,658
0
python,pandas,attributeerror
There is a possibility that you named your own script read_csv.py or csv.py, in which case pandas itself gets confused about what to import; rename it to something else, like test_csv_read.py. Also remove any files on the path named read_csv.pyc or csv.pyc.
I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying AttributeError("module 'pandas' has no attribute 'read_csv'"). I have looked all over StackOverflow and the consensus is that there is...
0
1
9,258
0
55,653,559
0
0
0
0
3
false
0
2018-10-06T09:34:00.000
0
5
0
AttributeError("module 'pandas' has no attribute 'read_csv'")
52,677,658
0
python,pandas,attributeerror
Here is the solution: when you downloaded Python, the 32-bit build may have been installed automatically. If you don't want the 32-bit build, delete it and download the 64-bit build instead, and the problem is solved :)
I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying AttributeError("module 'pandas' has no attribute 'read_csv'"). I have looked all over StackOverflow and the consensus is that there is...
0
1
9,258
0
60,574,804
0
0
0
0
3
false
0
2018-10-06T09:34:00.000
0
5
0
AttributeError("module 'pandas' has no attribute 'read_csv'")
52,677,658
0
python,pandas,attributeerror
In my case, I had installed the module "panda" instead of "pandas". I was getting this error even though no conflicting .py files were present in the working folder. Then I recognized my mistake, installed the "pandas" package, and the problem was resolved.
I am new to Python and I have been stuck on a problem for some time now. I recently installed the module pandas and at first, it worked fine. However, for some reason it keeps saying AttributeError("module 'pandas' has no attribute 'read_csv'"). I have looked all over StackOverflow and the consensus is that there is...
0
1
9,258
0
52,695,712
0
0
0
0
1
false
0
2018-10-06T20:14:00.000
0
2
0
Is it possible to change the loss function dynamically during training?
52,682,979
0
python,tensorflow,machine-learning
You have to implement the switching logic yourself, but this is certainly possible with TensorFlow.
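One simple way in Keras is to recompile with a new loss halfway through and keep training; a minimal sketch with stand-in data (hinge loss stands in for the non-differentiable 0-1 loss, and note that recompiling resets the optimizer state):

import numpy as np
import tensorflow as tf

x = np.random.rand(256, 10).astype("float32")           # stand-in data
y = np.random.randint(0, 2, (256, 1)).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=5, verbose=0)                    # first half of training

model.compile(optimizer="adam", loss="hinge")           # swap loss, keep weights
model.fit(x, y, epochs=5, verbose=0)                    # continue training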
I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code. For example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss.
0
1
783
0
52,807,275
0
0
0
0
1
false
0
2018-10-07T05:39:00.000
0
2
0
Tensorflow: Each row in the training dataset contains 99% of the previous rows data - can I optimize it before running the training?
52,685,768
0
python,tensorflow,dataset,tensorflow-datasets
If you use Data API then you can cache the input. Also maybe TF's support for Kafka might be a help here as you could model it as a stream of data. Another approach would be to reuse some data between session calls. Then you would have to use resource variable (in the current Variable() spec it means using flag use_re...
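A minimal sketch of the caching idea with the Data API (modern TF 2.x names assumed; the row shape is a stand-in for the per-minute records):

import numpy as np
import tensorflow as tf

rows = np.random.rand(10_000, 60).astype("float32")     # stand-in minute rows
ds = (tf.data.Dataset.from_tensor_slices(rows)
        .cache()             # reuse decoded rows instead of re-reading them
        .batch(128)
        .prefetch(tf.data.AUTOTUNE))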
I am searching for a way to make my training and testing data smaller in file size. The model I want to end up with I want to train a model that predicts whether or not a crypto coin price is making and x% (0.4 or so) jump within the next 10 minutes (i.e. I want the model to answer with a Yes or No). Every minute I wi...
0
1
133
0
52,692,667
0
1
0
0
1
true
1
2018-10-07T20:28:00.000
1
1
0
Error - No module named '_pywrap_tensorflow'
52,692,622
1.2
python,tensorflow
Please downgrade Python to 3.6.x and try again. I faced a similar issue while using Python 3.7.x; once I downgraded, it worked. Make sure you adjust your Path variable accordingly. pip may also have to be reinstalled, along with its corresponding Path entry.
I have seen multiple questions for the same issue. I went through all the answers and tried all of them. I updated pip, tensorflow, python etc to the latest versions or as suggested in the answers and still I am facing this issue. Pip version 18.0, Python 3.7
0
1
45
0
52,694,329
0
0
0
0
1
true
1
2018-10-08T01:18:00.000
1
1
0
Difference between dask pivot_table and pandas pivot_table python
52,694,289
1.2
python,python-3.x,pandas,pivot-table,dask
Definitely Dask. Pandas processes everything as a monolithic block in memory and is not parallelizable, while Dask is designed to break the data frame into chunks that can be processed in parallel.
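A minimal comparison sketch; note the assumption that dask's pivot_table requires the columns field to be a categorical with known categories, which is worth checking against your dask version:

import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"k": list("abab"), "c": list("xxyy"), "v": [1, 2, 3, 4]})
print(pdf.pivot_table(index="k", columns="c", values="v", aggfunc="mean"))

ddf = dd.from_pandas(pdf, npartitions=2)
ddf["c"] = ddf["c"].astype("category").cat.as_known()   # dask precondition
print(ddf.pivot_table(index="k", columns="c", values="v",
                      aggfunc="mean").compute())        # lazy until .compute()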
It seems we can achieve same goal using pivot_table from both libraries, but which one is more efficient in performance for large dataset?
0
1
515
0
70,800,730
0
0
0
0
2
false
0
2018-10-08T13:37:00.000
0
2
0
Random Forest Multi Class Python does not improve accuracy
52,703,577
0
python,random-forest,multiclass-classification
Try tuning the parameters below. n_estimators: this is the number of trees you want to build before taking the maximum vote or average of predictions. A higher number of trees gives you better performance but makes your code slower. max_features: this is the maximum number of features Random Forest is allowed to try in ind...
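A minimal tuning sketch over those two parameters (X and y are assumed to be your household features and class labels; the grid values are illustrative):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

grid = {"n_estimators": [100, 300, 500],
        "max_features": ["sqrt", "log2", 0.3]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      grid, cv=5, scoring="accuracy")
search.fit(X, y)                        # X, y: your households and classes
print(search.best_params_, search.best_score_)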
I am making a random forest multi-classifier model. Basically there are hundred of households which have 200+ features, and based on these features I have to classify them in one of the classes {1,2,3,4,5,6}. The problem I am facing is I cannot improve the accuracy of the model how much ever I can try. I have used Ran...
0
1
118
0
70,800,413
0
0
0
0
2
false
0
2018-10-08T13:37:00.000
0
2
0
Random Forest Multi Class Python does not improve accuracy
52,703,577
0
python,random-forest,multiclass-classification
You can check if the features are on different scales. If they are, it is suggested to use some type of normalization. This step is essential for many linear-based models to perform well. You can take a quick look at the distributions of each numeric feature to decide what type of normalization to use.
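A minimal sketch of the normalization step for a linear baseline (X_train and y_train are assumed to be your already-split data):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

clf = make_pipeline(StandardScaler(),   # puts all features on one scale
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)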
I am making a random forest multi-classifier model. Basically there are hundred of households which have 200+ features, and based on these features I have to classify them in one of the classes {1,2,3,4,5,6}. The problem I am facing is I cannot improve the accuracy of the model how much ever I can try. I have used Ran...
0
1
118