Column                              Type           Range / lengths
GUI and Desktop Applications        int64          0 .. 1
A_Id                                int64          5.3k .. 72.5M
Networking and APIs                 int64          0 .. 1
Python Basics and Environment       int64          0 .. 1
Other                               int64          0 .. 1
Database and SQL                    int64          0 .. 1
Available Count                     int64          1 .. 13
is_accepted                         bool           2 classes
Q_Score                             int64          0 .. 1.72k
CreationDate                        stringlengths  23 .. 23
Users Score                         int64          -11 .. 327
AnswerCount                         int64          1 .. 31
System Administration and DevOps    int64          0 .. 1
Title                               stringlengths  15 .. 149
Q_Id                                int64          5.14k .. 60M
Score                               float64        -1 .. 1.2
Tags                                stringlengths  6 .. 90
Answer                              stringlengths  18 .. 5.54k
Question                            stringlengths  49 .. 9.42k
Web Development                     int64          0 .. 1
Data Science and Machine Learning   int64          1 .. 1
ViewCount                           int64          7 .. 3.27M
0
46,582,120
0
1
0
0
1
false
0
2017-10-05T08:24:00.000
0
1
0
Converting many string values to categories
46,581,018
0
python,pandas
I applied below command and it works: df['kategorie']=action['kategorie'].astype('category')
I have a data frame with one column full of string values. They need to be converted into categories. Due to huge amount it would be inconvenient to define categories in dictionary. Is there any other way in pandas to do that?
0
1
34
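A minimal sketch of the conversion described in the answer above; the frame and column names are hypothetical stand-ins:

```python
import pandas as pd

# hypothetical frame with a repetitive string column
df = pd.DataFrame({"kategorie": ["a", "b", "a", "c", "b"] * 1000})

# convert to a categorical dtype; categories are inferred, no dictionary needed
df["kategorie"] = df["kategorie"].astype("category")

print(df["kategorie"].dtype)           # category
print(df["kategorie"].cat.categories)  # Index(['a', 'b', 'c'], dtype='object')
```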
0
47,052,669
0
1
0
0
1
false
0
2017-10-05T10:31:00.000
0
1
0
Load portions of matrix into RAM
46,583,487
0
python,data-structures,microcontroller,micropython
Sorry, but your question contains the answer - if you need to work with 32x32 tiles, the best format is one that represents your big image as a sequence of tiles (and e.g. not as one big 256x256 image, though reading tiles out of it is also not rocket science and should be fairly trivial to code in MicroPython, tho...
I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat it as a collection of smaller matrices (e.g. 32x32) and perform the operation o...
0
1
52
0
46,584,984
0
1
0
0
1
true
0
2017-10-05T11:31:00.000
1
1
0
Can I install Tensorflow on both Python 2 and 3?
46,584,556
1.2
python,tensorflow
Yes, you can. The easy way is to install Anaconda, then create one environment with Python 2.7 and one with Python 3, and install TensorFlow in both environments.
I've installed Tensorflow using Python 3 (pip3 install). Now, since Jupyter Notebook is using Python 2, thus the python command is linked to python2.7, all the codes in Jupyter Notebook get error (ImportError: No module named tensorflow). Question: Can I install Tensorflow running side by side for both Python 2 and 3?
0
1
948
0
46,597,662
0
0
0
0
2
false
0
2017-10-05T18:46:00.000
0
2
0
Programming an interactive slackbot - python
46,592,760
0
python,csv,slack,slack-api
You can solve this using pandas. pandas is a Python data-processing framework; it can process Excel and TXT files as well as CSV. See the pandas documentation.
I have recently been working on a slackbot and I have the basic functionality down, I am able to take simple commands and make have the bot answer. But, I want to know if there is anyway to have to bot store some data given by a user, such as "@slackbot 5,4,3,2,1" and then have the bot sort it and return it like "1,2,3...
0
1
158
0
56,807,453
0
0
0
0
2
false
0
2017-10-05T18:46:00.000
0
2
0
Programming an interactive slackbot - python
46,592,760
0
python,csv,slack,slack-api
Whatever you have mentioned in your question can easily be done with a slackbot. You can develop the slackbot as a Django server. If you want the bot to store data, you can connect your Django server to any database or to any cache (e.g. Redis, Memcached). You can write the sorting logic in python and send the sorted list back to slack ...
I have recently been working on a slackbot and I have the basic functionality down, I am able to take simple commands and make have the bot answer. But, I want to know if there is anyway to have to bot store some data given by a user, such as "@slackbot 5,4,3,2,1" and then have the bot sort it and return it like "1,2,3...
0
1
158
0
46,597,146
0
0
0
0
1
true
6
2017-10-06T00:31:00.000
4
7
0
Differentiable round function in Tensorflow?
46,596,636
1.2
python,tensorflow
Rounding is a fundamentally nondifferentiable function, so you're out of luck there. The normal procedure for this kind of situation is to find a way to either use the probabilities, say by using them to calculate an expected value, or by taking the maximum probability that is output and choose that one as the network'...
So the output of my network is a list of probabilities, which I then round using tf.round() to be either 0 or 1; this is crucial for this project. I then found out that tf.round isn't differentiable, so I'm kinda lost there.. :/
0
1
6,120
0
46,600,856
0
0
0
0
1
false
0
2017-10-06T07:37:00.000
0
2
0
How can I create sheet 2 in a CSV file by using Python code?
46,600,652
0
python,csv
You can do this by using multiple CSV files - one CSV file per sheet. A comma-separated value file is a plain text format. It is only going to be able to represent flat data, such as a table (or a "sheet") When storing multiple sheets, you should use separate CSV files. You can write each one separately and import/pars...
Is there is way to create sheet 2 in same csv file by using python code
0
1
4,260
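A short sketch of the workaround suggested above, one CSV file per "sheet"; the file names and rows are made up:

```python
import csv

# CSV is a flat, plain-text format, so each "sheet" becomes its own file
sheets = {
    "sheet1.csv": [["name", "value"], ["a", 1], ["b", 2]],
    "sheet2.csv": [["name", "value"], ["c", 3]],
}

for filename, rows in sheets.items():
    with open(filename, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```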
0
46,610,485
0
0
0
1
1
true
0
2017-10-06T14:36:00.000
0
1
0
Sorting and loading data from Pandas to Redshift using to_sql
46,608,223
1.2
python,sorting,amazon-redshift,pandas-to-sql
While ingesting data into redshift, data gets distributed between slices on each node in your redshift cluster. My suggestion would be to create a sort key on a column which you need to be sorted. Once you have sort key on that column, you can run Vacuum command to get your data sorted. Sorry! I cannot be of much help...
I've built some tools that create front-end list boxes for users that reference dynamic Redshift tables. New items in the table, they appear automatically in the list. I want to put the list in alphabetical order in the database so the dynamic list boxes will show the data in that order. After downloading the list fro...
0
1
725
0
46,619,774
0
0
0
0
1
true
7
2017-10-07T11:16:00.000
4
2
0
Share memory between C/C++ and Python
46,619,531
1.2
python,c++,linux,opencv
OK, this is not exactly memory sharing in its real sense. What you want is IPC to send image data from one process to another. I suggest that you use Unix named pipes. You will have to get the raw data in a string format in C/C++, send it through the pipe or a Unix socket to Python, and there get a numpy array from the sent ...
Is there a way to share memory for an openCV image (Mat in C++ and numpy in python) between a C/C++ program and python? Multiplatform support is not needed; I'm doing it in linux. I've thought of sharing it via mmap or something similar. I have two running processes, one written in C and the other in python, and I need to share...
0
1
5,634
0
46,620,696
0
1
0
0
1
false
0
2017-10-07T13:23:00.000
1
1
0
How to use random numbers that executes a one dimensional random walk in python?
46,620,657
0.197375
python,random
1 - Start with a list initialized with 5 items (maybe None?) 2 - place the walker at index 2 3 - randomly chose a direction (-1 or + 1) 4 - move the walker in the chosen direction 5 - maybe print the space and mark the location of the walker 6 - repeat at step 3 as many times as needed
Start with a one dimensional space of length m, where m = 2 * n + 1. Take a step either to the left or to the right at random, with equal probability. Continue taking random steps until you go off one edge of the space, for which I'm using while 0 <= position < m. We have to write a program that executes the random wa...
0
1
327
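A runnable sketch of the steps listed in the answer above, using the question's while condition; n is a hypothetical half-width:

```python
import random

n = 2             # hypothetical half-width
m = 2 * n + 1     # length of the 1-D space
position = n      # place the walker in the middle (index 2 when m = 5)
steps = 0

# keep stepping left or right with equal probability until we fall off an edge
while 0 <= position < m:
    position += random.choice((-1, 1))
    steps += 1

print("walked off the edge after", steps, "steps")
```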
0
58,870,147
0
1
0
0
2
true
3
2017-10-07T15:14:00.000
1
2
0
Epochs Vs Pass Vs Iteration
46,621,774
1.2
python,machine-learning,deep-learning,neural-network,epoch
Epoch: one full round of forward propagation and backward propagation through the neural network over the dataset. Example: one round of throwing the ball into the basket, finding out the error, and coming back to change the weights (f = ma). Forward propagation: the process of initializing the mass and acceleration with rando...
What does the term epochs mean in the neural network. How does it differ from pass and iteration
0
1
971
0
53,937,484
0
1
0
0
2
false
3
2017-10-07T15:14:00.000
2
2
0
Epochs Vs Pass Vs Iteration
46,621,774
0.197375
python,machine-learning,deep-learning,neural-network,epoch
There are many neural network algorithms in unsupervised learning. As long as a cost function can be defined, "neural networks" can be used. For instance, there are autoencoders, for dimensionality reduction, or Generative Adversarial Networks (so 2 networks, one generating new samples). All these are ...
What does the term epochs mean in the neural network. How does it differ from pass and iteration
0
1
971
0
46,929,384
0
0
0
0
1
true
0
2017-10-07T20:31:00.000
0
1
0
IBM Watson Natural Language Understanding uploading multiple documents for analysis
46,624,822
1.2
python,ibm-cloud,ibm-watson,watson-nlu
NLU can be "manually" adapted to do batch analysis. But the Watson service that provides what you are asking for is Watson Discovery. It allows you to create Collections (sets of documents) that will be enriched through an internal NLU function and then queried.
I have roughly 200 documents that need to have IBM Watson NLU analysis done. Currently, processing is performed one at a time. Will NLU be able preform a batch analysis? What is the correct python code or process to batch load the files and then response results? The end goal is to grab results to analyze which docu...
1
1
445
0
46,646,311
0
0
0
0
1
true
0
2017-10-09T12:10:00.000
2
1
0
is multi-label clasification for text only
46,646,141
1.2
python,machine-learning,multilabel-classification
Of course it can be done with numbers. After all, the text itself is converted to numbers to be classified. But you should not use regression for that. It is clearly a case for classification. A regular classifier (for example, a neural network) usually has multiple outputs, one for each class. Each output returns the ...
I was working on a numeric dataset and apparently it is a multi variable output regression. I wanted to know if you can have a multi-label classification in a numeric dataset or it is strictly for text based. For Eg: Stackoverflow an categorize every text/code into multiple tags like python,flask, python2.7 ... But can...
0
1
36
1
46,680,237
0
0
0
0
2
false
0
2017-10-10T22:02:00.000
0
2
0
Using Tensorflow on smartphones
46,676,738
0
java,android,python,mobile,tensorflow
Short answer is : yes. You will be safe with python since it’s the main front end language for tensorflow. Also I agree with BHawk’s answer above.
I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google s about to launch Tensorflow Lite that is supposed to be a game ...
0
1
249
1
46,817,475
0
0
0
0
2
true
0
2017-10-10T22:02:00.000
0
2
0
Using Tensorflow on smartphones
46,676,738
1.2
java,android,python,mobile,tensorflow
In short, yes. It would be safe to learn implementing TensorFlow using python and still comfortably develop machine learning enabled mobile apps. Let me elaborate. Even with TensorFlow Lite, training the data can only happen on the server side; only the prediction, or the inference happens on the mobile device. So typi...
I've been learning a lot about the uses of Machine Learning and Google's Tensorflow. Mostly, developers use Python when developing with Tensorflow. I do realize that other languages can be used with Tensorflow as well, i.e. Java and C++. I see that Google s about to launch Tensorflow Lite that is supposed to be a game ...
0
1
249
0
46,681,839
0
0
0
0
1
false
1
2017-10-11T05:56:00.000
1
1
0
How to generate a size 1000 random number list according to Poisson distribution and with a fixed mean(size)?
46,680,795
0.197375
python,python-2.7
So in a Poisson distribution, lambda is the mean and the variance at the same time, and if you draw infinitely often you will see this is true. What you are asking for is like expecting to roll a die 10 times and get an average of exactly 3.5, since that's the expected mean. Nevertheless you could generate a list with numpy.random.po...
I want to generate a size 1000 random number list according to Poisson distribution and with a fixed mean. Since the size is fixed to 1000, so the sum is also fixed. The first idea I get is to use numpy.random.Poisson(lambda,size), but it can not set a fixed mean for the list. So I am really confused.
0
1
561
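Illustrating the numpy call the answer above points to; lambda here is an arbitrary example value, and the sample mean comes out close to lambda without being fixed, which is the answer's point:

```python
import numpy as np

lam = 4.0                                     # the distribution's mean (and variance)
samples = np.random.poisson(lam, size=1000)   # 1000 Poisson-distributed integers

print(samples.mean())   # close to 4.0, but not fixed to it
```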
0
46,719,538
0
0
0
0
1
false
0
2017-10-11T06:28:00.000
0
2
0
Natural Language Processing(syntatctic,semantic,progmatic) Analysis
46,681,209
0
python-3.x,nlp,stanford-nlp
I would suggest that you should read an introductory book on NLP to be familiar with the chain processes you are trying to achieve. You are trying to do question-answering , aren't you? If it is the case, you should read about question-answering systems. The above sentence has to be morphologically analyzed (so read ab...
My text contains text="Ravi beated Ragu" My Question will be "Who beated Ragu?" The Answer Should come "Ravi" Using NLP How to do this by natural language processing. Kindly guide me to proceed with this by syntactic,semantic and progmatic analysis using python
0
1
78
0
46,697,151
0
1
0
0
1
false
0
2017-10-11T19:55:00.000
0
1
0
How to use multinomial naive bayes for both text and non text data using python?
46,696,478
0
python,machine-learning,naivebayes
There are several ways to do that: simply concatenate the hashing vectors with the integers and train on that bigger feature vector. It will work. It would be more reasonable to do so using a different classifier, because MultinomialNB can't model the interactions between the features. But if you want nothing else but Multin...
The data consists of text parameters as well as integer parameters. The problem is to train machine with both data. Hashing Vectorizer is used for text parameters training. Thanks in advance....
0
1
207
0
46,698,554
0
0
0
0
1
false
0
2017-10-11T22:27:00.000
0
1
0
scipy optimize - View steps during procedure
46,698,519
0
python,optimization,scipy
The minimize function takes an options dict as a keyword argument. Accepted keys for this dict include disp, which should be set to True to print the progress of the minimization.
I am using the minimize function from scipy.optimize library. Is there a way to print some values during the optimization procedure? Values like the current x, objective function value, number of iterations and number of gradient evaluations. I know there are options to save these values and return them after the opti...
0
1
311
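A small sketch of the options dict mentioned above, using scipy's built-in Rosenbrock function as a stand-in objective; the callback argument is part of scipy's documented minimize interface rather than something the quoted answer mentions:

```python
from scipy.optimize import minimize, rosen

result = minimize(
    rosen,
    x0=[1.3, 0.7],
    options={"disp": True},                       # print progress of the minimization
    callback=lambda xk: print("current x:", xk),  # called once per iteration
)
```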
0
46,699,385
0
0
0
0
1
false
0
2017-10-11T22:33:00.000
0
1
0
Normalise face landmark data using python
46,698,570
0
python,image-processing
Actually I think I have figured it out; it is pretty simple maths. Here is what I am going to do: take every point and subtract the first box point values - this gives the points as if the box starts at [0, 0] - then apply the box/normalised size ratio to every point.
I am currently learning python and playing around with tensorflow. I have a bunch of images where I have obtained the landmarks (pixel points) of a person's facial features such as ears and eyes. In addition, it also provides me with a box (4 coordinates) where the face exists. My goal is to normalise all the data from...
0
1
339
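A numpy sketch of the two steps the answer above describes (shift by the box origin, then scale by the box size); the box and points are made-up values:

```python
import numpy as np

box = np.array([40.0, 60.0, 140.0, 180.0])          # hypothetical face box: x0, y0, x1, y1
points = np.array([[55.0, 80.0], [120.0, 150.0]])    # hypothetical landmark points

origin = box[:2]
size = box[2:] - box[:2]

# shift so the box starts at (0, 0), then scale into the 0..1 range
normalised = (points - origin) / size
```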
0
46,704,606
0
0
0
0
1
true
1
2017-10-12T04:03:00.000
2
1
0
Are all train samples used in fit_generator in Keras?
46,701,216
1.2
python,tensorflow,machine-learning,keras,neural-network
No, because it is a generator the model does not know the total number of training samples. Therefore, it finishes an epoch when it reaches the final step defined with the steps_per_epoch argument. In your case it will indeed train 192 samples per epoch. If you want to use all samples in your model you can shuffle the ...
I am using model.fit_generator() to train a neural network with Keras. During the fitting process I've set the steps_per_epoch to 16 (len(training samples)/batch_size). If the mini batch size is set to 12, and the total number of training samples is 195, does it mean that 3 samples won't be used in the training phase?
0
1
198
0
46,701,588
0
1
0
0
1
false
0
2017-10-12T04:29:00.000
0
1
0
How to find the number of elements of a float array in python and how to convert it to a 2-dimensional float array?
46,701,431
0
python,arrays
Length of the array: len(array). Then use two nested loops to spread all the values into a 2-D array.
I am trying to find the length of a 1-D float array and convert it into a 2-d array in python. Also when I am trying to print the elements of the float array the following error is coming:- 'float' object is not iterable
0
1
608
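A sketch of the answer above: len() for the element count, then two nested loops to spread the values into a 2-D list; the data and the column count are arbitrary:

```python
flat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # hypothetical 1-D float data
n = len(flat)                           # number of elements
cols = 3
rows = n // cols

grid = []
for r in range(rows):          # first loop: rows
    row = []
    for c in range(cols):      # second loop: columns
        row.append(flat[r * cols + c])
    grid.append(row)

print(grid)   # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```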
0
46,737,626
0
0
0
0
1
false
1
2017-10-13T19:04:00.000
0
1
0
Resizing 2D arrays to a different size (e.,g. reduction or comression)
46,736,521
0
arrays,python-3.x,compression
n-dimensional arrays can be many things, aside from being an image. One example would be a geo-spatial representation that would consolidate (roll up) whenever you zoom out and drill down whenever you zoom in. An array resizing technique should rely on the context in which such a resize takes place, and hence there is no best ans...
What is the best way to resize a 2D array (the array has a thermal data contents values between 20 to 30) from size 173X151 to size 146X121 without losing too much information. I understand it is possible to reduce the size of images with some function(images of intensity values 0 to 255) but my understanding that the...
0
1
40
0
46,747,332
0
0
0
1
1
false
3
2017-10-14T13:29:00.000
1
2
0
Set worksheet.hide_gridlines(2) to certain range of cells
46,745,120
0.099668
excel,python-2.7,xlsxwriter
As far as I know that isn't possible in Excel to hide gridlines for a range. Gridlines are either on or off for the entire worksheet. As a workaround you could turn the gridlines off and then add a border to each cell where you want them displayed. As a first step you should figure out how you would do what you want to...
I'm creating an Excel file from pandas and I'm using worksheet.hide_gridlines(2); the problem is that all gridlines are hidden in my current worksheet. I need to hide them only for a range of cells, for example A1:I80. How can I do that?
0
1
2,139
0
57,548,526
0
0
0
0
1
false
7
2017-10-14T20:28:00.000
1
3
0
Can I use Train AND Test data for Imputation?
46,749,037
0.066568
python-2.7,data-science,imputation
The philosophy behind splitting data into training and test sets is to have the opportunity of validating the model through fresh(ish) data, right? So, by using the same imputer on both train and test sets, you are somehow spoiling the test data, and this may cause overfitting. You CAN use the same approach to impute t...
Interestingly, I see a lot of different answers about this both on stackoverflow and other sites: While working on my training data set, I imputed missing values of a certain column using a decision tree model. So here's my question. Is it fair to use ALL available data (Training & Test) to make a model for imputation ...
0
1
2,913
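A sketch of the "fit on train only" approach the answer argues for, using current scikit-learn's SimpleImputer (the 2017-era Imputer class has since been replaced); the arrays are toy data:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X_train = np.array([[1.0, np.nan], [3.0, 4.0], [np.nan, 6.0]])
X_test = np.array([[np.nan, 2.0]])

imputer = SimpleImputer(strategy="mean")
X_train_filled = imputer.fit_transform(X_train)  # statistics come from the training set only
X_test_filled = imputer.transform(X_test)        # reuse them on the test set, never refit
```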
0
56,999,837
0
0
0
0
2
false
11
2017-10-15T15:18:00.000
2
4
0
What does "splitter" attribute in sklearn's DecisionTreeClassifier do?
46,756,606
0.099668
python,python-3.x,machine-learning,scikit-learn
Short ans: RandomSplitter initiates a **random split on each chosen feature**, whereas BestSplitter goes through **all possible splits on each chosen feature**. Longer explanation: This is clear when you go thru _splitter.pyx. RandomSplitter calculates improvement only on threshold that is randomly initiated (ref. li...
The sklearn DecisionTreeClassifier has a attribute called "splitter" , it is set to "best" by default, what does setting it to "best" or "random" do? I couldn't find enough information from the official documentation.
0
1
8,211
0
48,555,365
0
0
0
0
2
false
11
2017-10-15T15:18:00.000
4
4
0
What does "splitter" attribute in sklearn's DecisionTreeClassifier do?
46,756,606
0.197375
python,python-3.x,machine-learning,scikit-learn
The "Random" setting selects a feature at random, then splits it at random and calculates the gini. It repeats this a number of times, comparing all the splits and then takes the best one. This has a few advantages: It's less computation intensive than calculating the optimal split of every feature at every leaf. I...
The sklearn DecisionTreeClassifier has a attribute called "splitter" , it is set to "best" by default, what does setting it to "best" or "random" do? I couldn't find enough information from the official documentation.
0
1
8,211
0
48,411,184
0
0
0
0
1
true
0
2017-10-16T09:28:00.000
0
2
0
What is cuDNN implementation of rnn cells in Tensorflow
46,767,001
1.2
python-3.x,tensorflow,cudnn
In short: cudnnGRU and cudnnLSTM can/must be used on a GPU; normal RNN implementations cannot. So if you have tensorflow-gpu, the cuDNN implementation of RNN cells will run faster.
To create RNN cells, there are classes like GRUCell and LSTMCell which can be used later to create RNN layers. And also there are 2 other classes as CudnnGRU and CudnnLSTM which can be directly used to create RNN layers. In the documentation they say that the latter classes have cuDNN implementation. Why should I use o...
0
1
1,107
0
58,364,407
0
0
0
0
1
false
8
2017-10-16T16:41:00.000
0
3
0
ImportError: No module named 'sklearn.lda'
46,775,155
0
python,machine-learning,scikit-learn,lda
In case you are using a new version, from sklearn.qda import QDA will give an error; try from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis instead.
When I run classifier.py in the openface demos directory using: classifier.py train ./generated-embeddings/ I get the following error message: --> from sklearn.lda import LDA ModuleNotFoundError: No module named 'sklearn.lda'. I think to have correctly installed sklearn. What could be the reason for this message?
0
1
15,902
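The corresponding imports in recent scikit-learn versions, where the old lda/qda modules were folded into discriminant_analysis:

```python
# replaces the removed `from sklearn.lda import LDA` / `from sklearn.qda import QDA`
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

lda = LinearDiscriminantAnalysis()
qda = QuadraticDiscriminantAnalysis()
```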
0
46,807,104
0
0
0
0
1
true
0
2017-10-17T04:38:00.000
0
1
0
Is there non-GPU equivalent of tf_utils package?
46,782,575
1.2
python,tensorflow,deep-learning
pandas already has quite a bit of functionality that tf_utils provides. E.g. read_csv could capture what I was looking for from load_dataset. get_dummies does what convert_to_one_hot does. I could not find any direct functionality in pandas that came close to random_mini_batches but it can be achieved with sampling a...
I am using CPU based tensorflow (on non GPU platform) from python. I want to use functionality like load_dataset, random_mini_batches, convert_to_one_hot etc. from tf_utils package. However the one from neuroailab github has dependency on tensorflow-gpu. Is there any other CPU based (for non GPU platform) equivalent pa...
0
1
462
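A sketch of the pandas equivalents the answer lists; the file name and the "label" column are hypothetical:

```python
import pandas as pd

df = pd.read_csv("train.csv")                  # plays the role of load_dataset
labels_one_hot = pd.get_dummies(df["label"])   # plays the role of convert_to_one_hot
batch = df.sample(n=64)                        # one random mini-batch via sampling
```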
0
46,813,155
0
1
0
0
1
false
5
2017-10-17T12:59:00.000
4
1
0
Spyder IDE - Cancel opening large variables without restarting the entire program
46,790,782
0.664037
python-3.x,spyder
(Spyder developer here) My answers: Cancel the request to view the variable without closing the program? No, that's not possible, sorry. Set a default so Spyder will only display the first 1000 rows of very large objects such as dataframes? That's already in place. The problem is the size in memory of your datafram...
I am working with large Pandas dataframes in Spyder. Occasionally I accidentally click the large dataframes in the Variable Explorer window and Spyder will very hang for very long periods while it tries to open. The only way I have found to stop this process is to close Spyder completely and then reopen. Is it possib...
0
1
890
0
46,792,992
0
0
0
0
1
false
3
2017-10-17T14:45:00.000
2
2
0
polynomial transformation in python
46,792,940
0.197375
python,polynomial-math,coordinate-transformation
Create a burner variable, store x-tau into it, and feed that into your function
I'm trying to shift a polynomial. I'm currently using numpy.poly1d() to make a quadratic equation. example: 2x^2 + 3x +4 but I need to shift the function by tau, such that 2(x-tau)^2 + 3(x-tau) + 4 Tau is a value that will change base on some other variables in my code.
0
1
487
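A sketch of the "burner variable" idea from the answer above, evaluating the poly1d at x - tau; the coefficients and tau are example values:

```python
import numpy as np

p = np.poly1d([2, 3, 4])     # 2x^2 + 3x + 4
tau = 1.5                    # hypothetical shift

x = np.linspace(-5, 5, 11)
shifted_values = p(x - tau)  # evaluates 2(x - tau)^2 + 3(x - tau) + 4
```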
0
46,832,946
0
0
0
0
1
true
1
2017-10-18T18:37:00.000
1
1
0
Bokeh Plots Axis Value don't show completely
46,817,031
1.2
python-3.x,bokeh
This is a known bug with current versions (around 0.12.10) for now the best workaround is to increase plot.min_border (or p.min_border_left, etc) to be able to accommodate whatever the longest label you expect is. Or to rotate the labels to be parallel to the axis so that they always take up the same space, e.g. p.yaxi...
I have just started exploring bokeh and here is a small issue I am stuck with. This is in regards with live graphs. The problem is with the axis values. Initially if I start with say 10, till 90 it shows correct values but while printing 100, it only show 10 and the last zero(0) is hidden. It's not visible. That is wh...
0
1
601
0
46,823,775
0
0
0
0
1
false
1
2017-10-19T05:44:00.000
0
1
0
Error when trying to pull row based on index value in the dataframe
46,823,445
0
python,dataframe
I think the way you are passing your argument is wrong; try passing it in the same pattern as it appears in the csv, like ["1/10/2011"], and it should work. Good luck :)
I'am reading a data off a csv file. The columns are as below: Date , Buy, Sell, Price 1/10/2011, 1 , 5, 500 1/15/2011, 4, 2, 500 When I tried to pull data based on index like df["2011-01-10"], I got an error KeyError: '2011-01-10' Anyone know what this is might be the case? Thanks,
0
1
15
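An alternative to the quoted answer, sketched under the assumption that the CSV looks like the sample in the question: parse the Date column and use it as the index, after which ISO-style lookups like "2011-01-10" work. The file name is hypothetical:

```python
import pandas as pd

df = pd.read_csv("data.csv", parse_dates=["Date"], index_col="Date")
row = df.loc["2011-01-10"]   # works because the index is now a DatetimeIndex
```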
0
46,828,277
0
0
0
0
1
false
0
2017-10-19T10:44:00.000
0
2
0
What classification algorithm should I use for document classification with this variables?
46,828,118
0
python,machine-learning,svm,naivebayes,document-classification
Because of the continuous score, which i assume is your label, it's a regression problem. SVMs are more common for classification problems. There are lots of possible algorithms out there. Logistic Regression would be pretty common to solve something like this. Edit Now that you edited your post your problem became a ...
I'm trying to classify pages, in particular search for a page, in documents based on bag of words, page layout, contain tables or not, has bold titles, etc. With this premise I have created a pandas.DataFrame like this, for each document: page totalCharCount matchesOfWordX matchesOfWordY hasFeaturesX ...
0
1
258
0
61,232,784
0
0
0
0
1
true
3
2017-10-19T19:27:00.000
1
1
0
pandas: create a caption with to_latex?
46,837,459
1.2
python,pandas,latex
As of version 1.0.0, released on 29 January 2020, to_latex accepts caption and label arguments.
I am exporting a table from a pandas script into a .tex and would like to add a caption. with open('Table1.tex','w') as tf: tf.write(df.to_latex(longtable=True)) (I invoke the longtable argument to span the long table over multiple pages.) The Table1.tex file gets imported into a bigger LaTeX document via the \impo...
0
1
1,703
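A minimal sketch of the caption/label arguments mentioned above (pandas 1.0 or newer); the frame, caption and label are placeholders:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

with open("Table1.tex", "w") as tf:
    tf.write(df.to_latex(caption="My caption", label="tab:table1"))
```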
0
46,842,124
0
0
0
0
1
false
0
2017-10-20T01:26:00.000
0
1
0
Data Periodicity - How to normalize?
46,841,117
0
python,pandas,periodicity
First, you need to define what output you need, then deduce how to treat the input to get the desired output. Regarding the daily data for the first 10 years, one possible option is to keep only one day per week. Sub-sampling does not always mean losing information, and does not always change the final result. It ...
I have a data set which contains 12 years of weather data. For first 10 years, the data was recorded per day. For last two years, it is now being recorded per week. I want to use this data in Python Pandas for analysis but I am little lost on how to normalize this for use. My thoughts Convert first 10 years data also ...
0
1
205
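One way to implement the "one value per week" option the answer mentions, using pandas resampling on a made-up daily series:

```python
import numpy as np
import pandas as pd

daily = pd.Series(
    np.random.rand(3650),
    index=pd.date_range("2005-01-01", periods=3650, freq="D"),
)

weekly = daily.resample("W").mean()   # or .last() to keep a single observation per week
```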
0
46,845,150
0
0
0
0
1
true
0
2017-10-20T03:02:00.000
1
1
0
Machine Learning, What are the common techniques for feature engineering and presenting the model?
46,841,795
1.2
python,machine-learning,visualization
Feature engineering is more of art than technique. That might require domain knowledge or you could try adding, subtracting, dividing and multiplying different columns to make features out of it and check if it adds value to the model. If you are using Linear Regression then the adjusted R-squared value must increase o...
I am having a ML language identification project (Python) that requires a multi-class classification model with high dimension feature input. Currently, all I can do to improve accuracy is through trail-and-error. Mindlessly combining available feature extraction algorithms and available ML models and see if I get luc...
0
1
211
0
61,983,790
0
0
0
0
1
false
8
2017-10-21T15:35:00.000
1
4
0
Python add audio to video opencv
46,864,915
0.049958
python-3.x,opencv,ffmpeg,opencv-python
You can use pygame for audio. You need to initialize the pygame.mixer module, and in the loop add pygame.mixer.music.play(). But for that you will need to choose an audio file as well. However, I have found a better idea! You can use the webbrowser module for playing videos (and because it would play in the browser, you can hear sounds...
I use python cv2 module to join jpg frames into video, but I can't add audio to it. Is it possible to add audio to video in python without ffmpeg? P.S. Sorry for my poor English
0
1
19,102
0
46,887,660
0
0
0
0
1
false
0
2017-10-23T10:08:00.000
1
2
0
Tensorflow share weights across input placeholder
46,886,770
0.099668
python,tensorflow,neural-network
A simple option would be to replicate W. If the original W is, lets say p X q, you can just do l = tf.matmul(X, tf.tile(W, (k, 1))).
I am trying to build a neural network with shared input weights. Given pk inputs in the form X=[x_1, ..., x_p, v_1,...,v_p,z1,...,z_p,...] and a weight matrix w of shape (p, layer1_size), I want the the first layer to be defined as sum(w, x_.) + sum(w, v_.) + .... In other words the input and the first layer shoul...
0
1
547
0
58,123,095
0
0
0
0
1
false
8
2017-10-23T18:05:00.000
1
3
0
Thicken a one pixel line
46,895,772
0.066568
python,image,numpy,opencv,image-processing
I'd take a look at morphological operations. Dilation sounds closest to what you want. You might need to work on a subregion with your line if you don't want to dilate the rest of the image.
I'm using OpenCV to do some image processing on Python. I'm trying to overlay an outline on an image where the outline was made from a mask. I'm using cv2.Canny() to get the outline of the mask, then changing that to a color using cv2.cvtColor() then finally converting that edge to cyan using outline[np.where((outline ...
0
1
7,385
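A sketch of the morphological dilation the answer suggests, applied to a synthetic one-pixel line:

```python
import cv2
import numpy as np

edges = np.zeros((100, 100), dtype=np.uint8)
edges[50, 10:90] = 255                             # a one-pixel-wide horizontal line

kernel = np.ones((3, 3), np.uint8)
thick = cv2.dilate(edges, kernel, iterations=1)    # the line is now about 3 pixels wide
```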
0
46,911,651
0
0
0
0
2
false
0
2017-10-24T12:53:00.000
0
2
0
orb opencv variable inputs
46,911,160
0
python,opencv,threshold,orb
The python docstring of ORB_create actually contains information about the parameter nfeatures, which is the maximum number of features to return. Could that solve your problem?
I am new to opencv and am trying to extract keypoints of a gesture image by ORB algorithm in python interface. The image input is binary and has many curvatures. So ORB gives too many points as keypoints (which are actually not). I am trying to increase the threshold of ORB algorithm so that the unnesessary points dont...
0
1
258
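Illustrating the nfeatures parameter mentioned above on a synthetic image; the cap of 200 keypoints is an arbitrary example:

```python
import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 60, 255, 2)     # some structure to detect keypoints on

orb = cv2.ORB_create(nfeatures=200)         # cap the number of returned keypoints
keypoints = orb.detect(img, None)
print(len(keypoints))
```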
0
47,732,243
0
0
0
0
2
false
0
2017-10-24T12:53:00.000
0
2
0
orb opencv variable inputs
46,911,160
0
python,opencv,threshold,orb
After looking at the ORB() function in Opencv C++ description, I realized that the input parameters can be passed into function in Python as nfeatures=200,mask=img etc. (not sure about C++ though).
I am new to opencv and am trying to extract keypoints of a gesture image by ORB algorithm in python interface. The image input is binary and has many curvatures. So ORB gives too many points as keypoints (which are actually not). I am trying to increase the threshold of ORB algorithm so that the unnesessary points dont...
0
1
258
0
46,918,492
0
0
0
0
1
true
1
2017-10-24T17:26:00.000
1
1
0
Keras. Correct way to give a (x,y) plot + features as input to a NN
46,916,554
1.2
python-3.x,tensorflow,keras
If x coordinates for all plots are same you could (and in fact should) ignore it. Because in this case this data do not introduce any additional information. Their use will only lead to a more complex neural network, worse convergence and as result to increasing of training time and performance degradation. About secon...
Currently I'm trying to make Keras binary classify a set of (x,y) plots. As a newbie, I can't figure out the proper way to give a correct input, since I've got these plots with app 3400 pairs each one and a set of 8 aditional features (local minimae locations) for every plot. What I tried is to give keras a 3400 + 3400...
0
1
128
0
46,919,591
0
0
0
0
2
false
0
2017-10-24T20:20:00.000
0
2
0
How to use Pandas to find the strongest month of sale for a product
46,919,360
0
python,pandas
Well, I am not sure how your data looks, so I am not sure if the answer will help, but from what you said you are trying to find the month with the highest sales. So for a given product you will probably want to use a pandas groupby on the month, and you will have a DataFrame with every month grouped. Imagine a D...
I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest? I am not looking for a complete solution but rather some ideas, how to approach this problem. I already ...
0
1
324
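A groupby sketch along the lines of the answer above, on made-up timestamped demand data for a single product:

```python
import numpy as np
import pandas as pd

sales = pd.DataFrame({
    "timestamp": pd.date_range("2017-01-01", periods=365, freq="D"),
    "demand": np.random.randint(0, 100, 365),
})

by_month = sales.groupby(sales["timestamp"].dt.month)["demand"].sum()
strongest_month = by_month.idxmax()   # month number with the highest total demand
```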
0
46,919,474
0
0
0
0
2
false
0
2017-10-24T20:20:00.000
0
2
0
How to use Pandas to find the strongest month of sale for a product
46,919,360
0
python,pandas
I don't have 50 reputation to add a comment, hence the answer section. Some insight into your required solution would be great, because your requirement is not clear to me. BTW, coming to the idea: if you can split and load the time series data as timestamp and demand, then you can easily do it using regula...
I am new to python pandas, and I am trying to find the strongest month within a given series of timestamped sales data. The question to answer for n products is: when is the demand for the given product the highest? I am not looking for a complete solution but rather some ideas, how to approach this problem. I already ...
0
1
324
0
47,045,367
0
0
0
0
1
false
18
2017-10-25T07:49:00.000
0
4
0
Getting around tf.argmax which is not differentiable
46,926,809
0
python,tensorflow
tf.argmax is not differentiable because it returns an integer index. tf.reduce_max and tf.maximum are differentiable
I've written a custom loss function for my neural network but it can't compute any gradients. I thinks it is because I need the index of the highest value and are therefore using argmax to get this index. As argmax is not differentiable I to get around this but I don't know how it is possible. Can anyone help?
0
1
10,784
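A TF 2-style check (not from the original, TF 1-era thread) showing that a gradient flows through tf.reduce_max, while tf.argmax returns a non-differentiable integer index:

```python
import tensorflow as tf

probs = tf.constant([[0.1, 0.7, 0.2]])
with tf.GradientTape() as tape:
    tape.watch(probs)
    best = tf.reduce_max(probs, axis=-1)   # differentiable

print(tape.gradient(best, probs))   # gradient goes to the winning entry only
```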
0
48,502,090
0
0
0
0
1
false
0
2017-10-25T09:46:00.000
1
1
0
Running Python Tensorflow on CPU and GPU in parallel
46,929,145
0.197375
python,tensorflow,gpu,cpu
Do any of your networks share operators? E.g. they use variables with the same name in the same variable_scope which is set to variable_scope(reuse=True) Then multiple nets will try to reuse the same underlying Tensor structures. Also check it tf.ConfigProto.allow_soft_placement is set to True or False in your tf.Ses...
I need to train a very large number of Neural Nets using Tensorflow with Python. My neural nets (MLP) are ranging from very small ones (~ 2 Hidden Layers with ~30 Neurons each) to large ones (3-4 Layers with >500 neurons each). I am able to run all of them sequencially on my GPU, which is fine. But my CPU is almost idl...
0
1
1,450
0
46,945,235
0
1
0
0
2
false
0
2017-10-26T02:53:00.000
0
3
0
If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value
46,945,113
0
python,sorting
You can transform the items in a using a key. That key is a function of each element of a. Try this: a = sorted(a, key=lambda i: b[i]) Note that if any value in a is outside the range of b, this would fail and raise an IndexError: list index out of range. Based on your description, however, you want the list to be sor...
Suppose there is a list a[i] which stores index value v, v is the index value of another list b[v]. I want to according to the values of list b to sort the list a. For example a=[0,2,3,1] b=[7,10,8,6] I want the list a become a=[1,2,0,3], is there some concise way to sort list a?
0
1
48
0
46,945,183
0
1
0
0
2
false
0
2017-10-26T02:53:00.000
-1
3
0
If the value of list 1 is the index of list 2, how can I sort list 1 according to the list 2 value
46,945,113
-0.066568
python,sorting
A simple solution would be to sort list b first, then get the indexes of list b after sorting, and finally take the values of list a in the order of the b indexes obtained after sorting.
Suppose there is a list a[i] which stores index value v, v is the index value of another list b[v]. I want to according to the values of list b to sort the list a. For example a=[0,2,3,1] b=[7,10,8,6] I want the list a become a=[1,2,0,3], is there some concise way to sort list a?
0
1
48
0
46,961,237
0
0
0
0
1
false
0
2017-10-26T18:10:00.000
-1
1
0
One hot encoding and its combination with DecisionTreeClassifier
46,961,091
-0.197375
python,pandas,scikit-learn,decision-tree,one-hot-encoding
The two options you are describing do two very different things. If you choose to binarize (one-hot encode) the values of the variable, there is no order to them. The decision tree at each split, considers a binary split on each of the new binary variables and chooses the most informative one. So yes, each binary feat...
So my understanding is that you perform one hot encoding to convert categorical features as integers to fit them to scikit learn machine learning classifier. So let's say we have two choices a. Splitting all the features into one hot encoded features (if A is say a categorical features that takes values 'a', 'b' and 'c...
0
1
325
0
46,979,942
0
0
0
0
1
false
0
2017-10-26T23:13:00.000
0
2
0
Python - How can I find difference between two rows of same column using loop in CSV file?
46,965,192
0
python,python-3.x,csv
import csv

# Files to load (Remember to change these)
file_to_load = "raw_data/budget_data_2.csv"

# Read the csv and convert it into a list of dictionaries
with open(file_to_load) as revenue_data:
    reader = csv.reader(revenue_data)
    # use of next to skip first title row in csv file
    next(reader)
    reven...
Date    Revenue
9-Jan   $943,690.00
9-Feb   $1,062,565.00
9-Mar   $210,079.00
9-Apr   -$735,286.00
9-May   $842,933.00
9-Jun   $358,691.00
9-Jul   $914,953.00
9-Aug   $723,427.00
9-Sep   -$837,468.00
9-Oct   -$146,929.00
9-Nov   $831,730.00
9-Dec   $917,752.00
10-Jan  $800,038.00
10-Feb  $1,117,103.00
10-Mar  ...
0
1
5,951
0
46,967,516
0
0
0
0
1
false
0
2017-10-27T04:01:00.000
0
2
0
python sklearn accuracy score for two different list
46,967,312
0
python,numpy,scikit-learn
You have to convert the array to a list to make it work. This should do it for you: accuracy_score(y_test.tolist(), labs)
I have two lists y_test = array('B', [1, 2, 3, 4, 5]) and labs = [1, 2, 3, 4, 5] In sklearn, when i do print accuracy_score(y_test,labs), i get error ValueError: Expected array-like (array or non-string sequence), got array('B', [1, 2, 3, 4, 5]). I tried to compare it using print accuracy_score(y_test['B'],labs) but...
0
1
2,073
0
47,003,117
0
0
0
0
1
false
1
2017-10-28T12:59:00.000
0
1
0
Is the loss value computed by keras for 2D CNN regression by keras point wise?
46,989,998
0
python,keras
The way MSE is defined in Keras makes it compute an average per-pixel error. So you can simply read the loss value as an average pixel error.
I'm using keras for CNN on 2D images for regression with mean squared error as the loss function. The loss values are of the range 100. To know average error at each pixel, should I divide it by total number of pixels? Or the loss values displayed are for pixels?
0
1
246
0
46,998,392
0
0
0
0
1
false
0
2017-10-29T08:29:00.000
1
1
0
Logistic Regression- Working with categorical variable in Python?
46,998,234
0.197375
python,pandas,regression,statsmodels
You can apply grouping and then do logistic regression on each group. Or you treat it as multilabel classifier and do "Softmax regression".
I have a dataset that includes 7 different covariates and an output variable, the 'success rate'. I'm trying to find the important factors that predict the success rate. One of the covariates in my dataset is a categorical variable that takes on 700 values (0- 700), each representing the ID of the district they're fro...
0
1
755
0
46,999,668
0
0
0
0
1
false
0
2017-10-29T11:24:00.000
-1
2
0
Random element in fitness function genetic algorithm
46,999,584
-0.099668
python,neural-network,artificial-intelligence,genetic-algorithm
Normally you use a seed for genetic algorithms, and it should be fixed. It will then always generate the same "random" children sequentially, which makes your approach reproducible; the genetic algorithm is in that sense pseudo-random. That is the state of the art for running genetic algorithms.
So I am using a genetic algorithm to train a feedforward neural network, tasked with recognizing a function given to the genetic algorithm. I.e x = x**2 or something more complicated obviously. I realized I am using random inputs in my fitness function, which causes the fitness to be somewhat random for a member of the...
0
1
495
0
47,006,813
0
0
0
0
1
false
1
2017-10-30T00:25:00.000
0
1
0
Catboost tuning order?
47,006,642
0
python,machine-learning,cross-validation,hyperparameters,catboost
You have essentially answered your own question already. For any variable that depends on something else x, you must first define x. One thing to keep in mind is that you can define a function before the variables you need to pass into it, since it is only when you call the function that you need the input variables; defining the f...
So with Catboost you have parameters to tune and also iterations to tune. So for iterations you can tune using cross validation with the overfit detector turned on. And for the rest of the parameters you can use Bayesian/Hyperopt/RandomSearch/GridSearch. My question is which order to tune Catboost in. Should I tune the...
0
1
1,861
0
47,014,211
0
0
0
0
1
false
0
2017-10-30T11:17:00.000
0
1
0
One-hot vector to softmax-like distribution in tensorflow
47,013,937
0
python,tensorflow
A solution could be to keep your one-hot vector ;). Another one, more general, is to make a random positive vector, then compute the difference between the highest score d and the score of your true class, then add a random number between d and +infinity to the true class's score, then normalize to get a valid distrib...
Is there a way in tensorflow to transform a one-hot vector into a softmax-like distribution? For example, I have the following one-hot vector: [0 0 0 0 1 0] I want to have a vector with probabilities where the one value is the most likely number, like: [0.1 0.1 0.1 0.1 0.5 0.1] This vector should always be random, ...
0
1
359
0
47,031,545
0
0
0
0
1
false
1
2017-10-30T23:49:00.000
2
1
0
Is there a heuristic for homogenizing image dimensions before using them to train neural net?
47,025,896
0.379949
python-2.7,image-processing,machine-learning,keras,conv-neural-network
I don't think there is a standard approach on this. In machine learning, in many cases we have to try and see. If I were you, if I had to build a custom neural network, I would start with mean image size and then I would gradually increase the size until reaching optimum score. If you are using a pretrained neural net...
I am training a neural net on a set of images with heterogeneous dimensions. Of course, they all have to have the same dimensions to be fed to the NN, and it is simple enough to use scipy.misc.imresize() for this. But, how should I choose width and height? My first instinct was to plot histograms of both and eyeball va...
0
1
44
0
47,037,520
0
1
0
0
1
false
1
2017-10-31T13:53:00.000
1
6
0
Using large index in python (numpy or lists)
47,037,150
0.033321
python,numpy
I can propose notation such as [5*10**5:1*10**6], but it is not as clear as 5e5 and 1e6, and it gets even worse in a case like 3.5e6 = 35*10**5.
I frequently need to enter large integers for indexing and creating numpy arrays, such as 3500000 or 250000. Normally I'd enter these using scientific notation, 3.5e6 or .25e6 or such. This is quicker, and much less likely to have errors. Unfortunately, python expects integer datatypes for indexing. The obvious solu...
0
1
330
0
47,038,338
0
1
0
0
1
false
0
2017-10-31T14:42:00.000
0
5
0
Save data as a *.dat file?
47,038,101
0
python,database,save
Correct me if I'm wrong, but opening, writing to, and subsequently closing a file should count as "saving" it. You can test this yourself by running your import script and comparing the last modified dates.
I am writing a program in Python which should import *.dat files, subtract a specific value from certain columns and subsequently save the file in *.dat format in a different directory. My current tactic is to load the datafiles in a numpy array, perform the calculation and then save it. I am stuck with the saving par...
0
1
41,290
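The quoted answer only notes that writing a file is saving it; one common way to do that for a numpy array, not mentioned in the answer, is np.savetxt. The array and file name here are placeholders:

```python
import numpy as np

data = np.array([[1.0, 10.0], [2.0, 12.5]])   # stand-in for the loaded *.dat contents
data[:, 1] -= 5.0                             # the subtraction step from the question

np.savetxt("result.dat", data)                # opening, writing and closing = saving
```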
0
47,042,891
0
0
0
1
1
false
0
2017-10-31T18:53:00.000
0
1
0
How to prevent charts or tables to disappear when I re-open Jupyter Notebook?
47,042,689
0
python,ipython,jupyter-notebook,ipython-notebook
Are you explicitly saving your notebook before you re-open it? A Jupyter notebook is really just a large json object, eventually rendered as a fancy html object. If you save the notebook, illustrations and diagrams should be saved as well. If that doesn't do the trick, try putting the one-liner "data" in a different ce...
I use Pandas with Jupyter notebook a lot. After I ingest a table in from using pandas.read_sql, I would preview it by doing the following: data = pandas.read_sql("""blah""") data One problem that I have been running into is that all my preview tables will disappear if I reopen my .ipynb Is there a way to prevent that ...
0
1
71
0
55,322,568
0
1
0
0
4
false
5
2017-10-31T19:44:00.000
1
5
0
Jupyter Notebook: no module named pandas
47,043,407
0.039979
python,python-3.x,pandas,jupyter-notebook
Try this for Python 3: sudo pip3 install pandas
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent info...
0
1
7,502
0
47,049,051
0
1
0
0
4
false
5
2017-10-31T19:44:00.000
4
5
0
Jupyter Notebook: no module named pandas
47,043,407
0.158649
python,python-3.x,pandas,jupyter-notebook
You can try: which conda and which python to see the exact location where conda and python was installed and which was launched. And try using the absolute path of conda to launch jupyter. For example, /opt/conda/bin/jupyter notebook
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent info...
0
1
7,502
0
72,239,426
0
1
0
0
4
false
5
2017-10-31T19:44:00.000
0
5
0
Jupyter Notebook: no module named pandas
47,043,407
0
python,python-3.x,pandas,jupyter-notebook
The default kernel in the Jupyter notebook points to a Python that is different from the Python used inside the terminal. You can check this using which python. So the packages installed by conda live in a different place compared to the Python that is used by the Jupyter notebook by default. To fix the issue, both need to be the sam...
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent info...
0
1
7,502
0
64,191,121
0
1
0
0
4
false
5
2017-10-31T19:44:00.000
0
5
0
Jupyter Notebook: no module named pandas
47,043,407
0
python,python-3.x,pandas,jupyter-notebook
It seems that with Homebrew installs, the package dependencies of Homebrew formulas are not handled well by Homebrew - mostly path issues, as installs end up in different locations vs pip3. I had also tried installing pandas through the notebook with !pip3, but I got errors that it was already satisfied, meaning it was already installed j...
I've searched through other questions but have not found anything that has helped (most just suggest you do install pandas with conda or pip). In my jupyter notebook I'm trying to import pandas (import pandas as pd) but I'm getting the following error: ModuleNotFoundError: No module named 'pandas' Some pertinent info...
0
1
7,502
0
47,129,061
0
1
0
0
1
false
3
2017-11-01T09:01:00.000
1
2
0
unable to read stata .dta file in python
47,051,326
0.099668
python,pandas,stata
Just use the read_table() of Pandas then make sure to include delim_whitespace=True and header=None.
I am trying to read a Stata (.dta) file in Python with pandas.read_stata, But I'm getting this error: ValueError: Version of given Stata file is not 104, 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), or 118 (Stata 14) Please advise.
0
1
3,485
0
47,236,204
0
0
0
0
1
false
0
2017-11-02T01:47:00.000
0
2
0
why model is giving high accuracy of 84% but very low AUC 13%?
47,066,314
0
python,machine-learning,random-forest
No, your model is not fine. In your dataset around 88% records belong to "Label 0", which makes your model bias to "Label 0". Thus, even though your AUC is low, it shows 84% accuracy as most of the data belongs to "Label 0". You can undersample records belong to "Label 0" or oversample records belong to "Label 1" to m...
I have built model which gives me 84% accuracy for random forest and support vector machine but giving very low auc of 13% only. I am building this in python and I am new to machine learning and data science. I am predicting 0 and 1 labels on dataset. My overall dataset is having records of 30744. Label 1 - 6930 Label ...
0
1
1,723
0
50,312,611
0
0
0
0
1
false
0
2017-11-02T07:20:00.000
0
1
0
Using bokeh plotting with kafka streaming
47,069,678
0
java,python,apache-kafka,bokeh,apache-kafka-streams
Try kafka-python. You can set up a simple consumer to read the data from your cluster.
Here is a problem I am stuck with presently. Recently I have been exploring bokeh for plotting and kafka for streaming. And I thought of making a sample live dashboard using both of them. But the problem is I use bokeh with python and kafka stream api's with Java. Is there a way to use them together by any chance. The ...
1
1
824
0
47,099,951
0
0
0
0
1
false
1
2017-11-02T17:20:00.000
0
1
0
Looking to cluster short descriptions of reports. Should I use Word2Vec or Doc2Vec
47,081,149
0
python,machine-learning,nlp,word2vec,doc2vec
They're very similar, so just as with a single approach, you'd try tuning parameters to improve results in some rigorous manner, you should try them both, and compare the results. Your dataset sounds tiny compared to what either needs to induce good vectors – Word2Vec is best trained on corpuses of many millions to bi...
So, I have close to 2000 reports and each report has an associated short description of the problem. My goal is to cluster all of these so that we can find distinct trends within these reports. One of the features I'd like to use some sort of contextual text vector. Now, I've used Word2Vec and think this would be a go...
0
1
380
0
51,358,087
0
0
0
0
1
false
3
2017-11-03T11:28:00.000
0
2
0
Python : Halton and Hammersley quasi random sequences
47,094,705
0
python,numpy,scipy
Most library methods offering low-discrepancy sequences for arbitrary dimensions won't include arguments that allow you to define arbitrary intervals for each of the separate dimensions/components. However, in virtually all of these cases, you can adapt the existing method to suit your requirements with the additio...
I am trying to construct Hammersley and Halton quasi random sequences. I have for example three variables x1, x2 and x3. They all have integer values. x1 has a range from 2-4, x2 from 2-4 and x3 from 1-7. Is there any python package which can create those sequences? I saw that there are some procject like sobol or SALi...
0
1
1,801
0
56,950,297
0
0
0
0
1
false
8
2017-11-04T02:37:00.000
3
2
0
Why does get_weights return an empty list?
47,106,830
0.291313
python,machine-learning,tensorflow,keras
Maybe you are asking for weights before they are created. Weights are created when the Model is first called on inputs or build() is called with an input_shape. For example, if you load weights from checkpoint but you don't give an input_shape to the model, then get_weights() will return an empty list.
I am teaching myself data science and something peculiar has caught my eyes. In a sample DNN tutorial I was working on, I found that the Keras layer.get_weights() function returned empty list for my variables. I've successfully cross validated and used model.fit() function to compute the recall scores. But as I'm tryin...
0
1
2,799
0
47,526,695
0
1
0
0
2
false
0
2017-11-04T09:36:00.000
0
2
0
how can i make anaconda spyder code completion work again after installing tensorflow
47,109,343
0
python,tensorflow,anaconda,spyder,code-completion
For now I use a temporary alternative: I installed an Anaconda environment without tensorflow in anaconda's envs, and I use it when I am not using tensorflow. I hope this question can still get a complete answer; please see my answer.
I am data scientist in beijing and working with anaconda in win7 but after I pip installed tensorflow v1.4,code completion of my IDE spyder in anaconda not work, before anything of code completion function is work perfectly. Now even I uninstall tensorflow,code completion function of spyder still not work. Any help? m...
0
1
487
0
47,525,945
0
1
0
0
2
false
0
2017-11-04T09:36:00.000
0
2
0
how can i make anaconda spyder code completion work again after installing tensorflow
47,109,343
0
python,tensorflow,anaconda,spyder,code-completion
I tried pip-installing rope_py3k, jedi and readline, and resetting the tool settings, but none of them helped. My Spyder code editing area also cannot auto-complete after the installation of tensorflow; I re-installed everything and found the same problem. However, when I re-installed all envs except tensorflow, it can work...
I am data scientist in beijing and working with anaconda in win7 but after I pip installed tensorflow v1.4,code completion of my IDE spyder in anaconda not work, before anything of code completion function is work perfectly. Now even I uninstall tensorflow,code completion function of spyder still not work. Any help? m...
0
1
487
0
47,115,148
0
0
0
0
1
true
5
2017-11-04T18:20:00.000
10
1
0
Sklearn's imputer v/s df.fillnan to replace nan values with mean of the column
47,114,021
1.2
python,pandas,dataframe,scikit-learn
I feel the imputer class has its own benefits, because you can simply specify mean or median to perform the imputation, unlike fillna, where you need to supply the values. But with the imputer you need to fit and transform the dataset, which means more lines of code. It may give you better speed over fillna, but unless the data is really bi...
I found 2 ways to replace nan values in pythons, One using sklearn's imputer class and the other using df.fillnan() the later seems easy with less code. But efficiency wise which is better. Can anyone explain the use cases of each.?
0
1
2,979
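Both options side by side on a toy frame, using the current SimpleImputer name (the 2017-era Imputer class has been replaced):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"x": [1.0, np.nan, 3.0], "y": [np.nan, 5.0, 7.0]})

filled_pd = df.fillna(df.mean())   # pandas one-liner

imp = SimpleImputer(strategy="mean")             # scikit-learn: fit + transform,
filled_sk = pd.DataFrame(imp.fit_transform(df),  # but reusable on new data later
                         columns=df.columns)
```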
0
47,131,429
0
0
0
0
1
false
7
2017-11-06T06:55:00.000
1
5
0
Diff between two dataframes in pandas
47,131,361
0.039979
python,pandas,merge,compare,diff
Set df2.columns = df1.columns Now, set every column as the index: df1 = df1.set_index(df1.columns.tolist()), and similarly for df2. You can now do df1.index.difference(df2.index), and df2.index.difference(df1.index), and the two results are your distinct columns.
I have two dataframes both of which have the same basic schema. (4 date fields, a couple of string fields, and 4-5 float fields). Call them df1 and df2. What I want to do is basically get a "diff" of the two - where I get back all rows that are not shared between the two dataframes (not in the set intersection). Note,...
0
1
32,562
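A sketch of the index.difference approach described above, on two toy frames with the same schema:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
df2 = pd.DataFrame({"a": [2, 3, 4], "b": ["y", "z", "w"]})

df2.columns = df1.columns
i1 = df1.set_index(df1.columns.tolist()).index   # every column becomes part of the index
i2 = df2.set_index(df2.columns.tolist()).index

only_in_df1 = i1.difference(i2)   # rows of df1 not present in df2
only_in_df2 = i2.difference(i1)   # rows of df2 not present in df1
```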
0
47,259,321
0
1
0
0
1
false
0
2017-11-06T17:14:00.000
0
1
0
ModuleNotFound Error in python script but only when imported into a parent script
47,142,323
0
python,windows,numpy,anaconda
basteflp, thanks for your response. I managed to solve it. The ModuleNotFound error was due to running the script outside of a specific Anaconda environment. Running the script after loading the Anaconda environment resolved the error.
This is on a Windows machine with Anaconda installed. Script B runs correctly and produces the correct result. Script B is called from a Windows console app. When script A imports script B, script B fails with the error "ModuleNotFoundError: No module named 'numpy'". When script B is passed directly to the Python executable, scr...
0
1
52
0
47,147,531
0
0
0
0
1
false
4
2017-11-06T23:13:00.000
0
6
0
Filter Pandas Dataframe using an arbitrary number of conditions
47,147,414
0
python,pandas
I believe that something like reduce(lambda mask, f: mask & (df[f[0]] < f[1]), list_of_filters, pd.Series(True, index=df.index)) will do it; the initial all-True Series ensures the first condition is combined with a boolean mask rather than with a tuple (see the sketch below).
I am comfortable with basic filtering and querying using Pandas. For example, if I have a dataframe called df I can do df[df['field1'] < 2] or df[df['field2'] < 3]. I can also chain multiple criteria together, for example: df[(df['field1'] < 3) & (df['field2'] < 2)]. What if I don't know in advance how many criteria I ...
0
1
813
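A runnable sketch of that reduce-based filter, with an invented frame and an invented list of (column, threshold) conditions.
from functools import reduce
import pandas as pd

df = pd.DataFrame({"field1": [1, 2, 3], "field2": [1, 4, 1]})

# each (column, threshold) pair is one "< threshold" condition
list_of_filters = [("field1", 3), ("field2", 2)]

mask = reduce(lambda acc, f: acc & (df[f[0]] < f[1]),
              list_of_filters,
              pd.Series(True, index=df.index))   # start from an all-True mask
filtered = df[mask]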
0
47,172,267
0
0
0
0
1
false
0
2017-11-07T01:36:00.000
0
1
0
Getting numpy vector from a trained Doc2Vec model for each document
47,148,615
0
python-3.x,nlp,gensim,doc2vec
Bulk training only creates vectors for tags you supplied. If you want to read out a bulk-trained vector per paragraph (as if by model.docvecs['paragraph000']), you have to give each paragraph a unique tag during training (like 'paragraph000'). You can give docs other tags as well - but bulk training only creates rememb...
This is my first time using Doc2Vec I'm trying to classify works of an author. I have trained a model with Labeled Sentences (paragraphs, or strings of specified length), with words = the list of words in the paragraph, and tags = author's name. In my case I only have two authors. I tried accessing the docvecs attribut...
0
1
773
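A hedged sketch of the per-paragraph tagging that the answer above describes; the paragraph texts, tags, and hyperparameters are invented, and in gensim 4 the lookup is model.dv rather than model.docvecs.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# invented corpus: every paragraph gets its own unique tag plus the author as a second tag
paragraphs = [
    ("paragraph000", "call me ishmael some years ago", "melville"),
    ("paragraph001", "it was the best of times it was the worst of times", "dickens"),
]
docs = [TaggedDocument(words=text.split(), tags=[pid, author])
        for pid, text, author in paragraphs]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# per-paragraph vector, available because the tag was supplied during training
vec = model.docvecs["paragraph000"]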
0
64,742,353
0
0
0
0
1
false
30
2017-11-07T07:54:00.000
5
3
0
What is the difference between xgb.train and xgb.XGBRegressor (or xgb.XGBClassifier)?
47,152,610
0.321513
python,machine-learning,scikit-learn,regression,xgboost
In my opinion the main difference is the training/prediction speed. For further reference I will call xgboost.train the 'native_implementation' and XGBClassifier.fit the 'sklearn_wrapper'. I have run some benchmarks on a dataset of shape (240000, 348). Fit/train time: sklearn_wrapper time = 89 seconds, native_implementati...
I already know "xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost." But do they have any other difference?
0
1
17,925
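To make the two interfaces in that comparison concrete, here is a minimal sketch; the data is random and the parameter values are arbitrary (on older XGBoost releases the squared-error objective was spelled 'reg:linear').
import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.rand(100)

# native interface: data wrapped in a DMatrix, parameters passed as a dict
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "reg:squarederror", "eta": 0.1}, dtrain, num_boost_round=50)
pred_native = booster.predict(xgb.DMatrix(X))

# scikit-learn wrapper: the same algorithm behind a fit/predict interface
reg = xgb.XGBRegressor(objective="reg:squarederror", learning_rate=0.1, n_estimators=50)
reg.fit(X, y)
pred_sklearn = reg.predict(X)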
0
47,173,103
0
1
0
0
2
false
0
2017-11-08T05:56:00.000
0
2
0
No module named tensorflow even after it is present in the local
47,172,525
0
python,tensorflow,jupyter
It's done: installing TensorFlow inside the environment itself fixed it.
I already have TensorFlow in my Anaconda install. Still, when I run the IPython notebook, it shows "No module named tensorflow".
0
1
263
0
47,172,877
0
1
0
0
2
false
0
2017-11-08T05:56:00.000
0
2
0
No module named tensorflow even after it is present in the local
47,172,525
0
python,tensorflow,jupyter
Are you using a virtual environment? If yes, there might be a version mismatch. Try "pip install ipython" inside that environment and then import tensorflow; that may work.
I already have TensorFlow in my Anaconda install. Still, when I run the IPython notebook, it shows "No module named tensorflow".
0
1
263
0
47,173,911
0
0
0
1
1
false
0
2017-11-08T06:50:00.000
0
1
0
Where is RDD or Spark SQL dataframe stored or persisted in client deploy mode on a Spark 2.1 Standalone cluster?
47,173,286
0
python,pyspark,apache-spark-sql,spark-dataframe
If I understand correctly, what you will get on the client side is an int. At least it should be, if set up correctly. So the answer is no, the DataFrame is not going to hit your local RAM. You are interacting with the cluster via SparkSession (SparkContext for earlier versions). Even though you are developing, i.e. writing ...
I am deploying a Jupyter notebook(using python 2.7 kernel) on client side which accesses data on a remote and does processing in a remote Spark standalone cluster (using pyspark library). I am deploying spark cluster in Client mode. The client machine does not have any Spark worker nodes. The client does not have enoug...
0
1
187
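A brief pyspark sketch of that point; the master URL and data path are placeholders, and only explicit collection-style calls (count, toPandas) bring anything back to the client.
from pyspark.sql import SparkSession

# the master URL and data path are hypothetical
spark = (SparkSession.builder
         .master("spark://cluster-host:7077")
         .appName("client-mode-notebook")
         .getOrCreate())

df = spark.read.parquet("hdfs:///data/events.parquet")  # a plan; rows stay on the executors
n = df.count()                      # a plain Python int comes back to the client
sample = df.limit(10).toPandas()    # only explicit collection like this pulls rows into local RAM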
0
47,379,830
0
0
0
0
1
true
2
2017-11-08T19:09:00.000
6
2
0
Sentiment Analysis with Imbalanced Dataset in LightGBM
47,187,750
1.2
python-3.x,machine-learning,nlp,sentiment-analysis,lightgbm
Are there any approaches to follow to handle this type of datasets that are so imbalanced? Your dataset is almost balanced. 70/30 is close to equal. With gradient boosted trees it is possible to train on much more unbalanced data, like credit scoring, fraud detection, and medical diagnostics, where the percentage of po...
I am trying to perform sentiment analysis on a dataset of 2 classes (binary classification). The dataset is heavily imbalanced, about 70% - 30%. I am using LightGBM and Python 3.6 for building the model and predicting the output. I think the imbalance in the dataset affects the performance of my model. I get about 90% accuracy but it does...
0
1
3,502
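If the recall metrics do suggest the minority class is being neglected, LightGBM exposes class-weighting knobs; this sketch only illustrates where they go, and the values are guesses rather than recommendations.
import lightgbm as lgb

# scale_pos_weight (or is_unbalance=True) reweights the rarer class;
# the value below is simply (# majority / # minority) for a 70/30 split
clf = lgb.LGBMClassifier(
    objective="binary",
    n_estimators=200,
    scale_pos_weight=70 / 30,
)
# clf.fit(X_train, y_train)                         # training data names are placeholders
# probabilities = clf.predict_proba(X_valid)[:, 1]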
0
48,100,987
0
0
0
0
1
false
4
2017-11-09T18:55:00.000
0
2
0
NumPy + BLAS + LAPACK on GPU (AMD and Nvidia)
47,209,532
0
python,numpy,lapack,blas
Another option is ArrayFire. While this package does not contain a complete BLAS and LAPACK implementation, it does offer much of the same functionality. It is compatible with OpenCL and CUDA, and hence, is compatible with AMD and Nvidia architectures. It has wrappers for Python, making it easy to use.
We have a Python code which involves expensive linear algebra computations. The data is stored in NumPy arrays. The code uses numpy.dot, and a few BLAS and LAPACK functions which are currently accessed through scipy.linalg.blas and scipy.linalg.lapack. The current code is written for CPU. We want to convert the code so...
0
1
2,168
0
50,078,794
0
0
0
0
1
true
2
2017-11-10T20:32:00.000
4
1
0
OpenCV - undefined symbol: hb_font_funcs_set_variation_glyph_func
47,230,690
1.2
python,opencv,anaconda
You can try conda install -c conda-forge opencv; this one will also install OpenCV version 3. For your error, you can fix it by installing pango using conda install -c conda-forge pango.
I have a working anaconda environment on my ubuntu 17.10 machine and installed opencv3 using conda install -c menpo opencv3 When I try to import cv2 the following error shows up import cv2 ImportError: /usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0: undefined symbol: hb_font_funcs_set_variation_glyph_func
0
1
2,740
0
50,935,626
0
0
0
0
1
false
0
2017-11-12T17:45:00.000
1
1
0
Convert Contour into BLOB OpenCV
47,251,937
0.197375
python,opencv,blob,contour
After a lot of programming, I realized the procedure is: after a contour is extracted, create a black image with the same size as the original image. Draw the contour into the black image. Find the coordinates of the contour centre from the moments contour feature, x0 and y0. Use floodFill() to white-fill the interior of the contou...
Hi everyone, I am trying to convert a contour to a blob in an image. There are several blobs in the image; the proper one is extracted by applying contour features. The blob is required to mask a grayscale image. I have tried extracting each non-zero pixel and pointPolygonTest() in order to find the BLOB poi...
0
1
532
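A self-contained sketch of that contour-fill-and-mask procedure; the synthetic rectangle stands in for the real blob, and the findContours return signature shown is the OpenCV 4 one (OpenCV 3 also returns the image as a first value).
import cv2
import numpy as np

# synthetic grayscale image with one bright rectangle standing in for the real blob
img = np.zeros((100, 100), dtype=np.uint8)
cv2.rectangle(img, (30, 30), (70, 70), 200, thickness=-1)

contours, _ = cv2.findContours((img > 0).astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]

# black image of the same size, with the contour interior filled white
mask = np.zeros_like(img)
cv2.drawContours(mask, [contour], -1, 255, thickness=-1)

# contour centre from moments (the seed point the answer suggests for floodFill)
m = cv2.moments(contour)
x0, y0 = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

masked = cv2.bitwise_and(img, img, mask=mask)   # the filled blob masking the grayscale image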
0
47,270,384
0
1
0
0
1
true
2
2017-11-12T19:45:00.000
0
4
0
Are all objects returned by value rather than by reference?
47,253,169
1.2
python,function,numpy,heap-memory
I see what I missed now: Objects are created on the heap, but function frames are on the stack. So when methodB finishes, its frame will be reclaimed, but that object will still exist on the heap, and methodA can access it with a simple reference.
I am coding in Python trying to decide whether I should return a numpy array (the result of a diff on some other array) or return numpy.where(diff)[0], which is a smaller array but requires that little extra work to create. Let's call the method where this happens methodB. I call methodB from methodA. The rub is that I...
0
1
2,293
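A tiny demonstration of the point in the answer above: the array lives on the heap and only a reference is handed back, so nothing is copied when the inner function's frame is discarded.
import numpy as np

def method_b():
    diff = np.arange(5)      # the array object is created on the heap
    return diff              # only a reference is returned; nothing is copied

def method_a():
    result = method_b()      # method_b's stack frame is gone, but the heap object survives
    return result

a = method_a()
b = method_a()
print(a is b)   # False: each call built its own array
x = method_b()
y = x
print(x is y)   # True: assignment copies the reference, not the data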
0
47,269,281
0
1
0
0
1
true
3
2017-11-13T15:21:00.000
6
1
0
Spyder :An error ocurred while starting the kernel
47,267,716
1.2
python,installation,anaconda,spyder
The problem is that you have two Python versions installed: C:\Users\afsan\Anaconda3\ C:\Users\afsan\AppData\Local\Programs\Python\Python36 Given that it seems you want to use Spyder with Anaconda, please remove your second Python version (manually, if necessary). That should fix your problem.
I am still getting this error: "An error occurred while starting the kernel". Things I tried: the setuptools command; updating Spyder; uninstalling everything that had the word python in it from the "Uninstall or change a program" panel; uninstalling and reinstalling Anaconda; reading people's responses on how they tried to fix it; tried not...
0
1
19,126
0
47,274,239
0
0
0
0
1
false
1
2017-11-13T21:46:00.000
0
3
0
python distinguish between '300' and '300.0' for a dataframe column
47,274,094
0
python,pandas,csv,dataframe
Some solutions: Go through all the files, change the columns names, then save the result in a new folder. Now when you read a file, you can go to the new folder and read it from there. Wrap the normal file read function in another function that automatically changes the column names, and call that new function when y...
Recently I have been developing some code to read a csv file and store key data columns in a dataframe. Afterwards I plan to have some mathematical functions performed on certain columns in the dataframe. I've been fairly successful in storing the correct columns in the dataframe. I have been able to have it do whatev...
0
1
213
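A sketch of the second option in that answer, a thin wrapper around the read call that normalises column names in one place; the file name, raw headers, and target names here are all hypothetical.
import pandas as pd

# hypothetical mapping from the raw machine headers to the names the rest of the code expects
RENAMES = {"300": "channel_300_int", "300.0": "channel_300_float"}

def read_data(path):
    """pd.read_csv plus column-name normalisation, applied in one place."""
    df = pd.read_csv(path)
    return df.rename(columns=RENAMES)

# df = read_data("measurements.csv")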
0
47,689,288
0
0
0
0
1
false
0
2017-11-14T03:49:00.000
0
1
0
cv2 running optical flow on particular rectangles
47,277,332
0
python,opencv,opticalflow,cv2
Yes, it's possible. cv2.calcOpticalFlowPyrLK() will be the optical flow function you need. Before you make that function call, you will have to create an image mask. I did a similar project, but in C++, though I can outline the steps for you: create an empty matrix with the same width and height as your images; using the p...
I am using OpenCV's Optical Flow module. I understand the examples in the documentation but those take the entire image and then get the optical flow over the image. I only want to pass it over some parts of an image. Is it possible to do that? If yes, how do I go about it? Thanks!
0
1
279
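A hedged Python sketch of that masking idea for the sparse Lucas-Kanade tracker; the frames are synthetic stand-ins and the rectangle coordinates are placeholders.
import cv2
import numpy as np

# stand-ins for two consecutive grayscale frames
prev_gray = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
next_gray = np.roll(prev_gray, 2, axis=1)

# mask that is non-zero only inside the rectangle of interest
x, y, w, h = 50, 80, 120, 90
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255

# pick features only inside the masked region, then track them into the next frame
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7, mask=mask)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)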
0
51,707,973
0
0
0
0
2
true
5
2017-11-14T09:46:00.000
0
2
0
Early stopping using tensorflow tf.estimator ?
47,282,399
1.2
python,tensorflow
Recently I came across this function in the TensorFlow API: tf.keras.callbacks.EarlyStopping (TF version r1.9). Arguments: monitor: quantity to be monitored; min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvemen...
I am using tensorflow v1.4. I want to use early stopping on the validation set with a patience of 5 epochs. I have searched the web and found out that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this?
0
1
1,059
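A minimal sketch of wiring up that callback; the monitored quantity and the commented fit call are placeholders, and this assumes the model is built with tf.keras rather than a bare tf.estimator.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",   # quantity watched on the validation set
    min_delta=0.0,        # any improvement counts
    patience=5,           # stop after 5 epochs without improvement
)

# 'model' stands for a compiled tf.keras model, and the data names are placeholders:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])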
0
49,161,168
0
0
0
0
2
false
5
2017-11-14T09:46:00.000
0
2
0
Early stopping using tensorflow tf.estimator ?
47,282,399
0
python,tensorflow
There doesn't seem to be a good way of doing this, unfortunately. One method to consider is to save checkpoints quite often during training, and then later iterate over them and evaluate them. Then you can discard the checkpoints that do not have the best eval performance. This doesn't help you in saving time durin...
I am using tensorflow v1.4. I want to use early stopping on the validation set with a patience of 5 epochs. I have searched the web and found out that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this?
0
1
1,059
0
64,614,856
0
0
0
0
1
false
14
2017-11-14T12:58:00.000
1
5
0
Is there a way in pd.read_csv to replace NaN value with other character?
47,286,547
0.039979
python,pandas,csv
Putting this into the read_csv call does work: df = pd.read_csv('file.csv', dtype={"count": pd.Int64Dtype()}). This nullable type supports both integers and pandas.NA values, so you can import without the integer column being upcast to floats. If necessary, you can then use regular DataFrame commands to clean up the missing values...
I have some data in a csv file. Because it is collected from a machine, all lines should be numbers, but NaN values exist in some lines, and the machine automatically replaces these NaN values with the string '-'. My question is how to set the params of pd.read_csv() to automatically replace '-' values with zero when reading the csv file?
0
1
31,191
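A small sketch combining the two ideas in this record, treating '-' as missing on read and then filling with zero; the sample data and the "count" column are invented, and the nullable Int64 dtype needs a reasonably recent pandas.
import io
import pandas as pd

raw = "count,temp\n3,20.5\n-,21.0\n5,19.8\n"   # '-' standing in for the machine's missing marker

df = pd.read_csv(io.StringIO(raw),
                 na_values="-",                      # treat '-' as missing on read
                 dtype={"count": pd.Int64Dtype()})   # nullable integer dtype: no upcast to float
df["count"] = df["count"].fillna(0)                  # replace the missing readings with zero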
0
51,295,721
0
1
0
0
1
true
4
2017-11-15T16:02:00.000
0
1
0
python3 Illegal instruction (core dump)
47,312,023
1.2
python,linux,virtual-machine,python-3.5
Although I'm not sure the source of the issue, I did a full purge of python3 and reinstalled it and all packages with it and that fixed the issue!
I am on a virtual machine running on Ubuntu 16.04. I have installed pandas, sklearn, and conda using pip3. When I try to run a python3 program using these packages, I get the error "Illegal instruction (core dump)." Not sure how to fix this. Simple python3 programs (aka no imports) run fine. I also tried importing but ...
0
1
2,258
0
47,341,200
0
1
0
0
1
false
16
2017-11-16T07:39:00.000
26
1
0
Viewing dataframes in Spyder using a command in its console
47,324,077
1
python,r,rstudio,spyder
(Spyder maintainer here) There's no function similar to view() in Spyder. To view the contents of a Dataframe, you need to double-click on it in the Variable Explorer.
I have been using R Studio for quite some time, and I find View() function very helpful for viewing my datasets. Is there a similar View() counterpart in Spyder?
0
1
18,421
0
47,371,621
0
0
0
0
2
false
0
2017-11-16T08:07:00.000
0
3
0
Learners in CNTK C# API
47,324,511
0
c#,python,cntk
Checked that CNTKLib provides those learners in the CPUOnly package. Nesterov is missing there but present in Python. There is a difference when creating the trainer object with a CNTKLib learner function vs. a Learner class. If a learner class is used, the net parameters are provided as an IList. This can be obtained using...
I am using the C# CNTK 2.2.0 API for training. I have installed the NuGet packages CNTK.CPUOnly and CNTK.GPU. I am looking for the following learners in C#: 1. AdaDelta 2. Adam 3. AdaGrad 4. Nesterov. It looks like Python supports these learners, but the C# package is not showing them. I can see only SGD and SGDMomentum learners in C# there....
0
1
447
0
47,324,718
0
0
0
0
2
false
0
2017-11-16T08:07:00.000
0
3
0
Learners in CNTK C# API
47,324,511
0
c#,python,cntk
Download the NCCL 2 package to configure it in C# from www.nvidia.com, or google "NCCL download".
I am using the C# CNTK 2.2.0 API for training. I have installed the NuGet packages CNTK.CPUOnly and CNTK.GPU. I am looking for the following learners in C#: 1. AdaDelta 2. Adam 3. AdaGrad 4. Nesterov. It looks like Python supports these learners, but the C# package is not showing them. I can see only SGD and SGDMomentum learners in C# there....
0
1
447
0
47,336,205
0
0
0
0
1
false
0
2017-11-16T16:08:00.000
0
2
0
Error reported while running pyomo optimization with cbc solver and using timelimit
47,334,254
0
python-2.7,pyomo,coin-or-cbc
You could try to set the bound gap tolerance such that it will accept the other answer. I'm surprised that the solver status is coming back with error if there is a feasible solution found. Could you print out the whole results object?
I am trying to solve Optimisation problem with pyomo (Pyomo 5.3 (CPython 2.7.13 on Linux 3.10.0-514.26.2.el7.x86_64)) using CBC solver (Version: 2.9.8) and specifying a time limit in solver of 60 sec. The solver is getting a feasible solution (-1415.8392) but apparently not yet optimal (-1415.84) as you can see below. ...
0
1
1,241
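A hedged Pyomo sketch of inspecting the results object and loosening the acceptance criteria, as the answer above suggests; the tiny model is a stand-in, and the CBC option names ('seconds', 'ratioGap') are an assumption about what the solver accepts.
from pyomo.environ import ConcreteModel, Var, Objective, NonNegativeReals, maximize, SolverFactory

# trivial stand-in model
model = ConcreteModel()
model.x = Var(domain=NonNegativeReals, bounds=(0, 10))
model.obj = Objective(expr=model.x, sense=maximize)

opt = SolverFactory("cbc")
opt.options["seconds"] = 60       # time limit, as in the question
opt.options["ratioGap"] = 0.001   # loosen the relative bound gap so a near-optimal incumbent is accepted

results = opt.solve(model, tee=False)
print(results)                                                        # the whole results object
print(results.solver.status, results.solver.termination_condition)   # status and termination condition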