Column schema (name: dtype, value range or string length):
GUI and Desktop Applications: int64, 0 to 1
A_Id: int64, 5.3k to 72.5M
Networking and APIs: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Available Count: int64, 1 to 13
is_accepted: bool, 2 classes
Q_Score: int64, 0 to 1.72k
CreationDate: string, length 23
Users Score: int64, -11 to 327
AnswerCount: int64, 1 to 31
System Administration and DevOps: int64, 0 to 1
Title: string, length 15 to 149
Q_Id: int64, 5.14k to 60M
Score: float64, -1 to 1.2
Tags: string, length 6 to 90
Answer: string, length 18 to 5.54k
Question: string, length 49 to 9.42k
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 1 to 1
ViewCount: int64, 7 to 3.27M

Title: How do I get the row count of a Pandas DataFrame?
Created: 2013-04-11T08:14:00.000 | Tags: python,pandas,dataframe | Categories: Data Science and Machine Learning
Q_Id: 15,943,769 | A_Id: 61,413,025 | Q_Score: 1,569 | Users Score: 4 | Score: 0.053283 | AnswerCount: 15 | ViewCount: 3,270,072 | Accepted: false | Available Count: 1
Question: How do I get the number of rows of a pandas dataframe df?
Answer: Either of these can do it (df is the name of the DataFrame). Method 1: using the len function: len(df) gives the number of rows in a DataFrame named df. Method 2: using the count function: df[col].count() counts the non-null rows in a given column col, and df.count() gives that count for all the columns.
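A quick sketch of the two methods from this answer, using a tiny made-up DataFrame; note that count() skips nulls, so it can differ from len():

```python
import pandas as pd

# Hypothetical 3-row frame; column "b" has one missing value
df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, None, 6.0]})

n_rows = len(df)             # Method 1: total number of rows
col_count = df["b"].count()  # Method 2: non-null rows in column "b" only
all_counts = df.count()      # non-null row count per column
```

Because of the null handling, len(df) (or the equivalent df.shape[0]) is the safer answer to "how many rows"; df.count() answers "how many non-missing values".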

Title: easy_install and pip giving errors when trying to install numpy
Created: 2013-04-11T19:21:00.000 | Tags: python,numpy,pip,easy-install | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 15,957,071 | A_Id: 59,262,156 | Q_Score: 3 | Users Score: 3 | Score: 0.291313 | AnswerCount: 2 | ViewCount: 5,561 | Accepted: false | Available Count: 1
Question: I am running Python 2.7.2 on my machine. I am trying to install numpy with easy_install and pip, but neither of them is able to do so. So, when I try: sudo easy_install-2.7 numpy I get this error: "The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and ha...
Answer: I was facing the same error while installing the requirements for my Django project. This worked for me: upgrade your setuptools version via pip install --upgrade setuptools and run the command for installing the packages again.

Title: Using sklearn and Python for a large application classification/scraping exercise
Created: 2013-04-13T15:44:00.000 | Tags: python,scrapy,classification,scikit-learn | Categories: Web Development, Data Science and Machine Learning
Q_Id: 15,989,610 | A_Id: 15,998,577 | Q_Score: 5 | Users Score: 5 | Score: 0.462117 | AnswerCount: 2 | ViewCount: 940 | Accepted: false | Available Count: 1
Question: I am working on a relatively large text-based web classification problem and I am planning on using the multinomial Naive Bayes classifier in sklearn in python and the scrapy framework for the crawling. However, I am a little concerned that sklearn/python might be too slow for a problem that could involve classificatio...
Answer: Use the HashingVectorizer and one of the linear classification modules that supports the partial_fit API, for instance SGDClassifier, Perceptron or PassiveAggressiveClassifier, to incrementally learn the model without having to vectorize and load all the data in memory upfront, and you should not have any issue in learning...

Title: List of lists vs dictionary
Created: 2013-04-13T17:00:00.000 | Tags: python,arrays,list,math,dictionary | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 15,990,456 | A_Id: 15,990,493 | Q_Score: 14 | Users Score: 12 | Score: 1 | AnswerCount: 2 | ViewCount: 20,631 | Accepted: false | Available Count: 1
Question: In Python, are there any advantages / disadvantages of working with a list of lists versus working with a dictionary, more specifically when doing numerical operations with them? I'm writing a class of functions to solve simple matrix operations for my linear algebra class. I was using dictionaries, but then I saw that...
Answer: When the keys of the dictionary are 0, 1, ..., n, a list will be faster, since no hashing is involved. As soon as the keys are not such a sequence, you need to use a dict.
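A small illustration of the answer's point, with made-up values: for a dense 0..n-1 key range a list and a dict are interchangeable for lookups, but only the dict handles gaps:

```python
# The same "matrix row" stored both ways
row_as_list = [10, 20, 30]
row_as_dict = {0: 10, 1: 20, 2: 30}

# For keys 0..n-1 the two lookups agree
same = all(row_as_list[i] == row_as_dict[i] for i in range(3))

# A sparse structure only works as a dict
sparse = {0: 10, 7: 99}
missing = sparse.get(3, 0)  # absent keys can default cleanly
```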

Title: Alternative inputs to SciPy Radial Basis Functions
Created: 2013-04-17T15:07:00.000 | Tags: python,scipy | Categories: Data Science and Machine Learning
Q_Id: 16,063,698 | A_Id: 16,064,603 | Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 1 | ViewCount: 512 | Accepted: false | Available Count: 1
Question: I am trying to generate a radial basis function where the input variables are defined at runtime. The scipy.interpolate.Rbf function seems to request discrete lists for each input and output variable, e.g. rbf(x,y,z). This restricts you to defining fixed variables beforehand. I have tried unsuccessfully to pass a list...
Answer: After looking through the source for the SciPy function, I will just subclass it and override __init__, where the individual inputs are combined into an array anyway.

Title: Multiplying Columns by Scalars in Pandas
Created: 2013-04-19T19:32:00.000 | Tags: python,pandas | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,112,209 | A_Id: 16,225,932 | Q_Score: 4 | Users Score: 5 | Score: 1.2 | AnswerCount: 2 | ViewCount: 9,702 | Accepted: true | Available Count: 1
Question: Suppose I have a pandas DataFrame with two columns named 'A' and 'B'. Now suppose I also have a dictionary with keys 'A' and 'B', and the dictionary points to a scalar. That is, dict['A'] = 1.2 and similarly for 'B'. Is there a simple way to multiply each column of the DataFrame by these scalars? Cheers!
Answer: As Wouter said, the recommended method is to convert the dict to a pandas.Series and multiply the two objects together: result = df * pd.Series(myDict)
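A sketch of the accepted approach with made-up column factors; df * pd.Series(d) aligns the Series index with the DataFrame's column labels, so each column is scaled by its own factor:

```python
import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0], "B": [10.0, 20.0]})
factors = {"A": 1.2, "B": 0.5}   # hypothetical per-column scalars

result = df * pd.Series(factors)  # columns matched to factors by label
```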

Title: Python Pylab subplot bug?
Created: 2013-04-21T16:07:00.000 | Tags: python,matplotlib,plot | Categories: Data Science and Machine Learning
Q_Id: 16,133,206 | A_Id: 16,133,372 | Q_Score: 1 | Users Score: 1 | Score: 0.099668 | AnswerCount: 2 | ViewCount: 375 | Accepted: false | Available Count: 1
Question: I am plotting some data using pylab and everything works perfect as I expect. I have 6 different graphs to plot and I can individually plot them in separate figures. But when I try to subplot() these graphs, the last one (subplot(3,2,6)) doesn't show anything. What confuses me is that this 6th graph is drawn perfectly ...
Answer: I found out that subplot() should be called before the plot(), issue resolved.

Title: Saving python data for an application
Created: 2013-04-22T14:06:00.000 | Tags: python,file,numpy | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,149,187 | A_Id: 16,149,290 | Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 4 | ViewCount: 106 | Accepted: false | Available Count: 2
Question: I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user ...
Answer: I had this problem long ago, so I don't have the code near to show you, but I used a binary write to a tmp file to get that done. EDIT: That's it, pickle is what I used. Thanks SpankMe and RoboInventor.

Title: Saving python data for an application
Created: 2013-04-22T14:06:00.000 | Tags: python,file,numpy | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,149,187 | A_Id: 16,149,283 | Q_Score: 0 | Users Score: 2 | Score: 0.099668 | AnswerCount: 4 | ViewCount: 106 | Accepted: false | Available Count: 2
Question: I need to save multiple numpy arrays along with the user input that was used to compute the data these arrays contain in a single file. I'm having a hard time finding a good procedure to use to achieve this or even what file type to use. The only thing i can think of is too put the computed arrays along with the user ...
Answer: How about using pickle and then storing pickled array objects in a storage of your choice, like database or files?
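A stdlib-only sketch of the pickle route both answers land on; plain lists stand in for the numpy arrays, and the file name is made up:

```python
import os
import pickle
import tempfile

# Hypothetical payload: computed data plus the user input that produced it
payload = {
    "user_input": {"n": 3, "scale": 2.0},
    "arrays": [[1, 2, 3], [2, 4, 6]],
}

path = os.path.join(tempfile.mkdtemp(), "session.pkl")
with open(path, "wb") as fh:
    pickle.dump(payload, fh)   # everything lands in one binary file

with open(path, "rb") as fh:
    restored = pickle.load(fh)
```

For real numpy arrays, numpy.savez is the more idiomatic single-file alternative, though it does not carry arbitrary Python objects the way pickle does.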

Title: how to make minepy.MINE run faster?
Created: 2013-04-23T14:07:00.000 | Tags: python,python-2.7 | Categories: Other, Data Science and Machine Learning
Q_Id: 16,171,519 | A_Id: 16,401,807 | Q_Score: 0 | Users Score: 1 | Score: 0.197375 | AnswerCount: 1 | ViewCount: 425 | Accepted: false | Available Count: 1
Question: I have a numerical matrix of 2500*2500. To calculate the MIC (maximal information coefficient) for each pair of vectors, I am using minepy.MINE, but this is taking forever; can I make it faster?
Answer: First, use the latest version of minepy. Second, you can use a smaller value of the "alpha" parameter, say 0.5 or 0.45. In this way you will reduce the computational time at the expense of characteristic matrix accuracy. Davide

Title: Drawing average line in histogram (matplotlib)
Created: 2013-04-23T23:35:00.000 | Tags: python,matplotlib,axis | Categories: Data Science and Machine Learning
Q_Id: 16,180,946 | A_Id: 16,180,974 | Q_Score: 90 | Users Score: 2 | Score: 0.132549 | AnswerCount: 3 | ViewCount: 127,120 | Accepted: false | Available Count: 1
Question: I am drawing a histogram using matplotlib in python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram? I am using the plot() command, but not sure how to d...
Answer: I would look at the largest value in your data set (i.e. the histogram bin values), multiply that value by a number greater than 1 (say 1.5) and use that to define the y axis value. This way it will appear above your histogram regardless of the values within the histogram.

Title: How to generate a list of all possible alphabetical combinations based on an input of numbers
Created: 2013-04-24T05:21:00.000 | Tags: python,algorithm,alphabetical | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,183,941 | A_Id: 16,186,125 | Q_Score: 1 | Users Score: 1 | Score: 0.033321 | AnswerCount: 6 | ViewCount: 1,582 | Accepted: false | Available Count: 1
Question: I have just come across an interesting interview style type of question which I couldn't get my head around. Basically, given a number to alphabet mapping such that [1:A, 2:B, 3:C ...], print out all possible combinations. For instance "123" will generate [ABC, LC, AW] since it can be separated into 12,3 and 1,23. I'...
Answer: It's as simple as a tree. Suppose you are given "1261". Construct a tree with it as the root, defining each node as (left, right), where left always maps the first digit directly and right maps the first two digits as a combination. For the given number 1261: 1261 -> (1(261), 12(61)) -> 1 is the left node (direct map -> a), 12 is the right node (combo...
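The tree the answer describes amounts to a two-branch recursion at each position ("take one digit" versus "take two digits, if they form 10-26"). A sketch:

```python
def decode(digits):
    """All letter strings for a digit string under the map 1->A ... 26->Z."""
    if not digits:
        return [""]
    results = []
    # Branch 1: map the first digit alone (1-9)
    if digits[0] != "0":
        results += [chr(64 + int(digits[0])) + rest
                    for rest in decode(digits[1:])]
    # Branch 2: map the first two digits together (10-26)
    if len(digits) >= 2 and 10 <= int(digits[:2]) <= 26:
        results += [chr(64 + int(digits[:2])) + rest
                    for rest in decode(digits[2:])]
    return results

print(sorted(decode("123")))  # ['ABC', 'AW', 'LC']
```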

Title: Python numpy - Reproducibility of random numbers
Created: 2013-04-25T16:57:00.000 | Tags: python,random,numpy,prng | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,220,585 | A_Id: 16,223,497 | Q_Score: 3 | Users Score: -1 | Score: -0.099668 | AnswerCount: 2 | ViewCount: 1,368 | Accepted: false | Available Count: 2
Question: We have a very simple program (single-threaded) where we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called. Now I want to se...
Answer: If reproducibility is very important to you, I'm not sure I'd fully trust any PRNG to always produce the same output given the same seed. You might consider capturing the random numbers in one phase, saving them for reuse; then in a second phase, replay the random numbers you've captured. That's the only way to elimi...

Title: Python numpy - Reproducibility of random numbers
Created: 2013-04-25T16:57:00.000 | Tags: python,random,numpy,prng | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,220,585 | A_Id: 16,296,438 | Q_Score: 3 | Users Score: 5 | Score: 1.2 | AnswerCount: 2 | ViewCount: 1,368 | Accepted: true | Available Count: 2
Question: We have a very simple program (single-threaded) where we do a bunch of random sample generation. For this we are using several calls of the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called. Now I want to se...
Answer: Okay, David was right. The PRNGs in numpy work correctly. Throughout every minimal example I created, they worked as they are supposed to. My problem was a different one, but finally I solved it. Never loop over a dictionary within a deterministic algorithm. It seems that Python orders the items arbitrarily when cal...
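A sketch of the seeding pattern the question is after, assuming numpy; an isolated RandomState (the legacy API matching the 2013 context; today numpy.random.default_rng is preferred) gives identical streams per seed even when one draw decides how many further draws happen:

```python
import numpy as np

def sample(seed):
    rng = np.random.RandomState(seed)  # self-contained, seeded generator
    first = rng.normal(size=3)
    # data-dependent number of extra draws, as in the question
    extra = rng.random_sample(int(abs(first[0]) * 10) + 1)
    return first, extra

a1, e1 = sample(42)
a2, e2 = sample(42)
assert (a1 == a2).all() and (e1 == e2).all()  # identical per seed
```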

Title: train nltk classifier for just one label
Created: 2013-04-26T07:29:00.000 | Tags: python,machine-learning,nlp,classification,nltk | Categories: Data Science and Machine Learning
Q_Id: 16,230,984 | A_Id: 16,231,323 | Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 2 | ViewCount: 441 | Accepted: false | Available Count: 2
Question: I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how to train if you have on...
Answer: I see two questions: (1) how to train the system, and (2) can the system consist of "sci-fi" and "others"? The answer to 2 is yes. Having an 80% confidence threshold also makes sense, as long as you see with your data, features and algorithm that 80% is a good threshold. (If not, you may want to consider lowering it if not a...

Title: train nltk classifier for just one label
Created: 2013-04-26T07:29:00.000 | Tags: python,machine-learning,nlp,classification,nltk | Categories: Data Science and Machine Learning
Q_Id: 16,230,984 | A_Id: 16,231,216 | Q_Score: 0 | Users Score: 0 | Score: 1.2 | AnswerCount: 2 | ViewCount: 441 | Accepted: true | Available Count: 2
Question: I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and i am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how to train if you have on...
Answer: You can simply train a binary classifier to distinguish between sci-fi and not sci-fi. So train on the movie plots that are labeled as sci-fi and also on a selection of all other genres. It might be a good idea to have a representative sample of the same size for the other genres, such that not all are of the romantic co...

Title: list instead of separated arguments in python
Created: 2013-04-26T12:37:00.000 | Tags: python | Categories: Data Science and Machine Learning
Q_Id: 16,236,652 | A_Id: 16,236,700 | Q_Score: 0 | Users Score: 4 | Score: 1.2 | AnswerCount: 3 | ViewCount: 77 | Accepted: true | Available Count: 1
Question: I am trying to use function np.random.curve_fit(x,a,b,c,...,z) with a big but fixed number of fitting parameters. Is it possible to use tuples or lists here for shortness, like np.random.curve_fit(x,P), where P=(a,b,c,...,z)?
Answer: Well, to convert your example, you would use np.random.normal(x, *P). However, np.random.normal(x,a,b,c,...,z) wouldn't actually work. Maybe you meant another function?
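The * unpacking the answer refers to, shown with a plain stand-in function (np.random.normal itself only takes loc, scale and size, which is why the answer hedges); the names here are made up:

```python
def f(x, a, b, c):
    # stand-in for any function taking separate fit parameters
    return a * x * x + b * x + c

P = (2, 3, 4)    # parameters packed in a tuple
y = f(1, *P)     # identical to calling f(1, 2, 3, 4)
```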

Title: Clustering words based on Distance Matrix
Created: 2013-04-26T22:19:00.000 | Tags: python,cluster-computing,scikit-learn,hierarchical-clustering | Categories: Data Science and Machine Learning
Q_Id: 16,246,066 | A_Id: 54,499,731 | Q_Score: 24 | Users Score: 0 | Score: 0 | AnswerCount: 3 | ViewCount: 28,598 | Accepted: false | Available Count: 1
Question: My objective is to cluster words based on how similar they are with respect to a corpus of text documents. I have computed Jaccard Similarity between every pair of words. In other words, I have a sparse distance matrix available with me. Can anyone point me to any clustering algorithm (and possibly its library in Pytho...
Answer: I recommend taking a look at agglomerative clustering.

Title: Solving a linear equation in one variable
Created: 2013-04-29T07:30:00.000 | Tags: c++,python,algorithm,linear-algebra | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,273,351 | A_Id: 16,273,799 | Q_Score: 1 | Users Score: 1 | Score: 0.049958 | AnswerCount: 4 | ViewCount: 3,057 | Accepted: false | Available Count: 1
Question: What would be the most efficient algorithm to solve a linear equation in one variable given as a string input to a function? For example, for input string: "x + 9 – 2 - 4 + x = – x + 5 – 1 + 3 – x" The output should be 1. I am considering using a stack and pushing each string token onto it as I encounter spaces in ...
Answer: The first thing is to parse the string, to identify the various tokens (numbers, variables and operators), so that an expression tree can be formed by giving operators proper precedence. Regular expressions can help, but that's not the only method (grammar parsers like boost::spirit are good too, and you can even run y...

Title: Generate random number between 0.1 and 1.0. Python
Created: 2013-04-29T21:43:00.000 | Tags: python,random,floating-point | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,288,749 | A_Id: 31,707,656 | Q_Score: 26 | Users Score: 1 | Score: 0.019997 | AnswerCount: 10 | ViewCount: 37,037 | Accepted: false | Available Count: 2
Question: I'm trying to generate a random number between 0.1 and 1.0. We can't use rand.randint because it returns integers. We have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0. Does somebody else have an idea for this problem?
Answer: Try random.randint(1, 10)/10.0

Title: Generate random number between 0.1 and 1.0. Python
Created: 2013-04-29T21:43:00.000 | Tags: python,random,floating-point | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,288,749 | A_Id: 16,289,924 | Q_Score: 26 | Users Score: 1 | Score: 0.019997 | AnswerCount: 10 | ViewCount: 37,037 | Accepted: false | Available Count: 2
Question: I'm trying to generate a random number between 0.1 and 1.0. We can't use rand.randint because it returns integers. We have also tried random.uniform(0.1,1.0), but it returns a value >= 0.1 and < 1.0, we can't use this, because our search includes also 1.0. Does somebody else have an idea for this problem?
Answer: The standard way would be random.random() * 0.9 + 0.1 (random.uniform() internally does just this). This will return numbers between 0.1 and 1.0 without the upper border. But wait! 0.1 (aka ¹/₁₀) has no clear binary representation (as ⅓ in decimal)! So you won't get a true 0.1 anyway, simply because the computer can...
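One stdlib way to make both endpoints reachable is to draw on an integer grid and scale, since random.randint includes both of its bounds. The function name and grid size here are made up; this is a sketch, not a perfectly uniform continuous draw:

```python
import random

def uniform_inclusive(lo=0.1, hi=1.0, steps=10**7):
    """Sample [lo, hi] with BOTH endpoints reachable, at a
    resolution of (hi - lo) / steps."""
    k = random.randint(0, steps)  # randint includes 0 and steps
    return lo + (hi - lo) * k / steps

xs = [uniform_inclusive() for _ in range(1000)]
```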

Title: Sample integers from truncated geometric distribution
Created: 2013-05-01T11:37:00.000 | Tags: python,math | Categories: Data Science and Machine Learning
Q_Id: 16,317,420 | A_Id: 16,319,018 | Q_Score: 2 | Users Score: -1 | Score: -0.066568 | AnswerCount: 3 | ViewCount: 2,007 | Accepted: false | Available Count: 1
Question: What is a good way to sample integers in the range {0,...,n-1} according to (a discrete version of) the exponential distribution? random.expovariate(lambd) returns a real number from 0 to positive infinity. Update. Changed title to make it more accurate.
Answer: The simple answer is: pick a random number from the geometric distribution and return it mod n, e.g. random.geometric(p) % n. Then P(x) = p(1-p)^x + p(1-p)^(x+n) + p(1-p)^(x+2n) + ... = p(1-p)^x * (1 + (1-p)^n + (1-p)^(2n) + ...). Note that the second factor is a constant for a given p and n; the first part is geometric.
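A sketch of the mod-n trick. The stdlib random module has no geometric sampler (numpy.random.geometric does), so this draws one by inverse CDF first; function names are made up:

```python
import math
import random

def geometric(p):
    """Geometric draw on {0, 1, 2, ...} via inverse CDF (stdlib only)."""
    u = random.random()
    return int(math.log1p(-u) / math.log(1.0 - p))

def truncated_geometric(p, n):
    # Folding mod n preserves the truncated-geometric shape,
    # as the P(x) derivation in the answer shows.
    return geometric(p) % n

samples = [truncated_geometric(0.3, 5) for _ in range(10_000)]
```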

Title: Quantifying randomness
Created: 2013-05-01T14:49:00.000 | Tags: python,random | Categories: Data Science and Machine Learning
Q_Id: 16,320,412 | A_Id: 16,320,580 | Q_Score: 10 | Users Score: 0 | Score: 0 | AnswerCount: 3 | ViewCount: 740 | Accepted: false | Available Count: 2
Question: I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be? I've tried compressing the output strings (via zlib) figuri...
Answer: You can use some mapping to convert strings to numeric form and then apply standard tests like Diehard and TestU01. Note that long sequences of samples are needed (typically a few MB of data will do).

Title: Quantifying randomness
Created: 2013-05-01T14:49:00.000 | Tags: python,random | Categories: Data Science and Machine Learning
Q_Id: 16,320,412 | A_Id: 16,328,721 | Q_Score: 10 | Users Score: 0 | Score: 0 | AnswerCount: 3 | ViewCount: 740 | Accepted: false | Available Count: 2
Question: I've come up with 2 methods to generate relatively short random strings- one is much faster and simpler and the other much slower but I think more random. Is there a not-super-complicated method or way to measure how random the data from each method might be? I've tried compressing the output strings (via zlib) figuri...
Answer: An outcome is considered random if it can't be predicted ahead of time with certainty. If it can be predicted with certainty it is considered deterministic. This is a binary categorization, outcomes either are deterministic or random, there aren't degrees of randomness. There are, however, degrees of predictability. ...

Title: How many features can scikit-learn handle?
Created: 2013-05-01T21:18:00.000 | Tags: python,numpy,machine-learning,scipy,scikit-learn | Categories: Data Science and Machine Learning
Q_Id: 16,326,699 | A_Id: 16,332,805 | Q_Score: 6 | Users Score: 1 | Score: 0.099668 | AnswerCount: 2 | ViewCount: 2,840 | Accepted: false | Available Count: 1
Question: I have a csv file of [66k, 56k] size (rows, columns). It's a sparse matrix. I know that numpy can handle a matrix of that size. I would like to know, based on everyone's experience, how many features scikit-learn algorithms can handle comfortably?
Answer: Some linear model (Regression, SGD, Bayes) will probably be your best bet if you need to train your model frequently. Although before you go running any models you could try the following: 1) Feature reduction. Are there features in your data that could easily be removed? For example if your data is text or ratings ba...

Title: Render a mayavi scene with a large pipeline faster
Created: 2013-05-03T17:13:00.000 | Tags: python,mayavi | Categories: Data Science and Machine Learning
Q_Id: 16,364,311 | A_Id: 17,346,232 | Q_Score: 7 | Users Score: 2 | Score: 1.2 | AnswerCount: 1 | ViewCount: 1,606 | Accepted: true | Available Count: 1
Question: I am using mayavi.mlab to display 3D data extracted from images. The data is as follows: 3D camera parameters as 3 lines in the x, y, x direction around the camera center, usually for about 20 cameras using mlab.plot3d(). 3D coloured points in space for about 4000 points using mlab.points3d(). For (1) I have a functi...
Answer: The general principle is that vtk objects have a lot of overhead, so for rendering performance you want to pack as many things into one object as possible. When you call mlab convenience functions like points3d it creates a new vtk object to handle that data. Thus iterating and creating thousands of single po...

Title: How do I integrate a multivariable function?
Created: 2013-05-05T06:30:00.000 | Tags: python,algorithm,math,integral | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,382,019 | A_Id: 16,382,307 | Q_Score: 0 | Users Score: 2 | Score: 1.2 | AnswerCount: 1 | ViewCount: 270 | Accepted: true | Available Count: 1
Question: I have a function f(x_1, x_2, ..., x_n) where n >= 1 that I would like to integrate. What algorithm should I use to provide a decently stable / accurate solution? I would like to program it in Python so any open source examples are more than welcome! (I realize that I should use a library but this is just a learning ex...
Answer: It depends on your context and the performance criteria. I assume that you are looking for a numerical approximation (as opposed to an algebraic integration). A Riemann sum is the standard 'educational' way of numerically calculating integrals, but several computationally more efficient algorithms exist.
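Since the asker wants a learning exercise, here is a midpoint-rule Riemann sum sketched for two variables (function and grid sizes are made up; the cost grows exponentially with dimension, which is why Monte Carlo methods take over for large n):

```python
def midpoint_integrate_2d(f, ax, bx, ay, by, nx=200, ny=200):
    """Midpoint Riemann sum of f(x, y) over [ax, bx] x [ay, by]."""
    hx = (bx - ax) / nx
    hy = (by - ay) / ny
    total = 0.0
    for i in range(nx):
        x = ax + (i + 0.5) * hx   # cell midpoint in x
        for j in range(ny):
            y = ay + (j + 0.5) * hy  # cell midpoint in y
            total += f(x, y)
    return total * hx * hy

# integral of x*y over the unit square is exactly 1/4
approx = midpoint_integrate_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0)
```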

Title: Delete the first three rows of a dataframe in pandas
Created: 2013-05-06T10:35:00.000 | Tags: python,pandas | Categories: Data Science and Machine Learning
Q_Id: 16,396,903 | A_Id: 61,941,548 | Q_Score: 249 | Users Score: 5 | Score: 0.124353 | AnswerCount: 8 | ViewCount: 412,922 | Accepted: false | Available Count: 2
Question: I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Answer: inp0 = pd.read_csv("bank_marketing_updated_v1.csv", skiprows=2), or if you want to do it in an existing dataframe simply use the following command

Title: Delete the first three rows of a dataframe in pandas
Created: 2013-05-06T10:35:00.000 | Tags: python,pandas | Categories: Data Science and Machine Learning
Q_Id: 16,396,903 | A_Id: 52,984,033 | Q_Score: 249 | Users Score: 9 | Score: 1 | AnswerCount: 8 | ViewCount: 412,922 | Accepted: false | Available Count: 2
Question: I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Answer: A simple way is to use tail(-n) to remove the first n rows: df = df.tail(-3)
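A sketch comparing the tail(-n) trick with the positional-slice idiom df.iloc[n:] (the df.ix indexer mentioned in the question has long been removed from pandas); the sample frame is made up:

```python
import pandas as pd

df = pd.DataFrame({"x": range(6)})  # rows 0..5, for illustration

dropped_a = df.iloc[3:]   # positional slice: skip the first 3 rows
dropped_b = df.tail(-3)   # same result: tail with a negative n
```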

Title: Data structure options for efficiently storing sets of integer pairs on disk?
Created: 2013-05-07T18:54:00.000 | Tags: python,c,data-structures,integer | Categories: Data Science and Machine Learning
Q_Id: 16,426,469 | A_Id: 16,444,230 | Q_Score: 6 | Users Score: 1 | Score: 0.066568 | AnswerCount: 3 | ViewCount: 704 | Accepted: false | Available Count: 2
Question: I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the spec...
Answer: How about using one hash table or B-tree per bucket? On-disk hashtables are standard. Maybe the BerkeleyDB libraries (available in stock Python) will work for you; but be advised that since they come with transactions they can be slow, and may require some tuning. There are a number of choices: gdbm, tdb that you s...

Title: Data structure options for efficiently storing sets of integer pairs on disk?
Created: 2013-05-07T18:54:00.000 | Tags: python,c,data-structures,integer | Categories: Data Science and Machine Learning
Q_Id: 16,426,469 | A_Id: 16,444,440 | Q_Score: 6 | Users Score: 1 | Score: 0.066568 | AnswerCount: 3 | ViewCount: 704 | Accepted: false | Available Count: 2
Question: I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the spec...
Answer: Why not just store a table containing stuff that was deleted since the last re-write? This table could be the same structure as your main bucket, maybe with a Bloom filter for quick membership checks. You can re-write the main bucket data without the deleted items either when you were going to re-write it anyway for so...

Title: openCV install Error using brew
Created: 2013-05-08T08:45:00.000 | Tags: python,macos,opencv | Categories: System Administration and DevOps, Data Science and Machine Learning
Q_Id: 16,436,260 | A_Id: 16,495,361 | Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 1 | ViewCount: 447 | Accepted: false | Available Count: 1
Question: I am trying to install opencv on my MacBook Pro, OS X 10.6.8 (Snow Leopard); Xcode version is 3.2.6, and the result of "which python" is Hong-Jun-Choiui-MacBook-Pro:~ teemo$ which python /Library/Frameworks/Python.framework/Versions/2.7/bin/python and I am suffering from this below.. Linking CXX shared library ../../li...
Answer: Try using MacPorts; it builds OpenCV including Python bindings without any issue. I have used this for OS X 10.8.

Title: Machine Learning in Python - Get the best possible feature-combination for a label
Created: 2013-05-08T13:30:00.000 | Tags: python,machine-learning,nltk | Categories: Data Science and Machine Learning
Q_Id: 16,442,055 | A_Id: 16,446,014 | Q_Score: 3 | Users Score: 0 | Score: 0 | AnswerCount: 3 | ViewCount: 879 | Accepted: false | Available Count: 1
Question: My Question is as follows: I know a little bit about ML in Python (using NLTK), and it works ok so far. I can get predictions given certain features. But I want to know, is there a way, to display the best features to achieve a label? I mean the direct opposite of what I've been doing so far (put in all circumstances, ...
Answer: You could compute the representativeness of each feature to separate the classes via feature weighting. The most common method for feature selection (and therefore feature weighting) in Text Classification is chi^2. This measure will tell you which features are better. Based on this information you can analyse the spec...

Title: Randomized stratified k-fold cross-validation in scikit-learn?
Created: 2013-05-08T19:46:00.000 | Tags: python,machine-learning,scikit-learn,cross-validation | Categories: Data Science and Machine Learning
Q_Id: 16,448,988 | A_Id: 16,950,633 | Q_Score: 6 | Users Score: -3 | Score: -0.148885 | AnswerCount: 4 | ViewCount: 9,847 | Accepted: false | Available Count: 1
Question: Is there any built-in way to get scikit-learn to perform shuffled stratified k-fold cross-validation? This is one of the most common CV methods, and I am surprised I couldn't find a built-in method to do this. I saw that cross_validation.KFold() has a shuffling flag, but it is not stratified. Unfortunately cross_valida...
Answer: As far as I know, this is actually implemented in scikit-learn. """ Stratified ShuffleSplit cross validation iterator Provides train/test indices to split data in train test sets. This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made...

Title: Generating numbers with Gaussian function in a range using python
Created: 2013-05-09T21:51:00.000 | Tags: python,gaussian | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,471,763 | A_Id: 16,471,982 | Q_Score: 10 | Users Score: 1 | Score: 0.033321 | AnswerCount: 6 | ViewCount: 36,597 | Accepted: false | Available Count: 1
Question: I want to use the gaussian function in python to generate some numbers between a specific range, giving the mean and variance. So let's say I have a range between 0 and 10, and I want my mean to be 3 and variance to be 4 (mean = 3, variance = 4); how can I do that?
Answer: If you have a small range of integers, you can create a list with a gaussian distribution of the numbers within that range and then make a random choice from it.
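A different but common approach for continuous values, sketched with the stdlib: rejection-sample random.gauss draws until one lands in the range (variance 4 means standard deviation 2). The function name is made up; note the truncation slightly shifts the effective mean:

```python
import random

def bounded_gauss(mu=3.0, var=4.0, lo=0.0, hi=10.0):
    """Redraw a normal sample until it falls inside [lo, hi]."""
    sd = var ** 0.5
    while True:
        x = random.gauss(mu, sd)
        if lo <= x <= hi:
            return x

xs = [bounded_gauss() for _ in range(1000)]
```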

Title: Residuals of Random Forest Regression (Python)
Created: 2013-05-11T22:51:00.000 | Tags: python,python-2.7,numpy,scipy,scikit-learn | Categories: Data Science and Machine Learning
Q_Id: 16,502,445 | A_Id: 16,524,872 | Q_Score: 4 | Users Score: 5 | Score: 1.2 | AnswerCount: 1 | ViewCount: 2,682 | Accepted: true | Available Count: 1
Question: When using RandomForestRegressor from Sklearn, how do you get the residuals of the regression? I would like to plot out these residuals to check the linearity.
Answer: There is no function for that, as we like to keep the interface very simple. You can just do y - rf.predict(X)

Title: Python / Scipy - implementing optimize.curve_fit's sigma into optimize.leastsq
Created: 2013-05-12T17:36:00.000 | Tags: python,scipy,curve-fitting,least-squares | Categories: Data Science and Machine Learning
Q_Id: 16,510,227 | A_Id: 16,521,114 | Q_Score: 6 | Users Score: 4 | Score: 0.26052 | AnswerCount: 3 | ViewCount: 4,123 | Accepted: false | Available Count: 1
Question: I am fitting data points using a logistic model. As I sometimes have data with a ydata error, I first used curve_fit and its sigma argument to include my individual standard deviations in the fit. Now I switched to leastsq, because I needed also some Goodness of Fit estimation that curve_fit could not provide. Everyth...
Answer: Assuming your data are in arrays x, y with yerr, and the model is f(p, x), just define the error function to be minimized as (y - f(p, x)) / yerr.

Title: Stochastic Gradient Boosting giving unpredictable results
Created: 2013-05-16T05:44:00.000 | Tags: python,machine-learning,scikit-learn,scikits | Categories: Data Science and Machine Learning
Q_Id: 16,579,775 | A_Id: 16,583,343 | Q_Score: 0 | Users Score: 5 | Score: 0.462117 | AnswerCount: 2 | ViewCount: 1,888 | Accepted: false | Available Count: 1
Question: I'm using the Scikit module for Python to implement Stochastic Gradient Boosting. My data set has 2700 instances and 1700 features (x) and contains binary data. My output vector is 'y', and contains 0 or 1 (binary classification). My code is, gb = GradientBoostingClassifier(n_estimators=1000,learn_rate=1,subsample=0.5...
Answer: First, a couple of remarks: the name of the algorithm is Gradient Boosting (Regression Trees or Machines) and is not directly related to Stochastic Gradient Descent; you should never evaluate the accuracy of a machine learning algorithm on your training data, otherwise you won't be able to detect the over-fitting of the...

Title: Pandas Convert 'NA' to NaN
Created: 2013-05-16T19:48:00.000 | Tags: python,pandas,bioinformatics | Categories: Data Science and Machine Learning
Q_Id: 16,596,188 | A_Id: 42,148,893 | Q_Score: 15 | Users Score: 5 | Score: 0.462117 | AnswerCount: 2 | ViewCount: 11,645 | Accepted: false | Available Count: 1
Question: I just picked up Pandas to do with some data analysis work in my biology research. Turns out one of the proteins I'm analyzing is called 'NA'. I have a matrix with pairwise 'HA, M1, M2, NA, NP...' on the column headers, and the same as "row headers" (for the biologists who might read this, I'm working with influenza). ...
Answer: Just ran into this issue--I specified a str converter for the column instead, so I could keep na elsewhere: pd.read_csv(... , converters={ "file name": str, "company name": str})

Title: Python 3.3 pandas, pip-3.3
Created: 2013-05-16T23:58:00.000 | Tags: python,pandas,pip | Categories: Data Science and Machine Learning
Q_Id: 16,599,357 | A_Id: 21,592,812 | Q_Score: 3 | Users Score: 0 | Score: 0 | AnswerCount: 2 | ViewCount: 2,260 | Accepted: false | Available Count: 1
Question: So, I'm trying to install pandas for Python 3.3 and have been having a really hard time- between Python 2.7 and Python 3.3 and other factors. Some pertinent information: I am running Mac OSX Lion 10.7.5. I have both Python 2.7 and Python 3.3 installed, but for my programming purposes only use 3.3. This is where I'm a...
Answer: Thanks, I just had the same issue with Angstrom Linux on the BeagleBone Black board and the easy_install downgrade solution solved it. One thing I did need to do, is after installing easy_install using opkg install python-setuptools I then had to go into the easy_install file (located in /usr/bin/easy_install) and ch...

Title: How do I convert a 2D numpy array into a 1D numpy array of 1D numpy arrays?
Created: 2013-05-17T03:42:00.000 | Tags: python,arrays,numpy,nested,vectorization | Categories: Data Science and Machine Learning
Q_Id: 16,601,049 | A_Id: 16,607,943 | Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 4 | ViewCount: 5,758 | Accepted: false | Available Count: 1
Question: In other words, each element of the outer array will be a row vector from the original 2D array.
Answer: I think it makes little sense to use numpy arrays to do that, just think you're missing out on all the advantages of numpy.

Title: Interpolation algorithm to correct a slight clock drift
Created: 2013-05-18T14:20:00.000 | Tags: python,scipy,numeric,numerical-methods | Categories: Data Science and Machine Learning
Q_Id: 16,625,298 | A_Id: 16,652,325 | Q_Score: 2 | Users Score: 1 | Score: 0.099668 | AnswerCount: 2 | ViewCount: 426 | Accepted: false | Available Count: 2
Question: I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the cl...
Answer: Before you can ask the programming question, it seems to me you need to investigate a more fundamental scientific one. Before you can start picking out particular equations to fit badfastclock to goodslowclock, you should investigate the nature of the drift. Let both clocks run a while, and look at their points togethe...

Title: Interpolation algorithm to correct a slight clock drift
Created: 2013-05-18T14:20:00.000 | Tags: python,scipy,numeric,numerical-methods | Categories: Data Science and Machine Learning
Q_Id: 16,625,298 | A_Id: 16,708,058 | Q_Score: 2 | Users Score: 0 | Score: 0 | AnswerCount: 2 | ViewCount: 426 | Accepted: false | Available Count: 2
Question: I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the cl...
Answer: Based on your updated question, if the data is smooth with time, just place all the samples in a time trace, and interpolate on the sparse grid (time).

Title: Can I link numpy with AMD's gpu accelerated blas library
Created: 2013-05-18T22:19:00.000 | Tags: python,numpy,opencl,gpgpu | Categories: Data Science and Machine Learning
Q_Id: 16,629,529 | A_Id: 18,478,963 | Q_Score: 2 | Users Score: 0 | Score: 0 | AnswerCount: 2 | ViewCount: 2,411 | Accepted: false | Available Count: 1
Question: I recognized numpy can link with blas, and I thought of why not using a gpu accelerated blas library. Did anyone use to do so?
Answer: If memory serves, pyCUDA at least, and probably also pyOpenCL, can work with NumPy.

Title: Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access?
Created: 2013-05-20T19:25:00.000 | Tags: python,arrays,2d,large-data | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,656,850 | A_Id: 16,657,039 | Q_Score: 0 | Users Score: 1 | Score: 0.099668 | AnswerCount: 2 | ViewCount: 390 | Accepted: false | Available Count: 1
Question: Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access? I want to write each element once and read many times. Thanks
Answer: It would help to know more about your data, and what kind of access you need to provide. How fast is "fast enough" for you? Just to be clear, "7M" means 7,000,000 right? As a quick answer without any of that information, I have had positive experiences working with redis and tokyo tyrant for fast read access to large a...

Title: Difference between plt.close() and plt.clf()
Created: 2013-05-21T03:47:00.000 | Tags: python,matplotlib | Categories: Python Basics and Environment, Data Science and Machine Learning
Q_Id: 16,661,790 | A_Id: 46,957,388 | Q_Score: 32 | Users Score: 9 | Score: 1 | AnswerCount: 4 | ViewCount: 49,996 | Accepted: false | Available Count: 3
Question: In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, ind...
Answer: I think it is worth mentioning that plt.close() releases the memory, thus is preferred when generating and saving many figures in one run. Using plt.clf() in such case will produce a warning after 20 plots (even if they are not going to be shown by plt.show()): More than 20 figures have been opened. Figures created th...
0
44,976,331
0
1
0
0
3
false
32
2013-05-21T03:47:00.000
2
4
0
Difference between plt.close() and plt.clf()
16,661,790
0.099668
python,matplotlib
plt.clf() clears the entire current figure with all its axes, but leaves the window open, so that it may be reused for other plots. plt.close() closes a window, which will be the current window if not specified otherwise.
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, ind...
0
1
49,996
0
16,661,815
0
1
0
0
3
true
32
2013-05-21T03:47:00.000
41
4
0
Difference between plt.close() and plt.clf()
16,661,790
1.2
python,matplotlib
plt.close() will close the figure window entirely, whereas plt.clf() will just clear the figure - you can still paint another plot onto it. It sounds like, for your needs, you should be preferring plt.clf(), or better yet keep a handle on the line objects themselves (they are returned in lists by plot calls) and use .set...
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, ind...
0
1
49,996
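A minimal sketch of the distinction drawn in the answers above, using the non-interactive Agg backend so no display is needed (the figure contents are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no window required
import matplotlib.pyplot as plt

# Create a figure and plot on it
plt.figure()
plt.plot([1, 2, 3])

plt.clf()                 # clears the content, but figure 1 still exists
print(plt.get_fignums())  # [1]

plt.close()               # closes the figure entirely
print(plt.get_fignums())  # []
```

In a loop that saves many figures, plt.close() is usually preferable since it releases the figure's memory.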
0
16,689,864
0
0
0
0
1
true
1
2013-05-22T10:41:00.000
5
1
0
Alternative to Matlab cell data-structure in Python
16,689,681
1.2
python,matlab,data-structures,cell
Have you considered a list of numpy.arrays?
I have a Matlab cell array, each of whose cells contains an N x M matrix. The value of M varies across cells. What would be an efficient way to represent this type of a structure in Python using numpy or any standard Python data-structure?
0
1
1,299
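The accepted suggestion of a list of numpy.arrays can be sketched as follows; the shapes below are made up for illustration, with N fixed at 3 and M varying from cell to cell:

```python
import numpy as np

# Analogue of a MATLAB cell array whose cells hold N x M matrices,
# with M varying per cell (N = 3 here)
cells = [np.zeros((3, 2)), np.ones((3, 5)), np.arange(12).reshape(3, 4)]

for i, mat in enumerate(cells):
    print(i, mat.shape)
```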
0
16,698,292
0
0
0
0
1
true
8
2013-05-22T16:49:00.000
7
1
0
Comparing computer vision libraries in python
16,697,391
1.2
python,opencv,image-processing,computer-vision,scikit-learn
I have worked mainly with OpenCV and also with scikit-image. I would say that OpenCV is more focused on computer vision (classification, feature detection and extraction, ...), but lately scikit-image has been improving rapidly. I found that some algorithms perform faster under OpenCV; however in most cases I find mu...
I want to decide about a Python computer vision library. I had used OpenCV in C++, and like it very much. However this time I need to develop my algorithm in Python. My short list has three libraries: 1- OpenCV (Python wrapper) 2- PIL (Python Image Processing Library) 3- scikit-image Would you please help me to compare...
0
1
2,421
0
16,710,763
0
0
0
0
2
false
0
2013-05-23T09:35:00.000
0
4
0
Breadth First Search or Depth First Search?
16,710,374
0
python,algorithm,graph
When you want to find the shortest path you should use BFS, not DFS, because BFS explores the closest nodes first, so when you reach your goal you know for sure that you used the shortest path and you can stop searching. Whereas DFS explores one branch at a time, so when you reach your goal you can't be sure that there...
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better? Thank...
0
1
1,736
0
16,710,738
0
0
0
0
2
false
0
2013-05-23T09:35:00.000
0
4
0
Breadth First Search or Depth First Search?
16,710,374
0
python,algorithm,graph
If there are no weights for the edges on the graph, a simple breadth-first search can be done: access nodes in the graph iteratively and check if any of the new nodes equals the destination node. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are things which you should be looking a...
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better? Thank...
0
1
1,736
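For an unweighted graph, the BFS recommendation in the answers above can be sketched with a plain dict-of-lists adjacency map (the graph below is hypothetical); for weighted edges, Dijkstra or Bellman-Ford would replace this:

```python
from collections import deque

def shortest_distance(graph, start, goal):
    """BFS: return the number of edges on a shortest path, or None if unreachable."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Hypothetical directed graph
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_distance(graph, "a", "d"))  # 2
```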
0
16,735,119
0
1
0
0
1
false
0
2013-05-24T09:41:00.000
0
3
0
What is a good way to merge non intersecting sets in a list to end up with denseley packed sets?
16,731,960
0
python,algorithm
I am not sure whether it will give an optimal solution, but would repeatedly merging the two largest non-overlapping sets not work?
I'm currently doing this by using a sort of a greedy algorithm by iterating over the sets from largest to smallest set. What would be a good algorithm to choose if i'm more concerned about finding the best solution rather than efficiency? Details: 1) Each set has a predefined range 2) My goal is to end up with a lot of...
0
1
124
0
18,938,559
0
0
0
0
1
false
2
2013-05-25T01:12:00.000
1
1
1
celery.chord gives IndexError: list index out of range error in celery version 3.0.19
16,745,487
0.197375
python,runtime-error,celery
This is an error that occurs when a chord header has no tasks in it. Celery tries to access the tasks in the header using self.tasks[0] which results in an index error since there are no tasks in the list.
Has anyone seen this error in celery (a distribute task worker in Python) before? Traceback (most recent call last): File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task R = retval = fun(*args, **kwargs) File "/home/mcapp/.virtualenv/lister/local/...
0
1
859
0
16,752,052
0
0
0
0
2
false
2
2013-05-25T17:13:00.000
0
2
0
Split a weighted graph into n graphs to minimize the sum of weights in each graph
16,751,995
0
python,algorithm
Remove the k-1 edges with the highest weights.
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
0
1
161
0
16,752,049
0
0
0
0
2
false
2
2013-05-25T17:13:00.000
1
2
0
Split a weighted graph into n graphs to minimize the sum of weights in each graph
16,751,995
0.099668
python,algorithm
What you are searching for is called weighted max-cut.
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
0
1
161
0
16,766,609
0
1
0
0
2
false
0
2013-05-27T05:05:00.000
0
3
0
Reading certain lines of a string
16,766,587
0
python,list,printing,lines
sed -n 200,300p, perhaps, for 200 to 300 inclusive; adjust the numbers by ±1 if exclusive or whatever?
Hi I am trying to read a csv file into a double list which is not the problem atm. What I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be wri...
0
1
56
0
16,766,726
0
1
0
0
2
false
0
2013-05-27T05:05:00.000
0
3
0
Reading certain lines of a string
16,766,587
0
python,list,printing,lines
If it is a specific column ranging between 200 and 300, use the filter() function: new_array = filter(lambda x: 200 <= x['column'] <= 300, sl)
Hi I am trying to read a csv file into a double list which is not the problem atm. What I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be wri...
0
1
56
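Neither sed nor filter() is actually needed for the stated goal of printing sL[200] through sL[300]; a plain slice does it (sL below is a stand-in for the list read from the csv file):

```python
# Stand-in for the list read from the csv file
sL = list(range(1000))

# Print elements 200..300 inclusive (the slice end is exclusive, hence 301)
for value in sL[200:301]:
    print(value)
```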
0
16,823,062
0
1
0
0
1
true
1
2013-05-29T18:04:00.000
1
2
0
Pylab after upgrading
16,820,903
1.2
python,matplotlib,python-3.3
I suspect you need to install python3-matplotlib, python3-numpy, etc. python-matplotlib is the python2 version.
Today I upgraded to Xubuntu 13.04 which comes with Python 3.3. Before that, I was working with Pyton 3.2, which was working perfectly fine. When running my script under Python 3.3, I get an ImportError: No module named 'pylab' in import pylab. Running in Python 3.2, which I reinstalled, throws ImportError: cannot im...
0
1
312
0
60,213,042
0
1
0
0
2
false
133
2013-05-31T08:23:00.000
-1
10
0
How do I convert strings in a Pandas data frame to a 'date' data type?
16,852,911
-0.019997
python,date,pandas
Try to convert one of the rows into a timestamp using the pd.to_datetime function and then use .map to apply the formula to the entire column.
I have a Pandas data frame, one of the column contains date strings in the format YYYY-MM-DD For e.g. '2013-10-28' At the moment the dtype of the column is object. How do I convert the column values to Pandas date format?
0
1
297,830
0
33,577,649
0
1
0
0
2
false
133
2013-05-31T08:23:00.000
25
10
0
How do I convert strings in a Pandas data frame to a 'date' data type?
16,852,911
1
python,date,pandas
Now you can do df['column'].dt.date. Note that for datetime objects, if you don't see the hour when they're all 00:00:00, that's not pandas - that's the IPython notebook trying to make things look pretty.
I have a Pandas data frame, one of the column contains date strings in the format YYYY-MM-DD For e.g. '2013-10-28' At the moment the dtype of the column is object. How do I convert the column values to Pandas date format?
0
1
297,830
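A minimal sketch of the conversion described in the answers above, assuming pandas is installed and the column is named 'date':

```python
import pandas as pd

df = pd.DataFrame({"date": ["2013-10-28", "2013-10-29"]})
df["date"] = pd.to_datetime(df["date"])  # dtype becomes datetime64[ns]
print(df["date"].dtype)

# If plain datetime.date objects are wanted instead:
dates = df["date"].dt.date
print(dates.iloc[0])  # 2013-10-28
```

pd.to_datetime parses the whole column vectorised, which is typically much faster than mapping a parser row by row.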
0
16,877,799
0
1
0
0
1
true
1
2013-06-01T21:29:00.000
2
1
0
Scikit-Learn n_jobs Multiprocessing Locked To One Core
16,877,448
1.2
python,scikit-learn
Found it, there's a note on the svm page that if you enable verbose settings multiprocessing may break. Disabling verbose fixed it
Trying to use gridsearch CV with multiple jobs via the n_jobs argument and I can see using htop that they're all launched and running but they all get stuck/assigned on the same core using 25% each (I'm using a 4 core machine). I'm on Ubuntu 12.10 and I'm running the latest master pulled from github. Anyone know how to...
0
1
391
0
16,893,364
0
0
0
0
1
true
0
2013-06-03T08:54:00.000
2
1
0
How to save 32/64 bit grayscale floats to TIFF with matplotlib?
16,893,102
1.2
python,matplotlib,tiff
Using matplotlib to export to TIFF will use PIL anyway. As far as I know, matplotlib has native support only for PNG, and uses PIL to convert to other file formats. So when you are using matplotlib to export to TIFF, you can use PIL immediately.
I'm trying to save some arrays as TIFF with matplotlib, but I'm getting 24 bit RGB files instead with plt.imsave(). Can I change that without resorting to the PIL? It's quite important for me to keep everything in pure matplotlib.
0
1
792
0
69,644,920
0
1
0
0
2
false
22
2013-06-04T05:00:00.000
2
4
0
Delete a group after pandas groupby
16,910,114
0.099668
python,pandas
It is simple: you need to use the filter function and a lambda expression: df_filtered = df.groupby('name').filter(lambda x: (x.name == 'cond1' or ... (other conditions))). Take care that if you want to use more than one condition you need to put them in brackets (), and you will get back a DataFrame, not a GroupBy object.
Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name.
0
1
27,093
0
58,868,271
0
1
0
0
2
false
22
2013-06-04T05:00:00.000
0
4
0
Delete a group after pandas groupby
16,910,114
0
python,pandas
Should be easy: df.drop(index='group_name',inplace=True)
Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name.
0
1
27,093
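A sketch of dropping a group named 'b' from a made-up frame, either by filtering rows before grouping or via GroupBy.filter as in the answers above (the column is named 'grp' here to avoid clashing with the DataFrame attribute namespace):

```python
import pandas as pd

df = pd.DataFrame({"grp": ["a", "a", "b", "c"], "x": [1, 2, 3, 4]})

# Simplest: drop the group's rows before grouping
kept = df[df["grp"] != "b"].groupby("grp").sum()
print(kept.index.tolist())  # ['a', 'c']

# Equivalent with GroupBy.filter, keeping every group except 'b'
kept2 = df.groupby("grp").filter(lambda g: (g["grp"] != "b").all())
print(sorted(kept2["grp"].unique()))  # ['a', 'c']
```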
0
16,927,877
0
0
0
0
1
false
0
2013-06-04T19:19:00.000
0
2
0
Sorting using Map-Reduce - Possible approach
16,925,802
0
python,sorting,hadoop,bigdata,hadoop-streaming
I'll assume that you are looking for a total sort order without a secondary sort for all your rows. I should also mention that 'better' is never a good question since there is typically a trade-off between time and space and in Hadoop we tend to think in terms of space rather than time unless you use products that are ...
I have a large dataset with 500 million rows and 58 variables. I need to sort the dataset using one of the 59th variable which is calculated using the other 58 variables. The variable happens to be a floating point number with four places after decimal. There are two possible approaches: The normal merge sort While ca...
0
1
889
0
16,969,259
0
1
0
0
2
false
0
2013-06-06T18:12:00.000
1
2
0
I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?
16,969,190
0.099668
python,matrix
Read the input till Ctrl+d, split by newline symbols first and then split the results by spaces.
Entering arbitrary sized matrices to manipulate them using different operations.
0
1
51
0
16,971,642
0
1
0
0
2
false
0
2013-06-06T18:12:00.000
1
2
0
I am working on a project where you have to input a matrix that has an arbitrary size. I am using python. Suggestions?
16,969,190
0.099668
python,matrix
Think about who is using this programme, and how, then develop an interface which meets those needs.
Entering arbitrary sized matrices to manipulate them using different operations.
0
1
51
0
16,982,178
0
1
0
0
1
true
0
2013-06-07T10:15:00.000
2
1
0
Scikit-Learn windows Installation error : python 2.7 required which was not found in the registry
16,981,708
1.2
python,python-2.7,scikit-learn,enthought,epd-python
Enthought Canopy 1.0.1 does not register the user's Python installation as the main one for the system. This has been fixed and will work in the upcoming release.
I have installed Enthought Canopy 32 - bit which comes with python 2.7 32 bit . And I downloaded windows installer scikit-learn-0.13.1.win32-py2.7 .. My machine is 64 bit. I could'nt find 64 bit scikit learn installer for intel processor, only AMD is available. Python 2.7 required which was not found in the registry is...
0
1
494
0
17,032,461
0
0
0
0
1
false
0
2013-06-08T16:10:00.000
0
1
0
New sort criteria "random" for Plone 4 old style collections
17,001,402
0
python,collections,plone,zope
There is no random sort criteria. Any randomness will need to be done in custom application code.
is there any best practice for adding a "random" sort criteria to the old style collection in Plone? My versions: Plone 4.3 (4305) CMF 2.2.7 Zope 2.13.19
0
1
109
0
17,070,022
0
0
0
0
1
false
6
2013-06-11T20:48:00.000
3
1
0
How can i distribute processing of minibatch kmeans (scikit-learn)?
17,053,548
0.53705
python,machine-learning,multiprocessing,scikit-learn
I don't think this is possible. You could implement something with OpenMP inside the minibatch processing. I'm not aware of any parallel minibatch k-means procedures. Parallelizing stochastic gradient descent procedures is somewhat hairy. Btw, the n_jobs parameter in KMeans only distributes the different random initializ...
In Scikit-learn , K-Means have n_jobs but MiniBatch K-Means is lacking it. MBK is faster than KMeans but at large sample sets we would like it distribute the processing across multiprocessing (or other parallel processing libraries). Is MKB's Partial-fit the answer?
0
1
1,981
0
17,054,932
0
1
0
0
3
true
63
2013-06-11T20:56:00.000
40
5
0
How do you stop numpy from multithreading?
17,053,671
1.2
python,multithreading,numpy
Set the MKL_NUM_THREADS environment variable to 1. As you might have guessed, this environment variable controls the behavior of the Math Kernel Library which is included as part of Enthought's numpy build. I just do this in my startup file, .bash_profile, with export MKL_NUM_THREADS=1. You should also be able to do it...
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which woul...
0
1
25,964
0
21,673,595
0
1
0
0
3
false
63
2013-06-11T20:56:00.000
12
5
0
How do you stop numpy from multithreading?
17,053,671
1
python,multithreading,numpy
In more recent versions of numpy I have found it necessary to also set NUMEXPR_NUM_THREADS=1. In my hands, this is sufficient without setting MKL_NUM_THREADS=1, but under some circumstances you may need to set both.
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which woul...
0
1
25,964
0
48,665,619
0
1
0
0
3
false
63
2013-06-11T20:56:00.000
52
5
0
How do you stop numpy from multithreading?
17,053,671
1
python,multithreading,numpy
Hopefully this fixes all scenarios and systems you may be on. Use numpy.__config__.show() to see if you are using OpenBLAS or MKL. From this point on there are a few ways you can do this. 2.1. The terminal route: export OPENBLAS_NUM_THREADS=1 or export MKL_NUM_THREADS=1. 2.2 (This is my preferred way) In your pytho...
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which woul...
0
1
25,964
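All of the thread-count variables named in the answers above must be set before numpy is imported; a sketch (which variable actually takes effect depends on the BLAS build numpy was linked against):

```python
import os

# Must happen before `import numpy` for the setting to take effect
for var in ("MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = "1"

import numpy as np  # now limited to one BLAS thread (build permitting)
print(os.environ["MKL_NUM_THREADS"])  # 1
```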
0
21,414,260
0
0
0
0
1
false
7
2013-06-12T02:33:00.000
0
4
0
Parsing a txt file into a dictionary to write to csv file
17,056,818
0
python,csv,file-io
I know this is an older question so maybe you have long since solved it but I think you are approaching this in a more complex way than is needed. I figure I'll respond in case someone else has the same problem and finds this. If you are doing things this way because you do not have a software key, it might help to kn...
Eprime outputs a .txt file like this: *** Header Start *** VersionPersist: 1 LevelName: Session Subject: 7 Session: 1 RandomSeed: -1983293234 Group: 1 Display.RefreshRate: 59.654 *** Header End *** Level: 2 *** LogFrame Start *** MeansEffectBias: 7 Procedure: trialProc itemID: 7 bias1Answer: 1 ...
0
1
1,142
0
23,725,918
0
0
1
0
1
false
8
2013-06-12T21:09:00.000
4
7
0
Embed python into fortran 90
17,075,418
0.113791
python,fortran,embed
There is a very easy way to do this using f2py. Write your python method and add it as an input to your Fortran subroutine. Declare it in both the cf2py hook and the type declaration as EXTERNAL and also as its return value type, e.g. REAL*8. Your Fortran code will then have a pointer to the address where the python me...
I was looking at the option of embedding python into fortran90 to add python functionality to my existing fortran90 code. I know that it can be done the other way around by extending python with fortran90 using the f2py from numpy. But, i want to keep my super optimized main loop in fortran and add python to do some ad...
0
1
9,062
0
63,390,537
0
0
0
0
1
false
415
2013-06-13T23:05:00.000
2
13
0
How to reversibly store and load a Pandas dataframe to/from disk
17,098,654
0.03076
python,pandas,dataframe
Another quite fresh test with to_pickle(). I have 25 .csv files in total to process and the final dataframe consists of roughly 2M items. (Note: Besides loading the .csv files, I also manipulate some data and extend the data frame by new columns.) Going through all 25 .csv files and creating the dataframe takes around 14...
Right now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?
0
1
432,843
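The to_pickle() round trip mentioned above can be sketched like this (the temporary path is chosen for illustration; any writable path works):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# Save once; later runs reload the pickle instead of re-parsing the CSV
path = os.path.join(tempfile.mkdtemp(), "frame.pkl")
df.to_pickle(path)
df2 = pd.read_pickle(path)

print(df.equals(df2))  # True
```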
0
17,101,084
0
1
1
0
1
false
3
2013-06-14T01:39:00.000
3
1
0
PyPy and efficient arrays
17,099,850
0.53705
python,arrays,numpy,pypy
array.array is a memory efficient array. It packs bytes/words etc together, so there is only a few bytes of extra overhead for the entire array. The one place where numpy can use less memory is when you have a sparse array (and are using one of the sparse array implementations) If you are not using sparse arrays, you s...
My project currently uses NumPy, only for memory-efficient arrays (of bool_, uint8, uint16, uint32). I'd like to get it running on PyPy which doesn't support NumPy. (failed to install it, at any rate) So I'm wondering: Is there any other memory-efficient way to store arrays of numbers in Python? Anything that is suppor...
0
1
919
0
17,117,416
0
0
0
0
1
false
0
2013-06-14T16:55:00.000
0
2
0
Finding log-likelihood in a restricted boltzmann machine
17,113,613
0
python,machine-learning,artificial-intelligence,neural-network
Assume you have v visible units, and h hidden units, and v < h. The key idea is that once you've fixed all the values for each visible unit, the hidden units are independent. So you loop through all 2^v subsets of visible unit activations. Then computing the likelihood for the RBM with this particular activated visi...
I have been researching RBMs for a couple months, using Python along the way, and have read all your papers. I am having a problem, and I thought, what the hey? Why not go to the source? I thought I would at least take the chance you may have time to reply. My question is regarding the Log-Likelihood in a Restricted Bo...
0
1
1,642
0
17,133,162
0
0
0
0
1
false
2
2013-06-15T15:35:00.000
4
1
0
SVM Multiclass Classification using Scikit Learn - Code not completing
17,125,247
0.664037
python,python-2.7,machine-learning,svm,scikit-learn
First, for text data you don't need a non linear kernel, so you should use an efficient linear SVM solver such as LinearSVC or PassiveAggressiveClassifier instead. The SMO algorithm of SVC / libsvm is not scalable: the complexity is more than quadratic, which in practice often makes it useless for datasets larger than 50...
I have a text data labelled into 3 classes and class 1 has 1% data, class 2 - 69% and class 3 - 30%. Total data size is 10000. I am using 10-fold cross validation. For classification, SVM of scikit learn python library is used with class_weight=auto. But the code for 1 step of 10-fold CV has been running for 2 hrs and ...
0
1
2,914
0
52,143,806
0
0
0
0
1
false
2
2013-06-15T15:35:00.000
1
2
0
Get first element of Pandas Series of string
17,125,248
0.099668
python,string,pandas,series
Get the Series head(), then access the first value: df1['tweet'].head(1).item(). Or use the Series tolist() method, then slice the 0th element: df.height.tolist() gives [94, 170], so df.height.tolist()[0] gives 94. (Note that Python indexing is 0-based, but head() is 1-based.)
I think I have a relatively simply question but am not able to locate an appropriate answer to solve the coding problem. I have a pandas column of string: df1['tweet'].head(1) 0 besides food, Name: tweet I need to extract the text and push it into a Python str object, of this form...
0
1
8,156
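Positional access with .iloc is another direct route to the first value as a plain str; a sketch with a made-up frame:

```python
import pandas as pd

df1 = pd.DataFrame({"tweet": ["besides food", "another tweet"]})

# Positional access: works whatever the index labels are
first = df1["tweet"].iloc[0]
print(first)                 # besides food
print(type(first).__name__)  # str
```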
0
17,141,432
0
0
0
0
1
false
0
2013-06-17T03:36:00.000
0
1
0
Unable to export pandas dataframe into excel file
17,140,080
0
python-2.7,pandas,xls
The problem you are facing is that your excel has a character that cannot be decoded to unicode. It was probably working before but maybe you edited this xls file somehow in Excel/Libre. You just need to find this character and either get rid of it or replace it with the one that is acceptable.
I am trying to export dataframe to .xls file using to_excel() method. But while execution it was throwing an error: "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 892: ordinal not in range(128)". Just few moments back it was working fine. The code I used is: :csv2.to_excel("C:\\Use...
0
1
474
0
17,154,439
0
0
0
0
1
true
1
2013-06-17T18:11:00.000
2
1
0
Pyplot polar scatter plot color for sign
17,154,006
1.2
python,colors,matplotlib,scatter
I'm not sure if this is the "proper" way to do this, but you could programmatically split your data into two subsets: one containing the positive values and the second containing the negative values. Then you can call the plot function twice, specifying the color you want for each subset. It's not an elegant solution,...
I have a pyplot polar scatter plot with signed values. Pyplot does the "right" thing and creates only a positive axis, then reflects negative values to look as if they are a positive value 180 degrees away. But, by default, pyplot plots all points using the same color. So positive and negative values are indistinguis...
0
1
785
0
20,356,186
0
1
0
0
1
false
3
2013-06-17T18:34:00.000
0
3
0
How can scipy.weave.inline be used in a MPI-enabled application on a cluster?
17,154,381
0
python,scipy,cluster-computing,mpi
One quick workaround is to use a local directory on each node (e.g. /tmp as Wesley said), but use one MPI task per node, if you have the capacity.
If scipy.weave.inline is called inside a massive parallel MPI-enabled application that is run on a cluster with a home-directory that is common to all nodes, every instance accesses the same catalog for compiled code: $HOME/.pythonxx_compiled. This is bad for obvious reasons and leads to many error messages. How can th...
0
1
282
0
17,181,542
0
0
0
0
1
false
2
2013-06-18T23:01:00.000
2
1
0
change detection on video frames
17,180,409
0.379949
python,opencv,image-processing,frame,python-imaging-library
For effects like zooming in and out, optical flow seems the best choice. Search for research papers on "Shot Detection" for other possible approaches. As for the techniques you mention, did you apply some form of noise reduction before using them?
I have a sequence of actions taking place on a video like, say "zooming in and zooming out" a webpage. I want to catch the frames that had a visual change from a some previous frame and so on. Basically, want to catch the visual difference happening in the video. I have tried using feature detection using SURF. It just...
0
1
371
0
17,212,232
0
0
0
0
1
false
1
2013-06-20T09:13:00.000
0
1
0
to measure Contact Angle between 2 edges in an image
17,209,762
0
opencv,python-2.7
Simplify edges A and B into line equations (using only the last few pixels). Get the line equations of the two lines (form y = mx + b). Get the angle orientations of the two lines θ=atan|1/m|. Subtract the two angles from each other. Make sure to handle the special case of infinite slope, and also do some simple math to get F...
i need to find out contact angle between 2 edges in an image using open cv and python so can anybody suggest me how to find it? if not code please let me know algorithm for the same.
0
1
531
0
18,653,827
0
0
0
0
1
true
1
2013-06-20T15:44:00.000
0
1
0
finding corresponding pixels before and after scipy.ndimage.interpolate.rotate
17,218,051
1.2
python,indexing,scipy,rotatetransform,correspondence
After some time of debugging, I realized that depending on the angle - typically under and over n*45 degrees - scipy adds a row and a column to the output image. A simple test on the angle, adding one to the indices where needed, solved my problem. I hope this can help future readers of this topic.
I hope this hasn't already been answered but I haven't found this anywhere. My problem is quite simple : I'm trying to compute an oblic profile of an image using scipy. One great way to do it is to : locate the segment along which I want my profile by giving the beginning and the end of my segment, extract the minim...
0
1
131
0
17,220,530
0
0
0
0
2
false
0
2013-06-20T16:51:00.000
0
2
0
MATLAB to web app
17,219,344
0
python,django,matlab,web-applications,octave
You could always just host the MATLAB code and sample .mat on a website for people to download and play with on their own machines if they have a MATLAB license. If you are looking at having some sort of embedded app on your website you are going to need to rewrite your code in another language. The project sounds d...
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What ...
1
1
934
0
17,224,492
0
0
0
0
2
false
0
2013-06-20T16:51:00.000
1
2
0
MATLAB to web app
17,219,344
0.099668
python,django,matlab,web-applications,octave
A cheap and somewhat easy way (with limited functionality) would be: Install MATLAB on your server, or use the MATLAB Compiler to create a stand alone executable (not sure if that comes with your version of MATLAB or not). If you don't have the compiler and can't install MATLAB on your server, you could always go to a ...
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What ...
1
1
934
0
17,241,104
0
0
0
0
1
false
289
2013-06-21T17:25:00.000
75
8
0
How do I convert a pandas Series or index to a Numpy array?
17,241,004
1
python,pandas
You can use df.index to access the index object and then get the values in a list using df.index.tolist(). Similarly, you can use df['col'].tolist() for Series.
Do you know how to get the index or column of a DataFrame as a NumPy array or python list?
0
1
558,431
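A sketch of both conversions on a small made-up frame; Series.to_numpy() is the modern spelling (tolist() also works, as in the answer above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col": [1, 2, 3]}, index=["a", "b", "c"])

idx_list = df.index.tolist()      # index as a python list
col_array = df["col"].to_numpy()  # column as a numpy array

print(idx_list)   # ['a', 'b', 'c']
print(col_array)  # [1 2 3]
```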
0
17,257,084
0
0
0
0
1
true
1
2013-06-23T01:56:00.000
2
1
0
Inverse of a Matrix in Python
17,257,056
1.2
python,numpy,matrix-inverse
It may very well have to do with the smallness of the values in the matrix. Some matrices that are not, in fact, mathematically singular (with a zero determinant) are totally singular from a practical point of view, in that the math library one is using cannot process them properly. Numerical analysis is tricky, as you...
While trying to compute inverse of a matrix in python using numpy.linalg.inv(matrix), I get singular matrix error. Why does it happen? Has it anything to do with the smallness of the values in the matrix. The numbers in my matrix are probabilities and add up to 1.
0
1
7,873
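A sketch of handling the exactly singular case: np.linalg.inv raises LinAlgError, and np.linalg.pinv offers a pseudo-inverse fallback. The matrix below is a made-up probability-style example whose rows sum to 1 yet which is singular:

```python
import numpy as np

m = np.array([[0.5, 0.5],
              [0.5, 0.5]])  # rows sum to 1, yet the matrix is singular

try:
    inv = np.linalg.inv(m)
except np.linalg.LinAlgError:
    # Exactly singular: fall back to the Moore-Penrose pseudo-inverse
    inv = np.linalg.pinv(m)

print(inv)
```

For nearly singular matrices that do not raise, checking np.linalg.cond(m) before trusting the result of inv is a common guard.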
0
17,270,654
0
1
0
0
2
false
2
2013-06-24T07:39:00.000
1
2
0
how do you distinguish numpy arrays from Python's built-in objects
17,270,293
0.099668
python,numpy,naming-conventions
numpy arrays and lists should occupy similar syntactic roles in your code and as such I wouldn't try to distinguish between them by naming conventions. Since everything in python is an object the usual naming conventions are there not to help distinguish type so much as usage. Data, whether represented in a list or a n...
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any nami...
0
1
415
0
17,270,547
0
1
0
0
2
false
2
2013-06-24T07:39:00.000
2
2
0
how do you distinguish numpy arrays from Python's built-in objects
17,270,293
0.197375
python,numpy,naming-conventions
You may use a prefix np_ for numpy arrays, thus distinguishing them from other variables.
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any nami...
0
1
415
0
17,300,620
0
1
0
0
1
false
7
2013-06-25T14:43:00.000
1
4
0
Python Sort On The Fly
17,300,419
0.049958
python,algorithm,sorting
Have a list of 20 tuples initialised with less than the minimum result of the calculation and two indices of -1. On calculating a result, append it to the results list with the indices of the pair that produced it, sort on the value only, and trim the list to length 20. Should be reasonably efficient as you only e...
I am thinking about a problem I haven't encountered before and I'm trying to determine the most efficient algorithm to use. I am iterating over two lists, using each pair of elements to calculate a value that I wish to sort on. My end goal is to obtain the top twenty results. I could store the results in a third list, ...
0
1
911
0
17,347,945
0
0
0
0
1
true
44
2013-06-26T09:07:00.000
84
4
0
How can I check if a Pandas dataframe's index is sorted
17,315,881
1.2
python,pandas
How about: df.index.is_monotonic
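A short sketch, with one caveat: `Index.is_monotonic` was deprecated and later removed in pandas 2.0; the equivalent property `is_monotonic_increasing` exists in both old and new versions.

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30]}, index=[1, 2, 3])

# `is_monotonic` was removed in pandas 2.0; this long-standing
# replacement property works across versions and does not re-sort:
sorted_ascending = df.index.is_monotonic_increasing  # True for [1, 2, 3]
```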
I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again. e.g. I can test an index to see if it is unique by index.is_unique() is there a similar way for testing sorted?
0
1
16,454
0
17,370,686
0
0
0
0
1
true
2
2013-06-27T21:46:00.000
0
1
0
Are pandas Panels as efficient as multi-indexed DataFrames?
17,353,773
1.2
python,pandas,data-analysis
They have a similar storage mechanism and only really differ in the indexing scheme, so performance-wise they should be similar. There is more support (code-wise) for multi-level DataFrames, as they are more often used. In addition, Panels have different slicing semantics, so dtype guarantees are different.
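For context, a sketch of the multi-indexed equivalent: Panels were removed entirely in pandas 0.25, and a MultiIndex DataFrame now covers the same (item, major, minor) layout. The labels below are hypothetical.

```python
import numpy as np
import pandas as pd

# A 2x3x2 Panel-like structure as a MultiIndex DataFrame:
idx = pd.MultiIndex.from_product(
    [["item1", "item2"], ["r1", "r2", "r3"]], names=["item", "row"])
df = pd.DataFrame(np.arange(12).reshape(6, 2),
                  index=idx, columns=["c1", "c2"])

# The slice a Panel would have called an "item":
panel_like = df.loc["item1"]
```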
I am wondering whether there is any computational or storage disadvantage to using Panels instead of multi-indexed DataFrames in pandas. Or are they the same behind the curtain?
0
1
169
0
17,371,090
0
1
0
0
1
false
4
2013-06-28T18:08:00.000
1
2
0
numpy array memory allocation
17,371,059
0.099668
python,numpy
Numpy in general is more efficient if you pre-allocate the size. If you know you're going to be populating an MxN matrix, create it first and then populate it, as opposed to using appends, for example. While the list does have to be created, a lot of the improvement in efficiency comes from acting on that structure. Reading/...
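A small sketch of the pre-allocation pattern described above; the per-row computation is a made-up placeholder.

```python
import numpy as np

# Preallocate the full MxN block once, then fill it in place --
# cheaper than growing a structure with repeated appends.
m, n = 3, 4
arr = np.zeros((m, n))
for i in range(m):
    arr[i, :] = np.arange(n) * i   # hypothetical per-row computation

# Versus building from a nested Python list (a temporary list of lists
# is constructed first, then copied into the array):
arr2 = np.array([[j * i for j in range(n)] for i in range(m)])
```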
From what I've read about Numpy arrays, they're more memory efficient than standard Python lists. What confuses me is that when you create a numpy array, you have to pass in a python list. I assume this python list gets deconstructed, but to me, it seems like it defeats the purpose of having a memory efficient data str...
0
1
4,676
0
19,917,484
0
0
0
0
1
false
0
2013-06-29T05:07:00.000
1
1
0
What IS a .fits file, as in, what is a .fits array?
17,376,904
0.197375
python,arrays,astronomy,fits
A FITS file consists of header-data units. A header-data unit contains an ASCII-type header with keyword-value-comment triples plus either binary FITS tables or (hyperdimensional) image cubes. Each entry in a binary FITS table may itself contain hyperdimensional image cubes. An array is some slice through so...
I'm basically trying to plot some images based on a given set of parameters of a .fits file. However, this made me curious: what IS a .fits array? When I type in img[2400,3456] or some random values in the array, I get some output. I guess my question is more conceptual than code-based, but, it boils down to this: wha...
0
1
119
0
17,416,531
0
0
0
0
1
true
2
2013-07-02T02:21:00.000
2
1
0
Multiplication of Multidimensional matrices (arrays) in Python
17,416,448
1.2
python,numpy,linear-algebra,multidimensional-array
Let's say you're trying to use a Markov chain to model english sentence syntax. Your transition matrix will give you the probability of going from one part of speech to another part of speech. Now let's suppose that we're using a 3rd-order Markov model. This would give use the probability of going from state 123 to ...
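A sketch of the flattening trick the answer describes: index the transition-matrix rows by the tuple of previous states, so a higher-order chain stays a 2-D row-stochastic matrix instead of a hyperdimensional array. The probabilities here are made up.

```python
import numpy as np

# A first-order chain over 3 states: rows are "from", columns are "to",
# and each row sums to 1 (hypothetical numbers).
T = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.25, 0.25]])

# For a 3rd-order chain over s states, enumerate the (s**3) possible
# state triples as row indices, giving a (s**3, s) matrix rather than
# a 4-dimensional array.
p0 = np.array([1.0, 0.0, 0.0])   # start in state 0
p2 = p0 @ T @ T                  # distribution after two steps
```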
First of all, I am aware that matrix and array are two different data types in NumPy. But I put both in the title to make it a general question. If you are editing this question, please feel free to remove one. Ok, here is my question, Here is an edit to the original question. Consider a Markov Chain with a 2 dimension... This would give us the probability of going from state 123 to ...
0
1
846
0
17,456,347
0
1
0
0
1
false
0
2013-07-03T19:09:00.000
1
4
0
Python Matching License Plates
17,456,233
0.049958
python,comparison
What you're asking is about a fuzzy search, from what it sounds like. Instead of checking string equality, you can check if the two string being compared have a levenshtein distance of 1 or less. Levenshtein distance is basically a fancy way of saying how many insertions, deletions or changes will it take to get from...
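A minimal sketch of the Levenshtein comparison described above, using the classic dynamic-programming recurrence; the tolerance of 1 matches the "distance of 1 or less" suggestion.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def plates_match(p1, p2, tolerance=1):
    """Treat two plates as the same car if at most `tolerance` edits apart."""
    return levenshtein(p1, p2) <= tolerance

# One misread character still matches:
# plates_match("ABC123", "ABC128") -> True
```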
I am working on a traffic study and I have the following problem: I have a CSV file that contains time-stamps and license plate numbers of cars for a location and another CSV file that contains the same thing. I am trying to find matching license plates between the two files and then find the time difference between th...
0
1
1,137