Column                              Dtype          Min / Max (or string-length range)
GUI and Desktop Applications        int64          0 / 1
A_Id                                int64          5.3k / 72.5M
Networking and APIs                 int64          0 / 1
Python Basics and Environment       int64          0 / 1
Other                               int64          0 / 1
Database and SQL                    int64          0 / 1
Available Count                     int64          1 / 13
is_accepted                         bool           2 classes
Q_Score                             int64          0 / 1.72k
CreationDate                        stringlengths  23 / 23
Users Score                         int64          -11 / 327
AnswerCount                         int64          1 / 31
System Administration and DevOps    int64          0 / 1
Title                               stringlengths  15 / 149
Q_Id                                int64          5.14k / 60M
Score                               float64        -1 / 1.2
Tags                                stringlengths  6 / 90
Answer                              stringlengths  18 / 5.54k
Question                            stringlengths  49 / 9.42k
Web Development                     int64          0 / 1
Data Science and Machine Learning   int64          1 / 1
ViewCount                           int64          7 / 3.27M
0
63,021,426
0
0
0
0
2
false
275
2014-04-06T19:24:00.000
43
16
0
Filtering Pandas DataFrames on dates
22,898,824
1
python,datetime,pandas,filtering,dataframe
If you have already converted the string to a date format using pd.to_datetime you can just use: df = df[(df['Date'] > "2018-01-01") & (df['Date'] < "2019-07-01")]
I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months. What is the best way to achieve this?
0
1
624,303
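The answer above can be sketched end to end; the frame, the 'Date' column name, and the bounds are stand-ins for the asker's actual data:

```python
import pandas as pd

# Hypothetical frame; 'Date' and 'Vol' stand in for the asker's columns.
df = pd.DataFrame({
    "Date": ["2017-12-30", "2018-03-01", "2019-08-01"],
    "Vol": [10, 20, 30],
})
df["Date"] = pd.to_datetime(df["Date"])  # convert the strings once

# String bounds are coerced to timestamps during the comparison.
mask = (df["Date"] > "2018-01-01") & (df["Date"] < "2019-07-01")
filtered = df[mask]
print(len(filtered))  # 1: only 2018-03-01 falls inside the window
```

Note the parentheses around each comparison: `&` binds more tightly than `>`.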
0
23,057,921
0
0
0
0
1
true
1
2014-04-07T12:02:00.000
0
2
0
DatetimeIndex with time part only: is it possible
22,911,865
1.2
python,pandas
No, it is not possible; only a datetime or float index will work. However, the variant offered by unutbu is very useful.
I'm stuck on the following problem. I have a set of observations of passenger traffic. The data is stored in an .xlsx file with the following structure: date_of_observation, time, station_name, boarding, alighting. I wonder if it's possible to create a DataFrame with a DatetimeIndex from such data if I need only the 'time' component of da...
0
1
938
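A true DatetimeIndex does need a date part, but nothing stops you from indexing by plain datetime.time objects, which may be enough here. A sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical observations; the column names are stand-ins.
df = pd.DataFrame({
    "time": ["08:30", "09:00", "09:30"],
    "boarding": [12, 7, 3],
})

# Parsing attaches today's date; .dt.time then keeps only the time
# component, giving a plain Index of datetime.time objects
# (not a DatetimeIndex).
df.index = pd.to_datetime(df["time"]).dt.time
print(df.loc[pd.to_datetime("09:00").time(), "boarding"])  # 7
```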
0
62,222,727
0
0
0
0
1
false
31
2014-04-08T15:13:00.000
4
5
0
Fastest file format for read/write operations with Pandas and/or Numpy
22,941,147
0.158649
python,numpy,pandas
If the priority is speed I would recommend: feather (the fastest) or parquet (a bit slower, but it saves lots of disk space).
I've been working for a while with very large DataFrames and I've been using the csv format to store input data and results. I've noticed that a lot of time goes into reading and writing these files which, for example, dramatically slows down batch processing of data. I was wondering if the file format itself is of rel...
0
1
31,690
0
22,949,986
0
0
0
0
1
false
13
2014-04-08T23:11:00.000
1
5
0
dot product of two 1D vectors in numpy
22,949,966
0.039979
python,numpy
If you want an inner product, use numpy.dot(x, x); for an outer product, use numpy.outer(x, x).
I'm working with numpy in python to calculate a vector multiplication. I have a vector x of dimensions n x 1 and I want to calculate x*x_transpose. This gives me problems because x.T or x.transpose() doesn't affect a 1 dimensional vector (numpy represents vertical and horizontal vectors the same way). But how do I ca...
0
1
11,187
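The distinction in the answer is easy to verify on a small vector:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # 1-D vector, shape (n,)

inner = np.dot(x, x)    # scalar: 1 + 4 + 9 = 14.0
outer = np.outer(x, x)  # (n, n) matrix; entry (i, j) = x[i] * x[j]

print(inner)        # 14.0
print(outer.shape)  # (3, 3)
```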
0
23,002,130
0
0
0
0
1
false
2
2014-04-10T06:46:00.000
0
1
0
Why is the mean smaller than the minimum and why does this change with 64bit floats?
22,980,487
0
python,arrays,numpy,floating-accuracy,floating-point-conversion
If you're working with large arrays, be aware of potential overflow problems! Changing from 32-bit to 64-bit floats in this instance avoids an (unflagged, as far as I can tell) overflow that led to the anomalous mean calculation.
I have an input array, which is a masked array. When I check the mean, I get a nonsensical number: less than the reported minimum value! So, raw array: numpy.mean(A) < numpy.min(A). Note A.dtype returns float32. FIX: A3=A.astype(float). A3 is still a masked array, but now the mean lies between the minimum and the maxim...
0
1
226
0
22,996,581
0
0
0
0
1
true
2
2014-04-10T18:48:00.000
4
2
0
How to search in one NumPy array for positions for getting at these position the value from a second NumPy array?
22,996,507
1.2
python,arrays,numpy,arcgis,arcpy
If I understand your description right, you should just be able to do B[A].
I have two raster files which I have converted into NumPy arrays (arcpy.RasterToNumpyArray) to work with the values in the raster cells with Python. One of the raster has two values True and False. The other raster has different values in the range between 0 to 1000. Both rasters have exactly the same extent, so both N...
0
1
143
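The B[A] suggestion is plain NumPy boolean indexing; a toy stand-in for the two rasters:

```python
import numpy as np

A = np.array([[True, False],
              [False, True]])   # the True/False raster
B = np.array([[10, 250],
              [700, 42]])       # the value raster, range 0..1000

values = B[A]   # values of B at every cell where A is True
print(values)   # [10 42]
```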
0
23,001,960
0
0
0
0
1
true
1
2014-04-11T01:26:00.000
4
1
0
Counting 1's in a n x n array of 0's and 1's
23,001,932
1.2
python,algorithm,count
Since all 1's come before the 0's, you can find the index of the first 0 using Binary search algorithm (which is log N) and you just have to do this for all the N rows. So the total complexity is NlogN.
Assuming that in each row of the array, all 1's come before the 0's, how would I be able to come up with an (O)nlogn algorithm to count the 1's in the array. I think first I would have to make a counter, search each row for 1's (n), and add that to the counter. Where does the "log n part" come into play? I read that a ...
0
1
100
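The answer's idea, one binary search per row, can be sketched as:

```python
def count_ones(matrix):
    """Count the 1's in an n x n 0/1 matrix whose rows have all 1's
    before all 0's: one binary search per row, O(n log n) in total."""
    total = 0
    for row in matrix:
        lo, hi = 0, len(row)
        while lo < hi:          # binary-search the index of the first 0
            mid = (lo + hi) // 2
            if row[mid] == 1:
                lo = mid + 1
            else:
                hi = mid
        total += lo             # that index equals the count of 1's
    return total

print(count_ones([[1, 1, 0], [1, 0, 0], [1, 1, 1]]))  # 6
```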
0
23,028,931
0
0
0
0
1
true
1
2014-04-11T17:33:00.000
4
1
0
Scikit's GBM assumptions on feature type
23,019,076
1.2
python,scikit-learn
All features are continuous for gradient boosting (and practically all other estimators). Tree-based models should be able to learn splits in categorical features that are encoded as "levels" (1, 2, 3) rather than dummy variables ([1, 0, 0], [0, 1, 0], [0, 0, 1]), but this requires deep trees instead of stumps and the ...
Does scikit's GradientBoostingRegressor make any assumptions on the feature's type? Or does it treat all features as continuous? I'm asking because I have several features that are truly categorical that I have encoded using LabelEncoder().
0
1
138
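The dummy-variable encoding contrasted in the answer can be produced with pandas.get_dummies, shown here as a stand-in for the asker's LabelEncoder pipeline:

```python
import pandas as pd

# Hypothetical categorical feature.
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One indicator column per category instead of integer "levels".
dummies = pd.get_dummies(df["color"], prefix="color")
print(list(dummies.columns))  # ['color_blue', 'color_green', 'color_red']
print(dummies.shape)          # (4, 3)
```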
0
23,089,696
0
0
0
0
1
true
0
2014-04-11T22:51:00.000
3
1
0
Restricting magnitude of change in guess in fmin_bfgs
23,023,851
1.2
python,scipy,mathematical-optimization
I recently ran into the same problem with fmin_bfgs. As far as I could see, the answer is negative. I didn't see a way to limit the stepsize. My workaround was to first run Nelder-Mead fmin for some iterations, and then switch to fmin_bfgs. Once I was close enough to the optimum, the curvature of my function was much n...
I'm trying to estimate a statistical model using MLE in python using the fmin_BFGS function in Scipy.Optimize and a numerically computed Hessian. It is currently giving me the following warning: Desired error not necessarily achieved due to precision loss. When I print the results of each evaluation, I see that while...
0
1
236
0
27,126,021
0
0
0
0
1
false
3
2014-04-12T12:19:00.000
1
1
0
Content based recommender system with sklearn or numpy
23,030,284
0.197375
python,numpy,machine-learning,scikit-learn,recommendation-engine
I believe you can use centered cosine similarity / Pearson correlation to make this work, and make use of a collaborative filtering technique to achieve it. Before you use Pearson correlation you need to fill the nulls (the fields which don't have any entries) with zero; Pearson correlation then centers the similarity matr...
I am trying to build a content-based recommender system in python/pandas/numpy/sklearn. Here are the matrix involved and their size: X: n_customers * n_features (contains the features of each customer) Y: n_customers *n_products (contains the scores given by each customer to each product) Theta: n_features * n_products...
0
1
3,553
0
23,038,786
0
1
0
0
1
true
0
2014-04-13T03:25:00.000
3
1
0
Is an in-place sorting algorithm always faster? What are the advantages of in-place sorting? Python
23,038,760
1.2
python,sorting
is creating a sorting algorithm a common task for a professional developer? No. It's good to be able to do it, but most of the time, you'll just use sorts other people already wrote. On what task do developer need to create a sorting algorithm? If you're providing a sorting routine for other people to use, you may n...
I am new to programming. Is creating a sorting algorithm a common task for a professional developer? On what tasks do developers need to create a sorting algorithm? And finally, what are the advantages of an in-place sorting algorithm? Any help is appreciated!
0
1
330
0
50,678,634
0
0
0
0
1
false
5
2014-04-13T17:09:00.000
0
2
0
RGB to HSI function in python
23,045,695
0
python,python-imaging-library
To convert the available RGB image to HSI format (Hue, Saturation, Intensity), you can make use of the CV_RGB2HSI conversion described in the OpenCV docs.
I want to convert an RGB image to HSI. I found a lot of built-in functions like rgb_to_hsv, rgb_to_hls, etc. Is there any function for conversion from RGB to the HSI color model in Python?
0
1
3,209
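As far as I know, cv2 ships HSV and HLS conversions but not an HSI one, so if the constant mentioned above is unavailable in your build, the textbook HSI formulas are short to write by hand. A sketch assuming float RGB in [0, 1]:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (..., 3) float RGB array in [0, 1] to HSI.
    H is in degrees [0, 360); S and I are in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                   # guards divisions by zero
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)            # lower half of the circle
    return np.stack([h, s, i], axis=-1)

pure_red = np.array([[1.0, 0.0, 0.0]])
print(rgb_to_hsi(pure_red))  # H ~ 0, S ~ 1, I ~ 0.333
```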
0
23,065,974
0
0
0
0
1
false
2
2014-04-14T16:06:00.000
0
2
0
Curve fitting differential equations in Python
23,064,886
0
python-2.7,physics,curve-fitting,differential-equations
Certainly you intend to have the third derivative on the right. Group your data in relatively small bins, possibly overlapping. For each bin, compute a cubic approximation of the data. From that compute the derivatives in the center point of the group. With the derivatives of all groups you now have a classical linear...
I have a curve of >1000 points that I would like to fit to a differential equation in the form of x'' = (a*x'' + b x' + c x + d), where a,b,c,d are constants. How would I proceed in doing this using Python 2.7?
0
1
756
0
23,086,286
0
1
0
0
1
true
0
2014-04-15T14:11:00.000
0
1
0
Is dynamically writing to a csv file slower than appending to an array in Python?
23,086,241
1.2
python,csv,dynamic
I am assuming you mean to ask about separate calls to writer.writerow() vs. building a list, then writing that list with writer.writerows(). Memory wise, the former is more efficient. That said, don't worry too much about speed here; writing to disk is your bottleneck, not how you build the data.
I'm talking about time taken to dynamically write values to a csv file vs appending those same values to an array. Eventually, I will append that array to the csv file, but that's out of the scope of this question.
0
1
658
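Both variants from the answer, run against an in-memory buffer; the row data is made up:

```python
import csv
import io

rows = [[i, i * i] for i in range(5)]

# Variant 1: write each row as it is produced.
buf1 = io.StringIO()
w = csv.writer(buf1)
for row in rows:
    w.writerow(row)

# Variant 2: accumulate a list, then write it in one call.
buf2 = io.StringIO()
csv.writer(buf2).writerows(rows)

print(buf1.getvalue() == buf2.getvalue())  # True: identical output
```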
0
25,503,548
0
1
0
0
1
false
0
2014-04-16T18:21:00.000
0
1
0
Python OpenCV "ImportError: undefined Symbol" or Memory Access Error
23,117,242
0
python-2.7,opencv,opensuse,undefined-symbol
Not exactly a prompt answer (nor a direct one). I had the same issue and (re)installing various dependencies didn't help either. Ultimately, I cloned (from git) and compiled opencv (which includes the cv2.so library) from scratch, replaced the old cv2.so library and got it to work. Here is the git repo: https://github....
I'm using openSUSE 13.1 64-bit on a Lenovo ThinkPad Edge E145. I tried to play around a bit with Python (2.7) and Python-OpenCV (2.4). Both are installed using YaST. When I start the Python interactive mode (by typing "python") and try to "import cv", there are 2 things that can happen: case 1: "import cv" --> ends up wit...
0
1
892
0
23,163,657
0
1
0
0
1
false
0
2014-04-18T22:43:00.000
1
1
0
Call the name of a data frame rather than its content (Pandas)
23,163,508
0.197375
python,string,list,pandas
The DataFrame doesn't know the name of the variable you've assigned to it. Depending on how you're printing the object, either the __str__ or __repr__ method will get called to get a description of the object. If you want to get back 'df2', you could put them into a dictionary to map the name back to the object. If y...
I have a list of dataframes, but when I call the content of the list it returns the content of the called dataframe. List = [df1, df2, df3, ..., dfn] List[1] will give <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 4753 entries, etc., but I want str(List[1]) to give 'df2'. Thanks for the help
0
1
70
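The dictionary suggestion from the answer, sketched with two throwaway frames:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1]})
df2 = pd.DataFrame({"a": [2]})

# Map names to objects explicitly; the objects themselves don't know
# what variable name they were assigned to.
frames = {"df1": df1, "df2": df2}

name = [k for k, v in frames.items() if v is df2][0]
print(name)  # df2
```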
0
23,183,710
0
0
0
0
1
false
1
2014-04-19T17:49:00.000
1
1
0
Parameters to let random_powerlaw_tree() generate trees with more than 10 nodes
23,173,427
0.197375
python,networkx
To generate trees with more nodes, you only need to increase the "number of tries" (a parameter of random_powerlaw_tree). 100 tries is not enough even for a tree with 11 nodes (it gives an error). For example, with 1000 tries I managed to generate trees with 100 nodes, using NetworkX 1.8.1 and Python 3.4.0.
I am trying to use one of the random graph-generators of NetworkX (version 1.8.1): random_powerlaw_tree(n, gamma=3, seed=None, tries=100) However, I always get this error File "/Library/Python/2.7/site-packages/networkx/generators/random_graphs.py", line 840, in random_powerlaw_tree "Exceeded max (%d) attempts for a va...
0
1
451
0
29,718,358
0
0
0
0
1
false
6
2014-04-20T04:01:00.000
1
1
0
Text classification in python - (NLTK Sentence based)
23,178,275
0.197375
python,python-3.x,machine-learning,classification,bayesian
Ideally, it is said that the more you train your data, the 'better' your results are, but it really depends: you have to test it and compare it to the real results you've prepared. So to answer your question, training the model with keywords may give you results that are too broad and may not be arguments. But really, you h...
I need to classify text and I am using the TextBlob python module to achieve it. I can use either the Naive Bayes classifier or a decision tree. I am concerned about the points mentioned below. 1) I need to classify sentences as argument / not an argument. I am using two classifiers and training the model using apt data sets. My quest...
0
1
1,170
0
71,003,756
0
0
0
0
1
false
11
2014-04-21T10:28:00.000
0
6
0
OpenCV - Fastest method to check if two images are 100% same or not
23,195,522
0
python,c++,opencv
I have done this task. Compare file sizes. Compare EXIF data. Compare the first 'n' bytes, where 'n' is 128 to 1024 or so. Compare the last 'n' bytes. Compare the middle 'n' bytes. Compare checksums.
There are many questions over here which checks if two images are "nearly" similar or not. My task is simple. With OpenCV, I want to find out if two images are 100% identical or not. They will be of same size but can be saved with different filenames.
0
1
15,991
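For the "100% identical" requirement, a plain NumPy comparison is enough once the files are loaded (cv2.imread returns ordinary arrays, so cv2 itself isn't needed for the comparison); the images here are made-up stand-ins:

```python
import numpy as np

img1 = np.array([[[10, 20, 30]] * 4] * 3, dtype=np.uint8)  # 3x4 "image"
img2 = img1.copy()
img3 = img1.copy()
img3[0, 0, 0] = 11  # single-channel, single-pixel difference

def identical(a, b):
    # array_equal already checks shape, but being explicit documents
    # the "same size" assumption from the question.
    return a.shape == b.shape and np.array_equal(a, b)

print(identical(img1, img2))  # True
print(identical(img1, img3))  # False
```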
0
39,591,989
0
0
0
0
1
false
319
2014-04-21T14:51:00.000
13
18
0
Detect and exclude outliers in a pandas DataFrame
23,199,796
1
python,pandas,filtering,dataframe,outliers
scipy.stats has the methods trim1() and trimboth() to cut the outliers out in a single line, according to rank and an introduced percentage of removed values.
I have a pandas data frame with few columns. Now I know that certain rows are outliers based on a certain column value. For instance column 'Vol' has all values around 12xx and one value is 4000 (outlier). Now I would like to exclude those rows that have Vol column like this. So, essentially I need to put a filter on...
0
1
440,407
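A sketch of the trimboth() call the answer mentions; the 'Vol'-like numbers are invented:

```python
import numpy as np
from scipy import stats

vol = np.array([1210, 1225, 1218, 1230, 4000, 1222, 1219, 1224, 1221, 1226])

# trimboth() ranks the data and slices the given proportion off both
# tails: 10% of 10 values = 1 value from each end.
trimmed = stats.trimboth(vol, 0.1)
print(len(trimmed))           # 8
print(trimmed.max() < 4000)   # True: the outlier is gone
```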
0
23,232,968
0
0
0
0
1
true
0
2014-04-22T10:44:00.000
1
1
0
Initializing the weights of a MLP with the RBM weights
23,217,264
1.2
python,scikit-learn
scikit-learn does not currently have an MLP implemented which you can initialize via an RBM, but you can still access the weights which are stored in the components_ attribute and the bias which is stored in the intercept_hidden_ attribute. If you're interested in using modern MLPs, torch7, pylearn2, and deepnet are al...
I want to build a Deep Belief Network with scikit-learn. As I understand it, one should train many Restricted Boltzmann Machines (RBMs) individually. Then one should create a Multilayer Perceptron (MLP) that has the same number of layers as the number of RBMs, and the weights of the MLP should be initialized with the weights o...
0
1
674
0
23,234,151
0
0
0
0
1
false
4
2014-04-23T03:13:00.000
14
1
0
How to use pickle to save data to disk?
23,234,103
1
python,pickle
Save an object containing the game state before the program exits: pickle.dump(game_state, open('gamestate.pickle', 'wb')) Load the object when the program is started: game_state = pickle.load(open('gamestate.pickle', 'rb')) In your case, game_state may be a list of questions.
I'm making an animal guessing game and I've finished the program, but I want to add pickle so it saves the questions to disk and they won't go away when the program exits. Can anyone help?
0
1
3,645
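A minimal sketch of the answer's recipe, using a temporary directory and with-blocks so the file handles are closed:

```python
import os
import pickle
import tempfile

questions = ["Does it have fur?", "Does it bark?"]  # hypothetical game state

path = os.path.join(tempfile.mkdtemp(), "gamestate.pickle")

# Save before the program exits ('wb': pickle is a binary format).
with open(path, "wb") as f:
    pickle.dump(questions, f)

# Load again on the next start.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == questions)  # True
```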
0
23,254,015
0
0
0
0
1
false
0
2014-04-23T13:34:00.000
1
1
0
Conditionally selecting values from a Numpy array returned from PyFITS
23,246,013
0.197375
python,numpy,fits,pyfits
The expression data.field[('zquality' > 2) & ('pgal'==3)] is asking for fields where the string 'zquality' is greater than 2 (always true) and where the string 'pgal' is equal to 3 (always false). Actually, chances are you're getting an exception, because data.field is a method on the NumPy recarray objects that PyF...
I have opened a FITS file in pyfits. The HEADER file reads XTENSION='BINTABLE' with DIMENSION= 52989R x 36C with 36 column tags like, 'ZBEST', 'ZQUALITY', 'M_B', 'UB', 'PGAL' etc. Now, I have to choose objects from the data with 'ZQUALITY' greater than 2 & 'PGAL' equals to 3. Then I have to make a histogram for the 'ZB...
0
1
229
0
23,264,484
0
0
0
0
1
true
0
2014-04-24T08:45:00.000
0
1
0
Get probability of classification from decision tree
23,264,037
1.2
python,machine-learning,decision-tree,cart-analysis
When you train your tree using the training data set, every time you do a split on your data, the left and right node will end up with a certain proportion of instances from class A and class B. The percentage of instances of class A (or class B) can be interpreted as probability. For example, assume your training dat...
I'm implementing a decision tree based on the CART algorithm and I have a question. I can now classify data, but my task is not only to classify data: I want to have a probability of correct classification in the end nodes. For example, I have a dataset that contains data of classes A and B. When I put an instance of some class into my tre...
0
1
2,896
0
23,279,735
0
0
0
1
1
false
2
2014-04-24T20:51:00.000
1
2
0
to_excel on desktop regardless of the user
23,279,546
0.099668
python,pandas
This depends on your operating system. You're saying you'd like to save the file on the desktop of the user who is running the script right? On linux (not sure if this is true of every distribution) you could pass in "~/desktop/my_file.xls" as the path where you're saving the file
Is there a way to use pandas to_excel function to write to the desktop, no matter which user is running the script? I've found answers for VBA but nothing for python or pandas.
0
1
781
0
38,764,796
0
0
0
0
1
false
18
2014-04-25T04:49:00.000
28
4
0
How to subtract rows of one pandas data frame from another?
23,284,409
1
python,merge,pandas
Consider the following: df_one is the first DataFrame, df_two is the second DataFrame. Present in the first DataFrame and not in the second DataFrame. Solution, by index: df = df_one[~df_one.index.isin(df_two.index)] index can be replaced by any column upon which you wish to do the exclusion. In the above example, I've used the index as the refere...
The operation that I want to do is similar to a merge. For example, with an inner merge we get a data frame that contains rows that are present in the first AND the second data frame. With an outer merge we get a data frame with rows that are present EITHER in the first OR in the second data frame. What I need is a data frame tha...
0
1
34,025
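The one-liner from the answer, on two throwaway frames:

```python
import pandas as pd

df_one = pd.DataFrame({"v": [1, 2, 3]}, index=["a", "b", "c"])
df_two = pd.DataFrame({"v": [9, 9]}, index=["b", "c"])

# Rows of df_one whose index does NOT appear in df_two.
diff = df_one[~df_one.index.isin(df_two.index)]
print(diff.index.tolist())  # ['a']
```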
0
23,285,666
0
0
0
0
2
false
34
2014-04-25T05:24:00.000
6
7
0
Opening a pdf and reading in tables with python pandas
23,284,759
1
python,pdf,pandas
This is not possible. PDF is a data format for printing; the table structure is therefore lost. With some luck you can extract the text with pypdf and guess the former table columns.
Is it possible to open PDFs and read them in using python pandas, or do I have to use the pandas clipboard for this function?
1
1
82,529
0
41,133,523
0
0
0
0
2
false
34
2014-04-25T05:24:00.000
3
7
0
Opening a pdf and reading in tables with python pandas
23,284,759
0.085505
python,pdf,pandas
Copy the table data from a PDF and paste into an Excel file (which usually gets pasted as a single rather than multiple columns). Then use FlashFill (available in Excel 2016, not sure about earlier Excel versions) to separate the data into the columns originally viewed in the PDF. The process is fast and easy. Then ...
Is it possible to open PDFs and read them in using python pandas, or do I have to use the pandas clipboard for this function?
1
1
82,529
0
23,300,115
0
0
0
0
1
true
0
2014-04-25T17:44:00.000
1
1
0
Classification using SVM from opencv
23,299,694
1.2
python,opencv,svm
As a simple approach, you can train an additional classifier to determine if your feature is a digit or not. Use non-digit images as positive examples and the other classes' positives (i.e. images of digits 0-9) as the negative samples of this classifier. You'll need a huge amount of non-digit images to make it work, a...
I have a problem with classification using SVM. Let's say that I have 10 classes, digits from 0 to 9. I can train the SVM to recognize these classes, but sometimes I get an image which is not a digit, and the SVM still tries to categorize that image. Is there a way to set a threshold for the SVM on the output maybe (as I can set it for Neur...
0
1
645
0
23,440,098
0
0
0
0
1
false
16
2014-04-27T10:09:00.000
1
2
0
Julia Dataframes vs Python pandas
23,322,025
0.099668
python,pandas,dataframe,julia
I'm a novice at this sort of thing but have definitely been using both as of late. Truth be told, they seem quite comparable, but there is far more documentation, Stack Overflow questions, etc. pertaining to Pandas, so I would give it a slight edge. Do not let that fact discourage you, however, because Julia has some a...
I am currently using python pandas and want to know if there is a way to output the data from pandas into julia Dataframes and vice versa. (I think you can call python from Julia with Pycall but I am not sure if it works with dataframes) Is there a way to call Julia from python and have it take in pandas dataframes? (w...
0
1
9,570
0
23,326,609
0
0
0
0
1
false
0
2014-04-27T17:13:00.000
1
1
0
learn a threshold from labels and discrimination values?
23,326,430
0.197375
python,r,algorithm
Sort the points, group them by value, and try all <=2n+1 thresholds that classify differently (<=n+1 gaps between distinct data values including the sentinels +-infinity and <=n distinct data values). The latter step is linear-time if you try thresholds lesser to greater and keep track of how many points are misclassif...
I have a set of {(v_i, c_i), i=1,..., n}, where v_i in R and c_i in {-1, 0, 1} are the discrimination value and label of the i-th training example. I would like to learn a threshold t so that the training error is the minimum when I declare the i-th example has label -1 if v_i < t, 0 if v_i=t, and 1 if v_i>t. How ca...
0
1
72
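A sketch of the sweep the answer describes, simplified to labels in {-1, +1}; the 0 tie-label from the question is omitted for brevity:

```python
def best_threshold(pairs):
    """pairs: (value, label) with label in {-1, +1}.
    Predict -1 if v < t, else +1.  Returns (t, errors).
    O(n log n) from the sort; the sweep afterwards is linear."""
    pairs = sorted(pairs)
    n = len(pairs)
    pos_left = 0                                    # +1's left of t (errors)
    neg_right = sum(1 for _, c in pairs if c == -1)  # -1's right of t (errors)
    best = (float("-inf"), pos_left + neg_right)    # t below all points
    for i, (v, c) in enumerate(pairs):
        if c == 1:
            pos_left += 1
        else:
            neg_right -= 1
        # Place t in the gap just above v.
        t = v + 0.5 if i == n - 1 else (v + pairs[i + 1][0]) / 2
        err = pos_left + neg_right
        if err < best[1]:
            best = (t, err)
    return best

data = [(0.1, -1), (0.4, -1), (0.35, 1), (0.8, 1)]
t, err = best_threshold(data)
print(err)  # 1: no threshold separates these points perfectly
```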
0
23,421,883
0
0
0
0
1
true
0
2014-04-30T16:31:00.000
0
1
0
Categorizing points using known distributions
23,393,456
1.2
python,machine-learning,statistics,categorization
The principled way to do this is to assign probabilities to different model types and to different parameters within a model type. Look for "Bayesian model estimation".
My problem is as follows: I am given a number of chi-squared values for the same collection of data sets, fitted with different models. (so, for example, for 5 collections of points, fitted with either a single binomial distribution, or both binomial and normal distributions, I would have 10 chi-squared values). I woul...
0
1
52
0
23,396,854
0
0
0
0
1
false
0
2014-04-30T19:47:00.000
1
1
0
How to determine the "sentiment" between two named entities with Python/NLTK?
23,396,807
0.197375
python,nlp,nltk
In short "you cannot". This task is far beyond simple text processing which is provided with NLTK. Such objects relations sentiment analysis could be the topic of the research paper, not something solvable with a simple approach. One possible method would be to perform a grammar analysis, extraction of the conceptual r...
I'm using NLTK to extract named entities and I'm wondering how it would be possible to determine the sentiment between entities in the same sentence. So, for example, for "Jon loves Paris." I would get two entities, Jon and Paris. How would I be able to determine the sentiment between these two entities? In this case shou...
0
1
301
0
23,472,135
0
0
0
0
1
true
4
2014-05-04T16:42:00.000
5
1
0
When does fit() stop running in scikit?
23,458,792
1.2
python,scikit-learn
There's no hard limit to the number of iterations for LogisticRegression; instead it tries to detect convergence with a specified tolerance, tol: the smaller tol, the longer the algorithm will run. From the source code, I gather that the algorithms stops when the norm of the objective's gradient is less than tol times ...
I'm using scikit-learn to train classifiers, particularly linear_model.LogisticRegression. But my question is: what is the stopping criterion for the training? I don't see any parameter that indicates the number of epochs! And the same question for random forests?
0
1
1,201
0
23,552,362
0
0
1
0
1
false
7
2014-05-06T19:49:00.000
3
4
0
How to detect if all the rows of a non-square matrix are orthogonal in python
23,503,667
0.148885
python,math,numpy,scipy
Approach #3: Compute the QR decomposition of A^T. In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R is upper triangular. The columns of Q corresponding to non-zer...
I can test the rank of a matrix using np.linalg.matrix_rank(A) . But how can I test if all the rows of A are orthogonal efficiently? I could take all pairs of rows and compute the inner product between them but is there a better way? My matrix has fewer rows than columns and the rows are not unit vectors.
0
1
7,018
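All pairwise inner products of the rows can be had in one matrix product; a sketch that checks only the off-diagonal entries, since the rows need not be unit vectors:

```python
import numpy as np

def rows_orthogonal(A, tol=1e-10):
    """True if all rows of A are pairwise orthogonal."""
    G = A @ A.T                          # Gram matrix of the rows
    off_diag = G - np.diag(np.diag(G))   # zero out the row norms
    return bool(np.all(np.abs(off_diag) < tol))

A = np.array([[1.0,  1.0, 0.0, 0.0],
              [1.0, -1.0, 2.0, 0.0]])   # orthogonal rows, not unit norm
print(rows_orthogonal(A))               # True

B = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
print(rows_orthogonal(B))               # False
```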
0
23,613,409
0
0
0
0
1
false
1
2014-05-12T07:57:00.000
0
3
0
Dynamic Programming for Dice probability in a Role Playing Pen & Paper game
23,603,762
0
python,algorithm
So the problem is: how many ways can you roll with attributes 12, 13 and 12 and a talent of 7? Let's assume you know the outcome of the first die; let's say it's 11. Then the problem is reduced to how many ways you can roll with attributes 13 and 12 and a talent of 7. Now try it with a different first roll, let's say...
The Dark Eye is a popular fantasy role-playing game, the German equivalent of Dungeons and Dragons. In this system a character has a number of attributes and talents. Each talent has several attributes associated with it, and to make a talent check the player rolls a d20 (a 20-sided die) for each associated attribute. ...
0
1
1,791
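The recursion the answer describes can be memoized into a DP over the remaining talent budget. The check rule below is paraphrased from the question (any excess of a roll over its attribute is paid from the talent pool), and the result is verified against brute force:

```python
from itertools import product

def ways_dp(attrs, talent):
    """Number of d20-roll outcomes that succeed: DP over the remaining
    talent budget, fixing one die at a time."""
    ways = {talent: 1}                    # budget -> number of ways
    for a in attrs:
        nxt = {}
        for budget, cnt in ways.items():
            for roll in range(1, 21):
                cost = max(0, roll - a)   # excess eats talent points
                if cost <= budget:
                    b = budget - cost
                    nxt[b] = nxt.get(b, 0) + cnt
        ways = nxt
    return sum(ways.values())

def ways_brute(attrs, talent):
    return sum(
        sum(max(0, r - a) for r, a in zip(rolls, attrs)) <= talent
        for rolls in product(range(1, 21), repeat=len(attrs))
    )

attrs, talent = (12, 13, 12), 7
print(ways_dp(attrs, talent) == ways_brute(attrs, talent))  # True
```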
0
23,622,236
0
1
0
0
2
false
3
2014-05-13T01:23:00.000
1
2
0
All of Ram being used in a Cellular Automata Python Script
23,621,423
0.099668
python,arrays,memory-management,anaconda,cellular-automata
If the grid is sparsely populated, you might be better off tracking just the populated parts, using a different data structure, rather than a giant python list (array).
I have a high-intensity model written in python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays: one a fine grid array, and one a coarser grid mesh. Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the progr...
0
1
151
0
23,622,159
0
1
0
0
2
false
3
2014-05-13T01:23:00.000
1
2
0
All of Ram being used in a Cellular Automata Python Script
23,621,423
0.099668
python,arrays,memory-management,anaconda,cellular-automata
Sounds like your problem is memory management. You're likely writing to your swap file, which would drastically slow down your processing. GPU wouldn't help you with this, as you said you're maxing out your RAM, not your processing (CPU). You probably need to rewrite your algorithm or use different datatypes, but you h...
I have a high-intensity model written in python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays: one a fine grid array, and one a coarser grid mesh. Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the progr...
0
1
151
0
34,070,637
0
0
0
0
1
false
0
2014-05-13T05:54:00.000
0
2
0
Graphviz xdot utility fails to parse graphs
23,623,717
0
python,python-2.7,ubuntu,graphviz
This is a bug in the latest Ubuntu xdot package; please use the xdot from the pip repository: sudo apt-get remove xdot sudo pip install xdot
Lately I have observed that xdot utility which is implemented in python to view dot graphs is giving me following error when I am trying to open any dot file. File "/usr/bin/xdot", line 4, in xdot.main() File "/usr/lib/python2.7/dist-packages/xdot.py", line 1947, in main win.open_file(args[0]) File "/usr/lib/pytho...
0
1
1,384
0
23,634,933
0
1
0
0
1
false
5
2014-05-13T14:56:00.000
8
3
0
How to create the negative of a sentence in nltk
23,634,759
1
python,nlp,nltk
No, there is not. What is more important, it is quite a complex problem, which can be a topic of research, not something that a "simple built-in function" could solve. Such an operation requires semantic analysis of the sentence; think for example about "I think that I could run faster": which of the 3 verbs should be nega...
I am new to NLTK. I would like to create the negative of a sentence (which will usually be in the present tense). For example, is there a function to allow me to convert 'I run' to 'I do not run', or 'She runs' to 'She does not run'? I suppose I could use POS to detect the verb and its preceding pronoun, but I just won...
0
1
2,143
0
23,644,971
0
0
0
0
1
false
0
2014-05-14T02:11:00.000
0
2
1
Using TotalOrderPartitioner in Hadoop streaming
23,644,545
0
python,hadoop
I did not try it, but taking the example with KeyFieldBasedPartitioner and simply replacing: -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner with -partitioner org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner should work.
I'm using python with Hadoop streaming to do a project, and I need the similar functionality provided by the TotalOrderPartitioner and InputSampler in Hadoop, that is, I need to sample the data first and create a partition file, then use the partition file to decide which K-V pair will go to which reducer in the mappe...
0
1
909
0
23,659,957
0
0
0
0
1
true
3
2014-05-14T16:01:00.000
1
1
0
Tell scipy.optimize.minimize to fail
23,659,698
1.2
python,scipy,mathematical-optimization,minimize
You have 2 options I can think of: opt for constrained optimization, or modify your objective function to diverge whenever your numerical simulation does not converge. Basically this means returning a large value, large compared to a 'normal' value, which depends on your problem at hand. minimize will then try to optimize...
I'm using scipy.optimize.minimize for unrestricted optimization of an objective function which receives a couple of parameters and runs a complex numerical simulation based on these parameters. This simulation does not always converge in which case I make the objective function return inf, in some cases, in others NaN....
0
1
2,291
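The penalty idea from the answer, sketched on a stand-in objective: the "simulation" is pretended to fail for x < 0, and a large finite value is returned instead of inf/NaN:

```python
from scipy.optimize import minimize

PENALTY = 1e10  # large compared to 'normal' objective values

def objective(x):
    """Stand-in for the asker's simulation: pretend convergence fails
    for x < 0 and return a huge finite penalty instead of inf/NaN,
    which derivative-based minimizers handle badly."""
    if x[0] < 0:
        return PENALTY
    return (x[0] - 2.0) ** 2

# Nelder-Mead needs no gradients, so the flat penalty region is safe.
res = minimize(objective, x0=[5.0], method="Nelder-Mead")
print(round(res.x[0], 3))  # ~2.0
```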
0
23,668,100
0
0
0
0
1
true
0
2014-05-15T01:15:00.000
3
1
0
How to split one big rectangle into N smaller rectangles so it looks random?
23,667,700
1.2
python,c++,algorithm,boost
One rectangle can be divided into two rectangles by drawing either a horizontal or a vertical line. Divide one of those rectangles and the result is three rectangles. Continue until you have N rectangles. Some limitations to observe to improve the results: don't divide a rectangle with a horizontal line if the height ...
How do I split one big rectangle into N smaller rectangles so that the result looks random? I need to generate a couple of divisions for different values of N. Is there a library for this in Boost for C++, or one for Python?
0
1
1,261
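The recursive cutting the answer describes can be sketched as follows; the aspect-ratio heuristic and the 0.3-0.7 cut range are assumptions, not part of the original answer:

```python
import random

def split_rect(rect, n, rng):
    """Recursively split rect=(x, y, w, h) into n rectangles with
    random horizontal/vertical cuts."""
    x, y, w, h = rect
    if n == 1:
        return [rect]
    vertical = w >= h                   # cut across the longer side
    frac = rng.uniform(0.3, 0.7)        # keep cuts away from the edges
    n_left = rng.randint(1, n - 1)      # how many pieces go to each half
    if vertical:
        cut = w * frac
        a, b = (x, y, cut, h), (x + cut, y, w - cut, h)
    else:
        cut = h * frac
        a, b = (x, y, w, cut), (x, y + cut, w, h - cut)
    return split_rect(a, n_left, rng) + split_rect(b, n - n_left, rng)

rng = random.Random(0)
parts = split_rect((0, 0, 100, 60), 8, rng)
print(len(parts))                                    # 8
print(round(sum(w * h for _, _, w, h in parts), 6))  # 6000.0 (total area)
```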
0
23,682,058
0
0
0
0
1
true
5
2014-05-15T12:06:00.000
8
1
0
What's the meaning of p-values which produced by feature selection (i.e. chi2 method)?
23,677,734
1.2
python,classification,scikit-learn,feature-selection
In general the p-value indicates how probable a given outcome or a more extreme outcome is under the null hypothesis. In your case of feature selection, the null hypothesis is something like this feature contains no information about the prediction target, where no information is to be interpreted in the sense of the s...
Recently, I have used sklearn(a python meachine learning library) to do a short-text classification task. I found that SelectKBest class can choose K best of features. However, the first argument of SelectKBest is a score function, which "taking two arrays X and y, and returning a pair of arrays (scores, pvalues)". I k...
0
1
6,714
0
23,698,952
0
1
0
0
1
false
0
2014-05-15T13:38:00.000
0
1
0
'combine first' in pandas produces NA error
23,679,951
0
python,pandas
Found the best solution: pd.tools.merge.concat([test.construction,test.ops],join='outer') Joins along the date index and keeps the different columns. To the extent the column names are the same, it will join 'inner' or 'outer' as specified.
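Note that pd.tools.merge.concat is the old location of this function; in modern pandas it is simply pd.concat, with the same join semantics. A small sketch with made-up date-indexed frames:

```python
import pandas as pd

# Two frames with non-overlapping date indexes and partially overlapping columns.
construction = pd.DataFrame(
    {"cost": [1.0, 2.0]},
    index=pd.to_datetime(["2013-01-01", "2013-02-01"]))
ops = pd.DataFrame(
    {"cost": [3.0, 4.0], "revenue": [5.0, 6.0]},
    index=pd.to_datetime(["2016-06-15", "2016-09-15"]))

# join='outer' keeps the union of columns, filling missing cells with NaN.
combined = pd.concat([construction, ops], join="outer")
```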
I have two dataframes, each with a series of dates as the index. The dates to not overlap (in other words one date range from, say, 2013-01-01 through 2016-06-15 by month and the second DataFrame will start on 2016-06-15 and run quarterly through 2035-06-15. Most of the column names overlap (i.e. are the same) and th...
0
1
373
0
23,688,578
0
0
0
0
1
true
4
2014-05-15T20:41:00.000
9
1
0
Confusion about Artist place in matplotlib hierarchy
23,688,227
1.2
python,matplotlib
I do not like those layers pylab : massive namespace dump that pulls in everything from pyplot and numpy. It is not a 'layer' so much as a very cluttered name space. pyplot : state-machine based layer (it knows what your 'current axes' and 'current figure' are and applies the functions to that axes/figure. You shoul...
In this period I am working with matplotlib. I studied many examples, and was able to modify it to suit several of my needs. But I would like to better understand the general structure of the library. For this reason, apart reading a lot of tutorials on the web, I also purchased the ebook "Matplotlib for Python Develop...
0
1
1,693
0
23,734,414
0
1
0
0
1
false
13
2014-05-16T15:40:00.000
0
7
0
Get permutation with specified degree by index number
23,699,378
0
python,algorithm,permutation,time-complexity,combinatorics
The first part is straightforward if you work wholly on the lexicographic side of things. Given my answer on the other thread, you can go from a permutation to the factorial representation instantly. Basically, you imagine a list {0,1,2,3} and the number that I need to go along is the factorial representation, so for 1...
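As a side note on computing the degree itself (a helper sketch, separate from the factorial-representation scheme above): the minimum number of transpositions composing a permutation equals n minus its number of cycles, which gives a quick O(n) check:

```python
def permutation_degree(perm):
    """Minimum number of transpositions composing perm = n - (number of cycles)."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:          # walk one full cycle
                seen[j] = True
                j = perm[j]
    return n - cycles
```

This reproduces the examples from the question: degree 0 for the identity, 1 for one swap, 2 for two disjoint swaps.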
I've been working on this for hours but couldn't figure it out. Define a permutation's degree to be the minimum number of transpositions that need to be composed to create it. So a the degree of (0, 1, 2, 3) is 0, the degree of (0, 1, 3, 2) is 1, the degree of (1, 0, 3, 2) is 2, etc. Look at the space Snd as the space...
0
1
1,866
0
23,717,381
0
0
0
0
1
true
0
2014-05-17T22:58:00.000
1
1
0
IFFT taking orders of magnitude more than FFT
23,716,904
1.2
python,numpy,scipy,signal-processing,fft
If your IFFT's length is different from that of the FFT, and the length of the IFFT isn't composed of only very small prime factors (2,3,etc.), then the efficiency can drop off significantly. Thus, this method of resampling is only efficient if the two sample rates are different by ratios with small prime factors, such...
I'm trying to resample a 1-D signal using an FFT method (basically, the one from scipy.signal). However, the code is taking forever to run, even though my input signal is a power of two in length. After looking at profiling, I found the root of the problem. Basically, this method takes an FFT, then removes part of the ...
0
1
285
0
23,722,134
0
1
0
0
3
false
3
2014-05-18T12:09:00.000
3
4
0
Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB?
23,721,725
0.148885
python,matlab,numpy,scipy
From my experience, using Python is more rewarding, especially for a beginner in enginnering. In comparison to Matlab, Python is a general purpose language, and knowing it makes many more tasks than, say, signal analysis easy to accomplish. In my opinion it's easier to interface with external hardware or to do other ta...
I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is s...
0
1
375
0
23,722,826
0
1
0
0
3
false
3
2014-05-18T12:09:00.000
6
4
0
Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB?
23,721,725
1
python,matlab,numpy,scipy
You should consider what particular capabilities you need, and see if Numpy and Scipy can meet them. Matlab's real value isn't in the base package, which is more-or-less matched by a combination of numpy, scipy and matplotlib, but in the various toolboxes one can purchase. For instance, I'm not aware of a Robust Cont...
I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is s...
0
1
375
0
23,722,879
0
1
0
0
3
false
3
2014-05-18T12:09:00.000
3
4
0
Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB?
23,721,725
0.148885
python,matlab,numpy,scipy
I personally feel that working with Python is a lot better, As @bdoering mentioned working on Opensource projects is far better than working on closed source. Matlab is quite industry specific, and is still not wide spread in the industry. If you work with these softwares, sooner or later you will be stuck between dif...
I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is s...
0
1
375
0
23,765,727
0
0
0
0
1
false
3
2014-05-19T04:53:00.000
2
2
0
Text summarization using deep learning techniques
23,729,919
0.197375
python,theano,summarization,deep-learning
I think you need to be a little more specific. When you say "I am unable to figure to how exactly the summary is generated for each document", do you mean that you don't know how to interpret the learned features, or don't you understand the algorithm? Also, "deep learning techniques" covers a very broad range of model...
I am trying to summarize text documents that belong to legal domain. I am referring to the site deeplearning.net on how to implement the deep learning architectures. I have read quite a few research papers on document summarization (both single document and multidocument) but I am unable to figure to how exactly the s...
0
1
2,167
0
23,784,889
0
0
0
0
1
false
0
2014-05-21T13:25:00.000
0
1
0
Pandas performance: Multiple dtypes in one column or split into different dtypes?
23,784,578
0
python,pandas
Seems to me that it may depend on what your subsequent use case is, but IMHO I would give each column a single dtype; otherwise functions such as groupby with totals and other common pandas operations simply won't work.
I have huge pandas DataFrames I work with. 20mm rows, 30 columns. The rows have a lot of data, and each row has a "type" that uses certain columns. Because of this, I've currently designed the DataFrame to have some columns that are mixed dtypes for whichever 'type' the row is. My question is, performance wise, should ...
0
1
581
0
38,926,745
0
0
0
0
2
false
0
2014-05-21T14:48:00.000
1
2
0
Updating Pandas dependencies after installing pandas
23,786,694
0.099668
python,pandas
Yes, you could; pandas depends on those packages.
Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc. Say I have Pandas installed, and I update many of its dependencies, if I don't update Pandas, could I run into any problems?
0
1
184
0
23,787,041
0
0
0
0
2
true
0
2014-05-21T14:48:00.000
3
2
0
Updating Pandas dependencies after installing pandas
23,786,694
1.2
python,pandas
If your version of pandas is old (i.e., not 0.13.1), you should definitely update it to take advantage of any new features/optimizations of the dependencies, and any new features/bug fixes of pandas itself. It is a very actively-maintained project, and there are issues with older versions being fixed all the time. Of c...
Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc. Say I have Pandas installed, and I update many of its dependencies, if I don't update Pandas, could I run into any problems?
0
1
184
0
23,816,393
0
1
0
0
1
false
0
2014-05-21T20:56:00.000
1
3
0
How to extract meaning from sentences after running named entity recognition?
23,793,628
0.066568
python,nlp,nltk
I do not think your "algo" is even doing entity recognition... however, stretching the problem you presented quite a bit, what you want to do looks like coreference resolution in coordinated structures containing ellipsis. Not easy at all: start by googling for some relevant literature in linguistics and computational ...
First: Any recs on how to modify the title? I am using my own named entity recognition algorithm to parse data from plain text. Specifically, I am trying to extract lawyer practice areas. A common sentence structure that I see is: 1) Neil focuses his practice on employment, tax, and copyright litigation. or 2) Neil f...
0
1
1,842
0
23,809,181
0
0
0
0
1
false
1
2014-05-22T13:34:00.000
1
1
0
Define a 2D Gaussian probability with five peaks
23,808,446
0.197375
python,numpy,statistics,scipy,probability
If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard.
I have a 2D data and it contains five peaks. Could I fit five 2D Gaussians function to obtain the peaks? In my problem, the peaks do not refer to the clustering problem. Which I think EM would be an appropriate answer for it. In my case I measure a variable in x-y space and it shows maximum in more than one position...
0
1
190
1
23,861,390
0
0
0
0
1
false
1
2014-05-23T01:04:00.000
1
1
0
Interoperability advice - Python, C, Matplotlib/OpenGL run-time efficency
23,819,504
0.197375
python,c,opengl,interop,python-cffi
It would help to know what the turnaround time for the simulation runs is and how fast you want to display and update graphs. More or less realtime, tens of milliseconds for each? Seconds? Minutes? If you want to draw graphs, I'd recommend Matplotlib rather than OpenGL. Even hacking the Matplotlib code yourself to make...
Current conditions: C code being rewritten to do almost the same type of simulation every time (learning behavior in mice) Matlab code being written for every simulation to plot results (2D, potentially 3D graphs) Here are my goals: Design GUI (wxPython) that allows me to build a dynamic simulator GUI also displays ...
0
1
461
0
23,841,290
0
0
0
0
1
false
0
2014-05-23T21:17:00.000
3
1
0
Example order in machine learning algorithms (Scikit Learn)
23,838,453
0.53705
python,numpy,machine-learning,scipy,scikit-learn
No, the ordering of the patterns in the training set does not matter. While the ordering of samples can affect stochastic gradient descent learning algorithms (like for example the one for the NN) they are in most cases coded in a way that ensures internal randomness. SVM on the other hand is globally convergent and it w...
I'm doing some classification with Python and scikit-learn. I have a question which doesn't seem to be covered in the documentation: if I'm doing, for example, classification with SVM, does the order of the input examples matter? If I have binary labels, will the results be less accurate if I put all the examples wi...
0
1
138
0
23,876,265
0
0
0
0
1
false
2
2014-05-24T12:49:00.000
0
2
0
Constrained optimization in SciPy
23,845,235
0
python,numpy,scipy
The returned value of scipy.optimize.minimize is of type Result: Result contains, among other things, the inputs (x) which minimize f.
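A hedged sketch of maximizing by minimizing the negated function, with a toy objective (the function and bounds are illustrative, not from the question):

```python
from scipy.optimize import minimize

def f(x):
    """Toy objective to maximize: peak value 5 at x = 2."""
    return -(x[0] - 2.0) ** 2 + 5.0

# Minimize the negated function; res.x is then the argmax of f,
# and -res.fun is the maximum value of f.
res = minimize(lambda x: -f(x), x0=[0.0], bounds=[(-10.0, 10.0)])
```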
I need, for a simulation, to find the argument (parameters) that maximizes a multivariable function with constraints. I've seen that scipy.optimize.minimize gives the minimum of a function (and, the maximum of the minus function) of a given function and I can use constraints and bounds. But, reading the doc, I've find ...
0
1
4,417
0
23,895,466
0
0
0
0
1
true
1
2014-05-27T17:22:00.000
1
1
0
Change the column name of dataframe at runtime
23,895,408
1.2
python,pandas
I would recommend just using pandas.io.sql to download your database data; it returns your data in a DataFrame. But if, for some reason, you want to access the columns, you already have your answer. Assignment: df['column%d' % count] = data; retrieval: df['column%d' % count]
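A tiny sketch of the generated-name pattern; the fetched dict stands in for whatever per-column data comes back from the database:

```python
import pandas as pd

df = pd.DataFrame()
# Hypothetical per-column data as it might come back from a database cursor.
fetched = {1: [10, 20], 2: [30, 40], 3: [50, 60]}

for count, data in fetched.items():
    df["column%d" % count] = data      # assignment by generated column name

second = df["column%d" % 2]            # retrieval by generated column name
```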
I am trying to initialize an empty dataframe with 5 column values. Say column1, column2, column3, column4, column5. Now I want to read data from database and want to insert specific column values from the database to this dataframe. Since there are 5 columns its easier to do it individually. But i have to extend the n...
0
1
193
0
23,944,220
0
1
1
0
2
false
1
2014-05-28T10:02:00.000
1
2
0
how improve speed of math.sqrt() with numba jit compiler in python 2.7
23,908,547
0.099668
python,performance,jit,numba
Numba is mapping math.sqrt calls to sqrt/sqrtf in libc already. The slowdown probably comes from the overhead of Numba. This overhead comes from (un)boxing PyObjects and detecting if errors occurred in the compiled code. It affects calling small functions from Python but less when calling from another Numba compiled...
I have a complex function that performs math operations that cannot be vectorized. I have found that using the NUMBA jit compiler actually slows performance. It is probably because within this function I call Python's math.sqrt. How can I force NUMBA to replace calls to Python's math.sqrt with faster C calls to sqrt? -- ...
0
1
1,612
0
23,943,709
0
1
1
0
2
false
1
2014-05-28T10:02:00.000
4
2
0
how improve speed of math.sqrt() with numba jit compiler in python 2.7
23,908,547
0.379949
python,performance,jit,numba
Numba already does replace calls to math.sqrt to calls to a machine-code library for sqrt. So, if you are getting slower performance it might be something else. Can you post the code you are trying to speed up. Also, which version of Numba are you using. In the latest version of Numba, you can call the inspect_ty...
I have a complex function that performs math operations that cannot be vectorized. I have found that using the NUMBA jit compiler actually slows performance. It is probably because within this function I call Python's math.sqrt. How can I force NUMBA to replace calls to Python's math.sqrt with faster C calls to sqrt? -- ...
0
1
1,612
0
23,930,543
0
0
0
0
1
false
2
2014-05-29T02:14:00.000
1
1
0
Is it possible to mask outliers within a scikit learn pipeline?
23,924,714
0.197375
python,scikit-learn,outliers
There's no support for masking in scikit-learn; outlier detection is done ad hoc by some estimators (e.g. DBSCAN, or RANSAC, which will appear in the next release). If you want to remove outliers yourself, just use NumPy indexing.
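A sketch of the "remove outliers yourself with NumPy indexing" suggestion: fit once, then mask points whose residual is far from the median before refitting (the 3-sigma cutoff and synthetic data are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=50)
y[25] += 100.0                         # inject one gross outlier

# First a rough fit, then boolean indexing to drop far-off residuals.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
mask = np.abs(resid - np.median(resid)) < 3 * np.std(resid)

# Refit on the kept points only.
slope2, intercept2 = np.polyfit(x[mask], y[mask], 1)
```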
I have a pipeline where I transform some data and fit a curve to it. Is there a preferred/standard way for masking the outliers in the data?
0
1
906
0
23,946,348
0
1
1
0
1
true
2
2014-05-29T22:46:00.000
7
1
0
How do numpy and GMPY2 compare with GMP in terms of speed?
23,944,242
1.2
python,c,numpy,gmp,gmpy
numpy and GMPY2 have different purposes. numpy has fast numerical libraries but to achieve high performance, numpy is effectively restricted to working with vectors or arrays of low-level types - 16, 32, or 64 bit integers, or 32 or 64 bit floating point values. For example, numpy access highly optimized routines writt...
I understand that GMPY2 supports the GMP library and numpy has fast numerical libraries. I want to know how the speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language, however I have been wrong about these generaliz...
0
1
2,014
0
27,366,408
0
0
0
0
1
false
2
2014-06-02T18:06:00.000
0
1
0
I get a PyBrain BackpropTrainer AssertionError on Windows 7, which requirement is missin?
24,000,654
0
python,neural-network,backpropagation,pybrain
The assert statement checks that a condition is true; in this case, that the input dimension (indim) of your network is the same as that of your dataset, ds. Check whether len(im3.flatten()) equals 12288: assert ds.indim == network.indim # fails if 12288 != len(im3.flatten())
I initialized ds = SupervisedDataSet(12288,1) and add data ds.appendLinked(im3.flatten(),10) where im3 is an openCV picture. and this is my trainer -> trainer = BackpropTrainer(red, ds) When the running process reach BackpropTrainer, i get an AssertionError on backprop.py line 35 self.setData(dataset). It's a pybrain e...
0
1
424
0
29,903,784
0
0
0
0
1
false
0
2014-06-02T18:55:00.000
1
1
0
mmh3 not installed on Elastic MapReduce in AWS
24,001,364
0.197375
python,amazon-web-services,elastic-map-reduce
You can use pip install mmh3 to install it.
I need to use mmh3 for hashing. However, when I run "python MultiwayJoin.py R.csv S.csv T.csv -r emr > output.txt" in terminal, it returned an error said that: File "MultiwayJoin.py", line 5, in import mmh3 ImportError: No module named mmh3
0
1
1,466
0
64,190,596
0
0
0
0
1
false
13
2014-06-02T20:13:00.000
0
4
0
python: How to use POS (part of speech) features in scikit learn classifiers (SVM) etc
24,002,485
0
python,machine-learning,scikit-learn,nltk
I think a better method would be to: Step-1: Create word/sentence embeddings for each text/sentence. Step-2: Calculate the POS-tags, and feed the POS-tags to an embedder as in Step-1. Step-3: Elementwise multiply the two vectors. (This is to ensure that the word-embeddings in each sentence are weighted by the POS-tags associa...
I want to use part of speech (POS) returned from nltk.pos_tag for sklearn classifier, How can I convert them to vector and use it? e.g. sent = "This is POS example" tok=nltk.tokenize.word_tokenize(sent) pos=nltk.pos_tag(tok) print (pos) This returns following [('This', 'DT'), ('is', 'VBZ'), ('POS', 'NNP'), ('example', ...
0
1
11,721
0
24,048,037
0
0
0
0
1
false
0
2014-06-03T07:42:00.000
0
1
0
Matplotlib animation + IPython: temporary disabling interactive mode?
24,009,656
0
python,animation,matplotlib,ipython
As @tcaswell pointed out, the problem was caused by the callback that was indirectly calling plt.show().
I have a python a script that generates an animation using matplotlib's animation.FuncAnimation and animation.FFMpegWriter. It works well, but there's an issue when running the code in IPython: each frame of the animation is displayed on screen while being generated, which slows down the movie generation process. I've ...
0
1
285
0
24,024,883
0
1
0
0
1
false
0
2014-06-03T20:54:00.000
0
2
0
How to generate random int around specific mean?
24,024,736
0
python,random
Yes, there is. It is random.triangular(min, max, av). Your mean value will be close, but not equal to av. Edit: see comments below, this has drawbacks.
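A sketch of using this for the age question: the expected mean of triangular(a, b, c) is (a + b + c) / 3, so choosing mode c = 3*42 - 23 - 72 = 31 targets a mean of 42 (the sample mean of 100 draws only approximates it, which is one of the drawbacks mentioned):

```python
import random

random.seed(0)
a, b, target_mean = 23, 72, 42
mode = 3 * target_mean - a - b          # (a + b + mode) / 3 == target_mean
ages = [int(random.triangular(a, b, mode)) for _ in range(100)]
mean_age = sum(ages) / len(ages)
```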
I need to generate 100 age values between 23 and 72 and the mean value must be 42 years. Do you think such a function already exists in standard python? If not, I think I know python just enough and should be able to code the algorithm but I am hoping something is already there for use. Any hints?
0
1
110
0
24,030,090
0
0
0
0
1
false
29
2014-06-03T23:25:00.000
0
3
0
Exporting figures from Bokeh as svg or pdf?
24,026,618
0
python,bokeh
It seems that since bokeh uses html5 canvas as a backend, it will be writing things to static html pages. You could always export the html to pdf later.
Is it possible to output individual figures from Bokeh as pdf or svg images? I feel like I'm missing something obvious, but I've checked the online help pages and gone through the bokeh.objects api and haven't found anything...
1
1
17,265
0
24,771,340
0
0
0
0
1
true
3
2014-06-06T22:15:00.000
1
1
0
SciPy Quad Integration: Accuracy Warning
24,091,411
1.2
python-2.7,scipy,integrate
in scipy.integrate.quad there's a lower-level call to a difference function that is iterated over, and it's iterated over divmax times. In your case, the default for divmax=20. There are some functions that you can override this default -- for example scipy.integrate.quadrature.romberg allows you to set divmax (defau...
I am currently trying to compute out an integral with scipy.integrate.quad, and for certain values I get the following error: /Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/scipy/integrate/quadrature.py:616: AccuracyWarning: divmax (20) exceeded. Latest difference = 2.005732e-02 Accurac...
0
1
2,346
0
24,121,929
0
0
0
0
1
false
1
2014-06-09T13:59:00.000
-3
2
0
How to set bin content in a hist2d (matplotlib)
24,121,883
-0.291313
python,matplotlib,histogram
Do you want to set the number of elements in each bin? I think that's a bar plot rather than a histogram. So simply use ax.bar instead. EDIT Good point by tcaswell, for multidimensional images the equivalent is imshow rather than bar.
I am trying to plot values of z binned in (x,y). This would look like a hist2d in matplotlib but with the bin content being defined by another array instead of representing the number of counts. Is there any way to set the bin content in hist2d?
0
1
1,859
0
27,997,033
0
0
0
0
1
false
16
2014-06-09T14:50:00.000
1
6
0
pandas ValueError: numpy.dtype has the wrong size, try recompiling
24,122,850
0.033321
python,numpy,pandas
pip uninstall numpy uninstalls the old version of numpy pip install numpy finds and installs the latest version of numpy
I took a new clean install of OSX 10.9.3 and installed pip, and then did pip install pandas pip install numpy Both installs seemed to be perfectly happy, and ran without any errors (though there were a zillion warnings). When I tried to run a python script with import pandas, I got the following error: numpy.d...
0
1
27,508
0
35,639,956
0
0
0
0
1
false
0
2014-06-12T13:11:00.000
0
1
0
timeout pandas read_csv stringio timeout
24,185,302
0
django,python-2.7,pandas
This is over a year old, but this is the only SO thread I found on this issue so thought I'd comment on what we did to fix it. It turns out there are issues with pd.read_csv(FileObject, engine="C") on an embedded wsgi process. We ended up solving this issue by upgrading to pandas 0.17.0. Another working solution was ...
Pandas read_csv causes a timeout on my production server with python 2.7, django 1.6.5, apache and nginx. This happens only when using a string buffer like StringIO.StringIO or io.BytesIO. When supplying a filename as argument to read_csv everything works fine. Debugging does not help because on my development server t...
0
1
1,792
0
24,209,906
0
1
0
0
1
false
0
2014-06-13T16:20:00.000
0
1
0
Finding the yearly mean temperatures from scrambled months python
24,209,812
0
python,mean
Use a dictionary with the year as the key and the temp and a counter as the value (as a list). If the year isn't found add an entry with the mean temp and the counter at 1. If the year is already there add the temperature to the existing temp and increment the counter. The rest should be easy. Note this gives you the a...
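A sketch of the dictionary approach described above, with a few made-up rows standing in for the parsed file (the second 1974 value is invented for illustration):

```python
# Accumulate (sum, count) per year, then divide; plain dict, no libraries.
rows = [
    (2003, 12, -0.1),
    (1974, 1, -5.9),
    (2007, 8, 22.4),
    (1993, 7, 21.7),
    (1974, 7, 18.1),   # a second month for 1974 (made-up value)
]

totals = {}
for year, month, mean_temp in rows:
    s, n = totals.get(year, (0.0, 0))
    totals[year] = (s + mean_temp, n + 1)

yearly_mean = {year: s / n for year, (s, n) in totals.items()}
```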
Year  Month  MeanTemp  Max Temp  Min Temp  Total Rain(mm)  Total Snow(cm)
2003  12     -0.1      9         -10.8     45              19.2
1974  1      -5.9      8.9       -20       34.3            35.6
2007  8      22.4      34.8      9.7       20.8            0
1993  7      21.7      32.5      11        87.7            0
1...
0
1
554
0
25,560,508
0
1
0
0
1
false
2
2014-06-13T21:04:00.000
2
1
0
Anaconda Spyder integrated IPython display for dataframes
24,213,788
0.379949
ipython,spyder
The development version of Spyder has a Pandas Dataframe editor (and an numpy array editor, up to 3d). You can run this from source or wait for the next release, 2.3.1. This is probably more adequate to edit or visualize dataframe than using the embedded qtconsole.
The new anaconda spyder has an integrated iPython console. I really dislike the way dataframes are displayed in this console. There are some border graphics around the dataframe and resizing that occurs with window resizing that make it difficult to examine the contents. In addition, often for large dataframes if on...
0
1
1,871
0
24,214,946
0
0
0
0
1
true
1
2014-06-13T22:00:00.000
0
1
0
Mac OS Mavericks numpy Version Issue
24,214,364
1.2
python,macos,numpy
From our conversation in chat, and since adding the path to .bashrc did not work: putting /usr/local/bin first in /etc/paths will resolve the issue.
When I run 'pip freeze' it shows that numpy==1.8.1; however, when I start Python and import numpy and then check the version number via numpy.version.version I get 1.6. This is confusing to me and it's also creating a problem for scipy and matplotlib. For example, when I attempt to do the following import 'from matplot...
0
1
469
0
24,232,729
0
0
0
0
1
true
5
2014-06-15T18:23:00.000
6
1
0
Pandas diff() functionality on two columns in a dataframe
24,232,701
1.2
python,python-2.7,pandas,offset
You can shift A column first: df['A'].shift(-1) - df['B']
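A small sketch of the shift approach with made-up activity times; shift(-1) aligns each row's end time with the next row's start time:

```python
import pandas as pd

df = pd.DataFrame({
    "A": pd.to_datetime(["09:00", "10:30", "13:00"]),   # activity start
    "B": pd.to_datetime(["09:45", "11:00", "14:00"]),   # activity end
})

# Gap between the end of activity i and the start of activity i+1;
# the last row has no following activity, so its gap is NaT.
gap = df["A"].shift(-1) - df["B"]
```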
I have a data frame in which column A is the start time of an activity and column B is the finish time of that activity, and each row represents an activity (rows are arranged chronologically). I want to compute the difference in time between the end of one activity and the start of the next activity, i.e. df[i+1][A] ...
0
1
5,331
0
24,289,392
0
0
0
0
1
false
3
2014-06-18T11:30:00.000
3
2
0
How can i implement spherical hankel function of the first kind by scipy/numpy or sympy?
24,284,390
0.291313
python,numpy,scipy,sympy
Although it would be nice if there were an existing routine for calculating the spherical Hankel functions (like there is for the ordinary Hankel functions), they are just a (complex) linear combination of the spherical Bessel functions of the first and second kind so can be easily calculated from existing routines. S...
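A sketch of that linear combination using scipy's spherical Bessel routines, checked against the closed form h_0^(1)(z) = -i e^(iz) / z (the function name sph_hankel1 is our own):

```python
import cmath
from scipy.special import spherical_jn, spherical_yn

def sph_hankel1(n, z):
    """Spherical Hankel function of the first kind:
    h_n^(1)(z) = j_n(z) + i * y_n(z)."""
    return spherical_jn(n, z) + 1j * spherical_yn(n, z)

# Sanity check value at an arbitrary point against the n=0 closed form.
z = 1.7
closed_form = -1j * cmath.exp(1j * z) / z
```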
I knew that there is no builtin sph_hankel1 in scipy then i want to know that how to implement it in the right way? Additional: Just show me one correct implementation of sph_hankel1 either using of Scipy or Sympy.
0
1
997
0
24,317,131
0
0
0
0
2
false
2
2014-06-19T12:10:00.000
7
2
0
How to speed up Python code for running on a powerful machine?
24,306,285
1
python,performance,numpy,cuda,gpu
The comments and Moj's answer give a lot of good advice. I have some experience on signal/image processing with python, and have banged my head against the performance wall repeatedly, and I just want to share a few thoughts about making things faster in general. Maybe these help figuring out possible solutions with sl...
I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression. The main libraries I've used include: statsmodels (for regression) numpy (pretty much everywhere) scikit-image (for extracting HoG features of image...
0
1
6,194
0
24,306,811
0
0
0
0
2
false
2
2014-06-19T12:10:00.000
4
2
0
How to speed up Python code for running on a powerful machine?
24,306,285
0.379949
python,performance,numpy,cuda,gpu
I am afraid you can not speed up your program by just running it on a powerful computer. I had this issue while back. I first used python (very slow), then moved to C(slow) and then had to use other tricks and techniques. for example it is sometimes possible to apply some dimensionality reduction to speed up things whi...
I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression. The main libraries I've used include: statsmodels (for regression) numpy (pretty much everywhere) scikit-image (for extracting HoG features of image...
0
1
6,194
0
24,347,728
0
0
0
0
1
true
1
2014-06-19T16:47:00.000
1
1
0
Is there a way generate a chart on google spreadsheet automatically using Python?
24,312,068
1.2
python-2.7,charts,google-sheets,google-spreadsheet-api
AFAIK, no. There is no way to do this with python. Google-apps-script can do this, but the spreadsheet-api (Gdata) can't. You can make a call from Python to Google-apps-script and pass parameters.
Is there a way generate a chart on google spreadsheet automatically using Python? I checked gspread. There seems no api for making charts. Thanks~
0
1
1,378
0
24,524,206
0
0
0
0
2
false
15
2014-06-23T13:25:00.000
2
3
0
Naive Bayes: Imbalanced Test Dataset
24,367,141
0.132549
python,machine-learning,classification,scikit-learn,text-classification
I think gustavodidomenico makes a good point. You can think of Naive Bayes as learning a probability distribution, in this case of words belonging to topics. So the balance of the training data matters. If you use decision trees, say a random forest model, you learn rules for making the assignment (yes there are pro...
I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising. This classifier needs to run in real tim...
0
1
9,405
0
24,528,969
0
0
0
0
2
true
15
2014-06-23T13:25:00.000
11
3
0
Naive Bayes: Imbalanced Test Dataset
24,367,141
1.2
python,machine-learning,classification,scikit-learn,text-classification
You have encountered one of the problems with classification with a highly imbalanced class distribution. I have to disagree with those that state the problem is with the Naive Bayes method, and I'll provide an explanation which should hopefully illustrate what the problem is. Imagine your false positive rate is 0.01, ...
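The false-positive-rate point can be made concrete with Bayes' rule; the numbers below are illustrative, not from the original post:

```python
# Precision P(actually X | predicted X) under class imbalance, via Bayes' rule.
tpr = 0.90          # recall on the rare class
fpr = 0.01          # false positive rate on the majority class
prevalence = 0.001  # 1 in 1000 documents actually belongs to category X

precision = (tpr * prevalence) / (tpr * prevalence + fpr * (1 - prevalence))
# Even this apparently good classifier is wrong on roughly 11 of every 12
# documents it flags, purely because negatives vastly outnumber positives.
```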
I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising. This classifier needs to run in real tim...
0
1
9,405
0
24,424,870
0
0
0
0
1
false
0
2014-06-25T04:05:00.000
0
1
0
How to plot text documents in a scatter map?
24,400,012
0
python,numpy,matplotlib,scikit-learn
If X is a sparse matrix, you probably need X = X.todense() in order to get access to the data in the correct format. You probably want to check X.shape before doing this though, as if X is very large (but very sparse) it may consume a lot of memory when "densified".
I'm using scikit to perform text classification and I'm trying to understand where the points lie with respect to my hyperplane to decide how to proceed. But I can't seem to plot the data that comes from the CountVectorizer() function. I used the following function: pl.scatter(X[:, 0], X[:, 1]) and it gives me the erro...
0
1
99
0
24,466,862
0
1
0
0
1
true
1
2014-06-27T11:01:00.000
1
1
0
SPSS equivalent of Python Dictionary
24,450,211
1.2
python,dictionary,spss
A Python dictionary is an in-memory hash table where lookup of individual elements requires fixed time, and there is no deterministic order. SPSS data files are disk-based and sequential and are designed for fast, in-order access for arbitrarily large amounts of data. So these are intended for quite different purposes...
I was trying to Google above, but knowing absolutely nothing about SPSS I wasn't sure what search phrase I should be using. From my initial search (tried using words: "Dictionary" and "Scripting Dictionary") it seems there is something called Data Dictionary in SPSS, but description suggest it is not the same as Python...
0
1
176
0
24,494,507
0
0
0
0
1
false
2
2014-06-30T15:36:00.000
-1
1
0
Solving system of linear inequalities in 3 or more variables - Python
24,493,849
-0.197375
python,python-2.7
Make a matrix object and use Cramer's method (Cramer's rule).
I want to solve systems of linear inequalities in 3 or more variables. That is, to find all possible solutions. I originally found GLPK and tried the python binding, but the last few updates to GLPK changed the APIs and broke the bindings. I haven't been able to find a way to make it work. I would like to have the symb...
0
1
1,233
0
24,500,552
0
0
0
0
1
false
0
2014-06-30T23:43:00.000
0
1
0
Histogram bin size(matplotlib)
24,500,522
0
python,matplotlib
Use the numpy function histogram (numpy.histogram), which returns two arrays: the count of data points in each bin and the bin edges.
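For instance, a minimal numpy sketch (made-up data):

```python
import numpy as np

data = [1, 2, 2, 3, 3, 3, 9]
# counts[i] is the number of data points that fell into bin i;
# edges has len(counts) + 1 entries marking the bin boundaries.
counts, edges = np.histogram(data, bins=3)

print(counts)  # [6 0 1] for three equal-width bins over [1, 9]
print(edges)
```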
I'm creating a histogram (which is NOT normalized) using matplotlib. I want to get the exact size of each bin. That is, not the width but the length. In other words, the number of data points contained in each bin. Any tips???
0
1
148
0
24,505,486
0
0
0
0
1
true
4
2014-07-01T05:58:00.000
3
2
0
Quickest linear regression implementation in python
24,503,344
1.2
python,scipy,scikit-learn,statsmodels,pymc
The scikit-learn SGDRegressor class is (iirc) the fastest, but would probably be more difficult to tune than a simple LinearRegression. I would give each of those a try, and see if they meet your needs. I also recommend subsampling your data - if you have many gigs but they are all samples from the same distribution, yo...
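A rough sketch of trying both on synthetic data (assuming scikit-learn is installed; all numbers here are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(2000, 5)                        # well-scaled features help SGD converge
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.01 * rng.randn(2000)

ols = LinearRegression().fit(X, y)            # exact least squares
sgd = SGDRegressor(max_iter=1000, tol=1e-6, random_state=0).fit(X, y)

print(ols.coef_)
print(sgd.coef_)
```

On really large data you would feed SGDRegressor in chunks via partial_fit and subsample as suggested above; standardise the features first, since SGD is sensitive to scaling.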
I'm performing a stepwise model selection, progressively dropping variables with a variance inflation factor over a certain threshold. In order to do this, I'm running OLS many, many times on datasets ranging from a few hundred MB to 10 gigs. What is the quickest implementation of OLS would be for larger datasets? The ...
0
1
3,098
0
24,524,359
0
0
0
0
1
true
3
2014-07-01T17:06:00.000
2
1
0
How do you plot the hyperplane of an sklearn svm with more than 2 features in matplotlib?
24,515,783
1.2
python,matplotlib,plot
If your linear SVM classifier works quite well, then that suggests there is a hyperplane which separates your data. So there will be a nice 2D geometric representation of the decision boundary. To understand the "how" you need to look at the support vectors themselves, see which ones contribute to which side of the ...
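Even without a picture you can inspect the sides numerically; a small numpy sketch (the weight vector, intercept, and points below are hypothetical values, not from the poster's 250-feature model):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])    # hypothetical clf.coef_[0] of a linear SVM
b = 0.25                          # hypothetical clf.intercept_[0]

points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# Signed distance of each point to the hyperplane w.x + b = 0;
# the sign tells you which side (which class) the point falls on.
signed = (points @ w + b) / np.linalg.norm(w)
print(signed)
```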
I have a scikits-learn linear svm.SVC classifier designed to classify text into 2 classes (-1,1). The classifier uses 250 features from the training set to make its predictions, and it works fairly well. However, I can't figure out how to plot the hyperplane or the support vectors in matplotlib. All the examples online u...
0
1
1,876
0
24,516,539
0
0
0
0
1
false
3
2014-07-01T17:47:00.000
2
3
0
efficient, fast numpy histograms
24,516,396
0.132549
python,arrays,performance,numpy,histogram
First, fill in your 16 bins without considering date at all. Then, sort the elements within each bin by date. Now, you can use binary search to efficiently locate a given year/month/week within each bin.
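A sketch of that layout in numpy (the timestamps here are fake integers standing in for real dates):

```python
import numpy as np

rng = np.random.RandomState(0)
timestamps = rng.randint(0, 1_000_000, size=10_000)  # fake time axis
values = rng.randint(40, 200, size=10_000)           # the 40..199 integer values

# Step 1: 16 bins by value only (40-49, 50-59, ..., 190-199).
bins = []
for b in range(16):
    ts = timestamps[(values - 40) // 10 == b]
    ts.sort()                                        # Step 2: time-sorted inside each bin
    bins.append(ts)

# Step 3: binary search finds any time window inside a bin in O(log n).
lo, hi = 200_000, 300_000
window_counts = [np.searchsorted(ts, hi) - np.searchsorted(ts, lo) for ts in bins]
```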
I have a 2D numpy array consisting of ca. 15'000'000 datapoints. Each datapoint has a timestamp and an integer value (between 40 and 200). I must create histograms of the datapoint distribution (16 bins: 40-49, 50-59, etc.), sorted by year, by month within the current year, by week within the current year, and by day w...
0
1
4,163
0
24,518,289
0
1
0
0
2
false
7
2014-07-01T19:18:00.000
4
3
0
How to stop NLTK stemmer from removing the trailing "e"?
24,517,722
0.26052
python,nlp,nltk
The goal of a stemmer is to remove as much of the word as possible to allow it to cover as many cases as possible, yet retain the core of the word. One reason profile might go to profil is to cover the case of profiling. You would need a conditional or another stemmer in order to guard against this, although I would im...
I'm using NLTK stemmer to remove grammatical variations of a stem word. However, the Port or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil. How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will ...
0
1
3,819
0
24,521,458
0
1
0
0
2
true
7
2014-07-01T19:18:00.000
8
3
0
How to stop NLTK stemmer from removing the trailing "e"?
24,517,722
1.2
python,nlp,nltk
I agree with Philip that the goal of a stemmer is to retain only the stem. For this particular case you can try a lemmatizer instead of a stemmer; it retains more of the word and is designed precisely to collapse different forms of a word, like 'profiles' --> 'profile'. There is a class in NLTK for this - try WordNe...
I'm using NLTK stemmer to remove grammatical variations of a stem word. However, the Port or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil. How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will ...
0
1
3,819
0
24,519,951
0
0
0
0
2
true
2
2014-07-01T19:22:00.000
3
2
0
Online version of scikit-learn's TfidfVectorizer
24,517,793
1.2
python,machine-learning,nlp,scikit-learn,vectorization
Intrinsically you cannot use TF-IDF in an online fashion, as the IDF of all past features changes with every new document - which would mean re-visiting and re-training on all the previous documents, and would no longer be online. There may be some approximations, but you would have to implement them yourself.
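The moving-IDF problem is easy to demonstrate with a toy sketch (plain Python, using the smoothed IDF formula that scikit-learn's TfidfTransformer applies with smooth_idf=True):

```python
import math

def idf(n_docs, doc_freq):
    # Smoothed inverse document frequency: log((1 + N) / (1 + df)) + 1
    return math.log((1 + n_docs) / (1 + doc_freq)) + 1

# "cat" has appeared in 1 of the 2 documents seen so far:
before = idf(n_docs=2, doc_freq=1)
# One more document arrives that does NOT contain "cat" -
# yet the weight of the already-seen term still changes:
after = idf(n_docs=3, doc_freq=1)
print(before, after)
```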
I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a ...
0
1
2,712
0
24,841,469
0
0
0
0
2
false
2
2014-07-01T19:22:00.000
4
2
0
Online version of scikit-learn's TfidfVectorizer
24,517,793
0.379949
python,machine-learning,nlp,scikit-learn,vectorization
You can do "online" TF-IDF, contrary to what was said in the accepted answer. In fact, every search engine (e.g. Lucene) does. What does not work is assuming you have all TF-IDF vectors in memory. Search engines such as Lucene naturally avoid keeping all data in memory. Instead they load one column at a time (which due to ...
I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a ...
0
1
2,712
0
24,519,425
0
0
0
0
1
false
1
2014-07-01T20:54:00.000
0
2
0
Matplotlib saves pdf with data outside set
24,519,113
0
python,pdf,matplotlib,plot
If you don't have a requirement to use PDF figures, you can save the matplotlib figures as .png; this format contains only what is rendered on screen. For example, I tried saving a large scatter plot as PDF and its size was 198M; as png it came out as 270K. Plus I've never had any problems using png inside LaTeX.
I have a problem with Matplotlib. I usually make big plots with many data points and then, after zooming or setting limits, I save in pdf only a specific subset of the original plot. The problem comes when I open this file: matplotlib saves all the data into the pdf, merely hiding the points outside the visible range. This...
0
1
247
0
24,531,535
0
0
0
0
1
true
0
2014-07-02T11:22:00.000
1
1
0
I have generated a pdf file using matplotlib and I want to add a logo to this pdf file. How can I do it
24,529,823
1.2
image,python-2.7,matplotlib,pdf-generation
If you can do it the other way round, it is easier: plot the image load the logo from file with, e.g. Image module (PIL) add the logo with plt.imshow, use the extent keyword to place it correctly save the image into PDF (You may even want to plot the logo first, so that it stays in the background.) Unfortunately, thi...
I am using matplotlib to draw a graph using some data and I have saved it in PDF format. Now I want to add a logo to this file. How can I do this? Thanks in advance
0
1
141
0
45,240,779
0
0
0
0
1
false
7
2014-07-02T16:36:00.000
5
2
0
How to Combine pyWavelet and openCV for image processing?
24,536,552
0.462117
python,opencv,image-processing,dwt
Navaneeth's answer is correct, but with two corrections: 1- OpenCV reads and saves images as BGR, not RGB, so you should use cv2.COLOR_BGR2GRAY to be exact. 2- The maximum level in _multilevel.py is 7, not 10, so you should do: w2d("test1.png", 'db1', 7)
I need to do image processing in Python. I want to use a wavelet transform as the filterbank. Can anyone suggest which library I should use? I have pywavelet installed, but I don't know how to combine it with OpenCV. If I use the wavedec2 command, it raises ValueError("Expected 2D input data."). Can anyone help me?
0
1
11,865
0
61,270,549
0
0
0
0
1
false
6
2014-07-04T18:21:00.000
0
3
0
Sample a truncated integer power law in Python?
24,579,269
0
python,numpy,random,distribution
Use numpy.random.zipf and just reject any samples greater than or equal to m.
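A sketch of that rejection approach (a and m named as in the question; note numpy's zipf requires a > 1):

```python
import numpy as np

def truncated_zipf(a, m, size, rng=np.random):
    """Sample `size` integers in [1, m) with P(x) proportional to 1/x**a."""
    out = np.empty(size, dtype=np.int64)
    filled = 0
    while filled < size:
        draw = rng.zipf(a, size=size)      # unbounded Zipf draws
        keep = draw[draw < m]              # reject anything >= m
        take = min(len(keep), size - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

samples = truncated_zipf(a=2.0, m=10, size=1000)
```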
What function can I use in Python if I want to sample a truncated integer power law? That is, given two parameters a and m, generate a random integer x in the range [1,m) that follows a distribution proportional to 1/x^a. I've been searching around numpy.random, but I haven't found this distribution.
0
1
2,482