Dataset schema (one entry per Stack Overflow answer):

  Column           Type      Values
  A_Id             int64     5.3k to 72.5M
  Q_Id             int64     5.14k to 60M
  Title            string    lengths 15 to 149
  Question         string    lengths 49 to 9.42k
  Answer           string    lengths 18 to 5.54k
  Tags             string    lengths 6 to 90
  CreationDate     string    length 23 (fixed)
  is_accepted      bool      2 classes
  Q_Score          int64     0 to 1.72k
  Users Score      int64     -11 to 327
  Score            float64   -1 to 1.2
  AnswerCount      int64     1 to 31
  ViewCount        int64     7 to 3.27M
  Available Count  int64     1 to 13

  Topic flags (int64, 0 or 1): GUI and Desktop Applications, Networking and APIs,
  Python Basics and Environment, Other, Database and SQL,
  System Administration and DevOps, Web Development, and
  Data Science and Machine Learning (1 for every row in this slice).
Q: NLTK certainty measure?  [Q_Id 9,288,221 | A_Id 9,300,400 | created 2012-02-15T05:16:00.000]
  Tags: python,classification,nltk,probability
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 2 | Views: 524 | Available Count: 2
  Question: In NLTK, if I write a NaiveBayes classifier for say movie reviews (determining if positive or negative), how can I determine the classifier "certainty" when classify a particular review? That is, I know how to run an 'accuracy' test on a given test set to see the general accuracy of the classifier. But is there anyway ...
  Answer: I am not sure about the NLTK implementation of Naive Bayes, but the Naive Bayes algorithm outputs probabilities of class membership. However, they are horribly calibrated. If you want good measures of certainty, you should use a different classification algorithm. Logistic regression will do a decent job at producing...

Q: NLTK certainty measure?  [Q_Id 9,288,221 | A_Id 9,300,932 | created 2012-02-15T05:16:00.000]
  Tags: python,classification,nltk,probability
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 1 | Answer Score: 0.099668 | Answers: 2 | Views: 524 | Available Count: 2
  Question: same as above.
  Answer: nltk.classify.util.log_likelihood. For this problem, you can also try measuring the results by precision, recall, F-score at the token level, that is, scores for positive and negative respectively.
Q: 2d random walk in python - drawing hypotenuse from distribution  [Q_Id 9,297,679 | A_Id 9,297,835 | created 2012-02-15T16:57:00.000]
  Tags: python,random,trigonometry,hypotenuse
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 0 | Answer Score: 0 | Answers: 2 | Views: 996 | Available Count: 2
  Question: I'm writing a simple 2d brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2d displacement (ie hypotenuse) is drawn from a distribution, and then translate this to new x and y coordinates. This is probabl...
  Answer: If you have a hypotenuse in the form of a line segment, then you have two points. From two points in the form P0 = (x0, y0) P1 = (x1, y1) you can get the x and y displacements by subtracting x0 from x1 and y0 from y1. If your hypotenuse is actually a vector in a polar coordinate plane, then yes, you'll have to take the...

Q: 2d random walk in python - drawing hypotenuse from distribution  [Q_Id 9,297,679 | A_Id 9,298,238 | created 2012-02-15T16:57:00.000]
  Tags: python,random,trigonometry,hypotenuse
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 2 | Views: 996 | Available Count: 2
  Question: same as above.
  Answer: This is best done by using polar coordinates (r, theta) for your distributions (where r is your "hypotenuse"), and then converting the result to (x, y), using x = r cos(theta) and y = r sin(theta). That is, select r from whatever distribution you like, and then select a theta, usually from a flat, 0 to 360 deg, distr...
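The accepted answer's polar-coordinate recipe can be sketched as follows. This is a minimal illustration, not the asker's code; the exponential draw for r is an arbitrary stand-in for "whatever distribution you like":

```python
import math
import random

def polar_to_xy(r, theta):
    # convert a magnitude/angle pair to Cartesian displacements
    return r * math.cos(theta), r * math.sin(theta)

def random_step():
    # magnitude (the "hypotenuse") from any distribution you like;
    # exponential is only an example
    r = random.expovariate(1.0)
    theta = random.uniform(0.0, 2.0 * math.pi)  # uniform direction
    return polar_to_xy(r, theta)
```

Because theta is uniform, the direction is isotropic, and math.hypot(dx, dy) recovers exactly the r that was drawn.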
Q: Pybrain: Completely linear network  [Q_Id 9,340,677 | A_Id 9,349,317 | created 2012-02-18T11:15:00.000]
  Tags: python,neural-network,backpropagation,forecasting,pybrain
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 1 | Answer Score: 0.197375 | Answers: 1 | Views: 1,055 | Available Count: 1
  Question: I am currently trying to create a Neural Network with pybrain for stock price forecasting. Up to now I have only used Networks with a binary output. For those Networks sigmoid inner layers were sufficient but I don't think this would be the right approach for Forecasting a price. The problem is, that when I create such...
  Answer: Try applying log() to the price-attribute - then scale all inputs and outputs to [-1..1] - of course, when you want to get the price from the network-output you'll have to reverse log() with exp()

Q: What are the best algorithms for Word-Sense-Disambiguation  [Q_Id 9,355,460 | A_Id 9,355,945 | created 2012-02-20T02:25:00.000]
  Tags: python,nlp,nltk,text-processing
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 0 | Users Score: 1 | Answer Score: 0.099668 | Answers: 2 | Views: 1,301 | Available Count: 1
  Question: What are the best algorithms for Word-Sense-Disambiguation I read a lot of posts, and each one proves in a research document that a specific algorithm is the best, this is very confusing. I just come up with 2 realizations 1-Lesk Algorithm is deprecated, 2-Adapted Lesk is good but not the best Please if anybody based o...
  Answer: Well, WSD is an open problem (since it's language... and AI...), so currently each of those claims are in some sense valid. If you are engaged in a domain-specific project, I think you'd be best served by a statistical method (Support Vector Machines) if you can find a proper corpus. Personally, if you're using pytho...

Q: Missing values in scikits machine learning  [Q_Id 9,365,982 | A_Id 17,582,671 | created 2012-02-20T17:56:00.000]
  Tags: python,machine-learning,scikit-learn,missing-data,scikits
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 34 | Users Score: 20 | Answer Score: 1 | Answers: 7 | Views: 39,190 | Available Count: 3
  Question: Is it possible to have missing values in scikit-learn ? How should they be represented? I couldn't find any documentation about that.
  Answer: I wish I could provide a simple example, but I have found that RandomForestRegressor does not handle NaN's gracefully. Performance gets steadily worse when adding features with increasing percentages of NaN's. Features that have "too many" NaN's are completely ignored, even when the nan's indicate very useful informati...
Q: Missing values in scikits machine learning  [Q_Id 9,365,982 | A_Id 48,199,308 | created 2012-02-20T17:56:00.000]
  Tags: python,machine-learning,scikit-learn,missing-data,scikits
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 34 | Users Score: 1 | Answer Score: 0.028564 | Answers: 7 | Views: 39,190 | Available Count: 3
  Question: same as above.
  Answer: When you run into missing values on input features, the first order of business is not how to impute the missing. The most important question is WHY SHOULD you. Unless you have clear and definitive mind what the 'true' reality behind the data is, you may want to curtail urge to impute. This is not about technique or pa...

Q: Missing values in scikits machine learning  [Q_Id 9,365,982 | A_Id 18,020,591 | created 2012-02-20T17:56:00.000]
  Tags: python,machine-learning,scikit-learn,missing-data,scikits
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 34 | Users Score: 11 | Answer Score: 1 | Answers: 7 | Views: 39,190 | Available Count: 3
  Question: same as above.
  Answer: I have come across very similar issue, when running the RandomForestRegressor on data. The presence of NA values were throwing out "nan" for predictions. From scrolling around several discussions, the Documentation by Breiman recommends two solutions for continuous and categorical data respectively. Calculate the Medi...
Q: Optimizer/minimizer for integer argument  [Q_Id 9,367,630 | A_Id 9,367,777 | created 2012-02-20T20:01:00.000]
  Tags: python,numpy,scipy
  Topics: Other, Data Science and Machine Learning | accepted: false | Q_Score: 0 | Users Score: 0 | Answer Score: 0 | Answers: 1 | Views: 569 | Available Count: 1
  Question: Does anybody know a python function (proven to work and having its description in internet) which able to make minimum search for a provided user function when argument is an array of integers? Something like scipy.optimize.fmin_l_bfgs_b scipy.optimize.leastsq but for integers
  Answer: There is no general solution for this problem. If you know the properties of the function it should be possible to deduce some bounds for the variables and then test all combinations. But that is not very efficient. You could approximate a solution with scipy.optimize.leastsq and then round the results to integers. The...

Q: Why has the numpy random.choice() function been discontinued?  [Q_Id 9,374,885 | A_Id 9,375,030 | created 2012-02-21T09:17:00.000]
  Tags: python,numpy,scipy
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 8 | Users Score: 8 | Answer Score: 1.2 | Answers: 1 | Views: 6,270 | Available Count: 1
  Question: I've been working with numpy and needed the random.choice() function. Sadly, in version 2.0 it's not in the random or the random.mtrand.RandomState modules. Has it been excluded for a particular reason? There's nothing in the discussion or documentation about it! For info, I'm running Numpy 2.0 on python 2.7 on mac os....
  Answer: random.choice is as far as I can tell part of python itself, not of numpy. Did you import random? Update: numpy 1.7 added a new function, numpy.random.choice. Obviously, you need numpy 1.7 for it. Update2: it seems that in unreleased numpy 2.0, this was temporarily called numpy.random.sample. It has been renamed back. ...
Q: How much memory is used by a numpy ndarray?  [Q_Id 9,395,758 | A_Id 62,791,902 | created 2012-02-22T13:29:00.000]
  Tags: python,arrays,memory,numpy,floating-point
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: false | Q_Score: 23 | Users Score: 0 | Answer Score: 0 | Answers: 3 | Views: 14,086 | Available Count: 1
  Question: Does anybody know how much memory is used by a numpy ndarray? (with let's say 10,000,000 float elements).
  Answer: Easily: print(a.size * a.itemsize // 1024 // 1024) gives roughly how many MB are used; the per-element size depends on the dtype (float64 = 8 B, int8 = 1 B, ...).
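The answer's rule is just element count times bytes per element (NumPy exposes the same number directly as a.nbytes). A back-of-the-envelope check that needs no NumPy at all, assuming 8 bytes per float64:

```python
def ndarray_megabytes(n_elements, itemsize):
    """Approximate memory of a dense array: elements * bytes-per-element, in MiB."""
    return n_elements * itemsize / 1024 ** 2

# 10,000,000 float64 elements (8 bytes each) is roughly 76 MiB
mb = ndarray_megabytes(10_000_000, 8)
```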
Q: have local numpy override global  [Q_Id 9,419,327 | A_Id 9,419,462 | created 2012-02-23T18:53:00.000]
  Tags: python,numpy,centos
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 4 | Views: 1,857 | Available Count: 1
  Question: I'm using a server where I don't have administrative rights and I need to use the latest version of numpy. The system administrator insists that he cannot update the global numpy to the latest version, so I have to install it locally. I can do that without trouble, but how do I make sure that "import numpy" results in...
  Answer: Python searches the path in order, so simply put the directory where you installed your NumPy first in the path. You can check numpy.version.version to make sure you're getting the version you want.
Q: RAM requirements for matrix processing  [Q_Id 9,455,651 | A_Id 9,455,730 | created 2012-02-26T18:05:00.000]
  Tags: python,matrix
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 2 | Users Score: 4 | Answer Score: 1.2 | Answers: 2 | Views: 793 | Available Count: 1
  Question: So I'm designing a matrix for a computer vision project and believe I have one of my calculations wrong. Unfortunately, I'm not sure where it's wrong. I was considering creating a matrix that was 100,000,000 x 100,000,000 with each 'cell' containing a single integer (1 or 0). If my calculations are correct, it would ...
  Answer: In theory, an element of {0, 1} should consume at most 1 bit per cell. That means 8 cells per byte or 1192092895 megabytes or about one petabyte, which is too much, unless you are google :) Not to mention, even processing (or saving) such matrix would take too much time (about a year I'd say). You said that in many cas...
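The arithmetic in the accepted answer can be reproduced directly: at 1 bit per {0, 1} cell, a 100,000,000 x 100,000,000 matrix needs a bit over one pebibyte even in the best case.

```python
n = 100_000_000
bits = n * n                   # one bit per {0, 1} cell
byts = bits // 8               # 8 cells per byte
mebibytes = byts // 1024 ** 2  # matches the answer's ~1,192,092,895 MB
pebibytes = byts / 1024 ** 5   # just over one pebibyte
```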
Q: How do i fill "holes" in an image?  [Q_Id 9,478,347 | A_Id 9,478,656 | created 2012-02-28T08:05:00.000]
  Tags: python,image-processing,interpolation,mask,astronomy
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 7 | Users Score: 2 | Answer Score: 0.132549 | Answers: 3 | Views: 5,262 | Available Count: 1
  Question: I have photo images of galaxies. There are some unwanted data on these images (like stars or aeroplane streaks) that are masked out. I don't just want to fill the masked areas with some mean value, but to interpolate them according to surrounding data. How do i do that in python? We've tried various functions in SciPy...
  Answer: What you want is not interpolation at all. Interpolation depends on the assumption that data between known points is roughly contiguous. In any non-trivial image, this will not be the case. You actually want something like the content-aware fill that is in Photoshop CS5. There is a free alternative available in The GIM...

Q: Eclipse editor doesn't recognize Scipy content  [Q_Id 9,500,524 | A_Id 9,500,809 | created 2012-02-29T14:02:00.000]
  Tags: python,eclipse,scipy
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 1 | Views: 611 | Available Count: 1
  Question: I just installed Scipy and Numpy on my machine and added them to the System Library option in eclipse. Now the program runs fine, but eclipse editor keeps giving this red mark on the side says "Unresolved import". I guess I didn't configure correctly. Any one know how to fix this ? Thanks.
  Answer: Try to recreate your project in PyDev and add these new libraries.

Q: Random number generation with C++ or Python  [Q_Id 9,523,570 | A_Id 9,523,685 | created 2012-03-01T20:31:00.000]
  Tags: c++,python,random,simulation,probability
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 2 | Users Score: 1 | Answer Score: 0.049958 | Answers: 4 | Views: 3,275 | Available Count: 1
  Question: I heard that computation results can be very sensitive to choice of random number generator. 1 I wonder whether it is relevant to program own Mersenne-Twister or other pseudo-random routines to get a good number generator. Also, I don't see why I should not trust native or library generators as random.uniform() in nu...
  Answer: At least in C++, rand is sometimes rather poor quality, so code should rarely use it for anything except things like rolling dice or shuffling cards in children's games. In C++ 11, however, a set of random number generator classes of good quality have been added, so you should generally use them by preference. Seeding ...
Q: Cassandra/Pycassa: Getting random rows  [Q_Id 9,566,060 | A_Id 9,567,221 | created 2012-03-05T11:45:00.000]
  Tags: python,cassandra,uuid,pycassa
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 3 | Users Score: 1 | Answer Score: 1.2 | Answers: 3 | Views: 2,264 | Available Count: 1
  Question: Is there a possibility to retrieve random rows from Cassandra (using it with Python/Pycassa)? Update: With random rows I mean randomly selected rows!
  Answer: You might be able to do this by making a get_range request with a random start key (just a random string), and a row_count of 1. From memory, I think the finish key would need to be the same as start, so that the query 'wraps around' the keyspace; this would normally return all rows, but the row_count will limit that....

Q: Blank Values in Excel File From CSV (not just rows)  [Q_Id 9,570,157 | A_Id 9,570,191 | created 2012-03-05T16:23:00.000]
  Tags: python,excel,csv
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 0 | Users Score: 0 | Answer Score: 0 | Answers: 4 | Views: 2,496 | Available Count: 2
  Question: I'm currently opening CSV files in Excel with multiple columns, where values will only appear if a number changes. For example, the data may ACTUALLY be: 90,90,90,90,91. But it will only appear as 90,,,,91. I'd really like the values in between to be filled with 90s. Is there anyway python could help with this? I reall...
  Answer: Can you see the missing values when you open the CSV with wordpad? If so, then Python or any other scripting language should see them too.

Q: Blank Values in Excel File From CSV (not just rows)  [Q_Id 9,570,157 | A_Id 9,570,477 | created 2012-03-05T16:23:00.000]
  Tags: python,excel,csv
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 0 | Users Score: 0 | Answer Score: 0 | Answers: 4 | Views: 2,496 | Available Count: 2
  Question: same as above.
  Answer: You can also do this entirely in excel: Select column (or whatever range you're working with), then go to Edit>Go To (Ctrl+G) and click Special. Check Blanks & click OK. This will select only the empty cells within the list. Now type the = key, then up arrow and ctrl-enter. This will put a formula in every blank cel...
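The fill-in-Python route the asker hints at can be sketched with the stdlib csv module alone: carry the last seen value forward whenever a cell is empty. The sample data below is invented to match the question's 90,,,,91 pattern.

```python
import csv
import io

def fill_forward(rows):
    """Replace empty cells with the most recent non-empty value in that column."""
    last = {}
    filled = []
    for row in rows:
        new = []
        for i, cell in enumerate(row):
            if cell == "":
                cell = last.get(i, "")
            last[i] = cell
            new.append(cell)
        filled.append(new)
    return filled

data = io.StringIO("90,a\n,b\n,c\n91,d\n")
rows = fill_forward(list(csv.reader(data)))
```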
Q: Clustering using k-means in python  [Q_Id 9,595,494 | A_Id 9,597,117 | created 2012-03-07T03:43:00.000]
  Tags: python,tags,cluster-analysis,data-mining,k-means
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 3 | Users Score: 4 | Answer Score: 0.197375 | Answers: 4 | Views: 2,252 | Available Count: 1
  Question: I have a document d1 consisting of lines of form user_id tag_id. There is another document d2 consisting of tag_id tag_name I need to generate clusters of users with similar tagging behaviour. I want to try this with k-means algorithm in python. I am completely new to this and cant figure out how to start on this. Can ...
  Answer: Since the data you have is binary and sparse (in particular, not all users have tagged all documents, right)? So I'm not at all convinced that k-means is the proper way to do this. Anyway, if you want to give k-means a try, have a look at the variants such as k-medians (which won't allow "half-tagging") and convex/sphe...

Q: Solve linear system in Python without NumPy  [Q_Id 9,611,746 | A_Id 9,646,517 | created 2012-03-08T01:31:00.000]
  Tags: python,numpy,scipy,jython,linear-algebra
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 1 | Views: 2,868 | Available Count: 1
  Question: I have to solve linear equations system using Jython, so I can't use Num(Sci)Py for this purpose. What are the good alternatives?
  Answer: As suggested by @talonmies' comment, the real answer to this is 'find an equivalent Java package.'
Q: Python Pandas: can't find numpy.core.multiarray when importing pandas  [Q_Id 9,641,916 | A_Id 11,955,915 | created 2012-03-09T22:36:00.000]
  Tags: python,numpy,pandas
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 7 | Users Score: 0 | Answer Score: 0 | Answers: 4 | Views: 5,833 | Available Count: 3
  Question: I'm trying to get my code (running in eclipse) to import pandas. I get the following error: "ImportError: numpy.core.multiarray failed to import" when I try to import pandas. I'm using python2.7, pandas 0.7.1, and numpy 1.5.1
  Answer: @user248237: I second Keith's suggestion that its probably a 32/64 bit compatibility issue. I ran into the same problem just this week while trying to install a different module. Check the versions of each of your modules and make everything matches. In general, I would stick to the 32 bit versions -- not all modules...

Q: Python Pandas: can't find numpy.core.multiarray when importing pandas  [Q_Id 9,641,916 | A_Id 12,003,130 | created 2012-03-09T22:36:00.000]
  Tags: python,numpy,pandas
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 7 | Users Score: 1 | Answer Score: 0.049958 | Answers: 4 | Views: 5,833 | Available Count: 3
  Question: same as above.
  Answer: Just to make sure: Did you install pandas from the sources ? Make sure it's using the version of NumPy you want. Did you upgrade NumPy after installing pandas? Make sure to recompile pandas, as there can be some changes in the ABI (but w/ that version of NumPy, I doubt it's the case) Are you calling pandas and/or Num...

Q: Python Pandas: can't find numpy.core.multiarray when importing pandas  [Q_Id 9,641,916 | A_Id 12,007,981 | created 2012-03-09T22:36:00.000]
  Tags: python,numpy,pandas
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 7 | Users Score: 1 | Answer Score: 0.049958 | Answers: 4 | Views: 5,833 | Available Count: 3
  Question: same as above.
  Answer: Try to update to numpy version 1.6.1. Helped for me!
Q: Interpolation of large 2d masked array  [Q_Id 9,644,735 | A_Id 11,876,131 | created 2012-03-10T07:29:00.000]
  Tags: python,matrix,numpy,scipy,interpolation
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 3 | Users Score: 1 | Answer Score: 0.197375 | Answers: 1 | Views: 1,646 | Available Count: 1
  Question: I have a numpy masked matrix. And wanted to do interpolation in the masked regions. I tried the RectBivariateSpline but it didn't recognize the masked regions as masked and used those points also to interpolate. I also tried the bisplrep after creating the X,Y,Z 1d vectors. They were each of length 45900. It took a lot...
  Answer: Very late, but... I have a problem similar to yours, and am getting the segmentation fault with bisplines, and also memory error with rbf (in which the "thin_plate" function works great for me. Since my data is unstructured but is created in a structured manner, I use downsampling to half or one third of the density of...

Q: Image spot detection in Python  [Q_Id 9,650,670 | A_Id 9,651,522 | created 2012-03-10T22:18:00.000]
  Tags: python,image-processing,computer-vision
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 1 | Answer Score: 0.099668 | Answers: 2 | Views: 2,147 | Available Count: 1
  Question: I have millions of images containing every day photos. I'm trying to find a way to pick out those in which some certain colours are present, say, red and orange, disregarding the shape or object. The size may matter - e.g., at least 50x50 px. Is there an efficient and lightweight library for achieving this? I know ther...
  Answer: I do not know if there is a library but you could segment these areas using a simple thresholding segmentation algorithm. Say, you want to find red spots. Extract the red channel from the image, select a threshold, and eliminate pixels that are below the threshold. The resulting pixels are your spots. To find a suitabl...

Q: Running m-files from Python  [Q_Id 9,675,386 | A_Id 9,675,452 | created 2012-03-12T21:56:00.000]
  Tags: python,matlab
  Topics: Other, Data Science and Machine Learning | accepted: false | Q_Score: 2 | Users Score: 2 | Answer Score: 0.099668 | Answers: 4 | Views: 5,746 | Available Count: 1
  Question: pymat doesnt seem to work with current versions of matlab, so I was wondering if there is another equivalent out there (I havent been able to find one). The gist of what would be desirable is running an m-file from python (2.6). (and alternatives such as scipy dont fit since I dont think they can run everything from th...
  Answer: You can always start matlab as separate subprocess and collect results via std.out/files. (see subprocess package).
Q: How do I tell pandas to parse a particular column as a datetime object, but not make it an index?  [Q_Id 9,723,000 | A_Id 9,739,828 | created 2012-03-15T15:33:00.000]
  Tags: python,parsing,datetime,pandas
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: true | Q_Score: 6 | Users Score: 7 | Answer Score: 1.2 | Answers: 1 | Views: 1,378 | Available Count: 1
  Question: I have a csv file where one of the columns is a date/time string. How do I parse it correctly with pandas? I don't want to make that column the index. Thanks! Uri
  Answer: Pass dateutil.parser.parse (or another datetime conversion function) in the converters argument to read_csv
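The converters idea in the accepted answer means "give read_csv a function to apply to that one column" (in pandas: read_csv(..., converters={"when": ...}), or parse_dates=["when"] without index_col). The same idea can be illustrated with only the stdlib; the column names and sample rows here are made up:

```python
import csv
import io
from datetime import datetime

raw = io.StringIO("when,value\n2012-03-15 15:33:00,7\n2012-03-16 03:36:00,2\n")
rows = [
    # parse the date/time column while reading; it stays an ordinary
    # column, not an index
    {"when": datetime.strptime(r["when"], "%Y-%m-%d %H:%M:%S"),
     "value": int(r["value"])}
    for r in csv.DictReader(raw)
]
```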
Q: Creating a thread for each operation or a some threads for various operations?  [Q_Id 9,731,496 | A_Id 9,731,658 | created 2012-03-16T03:36:00.000]
  Tags: python,multithreading,matrix,distributed
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: true | Q_Score: 2 | Users Score: 2 | Answer Score: 1.2 | Answers: 2 | Views: 251 | Available Count: 1
  Question: For a class project I am writing a simple matrix multiplier in Python. My professor has asked for it to be threaded. The way I handle this right now is to create a thread for every row and throw the result in another matrix. What I wanted to know if it would be faster that instead of creating a thread for each row it c...
  Answer: You will probably get the best performance if you use one thread for each CPU core available to the machine running your application. You won't get any performance benefit by running more threads than you have processors. If you are planning to spawn new threads each time you perform a matrix multiplication then ther...
Q: scikits confusion matrix with cross validation  [Q_Id 9,734,403 | A_Id 9,760,852 | created 2012-03-16T09:05:00.000]
  Tags: python,machine-learning,scikits,scikit-learn
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 6 | Users Score: 2 | Answer Score: 1.2 | Answers: 1 | Views: 4,075 | Available Count: 1
  Question: I am training a svm classifier with cross validation (stratifiedKfold) using the scikits interfaces. For each test set (of k), I get a classification result. I want to have a confusion matrix with all the results. Scikits has a confusion matrix interface: sklearn.metrics.confusion_matrix(y_true, y_pred) My question...
  Answer: You can either use an aggregate confusion matrix or compute one for each CV partition and compute the mean and the standard deviation (or standard error) for each component in the matrix as a measure of the variability. For the classification report, the code would need to be modified to accept 2 dimensional inputs so ...
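The aggregate-matrix option in the accepted answer just means summing the per-fold confusion matrices element-wise. A minimal pure-Python sketch; the fold matrices below are invented numbers, not the asker's results:

```python
def add_confusion(total, fold):
    # element-wise sum of two equally-sized confusion matrices
    return [[t + f for t, f in zip(trow, frow)]
            for trow, frow in zip(total, fold)]

fold_matrices = [
    [[5, 1], [2, 4]],   # confusion matrix from CV fold 1
    [[6, 0], [1, 5]],   # confusion matrix from CV fold 2
]
aggregate = [[0, 0], [0, 0]]
for m in fold_matrices:
    aggregate = add_confusion(aggregate, m)
```

With scikit-learn, each fold matrix would come from sklearn.metrics.confusion_matrix(y_true, y_pred) on that fold's test set.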
Q: How to get related topics from a present wikipedia article?  [Q_Id 9,760,636 | A_Id 9,760,985 | created 2012-03-18T17:46:00.000]
  Tags: python,keyword,wikipedia,topic-maps
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 0 | Answer Score: 0 | Answers: 2 | Views: 1,060 | Available Count: 1
  Question: I am writing a user-app that takes input from the user as the current open wikipedia page. I have written a piece of code that takes this as input to my module and generates a list of keywords related to that particular article using webscraping and natural language processing. I want to expand the functionality of the...
  Answer: You can scrape the categories if you want. If you're working with python, you can read the wikitext directly from their API, and use mwlib to parse the article and find the links. A more interesting but harder to implement approach would be to create clusters of related terms, and given the list of terms extracted from...

Q: How to save ctypes objects containing pointers  [Q_Id 9,768,218 | A_Id 9,771,616 | created 2012-03-19T10:04:00.000]
  Tags: python,ctypes,pickle
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 8 | Users Score: 11 | Answer Score: 1.2 | Answers: 3 | Views: 14,249 | Available Count: 3
  Question: I use a 3rd party library which returns after a lot of computation a ctypes object containing pointers. How can I save the ctypes object and what the pointers are pointing to for later use? I tried scipy.io.savemat => TypeError: Could not convert object to array cPickle => ctypes objects containing pointers cannot b...
  Answer: Python has no way of doing that automatically for you: You will have to build code to pick all the desired Data yourself, putting them in a suitable Python data structure (or just adding the data in a unique bytes-string where you will know where each element is by its offset) - and then save that object to disk. This ...
Q: How to save ctypes objects containing pointers  [Q_Id 9,768,218 | A_Id 41,899,145 | created 2012-03-19T10:04:00.000]
  Tags: python,ctypes,pickle
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 8 | Users Score: 1 | Answer Score: 0.066568 | Answers: 3 | Views: 14,249 | Available Count: 3
  Question: same as above.
  Answer: To pickle a ctypes object that has pointers, you would have to define your own __getstate__/__reduce__ methods for pickling and __setstate__ for unpickling. More information in the docs for pickle module.

Q: How to save ctypes objects containing pointers  [Q_Id 9,768,218 | A_Id 9,768,597 | created 2012-03-19T10:04:00.000]
  Tags: python,ctypes,pickle
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 8 | Users Score: 0 | Answer Score: 0 | Answers: 3 | Views: 14,249 | Available Count: 3
  Question: same as above.
  Answer: You could copy the data into a Python data structure and dereference the pointers as you go (using the contents attribute of a pointer).
Q: python/numpy: Using own data structure with np.allclose() ? Where to look for the requirements / what are they?  [Q_Id 9,801,235 | A_Id 9,802,282 | created 2012-03-21T08:48:00.000]
  Tags: python,numpy,magic-methods,type-coercion
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 3 | Answer Score: 1.2 | Answers: 1 | Views: 416 | Available Count: 1
  Question: I'm implementing a Matrix Product State class, which is some kind of special tensor decomposition scheme in python/numpy for fast algorithm prototyping. I don't think that there already is such a thing out there, and I want to do it myself to get a proper understanding of the scheme. What I want to have is that, if I s...
  Answer: The term "arraylike" is used in the numpy documentation to mean "anything that can be passed to numpy.asarray() such that it returns an appropriate numpy.ndarray." Most sequences with proper __len__() and __getitem__() methods work okay. Note that the __getitem__(i) must be able to accept a single integer index in rang...

Q: Python import works in interpreter, doesn't work in script Numpy/Matplotlib  [Q_Id 9,817,995 | A_Id 9,820,025 | created 2012-03-22T07:22:00.000]
  Tags: python,path,numpy,matplotlib
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 1 | Users Score: 1 | Answer Score: 1.2 | Answers: 1 | Views: 2,566 | Available Count: 1
  Question: I'm on OSX Snow Leopard and I run 2.7 in my scripts and the interpreter seems to be running 2.6 Before I was able to import numpy but then I would get an error when trying to import matplotlib so I went looking for a solution and updated my PYTHONPATH variable, but I think I did it incorrectly and have now simply screw...
  Answer: You'll generally need to install numpy, matplotlib etc once for every version of python you use, as it will install itself to the specific 'python2.x/site-packages' directory. Is the above output generated from a 2.6 or 2.7 session? If it's a 2.6 session, then yes, pointing your PYTHONPATH at 2.7 won't work - numpy in...
Q: Choose m evenly spaced elements from a sequence of length n  [Q_Id 9,873,626 | A_Id 9,873,885 | created 2012-03-26T14:05:00.000]
  Tags: python,algorithm
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: false | Q_Score: 12 | Users Score: 3 | Answer Score: 0.119427 | Answers: 5 | Views: 11,243 | Available Count: 1
  Question: I have a vector/array of n elements. I want to choose m elements. The choices must be fair / deterministic -- equally many from each subsection. With m=10, n=20 it is easy: just take every second element. But how to do it in the general case? Do I have to calculate the LCD?
  Answer: Use a loop (int i = 0; i < m; i++); the indexes you want are then Ceil(i*n/m).
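With the index formula written as i*n/m, the selection is a one-liner in Python; floor division works as well as ceiling here and keeps every index in range:

```python
def evenly_spaced(seq, m):
    """Pick m elements from seq, equally many from each subsection."""
    n = len(seq)
    return [seq[i * n // m] for i in range(m)]

picked = evenly_spaced(list(range(20)), 10)  # every second element
```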
Q: Pickle alternatives  [Q_Id 9,897,345 | A_Id 12,095,050 | created 2012-03-27T20:38:00.000]
  Tags: python,serialization
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: false | Q_Score: 33 | Users Score: 14 | Answer Score: 1 | Answers: 8 | Views: 35,941 | Available Count: 1
  Question: I am trying to serialize a large (~10**6 rows, each with ~20 values) list, to be used later by myself (so pickle's lack of safety isn't a concern). Each row of the list is a tuple of values, derived from some SQL database. So far, I have seen datetime.datetime, strings, integers, and NoneType, but I might eventually ha...
  Answer: Pickle is actually quite fast so long as you aren't using the (default) ASCII protocol. Just make sure to dump using protocol=pickle.HIGHEST_PROTOCOL.
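The protocol advice is easy to demonstrate: the binary protocols produce a more compact dump than protocol 0, the old ASCII default the answer warns about. The rows below are invented stand-ins for the asker's SQL-derived tuples:

```python
import pickle
from datetime import datetime

# mimic the question's rows: tuples of datetimes, strings, ints, None
rows = [(datetime(2012, 3, 27), "abc", 42, None)] * 1000

ascii_dump = pickle.dumps(rows, protocol=0)
binary_dump = pickle.dumps(rows, protocol=pickle.HIGHEST_PROTOCOL)
```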
Q: Reading csv in python pandas and handling bad values  [Q_Id 9,927,711 | A_Id 9,927,957 | created 2012-03-29T14:42:00.000]
  Tags: python,numpy,pandas
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 1 | Users Score: 3 | Answer Score: 0.197375 | Answers: 3 | Views: 3,639 | Available Count: 1
  Question: I am using pandas to read a csv file. The data are numbers but stored in the csv file as text. Some of the values are non-numeric when they are bad or missing. How do I filter out these values and convert the remaining data to integers. I assume there is a better/faster way than looping over all the values and using i...
  Answer: You can pass a custom list of values to be treated as missing using pandas.read_csv. Alternately you can pass functions to the converters argument.
Q: Numpy needs the ucs2  [Q_Id 9,929,170 | A_Id 9,993,744 | created 2012-03-29T16:05:00.000]
  Tags: linux,unicode,numpy,python-2.7,ucs
  Topics: Data Science and Machine Learning | accepted: true | Q_Score: 0 | Users Score: 0 | Answer Score: 1.2 | Answers: 1 | Views: 1,398 | Available Count: 1
  Question: I have installed Numpy using ActivePython and when I try to import numpy module, it is throwing the following error: ImportError: /opt/ActivePython-2.7/lib/python2.7/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode I am fairly new to python, and I am not sure what to do. I appre...
  Answer: I suggest that a quick solution to these sort of complications is that you use the Enthought Python Distribution (EPD) on Linux, which includes a wide range of extensions. Cheers.
Q: ZeroMQ PUB/SUB filtering and performance  [Q_Id 9,939,238 | A_Id 28,609,198 | created 2012-03-30T08:13:00.000]
  Tags: python,zeromq
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 2 | Users Score: 2 | Answer Score: 0.132549 | Answers: 3 | Views: 4,347 | Available Count: 1
  Question: I am trying to implement a broker using zeromq PUB/SUB(python eventlets). zeromq 2.1 does not seem to implement filtering at publisher and all messages are broadcasted to all subscribers which inturn apply filter. Is there some kind of workaround to achieve filtering at publisher. If not how bad is the performance if t...
  Answer: From the ØMQ guide: From ZeroMQ v3.x, filtering happens at the publisher side when using a connected protocol (tcp:// or ipc://). Using the epgm:// protocol, filtering happens at the subscriber side. In ZeroMQ v2.x, all filtering happened at the subscriber side.

Q: Thinning contour lines in a binary image  [Q_Id 9,980,270 | A_Id 9,982,670 | created 2012-04-02T16:37:00.000]
  Tags: python,image-processing,binary
  Topics: Data Science and Machine Learning | accepted: false | Q_Score: 0 | Users Score: 1 | Answer Score: 0.049958 | Answers: 4 | Views: 6,610 | Available Count: 1
  Question: I have a binary image with contour lines and need to purify each contour line of all unnecessary pixels, leaving behind a minimally connected line. Can somebody give me a source, code example or further information for this kind of problem and where to search for help, please?
  Answer: A combination of erosion and dilation (and vice versa) on a binary image can help to get rid of salt n pepper like noise leaving small lines intact. Keywords are 'rank order filters' and 'morphological filters'.
Q: Python determinant calculation (without the use of external libraries)  [Q_Id 10,003,232 | A_Id 10,003,296 | created 2012-04-04T00:08:00.000]
  Tags: python,matrix,linear-algebra
  Topics: Python Basics and Environment, Data Science and Machine Learning | accepted: false | Q_Score: 3 | Users Score: 6 | Answer Score: 1 | Answers: 2 | Views: 3,244 | Available Count: 1
  Question: I'm making a small matrix operations library as a programming challenge to myself(and for the purpose of learning to code with Python), and I've come upon the task of calculating the determinant of 2x2, 3x3 and 4x4 matrices. As far as my understanding of linear algebra goes, I need to implement the Rule of Sarrus in or...
  Answer: The rule of Sarrus is only a mnemonic for solving 3x3 determinants, and won't be as helpful moving beyond that size. You should investigate the Leibniz formula for calculating the determinant of an arbitrarily large square matrix. The nice thing about this formula is that the determinant of an n*n matrix is that it can...
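The general approach the answer points to (expansion over the whole matrix instead of Sarrus) can be sketched as a recursive cofactor (Laplace) expansion; this is a plain illustration, fine for the 2x2 to 4x4 sizes in the question though O(n!) in general:

```python
def det(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: drop row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total
```

For 2x2 inputs this reduces to the familiar ad - bc.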
0
10,054,525
0
1
0
0
1
true
0
2012-04-04T07:11:00.000
1
2
0
Creating a corpus from data in a custom format
10,006,467
1.2
python,nlp,nltk
You don't need to input the files yourself or to provide words and sents methods. Read in your corpus with PlaintextCorpusReader, and it will provide those for you. The corpus reader constructor accepts arguments for the path and filename pattern of the files, and for the input encoding (be sure to specify it). The c...
I have hundreds of files containing text I want to use with NLTK. Here is one such file: বে,বচা ইয়াণ্ঠা,র্চা ঢার্বিত তোখাটহ নতুন, অ প্রবঃাশিত। তবে ' এ বং মুশায়েরা ' পত্রিব্যায় প্রকাশিত তিনটি লেখাই বইযে সংব্যজান ব্যরার জনা বিশেষভাবে পরিবর্ধিত। পাচ দাপনিকেব ড:বন নিয়ে এই বই তৈরি বাবার পরিব্যল্পনাও ম্ভ্রাসুনতন সামন্তে...
0
1
272
0
10,043,383
0
1
0
0
1
true
2
2012-04-06T10:46:00.000
2
1
0
Segmentation fault Python
10,042,429
1.2
python,segmentation-fault,enthought
Python only segfaults if (1) there is an error in loaded native extension (DLL) code, or (2) the virtual machine has bugs (it does not). Run Python in -vvv mode to see more information about import issues. You probably need to recompile the modules you need against the Python build you are using. Python major versions and architecture (...
I have the Enthought Python Distribution installed. Before that I installed Python2.7 and installed other modules (e.g. opencv). Enthought establishes itself as the default python. Called 7.2, but it is 2.7. Now if i want to import cv in the Enthought Python it always gives me the Segmentation fault Error. Is there ...
0
1
4,794
0
10,724,685
0
0
0
0
1
false
3
2012-04-09T06:20:00.000
0
1
0
Spyder plotting blocks console commands
10,069,680
0
python,matlab,matplotlib,ipython,spyder
I was having a similar (I think) problem. Make sure your interpreter is set to execute in the current interpreter (default, should allow for interactive plotting). If it's set to execute in a new dedicated python interpreter make sure that interact with the python interpreter after execution is selected. This solved...
Whenever I execute a plt.show() in an Ipython console in spyderlib, the console freezes until I close the figure window. This only occurs in spyderlib and the blocking does occur when I run ipython --pylab or run ipython normally and call plt.ion() before plotting. I've tried using plt.draw(), but nothing happens wit...
0
1
1,329
0
20,901,570
0
0
0
0
1
true
0
2012-04-09T09:57:00.000
0
2
0
Can we search content(text) within images using plone 4.1?
10,071,609
1.2
python,plone
Best is to use collective.DocumentViewer with various options to select from
How can we search content(text) within images using plone 4.1. I work on linux Suppose an image say a sample.jpg contains text like 'Happy Birthday', on using search 'Birthday' I should get the contents i.e sample.jpg
0
1
232
0
10,076,295
0
0
0
0
1
true
2
2012-04-09T16:20:00.000
4
1
0
scipy: significance of the return values of spearmanr (correlation)
10,076,222
1.2
python,statistics,scipy,correlation
It's up to you to choose the level of significance (alpha). To be coherent you should choose it before running the test. The function returns the p-value: the lowest alpha for which you would still reject the null hypothesis (H0) [reject H0 when p-value < alpha]. You therefore know that the...
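A small illustration with SciPy (assuming scipy is installed; the synthetic data is made up):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=800)
y = x + rng.normal(size=800)      # deliberately correlated data

rho, p = spearmanr(x, y)
alpha = 0.05                      # chosen BEFORE running the test
print(p < alpha)                  # True: reject H0, the correlation is significant
```

With n = 800 and a p-value around 1e-65 as in the question, H0 is rejected at any conventional alpha.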
The output of spearmanr (Spearman correlation) of X,Y gives me the following: Correlation: 0.54542821980327882 P-Value: 2.3569040685361066e-65 where len(X)=len(Y)=800. My questions are as follows: 0) What is the confidence (alpha?) here ? 1) If correlation coefficient > alpha, the hypothesis of the correlation bei...
1
1
1,006
0
39,361,043
0
0
0
0
1
false
5
2012-04-10T05:54:00.000
-3
2
0
python numpy sort eigenvalues
10,083,772
-0.291313
python,numpy
np.linalg.eig will often return complex values. You may want to consider using np.sort_complex(eig_vals).
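A short sketch of the usual argsort approach (shown for a real matrix; for general matrices the eigenvalues may be complex, and sorting by e.g. magnitude is then a choice you must make):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])
vals, vecs = np.linalg.eig(A)

# Sort ascending by eigenvalue and reorder the eigenvector COLUMNS to match.
order = np.argsort(vals)
vals, vecs = vals[order], vecs[:, order]

print(vals)  # [1. 2.]
```

For symmetric/Hermitian matrices, np.linalg.eigh already returns the eigenvalues in ascending order.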
I am using linalg.eig(A) to get the eigenvalues and eigenvectors of a matrix. Is there an easy way to sort these eigenvalues (and associated vectors) in order?
0
1
13,966
0
10,101,163
0
1
0
0
2
false
2
2012-04-11T04:01:00.000
0
4
0
How do I initialize a one-dimensional array of two-dimensional elements in Python?
10,099,619
0
python,arrays
Using the construct [[0,0]]*3 returns [[0, 0], [0, 0], [0, 0]], but beware that all three inner lists are the same object, so mutating one mutates them all; use a list comprehension like [[0, 0] for _ in range(3)] if the sublists must be independent.
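The aliasing caveat can be seen directly:

```python
a = [[0, 0]] * 3                  # three references to the SAME inner list
a[0][0] = 9
print(a)                          # [[9, 0], [9, 0], [9, 0]]

b = [[0, 0] for _ in range(3)]    # three independent inner lists
b[0][0] = 9
print(b)                          # [[9, 0], [0, 0], [0, 0]]
```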
I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements?
0
1
1,407
0
10,099,628
0
1
0
0
2
false
2
2012-04-11T04:01:00.000
0
4
0
How do I initialize a one-dimensional array of two-dimensional elements in Python?
10,099,619
0
python,arrays
I believe that it's [[0,0],]*3
I want to initialize an array that has X two-dimensional elements. For example, if X = 3, I want it to be [[0,0], [0,0], [0,0]]. I know that [0]*3 gives [0, 0, 0], but how do I do this for two-dimensional elements?
0
1
1,407
0
10,108,983
0
0
1
0
1
false
20
2012-04-11T14:47:00.000
1
5
0
Detecting geographic clusters
10,108,368
0.039979
python,r,geolocation,cran
A few ideas: Ad-hoc & approximate: The "2-D histogram". Create arbitrary "rectangular" bins, of the degree width of your choice, assign each bin an ID. Placing a point in a bin means "associate the point with the ID of the bin". Upon each add to a bin, ask the bin how many points it has. Downside: doesn't correctly...
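A hedged pure-Python sketch of the "2-D histogram" idea; the bin width and sample coordinates below are made-up values:

```python
from collections import defaultdict

def bin_id(lon, lat, width=1.0):
    """Map a coordinate to the ID of its rectangular bin (width in degrees)."""
    return (int(lon // width), int(lat // width))

points = [(-73.9, 40.7), (-73.8, 40.6), (-73.95, 40.75), (-118.2, 34.0)]
bins = defaultdict(list)
for lon, lat in points:
    bins[bin_id(lon, lat)].append((lon, lat))

# Bins holding many points are candidate clusters.
dense = {k: v for k, v in bins.items() if len(v) >= 2}
print(dense)  # one dense bin at (-74, 40) with the three nearby points
```

As noted above this is approximate: a cluster straddling a bin boundary can be split, which overlapping or shifted grids can mitigate.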
I have a R data.frame containing longitude, latitude which spans over the entire USA map. When X number of entries are all within a small geographic region of say a few degrees longitude & a few degrees latitude, I want to be able to detect this and then have my program then return the coordinates for the geographic b...
0
1
5,580
0
10,177,394
0
0
0
0
1
false
7
2012-04-11T19:20:00.000
4
2
0
Implementing alternative forms of LDA
10,112,500
0.379949
python,r,nlp,text-mining,lda
My friend's response is below, pardon the language please. First I wrote up a Python implementation of the collapsed Gibbs sampler seen here (http://www.pnas.org/content/101/suppl.1/5228.full.pdf+html) and fleshed out here (http://cxwangyi.files.wordpress.com/2012/01/llt.pdf). This was slow as balls. Then I used a Pyt...
I am using Latent Dirichlet Allocation with a corpus of news data from six different sources. I am interested in topic evolution, emergence, and want to compare how the sources are alike and different from each other over time. I know that there are a number of modified LDA algorithms such as the Author-Topic model, To...
0
1
2,734
0
10,144,447
0
1
1
0
1
false
1
2012-04-13T15:19:00.000
1
3
0
How fast are nested python generators?
10,143,637
0.066568
python,generator
"Nested" iterators amount to the composition of the functions that the iterators implement, so in general they pose no particularly novel performance considerations. Note that because generators are lazy, they also tend to cut down on memory allocation as compared with repeatedly allocating one sequence to transform i...
Okay, so I probably shouldn't be worrying about this anyway, but I've got some code that is meant to pass a (possibly very long, possibly very short) list of possibilities through a set of filters and maps and other things, and I want to know if my implementation will perform well. As an example of the type of thing I ...
0
1
1,253
0
10,144,991
0
0
0
0
1
false
2
2012-04-13T16:03:00.000
1
2
0
Emails and Map Reduce Job
10,144,325
0.099668
python,map,hadoop,mapreduce,reduce
Yes, you need to use Hadoop Streaming if you want to write Python code for running MapReduce jobs.
I'm just starting out with Hadoop and writing some Map Reduce jobs. I was looking for help on writing a MR job in python that allows me to take some emails and put them into HDFS so I can search on the text or attachments of the email? Thank you!
0
1
251
0
10,155,633
0
0
0
0
1
false
0
2012-04-14T17:04:00.000
0
1
0
Deciding input values to DBSCAN algorithm
10,155,542
0
python,cluster-analysis,dbscan
DBSCAN's parameters are often hard to estimate. Did you think about the OPTICS algorithm? In that case you only need min_samples, which would correspond to the minimal cluster size. Otherwise for DBSCAN I've done it in the past by trial and error: try some values and see what happens. A general rule to follow i...
I have written code in python to implement DBSCAN clustering algorithm. My dataset consists of 14k users with each user represented by 10 features. I am unable to decide what exactly to keep as the value of Min_samples and epsilon as input How should I decide that? Similarity measure is euclidean distance.(Hence it bec...
0
1
2,333
0
12,068,757
0
0
0
0
2
false
4
2012-04-15T00:37:00.000
0
3
0
Importing Confusion Pandas
10,158,613
0
python,pandas
I had the same error. I did not build pandas myself, so I thought I should not get this error, as mentioned on the pandas site, and I was confused about how to resolve it. The pandas site says that matplotlib is an optional dependency, so I didn't install it initially. But interestingly, after installing matplotlib ...
I had 0.71 pandas before today. I tried to update and I simply ran the .exe file supplied by the website. now I tried " import pandas" but then it gives me an error ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory. I am new to python and pandas in...
0
1
2,351
0
11,630,790
0
0
0
0
2
false
4
2012-04-15T00:37:00.000
1
3
0
Importing Confusion Pandas
10,158,613
0.066568
python,pandas
Had the same issue. Resolved by checking dependencies - make sure you have numpy > 1.6.1 and python-dateutil > 1.5 installed.
I had 0.71 pandas before today. I tried to update and I simply ran the .exe file supplied by the website. now I tried " import pandas" but then it gives me an error ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory. I am new to python and pandas in...
0
1
2,351
0
10,210,348
0
0
0
1
1
false
1
2012-04-18T08:52:00.000
3
1
0
Customizing csv output in htsql
10,205,990
0.53705
python,sql,htsql
If you want TAB as a delimiter, use tsv format (e.g. /query/:tsv instead of /query/:csv). There is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client.
I would like to know if somebody knows a way to customize the csv output in htsql, and especially the delimiter and the encoding ? I would like to avoid iterating over each result and find a way through configuration and/or extensions. Thank in advance. Anthony
0
1
170
0
10,248,052
0
1
0
0
1
false
2
2012-04-20T08:10:00.000
1
2
0
When to and when not to use map() with multiprocessing.Pool, in Python? case of big input values
10,242,525
0.099668
python,dictionary,parallel-processing,multiprocessing,large-data
I hit a similar issue: parallelizing calculations on a big dataset. As you mentioned multiprocessing.Pool.map pickles the arguments. What I did was to implement my own fork() wrapper that only pickles the return values back to the parent process, hence avoiding pickling the arguments. And a parallel map() on top of the...
Is it efficient to calculate many results in parallel with multiprocessing.Pool.map() in a situation where each input value is large (say 500 MB), but where input values general contain the same large object? I am afraid that the way multiprocessing works is by sending a pickled version of each input value to each wor...
0
1
2,008
0
10,308,680
0
0
0
0
1
true
2
2012-04-24T18:56:00.000
1
1
0
python, scikits-learn: which learning methods support sparse feature vectors?
10,304,280
1.2
python,machine-learning,scikits,scikit-learn
We don't have that yet. You have to read the docstrings of the individual classes for now. Anyway, non-linear models do not tend to work better than linear models for high-dimensional sparse data such as text documents (and they can overfit more easily).
I'm getting a memory error trying to do KernelPCA on a data set of 30.000 texts. RandomizedPCA works alright. I think what's happening is that RandomizedPCA works with sparse arrays and KernelPCA don't. Does anyone have a list of learning methods that are currently implemented with sparse array support in scikits-lear...
0
1
691
0
27,016,762
0
0
0
0
1
false
7
2012-04-24T21:04:00.000
3
4
0
Quantile/Median/2D binning in Python
10,305,964
0.148885
python,numpy,statistics,scipy
I'm just trying to do this myself and it sounds like you want the command scipy.stats.binned_statistic_2d, from which you can find the mean, median, standard deviation or any defined function for the third parameter given the bins. I realise this question has already been answered but I believe this is a good built-in solutio...
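A small sketch (assuming a SciPy version that includes scipy.stats.binned_statistic_2d; the data is made up):

```python
import numpy as np
from scipy.stats import binned_statistic_2d

x = np.array([0.1, 0.2, 0.8, 0.9])
y = np.array([0.1, 0.2, 0.8, 0.9])
w = np.array([1.0, 3.0, 10.0, 20.0])

# Median of w in each cell of a 2x2 grid over [0,1] x [0,1].
stat, xedges, yedges, binnumber = binned_statistic_2d(
    x, y, w, statistic="median", bins=2, range=[[0, 1], [0, 1]])

print(stat[0, 0], stat[1, 1])  # 2.0 15.0 (empty cells come back as NaN)
```

Replacing statistic="median" with a callable lets you compute arbitrary quantiles per bin.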
do you know a quick/elegant Python/Scipy/Numpy solution for the following problem: You have a set of x, y coordinates with associated values w (all 1D arrays). Now bin x and y onto a 2D grid (size BINSxBINS) and calculate quantiles (like the median) of the w values for each bin, which should at the end result in a BINS...
0
1
5,849
0
10,327,841
0
0
0
0
1
false
1
2012-04-24T22:58:00.000
0
2
0
Most efficient language to implement tensor factorization for Web Application
10,307,173
0
python,c++,mysql,django,large-data
Python is just fine. I am a Python person. I do not know C++ personally. However, during my research of Python, the creator of Mathematica himself stated that Python is equally as powerful as Mathematica. Python is used in many highly accurate calculations (e.g. engineering software, architecture work, etc.).
I have implemented Tensor Factorization Algorithm in Matlab. But, actually, I need to use it in Web Application. So I implemented web site on Django framework, now I need to merge it with my Tensor Factorization algorithm. For those who are not familiar with tensor factorization, you can think there are bunch of mult...
0
1
291
0
10,352,335
0
0
0
0
1
true
0
2012-04-27T13:23:00.000
3
2
0
best way to extend python / numpy performancewise
10,351,450
1.2
python,c,numpy
I would say it depends on your skills/experience and your project. If this is a one-off and you are proficient in C/C++ and you have already written Python wrappers, then write your own extension and interface it. If you are going to work with NumPy on other projects, then go for the NumPy C-API, it's extensive and ...
As there are multitude of ways to write binary modules for python, i was hopping those of you with experience could advice on the best approach if i wish to improve the performance of some segments of the code as much as possible. As i understand, one can either write an extension using the python/numpy C-api, or wrap ...
0
1
1,000
0
10,395,730
0
1
0
0
1
true
1
2012-05-01T09:12:00.000
1
1
0
Numpy cannot be accessed in sub directories
10,395,691
1.2
python,numpy
"then it does not recognize the module zeroes in the program": Make sure you don't have a file called numpy.py in your subdirectory. If you do, it would shadow the "real" numpy module and cause the symptoms you describe.
I have used import numpy as np in my program and when I try to execute np.zeroes to create a numpy array then it does not recognize the module zeroes in the program. This happens when I execute in the subdirectory where the python program is. If I copy it root folder and execute, then it shows the results. Can someone ...
0
1
73
0
10,409,722
0
0
0
0
2
false
4
2012-05-02T07:46:00.000
1
4
0
Finding a specific index in a binary image in linear time?
10,409,674
0.049958
python,image-processing,numpy,boolean,python-imaging-library
Depending on the size of your blob, I would say that dramatically reducing the resolution of your image may achieve what you want. Reduce it to a 1/10 resolution, find the one white pixel, and then you have a precise idea of where to search for the centroid.
I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? Two lines...
0
1
1,130
0
10,409,877
0
0
0
0
2
false
4
2012-05-02T07:46:00.000
2
4
0
Finding a specific index in a binary image in linear time?
10,409,674
0.099668
python,image-processing,numpy,boolean,python-imaging-library
The centroid's coordinates are the arithmetic means of the coordinates of the points. If you want the linear solution, just go pixel by pixel and compute the mean of each coordinate where the pixels are white; that's the centroid. There is probably no way you can make it better than linear in the general case, however, if your c...
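With NumPy that single pass can be written as follows (the blob below is a made-up example):

```python
import numpy as np

img = np.zeros((480, 640), dtype=np.uint8)
img[100:110, 200:210] = 255          # a small square "blob"

# One pass over the pixels: the centroid is the mean of the white coordinates.
ys, xs = np.nonzero(img)
cy, cx = ys.mean(), xs.mean()
print(cy, cx)  # 104.5 204.5
```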
I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? Two lines...
0
1
1,130
0
10,421,975
0
0
0
0
1
false
0
2012-05-02T20:14:00.000
0
1
0
Manipulator/camera calibration issue (linear algebra oriented)
10,420,966
0
python,linear-algebra,robotics,calibration
You really need four data points to characterize three independent axes of movement. Can you can add some other constraints, ie are the manipulator axes orthogonal to each other, even if not fixed relative to the stage's axes? Do you know the manipulator's alignment roughly, even if not exactly? What takes the most tim...
I'm working on a research project involving a microscope (with a camera connected to the view port; the video feed is streamed to an application we're developing) and a manipulator arm. The microscope and manipulator arm are both controlled by a Luigs & Neumann control box (very obsolete - the computer interfaces with ...
0
1
303
0
10,428,163
0
1
1
0
3
false
1
2012-05-03T08:42:00.000
2
3
0
Stop Python from using more than one cpu
10,427,900
0.132549
python,multithreading,parallel-processing,cpu-usage
Your code might be calling some functions that uses C/C++/etc. underneath. In that case, it is possible for multiple thread usage. Are you calling any libraries that are only python bindings to some more efficiently implemented functions?
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I h...
0
1
1,390
0
10,429,302
0
1
1
0
3
false
1
2012-05-03T08:42:00.000
1
3
0
Stop Python from using more than one cpu
10,427,900
0.066568
python,multithreading,parallel-processing,cpu-usage
You can always set your process affinity so it run on only one cpu. Use "taskset" command on linux, or process explorer on windows. This way, you should be able to know if your script has same performance using one cpu or more.
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I h...
0
1
1,390
0
10,445,816
0
1
1
0
3
false
1
2012-05-03T08:42:00.000
1
3
0
Stop Python from using more than one cpu
10,427,900
0.066568
python,multithreading,parallel-processing,cpu-usage
Could it be that your code uses SciPy or other numeric library for Python that is linked against Intel MKL or another vendor provided library that uses OpenMP? If the underlying C/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1: OMP_NUM...
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I h...
0
1
1,390
0
10,481,152
0
0
0
0
1
false
2
2012-05-07T11:08:00.000
1
2
0
system swaps before the memory is full
10,481,008
0.099668
python,linux,matplotlib,archlinux
One thing to take into account for the huge numpy array is that you are not touching it. Memory is allocated lazily by default by the kernel. Try writing some values in that huge array and then check for swapping behaviour.
My program plots a large number of lines (~200k) with matplotlib which is pretty greedy for memory. I usually have about 1.5G of free memory before plotting. When I show the figures, the system starts swapping heavily when there's still about 600-800M of free RAM. This behavior is not observed when, say, creating a hug...
0
1
199
0
10,859,872
0
1
0
0
1
true
6
2012-05-07T21:26:00.000
4
4
0
What are good libraries for creating a python program for (visually appealing) 3D physics simulations/visualizations?
10,489,377
1.2
python,3d,visualization,physics,simulation
3D support for python is fairly weak compared to other languages, but with the way that most of them are built, the appearance of the program is far more mutable than you might think. For instance, you talked about Vpython, while many of their examples are not visually appealing, most of them are also from previous rel...
What are good libraries for creating a python program for (visually appealing) 3D physics simulations/visualizations? I've looked at Vpython but the simulations I have seen look ugly, I want them to be visually appealing. It also looks like an old library. For 3D programming I've seen suggestions of using Panda3D and p...
0
1
5,810
0
10,516,123
0
0
0
0
2
false
2
2012-05-09T12:17:00.000
0
4
0
document classification using naive bayes in python
10,515,907
0
python,nltk,document-classification
There could be many reasons for the classifier not working, and there are many ways to tweak it. Did you train it with enough positive and negative examples? How did you train the classifier? Did you give it every word as a feature, or did you also add more features for it to train on (like length of the text for examp...
I'm doing a project on document classification using naive bayes classifier in python. I have used the nltk python module for the same. The docs are from reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. i used these values to ...
0
1
2,839
0
21,133,966
0
0
0
0
2
true
2
2012-05-09T12:17:00.000
1
4
0
document classification using naive bayes in python
10,515,907
1.2
python,nltk,document-classification
A few points that might help: Don't use a stoplist, it lowers accuracy (but do remove punctuation) Look at word features, and take only the top 1000 for example. Reducing dimensionality will improve your accuracy a lot; Use bigrams as well as unigrams - this will up the accuracy a bit. You may also find alternative w...
I'm doing a project on document classification using naive bayes classifier in python. I have used the nltk python module for the same. The docs are from reuters dataset. I performed preprocessing steps such as stemming and stopword elimination and proceeded to compute tf-idf of the index terms. i used these values to ...
0
1
2,839
0
10,535,067
0
1
0
0
1
false
0
2012-05-10T13:24:00.000
0
1
0
How to have multiple y axis on a line graph in Python
10,534,950
0
python,graph
The generic answer is to write a method that allows you to scale the y values for each data set so they lie on the graph the way you want. Then all your data points will have a y value on the same scale, and you can label the y-axis based upon how you define your translation for each data set.
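If the plotting library is matplotlib (an assumption; the question doesn't name one), its twinx() adds one extra y-axis per call, sharing the same x-axis:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

x = range(10)
fig, ax1 = plt.subplots()
ax1.plot(x, [i for i in x], "b-")
ax1.set_ylabel("small scale", color="b")

ax2 = ax1.twinx()                    # second y-axis on the same x-axis
ax2.plot(x, [i * 1000 for i in x], "r-")
ax2.set_ylabel("large scale", color="r")
fig.savefig("two_scales.png")
```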
I want to make a line graph with multiple sets of data on the same graph. But they all scale differently so will need individual y axis scales. What code will put each variable on a separate axis?
0
1
150
0
10,546,350
0
1
0
0
2
false
18
2012-05-11T05:36:00.000
1
6
0
creating pandas data frame from multiple files
10,545,957
0.033321
python,pandas
I might try to concatenate the files before feeding them to pandas. If you're in Linux or Mac you could use cat, otherwise a very simple Python function could do the job for you.
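Alternatively, read each file separately and concatenate the frames on the pandas side; a sketch with in-memory stand-ins for the files on disk:

```python
import io
import pandas as pd

# Stand-ins for files on disk; with real files you would pass the paths.
files = [io.StringIO("a,b\n1,2\n"), io.StringIO("a,b\n3,4\n")]

frames = [pd.read_csv(f) for f in files]
df = pd.concat(frames, ignore_index=True)   # one DataFrame, rows re-indexed
print(len(df))  # 2
```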
I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame. Not sure what's the way to append to current DataFrame in...
0
1
24,945
0
10,563,786
0
1
0
0
2
false
18
2012-05-11T05:36:00.000
3
6
0
creating pandas data frame from multiple files
10,545,957
0.099668
python,pandas
Potentially horribly inefficient but... Why not use read_csv, to build two (or more) dataframes, then use join to put them together? That said, it would be easier to answer your question if you provide some data or some of the code you've used thus far.
I am trying to create a pandas DataFrame and it works fine for a single file. If I need to build it for multiple files which have the same data structure. So instead of single file name I have a list of file names from which I would like to create the DataFrame. Not sure what's the way to append to current DataFrame in...
0
1
24,945
0
10,569,942
0
1
0
0
1
false
3
2012-05-13T06:47:00.000
0
2
0
Sorting a list with elements containing dictionary values
10,569,853
0
python,list,sorting,dictionary
Have you already tried sorted(list_for_sorting, key=dictionary_you_wrote.__getitem__)?
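For the card example that would look like this (the rank values are taken from the question):

```python
rank_value = {"0": 10, "J": 11, "Q": 12, "K": 13, "A": 14, "2": 15}
hand = ["A", "2", "J", "0"]

# dict.__getitem__ maps each card to its numeric sort key.
print(sorted(hand, key=rank_value.__getitem__))  # ['0', 'J', 'A', '2']
```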
I'm trying to make a sorting system with card ranks and their values are obtained from a separate dictionary. In a simple deck of 52 cards, we have 2 to Ace ranks, in this case I want a ranking system where 0 is 10, J is 11, Q is 12, K is 13, A is 14 and 2 is 15 where 2 is the largest valued rank. The thing is, if ther...
0
1
198
0
10,596,347
0
0
0
0
1
true
1
2012-05-14T05:26:00.000
1
1
0
How to use Products.csvreplicata 1.1.7 with Products.PressRoom to export PressContacts in Plone 4.1
10,577,866
1.2
python,plone
Go to Site setup / CSV Replicata tool, and select PressRoom content(s) as exportable (and then select the schemata you want to be considered during import/export).
How to use Products.csvreplicata 1.1.7 with Products.PressRoom 3.18 to export PressContacts to csv in Plone 4.1? Or is there any other product to import/export all the PressRoom contacts into csv.
1
1
153
0
71,463,872
0
0
0
0
1
false
15
2012-05-14T08:04:00.000
0
10
0
How to generate negative random value in python
10,579,518
0
python,random
If you want to generate a random integer between two negative bounds, then print(f"{-random.randint(1, 5)}") can also do the work: it prints a value in [-5, -1].
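A couple of standard-library options, for illustration:

```python
import random

random.seed(0)              # deterministic, just for the demo
n = random.randint(-1, 1)   # randint accepts negative bounds directly
u = random.uniform(-1, 1)   # a random float in [-1, 1]

print(-1 <= n <= 1, -1 <= u <= 1)  # True True
```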
I am starting to learn python, I tried to generate random values by passing in a negative and positive number. Let say -1, 1. How should I do this in python?
0
1
43,543
0
10,583,784
0
1
0
0
1
false
0
2012-05-14T12:19:00.000
1
3
0
How can I dynamically generate class instances with single attributes read from flat file in Python?
10,583,195
0.066568
python,oop
"I have a .csv file": You're in luck; CSV support is built right in, via the csv module. "Do you suggest creating a class dictionary for accessing every instance?": I don't know what you think you mean by "class dictionary". There are classes, and there are dictionaries. But I still need to provide a name to every single...
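A hedged sketch of the csv.DictReader route; the Person class and column names are made-up stand-ins for the simulation's variables:

```python
import csv
import io

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = int(age)

# Stand-in for the .csv file; with a real file use open("people.csv").
data = io.StringIO("name,age\nalice,30\nbob,25\n")

# One instance per row; DictReader maps column headers to values.
people = [Person(**row) for row in csv.DictReader(data)]
print(people[0].name, people[1].age)  # alice 25
```

No per-instance names are needed: the list holds every instance, indexed by row.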
I apologise if this question has already been asked. I'm really new to Python programming, and what I need to do is this: I have a .csv file in which each line represent a person and each column represents a variable. This .csv file comes from an agent-based C++ simulation I have done. Now, I need to read each line of ...
0
1
725
0
10,608,972
0
0
0
0
2
false
1
2012-05-15T19:16:00.000
0
2
0
Modular serialization with pickle (Python)
10,607,350
0
python,pickle
Metaprogramming is strong in Python; Python classes are extremely malleable. You can alter them after declaration all the way you want, though it's best done in a metaclass (decorator). More than that, instances are malleable, independently of their classes. A 'reference to a place' is often simply a string. E.g. a ref...
I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now. I thought i cou...
0
1
289
0
10,608,783
0
0
0
0
2
false
1
2012-05-15T19:16:00.000
0
2
0
Modular serialization with pickle (Python)
10,607,350
0
python,pickle
Here's how I think I would go about this. Have a module level dictionary mapping persistent_id to SpecialClass objects. Every time you initialise or unpickle a SpecialClass instance, make sure that it is added to the dictionary. Override SpecialClass's __getattr__ and __setattr__ method, so that specialobj.foo = anoth...
I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now. I thought i cou...
0
1
289
0
54,660,955
0
0
0
0
1
false
70
2012-05-17T12:48:00.000
1
20
0
Python / Pandas - GUI for viewing a DataFrame or Matrix
10,636,024
0.01
python,user-interface,pandas,dataframe
I've also been searching very simple gui. I was surprised that no one mentioned gtabview. It is easy to install (just pip3 install gtabview ), and it loads data blazingly fast. I recommend using gtabview if you are not using spyder or Pycharm.
I'm using the Pandas package and it creates a DataFrame object, which is basically a labeled matrix. Often I have columns that have long string fields, or dataframes with many columns, so the simple print command doesn't work well. I've written some text output functions, but they aren't great. What I'd really love is ...
0
1
100,991
0
10,661,419
0
0
0
0
1
true
0
2012-05-19T00:38:00.000
4
1
0
Generating animated SVG with python
10,661,381
1.2
python,svg,slice,animated
"The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing." Well, yes. That's how SVG animation works; it takes the current objects in the image and applies transformations to them. If you want a "movie" then you will need to make a video from the images.
I have been using the svgwrite library to generate a sequence of svg images. I would like to turn this sequence of images into an animated svg. The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing. Is it possible to use the time slices I have to genera...
0
1
1,810
0
12,110,615
0
0
0
0
1
false
3
2012-05-19T22:24:00.000
2
2
0
Python numpy memmap matrix multiplication
10,669,270
0.197375
python,numpy,memory-management,matrix-multiplication,large-data
You might try to use np.memmap and compute the 10x10 output matrix one element at a time: just load the first row of the first matrix and the first column of the second, and then np.sum(row1 * col1).
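A scaled-down sketch of that element-at-a-time scheme (sizes shrunk so it runs quickly; with the real data only one row and one column are resident in memory at a time):

```python
import os
import tempfile
import numpy as np

rows, inner, cols = 10, 1000, 10   # stand-in sizes; the real inner dim is 25,000,000

# Both matrices live on disk, so they never have to fit in RAM at once.
tmp = tempfile.mkdtemp()
a = np.memmap(os.path.join(tmp, "a.dat"), dtype="float64", mode="w+", shape=(rows, inner))
b = np.memmap(os.path.join(tmp, "b.dat"), dtype="float64", mode="w+", shape=(inner, cols))
a[:] = 1.0
b[:] = 2.0

# Compute the small output one element at a time: one row times one column.
out = np.empty((rows, cols))
for i in range(rows):
    for j in range(cols):
        out[i, j] = np.sum(a[i, :] * b[:, j])

print(out[0, 0])  # 2000.0
```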
Im trying to produce a usual matrix multiplication between two huge matrices (10*25,000,000). My memory runs out when I do so. How could I use numpy's memmap to be able to handle this? Is this even a good idea? I'm not so worried about the speed of the operation, I just want the result even if it means waiting some tim...
0
1
1,031
0
10,688,691
0
1
0
0
1
false
0
2012-05-21T16:01:00.000
2
1
0
Installed NumPy successfully, but not accessible with virtualenv
10,688,601
0.379949
python,ubuntu,numpy,virtualenv
You have to install it inside of your virtual environment. The easiest way to do this is: source [virtualenv]/bin/activate pip install numpy
I have successfully install NumPy on Ubuntu; however when inside a virtualenv, NumPy is not available. I must be missing something obvious, but I do not understand why I can not import NumPy when using python from a virtualenv. Can anyone help? I am using Python 2.7.3 as my system-wide python and inside my virtualenv. ...
0
1
394
0
10,727,166
0
0
0
0
1
false
1
2012-05-23T20:12:00.000
2
2
0
How can I speed up the training of a network using my GPU?
10,727,140
0.197375
python,neural-network,gpu,pybrain
Unless PyBrain is designed for that, you probably can't. You might want to try running your trainer under PyPy if you aren't already -- it's significantly faster than CPython for some workloads. Perhaps this is one of those workloads. :)
I was wondering if there is a way to use my GPU to speed up the training of a network in PyBrain.
0
1
2,082
0
10,768,872
0
0
0
0
1
false
1
2012-05-26T18:36:00.000
0
2
0
How to calculate estimation for monotonically growing sequence in python?
10,768,817
0
python,math,numpy,scipy
If the sequence does not have a lot of noise, just use the latest point and the point at roughly one-third of the current x-value, then estimate your line from those two. Otherwise do something more complicated, like a least-squares fit over the latter half of the sequence. If you search on Google, there are a number of code samples for doing t...
I have a monotonically growing sequence of integers. For example seq=[(0, 0), (1, 5), (10, 20), (15, 24)]. And a integer value greater than the largest argument in the sequence (a > seq[-1][0]). I want to estimate value corresponding to the given value. The sequence grows nearly linearly, and earlier values are less ...
0
1
254
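Both suggestions above can be sketched for a sequence of (x, y) pairs like the question's seq; the helper names and the choice of "latter half" split are assumptions for illustration:

```python
import numpy as np

# Two hedged estimators for a nearly linear, monotonically growing sequence
# seq = [(x0, y0), (x1, y1), ...], extrapolating to some a > seq[-1][0].
seq = [(0, 0), (1, 5), (10, 20), (15, 24)]   # example from the question

def estimate_two_point(seq, a):
    """Line through the last point and the sample nearest one-third of its x."""
    x2, y2 = seq[-1]
    # pick the earlier sample whose x is closest to one third of the latest x
    x1, y1 = min(seq[:-1], key=lambda p: abs(p[0] - x2 / 3))
    slope = (y2 - y1) / (x2 - x1)
    return y2 + slope * (a - x2)

def estimate_lstsq(seq, a):
    """Least-squares line fitted to the latter half of the sequence."""
    half = seq[len(seq) // 2:]
    xs = np.array([x for x, _ in half], dtype=float)
    ys = np.array([y for _, y in half], dtype=float)
    slope, intercept = np.polyfit(xs, ys, 1)   # degree-1 fit: [slope, intercept]
    return slope * a + intercept
```

The two-point version weights recent behaviour more, which matches the hint that earlier values matter less.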
0
10,790,083
0
0
0
0
1
false
5
2012-05-28T19:56:00.000
3
2
0
Phrase corpus for sentimental analysis
10,789,834
0.291313
python,nlp,nltk
In this case, the word not modifies the meaning of the phrase expected to win, reversing it. To identify this, you would need to POS-tag the sentence and apply the negative adverb not to the (I think) verb phrase as a negation. I don't know if there is a corpus that would tell you that not would be this type of modifi...
Good day, I'm attempting to write a sentimental analysis application in python (Using naive-bayes classifier) with the aim to categorize phrases from news as being positive or negative. And I'm having a bit of trouble finding an appropriate corpus for that. I tried using "General Inquirer" (http://www.wjh.harvard.edu/~...
0
1
1,434
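A lighter-weight alternative to full POS tagging, often used for exactly this Naive Bayes setup, is to mark the scope of a negator by prefixing following tokens until the clause ends; a minimal pure-Python sketch (the NEG_ convention and the negator list are assumptions, not from the answer):

```python
import re

NEGATORS = {"not", "no", "never", "n't"}
CLAUSE_END = re.compile(r"[.,;:!?]")

def mark_negation(tokens):
    """Prefix NEG_ to every token after a negator, up to the next
    clause-ending punctuation, so that 'not expected to win' produces
    distinct features (NEG_expected, NEG_win) for the classifier."""
    out, negated = [], False
    for tok in tokens:
        if CLAUSE_END.fullmatch(tok):
            negated = False          # clause boundary ends the negation scope
            out.append(tok)
        elif tok.lower() in NEGATORS:
            negated = True
            out.append(tok)
        else:
            out.append("NEG_" + tok if negated else tok)
    return out
```

This way "expected to win" and "not expected to win" no longer share the same bag-of-words features.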
0
10,802,164
0
0
0
0
2
false
3
2012-05-29T13:20:00.000
0
3
0
Saving large Python arrays to disk for re-use later --- hdf5? Some other method?
10,800,039
0
python,database,arrays,save,hdf5
I would use a single file with fixed record length for this use case. No specialised DB solution (which seems overkill to me in this case), just plain old struct (see the documentation for the struct module) and read()/write() on a file. If you have just millions of entries, everything should be working nicely in a single file of s...
I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used / manipulated later. I have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point ...
0
1
1,123
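The fixed-record-length approach above can be sketched with the stdlib struct module; the record layout (a timestamp double plus a value double) is an assumed stand-in for the asker's real fields:

```python
import struct

# Each record is a fixed-size pack of (timestamp, value), so record i lives
# at byte offset i * RECORD.size and can be read back by seeking, without
# scanning the whole file.
RECORD = struct.Struct("<dd")          # little-endian: timestamp, value

def append_record(path, timestamp, value):
    with open(path, "ab") as f:
        f.write(RECORD.pack(timestamp, value))

def read_record(path, index):
    with open(path, "rb") as f:
        f.seek(index * RECORD.size)
        return RECORD.unpack(f.read(RECORD.size))
```

Random access stays O(1) however large the file grows, which is the main advantage over re-parsing CSV.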
0
10,817,026
0
0
0
0
2
true
3
2012-05-29T13:20:00.000
2
3
0
Saving large Python arrays to disk for re-use later --- hdf5? Some other method?
10,800,039
1.2
python,database,arrays,save,hdf5
HDF5 is an excellent choice! It has a nice interface and is widely used (in the scientific community at least); many programs support it (MATLAB, for example), and there are libraries for C, C++, Fortran, Python, ... It has a complete toolset to display the contents of an HDF5 file. If you later want to do complex MPI calc...
I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used / manipulated later. I have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point ...
0
1
1,123
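A minimal h5py sketch of the HDF5 route recommended above; the dataset name, shape, and attribute are invented for illustration:

```python
import numpy as np
import h5py

# Write a small array with compression plus a metadata attribute, then read
# back only a slice: HDF5 reads just the region you ask for, which is the
# point for data too large to hold in memory.
data = np.arange(12.0).reshape(3, 4)
with h5py.File("samples.h5", "w") as f:
    dset = f.create_dataset("covariates", data=data, compression="gzip")
    dset.attrs["units"] = "volts"          # metadata travels with the data

with h5py.File("samples.h5", "r") as f:
    block = f["covariates"][1:, :2]        # partial read: rows 1.., cols 0-1
```

Tools like h5dump or HDFView can then inspect the file without any custom code.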
0
10,807,433
0
0
0
0
1
true
1
2012-05-29T19:22:00.000
3
4
0
Change color weight of raw image file
10,805,356
1.2
java,python,image-processing,rgb
Typical white-balance issues are caused by differing proportions of red, green, and blue in the makeup of the light illuminating a scene, or differences in the sensitivities of the sensors to those colors. These errors are generally linear, so you correct for them by multiplying by the inverse of the error. Suppose you...
I am working on a telescope project and we are testing the CCD. Whenever we take pictures things are slightly pink-tinted and we need true color to correctly image galactic objects. I am planning on writing a small program in python or java to change the color weights but how can I access the weight of the color in a...
0
1
1,116
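The linear correction described above can be sketched in a few lines of NumPy; the measured grey-patch values are invented, standing in for what the CCD actually reports for a neutral target:

```python
import numpy as np

# White-balance fix: measure the RGB of a patch that should be neutral grey,
# then multiply each channel by the inverse of its error (here, the ratio of
# the intended grey level to the measured one). The patch values are made up.
img = np.array([[[200, 180, 190]]], dtype=np.float64)   # pink-tinted "white"

measured_grey = np.array([200.0, 180.0, 190.0])  # what a grey card reads as
target_grey = measured_grey.mean()               # what it should read as
gains = target_grey / measured_grey              # per-channel correction

corrected = np.clip(img * gains, 0, 255).astype(np.uint8)
```

Because the error is linear, the same three gains apply to every pixel in the frame.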
0
39,332,002
0
0
0
0
1
false
7
2012-05-30T10:19:00.000
3
2
0
Using scipy to perform discrete integration of the sample
10,814,353
0.291313
python,integration,scipy,integral
There is only one method in SciPy that does cumulative integration: scipy.integrate.cumtrapz(). It does what you want as long as you don't specifically need Simpson's rule or another method; for that, you can, as suggested, always write the loop yourself.
I am trying to port from labview to python. In labview there is a function "Integral x(t) VI" that takes a set of samples as input, performs a discrete integration of the samples and returns a list of values (the areas under the curve) according to Simpsons rule. I tried to find an equivalent function in scipy, e.g. s...
0
1
26,770
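A short sketch of the cumtrapz() route suggested above; note that newer SciPy renamed it to cumulative_trapezoid, so the import tries both:

```python
import numpy as np

# Cumulative integral of sampled data, as in LabVIEW's "Integral x(t) VI"
# (trapezoid rule rather than Simpson's, per the answer's caveat).
try:
    from scipy.integrate import cumulative_trapezoid
except ImportError:                        # older SciPy spells it cumtrapz
    from scipy.integrate import cumtrapz as cumulative_trapezoid

t = np.linspace(0.0, 1.0, 101)
y = 2.0 * t                                # integrand: the integral of 2t is t**2
area = cumulative_trapezoid(y, t, initial=0.0)   # running area under the curve
```

initial=0.0 keeps the output the same length as the input, one running total per sample.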
0
11,115,655
0
1
0
0
1
false
18
2012-06-01T01:12:00.000
1
4
0
Python multiprocessing design
10,843,240
0.049958
python,iteration,multiprocessing,gdal
As Python is not really meant for intensive number-crunching, I typically convert time-critical parts of a Python program to C/C++, which speeds things up a lot. Also, Python multithreading is not very good: the interpreter holds a global lock (the GIL) for all kinds of things. So even when you use the Threads that p...
I have written an algorithm that takes geospatial data and performs a number of steps. The input data are a shapefile of polygons and covariate rasters for a large raster study area (~150 million pixels). The steps are as follows: Sample points from within polygons of the shapefile For each sampling point, extract val...
0
1
2,043
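Since the global lock mentioned above keeps threads from running Python bytecode in parallel, multiprocessing sidesteps it with separate processes; a minimal sketch of farming the per-point work out to a pool (extract_covariates is a hypothetical stand-in for the asker's real per-point extraction):

```python
from multiprocessing import Pool

def extract_covariates(point):
    """Placeholder for the real per-sample-point raster extraction."""
    x, y = point
    return x * x + y * y

if __name__ == "__main__":
    points = [(i, i + 1) for i in range(8)]
    # chunksize batches points per worker to cut inter-process overhead
    with Pool(processes=4) as pool:
        results = pool.map(extract_covariates, points, chunksize=2)
```

Each worker is a full interpreter with its own GIL, so CPU-bound work genuinely runs in parallel, at the cost of pickling the inputs and outputs between processes.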
0
10,857,757
0
0
0
1
1
false
1
2012-06-01T14:01:00.000
1
1
0
xlrd - append data to already opened workbook
10,851,726
0.197375
python,xlrd,xlwt
Not directly. xlutils can use xlrd and xlwt to copy a spreadsheet, and appending to a "to be written" worksheet is straightforward. I don't think reading the open spreadsheet is a problem -- but xlwt will not write to the open book/sheet. You might write an Excel VBA macro to draw the graphs. In principle, I think a...
I am trying to write a python program for appending live stock quotes from a csv file to an excel file (which is already open) using xlrd and xlwt. The task is summarised below. From my stock-broker's application, a csv file is continually being updated on my hard disk. I wish to write a program which, when run, would ...
0
1
1,299
0
10,875,787
0
0
0
0
1
false
0
2012-06-04T00:09:00.000
0
2
0
"Cloning" a corpus in NLTK?
10,874,994
0
python,nlp,nltk,corpus
Why don't you define a new corpus by copying the definition of movie_reviews in nltk.corpus? You can do this with whatever directories you want: copy the directory structure and replace the files.
I'm attempting to create my own corpus in NLTK. I've been reading some of the documentation on this and it seems rather complicated... all I wanted to do is "clone" the movie reviews corpus but with my own text. Now, I know I can just change files in the move reviews corpus to my own... but that limits me to working wi...
0
1
301
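Defining your own categorized corpus the way movie_reviews is defined can be sketched with NLTK's CategorizedPlaintextCorpusReader; the pos/neg directory layout, file names, and regexes below mirror the movie_reviews convention and are assumptions for illustration:

```python
import os
import tempfile
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

# Build a tiny corpus on disk: one review per category, movie_reviews-style.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pos"))
os.makedirs(os.path.join(root, "neg"))
with open(os.path.join(root, "pos", "r1.txt"), "w") as f:
    f.write("a fine film")
with open(os.path.join(root, "neg", "r1.txt"), "w") as f:
    f.write("a dull film")

# The fileids regex selects the documents; cat_pattern maps each file id to
# its category from the directory name.
my_reviews = CategorizedPlaintextCorpusReader(
    root, r"(pos|neg)/.*\.txt", cat_pattern=r"(pos|neg)/.*")
```

The resulting reader supports the same calls as movie_reviews (words(), fileids(), categories()), so existing classifier code carries over unchanged.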
0
10,886,261
0
0
0
0
2
false
2
2012-06-04T18:07:00.000
2
6
0
Multiply each pixel in an image by a factor
10,885,984
0.066568
python,image-processing,python-imaging-library,rgb,pixel
As a basic optimization, it may save a little time if you create 3 lookup tables, one each for R, G, and B, to map the input value (0-255) to the output value (0-255). Looking up an array entry is probably faster than multiplying by a decimal value and rounding the result to an integer. Not sure how much faster. Of cou...
I have an image that is created by using a bayer filter and the colors are slightly off. I need to multiply RG and B of each pixel by a certain factor ( a different factor for R, G and B each) to get the correct color. I am using the python imaging library and of course writing in python. is there any way to do this...
0
1
12,396
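The lookup-table optimization described above can be sketched in pure Python: precompute the output for all 256 possible channel values once, so correcting a pixel becomes three list lookups instead of three multiply-and-round operations. The gain factors are invented for illustration:

```python
# Hypothetical per-channel correction factors for the Bayer-filter color cast.
R_GAIN, G_GAIN, B_GAIN = 0.95, 1.02, 1.10

def make_lut(factor):
    """256-entry table mapping each input level to its scaled, clamped output."""
    return [min(255, int(round(i * factor))) for i in range(256)]

R_LUT, G_LUT, B_LUT = (make_lut(f) for f in (R_GAIN, G_GAIN, B_GAIN))

def correct_pixel(r, g, b):
    # three lookups replace three float multiplies + rounds per pixel
    return R_LUT[r], G_LUT[g], B_LUT[b]
```

If you are on PIL, the same tables can be handed to Image.point() (concatenated as one 768-entry list for an RGB image) so the lookup runs in C rather than in a Python loop.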