Dataset schema — 22 columns, one record per Stack Overflow question–answer pair. The eight topic columns (GUI and Desktop Applications, Networking and APIs, Python Basics and Environment, Other, Database and SQL, System Administration and DevOps, Web Development, Data Science and Machine Learning) are binary 0/1 flags.

Column                              Dtype    Min         Max
GUI and Desktop Applications        int64    0           1
A_Id                                int64    5.3k        72.5M
Networking and APIs                 int64    0           1
Python Basics and Environment       int64    0           1
Other                               int64    0           1
Database and SQL                    int64    0           1
Available Count                     int64    1           13
is_accepted                         bool     2 classes
Q_Score                             int64    0           1.72k
CreationDate                        string   23 chars    23 chars
Users Score                         int64    -11         327
AnswerCount                         int64    1           31
System Administration and DevOps    int64    0           1
Title                               string   15 chars    149 chars
Q_Id                                int64    5.14k       60M
Score                               float64  -1          1.2
Tags                                string   6 chars     90 chars
Answer                              string   18 chars    5.54k chars
Question                            string   49 chars    9.42k chars
Web Development                     int64    0           1
Data Science and Machine Learning   int64    1           1
ViewCount                           int64    7           3.27M

Sample records (topic flags listed only where set to 1):
----
Title: Plotting dictionaries within a dictionary in Myplotlib python
Q_Id: 21,507,956 | A_Id: 21,508,062 | CreationDate: 2014-02-02T06:58:00.000
Tags: python,matplotlib | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 3 | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 6,297
Question: I need help plotting a dictionary, below is the data sample data set. I want to create a graph where x:y are (x,y) coordinates and title'x' would be the title of the graph.. I want to create individual graphs for each data set so one for title1':{x:y, x:y}, another one for title2:{x:y, x:y}....and so on. Any help woul...
Answer: Creating sample data In [3]: data = {'title1': {10:20, 4:10}, 'title2':{8:10, 9:20, 10:30}} In [4]: data Out[4]: {'title1': {4: 10, 10: 20}, 'title2': {8: 10, 9: 20, 10: 30}} Iterating over data; creating x and y for each title and plotting it in new figure In [5]: for title, data_dict in data.iteritems(): ...: ...

----
Title: Handling K-means with large dataset 6gb with scikit-learn?
Q_Id: 21,532,724 | A_Id: 21,532,842 | CreationDate: 2014-02-03T16:39:00.000
Tags: python,scikit-learn | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.099668 | Users Score: 1 | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,177
Question: I am using scikit-learn. I want to cluster a 6gb dataset of documents and find clusters of documents. I only have about 4Gb ram though. Is there a way to get k-means to handle large datasets in scikit-learn? Thank you, Please let me know if you have any questions.
Answer: Clustering is not in itself that well-defined a problem (a 'good' clustering result depends on your application) and k-means algorithm only gives locally optimal solutions based on random initialization criteria. Therefore I doubt that the results you would get from clustering a random 2GB subsample of the dataset woul...

----
Title: Determine if determinant is exactly zero
Q_Id: 21,612,677 | A_Id: 21,613,541 | CreationDate: 2014-02-06T19:54:00.000
Tags: python,math,numpy,linear-algebra | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 0 | Q_Score: 4 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,357
Question: I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding fal...
Answer: As the entries in the matrices are either 1 or 0 the smallest non-zero absolute value of a determinant is 1. So there is no need to fear a true non-zero value that is very close to 0. Alternatively one can apparently use sympy to get an exact answer.

----
Title: Determine if determinant is exactly zero
Q_Id: 21,612,677 | A_Id: 21,613,054 | CreationDate: 2014-02-06T19:54:00.000
Tags: python,math,numpy,linear-algebra | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.119427 | Users Score: 3 | Q_Score: 4 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,357
Question: (same text as the previous record; second answer to Q_Id 21,612,677)
Answer: You can use Gaussian elimination to bring the matrix to a triangular form. Since your elements are all 0 or 1, the calculation even using floating point arithmetic will be exact (you are only multiplying/dividing/adding/subtracting by -1, 0 and 1, which is exact). The determinant is then 0 if one element of the diagon...

----
Title: The predict method shows standardized probability?
Q_Id: 21,633,136 | A_Id: 21,645,757 | CreationDate: 2014-02-07T16:38:00.000
Tags: python-2.7,probability,scikit-learn,prediction,adaboost | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 213
Question: I'm using the AdaBoostClassifier in Scikit-learn and always get an average probability of 0.5 regardless of how unbalanced the training sets are. The class predictions (predict_) seems to give correct estimates, but these aren't reflected in the predict_probas method which always average to 0.5. If my "real" probabilit...
Answer: Do you mean you get probabilities per sample that are 1/n_classes on average? That's necessarily the case; the probabilities reported by predict_proba are the conditional class probability distribution P(y|X) over all values for y. To produce different probabilities, perform any necessary computations according to your...
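The two determinant answers above (Q_Id 21,612,677) describe computing an exact determinant for integer matrices. As an editorial illustration (not part of the dataset; all names are mine), here is a minimal stdlib-only sketch of exact Gaussian elimination over fractions, which realizes the "exact answer" both posts allude to:

```python
from fractions import Fraction

def det_exact(matrix):
    """Exact determinant via fraction-based Gaussian elimination (stdlib only)."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # Find a pivot row with a non-zero entry in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det                  # a row swap flips the sign
        det *= a[col][col]
        inv = a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / inv
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

print(det_exact([[1, 1], [1, 1]]))  # 0 (rows are equal, so singular)
print(det_exact([[1, 0], [0, 1]]))  # 1 (identity)
```

For (0,1)-matrices this also confirms the accepted answer's observation: the determinant is an integer, so any non-zero value has absolute value at least 1.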
----
Title: Error when trying to sum an array by block's
Q_Id: 21,640,028 | A_Id: 24,217,870 | CreationDate: 2014-02-08T00:08:00.000
Tags: python,arrays,numpy | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.099668 | Users Score: 1 | Q_Score: 3 | AnswerCount: 2 | Available Count: 1 | ViewCount: 4,600
Question: I have a large dataset stored in a numpy array (A) I am trying to sum by block's using: B=numpy.add.reduceat(numpy.add.reduceat(A, numpy.arange(0, A.shape[0], n),axis=0), numpy.arange(0, A.shape[1], n), axis=1) it work's fine when i try it on a test array but with my data's I get the following message: TypeError: Can...
Answer: In case anyone else has a similar problem but the chosen answer doesn't solve it, one possibility could be that in Python3, some index or integer quantity fed into a np function is an expression using '/' for example n/2, which ought to be '//'.

----
Title: nltk interface to stanford parser
Q_Id: 21,652,251 | A_Id: 21,656,184 | CreationDate: 2014-02-08T21:53:00.000
Tags: python,nlp,nltk,stanford-nlp | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: -0.197375 | Users Score: -2 | Q_Score: 2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 5,543
Question: I am getting problems to access Stanford parser through python NLTK (they developed an interface for NLTK) import nltk.tag.stanford Traceback (most recent call last): File "", line 1, in ImportError: No module named stanford
Answer: There is no module named stanford in NLTK.You can store output of stanford parser and make use of it through python program.

----
Title: Converting large SAS dataset to hdf5
Q_Id: 21,667,547 | A_Id: 21,668,162 | CreationDate: 2014-02-10T01:17:00.000
Tags: python,pandas,sas,hdf5 | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 1 | Q_Score: 10 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,301
Question: I have multiple large (>10GB) SAS datasets that I want to convert for use in pandas, preferably in HDF5. There are many different data types (dates, numerical, text) and some numerical fields also have different error codes for missing values (i.e. values can be ., .E, .C, etc.) I'm hoping to keep the column names and ...
Answer: I haven't had much luck with this in the past. We (where I work) just use Tab separated files for transport between SAS and Python -- and we do it a lot. That said, if you are on Windows, you can attempt to setup an ODBC connection and write the file that way.

----
Title: Nearest Neighbors in Python given the distance matrix
Q_Id: 21,675,570 | A_Id: 23,423,563 | CreationDate: 2014-02-10T11:11:00.000
Tags: python,machine-learning,scipy,scikit-learn | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 7 | AnswerCount: 4 | Available Count: 1 | ViewCount: 5,868
Question: I have to apply Nearest Neighbors in Python, and I am looking ad the scikit-learn and the scipy libraries, which both require the data as input, then will compute the distances and apply the algorithm. In my case I had to compute a non-conventional distance, therefore I would like to know if there is a way to directly ...
Answer: Want to add to ford's answer that you have to do like this metric = DistanceMetric.get_metric('pyfunc',func=/your function name/) You cannot just put your own function as the second argument, you must name the argument as "func"

----
Title: How to install numpy in OSX properly?
Q_Id: 21,685,980 | A_Id: 21,687,176 | CreationDate: 2014-02-10T19:18:00.000
Tags: python,macos,numpy | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 2 | Q_Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 273
Question: I'm using the built in python version in OSX, I also installed pip by sudo easy_install pip and secondly I installed numpy by sudo pip install numpy. However, when I run any python file which uses numpy I get an error message like: Import error: No module named numpy Like numpy isn't installed in system. When I cal...
Answer: Using the built-in python for OS X is not recommended and will likely cause more headaches in the future (assuming it's not behind your current problems). Assuming your python is fine, there's still the issue of getting numpy working. In my experience, installing numpy with pip will often run into problems. In addition...
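The block-summing record above (Q_Id 21,640,028) builds index arrays for np.add.reduceat, and its answer points at the Python 3 `/` vs `//` pitfall. A runnable sketch of that idea (my own illustration, not from the dataset; assumes NumPy is installed):

```python
import numpy as np

def block_sum(a, n):
    """Sum a 2-D array in n-by-n blocks with np.add.reduceat.

    The index arrays must hold integers; in Python 3 any index derived
    with '/' would be a float, which is why '//' must be used instead.
    """
    rows = np.arange(0, a.shape[0], n)
    cols = np.arange(0, a.shape[1], n)
    return np.add.reduceat(np.add.reduceat(a, rows, axis=0), cols, axis=1)

a = np.arange(16).reshape(4, 4)
print(block_sum(a, 2))  # [[10 18] [42 50]]
```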
----
Title: replace rows in a pandas data frame
Q_Id: 21,723,830 | A_Id: 59,060,580 | CreationDate: 2014-02-12T09:34:00.000
Tags: python,pandas,dataframe | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 14 | AnswerCount: 2 | Available Count: 1 | ViewCount: 63,571
Question: I want to start with an empty data frame and then add to it one row each time. I can even start with a 0 data frame data=pd.DataFrame(np.zeros(shape=(10,2)),column=["a","b"]) and then replace one line each time. How can I do that?
Answer: If you are replacing the entire row then you can just use an index and not need row,column slices. ... data.loc[2]=5,6

----
Title: PyObjC: How can one use NSCoding to implement python pickling?
Q_Id: 21,730,339 | A_Id: 21,734,013 | CreationDate: 2014-02-12T14:12:00.000
Tags: python,pickle,nscoding,pyobjc | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0.099668 | Users Score: 1 | Q_Score: 2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 261
Question: Title says it all. It seems like it ought be possible (somehow) to implement python-side pickling for PyObjC objects whose Objective-C classes implement NSCoding without re-implementing everything from scratch. That said, while value-semantic members would probably be straightforward, by-reference object graphs and con...
Answer: PyObjC does support writing Python objects to a (keyed) archive (that is, any object that can be pickled implements NSCoding). That’s probably the easiest way to serialize arbitrary graphs of Python and Objective-C objects. As I wrote in the comments for another answer I ran into problems when trying to find a way to...

----
Title: PyObjC: How can one use NSCoding to implement python pickling?
Q_Id: 21,730,339 | A_Id: 21,733,669 | CreationDate: 2014-02-12T14:12:00.000
Tags: python,pickle,nscoding,pyobjc | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 261
Question: (same text as the previous record; second answer to Q_Id 21,730,339)
Answer: Shouldn't it be pretty straightforward? On pickling, call encodeWithCoder on the object using an NSArchiver or something. Have pickle store that string. On unpickling, use NSUnarchiver to create an NSObject from the pickled string.

----
Title: Best format to pack data for correlation determination?
Q_Id: 21,740,498 | A_Id: 21,740,743 | CreationDate: 2014-02-12T21:50:00.000
Tags: python,csv,scipy,correlation | Topics: Web Development; Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 1 | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 73
Question: I'm using a Java program to extract some data points, and am planning on using scipy to determine the correlation coefficients. I plan on extracting the data into a csv-style file. How should I format each corresponding dataset, so that I can easily read it into scipy?
Answer: Each dataset is a column and all the datasets combined to make a CSV. It get read as a 2D array by numpy.genfromtxt() and then call numpy.corrcoef() to get correlation coefficients. Note: you should also consider the same data layout, but using pandas. Read CSV into a dataframe by pandas.read_csv() and get the correlat...

----
Title: Python CSV reader start at line_num
Q_Id: 21,762,173 | A_Id: 21,762,285 | CreationDate: 2014-02-13T18:09:00.000
Tags: python,csv | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 4 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,904
Question: I need to read a CSV with a couple million rows. The file grows throughout the day. After each time I process the file (and zip each row into a dict), I start the process over again, except creating the dict only for the new lines. In order to get to the new lines though, I have to iterate over each line with CSV r...
Answer: If I were doing this I think I would add a marker line after each read - before the file is saved again , then I would read the file in as a string , split on the marker, convert back to a list and feed the list to the process.
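The correlation record above (Q_Id 21,740,498) suggests a one-dataset-per-column CSV layout read into a 2-D array and fed to numpy.corrcoef. A small sketch of that layout (editorial illustration with made-up numbers, not from the dataset; assumes NumPy):

```python
import numpy as np

# Two datasets laid out as columns, the CSV layout the answer recommends.
data = np.array([
    [1.0, 2.0],
    [2.0, 4.1],
    [3.0, 6.2],
    [4.0, 7.9],
])
# np.corrcoef treats rows as variables by default; rowvar=False says
# "variables are columns", matching the CSV layout.
r = np.corrcoef(data, rowvar=False)
print(round(float(r[0, 1]), 3))  # 0.999 (the columns are nearly linear)
```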
----
Title: How to read a large image in chunks in python?
Q_Id: 21,765,647 | A_Id: 21,765,862 | CreationDate: 2014-02-13T21:16:00.000
Tags: python,image,image-processing,numpy,pytables | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 4 | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 3,371
Question: I'm trying to compute the difference in pixel values of two images, but I'm running into memory problems because the images I have are quite large. Is there way in python that I can read an image lets say in 10x10 chunks at a time rather than try to read in the whole image? I was hoping to solve the memory problem by r...
Answer: You can use numpy.memmap and let the operating system decide which parts of the image file to page in or out of RAM. If you use 64-bit Python the virtual memory space is astronomic compared to the available RAM.

----
Title: Detecting certain columns and deleting these
Q_Id: 21,773,514 | A_Id: 21,773,776 | CreationDate: 2014-02-14T07:45:00.000
Tags: python,pandas | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 72
Question: I have a dataframe where some columns (not row) are like ["","","",""]. Those columns with that characteristic I would like to delete. Is there an efficient way of doing that?
Answer: In pandas it would be del df['columnname'].

----
Title: Making a Python unit test that never runs in parallel
Q_Id: 21,773,821 | A_Id: 21,775,708 | CreationDate: 2014-02-14T08:03:00.000
Tags: python,unit-testing,python-unittest | Topics: Python Basics and Environment; Other; Data Science and Machine Learning
is_accepted: false | Score: -0.039979 | Users Score: -1 | Q_Score: 7 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,425
Question: tl;dr - I want to write a Python unittest function that deletes a file, runs a test, and the restores the file. This causes race conditions because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that happen at the same time. Long Specific Example: I have ...
Answer: The problem is that the name of config_custom.csv should itself be a configurable parameter. Then each test can simply look for config_custom_<nonce>.csv, and any number of tests may be run in parallel. Cleanup of the overall suite can just clear out config_custom_*.csv, since we won't be needing any of them at that po...

----
Title: Making a Python unit test that never runs in parallel
Q_Id: 21,773,821 | A_Id: 34,140,669 | CreationDate: 2014-02-14T08:03:00.000
Tags: python,unit-testing,python-unittest | Topics: Python Basics and Environment; Other; Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 7 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,425
Question: (same text as the previous record; second answer to Q_Id 21,773,821)
Answer: The best testing strategy would be to make sure your testing on disjoint data sets. This will bypass any race conditions and make the code simpler. I would also mock out open or __enter__ / __exit__ if your using the context manager. This will allow you to fake the event that a file doesn't exist.

----
Title: filter pandas dataframe for timedeltas
Q_Id: 21,790,816 | A_Id: 21,791,001 | CreationDate: 2014-02-14T22:43:00.000
Tags: python-2.7,pandas | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.291313 | Users Score: 3 | Q_Score: 3 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,238
Question: I got a pandas dataframe, containing timestamps 'expiration' and 'date'. I want to filter for rows with a certain maximum delta between expiration and date. When doing fr.expiration - fr.date I obtain timedelta values, but don't know how to get a filter criteria such as fr[timedelta(fr.expiration-fr.date)<=60days]
Answer: for the 60 days you're looking to compare to, create a timedelta object of that value timedelta(days=60) and use that for the filter. and if you're already getting timedelta objects from the subtraction, recasting it to a timedelta seems unnecessary. and finally, make sure you check the signs of the timedeltas you're c...
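The large-image record above (Q_Id 21,765,647) recommends numpy.memmap so the OS pages only the touched parts of the file into RAM. A self-contained sketch (editorial illustration, not from the dataset; the file here is a tiny stand-in for a real image):

```python
import os
import tempfile
import numpy as np

# Create a fake "large image" on disk as raw bytes.
path = os.path.join(tempfile.mkdtemp(), "image.dat")
np.arange(100, dtype=np.uint8).reshape(10, 10).tofile(path)

# Map the file instead of loading it; slicing touches only that window,
# so the OS pages in just the needed chunk.
img = np.memmap(path, dtype=np.uint8, mode="r", shape=(10, 10))
tile = img[0:5, 0:5]
print(int(tile.sum()))  # 550
```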
----
Title: Computing K-means clustering on Location data in Python
Q_Id: 21,802,946 | A_Id: 21,824,056 | CreationDate: 2014-02-15T20:07:00.000
Tags: python,scikit-learn,cluster-analysis,data-mining,k-means | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.197375 | Users Score: 2 | Q_Score: 2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 976
Question: I have a dataset of users and their music plays, with every play having location data. For every user i want to cluster their plays to see if they play music in given locations. I plan on using the sci-kit learn k-means package, but how do I get this to work with location data, as opposed to its default, euclidean dist...
Answer: Is the data already in vector space e.g. gps coordinates? If so you can cluster on it directly, lat and lon are close enough to x and y that it shouldn't matter much. If not, preprocessing will have to be applied to convert it to a vector space format (table lookup of locations to coords for instance). Euclidean distan...

----
Title: Computing K-means clustering on Location data in Python
Q_Id: 21,802,946 | A_Id: 21,825,022 | CreationDate: 2014-02-15T20:07:00.000
Tags: python,scikit-learn,cluster-analysis,data-mining,k-means | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 5 | Q_Score: 2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 976
Question: (same text as the previous record; second answer to Q_Id 21,802,946)
Answer: Don't use k-means with anything other than Euclidean distance. K-means is not designed to work with other distance metrics (see k-medians for Manhattan distance, k-medoids aka. PAM for arbitrary other distance functions). The concept of k-means is variance minimization. And variance is essentially the same as squared E...

----
Title: Is it possible to run Python's scikit-learn algorithms over Hadoop?
Q_Id: 21,826,863 | A_Id: 21,828,293 | CreationDate: 2014-02-17T10:42:00.000
Tags: python,hadoop,machine-learning,bigdata,scikit-learn | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.291313 | Users Score: 3 | Q_Score: 7 | AnswerCount: 2 | Available Count: 1 | ViewCount: 5,638
Question: I know it is possible to use python language over Hadoop. But is it possible to use scikit-learn's machine learning algorithms on Hadoop ? If the answer is no, is there some machine learning library for python and Hadoop ? Thanks for your Help.
Answer: Look out for jpype module. By using jpype you can run Mahout Algorithms and you will be writing code in Python. However I feel this won't be the best of solution. If you really want massive scalability than go with Mahout directly. I practice, do POC's, solve toy problems using scikit-learn, however when I need to do m...

----
Title: least cpu-expensive way to find the two most (and least) remote vertices of a graph [igraph]
Q_Id: 21,836,929 | A_Id: 21,840,597 | CreationDate: 2014-02-17T18:47:00.000
Tags: python,igraph,shortest-path | Topics: Networking and APIs; Data Science and Machine Learning
is_accepted: false | Score: 0.379949 | Users Score: 2 | Q_Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 99
Question: In igraph, what's the least cpu-expensive way to find: the two most remote vertices (in term of shortest distances form one another) of a graph. Unlike the farthest.points() function, which chooses the first found pair of vertices with the longest shortest distance if more than one pair exists, I'd like to randomly se...
Answer: For the first question, you can find all shortest paths, and then choose between the pairs making up the longest distances. I don't really understand the second question. If you are searching for unweighted paths, then every pair of vertices at both ends of an edge have the minimum distance (1). That is, if you don't c...

----
Title: i have python 33 but unable to import numpy and matplotlib package
Q_Id: 21,846,661 | A_Id: 59,014,783 | CreationDate: 2014-02-18T07:13:00.000
Tags: python,numpy,matplotlib | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.066568 | Users Score: 1 | Q_Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 275
Question: I am unable to import numpy and matplotlib package in python33. I am getting this error. I have tried to install this two packages but unable to import. I am getting the following error: import numpy Traceback (most recent call last): File "", line 1, in import numpy ImportError: No mo...
Answer: I would suggest that first uninstall numpy and matplotlib using pip uninstall command then install again using pip install from python command line terminal and restart your system.
----
Title: warning about too many open figures
Q_Id: 21,884,271 | A_Id: 71,455,335 | CreationDate: 2014-02-19T15:00:00.000
Tags: python,python-3.x,matplotlib | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.028564 | Users Score: 1 | Q_Score: 233 | AnswerCount: 7 | Available Count: 1 | ViewCount: 164,804
Question: In a script where I create many figures with fix, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. However, I don't understand wh...
Answer: matplotlib by default keeps a reference of all the figures created through pyplot. If a single variable used for storing matplotlib figure (e.g "fig") is modified and rewritten without clearing the figure, all the plots are retained in RAM memory. Its important to use plt.cla() and plt.clf() instead of modifying and re...

----
Title: SciPy - Constrained Minimization derived from a Directed Graph
Q_Id: 21,893,973 | A_Id: 21,894,459 | CreationDate: 2014-02-19T22:28:00.000
Tags: python,algorithm,graph,scipy,mathematical-optimization | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 200
Question: I'm looking for a solution to the following graph problem in order to perform graph analysis in Python. Basically, I have a directed graph of N nodes where I know the following: The sum of the weights of the out-edges for each node The sum of the weights of the in-edges for each node Following from the above, the sum...
Answer: The prohibition against self-flows makes some instances of this problem infeasible (e.g., one node that has in- and out-flows of 1). Otherwise, a reasonably sparse solution with at most one self-flow always can be found as follows. Initialize two queues, one for the nodes with positive out-flow from lowest ID to highes...

----
Title: NumPy Array Copy-On-Write
Q_Id: 21,896,030 | A_Id: 21,900,644 | CreationDate: 2014-02-20T01:06:00.000
Tags: python,numpy,copy-on-write | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 5 | Q_Score: 4 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,415
Question: I have a class that returns large NumPy arrays. These arrays are cached within the class. I would like the returned arrays to be copy-on-write arrays. If the caller ends up just reading from the array, no copy is ever made. This will case no extra memory will be used. However, the array is "modifiable", but does not mo...
Answer: Copy-on-write is a nice concept, but explicit copying seems to be "the NumPy philosophy". So personally I would keep the "readonly" solution if it isn't too clumsy. But I admit having written my own copy-on-write wrapper class. I don't try to detect write access to the array. Instead the class has a method "get_array(r...

----
Title: How to label certain x values
Q_Id: 21,918,718 | A_Id: 21,919,317 | CreationDate: 2014-02-20T20:23:00.000
Tags: python,matplotlib,plot,weather | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0.066568 | Users Score: 1 | Q_Score: 0 | AnswerCount: 3 | Available Count: 2 | ViewCount: 2,934
Question: I want to plot weather data over the span of several days every half hour, but I only want to label the days at the start as a string in the format 'mm/dd/yy'. I want to leave the rest unmarked. I would also want to control where such markings are placed along the x axis, and control the range of the axis. I also want ...
Answer: Matplotlib xticks are your friend. Will allow you to set where the ticks appear. As for date formatting, make sure you're using dateutil objects, and you'll be able to handle the formatting.

----
Title: How to label certain x values
Q_Id: 21,918,718 | A_Id: 21,919,748 | CreationDate: 2014-02-20T20:23:00.000
Tags: python,matplotlib,plot,weather | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0.132549 | Users Score: 2 | Q_Score: 0 | AnswerCount: 3 | Available Count: 2 | ViewCount: 2,934
Question: (same text as the previous record; second answer to Q_Id 21,918,718)
Answer: You can use a DayLocator as in: plt.gca().xaxis.set_major_locator(dt.DayLocator()) And DateFormatter as in: plt.gca().xaxis.set_major_formatter(dt.DateFormatter("%d/%m/%Y")) Note: import matplotlib.dates as dt
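The copy-on-write record above (Q_Id 21,896,030) describes a cache that hands out a read-only view by default and copies only for writers. A minimal sketch of that "readonly solution" (editorial illustration; the class and method names are mine, not from the post; assumes NumPy):

```python
import numpy as np

class CachedArray:
    """Cache that returns a read-only view unless the caller asks to write."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def get_array(self, readonly=True):
        if readonly:
            view = self._data.view()
            view.flags.writeable = False  # writes raise instead of mutating the cache
            return view
        return self._data.copy()          # writers get their own private copy

cache = CachedArray(np.zeros(3))
rw = cache.get_array(readonly=False)
rw[0] = 99                                # mutates the copy only
print(cache.get_array()[0])               # 0.0 -- the cached array is untouched
```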
----
Title: Extract a certain part of a string after a key phrase using pandas?
Q_Id: 21,947,487 | A_Id: 21,947,575 | CreationDate: 2014-02-21T23:49:00.000
Tags: python,string,pandas,extract | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: -0.099668 | Users Score: -1 | Q_Score: 4 | AnswerCount: 2 | Available Count: 1 | ViewCount: 4,497
Question: I have an NFL dataset with a 'description' column with details about the play. Each successful pass and run play has a string that's structured like: "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)." How do I locate/extract the number after "for" in the str...
Answer: This will grab the number 10 and put it in a variable called yards. x = "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)." yards = (x.split("for ")[-1]).split(" yards")[0]

----
Title: Detect an arc from an image contour or edge
Q_Id: 21,986,356 | A_Id: 21,987,220 | CreationDate: 2014-02-24T11:24:00.000
Tags: python,opencv,image-processing | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.132549 | Users Score: 2 | Q_Score: 1 | AnswerCount: 3 | Available Count: 2 | ViewCount: 5,077
Question: I am trying to detect arcs inside an image. The information that I have for certain with me is the radius of the arc. I can try and maybe get the centre of the circle whose arc I want to identify. Is there any algorithm in Open CV which can tell us that the detected contour ( or edge from canny edge is an arc or an app...
Answer: If you think that there will not be any change in the shape (i mean arc won't become line or something like this) then you can have a look a Generalized Hough Transform (GHT) which can detect any shape you want. Cons: There is no directly function in openCV library for GHT but you can get several source code at intern...

----
Title: Detect an arc from an image contour or edge
Q_Id: 21,986,356 | A_Id: 22,008,350 | CreationDate: 2014-02-24T11:24:00.000
Tags: python,opencv,image-processing | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.066568 | Users Score: 1 | Q_Score: 1 | AnswerCount: 3 | Available Count: 2 | ViewCount: 5,077
Question: (same text as the previous record; second answer to Q_Id 21,986,356)
Answer: You can do it this way: Convert the image to edges using canny filter. Make the image binary using threshold function there is an option for regular threshold, otsu or adaptive. Find contours with sufficient length (findContours function) Iterate all the contours and try to fit ellipse (fitEllipse function) Validate f...

----
Title: Changing what the ends of whiskers represent in matplotlib's boxplot function
Q_Id: 21,997,897 | A_Id: 21,998,600 | CreationDate: 2014-02-24T20:07:00.000
Tags: python,matplotlib | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 18 | Q_Score: 11 | AnswerCount: 2 | Available Count: 1 | ViewCount: 8,736
Question: I understand that the ends of whiskers in matplotlib's box plot function extend to max value below 75% + 1.5 IQR and minimum value above 25% - 1.5 IQR. I would like to change it to represent max and minimum values of the data or the 5th and 95th quartile of the data. Is is possible to do this?
Answer: To get the whiskers to appear at the min and max of the data, set the whis parameter to an arbitrarily large number. In other words: boxplots = ax.boxplot(myData, whis=np.inf). The whis kwarg is a scaling factor of the interquartile range. Whiskers are drawn to the outermost data points within whis * IQR away from the ...

----
Title: Pandas read csv data type
Q_Id: 22,001,176 | A_Id: 22,001,369 | CreationDate: 2014-02-24T23:12:00.000
Tags: python,csv,pandas | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.197375 | Users Score: 1 | Q_Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,866
Question: I'm trying to read a csv with pandas using the read_csv command. However, one of my columns is a 15 digit number which is read in as a float and then truncated to exponential notation. So the entries in this column become 2.09228E+14 instead of the 15 digit number I want. I've tried reading it as a string, but I get...
Answer: Just do str(int(float('2.09228E+14'))) which should give you '209228000000000'
----
Title: Large dataset - no error - but it wont run - python memory issue?
Q_Id: 22,024,577 | A_Id: 22,024,672 | CreationDate: 2014-02-25T19:45:00.000
Tags: python,numpy | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0 | Users Score: 0 | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 107
Question: So I am trying to run various large images which gets put into an array using numpy so that I can then do some calculations. The calculations get done per image and the opening and closing of each image is done in a loop. I a have reached a frustration point because I have no errors in the code (well none to my knowled...
Answer: The exit code of the python process should reveal the reason for the process exiting. In the event of an adverse condition, the exit code will be something other than 0. If you are running in a Bash shell or similar, you can run "echo $?" in your shell after running Python to see its exit status. If the exit status is ...

----
Title: Python: Fast and efficient way of writing large text file
Q_Id: 22,026,393 | A_Id: 22,026,711 | CreationDate: 2014-02-25T21:17:00.000
Tags: python,python-2.7,file-io,dataframe,string-concatenation | Topics: Python Basics and Environment; Data Science and Machine Learning
is_accepted: false | Score: 0.321513 | Users Score: 5 | Q_Score: 5 | AnswerCount: 3 | Available Count: 1 | ViewCount: 15,877
Question: I have a speed/efficiency related question about python: I need to write a large number of very large R dataframe-ish files, about 0.5-2 GB sizes. This is basically a large tab-separated table, where each line can contain floats, integers and strings. Normally, I would just put all my data in numpy dataframe and use np...
Answer: Unless you are running into a performance issue, you can probably write to the file line by line. Python internally uses buffering and will likely give you a nice compromise between performance and memory efficiency. Python buffering is different from OS buffering and you can specify how you want things buffered by se...

----
Title: Scipy: Fitting Data with Two Dimensional Error
Q_Id: 22,029,142 | A_Id: 22,044,916 | CreationDate: 2014-02-26T00:21:00.000
Tags: python,scipy | Topics: Data Science and Machine Learning
is_accepted: true | Score: 1.2 | Users Score: 2 | Q_Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 104
Question: So I already know how to use scipy.optimize.curve_fit for normal fitting needs, but what do I do if both my x data and my y data both have error bars?
Answer: Try scipy.odr. It allows to specify weights/errors in both input and response variable.

----
Title: Difference between numpy.array shape (R, 1) and (R,)
Q_Id: 22,053,050 | A_Id: 65,142,140 | CreationDate: 2014-02-26T20:55:00.000
Tags: python,numpy,matrix,multidimensional-array | Topics: Data Science and Machine Learning
is_accepted: false | Score: 0.049958 | Users Score: 2 | Q_Score: 394 | AnswerCount: 8 | Available Count: 1 | ViewCount: 193,239
Question: In numpy, some of the operations return in shape (R, 1) but some return (R,). This will make matrix multiplication more tedious since explicit reshape is required. For example, given a matrix M, if we want to do numpy.dot(M[:,0], numpy.ones((1, R))) where R is the number of rows (of course, the same issue also occurs c...
Answer: The data structure of shape (n,) is called a rank 1 array. It doesn't behave consistently as a row vector or a column vector which makes some of its operations and effects non intuitive. If you take the transpose of this (n,) data structure, it'll look exactly same and the dot product will give you a number and not a m...

----
Title: write and read on real time pytables
Q_Id: 22,062,837 | A_Id: 22,161,505 | CreationDate: 2014-02-27T08:23:00.000
Tags: python,real-time,pytables | Topics: Data Science and Machine Learning
is_accepted: false | Score: -0.197375 | Users Score: -2 | Q_Score: 2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 996
Question: I am not sure if what I am thinking would be possible, I would need the help from someone experienced working with HDF5/PyTables. The escenario would be like this: Let's say that we have a process, or a machine or a connexion etc, acquiring data, and storing in a HDF5/PyTable format. I will call it store software. Woul...
Answer: This is definitely possible. It is especially easy if you only have one process in 'w' and multiple processes in 'r' mode. Just make sure in your 'w' process to flush() the file and/or the datasets occasionally. If you do this, the 'r' process will be able to see the data.
0
22,080,238
0
1
0
0
1
true
1
2014-02-27T20:42:00.000
6
1
0
How to transpose a matrix without using numpy or zip (or other imports)
22,079,882
1.2
python,matrix,transpose
[[row[i] for row in data] for i in range(len(data[0]))]
How do you transpose a matrix without using numpy or zip or other imports? I thought this was the answer but it does not work if the matrix has only 1 row... [[row[i] for row in data] for i in range(len(data[1]))]
0
1
4,612
0
22,084,546
0
0
0
0
1
false
3
2014-02-28T01:33:00.000
1
3
0
Python: Create a graph with defined number of edges per node
22,084,435
0.066568
python,networkx
It seems to me you should decide how many nodes you will have generate the number of links per node in your desired distribution - make sure the sum is even start randomly connecting pairs of nodes until all link requirements are satisfied There are a few more constraints - no pair of nodes should be connected more t...
How I can create a graph with -predefined number of connections for each node, say 3 -given distribution of connections (say Poisson distribution with given mean) Thanks
0
1
1,817
0
60,347,465
0
0
0
0
1
false
3
2014-02-28T09:37:00.000
-1
2
0
Pandas: DataReader in combination with ISIN identification
22,091,306
-0.099668
python,pandas,stock
Forget about Python. There is absolutely no way to convert an ISIN to a Ticker Symbol. You have completely misunderstood the wikipedia page.
I'm trying to compute some portfolio statistics using Python Pandas, and I am looking for a way to query stock data with DataReader using the ISIN (International Securities Identification Number). However, as far as I can see, DataReader is not compatible with such ids, although both YahooFinance and GoogleFinance can...
0
1
3,894
0
22,109,817
0
0
0
0
1
true
4
2014-02-28T23:58:00.000
2
1
0
set capstyle of spines for pdf backend
22,108,095
1.2
python,matplotlib
I don't think it's possible. I did a little bit of the backend's work in my main script, setting up a RendererPdf (defined in backend_pdf.py) containing a GraphicsContextPdf, which is a GraphicsContextBase that keeps a capstyle, initialized as butt. As confirmed by grep, this is the only place where butt is hardcode...
Figures rendered with the PDF backend have a 'butt' capstyle in my reader. (If I zoom at the corner of a figure in a pdf, I do not see a square corner, but the overlap of shortened lines.) I would like either a 'round' or 'projecting' (what matplotlib calls the 'square' capstyle) cap. Thus the Spine objects are in ques...
0
1
176
0
22,123,913
0
0
0
0
2
true
1
2014-03-02T00:52:00.000
4
2
0
What's the difference between using libSVM in scikit-learn, or e1071 in R, for training and using support vector machines?
22,122,506
1.2
python,r,machine-learning,scikit-learn,svm
I do not have experience with e1071; however, from googling it, it seems that it either uses or is based on LIBSVM (I don't know enough R to determine which from the CRAN entry). scikit-learn also uses LIBSVM. In both cases the model is going to be trained by LIBSVM. Speed, scalability, variety of options available is goi...
Recently I was contemplating the choice of using either R or Python to train support vector machines. Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there is any heuristic guidelines for making a decision on which way to go, based on the packages themselves....
0
1
381
0
22,189,863
0
0
0
0
2
false
1
2014-03-02T00:52:00.000
0
2
0
What's the difference between using libSVM in scikit-learn, or e1071 in R, for training and using support vector machines?
22,122,506
0
python,r,machine-learning,scikit-learn,svm
Some time back I had the same question. Yes, both e1071 and scikit-learn use LIBSVM. I have experience with e1071 only. But there are some areas where R is better. I have read in the past that Python does not handle categorical features properly (at least not right out of the box). This could be a big deal for some. I a...
Recently I was contemplating the choice of using either R or Python to train support vector machines. Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there is any heuristic guidelines for making a decision on which way to go, based on the packages themselves....
0
1
381
0
22,165,531
0
0
0
0
1
true
11
2014-03-03T08:58:00.000
2
4
0
The equivalent function of Matlab imfilter in Python
22,142,369
1.2
python,matlab,scipy
Use the functions scipy.ndimage.filters.correlate and scipy.ndimage.filters.convolve; both take a mode argument that controls how values outside the array bounds are handled.
I know the equivalent functions of MATLAB's conv2 and corr2 are scipy.signal.convolve and scipy.signal.correlate. But the function imfilter has the ability to handle values outside the bounds of the array, with options such as symmetric, replicate and circular. Can Python do those things?
0
1
12,599
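A sketch of the boundary handling, assuming the usual correspondence between MATLAB's imfilter boundary options and scipy.ndimage's mode argument ('replicate' -> 'nearest', 'symmetric' -> 'reflect', 'circular' -> 'wrap'):

```python
import numpy as np
from scipy import ndimage

img = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0  # 3x3 mean filter

# each call pads the image differently before correlating
replicate = ndimage.correlate(img, kernel, mode="nearest")  # imfilter 'replicate'
symmetric = ndimage.correlate(img, kernel, mode="reflect")  # imfilter 'symmetric'
circular = ndimage.correlate(img, kernel, mode="wrap")      # imfilter 'circular'
```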
0
37,731,398
0
0
0
0
5
false
1
2014-03-03T10:02:00.000
0
5
0
Examples on N-D arrays usage
22,143,644
0
python,arrays,numpy
They are very applicable in scientific computing. Right now, for instance, I am running simulations which output data in a 4D array: specifically | Time | x-position | y-position | z-position |. Almost every modern spatial simulation uses multidimensional arrays, as does programming for computer games.
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer and I always thought that nobody ever uses more than a 2D array. Actually I can't even think beyond a 2D array. I don't know how to think about 3D, 4D, 5D arrays or more. I don't know where to use them. Can you please give me exa...
0
1
141
0
22,146,242
0
0
0
0
5
false
1
2014-03-03T10:02:00.000
0
5
0
Examples on N-D arrays usage
22,143,644
0
python,arrays,numpy
There are so many examples... The way you are trying to represent it is probably wrong; let's take a simple example: you have boxes, and a box stores N items in it. You can store up to 100 items in each box. You've organized the boxes on shelves. A shelf allows you to store M boxes. You can identify each box by an index...
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer and I always thought that nobody ever uses more than a 2D array. Actually I can't even think beyond a 2D array. I don't know how to think about 3D, 4D, 5D arrays or more. I don't know where to use them. Can you please give me exa...
0
1
141
0
22,144,505
0
0
0
0
5
false
1
2014-03-03T10:02:00.000
1
5
0
Examples on N-D arrays usage
22,143,644
0.039979
python,arrays,numpy
A few simple examples are: (1) an n x m 2D array of p-vectors represented as an n x m x p 3D matrix, as might result from computing the gradient of an image; (2) a 3D grid of values, such as a volumetric texture. These can even be combined in the case of the gradient of a volume, in which case you get a 4D matrix. Staying with the g...
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer and I always thought that nobody ever uses more than a 2D array. Actually I can't even think beyond a 2D array. I don't know how to think about 3D, 4D, 5D arrays or more. I don't know where to use them. Can you please give me exa...
0
1
141
0
22,144,263
0
0
0
0
5
false
1
2014-03-03T10:02:00.000
0
5
0
Examples on N-D arrays usage
22,143,644
0
python,arrays,numpy
For example, a 3D array could be used to represent a movie, that is a 2D image that changes with time. For a given time, the first two axes would give the coordinate of a pixel in the image, and the corresponding value would give the color of this pixel, or a grey scale level. The third axis would then represent time. ...
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer and I always thought that nobody ever uses more than a 2D array. Actually I can't even think beyond a 2D array. I don't know how to think about 3D, 4D, 5D arrays or more. I don't know where to use them. Can you please give me exa...
0
1
141
0
22,144,157
0
0
0
0
5
false
1
2014-03-03T10:02:00.000
0
5
0
Examples on N-D arrays usage
22,143,644
0
python,arrays,numpy
Practical applications are hard to come up with but I can give you a simple example for 3D. Imagine taking a 3D world (a game or simulation for example) and splitting it into equally sized cubes. Each cube could contain a specific value of some kind (a good example is temperature for climate modelling). The matrix can ...
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer and I always thought that nobody ever uses more than a 2D array. Actually I can't even think beyond a 2D array. I don't know how to think about 3D, 4D, 5D arrays or more. I don't know where to use them. Can you please give me exa...
0
1
141
0
25,162,895
0
0
0
0
1
true
14
2014-03-03T20:04:00.000
10
2
0
Pandas MultiIndex versus Panel
22,156,258
1.2
python,pandas
In my practice, the strongest, easiest-to-see difference is that a Panel needs to be homogeneous in every dimension. If you look at a Panel as a stack of Dataframes, you cannot create it by stacking Dataframes of different sizes or with different indexes/columns. You can indeed handle more non-homogeneous type of data...
Using Pandas, what are the reasons to use a Panel versus a MultiIndex DataFrame? I have personally found significant difference between the two in the ease of accessing different dimensions/levels, but that may just be my being more familiar with the interface for one versus the other. I assume there are more substanti...
0
1
2,444
0
22,512,378
0
0
0
0
1
false
2
2014-03-03T22:00:00.000
2
1
0
ArtistAnimation vs FuncAnimation matplotlib animation matplotlib.animation
22,158,395
0.379949
python,animation,matplotlib
I think you are right, although it is simple to go from a list to a function (just iterate over it) or back (store function values in an array). So it really doesn't matter too much, but you can pick the one that best suits your code, as you described. (Personally I find ArtistAnimation to be the most convenient) If yo...
So in the examples of matplotlib.animation there are two main functions that are used to make animations: AritstAnimation and FuncAnimation. According to the documentation the use of each of them is: .ArtistAnimation: Before calling this function, all plotting should have taken place and the relevant artists saved. Fu...
0
1
2,205
0
22,161,688
0
1
0
0
2
true
1
2014-03-03T22:48:00.000
5
3
0
IPython & matplotlib config profiles and files
22,159,215
1.2
python,matplotlib,ipython
We (IPython) have kind of gone back and forth on the best location for config on Linux. We used to always use ~/.ipython, but then we switched to ~/.config/ipython, which is the XDG-specified location (more correct, for a given value of correct), while still checking both. In IPython 2, we're switching back to ~/.ipyth...
Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files: ~/.ipython/profile_default/ ~/.config/ipython/profile_default which is the right one? Do these packages check both? In case it matters, I am using Anaconda on OS X and on Linux
0
1
555
0
22,161,946
0
1
0
0
2
false
1
2014-03-03T22:48:00.000
2
3
0
IPython & matplotlib config profiles and files
22,159,215
0.132549
python,matplotlib,ipython
As far as matplotlib is concerned, on OS X the config file (matplotlibrc) will be looked for first in the current directory, then in ~/.matplotlib, and finally in INSTALL/matplotlib/mpl-data/matplotlibrc, where INSTALL is the Python site-packages directory. With a standard install of Python from python.org, this is /Li...
Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files: ~/.ipython/profile_default/ ~/.config/ipython/profile_default which is the right one? Do these packages check both? In case it matters, I am using Anaconda on OS X and on Linux
0
1
555
0
22,159,481
0
1
0
0
1
true
0
2014-03-03T22:58:00.000
2
1
0
(Text Classification) Handling same words but from different documents [TFIDF ]
22,159,351
1.2
python,text,machine-learning,classification,tf-idf
First, let's get some terminology clear. A term is a word-like unit in a corpus. A token is a term at a particular location in a particular document. There can be multiple tokens that use the same term. For example, in my answer, there are many tokens that use the term "the". But there is only one term for "the". ...
So I'm making a python class which calculates the tfidf weight of each word in a document. Now in my dataset I have 50 documents. In these documents many words intersect, thus having multiple same word features but with different tfidf weight. So the question is how do I sum up all the weights into one singular weight?
0
1
722
0
22,185,324
0
0
0
0
1
true
2
2014-03-04T23:17:00.000
4
1
0
Determine sparsity of sparse matrix ( Lil matrix )
22,185,277
1.2
python,scipy,sparse-matrix
m.nnz gives the number of stored nonzero elements in the matrix m; divide it by the total number of entries (the product of m.shape) to get the density, and subtract that from 1 for the sparsity.
I have a large sparse matrix, implemented as a lil sparse matrix from sci-py. I just want a statistic for how sparse the matrix is once populated. Is there a method to find out this?
0
1
157
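A short sketch; dividing nnz by the product of the shape avoids any ambiguity about what a sparse matrix's .size attribute means:

```python
import numpy as np
from scipy.sparse import lil_matrix

m = lil_matrix((1000, 1000))
m[0, 0] = 1.0
m[10, 20] = 2.0

n_total = int(np.prod(m.shape))   # total number of entries, stored or not
density = m.nnz / float(n_total)  # fraction of stored nonzeros
sparsity = 1.0 - density
```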
0
22,188,201
0
0
0
0
1
false
0
2014-03-05T03:46:00.000
3
3
0
Estimate probability that two random integers between 0 and k are relatively prime
22,188,097
0.197375
python,primes
Without a complete enumeration of the relative primeness of all numbers between 0 and k (a huge task, and one that grows as the square of k), you can make an estimate by selecting a relatively large number of random pairs (p of them) and determining whether each pair is relatively prime. The assumption is that as the sample si...
By generating and checking p random pairs. Somewhat confused on how to go about doing this. I know I could make an algorithm that determines whether or not two integers are relatively prime. I am also having difficulty understanding what generating and checking p random pairs means.
0
1
519
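"Generating and checking p random pairs" can be read as the following Monte Carlo estimate (the true limit as k grows is 6/pi**2, approximately 0.6079):

```python
import random
from math import gcd

def coprime_probability(k, p, seed=0):
    # draw p pairs uniformly from 1..k and count how many are coprime
    rng = random.Random(seed)
    hits = 0
    for _ in range(p):
        if gcd(rng.randint(1, k), rng.randint(1, k)) == 1:
            hits += 1
    return hits / p

est = coprime_probability(k=10_000, p=50_000)  # should land near 0.608
```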
0
22,214,465
0
1
0
0
1
false
0
2014-03-06T03:20:00.000
0
2
0
Given an array length 1 or more of ints, return the smallest value in the array
22,214,166
0
python-2.7
Is this ok? A plain loop, using no imports and no built-in min: def my_min(nums): smallest = nums[0]; then for n in nums[1:]: if n < smallest: smallest = n; finally return smallest. This scans the list once, keeping the smallest value seen so far.
Given an array length 1 or more of ints, return the smallest value in the array. my_min([10, 3, 5, 6]) -> 3 The program starts with def my_min(nums):
0
1
58
0
22,274,333
0
0
0
0
1
true
0
2014-03-08T19:45:00.000
1
1
0
one colormap for multiple subplots with different maximum values
22,274,186
1.2
python,matplotlib,color-mapping
imshow takes two arguments, vmin and vmax, for the color scale. You can do what you want by passing the same vmin and vmax to both of your subplots. To find vmin, take the minimum over all the values in both datasets (and likewise the maximum for vmax).
I want to do two subplots with imshow using the same colormap by which I mean: if points in both plots have the same color, they correspond to the same value. But how can I get imshow to use only 9/10 or so of the colormap for the first plot, because it's maximal value is only 9/10 of the maximal value in the second pl...
0
1
105
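A sketch of the shared-scale idea with hypothetical data: passing the same vmin/vmax to both imshow calls makes equal colors mean equal values, and the first subplot simply never reaches the top of the colormap.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a = rng.random((10, 10)) * 9.0    # maximum around 9
b = rng.random((10, 10)) * 10.0   # maximum around 10

vmin = min(a.min(), b.min())
vmax = max(a.max(), b.max())

fig, (ax1, ax2) = plt.subplots(1, 2)
im1 = ax1.imshow(a, vmin=vmin, vmax=vmax)
im2 = ax2.imshow(b, vmin=vmin, vmax=vmax)
fig.colorbar(im2, ax=[ax1, ax2])  # one colorbar, valid for both subplots
```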
0
22,281,914
0
1
0
0
1
false
0
2014-03-09T07:37:00.000
0
3
0
Grouping a list in python specifically
22,279,611
0
python,sorting
I would use a dictionary with the first element as key. Also look into ordered dictionaries.
Hi is there anyway to group this list such that it would return a string(first element) and a list within a tuple for each equivalent first element? ie., [('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619'), ('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079...
0
1
53
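A sketch of the dictionary approach with the sample data: setdefault appends each row to a list keyed on its first element, and OrderedDict preserves first-seen key order.

```python
from collections import OrderedDict

rows = [('106', '1', '1', '43009'), ('106', '1', '2', '43179'),
        ('106', '1', '4', '43619'), ('171', '1', '3', '59111'),
        ('171', '1', '4', '57089')]

grouped = OrderedDict()
for row in rows:
    # key on the first element, keep the rest of the tuple
    grouped.setdefault(row[0], []).append(row[1:])

result = list(grouped.items())  # [('106', [...]), ('171', [...])]
```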
0
22,284,670
0
0
0
0
1
false
0
2014-03-09T14:35:00.000
0
1
0
Python averaging with ndarrays, csvfile data
22,283,494
0
python,arrays,numpy
I think you're certainly on the right track (because python together with numpy is a great combination for this task), but in order to do what you want to do, you do need some basic programming skills. I'll assume you at least know a little about working in an interactive python shell and how to import modules etc. :-)...
In a folder, I have a number of .csv files (count varies) each of which has 5 rows and 1200 columns of numerical data(float). Now I want to average the data in these files (i.e. R1C1 of files gives one averaged value in a resulting file, and so on for every position (R2C2 of all files gives one value in the same posit...
0
1
59
0
22,323,918
0
0
0
0
1
true
1
2014-03-10T18:46:00.000
1
1
0
Using Python parser to sniff delimiter Spammed to STDOUT
22,308,688
1.2
python,python-2.7,pandas
This is a 'bug' in that I think this is a debugging message. To work-around, pass engine='python' to disable the message.
When using pandas.read_csv setting sep = None for automatic delimiter detection, the message Using Python parser to sniff delimiter is printed to STDOUT. My code calls this function often so this greatly annoys me, how can I prevent this from happening short of going into the source and deleting the print statement. Th...
0
1
113
0
22,346,834
0
0
0
0
1
false
2
2014-03-12T09:10:00.000
1
1
0
Compare Numpy and Matlab code that uses random permutation
22,346,684
0.197375
python,matlab,random,numpy,permutation
This is a common issue. While the random number generator is identical, the function which converts your random number stream into a random permutation is different. There is no specified standard algorithm which describes the expected result. To solve this issue, you have to use the same library in both tools.
I'm having problems to compare the output of two code because of random number state. I'm comparing the MATLAB randperm function with the output of the equivalent numpy.random.permutation function but, even if I've set the seed to the same value with a MATLAB rand('twister',0) and a python numpy.random.seed(0) I'm obta...
0
1
1,337
0
22,360,726
0
0
0
0
1
true
1
2014-03-12T18:17:00.000
0
1
0
Python Neurolab - fixing output range
22,360,412
1.2
python,neural-network
Simply use a standard sigmoid/logistic activation function on the output neuron. sigmoid(x) > 0 for all real-valued x, so that should do what you want. By default, many neural network libraries will use either linear or symmetric sigmoid outputs (which can go negative). Just note that it takes longer to train networks wi...
I am learning some model based on examples ${((x_{i1},x_{i2},....,x_{ip}),y_i)}_{i=1...N}$ using a neural network of Feed Forward Multilayer Perceptron (newff) (using python library neurolab). I expect the output of the NN to be positive for any further simulation of the NN. How can I make sure that the results of sim...
0
1
948
0
22,375,190
0
1
0
0
1
true
0
2014-03-13T06:28:00.000
1
1
0
Panel truncate error: tuple object has no attribute 'year'
22,370,760
1.2
python,pandas
In dateutil 2.2 there was an internal API change. Pandas 0.12 shows this bug as it relies on that API. Pandas >= 0.13 works around it, or you can downgrade to dateutil 2.1.
I am running code on two separate machines, it works on one machine and not on the other. I have a Pandas panel object x and I am using x.truncate('2002-01-01'). It works on one machine and not the other. The error thrown is DateParseError: 'tuple' object has no attribute 'year'. I have some inkling there is something...
0
1
179
0
22,389,392
0
0
0
0
1
false
0
2014-03-13T15:39:00.000
0
2
0
How can I create a subset of the 'most dissimilar' arrays from a set of possible combinations?
22,383,642
0
python,arrays,math,numpy,combinations
Your algorithm could look like this: Keep the last X (say 10) of the combinations that have been used in a list of some sort. Pick Y (say 10) combinations randomly. Analyze each of the Y combinations against the last X combinations to find the most dissimilar combination. This would involve writing a method that woul...
Say I have an array of shape (32,). Each element can have one of four int values:0 to 3 If I wanted to create an array for each possible combination I would have 432 ( approximately 1.84 x 1019) arrays - this is overly burdensome. Is there a straightforward way to pick fewer arrays, say 1 x 106, by picking the 'most di...
0
1
120
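A minimal sketch of the algorithm described above, using mean Hamming distance as an illustrative dissimilarity measure; the function name `most_dissimilar` and the choices X=10, Y=10 are assumptions for the example, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def most_dissimilar(history, candidates):
    # score each candidate by its mean element-wise disagreement with
    # the recently used combinations, and return the highest-scoring one
    stacked = np.array(history)                   # shape (X, 32)
    dists = [(stacked != c).mean() for c in candidates]
    return candidates[int(np.argmax(dists))]

history = [rng.integers(0, 4, 32) for _ in range(10)]     # last X used
candidates = [rng.integers(0, 4, 32) for _ in range(10)]  # Y random picks
choice = most_dissimilar(history, candidates)
```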
0
22,419,620
0
0
0
0
1
false
3
2014-03-15T02:54:00.000
1
1
0
Scikit-learn, random forests - How many samples does each tree contain?
22,418,958
0.197375
python,scikit-learn,random-forest
I believe that with the default bootstrap=True, RandomForestClassifier builds each tree on a bootstrap sample drawn with replacement from the training set, of the same size as the training set. Typically building each tree involves selecting the features which have the most predictive power (the ones which create the largest 'split'), and having more data makes computing that more accurate.
In scikit-learn's RandomForestClassifier, there is no setting to specify how many samples each tree should be built from. That is, how big the subsets should be that are randomly pulled from the data to build each tree. I'm having trouble finding how many samples scikit-learn pulls by default. Does anyone know?
0
1
280
0
22,436,696
0
0
0
0
1
false
0
2014-03-16T11:55:00.000
1
1
0
Statistical analysis of .h5 files (SPSS?)
22,436,515
0.197375
python,r,hdf5,statistics,h5py
Is there a way to convert the data without losing any information? If the HDF5 data is regular enough, you can just load it in Python or R and save it out again as CSV (or even SPSS .sav format if you're a bit more adventurous and/or care about performance). Why doesn't SPSS support h5 anyway? Who knows. It probabl...
I have two sets of data in separated .h5 files (Hierarchical Data Format 5, HDF5), obtained with python scripts, and I would like to perform statistical analysis to find correlations between them. My experience here is limited; I don't know any R. I would like to load the data into SPSS, but SPSS doesn't seem to suppor...
0
1
398
0
22,519,952
0
0
0
1
1
false
1
2014-03-16T12:51:00.000
2
1
0
Import nested Json into cassandra
22,437,058
0.379949
java,python,json,cassandra,cassandra-cli
If you don't need to be able to query individual items from the json structure, just store the whole serialized string into one column. If you do need to be able to query individual items, I suggest using one of the collection types: list, set, or map. As far as typing goes, I would leave the value as text or blob and...
I have list of nested Json objects which I want to save into cassandra (1.2.15). However the constraint I have is that I do not know the column family's column data types before hand i.e each Json object has got a different structure with fields of different datatypes. So I am planning to use dynamic composite type for...
0
1
1,459
0
22,440,992
0
0
0
0
1
false
2
2014-03-16T18:23:00.000
1
2
0
How to pick a random element in an np array only if it isn't a certain value
22,440,923
0.099668
python,arrays,numpy
If you're willing to accept probabilistic times and you have fewer than 50% ignored values, you can just retry until you have an acceptable value. If you can't, you're going to have to go over the entire array at least once to know which values to ignore, but that takes n memory.
I'm using python. I have what might be an easy question, though I can't see it. If I have an array x = array([1.0,0.0,1.5,0.0,6.0]) and y = array([1,2,3,4,5]) I'm looking for an efficient way to pick randomly between 1.0,1.5,6.0 ignoring all zeros while retaining the index for comparison with another array such as y. S...
0
1
1,746
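One way to sketch this with numpy: flatnonzero gives the indices of the nonzero entries, and choosing among those indices (rather than the values) keeps the alignment with y.

```python
import numpy as np

x = np.array([1.0, 0.0, 1.5, 0.0, 6.0])
y = np.array([1, 2, 3, 4, 5])

rng = np.random.default_rng(0)
nonzero_idx = np.flatnonzero(x)  # indices where x != 0 -> [0, 2, 4]
i = rng.choice(nonzero_idx)      # one random index among them
pair = (x[i], y[i])              # value plus its aligned partner in y
```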
0
22,673,856
0
0
0
0
1
true
0
2014-03-17T17:03:00.000
0
1
0
Python with openCV on MAC crashes
22,460,645
1.2
eclipse,macos,opencv,python-2.7,pydev
Ok, it's working now. Here is what I did: Install Python and every package I need for it with Macports Set the Macports version as standard Adjust PATH and PYTHONPATH Reboot (not sure if needed) Remove old interpreter and libs in Eclipse Choose the new Python installation as Interpreter in Eclipse Confirm the new libs...
My final goal is to use Python scripts with SciPy, NumPy, Theano and openCV libraries to write code for a machine learning application. Everything worked so far apart from the openCV. I am trying to install openCV 2.4.8 to use in Python projects in my Eclipse Kepler installation on my MBA running Mac OSX 10.9.2. I have...
0
1
611
0
22,463,625
0
0
0
0
1
true
2
2014-03-17T17:15:00.000
4
2
0
scikit learn creation of dummy variables
22,460,948
1.2
python,machine-learning,scikit-learn
For which algorithms in scikit-learn is this transformation into dummy variables necessary? And for those algorithms that aren't, it can't hurt, right? All algorithms in sklearn with the notable exception of tree-based methods require one-hot encoding (also known as dummy variables) for nominal categorical variables. ...
In scikit-learn, which models do I need to break categorical variables into dummy binary fields? For example, if the column is political-party, and the values are democrat, republican and green, for many algorithms, you have to break this into three columns where each row can only hold one 1, and all the rest must be 0...
0
1
3,572
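A sketch of the political-party example from the question; pandas.get_dummies is one convenient way to build the one-hot columns before handing the data to a non-tree estimator:

```python
import pandas as pd

df = pd.DataFrame({"party": ["democrat", "republican", "green", "democrat"]})
# three 0/1 columns; exactly one of them is set per row
dummies = pd.get_dummies(df["party"], prefix="party")
```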
0
22,663,453
0
0
0
0
1
true
0
2014-03-18T18:51:00.000
0
2
0
Time signal shifted in amplitude, FIR filter with scipy.signal
22,488,460
1.2
python,scipy,filtering
So finally I adapted one filter to get the zero frequency and another bandpass filter to get the 600 Hz frequency. pass_zero has to be True just for the zero-frequency filter; then it works. I'm not yet happy with the phase delay but I'm working on it. 1) bandpass 600 Hz: taps_bp = bandpass_fir(ntaps, lowcut, highcut, fs) Function ...
I am implementing a bandpass filter in Python using scipy.signal (using the firwin function). My original signal consists of two frequencies (w_1=600Hz, w_2=800Hz). There might be a lot more frequencies that's why I need a bandpass filter. In this case I want to filter the frequency band around 600 Hz, so I took 600 +...
0
1
794
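A hedged sketch of a firwin bandpass around 600 Hz; the sampling rate, band edges and tap count below are made-up illustrations, not the original poster's values:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 8000.0   # assumed sampling rate
ntaps = 401
# pass_zero=False turns the [550, 650] edges into a bandpass design
taps = firwin(ntaps, [550.0, 650.0], pass_zero=False, fs=fs)

t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 600 * t) + np.sin(2 * np.pi * 800 * t)
filtered = lfilter(taps, 1.0, sig)  # 600 Hz kept, 800 Hz suppressed
```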
0
22,522,819
0
0
0
0
1
false
2
2014-03-19T12:45:00.000
1
2
0
Weibull Censored Data
22,506,268
0.099668
python,scipy,weibull
If I understand correctly, then this requires estimation with censored data. None of the scipy.stats.distribution will directly estimate this case. You need to combine the likelihood function of the non-censored and the likelihood function of the censored observations. You can use the pdf and the cdf, or better sf, of ...
I'm currently working with some lifetime data that corresponds to the Installation date and Failure date of units. The data is field data, so I do have a major number of suspensions (units that haven't presented a failure yet). I would like to make some Weibull analysis with this data using Scipy stats library (fitting...
0
1
1,430
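A sketch of the combined likelihood described above, on simulated data (the true parameters, censoring time and optimizer choice are illustrative assumptions): failures contribute log-pdf terms and suspensions contribute log-survival (sf) terms.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
shape_true, scale_true = 1.5, 1000.0
t = scale_true * rng.weibull(shape_true, size=500)

censor_time = 800.0                  # units still in service at 800 days
failed = t <= censor_time
t_obs = np.where(failed, t, censor_time)

def neg_log_likelihood(params):
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    # failures: log pdf; suspensions: log survival function
    ll = stats.weibull_min.logpdf(t_obs[failed], c, scale=scale).sum()
    ll += stats.weibull_min.logsf(t_obs[~failed], c, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_log_likelihood, x0=[1.0, np.median(t_obs)],
                        method="Nelder-Mead")
c_hat, scale_hat = res.x
```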
0
22,562,740
0
1
0
0
1
false
3
2014-03-20T14:48:00.000
3
2
0
How to deal with indeterminate form in Python
22,536,589
0.291313
python,numpy,complex-numbers
The short answer is that the C99 standard (Annex G) on complex number arithmetic recognizes only a single complex infinity (think: Riemann sphere). (inf, nan) is one representation for it, and (-inf, 6j) is another, equivalent representation.
At some point in my python script, I require to make the calculation: 1*(-inf + 6.28318530718j). I understand why this will return -inf + nan*j since the imaginary component of 1 is obviously 0, but I would like the multiplication to have the return value of -inf + 6.28318530718j as would be expected. I also want whate...
0
1
584
0
22,584,181
0
0
0
0
1
false
11
2014-03-21T14:31:00.000
6
2
0
Pandas dataset into an array for modelling in Scikit-Learn
22,562,540
1
python,pandas,scikit-learn
Pandas DataFrames are very good at acting like NumPy arrays when they need to. If in doubt, you can always use the values attribute to get a NumPy representation (df.values will give you a NumPy array of the values in DataFrame df).
Can we run scikit-learn models on Pandas DataFrames or do we need to convert DataFrames into NumPy arrays?
0
1
9,121
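A quick sketch of the values escape hatch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]})
X = df.values  # plain 2-D NumPy array of the DataFrame's data
```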
0
22,591,329
0
0
0
0
1
true
1
2014-03-23T12:18:00.000
1
1
0
Can anyone in detail explain how cv and cv2 are different and what makes cv2 better and faster than cv?
22,590,811
1.2
python,c++,opencv,numpy
There is no question at all - use cv2. The old cv api, which wraps IplImage and CvMat, is being phased out and will no longer be available in the next release of OpenCV. The newer cv2 api uses numpy arrays for almost everything, so you can easily combine it with scipy, matplotlib, etc.
I've recently started using openCV in python. I've come across various posts comparing cv and cv2 and with an overview saying how cv2 is based on numpy and makes use of an array (cvMat) as opposed to cv makes use of old openCV bindings that was using Iplimage * (correct me if i'm wrong). However I would really like kn...
0
1
230
0
22,595,047
0
0
0
0
1
false
0
2014-03-23T16:03:00.000
1
1
0
Python widget for real time plotting
22,593,328
0.197375
python,user-interface,matplotlib,tkinter,wxpython
Tkinter, which is part of python, comes with a canvas widget that can be used for some simple plotting. It can draw lines and curves, and one datapoint every couple of seconds is very easy for it to handle.
Is there a minimalistic python module out there that I can use to plot real time data that comes in every 2-3 seconds? I've tried matplotlib but I'm having a couple errors trying to get it to run so I'm not looking for something as robust and with many features.
0
1
133
0
48,487,656
0
0
0
0
1
false
2
2014-03-23T21:22:00.000
0
2
0
Making scikit-learn train on all training data after cross-validation
22,597,239
0
python,scikit-learn
My recommendation is to not reuse the model from the cross-validation split that had the best performance. That could potentially give you problems with high bias. After all, that performance may have been good only because the testing fold happened to resemble the data used for training. When you generalize it to ...
I'm using scikit-learn to train classifiers. I want also to do cross validation, but after cross-validation I want to train on the entire dataset. I found that cross_validation.cross_val_score() just returns the scores. Edit: I would like to train the classifier that had the best cross-validation score with all of my d...
0
1
1,088
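A sketch of "estimate with cross-validation, then refit on everything", using the modern model_selection API; the dataset and estimator below are illustrative choices, not from the original question:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

scores = cross_val_score(clf, X, y, cv=5)  # generalization estimate only
clf.fit(X, y)                              # final model trained on all the data
```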
0
22,609,701
0
0
0
0
2
false
2
2014-03-24T12:03:00.000
1
2
0
what library is able to extract SIFT features in Python?
22,608,905
0.099668
python-2.7,computer-vision,python-module
OpenCV is free to use. But SIFT itself is a patented algorithm, so even if you wrote your own implementation of SIFT, not based on Lowe's code, you still could not use it in a commercial application. So, unless you have a license for SIFT, no library containing it is free. But you can consult with patent guys - some cou...
In python which library is able to extract SIFT visual descriptors? I know opencv has an implementation but it is not free to use and skimage does not include SIFT particularly.
0
1
2,214
0
23,098,414
0
0
0
0
2
true
2
2014-03-24T12:03:00.000
2
2
0
what library is able to extract SIFT features in Python?
22,608,905
1.2
python-2.7,computer-vision,python-module
I would like to suggest VLFeat, another open source vision library. It also has a python wrapper. The implementation of SIFT in VLFeat is modified from the original algorithm, but I think the performance is good.
In python which library is able to extract SIFT visual descriptors? I know opencv has an implementation but it is not free to use and skimage does not include SIFT particularly.
0
1
2,214
0
22,619,589
0
0
0
0
2
false
0
2014-03-24T20:04:00.000
2
2
0
How can i check in numpy if a binary image is almost all black?
22,619,506
0.197375
python,opencv,image-processing,numpy,scikit-image
Here is a list of ideas I can think of: (1) take np.sum() and, if it is lower than a threshold, consider the image almost black; (2) calculate np.mean() and np.std() of the image - an almost black image has both a low mean and a low variance.
How can i see in if a binary image is almost all black or all white in numpy or scikit-image modules ? I thought about numpy.all function or numpy.any but i do not know how neither for a total black image nor for a almost black image.
0
1
1,554
0
22,619,838
0
0
0
0
2
true
0
2014-03-24T20:04:00.000
2
2
0
How can i check in numpy if a binary image is almost all black?
22,619,506
1.2
python,opencv,image-processing,numpy,scikit-image
Assuming that all the pixels really are ones or zeros, something like this might work (not at all tested): def is_sorta_black(arr, threshold=0.8): tot = np.float(np.sum(arr)); if tot / arr.size > (1 - threshold): print "is not black"; return False; else: print "is kinda black"; return...
How can i see in if a binary image is almost all black or all white in numpy or scikit-image modules ? I thought about numpy.all function or numpy.any but i do not know how neither for a total black image nor for a almost black image.
0
1
1,554
0
22,698,775
0
0
0
0
1
false
25
2014-03-27T20:37:00.000
4
2
0
How to sort 2D array (numpy.ndarray) based to the second column in python?
22,698,687
0.379949
python,arrays,sorting,numpy
sorted(Data, key=lambda row: row[1]) should do it.
I'm trying to convert all my codes to Python. I want to sort an array which has two columns so that the sorting must be based on the 2th column in the ascending order. Then I need to sum the first column data (from first line to, for example, 100th line). I used "Data.sort(axis=1)", but it doesn't work. Does anyone hav...
0
1
75,440
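Since the data is a numpy ndarray, argsort on the second column gives the row order directly, after which summing the first column is a slice away (a sketch):

```python
import numpy as np

data = np.array([[3, 2],
                 [1, 9],
                 [7, 1]])

order = data[:, 1].argsort()   # indices that sort column 1 ascending
data_sorted = data[order]
first_col_sum = data_sorted[:100, 0].sum()  # sum column 0 over the first rows
```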
0
22,731,897
0
0
0
0
1
false
2
2014-03-28T01:16:00.000
1
2
0
Clustering a list of dates
22,702,428
0.099668
python-2.7,numpy,scipy,cluster-analysis
k-means is exclusively for coordinates. And more precisely: for continuous and linear values. The reason is the mean function. Many people overlook the role of the mean for k-means (despite it being in the name...). On non-numerical data, how do you compute the mean? There exist some variants for binary or categorical d...
I have a list of dates I'd like to cluster into 3 clusters. Now, I can see hints that I should be looking at k-means, but all the examples I've found so far are related to coordinates, in other words, pairs of list items. I want to take this list of dates and append them to three separate lists indicating whether they...
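Since the answer stresses that k-means needs continuous numeric values, one workable sketch (an assumption on my part, not the answer's method) is to map each date to an integer and split the sorted list at its two largest gaps, yielding three clusters:

```python
from datetime import date

dates = [date(2014, 1, 1), date(2014, 1, 3), date(2014, 2, 10),
         date(2014, 2, 12), date(2014, 5, 1), date(2014, 5, 2)]

# Convert dates to integers (days since year 1) so gaps are comparable.
ordinals = sorted(d.toordinal() for d in dates)

# Indices of the two largest gaps between consecutive dates.
gaps = sorted(range(len(ordinals) - 1),
              key=lambda i: ordinals[i + 1] - ordinals[i],
              reverse=True)[:2]
cuts = sorted(i + 1 for i in gaps)

# Three separate lists, split at the largest gaps.
clusters = [ordinals[:cuts[0]], ordinals[cuts[0]:cuts[1]], ordinals[cuts[1]:]]
print([[date.fromordinal(o) for o in c] for c in clusters])
```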
0
1
7,440
0
24,051,792
1
0
0
0
1
false
1
2014-03-28T17:48:00.000
0
1
0
Fix the seed for the community module in Python that uses networkx module
22,719,863
0
python,networkx
I had to change the seed inside every class I used.
I am using the community module to extract communities from a networkx graph. For the community module, the order in which the nodes are processed makes a difference. I tried to set the seed of random to get consistent results but that is not working. Any idea on how to do this? thanks
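The answer is terse; as a hedged illustration of the general idea (not the community module's actual API), re-seeding every random source immediately before the randomized step makes repeated runs process items in the same order:

```python
import random

def run_once(seed):
    # Re-seed right before the randomized step so repeated runs
    # shuffle the nodes identically.
    random.seed(seed)
    nodes = list(range(10))
    random.shuffle(nodes)
    return nodes

# Same seed -> identical node order on every run
print(run_once(42) == run_once(42))  # True
```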
0
1
165
0
22,724,963
0
0
0
0
1
true
0
2014-03-28T18:17:00.000
1
2
0
Can I select rows based on group size with pandas? Or do I have to use SQL?
22,720,349
1.2
python,pandas
I've made it work with records.groupby('product_name').filter(lambda x: len(x['url']) == 1). Note that simply using len(x) doesn't work. With a dataframe of more than two columns (which is probably most real-life dataframes), one has to specify a column for x: any column except the one being grouped by. Also,...
With pandas I can do grouping using df.groupby('product_name').size(). But if I'm only interested in rows whose "product_name" is unique, i.e. those records with groupby size equal to one, how can I filter the df to see only such rows? In other words, can I perform filtering on a database using pandas, based on the number...
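A small sketch of the filtering call from the answer, on made-up data:

```python
import pandas as pd

records = pd.DataFrame({
    'product_name': ['a', 'a', 'b', 'c'],
    'url': ['u1', 'u2', 'u3', 'u4'],
})

# Keep only rows whose product_name occurs exactly once in the frame
unique_rows = records.groupby('product_name').filter(lambda x: len(x['url']) == 1)
print(unique_rows)
```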
0
1
2,961
0
22,730,167
0
0
0
0
1
false
5
2014-03-29T09:21:00.000
4
3
0
How to calculate exp(x) for really big integers in Python?
22,729,223
0.26052
python,math,numpy,artificial-intelligence
@Paul already gave you the answer to the computational question. However, from a neural network point of view, your problem is an indication that you are doing something wrong. There is no reasonable use of neural networks where you have to compute such a number. You seem to have forgotten at least one of: input data scaling/norm...
I'm using a sigmoid function for my artificial neural network. The value that I'm passing to the function ranges from 10,000 to 300,000. I need a high-precision answer because that would serve as the weights of the connection between the nodes in my artificial neural network. I've tried looking in numpy but no luck. Is...
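Echoing the answer's point: for inputs this large a floating-point sigmoid saturates anyway, so an overflow-safe formulation (a common numerical trick, not taken from the answer) avoids ever computing exp of a huge positive number:

```python
import math

def stable_sigmoid(x):
    # For large positive x, exp(-x) underflows harmlessly to 0.0;
    # for negative x, rewrite so the exponent stays non-positive.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(stable_sigmoid(300000))  # 1.0 (saturated)
print(stable_sigmoid(0))       # 0.5
```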
0
1
3,244
0
22,737,241
0
0
0
0
2
false
15
2014-03-29T21:15:00.000
6
4
0
one-dimensional array shapes (length,) vs. (length,1) vs. (length)
22,737,000
1
python,arrays,math,numpy
In Python, (length,) is a tuple with one item. (length) is just parentheses around a number. In numpy, an array can have any number of dimensions: 0, 1, 2, etc. You are asking about the difference between 1- and 2-dimensional objects. (length,1) is a 2-item tuple, giving you the dimensions of a 2-d array. If you ar...
When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. row vector... but it doesn't seem like that changes anything about the array itself [except that some functions complain when I pass an array with shape (length,1)]. What ...
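A quick sketch of the shape distinctions the answer describes, plus the reshape/ravel calls for converting between them:

```python
import numpy as np

a = np.zeros(4)          # 1-D array
b = np.zeros((4, 1))     # 2-D "column vector"
print(a.shape, b.shape)  # (4,) (4, 1)

# (4) without a comma is just a parenthesized int, not a tuple:
print(type((4,)), type((4)))

# Converting between the two views:
print(a.reshape(-1, 1).shape)  # (4, 1)
print(b.ravel().shape)         # (4,)
```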
0
1
18,706
0
61,132,626
0
0
0
0
2
false
15
2014-03-29T21:15:00.000
0
4
0
one-dimensional array shapes (length,) vs. (length,1) vs. (length)
22,737,000
0
python,arrays,math,numpy
A vector in Python is actually a two-dimensional array. It's just a coincidence that the number of rows is 1 (for row vectors), or the number of columns is 1 (for column vectors). By contrast, a one-dimensional array is not a vector (neither a row vector nor a column vector). To understand this, think of a concept in geom...
When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. row vector... but it doesn't seem like that changes anything about the array itself [except that some functions complain when I pass an array with shape (length,1)]. What ...
0
1
18,706
0
22,766,449
0
1
0
0
1
false
1
2014-03-31T14:17:00.000
0
1
0
Installation of Compatible version of Numpy and Scipy on Abaqus 6.13-2 with python 2.6.2
22,764,021
0
python,numpy,scipy
What you should do is: install python 2.6.2 separately onto your system (it looks like you are using windows, right?), and then install scipy corresponding to python 2.6.2, and then copy the site-packages to the abaqus folder. Note that 1) you can't use matplotlib due to the tkinter problem; 2) the numpy is already com...
Can anyone give input/clues/direction on installing compatible versions of numpy and scipy in Abaqus Python 2.6.2? I tried installing numpy-1.6.2, numpy-1.7.1 and numpy-1.8.1. But all give an error of being unable to find vcvarsall.bat, because there is no module named msvccompiler. Based on some of the answe...
0
1
2,294
0
22,776,862
0
0
0
0
1
true
0
2014-04-01T02:24:00.000
1
1
0
How to know the name of the person in the image?
22,775,681
1.2
python-2.7,opencv
You can use the filename of the image for that purpose. All you need to do is keep the filenames stored somewhere in your application, alongside the Mat objects.
I implemented a face recognition algorithm on a Raspberry Pi (Python 2.7 was used). I have many sets of faces; if the captured face is one in the database, then the face is detected (I am using the eigenfaces algorithm). My question is: can I know whose face (which person's name) is detected? (Can we have some sort of tags for the image and display the name...)
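The answer suggests keeping the filenames alongside the training data; a minimal sketch (the naming convention and the predict call are hypothetical, not from the answer) maps the recognizer's predicted integer label back to a person's name:

```python
# Suppose each training image is named "<person>_<n>.png" and the
# recognizer was trained with one integer label per person.
training_files = ['alice_1.png', 'alice_2.png', 'bob_1.png']

# Build label -> name from the distinct name prefixes in the filenames.
names = sorted({f.split('_')[0] for f in training_files})
label_to_name = dict(enumerate(names))

# After a (hypothetical) recognizer.predict(face) returns a label:
predicted_label = 1
print(label_to_name[predicted_label])  # 'bob'
```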
0
1
599
0
22,799,245
0
0
0
0
1
false
20
2014-04-02T00:05:00.000
3
3
0
Keep finite entries only in Pandas
22,799,208
0.197375
python,pandas
You can use .dropna() after DF[DF == np.inf] = np.nan (unless you still want to keep the NaNs and only drop the infs).
In Pandas, I can use df.dropna() to drop any NaN entries. Is there anything similar in Pandas to drop non-finite (e.g. Inf) entries?
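A sketch of the answer's approach, using replace so that both +inf and -inf become NaN before dropna:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.inf, 3.0, -np.inf],
                   'b': [4.0, 5.0, np.nan, 7.0]})

# Turn infinities into NaN, then drop any row containing NaN
finite = df.replace([np.inf, -np.inf], np.nan).dropna()
print(finite)  # only the first row survives
```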
0
1
18,662
0
22,817,669
0
1
0
0
1
false
4
2014-04-02T16:27:00.000
3
2
0
Pip doesn’t know where numpy is installed
22,817,533
0.291313
python,python-2.7,numpy,pip,python-packaging
Maybe run deactivate if you are running virtualenv?
Trying to uninstall numpy. I tried pip uninstall numpy. It tells me that it isn't installed. However, numpy is still installed at /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy. How can I make sure pip finds the numpy package?
0
1
7,578
0
22,848,834
0
1
0
0
1
true
2
2014-04-03T13:34:00.000
3
2
0
What are the operational limits of Rpy2?
22,839,403
1.2
python,rpy2
Bring the presumed limitations on. Rpy2 is, at its lower level (the rpy2.rinterface level), exposing a very large part of the R C-API. Technically, one can do more with rpy2 than one can from R itself (writing a C extension for R would possibly be the only way to catch up). As an amusing fact, doing "R stuff" from rpy2 ca...
I know basic Python programming and thus want to stay on the Python data science path. The problem is, there are many R packages that appeal to me as a social science person. Can Rpy2 allow full use of any arbitrary R package, or is there a catch? How well does it work in practice? If Rpy2 is too limited, I'd unfortunat...
0
1
488
0
23,772,908
0
1
0
0
1
false
7
2014-04-05T07:47:00.000
18
4
0
Error installing scipy library through pip on python 3: "compile failed with error code 1"
22,878,109
1
python,python-3.x,scipy,pip
I was getting the same thing when using pip. I went to the install instructions, and they pointed to the following dependencies: sudo apt-get install python python-dev libatlas-base-dev gcc gfortran g++
I'm trying to install scipy library through pip on python 3.3.5. By the end of the script, i'm getting this error: Command /usr/local/opt/python3/bin/python3.3 -c "import setuptools, tokenize;file='/private/tmp/pip_build_root/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n...
0
1
11,676
0
22,897,471
0
0
0
0
1
false
0
2014-04-06T17:12:00.000
0
2
0
Perform n linear regressions, simultaneously
22,897,243
0
python,pandas,linear-regression
As far as I know, there is no way to put this all at once into the optimized Fortran library, LAPACK, since each regression is its own independent optimization problem. Note that the loop over the five columns is not taking any time relative to the regression itself, which you need to fully compute because each regression is an i...
I have y - a 100 row by 5 column Pandas DataFrame I have x - a 100 row by 5 column Pandas DataFrame For i=0,...,4 I want to regress y[:,i] against x[:,i]. I know how to do it using a loop. But is there a way to vectorise the linear regression, so that I don't have the loop in there?
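Although the answer says each regression is independent, simple univariate fits can still be vectorized with the closed-form OLS formulas, evaluated column-wise in one shot (a sketch on synthetic data, not the answer's code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 5))
y = 2.0 * x + rng.normal(scale=0.1, size=(100, 5))

# Closed-form simple linear regression, one fit per column, no loop:
xm, ym = x.mean(axis=0), y.mean(axis=0)
slope = ((x - xm) * (y - ym)).sum(axis=0) / ((x - xm) ** 2).sum(axis=0)
intercept = ym - slope * xm

print(slope)      # each entry close to 2.0
print(intercept)  # each entry close to 0.0
```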
0
1
472
0
62,341,726
0
0
0
0
2
false
275
2014-04-06T19:24:00.000
4
16
0
Filtering Pandas DataFrames on dates
22,898,824
0.049958
python,datetime,pandas,filtering,dataframe
You could just select the time range by doing: df.loc['start_date':'end_date']
I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months. What is the best way to achieve this?
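The answer's slice syntax requires the dates to be a sorted DatetimeIndex; for a plain 'date' column, a boolean mask works too. A sketch of both, with illustrative dates:

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2014-01-15', '2014-03-01', '2014-06-01']),
                   'value': [1, 2, 3]})

# Boolean-mask filtering on a date column:
start, end = pd.Timestamp('2014-01-01'), pd.Timestamp('2014-04-01')
in_range = df[(df['date'] >= start) & (df['date'] < end)]
print(in_range)

# Label-based slicing needs the dates as a sorted index:
by_date = df.set_index('date').sort_index()
print(by_date.loc['2014-01-01':'2014-04-01'])
```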
0
1
624,303