Dataset schema (22 columns; one row per Stack Overflow answer):

  Column                              Type      Min / Max
  GUI and Desktop Applications        int64     0 / 1
  A_Id                                int64     5.3k / 72.5M
  Networking and APIs                 int64     0 / 1
  Python Basics and Environment       int64     0 / 1
  Other                               int64     0 / 1
  Database and SQL                    int64     0 / 1
  Available Count                     int64     1 / 13
  is_accepted                         bool      2 classes
  Q_Score                             int64     0 / 1.72k
  CreationDate                        string    length 23 / 23
  Users Score                         int64     -11 / 327
  AnswerCount                         int64     1 / 31
  System Administration and DevOps    int64     0 / 1
  Title                               string    length 15 / 149
  Q_Id                                int64     5.14k / 60M
  Score                               float64   -1 / 1.2
  Tags                                string    length 6 / 90
  Answer                              string    length 18 / 5.54k
  Question                            string    length 49 / 9.42k
  Web Development                     int64     0 / 1
  Data Science and Machine Learning   int64     1 / 1
  ViewCount                           int64     7 / 3.27M

Title: Converting Pandas Dataframe types
Q_Id: 17,457,418 | A_Id: 17,457,967 | CreationDate: 2013-07-03T20:21:00.000 | Tags: python,numpy,pandas
is_accepted: false | Score: 0.066568 | Q_Score: 4 | Users Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 4,908 | Topics: Data Science and Machine Learning
Question: I have a pandas DataFrame created through a MySQL call which returns the data as object type. The data is mostly numeric, with some 'na' values. How can I cast the type of the DataFrame so the numeric values are appropriately typed (floats) and the 'na' values are represented as numpy NaN values?
Answer: df = df.convert_objects(convert_numeric=True) will work in most cases. I should note that this copies the data. It would be preferable to get it to a numeric type on the initial read. If you post your code and a small example, someone might be able to help you with that.

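The answer above predates the removal of convert_objects() (deprecated and later dropped from pandas). A minimal sketch of the modern equivalent, assuming a column of strings with 'na' markers like the asker describes:

```python
import numpy as np
import pandas as pd

# Hypothetical object-dtype column, as a MySQL driver might return it.
df = pd.DataFrame({"x": ["1.5", "na", "2.0"]})

# errors="coerce" turns unparseable entries like 'na' into np.nan,
# and the column becomes float64. Like convert_objects, this copies.
df["x"] = pd.to_numeric(df["x"], errors="coerce")
```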
Title: Large training and testing data in libsvm
Q_Id: 17,457,460 | A_Id: 18,509,671 | CreationDate: 2013-07-03T20:24:00.000 | Tags: python,c++,svm,libsvm
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 3,432 | Topics: Other; Data Science and Machine Learning
Question: I'm using libsvm in a 5x2 cross-validation to classify a very large amount of data; that is, I have 47k samples for training and 47k samples for testing in 10 different configurations. I usually use libsvm's script easy.py to classify the data, but it's taking so long that I've been waiting for results for more than 3 h...
Answer: easy.py is a script for training and evaluating a classifier. It does a meta-training of the SVM parameters with grid.py. grid.py has a parameter "nr_local_worker" which defines the number of threads. You might wish to increase it (check processor load).

Title: An issue with generating a random graph with given degree sequence: time consuming or some error?
Q_Id: 17,464,274 | A_Id: 17,471,244 | CreationDate: 2013-07-04T07:28:00.000 | Tags: python,networkx
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 532 | Topics: Data Science and Machine Learning
Question: I have an empirical network with 585 nodes and 5,441 edges. This is a scale-free network with a max node degree of 179 and a min node degree of 1. I am trying to create an equivalent random graph (using random_degree_sequence_graph from networkx), but my Python just keeps running. I did a similar exercise for the network wit...
Answer: That algorithm's run time can get very long for some degree sequences, and it is not guaranteed to produce a graph. Depending on your end use, you might consider using configuration_model(). Though it doesn't sample graphs uniformly at random and might produce parallel edges and self-loops, it will always finish.

Title: Installing python (same version) on accident twice
Q_Id: 17,486,322 | A_Id: 17,488,424 | CreationDate: 2013-07-05T10:10:00.000 | Tags: python,scipy,reinstall
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 609 | Topics: System Administration and DevOps; Data Science and Machine Learning
Question: I accidentally installed Python 2.7 again on my Mac (Mountain Lion) when trying to install scipy using MacPorts: sudo port install py27-scipy ---> Computing dependencies for py27-scipy ---> Dependencies to be installed: SuiteSparse gcc47 cctools cctools-headers llvm-3.3 libffi llvm_select cloog gmp isl gcc_select ...
Answer: How about sudo port uninstall python27?

Title: How to pass Unicode title to matplotlib?
Q_Id: 17,525,882 | A_Id: 42,853,468 | CreationDate: 2013-07-08T11:50:00.000 | Tags: python,matplotlib,unicode,python-2.x
is_accepted: false | Score: 0 | Q_Score: 5 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 6,904 | Topics: Data Science and Machine Learning
Question: Can't get the titles right in matplotlib: 'technologieën in °C' gives: technologieÃn in ÃC. Possible solutions already tried: u'technologieën in °C' doesn't work, and neither does # -*- coding: utf-8 -*- at the beginning of the code file. Any solutions?
Answer: In Python 3 there is no need to worry about those troublesome UTF-8 problems. One note: you will need to set a Unicode font before plotting, e.g. matplotlib.rc('font', family='Arial').

Title: How can I reduce memory usage of Scikit-Learn Vectorizers?
Q_Id: 17,536,394 | A_Id: 28,424,354 | CreationDate: 2013-07-08T21:36:00.000 | Tags: python,numpy,machine-learning,scipy,scikit-learn
is_accepted: false | Score: 0.099668 | Q_Score: 3 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 4,523 | Topics: Data Science and Machine Learning
Question: TfidfVectorizer takes so much memory: vectorizing 470 MB of 100k documents takes over 6 GB, and if we go to 21 million documents it will not fit in the 60 GB of RAM we have. So we go for HashingVectorizer, but still need to know how to distribute the hashing vectorizer. Fit and partial fit do nothing, so how to work with a huge corpu...
Answer: One way to overcome the inability of HashingVectorizer to account for IDF is to index your data into Elasticsearch or Lucene and retrieve term vectors from there, from which you can calculate TF-IDF.

Title: Scala equivalent of Python help()
Q_Id: 17,536,758 | A_Id: 17,538,468 | CreationDate: 2013-07-08T22:03:00.000 | Tags: python,scala,equivalent
is_accepted: false | Score: 0.039979 | Q_Score: 11 | Users Score: 1 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,972 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like help() in Python. For example, if I wanted to see the built-in methods for a Scala Array I might want to type something like help(A...
Answer: Similarly, IDEA has its "Quick Documentation Look-up" command, which works for Scala as well as Java (-Doc) JARs and source-code documentation comments.

Title: Scala equivalent of Python help()
Q_Id: 17,536,758 | A_Id: 55,942,086 | CreationDate: 2013-07-08T22:03:00.000 | Tags: python,scala,equivalent
is_accepted: false | Score: 0 | Q_Score: 11 | Users Score: 0 | AnswerCount: 5 | Available Count: 2 | ViewCount: 2,972 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as the previous record (Q_Id 17,536,758).
Answer: In Scala you can try the following, similar to what we have in Python (where help(RDD1) gives you the full description of rdd1): Scala> RDD1.[tab] — on hitting Tab you will see the list of methods available on RDD1, similar to the option you find in Eclipse.

Title: pycuda.gpuarray.dot() very slow at first call
Q_Id: 17,574,547 | A_Id: 17,581,063 | CreationDate: 2013-07-10T15:19:00.000 | Tags: python,cuda,pycuda,mailing-list
is_accepted: false | Score: 0.379949 | Q_Score: 1 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 748 | Topics: Other; Data Science and Machine Learning
Question: I have a working conjugate gradient method implementation in pycuda that I want to optimize. It uses a self-written matrix-vector multiplication and the pycuda-native gpuarray.dot and gpuarray.mul_add functions. Profiling the program with kernprof.py/line_profiler showed most time (>60%) until convergence spent in on...
Answer: One reason would be that PyCUDA is compiling the kernel before uploading it. As far as I remember, though, that should happen only the very first time it executes. One solution could be to "warm up" the kernel by executing it once and then start the profiling procedure.

Title: Python - Combining data from different .csv files into one
Q_Id: 17,586,573 | A_Id: 17,587,872 | CreationDate: 2013-07-11T06:34:00.000 | Tags: python,csv,python-2.7
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,781 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I need some help from Python programmers to solve an issue I'm facing in processing data. I have .csv files placed in a directory structure like this: -MainDirectory Sub directory 1 sub directory 1A fil.csv Sub directory 2 sub directory 2A file.csv sub directory 3 sub directory 3A file.csv Instead of ...
Answer: You can use os.listdir() to get the list of files in a directory.

Title: How to read out point position of a given plot in matplotlib? (without using mouse)
Q_Id: 17,658,836 | A_Id: 17,659,174 | CreationDate: 2013-07-15T16:12:00.000 | Tags: python,matplotlib
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 37 | Topics: Data Science and Machine Learning
Question: The case is, I have a 2D array and can convert it to a plot. How can I read the y value of a point with a given x?
Answer: Just access the input data that you used to generate the plot. Either this is a mathematical function which you can just evaluate for a given x, or it is a two-dimensional data set which you can search for any given x. In the latter case, if x is not contained in the data set, you might want to interpolate or throw an...

Title: Finding N closest numbers
Q_Id: 17,667,022 | A_Id: 17,667,109 | CreationDate: 2013-07-16T02:26:00.000 | Tags: python,algorithm
is_accepted: true | Score: 1.2 | Q_Score: 14 | Users Score: 11 | AnswerCount: 3 | Available Count: 1 | ViewCount: 667 | Topics: Data Science and Machine Learning
Question: Have a 2-dimensional array, like: a[0] = [0, 4, 9]; a[1] = [2, 6, 11]; a[2] = [3, 8, 13]; a[3] = [7, 12]. Need to select one element from each of the sub-arrays so that the resulting set of numbers is closest, that is, the difference between the highest number and the lowest number in the set is minimal...
Answer: Collect all the values into a single ordered sequence, with each element tagged with the array it came from: 0(0), 2(1), 3(2), 4(0), 6(1), ... 12(3), 13(2). Then create a window across them, starting with the first (0(0)) and ending at the first position that makes the window span all the arrays (0(0) -> 7(3)), t...

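The windowing idea in this answer is the classic "smallest range covering one element from each list" technique; a sketch using a min-heap over the tagged sequence (the sub-arrays are assumed sorted, as in the question's data):

```python
import heapq

def closest_selection(arrays):
    """Pick one element from each sorted array so that max - min is minimal.
    Returns (low, high) of the best window."""
    # One (value, array_index, position) entry per array.
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays)]
    heapq.heapify(heap)
    cur_max = max(arr[0] for arr in arrays)
    best = (heap[0][0], cur_max)
    while True:
        val, i, j = heapq.heappop(heap)
        if cur_max - val < best[1] - best[0]:
            best = (val, cur_max)        # tighter window found
        if j + 1 == len(arrays[i]):
            return best                  # this array is exhausted
        nxt = arrays[i][j + 1]
        cur_max = max(cur_max, nxt)
        heapq.heappush(heap, (nxt, i, j + 1))

lo, hi = closest_selection([[0, 4, 9], [2, 6, 11], [3, 8, 13], [7, 12]])
```

For the question's data the best window is [6, 9] (picking 9, 6, 8, 7), a spread of 3.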
Title: Random number indexing past inputs Python
Q_Id: 17,716,737 | A_Id: 17,716,804 | CreationDate: 2013-07-18T07:04:00.000 | Tags: python,random
is_accepted: true | Score: 1.2 | Q_Score: 1 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 53 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I'm having a little bug in my program where you give the computer a random number to try to guess, a range to guess between, and the amount of guesses it has. After the computer generates a random number, it asks you if it is your number; if not, it asks you if your number is higher or lower than it. My problem is, if your n...
Answer: Create two variables that contain the lowest and highest possible values. Whenever you get a response, store it in the appropriate variable. Make the RNG pick a value between the two.

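The bounds-tracking fix the answer describes can be sketched like this (guess_number and the simulated replies are hypothetical, not the asker's code):

```python
import random

def guess_number(secret, lo, hi, rng=random):
    """Track the lowest/highest still-possible values and pick each
    guess between them, so ruled-out numbers are never repeated."""
    guesses = 0
    while True:
        guess = rng.randint(lo, hi)   # RNG restricted to the live bounds
        guesses += 1
        if guess == secret:
            return guesses
        if guess < secret:            # simulated reply: "my number is higher"
            lo = guess + 1
        else:                         # simulated reply: "my number is lower"
            hi = guess - 1

random.seed(1)
n = guess_number(73, 1, 100)
```

Because every wrong guess shrinks the interval, the program always terminates within the size of the original range.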
Title: MCMC implementation in Python
Q_Id: 17,740,281 | A_Id: 19,030,208 | CreationDate: 2013-07-19T07:21:00.000 | Tags: python,c,statistics,mcmc
is_accepted: false | Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 838 | Topics: Data Science and Machine Learning
Question: I have the following problem: there are 12 samples of around 20,000 elements each from unknown distributions (sometimes the distributions are not uni-modal, so it's hard to automatically estimate an analytical family of distributions). Based on these distributions I compute different quantities. How can I explore the di...
Answer: The simplest way to explore the distribution of A is to generate samples based on the samples of B, C, and D, using your rule. That is, for each iteration, draw one value of B, C, and D from their respective sample sets, independently, with repetition, and calculate A = B*C/D. If the sample sets for B, C, and D have th...

Title: Python Process using only 1.6 GB RAM Ubuntu 32 bit in Numpy Array
Q_Id: 17,756,791 | A_Id: 17,756,813 | CreationDate: 2013-07-19T23:01:00.000 | Tags: python,numpy,ubuntu-12.04,32-bit
is_accepted: false | Score: 0.379949 | Q_Score: 2 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 452 | Topics: Data Science and Machine Learning
Question: I have a program for training an artificial neural network, and it takes a 2-D numpy array as training data. The size of the data array I want to use is around 300,000 x 400 floats. I can't use chunking here because the library I am using (DeepLearningTutorials) takes a single numpy array as training data. The code shows M...
Answer: A 32-bit OS can only address up to around 4 GB of RAM, while a 64-bit OS can take advantage of much more (theoretically 16.8 million terabytes). Since your OS is 32-bit, it can only use 4 GB, so the other 4 GB isn't used. The other, 64-bit machine doesn't have the 4 GB limit, so it can take ad...

Title: pandas dataframe group year index by decade
Q_Id: 17,764,619 | A_Id: 54,003,707 | CreationDate: 2013-07-20T17:12:00.000 | Tags: python,pandas
is_accepted: false | Score: 0.244919 | Q_Score: 15 | Users Score: 5 | AnswerCount: 4 | Available Count: 1 | ViewCount: 31,949 | Topics: Data Science and Machine Learning
Question: Suppose I have a dataframe with the index as a monthly timestep. I know I can use dataframe.groupby(lambda x: x.year) to group monthly data into yearly and apply other operations. Is there some way I could quickly group them, let's say, by decade? Thanks for any hints.
Answer: If your DataFrame has headers, say DataFrame['Population', 'Salary', 'vehicle count'], make your index the year: DataFrame = DataFrame.set_index('Year'). Then use the code below to resample the data into decades of 10 years; it also gives you the sum of all other columns within that decade: dataframe = dataframe.resample('10AS').sum()

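A version-stable alternative to resample('10AS') (whose frequency aliases have changed across pandas releases) is to floor each year to its decade and group on that; a sketch with made-up data:

```python
import pandas as pd

# Toy monthly/yearly data; any DatetimeIndex works the same way.
idx = pd.to_datetime(["1998-01-01", "1999-01-01", "2001-01-01", "2005-01-01"])
df = pd.DataFrame({"value": [1, 2, 3, 4]}, index=idx)

# year // 10 * 10 maps 1998 -> 1990, 2001 -> 2000, etc.
by_decade = df.groupby(df.index.year // 10 * 10)["value"].sum()
```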
Title: Co-clustering algorithm in python
Q_Id: 17,767,807 | A_Id: 17,768,482 | CreationDate: 2013-07-20T23:55:00.000 | Tags: python,machine-learning,scipy,scikit-learn,unsupervised-learning
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 720 | Topics: Data Science and Machine Learning
Question: Are there implementations available for any co-clustering algorithms in python? The scikit-learn package has k-means and hierarchical clustering but seems to be missing this class of clustering.
Answer: The fastest clustering algorithm I know of does this: Repeat O(log N) times: C = M x X, where X is N x dim and M is clus x N... If your clusters are not "flat"... perform f(X) = ... This just projects X onto some "flat" space...

Title: Un-normalized Gaussian curve on histogram
Q_Id: 17,779,316 | A_Id: 20,057,520 | CreationDate: 2013-07-22T03:04:00.000 | Tags: python,matplotlib,histogram,gaussian
is_accepted: false | Score: 0.197375 | Q_Score: 6 | Users Score: 3 | AnswerCount: 3 | Available Count: 1 | ViewCount: 12,313 | Topics: Data Science and Machine Learning
Question: I have data which is of Gaussian form when plotted as a histogram. I want to plot a Gaussian curve on top of the histogram to see how good the data is. I am using pyplot from matplotlib. Also, I do NOT want to normalize the histogram; I can do the normed fit, but I am looking for an un-normalized fit. Does anyone here...
Answer: Another way of doing this is to find the normalized fit and multiply the normal distribution by (bin_width * total length of data); this will un-normalize your normal distribution.

Title: Constraints on fitting parameters with Python and ODRPACK
Q_Id: 17,783,481 | A_Id: 17,786,438 | CreationDate: 2013-07-22T09:01:00.000 | Tags: python,scipy,curve-fitting
is_accepted: true | Score: 1.2 | Q_Score: 3 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 780 | Topics: Data Science and Machine Learning
Question: I'm using the ODRPACK library in Python to fit some 1-D data. It works quite well, but I have one question: is there any possibility to put constraints on the fitting parameters? For example, if I have a model y = a * x + b and for physical reasons the parameter a can only be in the range (-1, 1). I've found that such constrain...
Answer: I'm afraid that the older FORTRAN-77 version of ODRPACK wrapped by scipy.odr does not incorporate constraints. ODRPACK95 is a later extension of the original ODRPACK library that predates the scipy.odr wrappers, and it is unclear that we could legally include it in scipy. There is no explicit licensing information for ...

Title: An optimal algorithm for the weighted set cover issue?
Q_Id: 17,813,029 | A_Id: 17,813,855 | CreationDate: 2013-07-23T14:25:00.000 | Tags: python,algorithm,set,cover
is_accepted: false | Score: 0.379949 | Q_Score: 0 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,472 | Topics: Data Science and Machine Learning
Question: Sorry about the title; SO wasn't allowing the word "problem" in it. I have the following problem: I have packages of things I want to sell, and each package has a price. When someone requests things X, Y and Z, I want to look through all the packages, some of which contain more than one item, and give the user the comb...
Answer: If you want an exponential algorithm, just try every subset of the set of packages and take the cheapest one that contains all the things you need.

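The exhaustive approach the answer describes can be sketched as follows; the package data is made up, and the runtime is exponential in the number of packages, so this is only viable for small inputs:

```python
from itertools import combinations

def cheapest_cover(packages, wanted):
    """Brute-force weighted set cover: try every subset of packages and
    keep the cheapest one whose items cover everything in `wanted`."""
    best_cost, best_combo = float("inf"), None
    for r in range(1, len(packages) + 1):
        for combo in combinations(packages, r):
            covered = set().union(*(items for _, items in combo))
            cost = sum(price for price, _ in combo)
            if wanted <= covered and cost < best_cost:
                best_cost, best_combo = cost, combo
    return best_cost, best_combo

# Hypothetical catalogue: (price, set of items in the package).
packages = [(10, {"X", "Y"}), (8, {"Y", "Z"}), (5, {"X"}), (6, {"Z"})]
cost, combo = cheapest_cover(packages, {"X", "Y", "Z"})
```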
Title: Is there any way to add points to KD tree implementation in Scipy
Q_Id: 17,817,889 | A_Id: 17,822,258 | CreationDate: 2013-07-23T18:13:00.000 | Tags: python,scipy,kdtree
is_accepted: false | Score: 1 | Q_Score: 22 | Users Score: 20 | AnswerCount: 1 | Available Count: 1 | ViewCount: 6,831 | Topics: Data Science and Machine Learning
Question: I have a set of points for which I want to construct a KD tree. After some time I want to periodically add a few more points to this KD tree. Is there any way to do this in the scipy implementation?
Answer: The problem with k-d trees is that they are not designed for updates. While you can somewhat easily insert objects (if you use a pointer-based representation, which needs substantially more memory than an array-based tree) and do deletions with tricks such as tombstone messages, doing such changes will degrade the per...

Title: Specify DataType using read_table() in Pandas
Q_Id: 17,822,595 | A_Id: 17,822,815 | CreationDate: 2013-07-23T23:17:00.000 | Tags: python,pandas
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 4,022 | Topics: Data Science and Machine Learning
Question: I am loading a text file into pandas and have a field that contains year. I want to make sure that this field is a string when pulled into the dataframe. I can only seem to get this to work if I specify the exact length of the string using the code below: df = pd.read_table('myfile.tsv', dtype={'year':'S4'}). Is the...
Answer: I believe we enabled this in 0.12: you can pass str, np.str_, or object in place of an S4, all of which convert to object dtype in any event; or, after you read it in, df['year'].astype(object).

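A sketch of the str-dtype option the answer mentions, using read_csv on an in-memory file (read_table accepted the same dtype= mapping; the sample data is made up):

```python
import io
import pandas as pd

# Instead of the fixed-width 'S4', pass Python's str so the column keeps
# arbitrary-length text as object dtype.
csv = "year\tcity\n1999\tParis\n2021\tLima\n"
df = pd.read_csv(io.StringIO(csv), sep="\t", dtype={"year": str})
```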
Title: Fast processing
Q_Id: 17,844,688 | A_Id: 17,845,330 | CreationDate: 2013-07-24T20:58:00.000 | Tags: python,igraph
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 71 | Topics: Data Science and Machine Learning
Question: In Python and igraph I have many nodes with high degree. I always need to consider the edges from a node in order of their weight. It is slow to sort the edges each time I visit the same node. Is there some way to persuade igraph to always give the edges from a node in weight-sorted order, perhaps by some preprocessing...
Answer: As far as I understand, you won't have access to the C backend from Python. What about storing the sorted edges in an attribute of the vertices, e.g. in g.vs["sortedOutEdges"]?

Title: Is there a parameter in matplotlib/pandas to have the Y axis of a histogram as percentage?
Q_Id: 17,874,063 | A_Id: 58,946,534 | CreationDate: 2013-07-26T06:04:00.000 | Tags: python,pandas,matplotlib
is_accepted: false | Score: 1 | Q_Score: 78 | Users Score: 19 | AnswerCount: 6 | Available Count: 1 | ViewCount: 86,215 | Topics: Data Science and Machine Learning
Question: I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib. Thanks
Answer: I know this answer is 6 years later, but to anyone using density=True (the substitute for normed=True): this is not doing what you might want. It will normalize the whole distribution so that the area of the bins is 1. So if you have bins with a width < 1 you can expect the height to be > 1 (y-axis). If you ...

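Given the density=True caveat above, the usual way to get true fractions/percentages is a weights array of 1/N per observation; the same weights= argument is accepted by plt.hist. A numpy-only sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 1.2, 2.5, 2.6, 2.7, 4.9])

# Each observation contributes 1/N, so bar heights are fractions of the
# dataset and sum to 1 regardless of bin width (unlike density=True).
heights, edges = np.histogram(x, bins=4, weights=np.ones_like(x) / len(x))
```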
Title: Represent a tree hierarchy using an Excel spreadsheet to be easily parsed by Python CSV reader?
Q_Id: 17,900,112 | A_Id: 17,900,531 | CreationDate: 2013-07-27T16:38:00.000 | Tags: python,excel,csv,tree,hierarchy
is_accepted: false | Score: 0 | Q_Score: 14 | Users Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 13,410 | Topics: Data Science and Machine Learning
Question: I have a non-technical client who has some hierarchical product data that I'll be loading into a tree structure with Python. The tree has a variable number of levels, and a variable number of nodes and leaf nodes at each level. The client already knows the hierarchy of products and would like to put everything into an Ex...
Answer: If a spreadsheet is a must in this solution, the hierarchy can be represented by indents on the Excel side (empty cells at the beginnings of rows), one row per node/leaf. On the Python side, one can parse them into a tree structure (of course, one needs to filter out empty rows and some other exceptions). Node type can be specif...

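The indentation convention described above can be parsed with a small stack-based reader; rows_to_tree and the sample spreadsheet are hypothetical:

```python
import csv
import io

def rows_to_tree(rows):
    """Build a nested tree from rows where depth = number of leading
    empty cells (one row per node/leaf, as suggested above)."""
    root = {"name": None, "children": []}
    stack = [root]                      # stack[d] is the current node at depth d
    for row in rows:
        depth = next(i for i, cell in enumerate(row) if cell)
        node = {"name": row[depth], "children": []}
        del stack[depth + 1:]           # leave the previous, deeper branch
        stack[depth]["children"].append(node)
        stack.append(node)
    return root

data = "Products,,\n,Widgets,\n,,Small widget\n,,Large widget\n,Gadgets,\n"
tree = rows_to_tree(list(csv.reader(io.StringIO(data))))
```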
Title: Pandas import error
Q_Id: 17,904,600 | A_Id: 61,605,458 | CreationDate: 2013-07-28T03:06:00.000 | Tags: python-2.7,pandas,easy-install
is_accepted: false | Score: 0 | Q_Score: 3 | Users Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 26,604 | Topics: Data Science and Machine Learning
Question: I tried installing pandas using easy_install, and it claimed that it successfully installed the pandas package in my Python directory. I switch to IDLE and try import pandas, and it throws me the following error: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import pandas File "C:\...
Answer: Pandas no longer works with Python 2.7; you will need Python 3.6 or higher.

Title: Dividing an array into partitions NOT evenly sized, given the points where each partition should start or end, in python
Q_Id: 17,925,460 | A_Id: 17,938,780 | CreationDate: 2013-07-29T13:32:00.000 | Tags: python,arrays,slice
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 5 | Available Count: 1 | ViewCount: 141 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: How do I divide a list into smaller, not evenly sized intervals, given the ideal initial and final values of each interval? I have a list of 16,383 items. I also have a separate list of the values at which each interval should end and the following one should begin. I would need to use the given intervals to assign each eleme...
Answer: For each range in your limits list, create an empty list, plus one for the overflow values, as a tuple with the max value and the min value for that list; the last one will have a max of None. For each value in the values list, run through your tuples until you find the one where your value is > min and < max, o...

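Since the interval boundaries are sorted, the linear scan over tuples can be replaced with the stdlib bisect module, which finds each value's interval in O(log n); the boundaries and values here are made up:

```python
import bisect

# Boundary points between intervals, in ascending order.
# Intervals: (-inf, 10), [10, 50), [50, 200), [200, inf)
boundaries = [10, 50, 200]
values = [3, 10, 49, 120, 500]

# bisect_right returns the index of the interval each value falls into.
labels = [bisect.bisect_right(boundaries, v) for v in values]
```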
Title: Passing 2D argument into numpy.optimize.fmin error
Q_Id: 17,950,492 | A_Id: 17,951,581 | CreationDate: 2013-07-30T14:57:00.000 | Tags: python,optimization,scipy
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 380 | Topics: Data Science and Machine Learning
Question: I currently have a function PushLogUtility(p,w,f) that I am looking to optimise w.r.t. f (a 2xk list) for fixed p (a 9xk list) and w (a 2xk list). I am using the scipy.optimize.fmin function but am getting errors, I believe because f is 2-dimensional. I had written a previous function LogUtility(p,q,f) passing a 1-dimensional ...
Answer: It seems it is in fact impossible to pass a 2D list to scipy.optimize.fmin. However, flattening the input f was not that much of a problem, and while it makes the code slightly uglier, the optimisation now works. Interestingly, I also coded the optimisation in Matlab, which does take 2D inputs to its fminsearch function. B...

Title: OpenCV Python 3.3
Q_Id: 17,961,391 | A_Id: 17,971,361 | CreationDate: 2013-07-31T04:04:00.000 | Tags: python,opencv
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,449 | Topics: Data Science and Machine Learning
Question: I have Python 3.3 and 2.7 installed on my computer. For Python 3.3 I installed many libraries like numpy, scipy, etc. Since I also want to use OpenCV, which only supports Python 2.7 so far, I installed OpenCV under Python 2.7. Here comes the problem: what if I want to import numpy as well as cv in the same script?
Answer: You'll have to install all the libraries you want to use together with OpenCV for Python 2.7. This is not much of a problem: you can do it with pip in one line, or choose one of the many pre-built scientific Python packages.

Title: Disambiguation of Names using Edit Distance
Q_Id: 18,023,356 | A_Id: 18,023,402 | CreationDate: 2013-08-02T18:02:00.000 | Tags: python,levenshtein-distance
is_accepted: false | Score: 0.197375 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 164 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a huge list of company names and a huge list of zipcodes associated with those names (>100,000). I have to output similar names (for example, AJAX INC and AJAX are the same company; I have chosen a threshold of 4 characters for edit distance), but only if their corresponding zipcodes match too. The trouble is...
Answer: Create a dictionary keyed by zipcode, with lists of company names as the values. Now you only have to match company names per zipcode, a much smaller search space.

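A sketch of the zipcode-blocking idea with a plain DP Levenshtein distance (the threshold of 4 comes from the question; the function names and sample companies are made up):

```python
from collections import defaultdict

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similar_pairs(companies, max_dist=4):
    """Block by zipcode, then compare names only within each block.
    `companies` is a list of (name, zipcode) tuples."""
    by_zip = defaultdict(list)
    for name, zipcode in companies:
        by_zip[zipcode].append(name)
    pairs = []
    for names in by_zip.values():
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if levenshtein(names[i], names[j]) <= max_dist:
                    pairs.append((names[i], names[j]))
    return pairs
```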
Title: How to estimate how much memory a Pandas' DataFrame will need?
Q_Id: 18,089,667 | A_Id: 47,751,572 | CreationDate: 2013-08-06T20:18:00.000 | Tags: python,pandas
is_accepted: false | Score: 1 | Q_Score: 170 | Users Score: 116 | AnswerCount: 7 | Available Count: 2 | ViewCount: 124,007 | Topics: Data Science and Machine Learning
Question: I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel for data frames and memory...
Answer: Here's a comparison of the different methods; sys.getsizeof(df) is simplest. For this example, df is a dataframe with 814 rows, 11 columns (2 ints, 9 objects), read from a 427 KB shapefile. sys.getsizeof(df) gives results in bytes: >>> import sys >>> sys.getsizeof(df) 462456. df.memory_usage(): >>> df.memory_usage() ...

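A minimal sketch of the memory_usage approach from the comparison above; deep=True also counts the payload of object (string) columns rather than just the pointers to them (the sample frame is made up):

```python
import pandas as pd

df = pd.DataFrame({"a": range(1000), "b": ["text"] * 1000})

# Per-column bytes; deep=True walks object columns and sizes the
# Python strings themselves.
per_column = df.memory_usage(deep=True)
total_bytes = int(per_column.sum())
```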
Title: How to estimate how much memory a Pandas' DataFrame will need?
Q_Id: 18,089,667 | A_Id: 18,089,887 | CreationDate: 2013-08-06T20:18:00.000 | Tags: python,pandas
is_accepted: false | Score: 1 | Q_Score: 170 | Users Score: 10 | AnswerCount: 7 | Available Count: 2 | ViewCount: 124,007 | Topics: Data Science and Machine Learning
Question: same as the previous record (Q_Id 18,089,667).
Answer: Yes, there is. Pandas will store your data in 2-dimensional numpy ndarray structures, grouping them by dtype. An ndarray is basically a raw C array of data with a small header, so you can estimate its size just by multiplying the size of the dtype it contains by the dimensions of the array. For example: if you have 1000...

Title: nearest k neighbours that satisfy conditions (python)
Q_Id: 18,144,810 | A_Id: 18,339,341 | CreationDate: 2013-08-09T10:36:00.000 | Tags: python,constraints,nearest-neighbor,kdtree
is_accepted: false | Score: 0 | Q_Score: 4 | Users Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 631 | Topics: Data Science and Machine Learning
Question: I have a slight variant on the "find k nearest neighbours" algorithm which involves rejecting those that don't satisfy a certain condition, and I can't think of how to do it efficiently. What I'm after is to find the k nearest neighbours that are in the current line of sight. Unfortunately scipy.spatial.cKDTree doesn't...
Answer: If you are looking for the neighbours within a line of sight, couldn't you use a method like cKDTree.query_ball_point(self, x, r, p, eps), which allows you to query the KDTree for neighbours that are inside a radius of size r around the x array points? Unless I misunderstood your question, it seems that the line of sight is...

Title: FFT resolution bandwidth
Q_Id: 18,150,150 | A_Id: 18,153,590 | CreationDate: 2013-08-09T15:19:00.000 | Tags: python,numpy,fft
is_accepted: false | Score: 0.099668 | Q_Score: 0 | Users Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 921 | Topics: Data Science and Machine Learning
Question: I have a numpy fft for a large number of samples. How do I reduce the resolution bandwidth so that it will show me fewer frequency bins with averaged power output?
Answer: The bandwidth of each FFT result bin is inversely proportional to the length of the FFT window. For a wider bandwidth per bin, use a shorter FFT. If you have more data, then Welch's method can be used with sequential STFT windows to get an average estimate.

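The averaging the answer suggests can be sketched with numpy alone: split the signal into shorter segments, FFT each, and average the per-segment power (scipy.signal.welch does the same with windowing and overlap; the signal here is made-up noise):

```python
import numpy as np

def averaged_power_spectrum(x, seg_len):
    """Basic Welch-style estimate: non-overlapping segments of length
    seg_len, power of each, averaged across segments."""
    n_seg = len(x) // seg_len
    segs = np.reshape(x[:n_seg * seg_len], (n_seg, seg_len))
    spectra = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
psd = averaged_power_spectrum(x, 256)   # shorter FFT -> wider bandwidth per bin
```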
Title: Efficient nearest neighbour search for sparse matrices
Q_Id: 18,164,348 | A_Id: 18,201,497 | CreationDate: 2013-08-10T17:07:00.000 | Tags: python,scipy,scikit-learn,nearest-neighbor
is_accepted: false | Score: 0.291313 | Q_Score: 9 | Users Score: 3 | AnswerCount: 2 | Available Count: 1 | ViewCount: 3,890 | Topics: Data Science and Machine Learning
Question: I have a large corpus of data (text) that I have converted to a sparse term-document matrix (I am using scipy.sparse.csr.csr_matrix to store the sparse matrix). I want to find, for every document, the top n nearest-neighbour matches. I was hoping that the NearestNeighbor routine in the Python scikit-learn library (sklearn.neighbors.Ne...
Answer: You can try to transform your high-dimensional sparse data to low-dimensional dense data using TruncatedSVD, then build a ball tree.

Title: finding a set of ranges that a number falls in
Q_Id: 18,179,680 | A_Id: 18,180,322 | CreationDate: 2013-08-12T04:47:00.000 | Tags: python,algorithm
is_accepted: false | Score: 0 | Q_Score: 6 | Users Score: 0 | AnswerCount: 6 | Available Count: 1 | ViewCount: 3,636 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a 200k-line list of number ranges, like start_position, stop_position. The list includes all kinds of overlaps in addition to non-overlapping ones. The list looks like this: [3,5] [10,30] [15,25] [5,15] [25,35] ... I need to find the ranges that a given number falls in, and will repeat it for 100k numbers. For ex...
Answer: How about: sort by first column, O(n log n); binary search to find the indices that are out of range, O(log n); throw out the values out of range; sort by second column, O(n log n); binary search to find the indices that are out of range, O(log n); throw out the values out of range. You are left with the values in range. This should be O(n log...

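A compact version of the sort-plus-binary-search idea, using the stdlib bisect module and the question's sample ranges. This counts how many [start, stop] ranges contain x; returning the ranges themselves would call for an interval tree instead:

```python
import bisect

ranges = [(3, 5), (10, 30), (15, 25), (5, 15), (25, 35)]

# Preprocess once: O(n log n).
starts = sorted(s for s, _ in ranges)
stops = sorted(e for _, e in ranges)

def count_containing(x):
    """# of ranges with start <= x <= stop: (#starts <= x) - (#stops < x).
    Each query is O(log n), so 100k queries over 200k ranges stay fast."""
    return bisect.bisect_right(starts, x) - bisect.bisect_left(stops, x)
```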
Title: SciPy 0.12.0 and Numpy 1.6.1 - numpy.core.multiarray failed to import
Q_Id: 18,282,568 | A_Id: 18,321,537 | CreationDate: 2013-08-16T21:44:00.000 | Tags: python,numpy,scipy
is_accepted: true | Score: 1.2 | Q_Score: 7 | Users Score: 3 | AnswerCount: 1 | Available Count: 1 | ViewCount: 6,801 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I just installed ArcGIS v10.2 64-bit background processing, which installs Python 2.7.3 64-bit and NumPy 1.6.1. I installed SciPy 0.12.0 64-bit into the same Python installation. When I opened my Python interpreter I was able to successfully import arcpy, numpy, and scipy. However, when I tried to import scipy.ndimage I go...
Answer: So it seems that the cause of the error was incompatibility between scipy 0.12.0 and the much older numpy 1.6.1. There are two ways to fix this: either upgrade numpy (to ~1.7.1) or downgrade scipy (to ~0.10.1). If ArcGIS 10.2 specifically requires NumPy 1.6.1, the easiest option is to downgrade scipy.

Title: Selecting random rows with python and writing to a new file
Q_Id: 18,314,913 | A_Id: 18,315,125 | CreationDate: 2013-08-19T13:23:00.000 | Tags: python,random-sample
is_accepted: false | Score: -0.099668 | Q_Score: 0 | Users Score: -1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 11,279 | Topics: Data Science and Machine Learning
Question: I need to open a csv file, select 1000 random rows and save those rows to a new file. I'm stuck and can't see how to do it. Can anyone help?
Answer: The basic procedure is this: 1. Open the input file; this can be accomplished with the basic builtin open function. 2. Open the output file; you'll probably use the same method that you chose in step #1, but you'll need to open the file in write mode. 3. Read the input file into a variable; it's often preferable to read th...

Title: OpenCv2: Using HoughLinesP raises " is not a numpy array"
Q_Id: 18,352,493 | A_Id: 21,596,301 | CreationDate: 2013-08-21T08:30:00.000 | Tags: python,opencv
is_accepted: false | Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 158 | Topics: Data Science and Machine Learning
Question: Using HoughLinesP raises "<'unknown'> is not a numpy array", but my array really is a numpy array. It works on one of my computers, but not on my robot...
Answer: For me, it wasn't working when the environment was ROS Fuerte, but it worked when the environment was ROS Groovy. As Alexandre mentioned above, it must be a problem with the OpenCV versions: Fuerte had 2.4.2 while Groovy had 2.4.6.

Title: OpenCv2: Using HoughLinesP raises " is not a numpy array"
Q_Id: 18,352,493 | A_Id: 18,353,075 | CreationDate: 2013-08-21T08:30:00.000 | Tags: python,opencv
is_accepted: false | Score: 0.197375 | Q_Score: 0 | Users Score: 2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 158 | Topics: Data Science and Machine Learning
Question: same as the previous record (Q_Id 18,352,493).
Answer: Found it: I don't have the same OpenCV version on my robot and on my computer! For the record, calling HoughLinesP works fine on 2.4.5 and 2.4.6, but leads to "<unknown> is not a numpy array" with version $Rev: 4557 $.

Title: K-th order neighbors in graph - Python networkx
Q_Id: 18,393,842 | A_Id: 71,715,493 | CreationDate: 2013-08-23T02:45:00.000 | Tags: python,networkx,adjacency-list
is_accepted: false | Score: 0 | Q_Score: 11 | Users Score: 0 | AnswerCount: 6 | Available Count: 2 | ViewCount: 10,646 | Topics: Data Science and Machine Learning
Question: I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops. I looked at networkx and the only relevant function was neighbors. However, this just returns the orde...
Answer: Yes, you can get a k-order ego graph of a node: subgraph = nx.ego_graph(G, node, radius=k). The neighbors are then the nodes of the subgraph: neighbors = list(subgraph.nodes()).

Title: K-th order neighbors in graph - Python networkx
Q_Id: 18,393,842 | A_Id: 21,031,826 | CreationDate: 2013-08-23T02:45:00.000 | Tags: python,networkx,adjacency-list
is_accepted: true | Score: 1.2 | Q_Score: 11 | Users Score: 27 | AnswerCount: 6 | Available Count: 2 | ViewCount: 10,646 | Topics: Data Science and Machine Learning
Question: same as the previous record (Q_Id 18,393,842).
Answer: You can use nx.single_source_shortest_path_length(G, node, cutoff=K), where G is your graph object.

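The accepted call above returns all nodes within distance K, so "exactly K hops" means filtering its result for distance K. An equivalent pure-Python BFS (no networkx needed) looks like this sketch; the adjacency dict and graph are made up:

```python
from collections import deque

def kth_order_neighbors(adj, source, k):
    """Nodes whose shortest-path distance from `source` is exactly k.
    `adj` maps each node to an iterable of its successors (directed graph)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] == k:          # no need to expand past depth k
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return {n for n, d in dist.items() if d == k}

adj = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6]}   # toy directed graph
```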
Title: Retrieve 10 random lines from a file
Q_Id: 18,455,589 | A_Id: 18,456,597 | CreationDate: 2013-08-27T01:35:00.000 | Tags: python,numpy
is_accepted: false | Score: 0.033321 | Q_Score: 4 | Users Score: 1 | AnswerCount: 6 | Available Count: 1 | ViewCount: 3,957 | Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a text file which is 10k lines long, and I need to build a function to extract 10 random lines each time from this file. I already found how to generate random numbers in Python with numpy and also how to open a file, but I don't know how to mix it all together. Please help.
Answer: It is possible to do the job in one pass and without loading the entire file into memory as well, though the code itself is going to be much more complicated and mostly unneeded unless the file is HUGE. The trick is the following: suppose we only need one random line; then first save the first line into a variable, then...

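The one-pass trick the answer begins to describe generalizes to k lines via reservoir sampling; a stdlib-only sketch (sample_lines and the generated lines are hypothetical):

```python
import random

def sample_lines(lines, k, rng=random):
    """Reservoir sampling: one pass, O(k) memory, uniform over all lines,
    without loading or even counting the file first."""
    reservoir = []
    for n, line in enumerate(lines):
        if n < k:
            reservoir.append(line)
        else:
            r = rng.randrange(n + 1)   # keep line n with probability k/(n+1)
            if r < k:
                reservoir[r] = line
    return reservoir

random.seed(42)
picked = sample_lines((f"line {i}" for i in range(10_000)), 10)
```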
Title: AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'
Q_Id: 18,472,394 | A_Id: 18,474,882 | CreationDate: 2013-08-27T17:55:00.000 | Tags: python,macos,matplotlib,wxpython
is_accepted: false | Score: 0.197375 | Q_Score: 1 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 287 | Topics: GUI and Desktop Applications; Data Science and Machine Learning
Question: I am creating a GUI program using wxPython. I am also using matplotlib to graph some data. This data needs to be animated. To animate the data I am using the FuncAnimation function, which is part of the matplotlib package. When I first started to write my code I was using a PC running Windows 7. I did my initial test...
Answer: _idletimer is likely to be a private, possibly implementation-specific member of one of the classes; since you do not include the code or context, I cannot tell you which. In general, anything that starts with an _ is private, and if it is not your own and specific to the local class, it should not be used by your code, as...

Title: What does extent do within imshow()?
Q_Id: 18,511,206 | A_Id: 18,511,409 | CreationDate: 2013-08-29T12:36:00.000 | Tags: python,matplotlib,histogram2d
is_accepted: true | Score: 1.2 | Q_Score: 1 | Users Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 128 | Topics: Data Science and Machine Learning
Question: I'm wanting to use imshow() to create an image of a 2D histogram. However, in several of the examples I've seen, the 'extent' is defined. What does 'extent' actually do, and how do you choose what values are appropriate?
Answer: extent defines the image's max and min of the horizontal and vertical values. It takes four values, like so: extent=[horizontal_min, horizontal_max, vertical_min, vertical_max].

Title: DBSCAN with potentially imprecise lat/long coordinates
Q_Id: 18,519,356 | A_Id: 18,537,377 | CreationDate: 2013-08-29T19:19:00.000 | Tags: python,algorithm,cluster-analysis,data-mining,dbscan
is_accepted: true | Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 723 | Topics: Data Science and Machine Learning
Question: I've been running scikit-learn's DBSCAN implementation to cluster a set of geotagged photos by lat/long. For the most part it works pretty well, but I came across a few instances that were puzzling. For instance, there were two sets of photos for which the user-entered text field specified that the photo was taken at...
Answer: Note that DBSCAN doesn't actually need the distances. Look up Generalized DBSCAN: all it really uses is an "is a neighbor of" relationship. If you really need to incorporate uncertainty, look up the various DBSCAN variations and extensions that handle imprecise data explicitly. However, you may get pretty much the same...

0
20,780,282
0
0
0
0
2
false
14
2013-09-03T14:39:00.000
0
5
0
How I can use cv2.ellipse?
18,595,099
0
python,opencv
These parameters must be integers, or cv2.ellipse will raise a TypeError.
OpenCV2 for Python has 2 functions: [Function 1] Python: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) → None [Function 2] Python: cv2.ellipse(img, box, color[, thickness[, lineType]]) → None I want to use [Function 1], but when I use this code cv2.ellipse(Res...
0
1
13,427
0
28,592,694
0
0
0
0
2
false
14
2013-09-03T14:39:00.000
7
5
0
How I can use cv2.ellipse?
18,595,099
1
python,opencv
Make sure all the ellipse parameters are ints, otherwise it raises "TypeError: ellipse() takes at most 5 arguments (10 given)". I had the same problem, and casting the parameters to int fixed it. Please note that in Python you should round the number first and then use int(), since the int function truncates: x = 2....
OpenCV2 for Python has 2 functions: [Function 1] Python: cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) → None [Function 2] Python: cv2.ellipse(img, box, color[, thickness[, lineType]]) → None I want to use [Function 1], but when I use this code cv2.ellipse(Res...
0
1
13,427
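A hedged sketch of the round-then-cast fix both answers describe; `as_int_point` is a hypothetical helper, and the actual cv2.ellipse call is left as a comment since it needs a real image:

```python
def as_int_point(pt):
    """Round an (x, y) pair and cast to plain ints, as cv2 drawing
    functions require (int() alone truncates, so round first)."""
    return tuple(int(round(v)) for v in pt)

center = as_int_point((49.7, 50.2))
axes = as_int_point((20.9, 10.1))
# With OpenCV this would then be, e.g.:
# cv2.ellipse(img, center, axes, 0, 0, 360, (255, 0, 0), 2)
```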
0
18,641,432
0
0
0
0
1
true
0
2013-09-05T14:02:00.000
3
1
0
Pytables, HDF5 Attribute setting and deletion,
18,638,461
1.2
python,hdf5,pytables
HDF5 attribute access is notoriously slow. HDF5 is really built for and around the array data structure. Things like groups and attributes are great helpers but they are not optimized. That said while attribute reading is slow, attribute writing is even slower. Therefore, it is always worth the extra effort to do w...
I am working a lot with pytables and HDF5 data and I have a question regarding the attributes of nodes (the attributes you access via pytables 'node._v_attrs' property). Assume that I set such an attribute of an hdf5 node. I do that over and over again, setting a particular attribute (1) always to the same value (so ov...
0
1
986
0
18,647,689
0
1
0
0
2
false
0
2013-09-05T21:12:00.000
1
2
0
How to create a .py file within canopy?
18,646,039
0.099668
python,enthought
Umair, ctrl + n or File > Python File will do what you want. Best, Jonathan
I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you...
0
1
1,009
0
19,578,868
0
1
0
0
2
false
0
2013-09-05T21:12:00.000
1
2
0
How to create a .py file within canopy?
18,646,039
0.099668
python,enthought
Let me add that if you need to open the file, even if it's a text file but you want to be able to run it as a Python file (or whatever language format) just look at the bottom of the Canopy window and select the language you want to use. In some cases it may default to just text. Click it and select the language you wa...
I am using Enthought canopy for data analysis. I didn't find any option to create a .py file to write my code and save it for later use. I tried File> New >IPython Notebook, wrote my code and saved it. But the next time I opened it within Canopy editor, it wasn't editable. I need something like a Python shell where you...
0
1
1,009
0
18,687,144
0
0
0
0
1
false
3
2013-09-07T07:35:00.000
1
2
0
How to get a series of random points in a specific range of distances from a reference point and generate xyz coordinates
18,670,974
0.099668
python
Generically you can populate your points in two ways: 1) use random to create the coordinates for your points within the outer bounds of the solution; if a given random point falls outside the max or inside the inner limit, discard it and draw again. 2) You can do it using polar coordinates: generate a random distance between the inner and oute...
I have to generate various points with xyz coordinates and then calculate the distances. Let's say I want to create random point coordinates from point a, 2.5 cm away in all directions, so that I can calculate the mutual distances and angles from a to all generated points (red). I want to remove the redundant poi...
0
1
5,337
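One way to sketch the polar-coordinate method from the answer: draw random unit directions (normalised Gaussian samples) and radii between an inner and outer limit. The center and limits here are made up; note that a uniform radius clusters points toward the inner shell in volume terms.

```python
import numpy as np

rng = np.random.default_rng(0)
center = np.array([1.0, 2.0, 3.0])
r_min, r_max = 2.5, 5.0
n = 200

# random unit directions: normalise Gaussian samples
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# random radii between the inner and outer limits
radii = rng.uniform(r_min, r_max, size=(n, 1))

points = center + dirs * radii
```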
0
18,719,287
0
0
0
0
1
true
14
2013-09-09T16:49:00.000
19
3
0
Proximity Matrix in sklearn.ensemble.RandomForestClassifier
18,703,136
1.2
python,scikit-learn,random-forest
We don't implement proximity matrix in Scikit-Learn (yet). However, this could be done by relying on the apply function provided in our implementation of decision trees. That is, for all pairs of samples in your dataset, iterate over the decision trees in the forest (through forest.estimators_) and count the number of...
I'm trying to perform clustering in Python using Random Forests. In the R implementation of Random Forests, there is a flag you can set to get the proximity matrix. I can't seem to find anything similar in the python scikit version of Random Forest. Does anyone know if there is an equivalent calculation for the python ...
0
1
6,625
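A sketch of the accepted answer's recipe using forest.apply() on toy data (the dataset is synthetic): for each pair of samples, count the fraction of trees in which they land in the same leaf.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = (X[:, 0] > 0).astype(int)

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# apply() gives, for every sample, the index of the leaf it lands in per tree
leaves = forest.apply(X)                  # shape (n_samples, n_trees)

# proximity = fraction of trees in which two samples share a leaf
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
```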
0
18,735,714
0
0
0
0
2
false
0
2013-09-10T14:09:00.000
1
2
0
Sort out bad pictures of a dataset (k-means, clustering, sklearn)
18,721,204
0.099668
python,computer-vision,cluster-analysis,scikit-learn,k-means
K-means is not very robust to noise; and your "bad pictures" probably can be considered as such. Furthermore, k-means doesn't work too well for sparse data; as the means will not be sparse. You may want to try other, more modern, clustering algorithms that can handle this situation much better.
I'm testing some things in image retrieval and I was thinking about how to sort out bad pictures of a dataset. E.g., there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses. At the moment my approach looks like: computing descriptor...
0
1
550
0
18,735,840
0
0
0
0
2
false
0
2013-09-10T14:09:00.000
1
2
0
Sort out bad pictures of a dataset (k-means, clustering, sklearn)
18,721,204
0.099668
python,computer-vision,cluster-analysis,scikit-learn,k-means
I don't have the solution to your problem but here is a sanity check to perform prior to the final clustering, to check that the kind of features you extracted is suitable for your problem: extract the histogram features for all the pictures in your dataset compute the pairwise distances of all the pictures in your da...
I'm testing some things in image retrieval and I was thinking about how to sort out bad pictures of a dataset. E.g., there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses. At the moment my approach looks like: computing descriptor...
0
1
550
0
18,815,103
0
0
0
0
1
true
0
2013-09-11T14:26:00.000
0
1
0
Deleting Lines of a Plot Which has Plotted with axvspan()
18,743,895
1.2
python,matplotlib,interactive-mode
OK, I have found the necessary functions. I used the dir() function to discover the methods. axvspan() returns a matplotlib.patches.Polygon, which has a set_visible method; by calling x.set_visible(0) I removed the lines and shapes.
I am doing some animated plots with the ion() function. I want to draw and delete some lines. I found the axvspan() function, and I can plot the lines and shapes with it as I want. But since I am doing an animation, I also want to delete those lines and shapes, and I couldn't find a way to do that.
0
1
159
0
18,777,073
0
1
0
0
1
false
3
2013-09-13T01:48:00.000
1
3
0
Python 2.7 - ImportError: No module named Image
18,776,988
0.066568
python,windows,opencv,installation
Try to put the python(2.7) at your Windows path. Do the following steps: Open System Properties (Win+Pause) or My Computer and right-click then Properties Switch to the Advanced tab Click Environment Variables Select PATH in the System variables section Click Edit Add python's path to the end of the list (the paths ar...
Recently, I have been studying OpenCV to detect and recognize faces using C++. In order to execute source code demonstration from the OpenCV website I need to run Python to crop image first. Unfortunately, the message error is 'ImportError: No module named Image' when I run the Python script (this script is provided by...
0
1
22,097
0
18,778,542
0
1
0
0
1
false
0
2013-09-13T04:21:00.000
0
2
0
Cash flow diagram in python
18,778,266
0
python,image,matplotlib
If you simply need arrows pointing up and down, use Unicode arrows like "↑" and "↓". This would be really simple if rendering in a browser.
I need to make a very simple image that will illustrate a cash flow diagram based on user input. Basically, I just need to make an axis and some arrows facing up and down, proportional to the value of the cash flow. I would like to know how to do this with matplotlib.
0
1
875
0
36,248,111
0
0
0
0
1
true
1
2013-09-13T16:55:00.000
0
1
0
How to plot a heatmap of a big matrix with matplotlib (45K * 446)
18,791,469
1.2
python,matplotlib,bigdata,heatmap
I solved it by downsampling the matrix to a smaller matrix. I decided to try two methodologies: supposing I want to down-sample a matrix of 45k rows to a matrix of 1k rows, I take one row value every 45 rows; the other methodology is to group the 45k rows into 1k groups (composed of 45 adj...
I am trying to plot a heatmap of a big microarray dataset (45K rows per 446 columns). Using pcolor from matplotlib I am unable to do it because my pc goes easily out of memory (more than 8G).. I'd prefer to use python/matplotlib instead of R for personal opinion.. Any way to plot heatmaps in an efficient way? Thanks
0
1
1,446
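Both down-sampling methods from the answer can be sketched with plain numpy (the 45k x 446 matrix here is a random stand-in for the microarray data):

```python
import numpy as np

data = np.random.rand(45000, 446)   # stand-in for the microarray matrix
group = 45                           # 45k rows -> 1k rows

# method 1: just keep every 45th row
sampled = data[::group]

# method 2: average each block of 45 adjacent rows
small = data.reshape(-1, group, data.shape[1]).mean(axis=1)
```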
0
35,315,482
0
1
0
0
1
false
6
2013-09-18T21:17:00.000
1
1
0
How to share Ipython notebook kernels?
18,882,510
0.197375
ipython,ipython-notebook
When I have a long notebook, I turn my code into functions and move them into Python modules, which I then import in the notebook. That way I can keep huge chunks of code hidden in the background, and my notebook stays smaller and handier to work with.
I have some very large IPython (1.0) notebooks, which I find very unhandy to work with. I want to split the large notebook into several smaller ones, each covering a specific part of my analysis. However, the notebooks need to share data and (unpickleable) objects. Now, I want these notebooks to connect to the same ker...
0
1
1,689
0
18,897,337
0
0
0
0
1
false
0
2013-09-19T04:49:00.000
0
1
0
Repeat rows in files in wakari
18,886,383
0
python
cat dataset.csv dataset.csv dataset.csv dataset.csv > bigdata.csv
In Wakari, how do I download a CSV file and create a new CSV file in which each row of the original file is repeated N times?
0
1
61
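If each individual row should be repeated N times (rather than concatenating whole copies of the file, as the cat one-liner does), a small pure-Python sketch (the function name is hypothetical):

```python
def repeat_csv_rows(src, dst, n):
    """Write each line of `src` to `dst` repeated n times."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if not line.endswith("\n"):
                line += "\n"
            fout.write(line * n)
```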
0
18,908,045
0
1
0
0
1
false
1
2013-09-20T02:37:00.000
0
2
0
Difference between a numpy array and a multidimensional list in Python?
18,907,998
0
python,arrays,list,numpy,multidimensional-array
Numpy is an extension that demands that all the objects in an array are of the same type, defined on creation. It also provides a set of linear algebra operations. It's more like a mathematical framework for Python to deal with numeric calculations (matrices and the like).
After only briefly looking at numpy arrays, I don't understand how they are different than normal Python lists. Can someone explain the difference, and why I would use a numpy array as opposed to a list?
0
1
2,487
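A tiny illustration of the difference: an ndarray has one fixed dtype and supports whole-array arithmetic, while a plain list needs an explicit loop or comprehension:

```python
import numpy as np

nums = [1, 2, 3, 4]

# an ndarray holds one fixed element type, chosen at creation
arr = np.array(nums, dtype=np.float64)

# whole-array arithmetic, no explicit loop
doubled = arr * 2

# the list equivalent needs a comprehension
doubled_list = [v * 2 for v in nums]
```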
0
18,910,584
0
0
0
0
1
false
0
2013-09-20T06:26:00.000
0
1
0
Analyze Text to find patterns and useful information
18,910,200
0
python,nlp
Try xlrd Python Module to read and process excel sheets. I think an appropriate implementation using this module is an easy way to solve your problem.
to provide some context: Issues in an application are logged in an excel sheet and one of the columns in that sheet contains the email communication between the user (who had raised the issue) and the resolve team member. There are bunch of other columns containing other useful information. My job is to find useful ins...
0
1
637
0
20,853,862
0
1
0
0
1
false
2
2013-09-24T05:49:00.000
2
4
0
How to install scikit-learn for Portable Python?
18,973,863
0.099668
python-2.7,scikit-learn,portable-python
You can easily download the scikit-learn executable, extract it with Python, copy the scikit-learn folder and its contents to c:\Portable Python 2.7.5.1\App\Lib\site-packages\, and you'll have scikit-learn in your portable Python. I just had this problem and solved it this way.
While I am trying to install scikit-learn for my Portable Python, it says "Python 2.7 is not found in the registry". In the next window, it does ask for an installation path, but I am neither able to copy-paste the path nor write it manually. Otherwise, please suggest some other alternative for portable Python which ...
0
1
1,485
0
44,480,375
0
0
0
0
1
false
12
2013-09-25T01:33:00.000
0
2
0
Python Svmlight Error: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
18,994,787
0
python-2.7,scipy,svmlight
I also met this problem when I assigned numbers to a matrix, like this: Qmatrix[list2[0], list2[j]] = 1. An index may be a non-integer number, so I changed it to this: Qmatrix[int(list2[0]), int(list2[j])] = 1 and the warning went away.
I'm running python 2.7.5 with scikit_learn-0.14 on my Mac OSX Mountain Lion. Everything I run a svmlight command however, I get the following warning: DeprecationWarning: using a non-integer number instead of an integer will result in an error >in the future
0
1
34,473
0
38,722,056
0
1
0
0
2
false
21
2013-09-26T13:14:00.000
0
4
0
How to check that the anaconda package was properly installed
19,029,333
0
python,macos,numpy,installation,anaconda
I don't think the existing answer answers your specific question (about installing packages within Anaconda). When I install a new package via conda install <PACKAGE>, I then run conda list to ensure the package is now within my list of Anaconda packages.
I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded/installed Anaconda 1.7. However, whe...
0
1
69,121
0
41,600,022
0
1
0
0
2
false
21
2013-09-26T13:14:00.000
1
4
0
How to check that the anaconda package was properly installed
19,029,333
0.049958
python,macos,numpy,installation,anaconda
Though the question is not relevant to Windows environment, FYI for windows. In order to use anaconda modules outside spyder or in cmd prompt, try to update the PYTHONPATH & PATH with C:\Users\username\Anaconda3\lib\site-packages. Finally, restart the command prompt. Additionally, sublime has a plugin 'anaconda' which ...
I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded/installed Anaconda 1.7. However, whe...
0
1
69,121
0
19,042,578
0
0
0
0
1
true
1
2013-09-27T01:53:00.000
1
1
0
How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32'
19,041,486
1.2
python,optimization,scipy,gpu,multidimensional-array
I am not sure you can ever do it. fmin_l_bfgs_b is provided not by pure Python code but by an extension (a wrapper of Fortran code). On the Win32/64 platform it can be found at \scipy\optimize\_lbfgsb.pyd. What you want may only be possible if you can compile the extension differently or modify the Fortran code. If you check ...
I am trying to optimize functions with GPU calculation in Python, so I prefer to store all my data as ndarrays with dtype=float32. When I am using scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 (on my 64bit machine) parameter to my objective and gradient functions, even when I pass a...
0
1
1,678
0
28,295,797
0
0
0
0
1
true
19
2013-09-27T19:18:00.000
20
2
0
Specifying the line width of the legend frame, in matplotlib
19,058,485
1.2
python,matplotlib
For the width: legend.get_frame().set_linewidth(w) For the color: legend.get_frame().set_edgecolor("red")
In matplotlib, how do I specify the line width and color of a legend frame?
0
1
13,797
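The two calls from the answer in context (Agg backend so no display is needed; the line data is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="line")
legend = ax.legend()

# style the legend frame: line width and edge color
legend.get_frame().set_linewidth(2.5)
legend.get_frame().set_edgecolor("red")
```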
0
61,194,900
0
0
0
0
1
false
184
2013-09-28T20:10:00.000
6
11
0
Drop columns whose name contains a specific string from pandas DataFrame
19,071,199
1
python,pandas,dataframe
This method does everything in place. Many of the other answers create copies and are not as efficient: df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True)
I have a pandas dataframe with the following column names: Result1, Test1, Result2, Test2, Result3, Test3, etc... I want to drop all the columns whose name contains the word "Test". The numbers of such columns is not static but depends on a previous function. How can I do that?
0
1
184,940
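The in-place drop from the answer, sketched on a made-up frame:

```python
import pandas as pd

df = pd.DataFrame(0, index=[0],
                  columns=["Result1", "Test1", "Result2", "Test2", "Result3"])

# drop every column whose name contains "Test", in place
df.drop(df.columns[df.columns.str.contains("Test")], axis=1, inplace=True)
```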
0
21,080,034
0
0
0
0
1
false
2
2013-09-30T07:19:00.000
0
1
0
How to resize the y axis of a dendrogram
19,088,527
0
python,scipy,hierarchical-clustering,dendrogram
If you're really only interested in the distance proportions between the fusions, you could adapt your input linkage (cut an offset from the third column of the linkage matrix). This will distort the absolute cophenetic distances, of course. Alternatively, do some normalization of your input data before clustering it, or manipul...
I am using scipy.cluster.hierarchy as sch to draw a dendrogram after making a hierarchical clustering. The problem is that the clustering happens at the top of the dendrogram, between 0.8 and 1.0, which is the similarity degree on the y axis. How can I "cut" the graph from 0 to 0.6, where nothing "interesting" gra...
0
1
362
0
19,616,987
0
0
0
0
1
false
26
2013-10-01T03:48:00.000
6
4
0
How to compute scipy sparse matrix determinant without turning it to dense?
19,107,617
1
python,numpy,scipy,linear-algebra,sparse-matrix
The "standard" way to solve this problem is with a cholesky decomposition, but if you're not up to using any new compiled code, then you're out of luck. The best sparse cholesky implementation is Tim Davis's CHOLMOD, which is licensed under the LGPL and thus not available in scipy proper (scipy is BSD).
I am trying to figure out the fastest method to find the determinant of sparse symmetric and real matrices in python. using scipy sparse module but really surprised that there is no determinant function. I am aware I could use LU factorization to compute determinant but don't see a easy way to do it because the return ...
0
1
6,098
0
19,220,952
0
0
0
0
1
false
0
2013-10-02T05:12:00.000
1
1
0
How to detect start of raw-rgb video frame?
19,130,365
0.197375
python,video,rgb,gstreamer
If this really is raw rgb video, there is no (realistic) way to detect the start of the frame. I would assume your video would come as whole frames, so one buffer == one frame, and hence no need for such detection.
I have raw-rgb video coming from PAL 50i camera. How can I detect the start of frame, just like I would detect the keyframe of h264 video, in gstreamer? I would like to do that for indexing/cutting purposes.
0
1
234
0
19,389,797
0
0
0
0
1
false
2
2013-10-04T01:40:00.000
1
1
0
Scipy or pandas for sparse matrix computations?
19,171,822
0.197375
python,numpy,matrix,pandas
After some research I found that both pandas and SciPy have structures to represent sparse matrices efficiently in memory. But neither has out-of-the-box support for computing similarity between vectors, like cosine, adjusted cosine, Euclidean, etc. SciPy supports this on dense matrices only. For sparse ones, SciPy supports dot pro...
I have to compute massive similarity computations between vectors in a sparse matrix. What is currently the best tool, scipy-sparse or pandas, for this task?
0
1
839
0
19,185,124
0
0
0
0
1
false
3
2013-10-04T15:22:00.000
4
1
0
Is there a standard way to work with numerical probability density functions in Python?
19,184,975
0.664037
python,random,numpy,scipy,distribution
How about numpy.convolve? It takes two arrays, rather than two functions, which seems ideal for your use. I'll also mention the ECDF function in the statsmodels package in case you really want to turn your observations into (step) functions.
I have a continuous random variable given by its density distribution function or by cumulative probability distribution function. The distribution functions are not analytical. They are given numerically (for example as a list of (x,y) values). One of the things that I would like to do with these distributions is to f...
0
1
287
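A sketch of the numpy.convolve suggestion: the density of the sum of two independent variables is the convolution of their densities, scaled by the grid spacing (here two standard normals stand in for the numerically-given distributions):

```python
import numpy as np

dx = 0.01
x = np.arange(-5, 5, dx)

# two numerically-given densities (standard normals as a stand-in)
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
q = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# density of the sum of the two independent variables;
# multiply by dx so the result integrates to ~1
r = np.convolve(p, q) * dx
```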
0
19,217,476
0
0
0
0
2
false
1
2013-10-06T20:09:00.000
0
4
0
How do I use Gimp / OpenCV Color to separate images into coloured RGB layers?
19,213,407
0
python,opencv,rgb,gimp
The blue, green and red images each have only one channel, so each is basically a grayscale image. If you want to add colors to dog_blue.jpg, for example, you create a 3-channel image and copy the contents into all the channels, or do cvCvtColor(src, dst, CV_GRAY2BGR). Now you will be able to add colors to it as it has b...
I have a JPG image, and I would like to find a way to: Decompose the image into red, green and blue intensity layers (8 bit per channel). Colorise each of these now 'grayscale' images with its appropriate color Produce 3 output images in appropriate color, of each channel. For example if I have an image: dog.jpg I wa...
0
1
2,596
0
55,448,010
0
0
0
0
2
false
1
2013-10-06T20:09:00.000
0
4
0
How do I use Gimp / OpenCV Color to separate images into coloured RGB layers?
19,213,407
0
python,opencv,rgb,gimp
In a BGR image you have three channels. When you split them using the split() function, like B, G, R = cv2.split(img), each of B, G, R becomes a single-channel (monochannel) image. So you need to add two extra channels of zeros to make it a 3-channel image that is active only for a specific color channel.
I have a JPG image, and I would like to find a way to: Decompose the image into red, green and blue intensity layers (8 bit per channel). Colorise each of these now 'grayscale' images with its appropriate color Produce 3 output images in appropriate color, of each channel. For example if I have an image: dog.jpg I wa...
0
1
2,596
0
19,221,774
0
0
0
0
1
false
1
2013-10-07T09:52:00.000
6
2
0
How many columns in pandas, python?
19,221,694
1
python,pandas
You get an out of memory error because you run out of memory, not because there is a limit on the number of columns.
Have anyone known the total columns in pandas, python? I have just created a dataframe for pandas included more than 20,000 columns but I got memory error. Thanks a lot
0
1
2,587
0
19,228,974
0
1
0
0
1
false
0
2013-10-07T15:10:00.000
0
1
0
macport and FINK
19,228,380
0
python,macos,port,fink
In terms of how your Python interpreter works, no: there is no negative effect on having Fink Python as well as MacPorts Python installed on the same machine, just as there is no effect from having multiple installations of Python by anything.
I have a mac server and I have both FINK and macport installation of python/numpy/scipy I was wondering if having both will affect the other? In terms of memory leaks/unusual results? In case you are wondering why both ? Well I like FINK but macports allows me to have python2.4 which FINK does not provide (yes I neede...
0
1
421
0
19,235,077
0
1
0
0
2
false
0
2013-10-07T21:17:00.000
0
2
0
Linked Matrix Implementation in Python?
19,234,950
0
python,list,matrix,linked-list
There's more than one way to interpret this, but one option is: Have a single "head" node at the top-left corner and a "tail" node at the bottom-right. There will then be row-head, row-tail, column-head, and column-tail nodes, but these are all accessible from the overall head and tail, so you don't need to keep track ...
I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are supposed to create a linked matrix with references to north, south, east, and west nodes. I am at a loss as to how to implement this. A persistent problem that bothers me is the head node and tail node. The user ...
0
1
1,333
0
19,237,061
0
1
0
0
2
false
0
2013-10-07T21:17:00.000
0
2
0
Linked Matrix Implementation in Python?
19,234,950
0
python,list,matrix,linked-list
It really depends on what options you want/need to efficiently support. For instance, a singly linked list with only a head pointer can be a stack (insert and remove at the head). If you add a tail pointer you can insert at either end, but only remove at the head (stack or queue). A doubly linked list can support ins...
I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are supposed to create a linked matrix with references to north, south, east, and west nodes. I am at a loss as to how to implement this. A persistent problem that bothers me is the head node and tail node. The user ...
0
1
1,333
0
43,874,003
0
0
0
0
2
false
9
2013-10-08T19:46:00.000
11
5
0
Python - how to normalize time-series data
19,256,930
1
python,time-series
The solutions given are good for a series that is neither incremental nor decremental (i.e. stationary). For financial time series (or any other series with a bias) the formula given is not right: the series should first be detrended, or a scaling should be performed based on the latest 100-200 samples. And if the time series doesn't come from a n...
I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of nor...
0
1
20,885
0
21,486,466
0
0
0
0
2
false
9
2013-10-08T19:46:00.000
0
5
0
Python - how to normalize time-series data
19,256,930
0
python,time-series
I'm not going to give the Python code, but the definition of normalizing is that for every value (datapoint) you calculate "(value - mean)/stdev". Your values will not fall between 0 and 1 (or 0 and 100), but I don't think that's what you want. You want to compare the variation, which is what you are left with if you do ...
I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of nor...
0
1
20,885
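The (value - mean)/stdev normalisation from the answer, sketched with numpy on a made-up series; the result is invariant to offset and positive scaling, which is what matters for shape comparison:

```python
import numpy as np

series = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# z-score normalisation: removes offset and scale, keeps the shape
normalized = (series - series.mean()) / series.std()
```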
0
64,969,412
0
1
0
0
1
false
61
2013-10-11T02:25:00.000
2
7
0
How to (intermittently) skip certain cells when running IPython notebook?
19,309,287
0.057081
python,ipython,ipython-notebook,ipython-magic
The simplest way to keep the Python code in a Jupyter notebook cell from running is to temporarily convert those cells to markdown.
I usually have to rerun (most parts of) a notebook when I reopen it, in order to get access to previously defined variables and go on working. However, sometimes I'd like to skip some of the cells, which have no influence on subsequent cells (e.g., they might comprise a branch of analysis that is finished) and could tak...
0
1
40,298
0
19,317,456
0
0
0
0
1
true
1
2013-10-11T11:15:00.000
1
1
0
Is there a way to see column 'grams' of the TfidfVectoririzer output?
19,316,788
1.2
python,scikit-learn,tf-idf
Use get_feature_names method as specified in comments by larsmans
I want to visualize the "words/grams" used in the columns of the TfidfVectorizer output in the Python scikit library. Is there a way? I tried to convert the csr to an array, but cannot see a header composed of grams.
0
1
31
0
19,352,360
0
0
0
0
1
false
2
2013-10-14T01:23:00.000
-1
2
0
Sample from weighted histogram
19,352,225
-0.099668
python,numpy,histogram,sample
You need to refine your problem statement. For example, if your array has only 1 row, what do you expect? If your array has 20,000 rows, what do you expect? ...
I have a 2 column array, 1st column weights and the 2 nd column values which I am plotting using python. I would like to draw 20 samples from this weighted array, proportionate to their weights. Is there a python/numpy command which does that?
0
1
1,324
0
19,356,586
0
1
0
0
1
false
1
2013-10-14T08:15:00.000
1
3
0
How can I implement a data structure that preserves order and has fast insertion/removal?
19,355,986
0.066568
python,data-structures,python-3.x,deque
Using doubly-linked lists in Python is a bit uncommon. However, your own proposed solution of a doubly-linked list and a dictionary has the correct complexity: all the operations you ask for are O(1). I don't think there is in the standard library a more direct implementation. Trees might be nice theoretically, but a...
I'm looking for a data structure that preserves the order of its elements (which may change over the life of the data structure, as the client may move elements around). It should allow fast search, insertion before/after a given element, removal of a given element, lookup of the first and last elements, and bidirectio...
0
1
708
0
19,374,300
0
1
0
0
1
false
116
2013-10-15T06:02:00.000
7
6
0
Assigning a variable NaN in python without numpy
19,374,254
1
python,constants,nan
You can do float('nan') to get NaN.
Most languages have a NaN constant you can use to assign a variable the value NaN. Can python do this without using numpy?
0
1
180,714
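The float('nan') approach in context; NaN's defining quirk is that it compares unequal to itself, so math.isnan is the reliable test:

```python
import math

nan = float('nan')

# NaN is the only value that is not equal to itself
assert nan != nan

# the robust way to test for NaN
assert math.isnan(nan)
```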
0
19,378,238
0
1
0
0
1
false
3
2013-10-15T09:52:00.000
0
4
0
Python: Merging two arbitrary data structures
19,378,143
0
python
If you know one structure is always a subset of the other, then just iterate the superset and in O(n) time you can check element by element whether it exists in the subset and if it doesn't, put it there. As far as I know there's no magical way of doing this other than checking it manually element by element. Which, as...
I am looking to efficiently merge two (fairly arbitrary) data structures: one representing a set of defaults values and one representing overrides. Example data below. (Naively iterating over the structures works, but is very slow.) Thoughts on the best approach for handling this case? _DEFAULT = { 'A': 1122, 'B': ...
0
1
1,800
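A hedged sketch of a recursive merge for the defaults/overrides case, assuming nested dicts like the _DEFAULT example (anything that is not a dict is simply replaced by the override; the function name is hypothetical):

```python
def merge(default, override):
    """Recursively merge `override` into `default`: dicts are merged
    key-by-key, anything else is replaced by the override value."""
    if isinstance(default, dict) and isinstance(override, dict):
        merged = dict(default)
        for k, v in override.items():
            merged[k] = merge(default[k], v) if k in default else v
        return merged
    return override
```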
0
19,423,078
0
0
0
0
1
false
2
2013-10-17T09:22:00.000
0
2
0
how to draw a nonlinear function using matplotlib?
19,422,749
0
python,matplotlib
my 2 cents:

x^3 + y^3 + y^2 + 2xy^2 = 0
=> y^2 = -x^3 - y^3 - 2xy^2
y^2 > 0 => -x^3 - y^3 - 2xy^2 > 0
=> x^3 + y^3 + 2xy^2 < 0
=> x(x^2 + 2y^2) + y^3 < 0
=> x(x^2 + 2y^2) < -y^3
=> (x^2 + 2y^2) < -y^3/x
0 < (x^2 + 2y^2) => 0 < -y^3/x => 0 > y^3/x
=> (x > 0 && y < 0) || (x < 0 && y > 0)

so your graph will span the 2nd and 4th quadrants
I would like to draw the curve a generic cubic function using matplotlib. I want to draw curves that are defined by functions such as: x^3 + y^3 + y^2 + 2xy^2 = 0. Is this possible to do?
0
1
2,800
0
41,857,586
0
0
0
0
1
false
2
2013-10-17T20:06:00.000
0
1
0
Python Pandas Excel Display
19,436,220
0
python,excel,pandas
It sounds to me like your python code is inserting a carriage return either before or after the value. I've replicated this behavior in Excel 2016 and can confirm that the cell appears blank, but does contain a value. Furthermore, I've verified that using the text to columns will parse the carriage return out.
I used the Python Pandas library as a wrap-around instead of using SQL. Everything worked perfectly, except when I open the output excel file, the cells appear blank, but when I click on the cell, I can see the value in the cell above. Additionally, Python and Stata recognize the value in the cell, even though the ey...
0
1
311
0
58,662,181
0
0
0
0
1
false
6
2013-10-20T06:48:00.000
0
3
0
Pandas DataFrame.reset_index for columns
19,474,693
0
python,pandas
Transpose df, reset the index, and transpose again: df.T.reset_index().T
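A small sketch of both routes on a made-up frame: the transpose round-trip from this answer, and `droplevel`, which targets what the question actually asks (dropping one level of the column MultiIndex):

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]],
                  columns=pd.MultiIndex.from_tuples([('a', 'x'), ('a', 'y')]))

# Route 1: drop one level of the column MultiIndex directly
flat = df.copy()
flat.columns = flat.columns.droplevel(0)  # keep only the inner level

# Route 2: the transpose round-trip (note: dtypes may become object)
roundtrip = df.T.reset_index().T
```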
Is there a reset_index equivalent for the column headings? In other words, if the column names are an MultiIndex, how would I drop one of the levels?
0
1
4,607
0
69,479,472
0
0
0
0
1
false
29
2013-10-21T03:04:00.000
1
4
0
Python random sample of two arrays, but matching indices
19,485,641
0.049958
python,random,numpy
Using the numpy.random.randint function, you generate a list of random indices; since it samples with replacement, you can select certain datapoints twice.
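To avoid those duplicates, sample the indices once without replacement and apply them to both arrays; a sketch, where the quadratic y is just a stand-in for the poster's linked y(x):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10_000)
y = x ** 2  # any y(x) linked to x

idx = rng.choice(len(x), size=1000, replace=False)  # unique indices
x_sub, y_sub = x[idx], y[idx]  # corresponding entries stay paired
```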
I have two numpy arrays x and y, which have length 10,000. I would like to plot a random subset of 1,000 entries of both x and y. Is there an easy way to use the lovely, compact random.sample(population, k) on both x and y to select the same corresponding indices? (The y and x vectors are linked by a function y(x) say....
0
1
15,029
0
19,501,335
0
1
0
0
1
false
10
2013-10-21T17:46:00.000
1
6
0
How do I ONLY round a number/float down in Python?
19,501,279
0.033321
python,integer,rounding
I'm not sure whether you want math.floor, math.trunc, or int, but... it's almost certainly one of those functions, and you can probably read the docs and decide more easily than you can explain enough for us to decide for you.
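The three candidates differ only on negative inputs, which is worth seeing side by side:

```python
import math

# floor rounds toward negative infinity; trunc and int drop the
# fractional part, i.e. they round toward zero.
assert math.floor(2.65) == 2
assert math.trunc(2.65) == 2
assert int(2.65) == 2

assert math.floor(-2.65) == -3   # only floor goes down here
assert math.trunc(-2.65) == -2
assert int(-2.65) == -2
```

For "always round down", math.floor is the one that still goes down on negatives.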
I will have this random number generated e.g 12.75 or 1.999999999 or 2.65 I want to always round this number down to the nearest integer whole number so 2.65 would be rounded to 2. Sorry for asking but I couldn't find the answer after numerous searches, thanks :)
0
1
42,133
0
19,510,028
0
0
0
0
1
false
4
2013-10-22T05:00:00.000
4
1
0
How to merge specific axes without ambuigity with numpy.ndarray
19,509,314
0.664037
python,numpy,reshape
Using reshape is never ambiguous. It doesn't change the memory-layout of the data. Indexing is always done using the strides determined by the shape. The right-most axis has stride 1, while the axes to the left have strides given by the product of the sizes to their right. That means for you: as long as you collect nei...
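A quick sketch of the neighbouring-axes case described above, with made-up small dimensions:

```python
import numpy as np

# A_{i,j,k,l,m,n,p} with shape (2, 3, 4, 5, 2, 3, 4)
A = np.arange(2 * 3 * 4 * 5 * 2 * 3 * 4).reshape(2, 3, 4, 5, 2, 3, 4)

# Merge adjacent axes: A_{i,jk,lm,np} -> shape (2, 12, 10, 12)
B = A.reshape(2, 3 * 4, 5 * 2, 3 * 4)

# Row-major strides guarantee B[i, j*4 + k, l*2 + m, n*4 + p] == A[i, j, k, l, m, n, p]
```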
Basically I want to reshape tensors represented by numpy.ndarray. For example, I want to do something like this (latex notation) A_{i,j,k,l,m,n,p} -> A_{i,jk,lm,np} or A_{i,j,k,l,m,n,p} -> A_{ij,k,l,m,np} where A is an ndarray. i,j,k,... denotes the original axes. so the new axis 2 becomes the "flattened" version of ax...
0
1
802
0
19,540,052
0
0
0
0
1
false
3
2013-10-22T17:50:00.000
4
2
0
Get most common colours in an image using OpenCV
19,524,905
0.379949
python,opencv
I would transform the images to the HSV color space and then compute a histogram of the H values. Then, take the bins with the largest values.
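A sketch of the histogram step on the hue channel alone, using a synthetic array so it runs without OpenCV; in a real pipeline the `hue` array would come from cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[..., 0], where OpenCV hue spans 0..179:

```python
import numpy as np

rng = np.random.default_rng(1)
hue = rng.integers(0, 180, size=(100, 100))  # stand-in H channel
hue[:40, :] = 60  # plant a dominant hue for the demo

counts, _ = np.histogram(hue.ravel(), bins=180, range=(0, 180))
top_hues = np.argsort(counts)[::-1][:3]  # the n=3 most common hue bins
```

Widening the bins (fewer than 180) gives the "x sensitivity" the question asks about.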
Hi I'm using Opencv and I want to find the n most common colors of an image using x sensitivity. How could I do this? Are there any opencv functions to do this? Cheers! *Note: this isn't homework, i'm just using opencv for fun!
0
1
4,153
0
19,532,207
0
0
0
0
1
false
0
2013-10-23T03:18:00.000
1
3
0
Most efficient way to store data on drive
19,532,159
0.066568
python,sqlite,csv
I would write all the lines to one file. For 10,000 lines it's probably not worthwhile, but you can pad all the lines to the same length, say 1000 bytes. Then it's easy to seek to the nth line: just multiply n by the line length.
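A sketch of that fixed-width layout, using an in-memory buffer in place of a real file (the 1000-byte record size is this answer's example figure):

```python
import io

LINE_LEN = 1000  # bytes per record, trailing newline included

rows = [f"row-{i}" for i in range(5)]  # stand-in CSV lines
buf = io.BytesIO()  # swap in open('data.csv', 'r+b') for a real file
for row in rows:
    buf.write(row.encode().ljust(LINE_LEN - 1) + b"\n")  # pad to fixed width

def read_nth(f, n):
    f.seek(n * LINE_LEN)  # O(1) jump: no scanning through earlier lines
    return f.read(LINE_LEN).rstrip().decode()

third = read_nth(buf, 3)
```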
baseline - I have CSV data with 10,000 entries. I save this as 1 csv file and load it all at once. alternative - I have CSV data with 10,000 entries. I save this as 10,000 CSV files and load it individually. Approximately how much more inefficient is this computationally. I'm not hugely interested in memory concerns....
0
1
1,356
0
19,650,488
0
0
0
0
1
false
3
2013-10-25T06:02:00.000
0
3
0
How to find eigenvectors and eigenvalues without numpy and scipy?
19,582,197
0
python,linear-algebra
Writing a program to solve an eigenvalue problem is about 100 times as much work as fixing the library mismatch problem.
I need to calculate eigenvalues and eigenvectors in python. numpy and scipy do not work. They both write Illegal instruction (core dumped). I found out that to resolve the problem I need to check my blas/lapack. So, I thought that may be an easier way is to write/find a small function to solve the eigenvalue problem. D...
0
1
12,872
0
19,674,497
0
1
0
0
1
false
0
2013-10-25T13:59:00.000
0
1
0
Installing scikits.bvp_solver
19,591,907
0
python,scipy,enthought,scikits,canopy
You can try to download the package tar.gz and use easy_install. Or you can unpack the package and use the standard python setup.py install. I believe both ways require a Fortran compiler.
I need to use scikits.bvp_solver in python. I currently use Canopy as my standard Python interface, where this package isn't available. Is there another available package for solving boundary value problems? I have also tried downloading using macports but the procedure sticks when it tries building gcc48 dependency.
0
1
611
0
19,611,533
0
1
0
0
1
false
2
2013-10-26T19:52:00.000
2
1
0
Find a point in 3d collinear with 2 other points
19,611,177
0.379949
python,algorithm,3d,point
Given 2 points, (x1,y1,z1) and (x2,y2,z2), you can take the difference between the two, so you end up with (x2-x1,y2-y1,z2-z1). Take the norm of this (i.e. take the distance between the original 2 points), and divide (x2-x1,y2-y1,z2-z1) by that value. You now have a vector with the same slope as the line between the f...
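The recipe above in a few lines of numpy, with two made-up endpoints; it assumes the points are at least 1 unit apart so the result lies between them:

```python
import numpy as np

p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([3.0, 0.0, 4.0])

direction = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit vector from p1 toward p2
p3 = p1 + 1.0 * direction  # collinear point exactly 1 unit from p1
```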
I need to write a script in python which given coordinates of 2 points in 3d space finds a collinear point in distane 1 unit from one the given points. This third point must lay between those two given. I think I will manage with scripting but I am not really sure how to calculate it from mathematical point of view. I ...
0
1
1,351