Columns (name · dtype · observed range):

A_Id — int64 — 5.3k to 72.5M
Q_Id — int64 — 5.14k to 60M
Title — string — length 15 to 149
Question — string — length 49 to 9.42k
Answer — string — length 18 to 5.54k
Tags — string — length 6 to 90
CreationDate — string — length 23 (ISO timestamp)
Q_Score — int64 — 0 to 1.72k
Users Score — int64 — -11 to 327
Score — float64 — -1 to 1.2
is_accepted — bool — 2 classes
AnswerCount — int64 — 1 to 31
Available Count — int64 — 1 to 13
ViewCount — int64 — 7 to 3.27M

Topic flags (int64, 0 or 1 each): GUI and Desktop Applications · Networking and APIs · Python Basics and Environment · Other · Database and SQL · System Administration and DevOps · Web Development · Data Science and Machine Learning (min = max = 1, i.e. always set in this split)
Q_Id 5,851,154 · best algorithm for finding distance for all pairs where edges' weight is 1
CreationDate 2011-05-01T20:23:00.000 · Tags: python,algorithm,dijkstra,shortest-path,graph-algorithm
Q_Score 14 · AnswerCount 10 · ViewCount 6,778 · Topics: Data Science and Machine Learning
Question: As the title said, I'm trying to implement an algorithm that finds out the distances between all pairs of nodes in given graph. But there is more: (Things that might help you) The graph is unweighted. Meaning that all the edges can be considered as having weight of 1. |E| <= 4*|V| The graph is pretty big (at most ~144...

Answer A_Id 5,852,526 (Score 0.019997, Users Score 1, is_accepted false, Available Count 3):
I would refer you to the following paper: "Sub-cubic Cost Algorithms for the All Pairs Shortest Path Problem" by Tadao Takaoka. There a sequential algorithm with sub-cubic complexity for graphs with unit weight (actually max edge weight = O(n ^ 0.624)) is available.

Answer A_Id 6,589,501 (Score 0.019997, Users Score 1, is_accepted false, Available Count 3):
I'm assuming the graph is dynamic; otherwise, there's no reason not to use Floyd-Warshall to precompute all-pairs distances on such a small graph ;) Suppose you have a grid of points (x, y) with 0 <= x <= n, 0 <= y <= n. Upon removing an edge E: (i, j) <-> (i+1, j), you partition row j into sets A = { (0, j), ..., (i, ...

Answer A_Id 5,851,436 (Score 1, Users Score 9, is_accepted false, Available Count 3):
Run a breadth-first search from each node. Total time: O(|V| |E|) = O(|V|^2), which is optimal.
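The breadth-first-search answer above can be sketched as follows; the adjacency-list format and the function name are illustrative choices, not from the thread:

```python
from collections import deque

def all_pairs_distances(adj):
    """One BFS per source node of an unweighted graph.

    adj: dict mapping node -> iterable of neighbours.
    Returns dist such that dist[u][v] is the number of edges on a
    shortest u-v path; unreachable pairs are simply absent.
    Total work is O(|V| * (|V| + |E|)).
    """
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:          # first visit = shortest distance
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d
    return dist
```

On a sparse graph with |E| <= 4*|V|, as in the question, this is O(|V|^2) overall.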

Q_Id 5,858,446 · IplImage 'None' error on CaptureFromFile() - Python 2.7.1 and OpenCV 2.2 WinXP
CreationDate 2011-05-02T14:34:00.000 · Tags: opencv,python-2.7,iplimage
Q_Score 1 · AnswerCount 3 · ViewCount 1,705 · Topics: GUI and Desktop Applications, Data Science and Machine Learning
Question: I am running Python2.7.1 and OpenCV 2.2 without problems in my WinXP laptop and wrote a tracking program that is working without a glitch. But for some strange reason I cannot get the same program to run in any other computer where I tried to install OpenCV and Python (using the same binaries or appropriate 64 bit bina...

Answer A_Id 5,859,924 (Score 0, Users Score 0, is_accepted false, Available Count 1):
This must be an issue with the default codecs. OpenCV uses brute force methods to open video files or capture from camera. It goes by trial and error through all sources/codecs/apis it can find in some reasonable order. (at least 1.1 did so). That means that on n different systems (or days) you may get n different ways...

Q_Id 5,890,935 · Acquiring basic skills working with visualizing/analyzing large data sets
CreationDate 2011-05-04T23:13:00.000 · Tags: python,dataset,visualization,data-visualization
Q_Score 11 · AnswerCount 4 · ViewCount 2,645 · Topics: Data Science and Machine Learning
Question: I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of "nice" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15...

Answer A_Id 5,891,605 (Score 0.099668, Users Score 2, is_accepted false, Available Count 2):
If you are looking for visualization rather than data mining and analysis, The Visual Display of Quantitative Information by Edward Tufte is considered one of the best books in the field.

Answer A_Id 5,908,938 (Score 0.049958, Users Score 1, is_accepted false, Available Count 2):
I like the book Data Analysis with Open Source Tools by Janert. It is a pretty broad survey of data analysis methods, focusing on how to understand the system that produced the data, rather than on sophisticated statistical methods. One caveat: while the mathematics used isn't especially advanced, I do think you will n...

Q_Id 5,927,180 · How do I remove all zero elements from a NumPy array?
CreationDate 2011-05-08T11:36:00.000 · Tags: python,arrays,numpy,filtering
Q_Score 51 · AnswerCount 7 · ViewCount 149,688 · Topics: Data Science and Machine Learning
Question: I have a rank-1 numpy.array of which I want to make a boxplot. However, I want to exclude all values equal to zero in the array. Currently, I solved this by looping the array and copy the value to a new array if not equal to zero. However, as the array consists of 86 000 000 values and I have to do this multiple times,...

Answer A_Id 67,419,895 (Score 0, Users Score 0, is_accepted false, Available Count 1):
[i for i in Array if i != 0.0] if the numbers are float or [i for i in SICER if i != 0] if the numbers are int.
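The answer above filters with a list comprehension; a minimal sketch of the vectorised NumPy alternative (boolean-mask indexing) is shown below for comparison. This assumes NumPy is installed; the variable names are illustrative:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.5, 0.0, -3.0])

# List-comprehension version from the answer (returns a Python list):
nonzero_list = [x for x in a if x != 0.0]

# Boolean-mask version: keeps only entries where the condition holds,
# without a Python-level loop, which matters for an 86-million-value array.
nonzero = a[a != 0]
```

Both produce the values [1.0, 2.5, -3.0] here; the mask version returns a NumPy array rather than a list.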

Q_Id 5,950,427 · Insert performance with Cassandra
CreationDate 2011-05-10T13:02:00.000 · Tags: python,multithreading,insert,cassandra
Q_Score 0 · AnswerCount 4 · ViewCount 1,686 · Topics: Database and SQL, Data Science and Machine Learning
Question: sorry for my English in advance. I am a beginner with Cassandra and his data model. I am trying to insert one million rows in a cassandra database in local on one node. Each row has 10 columns and I insert those only in one column family. With one thread, that operation took around 3 min. But I would like do the same ...

Answer A_Id 5,950,881 (Score 0, Users Score 0, is_accepted false, Available Count 4):
It's possible you're hitting the python GIL but more likely you're doing something wrong. For instance, putting 2M rows in a single batch would be Doing It Wrong.

Answer A_Id 5,956,519 (Score 0, Users Score 0, is_accepted false, Available Count 4):
Try running multiple clients in multiple processes, NOT threads. Then experiment with different insert sizes. 1M inserts in 3 mins is about 5500 inserts/sec, which is pretty good for a single local client. On a multi-core machine you should be able to get several times this amount provided that you use multiple client...

Answer A_Id 6,078,703 (Score 0, Users Score 0, is_accepted false, Available Count 4):
You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case.

Answer A_Id 8,491,215 (Score 0, Users Score 0, is_accepted false, Available Count 4):
The time taken doubled because you inserted twice as much data. Is it possible that you are I/O bound?

Q_Id 5,987,185 · does C has anything like python pickle for object serialisation?
CreationDate 2011-05-13T04:19:00.000 · Tags: python,c
Q_Score 2 · AnswerCount 3 · ViewCount 207 · Topics: System Administration and DevOps, Data Science and Machine Learning
Question: I'm wondering if C has anything similar to the python pickle module that can dump some structured data on disk and then load it back later. I know that I can write my structure byte by byte to a file on disk and then read it back later, but with this approach there's still quite some work to do. For example, if I have...

Answer A_Id 5,987,204 (Score 1.2, Users Score 4, is_accepted true, Available Count 2):
An emphatic NO on that one, I'm afraid. C has basic file I/O. Any structuring of data is up to you. Make up a format, dump it out, read it in. There may be libraries which can do this, but by itself no C doesn't do this.

Answer A_Id 5,987,230 (Score 0.132549, Users Score 2, is_accepted false, Available Count 2):
The C library functions fread(3) and fwrite(3) will read and write 'elements of data', but that's pretty fanciful way of saying "the C library will do some multiplication and pread(2) or pwrite(2) calls behind the scenes to fill your array". You can use them on structs, but it is probably not a good idea: holes in the...

Q_Id 6,021,550 · Purging numpy.memmap
CreationDate 2011-05-16T18:13:00.000 · Tags: python,numpy,mmap,memory-mapped-files,large-data
Q_Score 0 · AnswerCount 1 · ViewCount 487 · Topics: Data Science and Machine Learning
Question: Given a numpy.memmap object created with mode='r' (i.e. read-only), is there a way to force it to purge all loaded pages out of physical RAM, without deleting the object itself? In other words, I'd like the reference to the memmap instance to remain valid, but all physical memory that's being used to cache the on-disk ...

Answer A_Id 6,022,144 (Score 1.2, Users Score 2, is_accepted true, Available Count 1):
If you run "pmap SCRIPT-PID", the "real" memory shows as "[ anon ]" blocks, and all memory-mapped files show up with the file name in the last column. Purging the pages is possible at C level, if you manage to get ahold of the pointer to the beginning of the mapping and call madvise(ptr, length, MADV_DONTNEED) on it, b...

Q_Id 6,041,395 · Python memory serialisation
CreationDate 2011-05-18T07:42:00.000 · Tags: python,class,serialization,memory-management,pickle
Q_Score 10 · AnswerCount 2 · ViewCount 415 · Topics: Python Basics and Environment, Data Science and Machine Learning
Question: I was wondering whether someone might know the answer to the following. I'm using Python to build a character-based suffix tree. There are over 11 million nodes in the tree which fits in to approximately 3GB of memory. This was down from 7GB by using the slot class method rather than the Dict method. When I serialise...

Answer A_Id 6,046,352 (Score 0.291313, Users Score 3, is_accepted false, Available Count 1):
Do you construct your tree once and then use it without modifying it further? In that case you might want to consider using separate structures for the dynamic construction and the static usage. Dicts and objects are very good for dynamic modification, but they are not very space efficient in a read-only scenario. I do...

Q_Id 6,042,308 · numpy: inverting an upper triangular matrix
CreationDate 2011-05-18T09:09:00.000 · Tags: python,matrix,numpy,scipy,matrix-inverse
Q_Score 13 · AnswerCount 1 · ViewCount 6,594 · Topics: Data Science and Machine Learning
Question: In numpy/scipy, what's the canonical way to compute the inverse of an upper triangular matrix? The matrix is stored as 2D numpy array with zero sub-diagonal elements, and the result should also be stored as a 2D array. edit The best I've found so far is scipy.linalg.solve_triangular(A, np.identity(n)). Is that it?

Answer A_Id 6,042,505 (Score 1.2, Users Score 7, is_accepted true, Available Count 1):
There really isn't an inversion routine, per se. scipy.linalg.solve is the canonical way of solving a matrix-vector or matrix-matrix equation, and it can be given explicit information about the structure of the matrix which it will use to choose the correct routine (probably the equivalent of BLAS3 dtrsm in this case)....
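For readers without SciPy at hand, here is a plain-Python sketch of the underlying computation — inverting an upper-triangular matrix column by column with back substitution, which is essentially what solve_triangular(A, np.identity(n)) performs. The function name and list-of-lists representation are illustrative:

```python
def invert_upper_triangular(U):
    """Invert an upper-triangular matrix given as a list of lists.

    Solves U x = e_col for each standard basis vector e_col by back
    substitution; the solutions are the columns of the inverse.
    """
    n = len(U)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        for i in range(n - 1, -1, -1):
            rhs = 1.0 if i == col else 0.0
            # Subtract contributions of already-solved entries below row i.
            s = sum(U[i][j] * inv[j][col] for j in range(i + 1, n))
            inv[i][col] = (rhs - s) / U[i][i]
    return inv
```

For example, invert_upper_triangular([[2.0, 1.0], [0.0, 4.0]]) gives [[0.5, -0.125], [0.0, 0.25]]. In practice the SciPy routine is preferable: it runs in optimized BLAS code and handles conditioning far better.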

Q_Id 6,081,008 · Dump a NumPy array into a csv file
CreationDate 2011-05-21T10:01:00.000 · Tags: python,arrays,csv,numpy
Q_Score 711 · AnswerCount 12 · ViewCount 1,040,617 · Topics: Data Science and Machine Learning
Question: Is there a way to dump a NumPy array into a CSV file? I have a 2D NumPy array and need to dump it in human-readable format.

Answer A_Id 40,091,714 (Score 0.033321, Users Score 2, is_accepted false, Available Count 1):
If you want to save your numpy array (e.g. your_array = np.array([[1,2],[3,4]])) to one cell, you could convert it first with your_array.tolist(). Then save it the normal way to one cell, with delimiter=';' and the cell in the csv-file will look like this [[1, 2], [2, 4]] Then you could restore your array like this: yo...
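The answer shown here covers the one-cell case; the usual route for dumping a whole 2D array to CSV is np.savetxt. A minimal sketch, assuming NumPy is installed (a StringIO buffer stands in for a file path, which works the same way):

```python
import io
import numpy as np

a = np.array([[1, 2], [3, 4]])

# np.savetxt writes one delimited text line per row; fmt controls the
# per-value formatting (here: plain integers).
buf = io.StringIO()
np.savetxt(buf, a, fmt="%d", delimiter=",")
csv_text = buf.getvalue()          # "1,2\n3,4\n"

# Round-trip back into an array with np.loadtxt:
b = np.loadtxt(io.StringIO(csv_text), delimiter=",")
```

With a real file, replace the buffer with a path: np.savetxt("out.csv", a, delimiter=",").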

Q_Id 6,086,560 · Backward propagation - Character Recognizing - Seeking an example
CreationDate 2011-05-22T07:07:00.000 · Tags: java,php,python,neural-network
Q_Score 0 · AnswerCount 1 · ViewCount 162 · Topics: Data Science and Machine Learning
Question: i looking for a example of character recognizing (just one - for example X or A) using MLP, Backward propagation. I want a simple example, and not the entire library. Language does not matter, preferably one of those Java, Python, PHP

Answer A_Id 6,086,737 (Score 0.197375, Users Score 1, is_accepted false, Available Count 1):
Support Vector Machines tend to work much better for character recognition with error rates around 2% generally reported. I suggest that as an alternative if you're just using the character recognition as a module in a larger project.

Q_Id 6,090,288 · python numpy array slicing
CreationDate 2011-05-22T19:43:00.000 · Tags: python,arrays,indexing,numpy,slice
Q_Score 1 · AnswerCount 2 · ViewCount 1,976 · Topics: Data Science and Machine Learning
Question: I have an 2d array, A that is 6x6. I would like to take the first 2 values (index 0,0 and 0,1) and take the average of the two and insert the average into a new array that is half the column size of A (6x3) at index 0,0. Then i would get the next two indexes at A, take average and put into the new array at 0,1. The o...

Answer A_Id 6,090,407 (Score 0.099668, Users Score 1, is_accepted false, Available Count 1):
I don't think there is a better solution, unless you have some extra information about what's in those arrays. If they're just random numbers, you have to do (n^2)/2 calculations, and your algorithm is reflecting that, running in O((n^2)/2).

Q_Id 6,127,314 · Opencv... getting at the data in an IPLImage or CvMat
CreationDate 2011-05-25T15:53:00.000 · Tags: python,opencv,iplimage
Q_Score 4 · AnswerCount 3 · ViewCount 7,276 · Topics: Data Science and Machine Learning
Question: I am doing some simple programs with opencv in python. I want to write a few algorithms myself, so need to get at the 'raw' image data inside an image. I can't just do image[i,j] for example, how can I get at the numbers? Thanks

Answer A_Id 6,127,643 (Score 0, Users Score 0, is_accepted false, Available Count 1):
I do not know opencv python bindings, but in C or C++ you have to get the buffer pointer stored in IplImage. This buffer is coded according to the image format (also stored in IplImage). For RGB you have a byte for R, a byte for G, a byte for B, and so on. Look at the API of python bindings, you will find how to access ...

Q_Id 6,213,869 · Finding the calculation that generates a NaN
CreationDate 2011-06-02T11:24:00.000 · Tags: python,debugging,numpy,scipy,nan
Q_Score 9 · AnswerCount 2 · ViewCount 1,232 · Topics: Data Science and Machine Learning
Question: I have a moderately large piece (a few thousand lines) of Python/Numpy/Scipy code that is throwing up NaNs with certain inputs. I've looked for, and found, some of the usual suspects (log(0) and the like), but none of the obvious ones seem to be the culprits in this case. Is there a relatively painless way (i.e., apar...

Answer A_Id 6,213,966 (Score 0.291313, Users Score 3, is_accepted false, Available Count 1):
You can use numpy.seterr to set floating point error handling behaviour globally for all numpy routines. That should let you pinpoint where in the code they are arising from (or at least where numpy sees them for the first time).
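A minimal sketch of the numpy.seterr approach the answer describes, assuming NumPy is installed: raising on "invalid" operations turns the silent NaN-producing step into an exception at the offending line.

```python
import numpy as np

# Promote invalid floating-point operations (the ones that silently
# produce NaN, e.g. log of a negative number) to exceptions.
old_settings = np.seterr(invalid="raise")

try:
    np.log(np.array([-1.0]))   # would quietly return nan by default
    raised = False
except FloatingPointError:
    raised = True              # the traceback now points at the culprit
finally:
    np.seterr(**old_settings)  # restore the previous global behaviour
```

Because seterr is global state, restoring the old settings afterwards (as in the finally block) is worth the extra line in any shared codebase.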

Q_Id 6,227,589 · Non sorted eigenvalues for finding features in Python
CreationDate 2011-06-03T13:18:00.000 · Tags: python,pca
Q_Score 1 · AnswerCount 2 · ViewCount 1,121 · Topics: Data Science and Machine Learning
Question: I am now trying some stuff with PCA but it's very important for me to know which are the features responsible for each eigenvalue. numpy.linalg.eig gives us the diagonal matrix already sorted but I wanted this matrix with them at the original positions. Does anybody know how I can make it?

Answer A_Id 6,229,101 (Score 0.099668, Users Score 1, is_accepted false, Available Count 1):
What Sven mentioned in his comments is correct. There is no "default" ordering of the eigenvalues. Each eigenvalue is associated with an eigenvector, and it is important is that the eigenvalue-eigenvector pair is matched correctly. You'll find that all languages and packages will do so. So if R gives you eigenvalues [...
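The pairing the answer describes can be sketched as follows, assuming NumPy is installed: np.linalg.eig returns eigenvalue w[i] matched with eigenvector column v[:, i], so any reordering must be applied to both with the same index array.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# No guaranteed order, but w[i] always pairs with the column v[:, i].
w, v = np.linalg.eig(A)

# Reorder eigenvalues and eigenvectors together with a single argsort,
# so each eigenvalue keeps its matching eigenvector.
order = np.argsort(w)          # ascending eigenvalues
w_sorted = w[order]
v_sorted = v[:, order]
```

For PCA one usually wants descending order instead: order = np.argsort(w)[::-1].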

Q_Id 6,310,087 · sparse matrix from dictionaries
CreationDate 2011-06-10T17:38:00.000 · Tags: python,scipy,sparse-matrix
Q_Score 1 · AnswerCount 3 · ViewCount 4,150 · Topics: Data Science and Machine Learning
Question: I just started to learn to program in Python and I am trying to construct a sparse matrix using Scipy package. I found that there are different types of sparse matrices, but all of them require to store using three vectors like row, col, data; or if you want to each new entry separately, like S(i,j) = s_ij you need t...

Answer A_Id 6,312,608 (Score 0.132549, Users Score 2, is_accepted false, Available Count 1):
No. Any matrix in Scipy, sparse or not, must be instantiated with a size.

Q_Id 6,320,415 · UnicodeDecodeError: 'gbk' codec can't decode bytes
CreationDate 2011-06-12T05:49:00.000 · Tags: python,unicode,python-3.x,decode,pickle
Q_Score 1 · AnswerCount 1 · ViewCount 4,835 · Topics: Python Basics and Environment, Data Science and Machine Learning
Question: I'm trying to load an object (of a custom class Area) from a file using pickler. I'm using python 3.1. The file was made with pickle.dump(area, f) I get the following error, and I would like help trying to understand and fix it. File "editIO.py", line 12, in load area = pickle.load(f) File "C:\Python31\lib\pic...

Answer A_Id 6,320,431 (Score 1.2, Users Score 2, is_accepted true, Available Count 1):
It's hard to say without you showing your code, but it looks like you opened the file in text mode with a "gbk" encoding. It should probably be opened in binary mode. If that doesn't happen, make a small code example that fails, and paste it in here.
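The binary-mode fix the accepted answer suggests looks like the following sketch; the file name and sample object are illustrative, not from the thread:

```python
import os
import pickle
import tempfile

area = {"name": "Area", "points": [(0, 0), (1, 2)]}   # stand-in object

path = os.path.join(tempfile.mkdtemp(), "area.pkl")

# Pickle files are binary: write with "wb" ...
with open(path, "wb") as f:
    pickle.dump(area, f)

# ... and read with "rb".  Opening in text mode makes Python 3 decode the
# bytes with a locale codec (gbk on a Chinese-locale Windows), which is
# exactly what raises the UnicodeDecodeError in the question.
with open(path, "rb") as f:
    loaded = pickle.load(f)
```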

Q_Id 6,333,345 · python matplotlib -- regenerate graph?
CreationDate 2011-06-13T16:32:00.000 · Tags: python,matplotlib
Q_Score 1 · AnswerCount 2 · ViewCount 1,009 · Topics: Data Science and Machine Learning
Question: I have a python function that generates a list with random values. After I call this function, I call another function that plots the random values using matplotlib. I want to be able to click some key on the keyboard / mouse, and have the following happen: (1) a new list of random values will be re-generated (2) the...

Answer A_Id 26,627,020 (Score 0, Users Score 0, is_accepted false, Available Count 1):
from matplotlib.widgets import Button real_points = plt.axes().scatter(x=xpts, y=ypts, alpha=.4, s=size, c='green', label='real data') #Reset Button #rect = [left, bottom, width, height] reset_axis = plt.axes([...

Q_Id 6,365,623 · Improving FFT performance in Python
CreationDate 2011-06-15T23:28:00.000 · Tags: python,numpy,scipy,fft,fftw
Q_Score 31 · AnswerCount 6 · ViewCount 27,277 · Topics: Data Science and Machine Learning
Question: What is the fastest FFT implementation in Python? It seems numpy.fft and scipy.fftpack both are based on fftpack, and not FFTW. Is fftpack as fast as FFTW? What about using multithreaded FFT, or using distributed (MPI) FFT?

Answer A_Id 6,368,360 (Score 0.066568, Users Score 2, is_accepted false, Available Count 1):
Where I work some researchers have compiled this Fortran library which setups and calls the FFTW for a particular problem. This Fortran library (module with some subroutines) expect some input data (2D lists) from my Python program. What I did was to create a little C-extension for Python wrapping the Fortran library, ...

Q_Id 6,390,393 · Matplotlib make tick labels font size smaller
CreationDate 2011-06-17T18:49:00.000 · Tags: python,matplotlib
Q_Score 421 · AnswerCount 10 · ViewCount 870,128 · Topics: Data Science and Machine Learning
Question: In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller? Further, how can one rotate it from horizontal to vertical?

Answer A_Id 34,919,615 (Score 1, Users Score 23, is_accepted false, Available Count 2):
In current versions of Matplotlib, you can do axis.set_xticklabels(labels, fontsize='small').

Answer A_Id 37,869,225 (Score 1, Users Score 16, is_accepted false, Available Count 2):
For smaller font, I use ax1.set_xticklabels(xticklabels, fontsize=7) and it works!

Q_Id 6,397,495 · Unmap of NumPy memmap
CreationDate 2011-06-18T16:54:00.000 · Tags: python,numpy,mmap
Q_Score 15 · AnswerCount 1 · ViewCount 2,924 · Topics: Data Science and Machine Learning
Question: I can't find any documentation on how numpy handles unmapping of previously memory mapped regions: munmap for numpy.memmap() and numpy.load(mmap_mode). My guess is it's done only at garbage collection time, is that correct?

Answer A_Id 6,398,543 (Score 1.2, Users Score 15, is_accepted true, Available Count 1):
Yes, it's only closed when the object is garbage-collected; memmap.close method does nothing. You can call x._mmap.close(), but keep in mind that any further access to the x object will crash python.

Q_Id 6,429,772 · ML/Data Mining/Big Data : Popular language for programming and community support
CreationDate 2011-06-21T17:54:00.000 · Tags: java,python,hadoop,machine-learning,bigdata
Q_Score 2 · AnswerCount 5 · ViewCount 1,400 · Topics: Data Science and Machine Learning
Question: I am not sure if this question is correct, but I am asking to resolve the doubts I have. For Machine Learning/Data Mining, we need to learn about data, which means you need to learn Hadoop, which has implementation in Java for MapReduce(correct me if I am wrong). Hadoop also provides streaming api to support other...

Answer A_Id 44,827,155 (Score 0, Users Score 0, is_accepted false, Available Count 2):
Python is gaining in popularity, has a lot of libraries, and is very useful for prototyping. I find that due to the many versions of python and its dependencies on C libs to be difficult to deploy though. R is also very popular, has a lot of libraries, and was designed for data science. However, the underlying language...

Answer A_Id 6,436,938 (Score 0, Users Score 0, is_accepted false, Available Count 2):
I think in this field the most popular combination is Java/Hadoop. When vacancies require also python/perl/ruby it usually means that they are migrating from those script languages (usually main languages till that time) to java due to moving from startup code base to enterprise. Also in real world data mining application...

Q_Id 6,432,499 · How to do weighted random sample of categories in python
CreationDate 2011-06-21T21:56:00.000 · Tags: python,statistics,numpy,probability,random-sample
Q_Score 29 · AnswerCount 9 · ViewCount 12,259 · Topics: Data Science and Machine Learning
Question: Given a list of tuples where each tuple consists of a probability and an item I'd like to sample an item according to its probability. For example, give the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time. What's the canonical way of doing this in python? I've looked at the random module...

Answer A_Id 6,432,586 (Score 0.022219, Users Score 1, is_accepted false, Available Count 2):
How about creating 3 "a", 4 "b" and 3 "c" in a list and then just randomly selecting one. With enough iterations you will get the desired probability.

Answer A_Id 6,432,588 (Score 0, Users Score 0, is_accepted false, Available Count 2):
I'm not sure if this is the pythonic way of doing what you ask, but you could use random.sample(['a','a','a','b','b','b','b','c','c','c'],k) where k is the number of samples you want. For a more robust method, bisect the unit interval into sections based on the cumulative probability and draw from the uniform dist...
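The bisection-of-the-cumulative-distribution method mentioned in the second answer can be sketched as follows; the function name is an illustrative choice:

```python
import bisect
import itertools
import random

def weighted_choice(pairs, rng=random):
    """Sample an item from [(probability, item), ...].

    Builds the cumulative distribution once, draws a uniform value in
    [0, total), and bisects to find which segment it landed in.
    """
    probs, items = zip(*pairs)
    cumulative = list(itertools.accumulate(probs))
    r = rng.random() * cumulative[-1]
    return items[bisect.bisect_right(cumulative, r)]
```

With pairs = [(.3, 'a'), (.4, 'b'), (.3, 'c')], repeated calls return 'b' about 40% of the time. (In modern Python, random.choices(items, weights=probs) does the same job in the standard library.)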

Q_Id 6,486,738 · Clustering using Latent Dirichlet Allocation algo in gensim
CreationDate 2011-06-26T21:03:00.000 · Tags: python,algorithm,cluster-analysis,latent-semantic-indexing
Q_Score 10 · AnswerCount 4 · ViewCount 13,867 · Topics: Data Science and Machine Learning
Question: Is it possible to do clustering in gensim for a given set of inputs using LDA? How can I go about it?

Answer A_Id 37,608,130 (Score 0, Users Score 0, is_accepted false, Available Count 2):
The basic thing to understand here is that clustering requires your data to be present in a format and is not concerned with how did you arrive at your data. So, whether you apply clustering on the term-document matrix or on the reduced-dimension (LDA output matrix), clustering will work irrespective of that. Just do t...

Answer A_Id 6,525,268 (Score 1.2, Users Score 10, is_accepted true, Available Count 2):
LDA produces a lower dimensional representation of the documents in a corpus. To this low-d representation you could apply a clustering algorithm, e.g. k-means. Since each axis corresponds to a topic, a simpler approach would be assigning each document to the topic onto which its projection is largest.

Q_Id 6,549,488 · Distributing Real-Time Market Data Using ZeroMQ / NFS?
CreationDate 2011-07-01T14:44:00.000 · Tags: python,linux,zeromq
Q_Score 1 · AnswerCount 3 · ViewCount 1,280 · Topics: System Administration and DevOps, Data Science and Machine Learning
Question: Suppose that you have a machine that gets fed with real-time stock prices from the exchange. These prices need to be transferred to 50 other machines in your network in the fastest possible way, so that each of them can run its own processing on the data. What would be the best / fastest way to send the data over to th...

Answer A_Id 6,550,992 (Score 0.066568, Users Score 1, is_accepted false, Available Count 3):
I'm pretty sure sending with ZeroMQ will be substantially quicker than saving and loading files. There are other ways to send information over the network, such as raw sockets (lower level), AMQP implementations like RabbitMQ (more structured/complicated), HTTP requests/replies, and so on. ZeroMQ is a pretty good optio...

Answer A_Id 6,552,072 (Score 1.2, Users Score 1, is_accepted true, Available Count 3):
I would go with zeromq with pub/sub sockets. In your second option, your "clients" will have to refresh in order to get your file modifications, like polling; and if you have some write error, you will have to handle this by hand, which won't be easy either. zeromq is simple, reliable and powerful; I think that perfectly ...

Answer A_Id 6,643,883 (Score 0, Users Score 0, is_accepted false, Available Count 3):
Definitely do NOT use the file system. ZeroMQ is a great solution with bindings in Py. I have some examples here: www.coastrd.com. Contact me if you need more help.

Q_Id 6,577,657 · Efficient Datatype Python (list or numpy array?)
CreationDate 2011-07-05T03:04:00.000 · Tags: python,arrays,performance,numpy
Q_Score 1 · AnswerCount 1 · ViewCount 762 · Topics: Data Science and Machine Learning
Question: I'm still confused whether to use list or numpy array. I started with the latter, but since I have to do a lot of append I ended up with many vstacks slowing my code down. Using list would solve this problem, but I also need to delete elements which again works well with delete on numpy array. As it looks now I'll hav...

Answer A_Id 6,581,184 (Score 1.2, Users Score 1, is_accepted true, Available Count 1):
As I see it, if you were doing this in C or Fortran, you'd have to have an idea of the size of the array so that you can allocate the correct amount of memory (ignoring realloc!). So assuming you do know this, why do you need to append to the array? In any case, numpy arrays have the resize method, which you can use to...

Q_Id 6,577,807 · matplotlib.pyplot how to add labels with .clabel?
CreationDate 2011-07-05T03:49:00.000 · Tags: python,matplotlib
Q_Score 0 · AnswerCount 2 · ViewCount 1,086 · Topics: Python Basics and Environment, Data Science and Machine Learning
Question: How can I use pyplot.clabel to attach the file names to the lines being plotted? plt.clabel(data) line gives the error

Answer A_Id 6,580,497 (Score 0, Users Score 0, is_accepted false, Available Count 1):
You may use plt.annotate or plt.text. And, as an aside, 1) you probably want to use different variables for the file names and numpy arrays you're loading your data into (what is data in data=plb.loadtxt(data)), 2) you probably want to move the label positioning into the loop (in your code, what is data in the plt.cl...

Q_Id 6,585,176 · Python: 64bit Numpy?
CreationDate 2011-07-05T15:29:00.000 · Tags: python,numpy
Q_Score 1 · AnswerCount 2 · ViewCount 2,531 · Topics: Python Basics and Environment, Data Science and Machine Learning
Question: I am currently working with numpy on a 32bit system (Ubuntu 10.04 LTS). Can I expect my code to work fluently, in the same manner, on a 64bit (Ubuntu) system? Does numpy have any compatibility issues with 64bit python?

Answer A_Id 6,585,193 (Score 1.2, Users Score 4, is_accepted true, Available Count 1):
NumPy has been used on 64-bit systems of all types for years now. I doubt you will find anything new that doesn't show up elsewhere as well.

Q_Id 6,614,447 · Python random seed not working with Genetic Programming example code
CreationDate 2011-07-07T17:12:00.000 · Tags: python
Q_Score 7 · AnswerCount 3 · ViewCount 2,656 · Topics: Data Science and Machine Learning
Question: I am trying to get reproducible results with the genetic programming code in chapter 11 of "Programming Collective Intelligence" by Toby Segaran. However, simply setting seed "random.seed(55)" does not appear to work, changing the original code "from random import ...." to "import random" doesn't help, nor does changin...

Answer A_Id 9,271,325 (Score 0.321513, Users Score 5, is_accepted false, Available Count 1):
I had the same problem just now with some completely unrelated code. I believe my solution was similar to that in eryksun's answer, though I didn't have any trees. What I did have were some sets, and I was doing random.choice(list(set)) to pick values from them. Sometimes my results (the items picked) were diverging ev...

Q_Id 6,615,665 · Kmeans without knowing the number of clusters?
CreationDate 2011-07-07T18:58:00.000 · Tags: python,machine-learning,data-mining,k-means
Q_Score 41 · AnswerCount 7 · ViewCount 26,563 · Topics: Data Science and Machine Learning
Question: I am attempting to apply k-means on a set of high-dimensional data points (about 50 dimensions) and was wondering if there are any implementations that find the optimal number of clusters. I remember reading somewhere that the way an algorithm generally does this is such that the inter-cluster distance is maximized an...

Answer A_Id 19,444,825 (Score 0, Users Score 0, is_accepted false, Available Count 2):
If the cluster number is unknown, why not use Hierarchical Clustering instead? At the beginning, every isolated one is a cluster, then every two clusters will be merged if their distance is lower than a threshold, and the algorithm will end when no more mergers occur. The hierarchical clustering algorithm can carry out a suitab...

Answer A_Id 33,374,054 (Score 0, Users Score 0, is_accepted false, Available Count 2):
One way to do it is to run k-means with large k (much larger than what you think is the correct number), say 1000. Then, run the mean-shift algorithm on these 1000 points (mean shift uses the whole data but you will only "move" these 1000 points). Mean shift will find the number of clusters then. Running mean shift ...

Q_Id 6,618,132 · Any way to get a figure from Python's matplotlib into Matlab?
CreationDate 2011-07-07T22:59:00.000 · Tags: python,matlab,matplotlib
Q_Score 4 · AnswerCount 1 · ViewCount 2,637 · Topics: Data Science and Machine Learning
Question: I'm processing some data for a research project, and I'm writing all my scripts in python. I've been using matplotlib to create graphs to present to my supervisor. However, he is a die-hard MATLAB user and he wants me to send him MATLAB .fig files rather than SVG images. I've looked all over but can't find anything to ...

Answer A_Id 6,631,872 (Score 0.379949, Users Score 2, is_accepted false, Available Count 1):
Without access to (or experience with) matlab this is going to be a bit tricky. As Amro stated, .fig files store the underlying data, and not just an image, and you're going to have a hard time saving .fig files from python. There are however a couple of things which might work in your favour, these are: numpy/scipy ...
0
6,620,533
0
0
0
0
1
false
181
2011-07-08T06:00:00.000
3
13
0
Fitting empirical distribution to theoretical ones with Scipy (Python)?
6,620,471
0.046121
python,numpy,statistics,scipy,distribution
What about storing your data in a dictionary where keys would be the numbers between 0 and 47 and values the number of occurrences of their related keys in your original list? Thus your likelihood p(x) will be the sum of all the values for keys greater than x divided by 30000.
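A minimal sketch of this counting approach (the toy data below stands in for the real 30,000-value list):

```python
from collections import Counter

data = [0, 0, 1, 2, 2, 2, 5, 7, 7, 10]  # stand-in for the real values
counts = Counter(data)                   # key -> number of occurrences
n = len(data)

def tail_prob(x):
    """Empirical P(value > x): sum the counts of all keys greater than x,
    then divide by the total number of samples."""
    return sum(c for k, c in counts.items() if k > x) / n

p = tail_prob(2)   # fraction of samples strictly greater than 2
```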
INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g.[0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...] sampled from some continuous distribution. The values in the list are not necessarily in order, but order doesn't matter for this problem. PROBLEM: Based on my distributi...
0
1
181,959
0
6,626,730
0
0
0
0
1
false
0
2011-07-08T15:20:00.000
2
1
0
replace/modify tail of a gz file with gzip.open
6,626,629
0.379949
python,gzip,tail
Not possible - you can not replace parts of a compressed file without decompressing it first. At least not with the common compression algorithms.
I have a gz file that with a huge size, is it possible to replace the tail without touching the rest of the file? I tried gzip.open( filePath, mode = 'r+' ) but the write method was blocked .... saying it is a read-only object ... any idea? what I am doing now is... gzip.open as r and once I get the offset of the star...
0
1
350
0
6,696,330
0
1
0
0
1
true
5
2011-07-14T16:02:00.000
3
3
0
Two dimensional associative array in Python
6,696,279
1.2
python,associative-array
Is there any reason not to use a dict of dicts? It does what you want (though note that there's no such thing as ++ in Python), after all. There's nothing stylistically poor or non-Pythonic about using a dict of dicts.
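For example, a defaultdict of defaultdicts avoids initialising each pair by hand (a sketch, not the only way to build a dict of dicts):

```python
from collections import defaultdict

# Inner values default to 0, so d['A']['B'] += 1 works without
# creating the key pair first.
d = defaultdict(lambda: defaultdict(int))
d['A']['B'] += 1
d['A']['B'] += 1
d['C']['A'] += 1
```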
I have a set() with terms like 'A' 'B' 'C'. I want a 2-d associative array so that I can perform an operation like d['A']['B'] += 1. What is the pythonic way of doing this? I was thinking a dict of dicts. Is there a better way?
0
1
10,309
0
11,928,786
0
0
0
0
1
false
70
2011-07-14T17:10:00.000
2
6
0
Interactive matplotlib plot with two sliders
6,697,259
0.066568
python,keyboard,matplotlib,interactive
Use waitforbuttonpress(timeout=0.001); then the plot will see your mouse clicks.
I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created some script that calls the matplotlib one and generates different snapshots that later I convert into a movie, it is not bad, but a bit clumsy. I wonder if somehow I co...
0
1
98,213
0
6,831,300
0
0
0
0
1
false
9
2011-07-14T21:29:00.000
-2
3
0
Python zeromq -- Multiple Publishers To a Single Subscriber?
6,700,149
-0.132549
python,zeromq
In ZeroMQ there can only be one publisher per port. The only (ugly) workaround is to start each child PUB socket on a different port and have the parent listen on all those ports, but the pipeline pattern described in the 0MQ user guide is a much better way to do this.
I'd like to write a python script (call it parent) that does the following: (1) defines a multi-dimensional numpy array (2) forks 10 different python scripts (call them children). Each of them must be able to read the contents of the numpy array from (1) at any single point in time (as long as they are alive). (3) each...
0
1
14,920
0
7,152,292
0
0
0
0
1
false
18
2011-07-17T08:40:00.000
1
5
0
OpenCV Python and SIFT features
6,722,736
0.039979
python,opencv,sift
Are you sure OpenCV is allowed to support SIFT? SIFT is a proprietary feature type, patented within the U.S. by the University of British Columbia and by David Lowe, the inventor of the algorithm. In my own research, I have had to re-write this algorithm many times. In fact, some vision researchers try to avoid SIFT an...
I know there is a lot of questions about Python and OpenCV but I didn't find help on this special topic. I want to extract SIFT keypoints from an image in python OpenCV. I have recently installed OpenCV 2.3 and can access to SURF and MSER but not SIFT. I can't see anything related to SIFT in python modules (cv and cv2)...
0
1
12,701
0
46,067,557
0
0
0
0
2
false
146
2011-07-18T17:10:00.000
5
8
0
Fast check for NaN in NumPy
6,736,590
0.124353
python,performance,numpy,nan
Use .any(): if numpy.isnan(myarray).any(). Alternatively, numpy.isfinite may be better than isnan for this check: if not np.isfinite(prop).all().
I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic. I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-effici...
0
1
183,568
0
6,736,673
0
0
0
0
2
false
146
2011-07-18T17:10:00.000
34
8
0
Fast check for NaN in NumPy
6,736,590
1
python,performance,numpy,nan
I think np.isnan(np.min(X)) should do what you want.
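This works because min() propagates NaN while producing only a scalar, so no boolean array of shape X.shape is ever built. A quick sketch:

```python
import numpy as np

X = np.random.rand(1000)
clean = np.isnan(np.min(X))      # False: no NaN present yet

X[500] = np.nan
dirty = np.isnan(np.min(X))      # True: min propagated the NaN to one scalar
```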
I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic. I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-effici...
0
1
183,568
0
6,743,440
0
0
0
0
1
false
9
2011-07-19T07:05:00.000
5
4
0
How to sort files in a directory before reading?
6,743,407
0.244919
python,sorting,file-io
Sort your list of files in the program. Don't rely on operating system calls to give you the files in the right order; that depends on the actual file system being used.
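One common way to get the numeric ordering the question wants is a natural-sort key (the filenames below are hypothetical):

```python
import re

def natural_key(name):
    # Split into digit and non-digit runs so "file10" sorts after "file2".
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', name)]

files = ['file10.csv', 'file2.csv', 'file1.csv']
files.sort(key=natural_key)
```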
I am working with a program that writes output to a csv file based on the order that files are read in from a directory. However with a large number of files with the endings 1,2,3,4,5,6,7,8,9,10,11,12. My program actually reads the files by I guess alphabetical ordering: 1,10,11,12....,2,20,21.....99. The problem i...
0
1
34,380
0
6,760,471
0
0
0
0
2
true
2
2011-07-20T10:22:00.000
7
2
0
N-Dimensional Matrix Array in Python (with different sizes)
6,760,380
1.2
python,matrix
Just use a tuple or list. A tuple matrices = (matrix1, matrix2, matrix3) will be slightly more efficient; a list matrices = [matrix1, matrix2, matrix3] is more flexible since you can matrices.append(matrix4). Either way you can access them as matrices[0] or for matrix in matrices: pass # do stuff.
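A sketch with the shapes from the example (contents are placeholders):

```python
import numpy as np

# Matrices of different shapes stored in a plain list.
matrices = [np.zeros((3, 2)), np.zeros((2, 2)), np.zeros((2, 1))]
matrices.append(np.ones((4, 4)))   # lists stay flexible: just append more

shapes = [m.shape for m in matrices]
```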
In Matlab, there is something called struct, which allows the user to have a dynamic set of matrices. I'm basically looking for a function that allows me to index over dynamic matrices that have different sizes. Example: (with 3 matrices) Matrix 1: 3x2 Matrix 2: 2x2 Matrix 3: 2x1 Basically I want to store the 3 matr...
0
1
4,483
0
6,760,481
0
0
0
0
2
false
2
2011-07-20T10:22:00.000
0
2
0
N-Dimensional Matrix Array in Python (with different sizes)
6,760,380
0
python,matrix
Put those arrays into a list.
In Matlab, there is something called struct, which allows the user to have a dynamic set of matrices. I'm basically looking for a function that allows me to index over dynamic matrices that have different sizes. Example: (with 3 matrices) Matrix 1: 3x2 Matrix 2: 2x2 Matrix 3: 2x1 Basically I want to store the 3 matr...
0
1
4,483
0
6,761,407
0
0
1
0
1
true
1
2011-07-20T11:33:00.000
0
1
0
Ported python3 csv module to C# what license should I use for my module?
6,761,201
1.2
python,module,licensing
You need to pay a copyright lawyer to tell you that. But my guess is that you need to use the PSF license. Note that the PSF does not hold the copyright to the Python source code; the individual coders do. How that copyright translates into you making a C# port is something only a copyright expert can say. Also note that it is likely to ...
I have ported python3 csv module to C# what license could I use for my module? Should I distribute my module? Should I put PSF copyright in every header of my module? thanks
0
1
122
0
6,767,866
0
0
0
0
1
false
6
2011-07-20T20:01:00.000
9
1
0
NLTK - when to normalize the text?
6,767,770
1
python,nlp,nltk
By "normalize" do you just mean making everything lowercase? The decision about whether to lowercase everything is really dependent of what you plan to do. For some purposes, lowercasing everything is better because it lowers the sparsity of the data (uppercase words are rarer and might confuse the system unless you...
I've finished gathering my data I plan to use for my corpus, but I'm a bit confused about whether I should normalize the text. I plan to tag & chunk the corpus in the future. Some of NLTK's corpora are all lower case and others aren't. Can anyone shed some light on this subject, please?
0
1
2,668
0
6,787,446
0
0
0
0
1
false
0
2011-07-22T08:20:00.000
4
1
0
Quiz Generator using NLTK/Python
6,787,345
0.664037
python,nlp,nltk
In the general case, this is a very hard open research question. However, you might be able to get away with a simple solution a long as your "facts" follow a pretty simple grammar. You could write a fairly simple solution by creating a set of transformation rules that act on parse trees. So if you saw a structure ...
The goal of this application is produce a system that can generate quizzes automatically. The user should be able to supply any word or phrase they like (e.g. "Sachin Tendulkar"); the system will then look for suitable topics online, identify a range of interesting facts, and rephrase them as quiz questions. If I have ...
0
1
1,334
0
6,795,732
0
1
0
0
1
false
5
2011-07-22T20:20:00.000
0
5
0
Numpy: arr[...,0,:] works. But how do I store the data contained in the slice command (..., 0, :)?
6,795,657
0
python,indexing,numpy,slice
I think you just want to do myslice = slice(1, 2) to, for example, define a slice that will return the 2nd element (i.e. myarray[myslice] == myarray[1:2]).
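For a multi-axis index like (..., 0, :), the same idea extends to a tuple of Ellipsis and slice objects (the array shape below is arbitrary):

```python
import numpy as np

arr = np.arange(24).reshape(2, 3, 4)

# A reusable index object, equivalent to writing arr[..., 0, :] inline.
idx = (Ellipsis, 0, slice(None))

sub = arr[idx]   # can be passed around and applied to other arrays too
```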
In Numpy (and Python in general, I suppose), how does one store a slice-index, such as (...,0,:), in order to pass it around and apply it to various arrays? It would be nice to, say, be able to pass a slice-index to and from functions.
0
1
389
0
6,801,439
0
0
0
0
2
false
1
2011-07-23T13:03:00.000
2
3
0
numpy array access
6,800,534
0.132549
python,numpy
Use A[n - offset]. This turns the range offset to offset + len(A) into 0 to len(A).
I need to create a numpy array of N elements, but I want to access the array with an offset Noff, i.e. the first element should be at Noff and not at 0. In C this is simple to do with some simple pointer arithmetic, i.e. I malloc the array and then define a pointer and shift it appropriately. Furthermore, I do not want...
0
1
2,276
0
6,812,332
0
0
0
0
2
false
1
2011-07-23T13:03:00.000
2
3
0
numpy array access
6,800,534
0.132549
python,numpy
I would be very cautious about over-riding the [] operator through the __getitem__() method. Although it will be fine with your own code, I can easily imagine that when the array gets passed to an arbitrary library function, you could get problems. For example, if the function explicitly tried to get all values in the...
I need to create a numpy array of N elements, but I want to access the array with an offset Noff, i.e. the first element should be at Noff and not at 0. In C this is simple to do with some simple pointer arithmetic, i.e. I malloc the array and then define a pointer and shift it appropriately. Furthermore, I do not want...
0
1
2,276
0
6,819,725
0
0
0
0
1
false
10
2011-07-25T16:55:00.000
1
8
0
Plotting points in python
6,819,653
0.024995
python,plot
You could always write a plotting function that uses the turtle module from the standard library.
I want to plot some (x,y) points on the same graph and I don't need any special features at all short of support for polar coordinates which would be nice but not necessary. It's mostly for visualizing my data. Is there a simple way to do this? Matplotlib seems like way more than I need right now. Are there any more ba...
0
1
69,333
0
11,173,545
0
0
1
0
1
false
7
2011-07-27T17:42:00.000
1
4
0
Embed a function from a Matlab MEX file directly in Python
6,848,790
0.049958
python,matlab,mex
A mex function is an api that allows Matlab (i.e. a matlab program) to call a function written in c/c++. This function, in turn, can call Matlab own internal functions. As such, the mex function will be linked against Matlab libraries. Thus, to call a mex function directly from a Python program w/o Matlab libraries do...
I am using a proprietary Matlab MEX file to import some simulation results in Matlab (no source code available of course!). The interface with Matlab is actually really simple, as there is a single function, returning a Matlab struct. I would like to know if there is any way to call this function in the MEX file direct...
0
1
7,858
0
6,854,030
0
0
0
0
3
false
7
2011-07-28T03:53:00.000
1
6
0
Python: handling a large set of data. Scipy or Rpy? And how?
6,853,923
0.033321
python,r,numpy,scipy,memory-mapped-files
I don't know anything about Rpy. I do know that SciPy is used to do serious number-crunching with truly large data sets, so it should work for your problem. As zephyr noted, you may not need either one; if you just need to keep some running sums, you can probably do it in Python. If it is a CSV file or other common f...
In my python environment, the Rpy and Scipy packages are already installed. The problem I want to tackle is such: 1) A huge set of financial data are stored in a text file. Loading into Excel is not possible 2) I need to sum certain fields and get the totals. 3) I need to show the top 10 rows based on the totals. Wh...
0
1
3,050
0
6,853,981
0
0
0
0
3
false
7
2011-07-28T03:53:00.000
5
6
0
Python: handling a large set of data. Scipy or Rpy? And how?
6,853,923
0.16514
python,r,numpy,scipy,memory-mapped-files
Neither Rpy or Scipy is necessary, although numpy may make it a bit easier. This problem seems ideally suited to a line-by-line parser. Simply open the file, read a row into a string, scan the row into an array (see numpy.fromstring), update your running sums and move to the next line.
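A sketch of such a line-by-line parser with running sums (the column layout is assumed; numpy.fromstring with sep=',' would work similarly, as noted):

```python
import numpy as np

def column_sums(lines):
    """Accumulate per-column sums without ever loading the whole file."""
    totals = None
    for line in lines:
        row = np.array(line.split(','), dtype=float)  # scan one row into an array
        totals = row if totals is None else totals + row
    return totals

# In practice `lines` would be an open file object; toy input here.
sums = column_sums(['1,2,3', '4,5,6'])
```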
In my python environment, the Rpy and Scipy packages are already installed. The problem I want to tackle is such: 1) A huge set of financial data are stored in a text file. Loading into Excel is not possible 2) I need to sum certain fields and get the totals. 3) I need to show the top 10 rows based on the totals. Wh...
0
1
3,050
0
7,559,475
0
0
0
0
3
true
7
2011-07-28T03:53:00.000
2
6
0
Python: handling a large set of data. Scipy or Rpy? And how?
6,853,923
1.2
python,r,numpy,scipy,memory-mapped-files
As @gsk3 noted, bigmemory is a great package for this, along with the packages biganalytics and bigtabulate (there are more, but these are worth checking out). There's also ff, though that isn't as easy to use. Common to both R and Python is support for HDF5 (see the ncdf4 or NetCDF4 packages in R), which makes it ver...
In my python environment, the Rpy and Scipy packages are already installed. The problem I want to tackle is such: 1) A huge set of financial data are stored in a text file. Loading into Excel is not possible 2) I need to sum certain fields and get the totals. 3) I need to show the top 10 rows based on the totals. Wh...
0
1
3,050
0
6,863,816
0
0
0
0
1
true
1
2011-07-28T18:21:00.000
2
2
0
Efficiently Removing Duplicates from a CSV in Python
6,863,756
1.2
python,csv,performance
In order to remove duplicates you will have to have some sort of memory that tells you if you have seen a line before. Either by remembering the lines or perhaps a checksum of them (which is almost safe...) Any solution like that will probably have a "brute force" feel to it. If you could have the lines sorted before ...
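A sketch of the checksum variant, keeping only MD5 digests in memory rather than full lines (the rows are toy data):

```python
import hashlib

def dedupe(rows):
    """Yield rows whose digest hasn't been seen yet. Storing digests instead
    of full lines saves memory - "almost safe", as noted above, since a
    hash collision would silently drop a row."""
    seen = set()
    for row in rows:
        digest = hashlib.md5(','.join(row).encode('utf-8')).digest()
        if digest not in seen:
            seen.add(digest)
            yield row

rows = [['a', '1'], ['b', '2'], ['a', '1'], ['c', '3']]
unique = list(dedupe(rows))
```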
I am trying to efficiently remove duplicate rows from relatively large (several hundred MB) CSV files that are not ordered in any meaningful way. Although I have a technique to do this, it is very brute force and I am certain there is a more elegant and more efficient way.
0
1
1,090
0
15,331,547
0
0
0
0
1
false
42
2011-08-03T18:16:00.000
16
3
0
Difference between scipy.spatial.KDTree and scipy.spatial.cKDTree
6,931,209
1
python,scipy,kdtree
In a use case (5D nearest neighbor look ups in a KDTree with approximately 100K points) cKDTree is around 12x faster than KDTree.
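Usage is identical for both classes; a minimal cKDTree sketch on random 5-D points:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 5))        # 5-D points, as in the use case above

tree = cKDTree(points)
dist, idx = tree.query(points[42])    # nearest neighbour of an existing point
```

Querying a point that is in the tree returns the point itself at distance zero.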
What is the difference between these two algorithms?
0
1
11,234
0
6,938,587
0
0
0
0
2
false
1
2011-08-04T08:32:00.000
1
2
0
How to rotate a numpy array?
6,938,377
0.099668
python,arrays,numpy,scipy,rotation
Take a look at the command numpy.shape. I used it once to transpose an array, but I don't know if it fits your needs. Cheers!
I have some numpy/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [0 outside of the ellipsoid]. The thing is, I would like to rotate my shape by a certain degree. Do you think it's possible? Or is there an efficient way to write the ellipsoid equation directly with the rotation?
0
1
2,379
0
6,941,191
0
0
0
0
2
false
1
2011-08-04T08:32:00.000
2
2
0
How to rotate a numpy array?
6,938,377
0.197375
python,arrays,numpy,scipy,rotation
Just a short answer. If you need more information or you don't know how to do it, then I will edit this post and add a small example. The right way to rotate your matrix of data points is to do a matrix multiplication. Your rotation matrix would probably be an n*n matrix and you have to multiply it with every point. ...
I have some numpy/scipy issue. I have a 3D array that represents an ellipsoid in a binary way [0 outside of the ellipsoid]. The thing is, I would like to rotate my shape by a certain degree. Do you think it's possible? Or is there an efficient way to write the ellipsoid equation directly with the rotation?
0
1
2,379
0
6,948,576
0
0
0
0
1
false
4
2011-08-04T21:03:00.000
6
4
0
Best language for Molecular Dynamics Simulator, to be run in production. (Python+Numpy?)
6,948,483
1
python,scala,numpy,simulation,scientific-computing
I believe that most highly performant MD codes are written in native languages like Fortran, C or C++. Modern GPU programming techniques are also finding favour more recently. A language like Python would allow for much more rapid development than native code. The flip side of that is that the performance is typically ...
I need to build a heavy duty molecular dynamics simulator. I am wondering if python+numpy is a good choice. This will be used in production, so I wanted to start with a good language. I am wondering if I should rather start with a functional language like eg.scala. Do we have enough library support for scientific compu...
0
1
2,217
0
6,987,109
0
0
0
0
1
false
185
2011-08-08T18:46:00.000
5
9
0
Bin size in Matplotlib (Histogram)
6,986,986
0.110656
python,matplotlib,histogram
I guess the easy way would be to calculate the minimum and maximum of the data you have, then calculate L = max - min. Then you divide L by the desired bin width (I'm assuming this is what you mean by bin size) and use the ceiling of this value as the number of bins.
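Sketching that calculation (matplotlib's hist also accepts the explicit edges computed here, so either the count or the edges can be passed as bins):

```python
import numpy as np

data = np.array([1.2, 3.7, 4.1, 7.9, 9.5])   # hypothetical data
width = 2.0                                  # desired bin width

lo, hi = data.min(), data.max()
nbins = int(np.ceil((hi - lo) / width))      # ceiling of L / width bins
edges = lo + width * np.arange(nbins + 1)    # e.g. plt.hist(data, bins=edges)
```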
I'm using matplotlib to make a histogram. Is there any way to manually set the size of the bins as opposed to the number of bins?
0
1
355,218
0
7,000,381
0
0
0
0
1
true
37
2011-08-09T16:34:00.000
51
2
0
how to use 'extent' in matplotlib.pyplot.imshow
6,999,621
1.2
python,plot,matplotlib
Specify, in the coordinates of your current axis, the corners of the rectangle that you want the image to be pasted over Extent defines the left and right limits, and the bottom and top limits. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max]. Assuming you have longitude a...
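A minimal sketch with made-up longitude/latitude corners (the Agg backend keeps it headless):

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(20, 40)      # stand-in for the background map image

# Hypothetical lon/lat corners: [left, right, bottom, top].
extent = [-10.0, 30.0, 35.0, 55.0]

fig, ax = plt.subplots()
im = ax.imshow(img, extent=extent, origin='lower', aspect='auto')
# Data plotted afterwards in lon/lat coordinates lands on top of the map.
```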
I managed to plot my data and would like to add a background image (map) to it. Data is plotted by the long/lat values and I have the long/lat values for the image's three corners (top left, top right and bottom left) too. I am trying to figure out how to use 'extent' option with imshow. However, the examples I found d...
0
1
80,144
0
7,011,948
0
0
0
0
1
false
8
2011-08-10T13:20:00.000
0
4
0
imshow for 3D? (Python / Matplotlib)
7,011,428
0
python,numpy,matplotlib
What you want is a kind of 3D image (a block). Maybe you could plot it by slices (using imshow() or whatever the tool you want). Maybe you could tell us what kind of plot you want?
does there exist an equivalent to matplotlib's imshow()-function for 3D-drawing of datas stored in a 3D numpy array?
0
1
11,267
0
7,046,562
0
0
0
0
1
false
2
2011-08-10T13:30:00.000
0
2
0
Recode missing data Numpy
7,011,591
0
python,arrays,numpy,missing-data
You can use a masked array when you do the calculation, and when you pass the array to ATpy you can call the filled(9999) method of the masked array to convert it to a normal array with the invalid values replaced by 9999.
I am reading in census data using the matplotlib csv2rec function - works fine, gives me a nice ndarray. But there are several columns where all the values are '"none"' with dtype |04. This is causing problems when I load into ATpy: "TypeError: object of NoneType has no len()". Something like '9999' or other missing wou...
0
1
2,140
0
7,030,943
0
0
0
0
1
false
117
2011-08-11T17:05:00.000
3
4
0
Differences between numpy.random and random.random in Python
7,029,993
0.148885
python,random,random-seed
The source of the seed and the distribution profile used are going to affect the outputs - if you are looking for cryptographic randomness, seeding from os.urandom() will get nearly real random bytes from device chatter (i.e. ethernet or disk) (i.e. /dev/random on BSD). This will avoid you giving a seed and so generating det...
I have a big script in Python. I inspired myself in other people's code so I ended up using the numpy.random module for some things (for example for creating an array of random numbers taken from a binomial distribution) and in other places I use the module random.random. Can someone please tell me the major difference...
0
1
53,810
0
7,067,801
0
1
0
0
1
false
0
2011-08-15T16:30:00.000
7
2
0
linked list in python
7,067,726
1
python,linked-list
This sounds like a perfect use for a dictionary.
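One possible shape for that, with a hypothetical fits predicate deciding membership and a key function naming new groups:

```python
def group(data, fits, key):
    """Each datum joins the first group it fits; otherwise it starts a new one."""
    groups = {}                      # group key -> list of members
    for x in data:
        for k in groups:
            if fits(x, k):
                groups[k].append(x)
                break
        else:
            groups[key(x)] = [x]     # no existing group fit: create a new one
    return groups

# Toy example: group numbers by their integer part.
g = group([1.1, 1.9, 3.2, 1.5], fits=lambda x, k: int(x) == k, key=int)
```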
There is a huge amount of data in various groups. I want to check whether new data fits in any group, and if it does I want to put that data into that group. If a datum doesn't fit any of the groups, I want to create a new group. So, I want to use a linked list for this purpose, or is there any other way of doing s...
0
1
468
0
7,078,262
0
0
0
0
1
false
0
2011-08-15T02:07:00.000
0
3
0
Generating dynamic graphs
7,078,010
0
graphics,python
If performance is such an issue and you don't need fancy graphs, you may be able to get by with not creating images at all. Render explicitly sized and colored divs for a simple bar chart in html. Apply box-shadow and/or a gradient background for eye candy. I did this in some report web pages, displaying a small 5-bar ...
I'm building a web application in Django and I'm looking to generate dynamic graphs based on the data. Previously I was using the Google Image Charts, but I ran into significant limitations with the api, including the URL length constraint. I've switched to using matplotlib to create my charts. I'm wondering if this w...
1
1
228
0
7,129,002
0
0
0
0
1
false
4
2011-08-20T00:39:00.000
2
4
1
How to properly install Python on OSX for use with OpenCV?
7,128,761
0.099668
python,macos,opencv,homebrew
You need to install the module using your python2.7 installation. Pointing your PYTHONPATH at stuff installed under 2.6 to run under 2.7 is a Bad Idea. Depending on how you want to install it, do something like python2.7 setup.py or easy_install-2.7 opencv to install. fwiw, on OS X the modules are usually installed un...
I spent the past couple of days trying to get opencv to work with my Python 2.7 install. I kept getting an error saying that opencv module was not found whenever I try "import cv". I then decided to try installing opencv using Macports, but that didn't work. Next, I tried Homebrew, but that didn't work either. Eventual...
0
1
7,935
0
7,162,436
0
0
0
0
1
false
0
2011-08-23T14:09:00.000
0
1
0
temporal interpolation of artery angiogram images sequences
7,162,351
0
python,interpolation
If you imagine each of the images as a still photo, the frame number would be a sequence number that shows what order the images should be displayed in to produce a movie from the stills. If the images are stored in an array, it would be the array index of the individual frame in question.
I have many sets of medical image sequences of the artery on the heart. Each set of sequenced medical images shows the position of the artery as the heart pumps. Each set is taken from different views and has different amount of images taken. I want to do a temporal interpolation based on time (i was told that the time...
0
1
163
0
8,822,734
0
1
0
0
2
false
50
2011-09-03T00:27:00.000
1
5
0
Store and reload matplotlib.pyplot object
7,290,370
0.039979
python,matplotlib
I produced figures for a number of papers using matplotlib. Rather than thinking of saving the figure (as in MATLAB), I would write a script that plotted the data then formatted and saved the figure. In cases where I wanted to keep a local copy of the data (especially if I wanted to be able to play with it again) I fou...
I work in a pseudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc. Is th...
0
1
43,605
0
7,843,630
0
1
0
0
2
false
50
2011-09-03T00:27:00.000
0
5
0
Store and reload matplotlib.pyplot object
7,290,370
0
python,matplotlib
Did you try the pickle module? It serialises an object, dumps it to a file, and can reload it from the file later.
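A sketch of round-tripping a figure through pickle (figure pickling needs a reasonably recent matplotlib; the Agg backend keeps this headless):

```python
import pickle
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])
ax.set_title('demo')

blob = pickle.dumps(fig)     # serialise the whole figure object
fig2 = pickle.loads(blob)    # reload it later to edit titles, legends, etc.
```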
I work in a pseudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc. Is th...
0
1
43,605
0
7,301,095
0
0
1
0
1
false
18
2011-09-04T17:25:00.000
3
4
0
Preserve code readability while optimising
7,300,903
0.148885
python,performance,algorithm,optimization,code-readability
Yours is a very good question that arises in almost every piece of code, however simple or complex, that's written by any programmer who wants to call himself a pro. I try to remember and keep in mind that a reader newly come to my code has pretty much the same crude view of the problem and the same straightforward (m...
I am writing a scientific program in Python and C with some complex physical simulation algorithms. After implementing algorithm, I found that there are a lot of possible optimizations to improve performance. Common ones are precalculating values, getting calculations out of cycle, replacing simple matrix algorithms wi...
0
1
434
0
7,337,353
0
0
0
0
1
true
10
2011-09-07T14:10:00.000
8
6
0
Can I load a multi-frame TIFF through OpenCV?
7,335,308
1.2
python,image,opencv
Unfortunately OpenCV does not support TIFF directories and is able to read only the first frame from multi-frame TIFF files.
Anyone know if OpenCV is capable of loading a multi-frame TIFF stack? I'm using OpenCV 2.2.0 with python 2.6.
0
1
12,017
0
7,351,024
0
1
0
0
1
false
2
2011-09-08T15:46:00.000
0
2
0
Need to do a math operation on every line in several CSV files in Python
7,350,851
0
python,csv,datestamp
The basic outline of the program is going to be like this: Use the os module to get the filenames out of the directory/directories of interest Read in each file one at a time For each line in the file, split it into columns with columns = line.split(",") Use datetime.date to convert strings like "2011-05-03" to dateti...
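The outline above might look roughly like this (the column positions and the date format are assumptions; csv.reader replaces the bare line.split(",")):

```python
import os
import csv
import datetime

def file_total(lines):
    """Sum column 2 of CSV lines, parsing column 1 as a YYYY-MM-DD date."""
    total = 0.0
    for columns in csv.reader(lines):
        datetime.datetime.strptime(columns[0], '%Y-%m-%d')  # validate the date
        total += float(columns[1])
    return total

def process(directory):
    """Apply file_total to every .csv file found in a directory."""
    return {name: file_total(open(os.path.join(directory, name), newline=''))
            for name in os.listdir(directory) if name.endswith('.csv')}

demo = file_total(['2011-05-03,1.5', '2011-05-04,2.5'])
```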
I have about 100 CSV files I have to operate on once a month and I was trying to wrap my head around this but I'm running into a wall. I'm starting to understand some things about Python, but combining several things is still giving me issues, so I can't figure this out. Here's my problem: I have many CSV files, and h...
0
1
2,947
0
7,362,256
0
0
0
0
1
true
6
2011-09-09T12:12:00.000
4
6
0
How to find indices of non zero elements in large sparse matrix?
7,361,447
1.2
python,algorithm,r,sparse-matrix,indices
Since you have two dense matrices then the double for loop is the only option you have. You don't need a sparse matrix class at all since you only want to know the list of indices (i,j) for which a[i,j] != b[i,j]. In languages like R and Python the double for loop will perform poorly. I'd probably write this in native ...
I have two square matrices (a, b) of size on the order of 100000 x 100000. I have to take the difference of these two matrices (c = a - b). The resultant matrix 'c' is a sparse matrix. I want to find the indices of all non-zero elements. I have to do this operation many times (>100). The simplest way is to use two for loops. But that's computat...
0
1
6,600
0
7,364,602
0
0
0
0
1
true
1
2011-09-09T15:28:00.000
1
1
0
Graphing the number of elements down based on timestamps start/end
7,363,997
1.2
python,graph
I'd start by parsing your indata to a map indexed by dates with counts as values. Just increase the count for each row with the same date you encounter. After that, use some plotting module, for instance matplotlib to plot the keys of the map versus the values. That should cover it! Do you need any more detailed ideas...
I am trying to graph alarm counts in Python to give some sort of display to give an idea of the peak amount of network elements down between two timespans. The way that our alarms report handles it is in CSV like this: Name,Alarm Start,Alarm Clear NE1,15:42 08/09/11,15:56 08/09/11 NE2,15:42 08/09/11,15:57 08/09/11 NE3,...
0
1
52
0
7,381,424
0
0
0
0
4
false
17
2011-09-11T21:08:00.000
2
11
0
Minimising reading from and writing to disk in Python for a memory-heavy operation
7,381,258
0.036348
python,memory,io
Two ideas: Use numpy arrays to represent vectors. They are much more memory-efficient, at the cost that they will force elements of the vector to be of the same type (all ints or all doubles...). Do multiple passes, each with a different set of vectors. That is, choose first 1M vectors and do only the calculations inv...
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read th...
0
1
3,167
0
7,381,462
0
0
0
0
4
false
17
2011-09-11T21:08:00.000
1
11
0
Minimising reading from and writing to disk in Python for a memory-heavy operation
7,381,258
0.01818
python,memory,io
Use a database. That problem seems large enough that language choice (Python, Perl, Java, etc) won't make a difference. If each dimension of the vector is a column in the table, adding some indexes is probably a good idea. In any case this is a lot of data and won't process terribly quickly.
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read th...
0
1
3,167
0
7,433,853
0
0
0
0
4
false
17
2011-09-11T21:08:00.000
0
11
0
Minimising reading from and writing to disk in Python for a memory-heavy operation
7,381,258
0
python,memory,io
Split the corpus evenly in size between parallel jobs (one per core) - process in parallel, ignoring any incomplete line (or if you cannot tell if it is incomplete, ignore the first and last line that each job processes). That's the map part. Use one job to merge the 20+ sets of vectors from each of the earlier jobs...
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read th...
0
1
3,167
0
7,381,527
0
0
0
0
4
false
17
2011-09-11T21:08:00.000
0
11
0
Minimising reading from and writing to disk in Python for a memory-heavy operation
7,381,258
0
python,memory,io
From another comment I infer that your corpus fits into memory, and you have some cores to throw at the problem, so I would try this: Find a method to keep your corpus in memory. This might be a sort of RAM disk with a file system, or a database. No idea which one is best for you. Have a smallish shell script moni...
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read th...
0
1
3,167
0
20,999,120
0
0
0
0
1
false
5
2011-09-12T17:10:00.000
4
2
0
Does Scikit-learn release the python GIL?
7,391,427
0.379949
python,multithreading,parallel-processing,machine-learning,scikit-learn
Some sklearn Cython classes do release the GIL internally on performance-critical sections, for instance the decision trees (used in random forests, for example) as of 0.15 (to be released early 2014), and the libsvm wrappers do as well. This is not the general rule though. If you identify performance-critical Cython code in sk...
I would like to train multiple one class SVMs in different threads. Does anybody know if scikit's SVM releases the GIL? I did not find any answers online. Thanks
0
1
1,679
0
7,453,107
0
0
0
0
1
false
1
2011-09-17T06:22:00.000
0
4
0
I need a neat data structure suggestion to store a very large dataset (to train Naive Bayes in Python)
7,452,917
0
python,data-structures,machine-learning,spam-prevention
If you assume you don't care about multiple occurrences of each word in an email (that is, your features are booleans), then all you really need to know is: for each feature, what is the count of positive associations and negative associations? You can do this online very easily in one pass, keeping track of just thos...
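The single-pass counting described above might look like this. The rows reuse the question's sparse `label feature:count` format, but the data itself is made up:

```python
from collections import defaultdict

# Toy rows in the question's format: label, then feature:count pairs.
rows = ["1 9:3 94:1", "0 9:1 405:1", "1 94:2 405:1"]

pos = defaultdict(int)   # feature -> number of spam emails containing it
neg = defaultdict(int)   # feature -> number of non-spam emails containing it
n_pos = n_neg = 0

for row in rows:                                  # one online pass
    parts = row.split()
    label = parts[0]
    # Boolean features: keep only which features appear, not their counts.
    feats = {p.split(":")[0] for p in parts[1:]}
    if label == "1":
        n_pos += 1
        for f in feats:
            pos[f] += 1
    else:
        n_neg += 1
        for f in feats:
            neg[f] += 1

print(pos["94"], neg["405"])  # 2 1
```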
I am going to implement Naive Bayes classifier with Python and classify e-mails as Spam or Not spam. I have a very sparse and long dataset with many entries. Each entry is like the following: 1 9:3 94:1 109:1 163:1 405:1 406:1 415:2 416:1 435:3 436:3 437:4 ... Where 1 is label (spam, not spam), and each pair correspond...
0
1
381
0
7,513,167
0
0
0
0
1
false
16
2011-09-22T10:08:00.000
-5
5
0
Weighted logistic regression in Python
7,513,067
-1
python,regression
Do you know NumPy? If not, take a look at SciPy and matplotlib as well.
I'm looking for a good implementation for logistic regression (not regularized) in Python. I'm looking for a package that can also get weights for each vector. Can anyone suggest a good implementation / package? Thanks!
0
1
23,971
0
7,539,484
0
0
0
0
2
false
5
2011-09-24T13:02:00.000
2
5
0
Divide set into subsets with equal number of elements
7,539,186
0.07983
python,algorithm,r
I would tackle this as follows: Divide into 3 equal subsets. Figure out the mean and variance of each subset. From them construct an "unevenness" measure. Compare each pair of elements, if swapping would reduce the "unevenness", swap them. Continue until there are either no more pairs to compare, or the total uneven...
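A toy version of that swap heuristic, with one feature instead of four and subsets of 4 instead of 80; the "unevenness" measure here (spread of the per-subset means plus spread of the per-subset variances) is just one possible choice:

```python
from statistics import mean, pvariance
from itertools import product

# Made-up feature values for 12 items, pre-split into 3 subsets of 4.
items = [1.0, 9.0, 2.0, 8.0, 3.0, 7.0, 4.0, 6.0, 5.0, 5.0, 1.0, 9.0]
subsets = [items[0:4], items[4:8], items[8:12]]

def unevenness(subsets):
    # How far apart the subsets are in mean and variance.
    means = [mean(s) for s in subsets]
    varis = [pvariance(s) for s in subsets]
    return pvariance(means) + pvariance(varis)

initial = unevenness(subsets)
improved = True
while improved:
    improved = False
    for a, b in [(0, 1), (0, 2), (1, 2)]:           # each pair of subsets
        for i, j in product(range(4), range(4)):     # each pair of elements
            before = unevenness(subsets)
            subsets[a][i], subsets[b][j] = subsets[b][j], subsets[a][i]
            if unevenness(subsets) < before:
                improved = True                       # keep the swap
            else:                                     # undo the swap
                subsets[a][i], subsets[b][j] = subsets[b][j], subsets[a][i]

final = unevenness(subsets)
print(final <= initial)  # True: accepted swaps only ever reduce unevenness
```

Since a swap is kept only when it strictly reduces the measure, the loop must terminate, though at a local rather than a global optimum.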
For the purpose of conducting a psychological experiment I have to divide a set of pictures (240) described by 4 features (real numbers) into 3 subsets with equal number of elements in each subset (240/3 = 80) in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and...
0
1
9,742
0
7,548,438
0
0
0
0
2
false
5
2011-09-24T13:02:00.000
1
5
0
Divide set into subsets with equal number of elements
7,539,186
0.039979
python,algorithm,r
In case you are still interested in the exhaustive search question. You have 240 choose 80 possibilities to choose the first set and then another 160 choose 80 for the second set, at which point the third set is fixed. In total, this gives you: 120554865392512357302183080835497490140793598233424724482217950647 * 920451...
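The counting argument above is easy to reproduce with `math.comb` (Python 3.8+), which confirms the search space is astronomically large:

```python
from math import comb

# Choose 80 of 240 for the first set, then 80 of the remaining 160;
# the third set is then fixed.
total = comb(240, 80) * comb(160, 80)

print(total > 10**100)  # True: exhaustive search is hopeless
```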
For the purpose of conducting a psychological experiment I have to divide a set of pictures (240) described by 4 features (real numbers) into 3 subsets with equal number of elements in each subset (240/3 = 80) in such a way that all subsets are approximately balanced with respect to these features (in terms of mean and...
0
1
9,742
0
7,673,206
0
1
0
0
1
true
3
2011-10-04T23:56:00.000
1
1
0
Pyplot/Matplotlib: How to access figures opened by another interpreter?
7,655,323
1.2
python,matplotlib
There is no simple way to reuse plot windows if you must use Eclipse to run it. When I am working interactively with matplotlib, I use either Spyder or IPython: edit the class, reload it, and run the code again. If you just want to get rid of all the open plot windows, hit the stacked stop icons to kill all your running py...
I am using matplotlib.pyplot (with Eclipse on Windows). Every time I run my code it opens several pyplot figure windows. The problem is that if I don't close those windows manually they accumulate. I would like to use pyplot to find those windows (opened by another process of python.exe) and re-use them. In other word...
0
1
302
0
7,725,290
0
0
0
0
1
false
28
2011-10-10T20:05:00.000
4
4
0
Maximum Likelihood Estimate pseudocode
7,718,034
0.197375
python,statistics,machine-learning,pseudocode
You need a numerical optimisation procedure. Not sure if anything is implemented in Python, but if it is then it'll be in numpy or scipy and friends. Look for things like 'the Nelder-Mead algorithm', or 'BFGS'. If all else fails, use Rpy and call the R function 'optim()'. These functions work by searching the function ...
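As an aside, for the specific Gaussian case in the question a numerical optimiser is not strictly needed: the MLE has a closed form (the sample mean and the biased sample variance). A pure-Python sketch with made-up data, checking that the closed-form estimates do minimise the negative log-likelihood:

```python
from math import log, pi
from statistics import fmean

# Hypothetical toy sample; the question uses numpy.random.randn(100).
data = [0.5, -1.2, 0.3, 1.1, -0.7, 0.0, 0.9, -0.4]
n = len(data)

mu_hat = fmean(data)                                  # MLE of the mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / n    # biased MLE of the variance

def neg_log_likelihood(mu, var):
    # Gaussian negative log-likelihood of the whole sample.
    return 0.5 * n * log(2 * pi * var) + sum((x - mu) ** 2 for x in data) / (2 * var)

base = neg_log_likelihood(mu_hat, var_hat)
# Nudging either parameter away from the MLE can only make things worse.
print(base <= neg_log_likelihood(mu_hat + 0.1, var_hat))  # True
```

An optimiser such as Nelder-Mead becomes necessary when, unlike here, the likelihood has no closed-form maximiser.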
I need to code a Maximum Likelihood Estimator to estimate the mean and variance of some toy data. I have a vector with 100 samples, created with numpy.random.randn(100). The data should have zero mean and unit variance Gaussian distribution. I checked Wikipedia and some extra sources, but I am a little bit confused si...
0
1
47,657
0
7,734,072
0
0
0
0
1
false
7
2011-10-12T00:15:00.000
2
4
0
Uniformly distributed data in d dimensions
7,733,969
0.099668
python,numpy,machine-learning,scipy
You can import the random module and call random.random to get a random sample from [0, 1). You can double that and subtract 1 to get a sample from [-1, 1). Draw d values this way and the tuple will be a uniform draw from the cube [-1, 1)^d.
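That recipe in full, as a small helper (the function name is made up):

```python
import random

def uniform_cube(d, rng=random):
    # One uniform draw from [-1, 1)^d: scale each [0, 1) sample with 2x - 1.
    return tuple(2 * rng.random() - 1 for _ in range(d))

point = uniform_cube(10)
print(len(point))                          # 10
print(all(-1 <= x < 1 for x in point))     # True
```

With numpy the same draw is a one-liner, e.g. scaling `np.random.rand(d)` the same way.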
How can I generate uniformly distributed [-1,1]^d data in Python? E.g. d is a dimension like 10. I know how to generate uniformly distributed data like np.random.randn(N), but the dimension thing confuses me a lot.
0
1
13,886