Dataset schema (22 columns; min/max or string-length ranges as reported by the viewer):

Column                              Dtype     Values / Range
----------------------------------  --------  --------------------
GUI and Desktop Applications        int64     0 to 1
A_Id                                int64     5.3k to 72.5M
Networking and APIs                 int64     0 to 1
Python Basics and Environment       int64     0 to 1
Other                               int64     0 to 1
Database and SQL                    int64     0 to 1
Available Count                     int64     1 to 13
is_accepted                         bool      2 classes
Q_Score                             int64     0 to 1.72k
CreationDate                        string    lengths 23 to 23
Users Score                         int64     -11 to 327
AnswerCount                         int64     1 to 31
System Administration and DevOps    int64     0 to 1
Title                               string    lengths 15 to 149
Q_Id                                int64     5.14k to 60M
Score                               float64   -1 to 1.2
Tags                                string    lengths 6 to 90
Answer                              string    lengths 18 to 5.54k
Question                            string    lengths 49 to 9.42k
Web Development                     int64     0 to 1
Data Science and Machine Learning   int64     1 to 1
ViewCount                           int64     7 to 3.27M
Title: Multiply each pixel in an image by a factor
Q_Id: 10,885,984 | CreationDate: 2012-06-04T18:07:00.000 | Tags: python,image-processing,python-imaging-library,rgb,pixel
Topics: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 6 | Available Count: 2 | ViewCount: 12,396
Question: I have an image that is created by using a bayer filter and the colors are slightly off. I need to multiply RG and B of each pixel by a certain factor ( a different factor for R, G and B each) to get the correct color. I am using the python imaging library and of course writing in python. is there any way to do this...
Answer (A_Id: 64,900,468 | Score: 0 | Users Score: 0 | is_accepted: false): If the type is numpy.ndarray just img = np.uint8(img*factor)
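The one-liner in the answer above is the whole trick for numpy-backed images; here is a runnable sketch (the gain values and the 2x2 dummy image are made up for illustration, and rounding plus clipping are added so the result stays in uint8 range):

```python
import numpy as np

# Hypothetical per-channel gains for R, G and B (illustrative values only).
factors = np.array([1.2, 1.0, 0.8])

def scale_channels(img, factors):
    """Multiply each RGB channel of a uint8 image by its own factor,
    rounding and clipping to 0..255 before casting back to uint8."""
    scaled = img.astype(np.float64) * factors   # broadcasts over the last axis
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)   # tiny dummy image
balanced = scale_channels(img, factors)
```

With PIL in the mix, `np.asarray(pil_image)` gets you the ndarray and `Image.fromarray(balanced)` converts back.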
Title: QueryFrame very slow on Windows
Q_Id: 10,887,836 | CreationDate: 2012-06-04T20:23:00.000 | Tags: python,windows,performance,opencv
Topics: System Administration and DevOps, Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 565
Question: I have build a simple webcam recorder on linux which works quite well. I get ~25fps video and good audio. I am porting the recorder on windows (win7) and while it works, it is unusable. The QueryFrame function takes something more than 350ms, i.e 2.5fps. The code is in python but the problem really seems to be the lib...
Answer (A_Id: 12,700,150 | Score: 0.099668 | Users Score: 1 | is_accepted: false): I had same issue and I found out that this is caused by prolonged exposure. It may be the case that Windows drivers increased exposure to increase brightness of picture. Try to point your camera to light source or manually set decreased exposure

Title: May near seeds in random number generation give similar random numbers?
Q_Id: 10,900,852 | CreationDate: 2012-06-05T16:09:00.000 | Tags: python,random,seed
Topics: Data Science and Machine Learning | Q_Score: 12 | AnswerCount: 6 | Available Count: 3 | ViewCount: 2,871
Question: I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well? I think it doesn't change anything, but I'm using python Edit: I have done some tests and the numbers don't look similar. ...
Answer (A_Id: 10,901,418 | Score: 0 | Users Score: 0 | is_accepted: false): First: define similarity. Next: code a similarity test. Then: check for similarity. With only a vague description of similarity it is hard to check for it.
Answer (A_Id: 10,904,377 | Score: 0 | Users Score: 0 | is_accepted: false): What kind of simulation are you doing? For simulation purposes your argument is valid (depending on the type of simulation) but if you implement it in an environment other than simulation, then it could be easily hacked if it requires that there are security concerns of the environment based on the generated random nu...
Answer (A_Id: 10,905,149 | Score: 0 | Users Score: 0 | is_accepted: false): To quote the documentation from the random module: General notes on the underlying Mersenne Twister core generator: The period is 2**19937-1. It is one of the most extensively tested generators in existence. I'd be more worried about my code being broken than my RNG not being random enough. In general, your gut fe...

Title: Recommendation system - using different metrics
Q_Id: 10,920,199 | CreationDate: 2012-06-06T18:43:00.000 | Tags: python,metrics,recommendation-engine,personalization,cosine-similarity
Topics: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 614
Question: I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item. My question: what are some good methods to use these different metrics for the recommendation syst...
Answer (A_Id: 10,955,138 | Score: 0 | Users Score: 0 | is_accepted: false): Recommender systems in the land of research generally work on a scale of 1 - 5. It's quite nice to get such an explicit signal from a user. However I'd imagine the reality is that most users of your system would never actually give a rating, in which case you have nothing to work with. Therefore I'd track page views b...
Answer (A_Id: 10,956,591 | Score: 0 | Users Score: 0 | is_accepted: false): For recommendation system, there are two problems: how to quantify the user's interest in a certain item based on the numbers you collected how to use the quantified interest data to recommend new items to the user I guess you are more interested in the first problem. To solve the first problem, you need either linea...

Title: trouble with installing epdfree
Q_Id: 10,921,655 | CreationDate: 2012-06-06T20:22:00.000 | Tags: python,enthought
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | Available Count: 2 | ViewCount: 242
Question: I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.) After the install on one machine, I run python and try to import scipy. Everything goes fine. On the other machine, I follow all the same steps as far as ...
Answer (A_Id: 10,921,702 | Score: 0 | Users Score: 0 | is_accepted: false): The problem is that you don't have the library scipy installed, which is a totally different library of epdfree. you can install it from apt-get in linux I guess, or going to their website www.scipy.org
Answer (A_Id: 10,922,030 | Score: 1.2 | Users Score: 1 | is_accepted: true): Well, turns out there was one difference. File permissions were being set differently on the two machines. I installed epdfree as su on both machines. On the second machine, everything was locked out when I tried to run it without going under "su". Now my next task is to find out why the permissions were set differ...

Title: "Converting" Numpy arrays to Matlab and vice versa
Q_Id: 10,997,254 | CreationDate: 2012-06-12T13:02:00.000 | Tags: python,matlab,numpy
Topics: Data Science and Machine Learning | Q_Score: 44 | AnswerCount: 9 | Available Count: 1 | ViewCount: 75,113
Question: I am looking for a way to pass NumPy arrays to Matlab. I've managed to do this by storing the array into an image using scipy.misc.imsave and then loading it using imread, but this of course causes the matrix to contain values between 0 and 256 instead of the 'real' values. Taking the product of this matrix divided b...
Answer (A_Id: 69,231,502 | Score: 0 | Users Score: 0 | is_accepted: false): In latest R2021a, you can pass a python numpy ndarray to double() and it will convert to a native matlab matrix, even when calling in console the numpy array it will suggest at the bottom "Use double function to convert to a MATLAB array"

Title: What should I worry about if I compress float64 array to float32 in numpy?
Q_Id: 11,007,169 | CreationDate: 2012-06-13T01:42:00.000 | Tags: python,numpy,floating-point,compression
Topics: Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 3 | Available Count: 3 | ViewCount: 1,086
Question: This is a particular kind of lossy compression that's quite easy to implement in numpy. I could in principle directly compare original (float64) to reconstructed (float64(float32(original)) and know things like the maximum error. Other than looking at the maximum error for my actual data, does anybody have a good idea ...
Answer (A_Id: 11,027,196 | Score: 1.2 | Users Score: 6 | is_accepted: true): It is unlikely that a simple transformation will reduce error significantly, since your distribution is centered around zero. Scaling can have effect in only two ways: One, it moves values away from the denormal interval of single-precision values, (-2-126, 2-126). (E.g., if you multiply by, say, 2123 values that were ...
Answer (A_Id: 11,007,250 | Score: 0.132549 | Users Score: 2 | is_accepted: false): The exponent for float32 is quite a lot smaller (or bigger in the case of negative exponents), but assuming all you numbers are less than that you only need to worry about the loss of precision. float32 is only good to about 7 or 8 significant decimal digits
Answer (A_Id: 11,019,850 | Score: 1 | Users Score: 7 | is_accepted: false): The following assumes you are using standard IEEE-754 floating-point operations, which are common (with some exceptions), in the usual round-to-nearest mode. If a double value is within the normal range of float values, then the only change that occurs when the double is rounded to a float is that the significand (frac...
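The error bound the accepted answer describes can be checked empirically; a sketch with synthetic normally distributed data (any data well inside float32's normal range behaves the same way):

```python
import numpy as np

# Round-trip float64 -> float32 -> float64 and measure the relative error.
rng = np.random.default_rng(0)
original = rng.standard_normal(10_000)             # float64 sample data
reconstructed = original.astype(np.float32).astype(np.float64)

rel_err = np.abs(reconstructed - original) / np.abs(original)
max_rel_err = rel_err.max()
# float32 keeps a 24-bit significand, so for values in its normal range
# round-to-nearest guarantees a relative error of at most 2**-24 (~6e-8),
# matching the "7 or 8 significant decimal digits" rule of thumb above.
```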
Title: In rtree, how can I specify the threshold for float equality testing?
Q_Id: 11,025,297 | CreationDate: 2012-06-14T00:37:00.000 | Tags: python,indexing,spatial-index,spatial-query,r-tree
Topics: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 236
Question: In rtree, how can I specify the threshold for float equality testing? When checking nearest neighbours, rtree can return more than the specified number of results, as if two points are equidistant, it returns both them. To check this equidistance, it must have some threshold since the distances are floats. I want to be...
Answer (A_Id: 11,028,685 | Score: 0 | Users Score: 0 | is_accepted: false): Actually it does not need to have a threshold to handle ties. They just happen. Assuming you have the data points (1.,0.) and (0.,1.) and query point (0.,0.), any implementation I've seen of Euclidean distance will return the exact same distance for both, without any threshold.

Title: What are the differences between Pandas and NumPy+SciPy in Python?
Q_Id: 11,077,023 | CreationDate: 2012-06-18T04:45:00.000 | Tags: python,numpy,scipy,pandas
Topics: Data Science and Machine Learning | Q_Score: 202 | AnswerCount: 3 | Available Count: 2 | ViewCount: 135,225
Question: They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.
Answer (A_Id: 11,077,060 | Score: 1 | Users Score: 61 | is_accepted: false): Numpy is required by pandas (and by virtually all numerical tools for Python). Scipy is not strictly required for pandas but is listed as an "optional dependency". I wouldn't say that pandas is an alternative to Numpy and/or Scipy. Rather, it's an extra tool that provides a more streamlined way of working with numer...
Answer (A_Id: 11,077,215 | Score: 1.2 | Users Score: 327 | is_accepted: true): pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has becom...

Title: Support Vector Regression with High Dimensional Output using python's libsvm
Q_Id: 11,083,921 | CreationDate: 2012-06-18T13:27:00.000 | Tags: python,machine-learning,svm,regression,libsvm
Topics: Data Science and Machine Learning | Q_Score: 3 | AnswerCount: 2 | Available Count: 1 | ViewCount: 3,847
Question: I would like to ask if anyone has an idea or example of how to do support vector regression in python with high dimensional output( more than one) using a python binding of libsvm? I checked the examples and they are all assuming the output to be one dimensional.
Answer (A_Id: 11,172,695 | Score: 0.379949 | Users Score: 4 | is_accepted: false): libsvm might not be the best tool for this task. The problem you describe is called multivariate regression, and usually for regression problems, SVM's are not necessarily the best choice. You could try something like group lasso (http://www.di.ens.fr/~fbach/grouplasso/index.htm - matlab) or sparse group lasso (http://...

Title: How to read other files in hadoop jobs?
Q_Id: 11,095,220 | CreationDate: 2012-06-19T06:00:00.000 | Tags: python,hadoop
Topics: System Administration and DevOps, Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 91
Question: I need to read in a dictionary file to filter content specified in the hdfs_input, and I have uploaded it to the cluster using the put command, but I don't know how to access it in my program. I tried to access it using path on the cluster like normal files, but it gives the error information: IOError: [Errno 2] No suc...
Answer (A_Id: 11,098,023 | Score: 1.2 | Users Score: 0 | is_accepted: true): Problem solved by adding the file needed with the -file option or file= option in conf file.

Title: how to save a boolean numpy array to textfile in python?
Q_Id: 11,100,066 | CreationDate: 2012-06-19T11:33:00.000 | Tags: python,numpy,save,boolean
Topics: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 3,549
Question: The following saves floating values of a matrix into textfiles numpy.savetxt('bool',mat,fmt='%f',delimiter=',') How to save a boolean matrix ? what is the fmt for saving boolean matrix ?
Answer (A_Id: 11,101,558 | Score: 0.53705 | Users Score: 3 | is_accepted: false): Thats correct, bools are integers, so you can always go between the two. import numpy as np arr = np.array([True, True, False, False]) np.savetxt("test.txt", arr, fmt="%5i") That gives a file with 1 1 0 0
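The answer's snippet, reflowed into runnable form (an in-memory buffer stands in for test.txt here, and %d is used instead of %5i purely for compact output):

```python
import io
import numpy as np

arr = np.array([True, True, False, False])

buf = io.StringIO()              # stands in for a real file path
np.savetxt(buf, arr, fmt="%d")   # bools are written as integers 0/1

# Reading back and casting restores the original boolean matrix.
restored = np.loadtxt(io.StringIO(buf.getvalue())).astype(bool)
```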
Title: Adding two pandas dataframes
Q_Id: 11,106,823 | CreationDate: 2012-06-19T18:11:00.000 | Tags: python,pandas
Topics: Data Science and Machine Learning | Q_Score: 55 | AnswerCount: 4 | Available Count: 1 | ViewCount: 102,462
Question: I have two dataframes, both indexed by timeseries. I need to add the elements together to form a new dataframe, but only if the index and column are the same. If the item does not exist in one of the dataframes then it should be treated as a zero. I've tried using .add but this sums regardless of index and column. A...
Answer (A_Id: 42,273,797 | Score: 0.099668 | Users Score: 2 | is_accepted: false): Both the above answers - fillna(0) and a direct addition would give you Nan values if either of them have different structures. Its Better to use fill_value df.add(other_df, fill_value=0)
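A runnable sketch of the fill_value behaviour described in the answer, with made-up frames; note the one subtlety: a label missing from both frames still comes out NaN.

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]}, index=["r1", "r2"])
b = pd.DataFrame({"x": [10], "y": [5]}, index=["r2"])

plain = a + b                    # non-matching labels become NaN
filled = a.add(b, fill_value=0)  # an entry missing from one side counts as 0

# plain.loc["r1", "x"] is NaN, while filled.loc["r1", "x"] is 1.0;
# "y" at "r1" is absent from both frames, so it stays NaN either way.
```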
Title: Can csv data be made lazy?
Q_Id: 11,109,524 | CreationDate: 2012-06-19T21:13:00.000 | Tags: python,csv,clojure,lazy-evaluation
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 5 | AnswerCount: 4 | Available Count: 3 | ViewCount: 2,302
Question: Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists? I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.
Answer (A_Id: 11,109,571 | Score: 0.049958 | Users Score: 1 | is_accepted: false): The csv module does load the data lazily, one row at a time.
Answer (A_Id: 11,109,589 | Score: 0.099668 | Users Score: 2 | is_accepted: false): Python's reader or DictReader are generators. A row is produced only when the object's next() method is called.
Answer (A_Id: 11,109,568 | Score: 1 | Users Score: 6 | is_accepted: false): The csv module's reader is lazy by default. It will read a line in at a time from the file, parse it to a list, and return that list.
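All three answers say the same thing: csv.reader is already an iterator. A minimal demonstration, with an in-memory file standing in for a large one:

```python
import csv
import io

data = io.StringIO("a,1\nb,2\nc,3\n")   # stands in for an open csv file
reader = csv.reader(data)

first = next(reader)                     # only one row parsed so far
rest = list(reader)                      # remaining rows, pulled on demand
```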
Title: FileStorage for OpenCV Python API
Q_Id: 11,141,336 | CreationDate: 2012-06-21T15:18:00.000 | Tags: c++,python,image-processing,opencv
Topics: Other, Data Science and Machine Learning | Q_Score: 21 | AnswerCount: 6 | Available Count: 1 | ViewCount: 16,840
Question: I'm currently using FileStorage class for storing matrices XML/YAML using OpenCV C++ API. However, I have to write a Python Script that reads those XML/YAML files. I'm looking for existing OpenCV Python API that can read the XML/YAML files generated by OpenCV C++ API
Answer (A_Id: 60,879,363 | Score: 0 | Users Score: 0 | is_accepted: false): pip install opencv-contrib-python for video support to install specific version use pip install opencv-contrib-python

Title: 0/1 Knapsack with few variables: which algorithm?
Q_Id: 11,154,101 | CreationDate: 2012-06-22T10:02:00.000 | Tags: python,algorithm,knapsack-problem
Topics: Data Science and Machine Learning | Q_Score: 4 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,291
Question: I have to implement the solution to a 0/1 Knapsack problem with constraints. My problem will have in most cases few variables (~ 10-20, at most 50). I recall from university that there are a number of algorithms that in many cases perform better than brute force (I'm thinking, for example, to a branch and bound algorit...
Answer (A_Id: 11,155,580 | Score: 0.066568 | Users Score: 1 | is_accepted: false): You can either use pseudopolynomial algorithm, which uses dynamic programming, if the sum of weights is small enough. You just calculate, whether you can get weight X with first Y items for each X and Y. This runs in time O(NS), where N is number of items and S is sum of weights. Another possibility is to use meet-in-t...
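A sketch of the pseudopolynomial dynamic program the answer mentions (0/1 knapsack; the weights, values and capacity below are illustrative):

```python
def knapsack(items, capacity):
    """items: (weight, value) pairs; returns the best achievable value
    with total weight <= capacity, each item used at most once."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate weights downwards so an item is never reused (0/1 property).
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

result = knapsack([(3, 4), (4, 5), (2, 3)], capacity=6)   # picks (4,5)+(2,3)
```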
Title: NumPy style arrays for C++?
Q_Id: 11,169,418 | CreationDate: 2012-06-23T12:15:00.000 | Tags: c++,arrays,python-3.x,numpy,dynamic-arrays
Topics: Data Science and Machine Learning | Q_Score: 104 | AnswerCount: 13 | Available Count: 1 | ViewCount: 81,916
Question: Are there any C++ (or C) libs that have NumPy-like arrays with support for slicing, vectorized operations, adding and subtracting contents element-by-element, etc.?
Answer (A_Id: 57,324,664 | Score: 0.061461 | Users Score: 4 | is_accepted: false): Use LibTorch (PyTorch frontend for C++) and be happy.

Title: The best way to export openerp data to csv file using python
Q_Id: 11,187,086 | CreationDate: 2012-06-25T09:57:00.000 | Tags: python,export-to-excel,openerp,export-to-csv
Topics: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 4,385
Question: Which is the best to way to export openerp data to csv/xls file using python so that i can schedule it in openerp( i cant use the client side exporting)? using csv python package using xlwt python package or any other package? And also how can I dynamically provide the path and name to save this newly created csv fi...
Answer (A_Id: 11,187,374 | Score: 0.291313 | Users Score: 3 | is_accepted: false): Why not to use Open ERP client it self. you can go for xlwt if you really require to write a python program to generate it.

Title: ABM under python with advanced visualization
Q_Id: 11,198,288 | CreationDate: 2012-06-25T22:29:00.000 | Tags: python,netlogo,pycuda,agent-based-modeling,mayavi
Topics: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,399
Question: sorry if this all seem nooby and unclear, but I'm currently learning Netlogo to model agent-based collective behavior and would love to hear some advice on alternative software choices. My main thing is that I'd very much like to take advantage of PyCuda since, from what I understand, it enables parallel computation. H...
Answer (A_Id: 11,198,804 | Score: 1.2 | Users Score: 1 | is_accepted: true): You almost certainly do not want to use CUDA unless you are running into a significant performance problem. In general CUDA is best used for solving floating point linear algebra problems. If you are looking for a framework built around parallel computations, I'd look towards OpenCL which can take advantage of GPUs if ...

Title: Split quadrilateral into sub-regions of a maximum area
Q_Id: 11,217,855 | CreationDate: 2012-06-27T00:39:00.000 | Tags: python,math,geometry,gis
Topics: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,141
Question: It is pretty easy to split a rectangle/square into smaller regions and enforce a maximum area of each sub-region. You can just divide the region into regions with sides length sqrt(max_area) and treat the leftovers with some care. With a quadrilateral however I am stumped. Let's assume I don't know the angle of any o...
Answer (A_Id: 11,217,921 | Score: 0.099668 | Users Score: 1 | is_accepted: false): You could recursively split the quad in half on the long sides until the resulting area is small enough.

Title: Storing text mining data
Q_Id: 11,267,143 | CreationDate: 2012-06-29T18:32:00.000 | Tags: python,database,data-mining,text-mining
Topics: Data Science and Machine Learning | Q_Score: 3 | AnswerCount: 2 | Available Count: 1 | ViewCount: 612
Question: I am looking to track topic popularity on a very large number of documents. Furthermore, I would like to give recommendations to users based on topics, rather than the usual bag of words model. To extract the topics I use natural language processing techniques that are beyond the point of this post. My question is how ...
Answer (A_Id: 11,267,361 | Score: 0.099668 | Users Score: 1 | is_accepted: false): Why not have simple SQL tables Tables: documents with a primary key of id or file name or something observations with foreign key into documents and the term (indexed on both fields probably unique) The array approach you mentioned seems like a slow way to get at terms. With sql you can easily allow new terms be adde...

Title: How to compare image with python
Q_Id: 11,289,652 | CreationDate: 2012-07-02T07:50:00.000 | Tags: python,image,compare
Topics: Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,355
Question: I'm looking for an algorithm to compare two images (I work with python). I find the PIL library, numpy/scipy and opencv. I know how to transform in greyscale, binary, make an histogram, .... that's ok but I don't know what I have to do with the two images to say "yes they're similar // they're probably similar // they ...
Answer (A_Id: 11,289,709 | Score: 0.066568 | Users Score: 1 | is_accepted: false): If you want to check if they are binary equal you can count a checksum on them and compare it. If you want to check if they are similar in some other way , it will be more complicated and definitely would not fit into simple answer posted on Stack Overflow. It just depends on how you define similarity but anyway it wou...
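The "binary equal" check from the answer can be done with a hash digest; a sketch on made-up byte strings standing in for image files:

```python
import hashlib

def digest(data: bytes) -> str:
    """Checksum of the raw bytes; equal digests mean identical files
    (hash collisions are negligible for this purpose)."""
    return hashlib.sha256(data).hexdigest()

img_a = b"\x00\x01\x02"   # stands in for open(path, 'rb').read()
img_b = b"\x00\x01\x02"
img_c = b"\x00\x01\x03"

same = digest(img_a) == digest(img_b)
different = digest(img_a) != digest(img_c)
```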
Title: .EXE installer crashes when installing Python modules: IPython, Pandas and Matplotlib
Q_Id: 11,289,670 | CreationDate: 2012-07-02T07:52:00.000 | Tags: python,numpy,matplotlib,ipython,pandas
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,016
Question: I have recently installed numpy due to ease using the exe installer for Python 2.7. However, when I attempt to install IPython, Pandas or Matplotlib using the exe file, I consistently get a variant of the following error right after the installation commeces (pandas in the following case): pandas-0.8.0.win32-py2.7[1].e...
Answer (A_Id: 11,616,216 | Score: 0 | Users Score: 0 | is_accepted: false): This happened to me too. It works if you right click and 'Run As Administrator'

Title: concatenating TimeSeries of different lengths using Pandas
Q_Id: 11,298,097 | CreationDate: 2012-07-02T17:06:00.000 | Tags: python,dataframe,concat,pandas
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,369
Question: I am using pandas in python. I have several Series indexed by dates that I would like to concat into a single DataFrame, but the Series are of different lengths because of missing dates etc. I would like the dates that do match up to match up, but where there is missing data for it to be interpolated or just use the ...
Answer (A_Id: 11,312,776 | Score: 0.197375 | Users Score: 1 | is_accepted: false): If the Series are in a dict data, you need only do: frame = DataFrame(data) That puts things into a DataFrame and unions all the dates. If you want to fill values forward, you can call frame = frame.fillna(method='ffill').
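The answer's recipe, sketched with made-up dates: `DataFrame(data)` unions the Series indexes, then forward-filling closes the gaps (modern pandas spells `fillna(method='ffill')` as `ffill()`):

```python
import pandas as pd

dates_a = pd.to_datetime(["2012-07-01", "2012-07-02", "2012-07-03"])
dates_b = pd.to_datetime(["2012-07-01", "2012-07-03"])   # 07-02 is missing

data = {
    "a": pd.Series([1.0, 2.0, 3.0], index=dates_a),
    "b": pd.Series([10.0, 30.0], index=dates_b),
}
frame = pd.DataFrame(data)   # union of both date indexes; gap in "b" is NaN
filled = frame.ffill()       # carries the last observed value forward
```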
Title: Change numpy.seterr defaults?
Q_Id: 11,351,264 | CreationDate: 2012-07-05T19:31:00.000 | Tags: python,numpy
Topics: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 880
Question: I'd like to change my seterr defaults to be either all 'warn' or all 'ignore'. This can be done interactively by doing np.seterr(all='ignore'). Is there a way to make it a system default? There is no .numpyrc as far as I can tell; is there some other configuration file where these defaults can be changed? (I'm using...
Answer (A_Id: 11,359,378 | Score: 0.099668 | Users Score: 1 | is_accepted: false): There is no configuration file for this. You will have to call np.seterr() yourself.
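Since there is no config file, the usual pattern is to set the error state once in your own startup code; np.seterr returns the previous settings so they can be restored, and np.errstate offers a scoped variant:

```python
import numpy as np

old = np.seterr(all="ignore")              # process-wide; returns old settings
quiet = np.float64(1.0) / np.float64(0.0)  # inf, no warning emitted
np.seterr(**old)                           # restore the prior behaviour

with np.errstate(divide="ignore"):         # scoped alternative
    scoped = np.float64(1.0) / np.float64(0.0)
```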
Title: nltk tokenization and contractions
Q_Id: 11,351,290 | CreationDate: 2012-07-05T19:32:00.000 | Tags: python,nlp,nltk
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 19 | AnswerCount: 3 | Available Count: 1 | ViewCount: 12,956
Question: I'm tokenizing text with nltk, just sentences fed to wordpunct_tokenizer. This splits contractions (e.g. 'don't' to 'don' +" ' "+'t') but I want to keep them as one word. I'm refining my methods for a more measured and precise tokenization of text, so I need to delve deeper into the nltk tokenization module beyond sim...
Answer (A_Id: 11,355,186 | Score: 0.132549 | Users Score: 2 | is_accepted: false): Because the number of contractions are very minimal, one way to do it is to search and replace all contractions to it full equivalent (Eg: "don't" to "do not") and then feed the updated sentences into the wordpunct_tokenizer.
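A sketch of the search-and-replace approach from the answer; the contraction table is a tiny illustrative subset (a real one needs many more entries), and plain str.split stands in for wordpunct_tokenizer to keep the example dependency-free:

```python
import re

# Illustrative subset only; extend with a full contraction list in practice.
CONTRACTIONS = {
    "don't": "do not",
    "can't": "cannot",
    "it's": "it is",
}

def expand_contractions(text: str) -> str:
    """Replace each known contraction with its full equivalent."""
    pattern = re.compile(
        "|".join(re.escape(c) for c in CONTRACTIONS), re.IGNORECASE
    )
    return pattern.sub(lambda m: CONTRACTIONS[m.group(0).lower()], text)

tokens = expand_contractions("I don't think it's broken").split()
```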
Title: Append extras informations to Series in Pandas
Q_Id: 11,362,376 | CreationDate: 2012-07-06T12:39:00.000 | Tags: python,pandas
Topics: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 148
Question: Is it possible to customize Serie (in a simple way, and DataFrame by the way :p) from pandas to append extras informations on the display and in the plots? A great thing will be to have the possibility to append informations like "unit", "origin" or anything relevant for the user that will not be lost during computatio...
Answer (A_Id: 11,401,559 | Score: 0.197375 | Users Score: 1 | is_accepted: false): Right now there is not an easy way to maintain metadata on pandas objects across computations. Maintaining metadata has been an open discussion on github for some time now but we haven't had to time code it up. We'd welcome any additional feedback you have (see pandas on github) and would love to accept a pull-request ...

Title: How do I change the font size of the scale in matplotlib plots?
Q_Id: 11,379,910 | CreationDate: 2012-07-08T01:03:00.000 | Tags: python,matplotlib
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 14 | AnswerCount: 2 | Available Count: 1 | ViewCount: 44,462
Question: While plotting using Matplotlib, I have found how to change the font size of the labels. But, how can I change the size of the numbers in the scale? For clarity, suppose you plot x^2 from (x0,y0) = 0,0 to (x1,y1) = (20,20). The scale in the x-axis below maybe something like 0 1 2 ... 20. I want to change the ...
Answer (A_Id: 68,635,240 | Score: 0 | Users Score: 0 | is_accepted: false): simply put, you can use the following command to set the range of the ticks and change the size of the ticks import matplotlib.pyplot as plt set the range of ticks for x-axis and y-axis plt.set_yticks(range(0,24,2)) plt.set_xticks(range(0,24,2)) change the size of ticks for x-axis and y-axis plt.yticks(fontsize=12,) pl...

Title: NLTK multiple feature sets in one classifier?
Q_Id: 11,460,115 | CreationDate: 2012-07-12T20:28:00.000 | Tags: python,nlp,nltk
Topics: Data Science and Machine Learning | Q_Score: 2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,502
Question: In NLTK, using a naive bayes classifier, I know from examples its very simply to use a "bag of words" approach and look for unigrams or bigrams or both. Could you do the same using two completely different sets of features? For instance, could I use unigrams and length of the training set (I know this has been mentione...
Answer (A_Id: 11,462,417 | Score: 1.2 | Users Score: 5 | is_accepted: true): NLTK classifiers can work with any key-value dictionary. I use {"word": True} for text classification, but you could also use {"contains(word)": 1} to achieve the same effect. You can also combine many features together, so you could have {"word": True, "something something": 1, "something else": "a"}. What matters mos...

Title: Fast access and update integer matrix or array in Python
Q_Id: 11,517,143 | CreationDate: 2012-07-17T06:33:00.000 | Tags: python,performance
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 497
Question: I will need to create array of integer arrays like [[0,1,2],[4,4,5,7]...[4,5]]. The size of internal arrays changeable. Max number of internal arrays is 2^26. So what do you recommend for the fastest way for updating this array. When I use list=[[]] * 2^26 initialization is very fast but update is very slow. Instead I...
Answer (A_Id: 11,517,362 | Score: 1.2 | Users Score: 0 | is_accepted: true): What you ask is quite of a problem. Different data structures have different properties. In general, if you need quick access, do not use lists! They have linear access time, which means, the more you put in them, the longer it will take in average to access an element. You could perhaps use numpy? That library has ma...

Title: Process 5 million key-value data in python.Will NoSql solve?
Q_Id: 11,522,232 | CreationDate: 2012-07-17T12:15:00.000 | Tags: python,nosql
Topics: Database and SQL, Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 347
Question: I would like to get the suggestion on using No-SQL datastore for my particular requirements. Let me explain: I have to process the five csv files. Each csv contains 5 million rows and also The common id field is presented in each csv.So, I need to merge all csv by iterating 5 million rows.So, I go with python ...
Answer (A_Id: 11,522,576 | Score: 0 | Users Score: 0 | is_accepted: false): If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case. You could also try to optimize your scr...

Title: OpenCV: Converting from NumPy to IplImage in Python
Q_Id: 11,528,009 | CreationDate: 2012-07-17T17:48:00.000 | Tags: python,opencv
Topics: Data Science and Machine Learning | Q_Score: 8 | AnswerCount: 2 | Available Count: 1 | ViewCount: 10,288
Question: I have an image that I load using cv2.imread(). This returns an NumPy array. However, I need to pass this into a 3rd party API that requires the data in IplImage format. I've scoured everything I could and I've found instances of converting from IplImage to CvMat,and I've found some references to converting in C++, b...
Answer (A_Id: 51,053,926 | Score: 0 | Users Score: 0 | is_accepted: false): 2-way to apply: img = cv2.imread(img_path) img_buf = cv2.imencode('.jpg', img)[1].tostring() just read the image file: img_buf = open(img_path, 'rb').read()
Title: How to determine set of connected line from an array in python
Q_Id: 11,543,991 | CreationDate: 2012-07-18T14:41:00.000 | Tags: python,numpy,nearest-neighbor
Topics: Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 2 | Available Count: 1 | ViewCount: 526
Question: I have an array that looks something like: [0 x1 0 0 y1 0 z1 0 0 x2 0 y2 0 z2 0 0 x3 0 0 y3 z3 0 0 x4 0 0 y4 z4 0 x5 0 0 0 y5 z5 0 0 0 0 y6 0 0] I need to determine set of connected line (i.e. line that connects to the points [x1,x2,x3..], [y1,y2,y3...
Answer (A_Id: 11,564,801 | Score: 0.099668 | Users Score: 1 | is_accepted: false): I don't know of anything which provides the functionality you desire out of the box. If you have already written the logic, and it is just slow, have you considered Cython-ing your code. For simple typed looping operations you could get a significant speedup.

Title: How to efficiently convert dates in numpy record array?
Q_Id: 11,584,856 | CreationDate: 2012-07-20T18:18:00.000 | Tags: python,numpy
Topics: Python Basics and Environment, Data Science and Machine Learning | Q_Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 420
Question: I have to read a very large (1.7 million records) csv file to a numpy record array. Two of the columns are strings that need to be converted to datetime objects. Additionally, one column needs to be the calculated difference between those datetimes. At the moment I made a custom iterator class that builds a list of lis...
Answer (A_Id: 11,585,354 | Score: 0 | Users Score: 0 | is_accepted: false): I could be wrong, but it seems to me like your issue is having repeated occurrences, thus doing the same conversion more times than necessary. IF that interpretation is correct, the most efficient method would depend on how many repeats there are. If you have 100,000 repeats out of 1.7 million, then writing 1.6 milli...
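One way to realize the "convert each distinct string only once" idea from the answer is to memoise the parser; a sketch using functools.lru_cache, with made-up date strings:

```python
from datetime import datetime
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_date(s):
    """Parse once per distinct string; repeated values become dict lookups."""
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

rows = [
    "2012-07-20 18:18:00",
    "2012-07-20 18:18:00",   # repeated value: served from the cache
    "2012-07-21 09:00:00",
]
parsed = [parse_date(s) for s in rows]
```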
0
11,610,785
0
0
0
0
1
true
20
2012-07-23T06:24:00.000
9
3
0
Is there a C/C++ API for python pandas?
11,607,387
1.2
python,c,api,pandas
All the pandas classes (TimeSeries, DataFrame, DatetimeIndex etc.) have pure-Python definitions so there isn't a C API. You might be best off passing numpy ndarrays from C to your Python code and letting your Python code construct pandas objects from them. If necessary you could use PyObject_CallFunction etc. to call ...
I'm extracting mass data from a legacy backend system using C/C++ and move it to Python using distutils. After obtaining the data in Python, I put it into a pandas DataFrame object for data analysis. Now I want to go faster and would like to avoid the second step. Is there a C/C++ API for pandas to create a DataFrame ...
0
1
28,186
0
42,165,467
0
0
0
0
1
false
96
2012-07-24T00:50:00.000
1
6
0
Large, persistent DataFrame in pandas
11,622,652
0.033321
python,pandas,sas
You can use Pytable rather than pandas df. It is designed for large data sets and the file format is in hdf5. So the processing time is relatively fast.
I am exploring switching to python and pandas as a long-time SAS user. However, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data. With SAS, I can import a csv file into a SA...
0
1
76,790
0
18,619,898
0
0
0
1
1
false
1
2012-07-28T22:20:00.000
0
4
0
Junk characters (smart quotes, etc.) in output file
11,705,114
0
python,mysql,vim,encoding,smart-quotes
Are all these "junk" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft "Smart Quotes" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usua...
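A quick way to check the Windows-1252 hypothesis is to decode the suspicious bytes as cp1252 instead of latin-1; this is a hedged sketch (the byte values are the ones reported in the question, the variable names are mine):

```python
# Bytes like 0x92, 0x89 and 0x94 are Windows-1252 "smart" punctuation.
# Decoding them as cp1252 instead of latin-1 recovers the intended characters.
raw = b"\x92\x89\x94"       # right single quote, per-mille sign, right double quote
text = raw.decode("cp1252")
print(text)                 # ’‰”
```

If that prints sensible punctuation, re-encoding the recovered text as UTF-8 before writing the CSV should clean up the output.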
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc. Any thoughts? I tried doing string.encode...
0
1
1,903
0
11,736,635
0
0
0
0
1
false
2
2012-07-31T04:28:00.000
2
2
0
Detecting Halo in Images
11,733,106
0.197375
python,image-processing,machine-learning,computer-vision
If I understand you correctly, then you have completely black images with white borders? In this case I think the easiest approach is to compute a histogram of the intensity values of the pixels, i.e. how "dark/bright" the overall image is. I guess that the junk images are significantly darker than the non-junk images. Yo...
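As a rough sketch of the histogram idea (the threshold value is a made-up assumption you would tune on real data; the function name is mine):

```python
def looks_like_junk(pixels, threshold=20):
    """pixels: flat iterable of 0-255 grayscale values.

    A mostly-black vignetted frame has a very low mean intensity,
    so a simple mean threshold already separates the two classes."""
    pixels = list(pixels)
    return sum(pixels) / len(pixels) < threshold
```

In practice you would compute the mean over every pixel of the frame and calibrate the threshold on a handful of known junk and non-junk images.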
I've never done any image processing and I was wondering if someone can nudge me in the right direction. Here's my issue: I have a bunch of images of black and white images of places around a city. Due to some problems with the camera system, some images contain nothing but a black image with a white vignette around t...
0
1
1,550
0
52,399,670
0
0
0
0
1
false
9
2012-08-02T00:30:00.000
3
4
0
rpy2: Convert FloatVector or Matrix back to a Python array or list?
11,769,471
0.148885
python,rpy2
In the latest version of rpy2, you can simply do this in a direct way: import numpy as np; array = np.array(vector_R)
I'm using rpy2 and I have this issue that's bugging me: I know how to convert a Python array or list to a FloatVector that R (thanks to rpy2) can handle within Python, but I don't know if the opposite can be done, say, I have a FloatVector or Matrix that R can handle and convert it back to a Python array or list...can ...
0
1
5,269
0
11,788,967
0
1
0
0
1
false
4
2012-08-03T03:40:00.000
1
2
0
Importing Numpy into functions
11,788,950
0.099668
python,numpy,python-import
You have to import modules in every file in which you use them. Does that answer your question?
I do not know the right way to import modules. I have a main file which initializes the code, does some preliminary calculations etc. I also have 5 functions f1, f2, ... f5. The main code and all functions need Numpy. If I define all functions in the main file, the code runs fine. (Importing with : import numpy as np)...
0
1
9,291
0
11,790,050
0
0
0
0
1
true
2
2012-08-03T05:42:00.000
2
1
0
What's a good way to output matplotlib graphs on a PHP website?
11,789,917
1.2
php,python,matplotlib
You could modify your python script so it outputs an image (image/jpeg) instead of saving it to a file. Then use the img tag as normal, but pointing directly to the python script. Your php wouldn't call the python script at all; it would just include it as the src of the image.
I have a python script that can output a plot using matplotlib and command line inputs. What i'm doing right now is making the script print the location/filename of the generated plot image and then when PHP sees it, it outputs an img tag to display it. The python script deletes images that are older than 20 minutes wh...
0
1
2,549
0
11,809,642
0
0
0
0
1
false
3
2012-08-04T04:49:00.000
2
2
0
Heat map generator of a floor plan image
11,805,983
0.197375
matlab,python-3.x,heatmap,color-mapping
One way to do this would be: 1) Load in the floor plan image with Matlab or NumPy/matplotlib. 2) Use some built-in edge detection to locate the edge pixels in the floor plan. 3) Form a big list of (x,y) locations where an edge is found in the floor plan. 4) Plot your heat map 5) Scatterplot the points of the floor plan...
I want to generate a heat map image of a floor. I have the following things: A black & white .png image of the floor A three column array stored in Matlab. -- The first two columns indicate the X & Y coordinates of the floorpan image -- The third coordinate denotes the "temperature" of that particular coordinate I wa...
0
1
4,036
0
11,826,057
0
0
0
0
2
false
0
2012-08-06T08:15:00.000
0
2
0
Python For OpenCV2.4
11,824,697
0
python,opencv
Have you run 'make install' or 'sudo make install'? While not absolutely necessary, it copies the generated binaries to your system paths.
I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.
0
1
329
0
11,824,855
0
0
0
0
2
true
0
2012-08-06T08:15:00.000
2
2
0
Python For OpenCV2.4
11,824,697
1.2
python,opencv
You should either copy the cv2 library to a location in your PYTHONPATH or add your current directory to the PYTHONPATH.
I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.
0
1
329
0
13,551,794
0
1
0
0
1
true
1
2012-08-07T11:08:00.000
3
1
0
Matplotlib with TkAgg error: PyEval_RestoreThread: null tstate on save_fig() - do I need threads enabled?
11,844,628
1.2
python,c,matplotlib
Finally resolved this - so going to explain what occurred for the sake of Googlers! This only happened when using third-party libraries like numpy or matplotlib, but actually related to an error elsewhere in my code. As part of the software I wrote, I was extending the Python interpreter following the same basic patte...
I'm puzzling over an embedded Python 2.7.2 interpreter issue. I've embedded the interpreter in a Visual C++ 2010 application and it essentially just calls user-written scripts. My end-users want to use matplotlib - I've already resolved a number of issues relating to its dependence on numpy - but when they call savefi...
0
1
2,495
0
71,602,253
0
0
0
0
1
false
46
2012-08-09T10:15:00.000
0
3
0
Slice Pandas DataFrame by Row
11,881,165
0
python,pandas,slice
If you just need to get the top rows, you can use df.head(10)
I am working with survey data loaded from an h5-file as hdf = pandas.HDFStore('Survey.h5') through the pandas package. Within this DataFrame, all rows are the results of a single survey, whereas the columns are the answers for all questions within a single survey. I am aiming to reduce this dataset to a smaller DataFr...
0
1
92,185
0
11,890,470
0
0
0
0
1
false
10
2012-08-09T19:17:00.000
15
3
0
Is there an equivalent of the Python range function in MATLAB?
11,890,437
1
python,matlab
Yes, there is the : operator. The command -10:5:11 would produce the vector [-10, -5, 0, 5, 10];
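For reference, here is the Python side of the correspondence; note that MATLAB's start:step:stop includes the stop value, while Python's range() excludes it, hence the 11:

```python
# MATLAB: -10:5:11  ->  [-10 -5 0 5 10]
# Python equivalent:
print(list(range(-10, 11, 5)))  # [-10, -5, 0, 5, 10]
```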
Is there an equivalent MATLAB function for the range() function in Python? I'd really like to be able to type something like range(-10, 11, 5) and get back [-10, -5, 0, 5, 10] instead of having to write out the entire range by hand.
0
1
21,375
0
11,903,766
0
1
0
0
1
false
18
2012-08-10T13:50:00.000
-1
3
0
Find the set difference between two large arrays (matrices) in Python
11,903,083
-0.066568
python,numpy,set-difference
I'm not sure what you are going for, but this will get you a boolean array of where the two arrays are not equal, and will be numpy fast:

import numpy as np
a = np.random.randn(5, 5)
b = np.random.randn(5, 5)
a[0, 0] = 10.0
b[0, 0] = 10.0
a[1, 1] = 5.0
b[1, 1] = 5.0
c = ~(a - b == 0)
print c

[[False True True True True]
 [ Tru...
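If what you are after really is MATLAB's setdiff(A, B, 'rows'), a pure-Python sketch (the function name is mine) that hashes each row as a tuple avoids the slow pairwise loops:

```python
def setdiff_rows(A, B):
    """Rows of A that do not appear anywhere in B (order preserved).

    Uses exact equality, which is fine for integer rows; for floats
    you may want to round first."""
    b_rows = {tuple(row) for row in B}
    return [list(row) for row in A if tuple(row) not in b_rows]

print(setdiff_rows([[1, 2], [3, 4], [5, 6]], [[3, 4]]))  # [[1, 2], [5, 6]]
```

Set lookups are O(1) on average, so this is linear in the total number of rows rather than quadratic.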
I have two large 2-d arrays and I'd like to find their set difference taking their rows as elements. In Matlab, the code for this would be setdiff(A,B,'rows'). The arrays are large enough that the obvious looping methods I could think of take too long.
0
1
13,283
0
58,766,819
0
1
0
0
1
false
19
2012-08-10T23:24:00.000
0
3
0
Best Machine Learning package for Python 3x?
11,910,481
0
python,python-3.x,machine-learning,scikit-learn
Old question: scikit-learn is supported by Python 3 now.
I was bummed out to see that scikit-learn does not support Python 3...Is there a comparable package anyone can recommend for Python 3?
0
1
3,322
0
11,926,117
0
0
0
0
1
false
0
2012-08-12T20:48:00.000
1
3
0
What are some good methods for detecting movement using a camera? (opencv)
11,925,782
0.066568
python,opencv
Please go through the book Learning OpenCV: Computer Vision with the OpenCV Library It has theory as well as example codes.
I am looking for some methods for detecting movement. I've tried two of them. One method is to have background frame that has been set on program start and other frames are compared (threshold) to that background frame. Other method is to compare current frame (let's call that frame A) with frame-1 (frame before A). No...
0
1
637
0
12,489,308
0
0
0
0
1
true
2
2012-08-14T02:02:00.000
1
1
0
cv.KMeans2 clustering indices inconsistent
11,944,796
1.2
python,opencv,cluster-analysis,k-means
Since k-means is a randomized approach, you will probably encounter this problem even when analyzing the same frame multiple times. Try to use the previous frames cluster centers as initial centers for k-means. This may make the ordering stable enough for you, and it may even significantly speed up k-means (assuming th...
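If seeding k-means with the previous frame's centers is not enough, you can also relabel each frame's clusters by matching every new center to its nearest previous center; a hedged sketch (the function name is mine, and this greedy matching assumes the spots stay far apart, as described):

```python
def match_clusters(prev_centers, new_centers):
    """For each new center, return the index of the nearest previous center."""
    def d2(a, b):
        # squared Euclidean distance between two points
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(prev_centers)), key=lambda j: d2(c, prev_centers[j]))
            for c in new_centers]
```

With three well-separated spots, applying this mapping per frame keeps cluster 0/1/2 referring to the same physical spot throughout the video.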
So I have a video with 3 green spots on it. These spots have a bunch of "good features to track" around their perimeter. The spots are very far away from each other so using KMeans I am easily able to identify them as separate clusters. The problem comes in that the ordering of the clusters changes from frame to frame....
0
1
232
0
11,998,662
0
0
0
0
1
false
3
2012-08-16T00:57:00.000
1
2
0
efficient way to resize numpy or dataset?
11,979,316
0.099668
python,numpy,h5py
NumPy arrays are not designed to be resized. It's doable, but wasteful in terms of memory (because you need to create a second array larger than your first one, then fill it with your data... That's two arrays you have to keep) and of course in terms of time (creating the temporary array). You'd be better off starting ...
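A minimal sketch of the list-first pattern (the comma-separated parsing is hypothetical; the point is the single conversion at the end):

```python
def parse_rows(lines):
    # Grow a plain Python list, which has amortized O(1) appends, ...
    rows = [tuple(float(tok) for tok in line.split(",")) for line in lines]
    # ... then convert once, e.g. arr = numpy.array(rows), for one big allocation.
    return rows

print(parse_rows(["1,2", "3.5,4"]))  # [(1.0, 2.0), (3.5, 4.0)]
```

The same idea applies to h5py: create the dataset once with maxshape set, and extend it in sizable chunks rather than one row at a time.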
I want to understand the effect of resize() function on numpy array vs. an h5py dataset. In my application, I am reading a text file line by line and then after parsing the data, write into an hdf5 file. What would be a good approach to implement this. Should I add each new row into a numpy array and keep resizing (inc...
0
1
1,987
0
12,011,024
0
0
0
0
3
true
7
2012-08-17T02:48:00.000
2
3
0
Is it possible to use complex numbers as target labels in scikit learn?
11,999,147
1.2
python,numpy,scipy,scikit-learn
So far I discovered that most classifiers, like linear regressors, will automatically convert complex numbers to just the real part. kNN and RadiusNN regressors, however, work well - since they do a weighted average of the neighbor labels and so handle complex numbers gracefully. Using a multi-target classifier is anot...
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documenta...
0
1
2,406
0
12,003,586
0
0
0
0
3
false
7
2012-08-17T02:48:00.000
1
3
0
Is it possible to use complex numbers as target labels in scikit learn?
11,999,147
0.066568
python,numpy,scipy,scikit-learn
Good question. How about transforming angles into a pair of labels, viz. x and y co-ordinates. These are continuous functions of angle (cos and sin). You can combine the results from separate x and y classifiers into an angle with θ = atan2(y, x). However that result will be unstable if both classifiers ret...
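The (cos, sin) encoding and its inverse can be sketched with the standard library; atan2 handles all four quadrants, which sidesteps the sign trouble of a plain arctan(y/x):

```python
import math

def angle_to_xy(theta):
    # Encode an angle as a point on the unit circle: two continuous targets.
    return (math.cos(theta), math.sin(theta))

def xy_to_angle(x, y):
    # atan2 recovers the angle in (-pi, pi] and is correct in every quadrant.
    return math.atan2(y, x)
```

Train two regressors (or one multi-output regressor) on the x and y targets, then decode predictions with xy_to_angle.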
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documenta...
0
1
2,406
0
12,004,759
0
0
0
0
3
false
7
2012-08-17T02:48:00.000
4
3
0
Is it possible to use complex numbers as target labels in scikit learn?
11,999,147
0.26052
python,numpy,scipy,scikit-learn
Several regressors support multidimensional regression targets. Just view the complex numbers as 2d points.
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documenta...
0
1
2,406
0
12,014,071
0
1
0
0
1
true
2
2012-08-17T22:34:00.000
1
3
0
Python muliple deepcopy behaviors
12,014,042
1.2
python,deep-copy
If there are no other objects referenced in graph (just simple fields), then copy.copy(graph) should make a copy, while copy.deepcopy(manager) should copy the manager and its graphs, assuming there is a list such as manager.graphs. But in general you are right, the copy module does not have this flexibility, and for sl...
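One way to get the "copy the graph but share the manager" behaviour is to pre-seed deepcopy's memo dictionary so the manager is treated as already copied; a hedged sketch with made-up minimal classes:

```python
import copy

class Manager(object):
    def __init__(self):
        self.graphs = []

class Graph(object):
    def __init__(self, manager, data):
        self.manager = manager
        self.data = data

def copy_graph(graph):
    # Mark the manager as "already copied" (to itself) before deep-copying,
    # so the clone shares the manager but gets fresh copies of everything else.
    memo = {id(graph.manager): graph.manager}
    return copy.deepcopy(graph, memo)
```

Copying a manager together with its graphs is then just a plain copy.deepcopy(manager) with no memo seeding.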
Suppose I have two classes, say Manager and Graph, where each Graph has a reference to its manager, and each Manager has references to a collection of graphs that it owns. I want to be able to do two things 1) Copy a graph, which performs a deepcopy except that the new graph references the same manager as the old one. ...
0
1
1,009
0
12,079,563
0
0
0
0
1
false
1
2012-08-20T21:13:00.000
0
1
1
Data analysis using MapReduce in MongoDb vs a Distributed Queue using Celery & RabbitMq
12,045,278
0
python,mongodb,mapreduce,celery,distributed-computing
It's impossible to say without benchmarking for certain, but my intuition leans toward doing more calculations in Python rather than mapreduce. My main concern is that mapreduce is single-threaded: One MongoDB process can only run one Javascript function at a time. It can, however, serve thousands of queries simultaneo...
I am currently working on a project which involves performing a lot of statistical calculations on many relatively small datasets. Some of these calculations are as simple as computing a moving average, while others involve slightly more work, like Spearman's Rho or Kendell's Tau calculations. The datasets are essenti...
0
1
944
0
12,150,625
0
0
0
0
1
false
0
2012-08-27T22:31:00.000
0
1
0
Numpy/Scipy modulus function
12,150,513
0
python,numpy,scipy
According to the doc, np.mod(x1,x2)=x1-floor(x1/x2)*x2. The problem here is that you are working with very small values, a dark domain where floating point errors (truncation...) happen quite often and results are often unpredictable... I don't think you should spend a lot of time worrying about that.
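The documented identity can be reproduced in pure Python, which makes the floating-point origin of the surprise easy to poke at (the function name is mine):

```python
import math

def float_mod(x1, x2):
    # Same definition numpy documents: x1 - floor(x1/x2) * x2
    return x1 - math.floor(x1 / x2) * x2

print(float_mod(7.0, 3.0))        # 1.0 (exact: both operands are representable)
# With values like 121e-12 and 1e-12, neither operand is exactly representable
# in binary, so the floor can land one step off and the result differs from 0.
print(float_mod(121e-12, 1e-12))
```

A more robust integral-multiple test is to round the ratio and compare: abs(t / dt - round(t / dt)) < tolerance.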
The Numpy 'modulus' function is used in code to check if a certain time is an integral multiple of the time-step. But some weird behavior is seen. numpy.mod(121e-12,1e-12) returns 1e-12 numpy.mod(60e-12,1e-12) returns 'a very small value' (compared to 1e-12). If you play around with numpy.mod('122-126'e-12,1e-12) it ...
0
1
4,240
0
12,151,167
0
1
0
0
1
false
2
2012-08-27T23:27:00.000
0
2
0
How do I shuffle in Python for a column (years) with keeping the corrosponding column values?
12,150,908
0
python-2.7
Are you familiar with NumPy? Once you have your data in a NumPy ndarray, it's a breeze to shuffle the rows while keeping the column order, without the hurdle of creating many temporaries. You could use a function like np.genfromtxt to read your data file and create an ndarray with different named fields. You could the...
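Without NumPy, the same shuffle can be sketched in pure Python; this assumes the five-column row layout from the question, and the names are mine:

```python
import random

def shuffle_years(rows, seed=None):
    """rows: list of [year, tmax, tmin, precip, solar] lists.

    Returns new rows with the year column permuted while the other
    four columns stay in their original positions."""
    rng = random.Random(seed)
    years = [row[0] for row in rows]
    rng.shuffle(years)
    return [[year] + row[1:] for year, row in zip(years, rows)]
```

Calling this ten times (with different seeds, or none) gives the ten shuffled versions the question asks for.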
I have a text file with five columns. First column has year(2011 to 2040), 2nd has Tmax, 3rd has Tmin, 4th has Precip, and fifth has Solar for 30 years. I would like to write a python code which shuffles the first column (year) 10 times with remaining columns having the corresponding original values in them, that is: I...
0
1
1,120
0
12,161,433
0
0
1
0
2
true
0
2012-08-28T14:12:00.000
2
2
0
Computing the null space of a large matrix
12,161,182
1.2
c++,python,c,algorithm,matrix
The manner of avoiding thrashing CPU caches greatly depends on how the matrix is stored/loaded/transmitted, a point that you did not address. There are a few generic recommendations: divide the problem into worker threads addressing contiguous rows per thread; increment pointers (in C) to traverse rows and keep the count ...
I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated!
0
1
766
0
12,161,500
0
0
1
0
2
false
0
2012-08-28T14:12:00.000
-1
2
0
Computing the null space of a large matrix
12,161,182
-0.099668
c++,python,c,algorithm,matrix
In what kind of data structure is your matrix represented? If you use an element list to represent the matrix, i.e. a "column, row, value" tuple for each matrix element, then the solution would be to just count the number of tuples (subtracted from the matrix size).
I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated!
0
1
766
0
12,170,479
0
0
0
0
1
false
6
2012-08-28T20:49:00.000
2
1
0
How to broadcast to a multiindex
12,167,324
0.379949
python,arrays,pandas,multi-index
If you just want to do simple arithmetic operations, I think something like A.div(B, level='date') should work. Alternatively, you can do something like B.reindex(A.index, level='date') to manually match the indices.
I have two pandas arrays, A and B, that result from groupby operations. A has a 2-level multi-index consisting of both quantile and date. B just has an index for date. Between the two of them, the date indices match up (within each quantile index for A). Is there a standard Pandas function or idiom to "broadcast" B suc...
0
1
1,335
0
12,610,165
0
0
0
0
1
true
3
2012-08-29T08:16:00.000
5
2
0
MatPlotLib with Sublime Text 2 on OSX
12,173,541
1.2
python,matplotlib,sublimetext2,sublimetext
I had the same problem and the following fix worked for me: 1 - Open Sublime Text 2 -> Preferences -> Browse Packages 2 - Go to the Python folder, select the file Python.sublime-build 3 - Replace the existing cmd line with this one: "cmd": ["/Library/Frameworks/Python.framework/Versions/Current/bin/python", "$file"], Then...
I want to use matplotlib from my Sublime Text 2 directly via the build command. Does anybody know how I accomplish that? I'm really confused about the whole multiple python installations/environments. Google didn't help. My python is installed via homebrew and in my terminal (which uses brew python), I have no problem ...
0
1
3,245
0
12,199,026
0
0
0
0
1
true
7
2012-08-29T10:01:00.000
13
1
0
scikit-learn GMM produce positive log probability
12,175,404
1.2
python,machine-learning,scikit-learn,mixture-model
Positive log probabilities are okay. Remember that the probability the GMM computes is a probability density function (PDF) value, so it can be greater than one at any individual point. The restriction is that the PDF must integrate to one over the data domain. If the log probability grows very large, then the inference algorithm m...
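A tiny pure-Python check of the "density can exceed one" point: a Gaussian has peak density 1/(σ√(2π)), which is greater than 1 whenever σ < 1/√(2π) ≈ 0.4:

```python
import math

def gauss_pdf(x, mu, sigma):
    # Density of a normal distribution; NOT bounded by 1.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

peak = gauss_pdf(0.0, 0.0, 0.1)   # ~3.99, a perfectly valid density value
print(math.log(peak) > 0)         # True: a positive log probability
```

So a tight mixture component evaluated near its mean will legitimately produce positive log-density scores.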
I am using the Gaussian Mixture Model from the python scikit-learn package to train my dataset; however, I found that when I code -- G=mixture.GMM(...) -- G.fit(...) -- G.score(sum feature) -- the resulting log probability is a positive real number... Why is that? Isn't log probability guaranteed to be negative? I get it. what Ga...
0
1
5,301
0
12,618,627
0
0
0
0
1
false
1
2012-08-29T17:55:00.000
0
1
1
Using Numpy and SciPy on Apache Pig
12,183,759
0
python,numpy,scipy,apache-pig
You can stream through a (C)Python script that imports scipy. I am for instance using this to cluster data inside bags, using import scipy.cluster.hierarchy
I want to write UDFs in Apache Pig. I'll be using Python UDFs. My issue is I have tons of data to analyse and need packages like NumPy and SciPy. But since they don't have Jython support, I can't use them along with Pig. Is there a substitute?
0
1
524
0
12,185,246
0
0
0
0
1
true
1
2012-08-29T19:26:00.000
3
1
0
Python - Numpy matrix multiplication
12,185,117
1.2
python,matrix,numpy,python-2.7,matrix-multiplication
In numpy convention, the transpose of X is represented by X.T, and you're in luck: X.T is just a view of the original array X, meaning that no copy is done.
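You can verify the no-copy claim directly (this assumes a reasonably recent NumPy that provides np.shares_memory):

```python
import numpy as np

X = np.random.randn(3, 2)
print(X.T.base is X)              # True: the transpose is just a view of X
print(np.shares_memory(X, X.T))   # True: no data was copied
```

Whether the underlying BLAS routine exploits the X'X structure is a separate question, but at the Python level no second buffer is allocated for X.T.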
I am trying to optimize (memory-wise) the multiplication of X and its transpose X'. Does anyone know if numpy's matrix multiplication takes into consideration that X' is just the transpose of X? What I mean is: does it detect this and therefore not create the object X', but just work on the cols/rows of X to produce...
0
1
505
0
12,187,770
0
0
0
0
1
true
13
2012-08-29T21:47:00.000
9
4
0
Johansen cointegration test in python
12,186,994
1.2
python,statistics,pandas,statsmodels
statsmodels doesn't have a Johansen cointegration test. And, I have never seen it in any other python package either. statsmodels has VAR and structural VAR, but no VECM (vector error correction models) yet. update: As Wes mentioned, there is now a pull request for Johansen's cointegration test for statsmodels. I have...
I can't find any reference on funcionality to perform Johansen cointegration test in any Python module dealing with statistics and time series analysis (pandas and statsmodel). Does anybody know if there's some code around that can perform such a test for cointegration among time series?
0
1
18,067
0
12,205,078
0
1
0
0
3
false
2
2012-08-29T21:59:00.000
1
5
0
Open Source Scientific Project - Use Python 2.6 or 2.7?
12,187,115
0.039979
python,numpy,version,scipy,optparse
I personally use Debian stable for my own projects so naturally I gravitate toward what the distribution uses as the default Python installation. For Squeeze (current stable), it's 2.6.6 but Wheezy will use 2.7. Why is this relevant? Well, as a programmer there are a number of times I wish I had access to new feature...
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools...
0
1
169
0
12,187,327
0
1
0
0
3
false
2
2012-08-29T21:59:00.000
2
5
0
Open Source Scientific Project - Use Python 2.6 or 2.7?
12,187,115
0.07983
python,numpy,version,scipy,optparse
If you intend to distribute this code, your answer depends on your target audience, actually. A recent stint in some private sector research lab showed me that Python 2.5 is still often use. Another example: EnSight, a commercial package for 3D visualization/manipulation, ships with Python 2.5 (and NumPy 1.3 or 1.4, i...
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools...
0
1
169
0
12,187,140
0
1
0
0
3
true
2
2012-08-29T21:59:00.000
9
5
0
Open Source Scientific Project - Use Python 2.6 or 2.7?
12,187,115
1.2
python,numpy,version,scipy,optparse
If everything you need would work with 2.7 I would use it, no point staying with 2.6. Also, .format() works a bit nicer (no need to specify positions in the {} for the arguments to the formatting directives). FWIW, I usually use 2.7 or 3.2 and every once in a while I end up porting some code to my Linux box which stil...
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools...
0
1
169
0
12,187,989
0
0
0
0
1
true
0
2012-08-29T23:16:00.000
0
1
1
How can I stream data, on my Mac, from a bluetooth source using R?
12,187,795
1.2
python,macos,r,bluetooth
There is a strong probability that you can enumerate the Bluetooth device as a serial port and use the pyserial module to communicate pretty easily... but if this device does not enumerate serially, you will have a very large headache trying to do this... See if there are any COM ports available; if there ...
I have a device that is connected to my Mac via bluetooth. I would like to use R (or maybe Python, but R is preferred) to read the data real-time and process it. Does anyone know how I can do the data streaming using R on a Mac? Cheers
0
1
377
0
12,212,637
0
0
0
0
1
true
2
2012-08-31T09:14:00.000
4
1
0
Cassandra get_range_slice
12,212,321
1.2
c#,java,c++,python,cassandra
The columns for each row will be returned in sorted order, sorted by the column key, depending on your comparator_type. The row ordering will depend on your partitioner, and if you use the random partitioner, the rows will come back in a 'random' order. In Cassandra, it is possible for each row to have a different set ...
When get_range_slice returns, in what order are the columns returned? Is it random or the order in which the columns were created? Is it best practice to iterate through all resulting columns for each row and compare the column name prior to using the value or can one just index into the returning array?
0
1
161
0
14,346,374
0
0
0
0
1
false
1
2012-09-02T02:01:00.000
0
2
0
How to make a 2d map with perlin noise python
12,232,901
0
python,pyglet,terrain,perlin-noise
You could also use 1d perlin noise to calculate the radius from each point to the "center" of the island. It should be really easy to implement, but it will make more circular islands, and won't give each point different heights.
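The radial idea without any noise at all can be sketched like this (the names and the linear falloff are mine; a real island would perturb the radius per angle with 1d noise):

```python
import math

def island_height(x, y, cx=0.0, cy=0.0, radius=10.0):
    # 1.0 at the island centre, falling linearly to 0.0 at the coastline.
    # Replacing `radius` with radius * (1 + noise(angle)) roughens the outline.
    d = math.hypot(x - cx, y - cy)
    return max(0.0, 1.0 - d / radius)
```

Evaluating this over the tile grid and thresholding (say, height > 0 means land) yields a circular island; the noise perturbation is what breaks the circle into a natural-looking coast.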
I have been experimenting on making a random map for a top down RPG I am making in Python. (and Pyglet) So far I have been making island by starting at 0,0 and going in a random direction 500 times (x+=32 or y -=32 sort of thing) However this doesn't look like a real image very much so I had a look at the Perlin Noise ...
0
1
5,966
0
12,243,670
0
0
1
0
1
true
0
2012-09-03T04:37:00.000
1
1
0
What algorithms i can use from machine learning or Artificial intelligence which i can show via web site
12,242,054
1.2
python,web,machine-learning,artificial-intelligence
I assume you are mostly concerned with a general approach to implementing AI in a web context, and not with the details of the AI algorithms themselves. Any computable algorithm can be implemented in any Turing-complete language (i.e. all modern programming languages). There are no special limitations on what you can do on...
I am die hard fan of Artificial intelligence and machine learning. I don't know much about them but i am ready to learn. I am currently a web programmer in PHP , and I am learning python/django for a website. Now as this AI field is very wide and there are countless algorithms I don't know where to start. But eventual...
1
1
1,093
0
12,283,724
0
0
0
0
2
false
1
2012-09-03T10:14:00.000
0
3
0
Integrating a function using non-uniform measure (python/scipy)
12,245,859
0
python,numpy,scipy,probability,numerical-methods
Another possibility would be to integrate x -> f(H(x)), where H is the inverse of the cumulative distribution function of your probability distribution. [This is because of a change of variable: substituting y = CDF(x) and noting that p(x) = CDF'(x) yields dy = p(x)dx, and thus int{f(x)p(x)dx} == int{f(x)dy} == int{f(H(y))dy} with H t...
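The bracketed change of variables is exactly inverse-transform sampling; a hedged Monte Carlo sketch for the exponential case (function and variable names are mine):

```python
import math
import random

def expectation(f, inv_cdf, n=200000, seed=42):
    # E[f(X)] = integral of f(H(y)) dy over [0, 1], estimated by sampling y uniformly.
    rng = random.Random(seed)
    return sum(f(inv_cdf(rng.random())) for _ in range(n)) / n

exp_inv_cdf = lambda u: -math.log(1.0 - u)   # H for the Exp(1) distribution
print(expectation(lambda x: x, exp_inv_cdf))  # close to E[X] = 1
```

For a deterministic quadrature instead of Monte Carlo, hand y -> f(H(y)) on [0, 1] to scipy.integrate.quad.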
I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly use the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential). I can do it myself, using...
0
1
545
0
12,268,227
0
0
0
0
2
false
1
2012-09-03T10:14:00.000
0
3
0
Integrating a function using non-uniform measure (python/scipy)
12,245,859
0
python,numpy,scipy,probability,numerical-methods
Just for the sake of brevity, 3 ways were suggested for calculating the expected value of f(x) under the probability p(x): assuming p is given in closed form, use scipy.integrate.quad to evaluate f(x)p(x); assuming p can be sampled from, sample N values X = P(N), then evaluate the expected value by np.mean(f(X)); and the ...
I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx in [a,b] implicitly use the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential). I can do it myself, using...
0
1
545
0
12,277,912
0
1
0
0
1
false
8
2012-09-05T09:02:00.000
1
3
0
python clear csv file
12,277,864
0.066568
python,csv
The Python csv module is only for reading and writing whole CSV files, not for manipulating them. If you need to filter data from a file then you have to read it, create a new csv file, and write the filtered rows back to the new file.
how can I clear a complete csv file with python. Most forum entries that cover the issue of deleting row/columns basically say, write the stuff you want to keep into a new file. I need to completely clear a file - how can I do that?
0
1
41,353
0
12,363,743
0
0
0
0
1
false
1
2012-09-10T11:25:00.000
0
1
0
Inverted color marks in matplotlib
12,350,693
0
python,matplotlib
I post here a schematic approach to solving your problem without any real Python code; it might help though. When actually plotting, you need to store everything in two lists of some kind, which will enable you to access them later. For each element, bar and marker, you can get the color. For each marker you can find if i...
I am using matplotlib to draw a bar chart with many different colors. I also draw a number of markers on the plot with scatter. Since I am already using many different colors for the bars, I do not want to use a separate contrasting color for the marks, as that would add a big limit to the color space I can choose m...
0
1
385
0
12,474,645
0
0
0
0
1
true
1
2012-09-18T03:43:00.000
0
1
0
Fast algorithm comparing unsorted data
12,470,094
1.2
python,sql,dna-sequence,genome
Probably what you want is called "de novo assembly". An approach would be to calculate N-mers and use these in an index; N-mers will become more important if you need partial matches / mismatches. If billion := 1E9, Python might be too weak. Also note that 18 bases * 2 bits := 36 bits of information to enumerate them. That ...
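A minimal sketch of such an index for exact k-mer lookups (names are mine; real assemblers use far more compact 2-bit encodings than Python strings):

```python
def kmer_index(seq, k):
    """Map every length-k substring of seq to the list of positions where it occurs."""
    index = {}
    for i in range(len(seq) - k + 1):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

print(kmer_index("ACGTACG", 3)["ACG"])  # [0, 4]
```

Querying a pattern then reduces to one dictionary lookup per k-mer of the pattern, rather than a scan over a billion nodes.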
I have data that needs to stay in the exact sequence it is entered in (genome sequencing) and I want to search approximately one billion nodes of around 18 members each to locate patterns. Obviously speed is an issue with this large of a data set, and I actually don't have any data that I can currently use as a discret...
0
1
386
0
12,473,913
0
0
0
0
1
true
17
2012-09-18T08:52:00.000
25
1
0
What does matplotlib `imshow(interpolation='nearest')` do?
12,473,511
1.2
python,image-processing,numpy,matplotlib
interpolation='nearest' simply displays an image without trying to interpolate between pixels when the display resolution is not the same as the image resolution (which is most often the case). It will result in an image in which each pixel is displayed as a square of multiple display pixels. There is no relation between interpolation...
I use the imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result; it looks like it does some sort of color segmentation for me. What exactly is going on there? I would also like to get something like this for image processing, is there some function on numpy arrays like int...
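What interpolation='nearest' amounts to can be mimicked with plain NumPy: each source pixel becomes a square block of identical display pixels (the enlargement factor here is arbitrary).

```python
import numpy as np

# Replicate every pixel into a factor x factor block, which is what a
# nearest-neighbor display of a small image on a larger screen area does.
img = np.array([[0, 1],
                [2, 3]])
factor = 3
enlarged = np.kron(img, np.ones((factor, factor), dtype=img.dtype))
print(enlarged.shape)  # -> (6, 6)
```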
0
1
22,970
0
12,497,790
0
0
0
0
2
false
1
2012-09-19T15:09:00.000
3
2
0
Using NumPy in Pyramid
12,497,545
0.291313
python,numpy,pyramid
If the array is something that can be shared between threads then you can store it in the registry at application startup (config.registry['my_big_array'] = ??). If it cannot be shared then I'd suggest using a queuing system with workers that can always have the data loaded, probably in another process. You can hack th...
I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use. Right now my application is a single page and I am using a single view callable. The array will be loaded from di...
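A sketch of the registry approach from the answer above, with a plain dict standing in for Pyramid's config.registry (which is dict-like for this purpose). The key name is a placeholder, and a small array stands in for the real 3500x3500 load.

```python
import numpy as np

# Stand-in for pyramid's config.registry; in a real app this assignment
# would happen once, in the application startup/config code.
registry = {}
registry["my_big_array"] = np.zeros((100, 100))  # real case: (3500, 3500)

# A view callable then fetches the preloaded array instead of reloading
# it from disk on every request.
def my_view(registry):
    arr = registry["my_big_array"]
    return arr.sum()

print(my_view(registry))  # -> 0.0
```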
0
1
409
0
12,497,850
0
0
0
0
2
false
1
2012-09-19T15:09:00.000
2
2
0
Using NumPy in Pyramid
12,497,545
0.197375
python,numpy,pyramid
I would just load it in the obvious place in the code, where you need to use it (in your view, I guess?) and see if you have performance problems. It's better to work with actual numbers than try to guess what's going to be a problem. You'll usually be surprised by the reality. If you do see performance problems, assum...
I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use. Right now my application is a single page and I am using a single view callable. The array will be loaded from di...
0
1
409
0
42,903,054
0
1
0
0
1
false
22
2012-09-20T01:20:00.000
1
5
0
Save session in IPython like in MATLAB?
12,504,951
0.039979
python,ipython,pandas
There is also a magic command, %history, that can be used to write all the commands/statements given by the user. Syntax: %history -f file_name. Also %save file_name start_line-end_line, where start_line is the starting line number and end_line is the ending line number. Useful in case of selective save. %run can be used to exe...
It would be useful to save the session variables which could be loaded easily into memory at a later stage.
0
1
10,066
0
12,536,067
0
0
0
0
1
true
2
2012-09-21T16:54:00.000
1
2
0
How interpolate 3D coordinates
12,534,813
1.2
python,r,3d,interpolation,splines
By "compact manifold" do you mean a lower dimensional function like a trajectory or a surface that is embedded in 3d? You have several alternatives for the surface-problem in R depending on how "parametric" or "non-parametric" you want to be. Regression splines of various sorts could be applied within the framework of ...
I have data points in x,y,z format. They form a point cloud of a closed manifold. How can I interpolate them using R-Project or Python? (Like polynomial splines)
0
1
1,960
0
12,616,286
0
1
0
0
1
true
0
2012-09-24T12:49:00.000
0
1
0
Trouble installing scipy on Mac OSX Lion
12,565,351
1.2
numpy,python-2.7,scipy
EPD distribution saved the day.
I've installed new instance of python-2.7.2 with brew. Installed numpy from pip, then from sources. I keep getting numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/lib/npy-pkg-config/npyma...
0
1
253
1
12,638,921
0
0
0
0
1
true
2
2012-09-25T09:37:00.000
2
2
0
How do you display a 2D numpy array in glade-3 ?
12,580,198
1.2
python,arrays,numpy,gtk,glade
In the end I decided to create a buffer for the pixels using: self.pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,0,8,1280,1024). I then set the image from the pixel buffer: self.liveImage.set_from_pixbuf(self.pixbuf)
I'm making a live video GUI using Python and Glade-3, but I'm finding it hard to convert the NumPy array that I have into something that can be displayed in Glade. The images are in black and white with just a single value giving the brightness of each pixel. I would like to be able to draw over the images in the GUI so ...
0
1
760
0
12,584,253
0
0
0
0
1
false
3
2012-09-25T11:36:00.000
1
2
0
Check Arrayfire Array against NaNs
12,582,140
0.099668
python,arrayfire
bottleneck is worth looking into. They have performed several optimizations over the numpy.nanxxx functions which, in my experience, makes it around 5x faster than numpy.
I'm using Arrayfire on Python and I can't use the af.sum() function since my input array has NaNs in it and it would return NaN as the sum. Using numpy.nansum/numpy.nan_to_num is not an option due to speed problems. I just need a way to convert those NaNs to floating point zeros in arrayfire.
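For reference, the NumPy baseline the question rules out looks like this; bottleneck.nansum is advertised as a drop-in replacement, so it is left as a comment in case the package isn't installed.

```python
import numpy as np

a = np.array([1.0, np.nan, 2.0, np.nan, 3.0])

# NumPy baseline: sum while treating NaNs as zero.
print(np.nansum(a))  # -> 6.0

# bottleneck exposes the same call, typically faster:
#   import bottleneck as bn
#   bn.nansum(a)
```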
0
1
299
0
12,594,030
0
0
0
0
1
true
1
2012-09-26T02:30:00.000
0
1
0
Pandas Data Reconcilation
12,593,759
1.2
python,pandas
Try DataFrame.duplicated and DataFrame.drop_duplicates
I need to reconcile two separate dataframes. Each row within the two dataframes has a unique id that I am using to match the two dataframes. Without using a loop, how can I reconcile one dataframe against another and vice-versa? I tried merging the two dataframes on an index (unique id) but the problem I run into w...
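A hedged sketch of one reconciliation route: drop_duplicates as the answer suggests, then an outer merge with indicator=True (a standard pandas option, not something the answer names) to flag rows present on only one side. Column names and values here are invented.

```python
import pandas as pd

# Two frames sharing a unique id; left contains an exact duplicate row.
left = pd.DataFrame({"id": [1, 2, 2, 3],
                     "amount": [10, 20, 20, 30]}).drop_duplicates()
right = pd.DataFrame({"id": [2, 3, 4], "amount": [20, 30, 40]})

# Outer merge with indicator=True labels each row left_only / right_only /
# both, which is exactly the reconciliation report we want.
merged = left.merge(right, on=["id", "amount"], how="outer", indicator=True)
only_left = merged[merged["_merge"] == "left_only"]
only_right = merged[merged["_merge"] == "right_only"]
print(list(only_left["id"]), list(only_right["id"]))  # -> [1] [4]
```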
0
1
745
0
12,606,715
0
0
0
0
1
false
1
2012-09-26T16:12:00.000
1
2
0
f2py speed with array ordering
12,606,027
0.099668
python,performance,numpy,f2py
There shouldn't be any slow-down. Since NumPy 1.6, most ufuncs (ie, the basic 'universal' functions) take an optional argument allowing a user to specify the memory layout of her array: by default, it's 'K', meaning that 'the element ordering of the inputs (is matched) as closely as possible'. So, everything should ...
I'm writing some code in Fortran (f2py) in order to gain some speed because of a large amount of calculations that would be quite bothering to do in pure Python. I was wondering if setting NumPy arrays in Python as order='Fortran' will slow down the main Python code with respect to the classical C-style order.
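A quick NumPy check of the claim above: a Fortran-ordered copy holds the same values, and ufuncs produce identical results regardless of memory layout.

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)   # C-ordered by default
f = np.asfortranarray(a)            # same values, Fortran memory layout

print(a.flags["C_CONTIGUOUS"], f.flags["F_CONTIGUOUS"])  # -> True True

# A ufunc gives the same answer on either layout.
print(np.allclose(np.sin(a), np.sin(f)))  # -> True
```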
0
1
502
0
12,699,435
0
0
0
0
1
true
1
2012-10-02T22:30:00.000
2
1
0
Can normal algos run on PyOpenGL?
12,699,376
1.2
python,opengl
Is PyOpenGL the answer? No. At least not in the way you expect it. If your GPU supports OpenGL 4.3 you could use Compute Shaders in OpenGL, but those are not written in Python, and you cannot simply run a "vanilla" Python script ported to the GPU. That's not how GPU computing works. You have to write the shaders of computat...
I want to write an algorithm that would benefit from the GPU's superior hashing capability over the CPU. Is PyOpenGL the answer? I don't want to use drawing tools, but simply run a "vanilla" python script ported to the GPU. I have an ATI/AMD GPU if that means anything.
0
1
389
0
12,711,895
0
1
0
0
1
false
3
2012-10-03T15:27:00.000
1
3
0
categorizing items in a list with python
12,711,743
0.066568
python,list
I assume that the data are noisy, in the sense that it could be just about anything at all written in. The main difficulty here is going to be how to define the mapping between your input data and categories, and that is going to involve, in the first place, looking through the data. I suggest that you look at what you hav...
Currently I have a list of 110,000 donors in Excel. One of the pieces of information they give to us is their occupation. I would like to condense this list down to, say, 10 or 20 categories that I define. Normally I would just chug through this, going line by line, but since I have to do this for a year's worth of data...
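One concrete way to implement the keyword-mapping idea sketched in the answer; the categories and keywords are invented and would in practice come from inspecting the real data.

```python
# Hypothetical keyword -> category mapping, built by looking through
# the actual occupation strings in the data.
CATEGORIES = {
    "healthcare": ["nurse", "doctor", "physician"],
    "education": ["teacher", "professor"],
}

def categorize(occupation):
    occ = occupation.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in occ for k in keywords):
            return category
    return "other"  # fall-through bucket for unmatched occupations

print(categorize("Registered Nurse"))  # -> healthcare
print(categorize("Astronaut"))         # -> other
```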
0
1
1,266