Schema (one row per field: name, dtype, observed min/max or string length):

Field                               Type      Min        Max
GUI and Desktop Applications        int64     0          1
A_Id                                int64     5.3k       72.5M
Networking and APIs                 int64     0          1
Python Basics and Environment       int64     0          1
Other                               int64     0          1
Database and SQL                    int64     0          1
Available Count                     int64     1          13
is_accepted                         bool      (2 classes)
Q_Score                             int64     0          1.72k
CreationDate                        string    length 23 (fixed)
Users Score                         int64     -11        327
AnswerCount                         int64     1          31
System Administration and DevOps    int64     0          1
Title                               string    length 15  length 149
Q_Id                                int64     5.14k      60M
Score                               float64   -1         1.2
Tags                                string    length 6   length 90
Answer                              string    length 18  length 5.54k
Question                            string    length 49  length 9.42k
Web Development                     int64     0          1
Data Science and Machine Learning   int64     1          1
ViewCount                           int64     7          3.27M

Title: Topic-based text and user similarity
Q_Id 12,713,797 | CreationDate 2012-10-03T17:32:00.000 | Q_Score 3 | AnswerCount 3 | ViewCount 1,318 | Topics: Data Science and Machine Learning
Tags: python,numpy,recommendation-engine,topic-modeling,gensim
Question: I am looking to compute similarities between users and text documents using their topic representations. That is, each document and user is represented by a vector of topics (e.g. Neuroscience, Technology, etc.) and how relevant that topic is to the user/document. My goal is then to compute the similarity between these vect...
A_Id 15,066,821 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: My trick is to use a search engine such as ElasticSearch; it works very well, and in this way we unified the API of all our recommender systems. The details are as follows: train the topic model on your corpus; each topic is an array of words, each word with a probability, and we take the first 6 mos...

Title: How can I call scikit-learn classifiers from Java?
Q_Id 12,738,827 | CreationDate 2012-10-05T02:50:00.000 | Q_Score 35 | AnswerCount 6 | ViewCount 42,287 | Topics: Data Science and Machine Learning
Tags: java,python,jython,scikit-learn
Question: I have a classifier that I trained using Python's scikit-learn. How can I use the classifier from a Java program? Can I use Jython? Is there some way to save the classifier in Python and load it in Java? Is there some other way to use it?
A_Id 50,292,755 | is_accepted false | Score 0.033321 | Users Score 1 | Available Count 1
Answer: I found myself in a similar situation. I'd recommend carving out a classifier microservice: a service that runs in Python and exposes calls over a RESTful API yielding a JSON/XML data-interchange format. I think this is a cleaner approach.

Title: User profiling for topic-based recommender system
Q_Id 12,763,608 | CreationDate 2012-10-06T20:31:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 602 | Topics: Data Science and Machine Learning
Tags: python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
Question: I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users. I trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, o...
A_Id 14,583,682 | is_accepted false | Score 0 | Users Score 0 | Available Count 2
Answer: "Represent a user as the aggregation of all the documents viewed": that might indeed work, given that you are in linear spaces. You can easily add all the document vectors into one big vector. If you want to add the ratings, you could simply put a coefficient in the sum. Say you group all documents rated 2 in a vector ...

Title: User profiling for topic-based recommender system
Q_Id 12,763,608 | CreationDate 2012-10-06T20:31:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 602 | Topics: Data Science and Machine Learning
Tags: python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
Question: (same as the previous record)
A_Id 12,764,041 | is_accepted false | Score 0.099668 | Users Score 1 | Available Count 2
Answer: I don't think that works with LSA. But you could maybe do some sort of k-NN classification, where each user's coordinates are the documents viewed. Each object (= user) sends out radiation (intensity is inversely proportional to the square of the distance). The intensity is calculated from the ratings on the single...

Title: Algorithms for Mining Tuples of Data on huge sample space
Q_Id 12,803,495 | CreationDate 2012-10-09T15:31:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 631 | Topics: Data Science and Machine Learning
Tags: python,data-mining,graph-algorithm,recommendation-engine,apriori
Question: I read that the Apriori algorithm is used to fetch association rules from a dataset, like a set of tuples. It helps us to find the most frequent 1-itemsets, 2-itemsets and so on. My problem is a bit different. I have a dataset which is a set of tuples, each of varying size, as follows: (1, 234, 56, 32) (25, 4575, 575, 46...
A_Id 12,807,402 | is_accepted false | Score 0.099668 | Users Score 1 | Available Count 1
Answer: For Apriori, you do not need to have tuples or vectors. It can be implemented with very different data types. The common data type is a sorted item list, which could as well look like 1 13 712 1928 123945 191823476, stored as 6 integers. This is essentially equivalent to a sparse binary vector and often very memory effi...

Title: Can anyone provide me with some clustering examples?
Q_Id 12,808,050 | CreationDate 2012-10-09T20:39:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 1,663 | Topics: Data Science and Machine Learning
Tags: python,scipy,cluster-analysis
Question: I am having a hard time understanding what scipy.cluster.vq really does!! On Wikipedia it says clustering can be used to divide a digital image into distinct regions for border detection or object recognition. On other sites and books it says we can use clustering methods for clustering images for finding groups of s...
A_Id 12,810,026 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: The second is what clustering is: grouping objects that are somewhat similar (and those could be images). Clustering is not a pure imaging technique. When processing a single image, it can for example be applied to colors. This is quite a good approach for reducing the number of colors in an image. If you cluster by colors...

Title: randomly choose pair (i, j) with probability P[i, j] given stochastic matrix P
Q_Id 12,810,499 | CreationDate 2012-10-10T00:56:00.000 | Q_Score 2 | AnswerCount 3 | ViewCount 1,166 | Topics: Data Science and Machine Learning
Tags: python,numpy,statistics
Question: I have a numpy two-dimensional array P such that P[i, j] >= 0 and all P[i, j] sum to one. How do I choose a pair of indexes (i, j) with probability P[i, j]? EDIT: I am interested in a numpy built-in function. Is there something for this problem? Maybe for a one-dimensional array?
A_Id 12,810,655 | is_accepted false | Score 0.066568 | Users Score 1 | Available Count 1
Answer: Here's a simple algorithm in Python that does what you are expecting. Let's take for example a one-dimensional array P equal to [0.1,0.3,0.4,0.2]. The logic can be extended to any number of dimensions. Now we set each element to the sum of all the elements that precede it: P => [0, 0.1, 0.4, 0.8, 1] Using a random gen...
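
The answer sketches a prefix-sum (CDF) lookup; below is a minimal NumPy sketch of the same idea, extended to 2-D by flattening the matrix (np.random.default_rng assumes NumPy >= 1.17).

    import numpy as np

    rng = np.random.default_rng()
    P = np.array([[0.1, 0.2],
                  [0.3, 0.4]])        # entries sum to 1

    flat_cdf = np.cumsum(P.ravel())   # [0.1, 0.3, 0.6, 1.0]
    k = np.searchsorted(flat_cdf, rng.random())   # bisect into the CDF
    i, j = np.unravel_index(k, P.shape)           # map flat index back to (i, j)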

Title: What are ngram counts and how to implement using nltk?
Q_Id 12,821,201 | CreationDate 2012-10-10T14:01:00.000 | Q_Score 14 | AnswerCount 4 | ViewCount 21,407 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,nlp,nltk
Question: I've read a paper that uses ngram counts as features for a classifier, and I was wondering what this exactly means. Example text: "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam" I can create unigrams, bigrams, trigrams, etc. out of this text, where I have to define on which "level" to create these un...
A_Id 13,242,515 | is_accepted false | Score -0.049958 | Users Score -1 | Available Count 1
Answer: I don't think there is a specific method in nltk to help with this. This isn't tough, though. If you have a sentence of n words (assuming you're using word level), get all ngrams of length 1 to n, iterate through each of those ngrams and make them keys in an associative array, with the value being the count. Shouldn't be m...
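
For what it's worth, nltk.util.ngrams does exist in current NLTK; a minimal counting sketch of what the answer describes:

    from collections import Counter
    from nltk.util import ngrams

    words = "Lorem ipsum dolor sit amet".split()
    # Word-level bigram counts; vary n for other orders.
    bigram_counts = Counter(ngrams(words, 2))
    print(bigram_counts[("Lorem", "ipsum")])   # -> 1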

Title: How to read tbf file with STAMP encryption
Q_Id 12,830,437 | CreationDate 2012-10-11T00:25:00.000 | Q_Score 3 | AnswerCount 1 | ViewCount 188 | Topics: Data Science and Machine Learning
Tags: c#,python,sql,database,sml
Question: I have Toronto Stock Exchange stock data on a Maxtor hard drive. The data is a TBF file with .dat and .pos components. The .dat file contains all the Stamp format transmission information in binary format. I can read the .pos file using R. It has 3 columns with numbers, which make no sense to me. The data is information on ...
A_Id 19,391,151 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: Have you looked into using something like FileViewerPro? It's a free download tool for opening files. Also, which Windows programs have you tried so far, like Notepad or Excel?

Title: Python: OpenCV can not be loaded on windows xp
Q_Id 12,899,513 | CreationDate 2012-10-15T16:02:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 941 | Topics: Data Science and Machine Learning
Tags: python,opencv,windows-xp,py2exe
Question: I have a Python app built with Python, OpenCV and py2exe. When I distribute this app and try to run it on a Windows XP machine, I get an error on startup due to an error loading cv2.pyd (the OpenCV Python wrapper). I looked at cv2.pyd with Dependency Walker and noticed that some DLLs are missing: ieshims.dll and wer.dll. Un...
A_Id 12,930,212 | is_accepted true | Score 1.2 | Users Score 5 | Available Count 1
Answer: The problem comes from 4 DLLs which are copied by py2exe: msvfw32.dll, msacm32.dll, avicap32.dll and avifil32.dll. As I am building on Vista, I think it forces the use of Vista DLLs on Windows XP, causing some mismatch when trying to load them. I removed these 4 DLLs and everything seems to work OK (in this case it us...

Title: numpy unique without sort
Q_Id 12,926,898 | CreationDate 2012-10-17T03:58:00.000 | Q_Score 36 | AnswerCount 2 | ViewCount 32,836 | Topics: Data Science and Machine Learning
Tags: python,numpy
Question: How can I use numpy unique without sorting the result, just keeping the order in which values appear in the sequence? Something like this? a = [4,2,1,3,1,2,3,4] np.unique(a) = [4,2,1,3] rather than np.unique(a) = [1,2,3,4] A naive solution would be fine to write as a simple function, but as I need to do this multiple times, are the...
A_Id 12,926,989 | is_accepted true | Score 1.2 | Users Score 62 | Available Count 1
Answer: You can do this with the return_index parameter: >>> import numpy as np >>> a = [4,2,1,3,1,2,3,4] >>> np.unique(a) array([1, 2, 3, 4]) >>> indexes = np.unique(a, return_index=True)[1] >>> [a[index] for index in sorted(indexes)] [4, 2, 1, 3]
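
The same recipe packaged as a reusable function (name illustrative), returning the unique values in first-appearance order:

    import numpy as np

    def unique_in_order(seq):
        # Sort the first-occurrence indices so values come back
        # in order of appearance rather than sorted order.
        _, idx = np.unique(seq, return_index=True)
        return np.asarray(seq)[np.sort(idx)]

    print(unique_in_order([4, 2, 1, 3, 1, 2, 3, 4]))   # [4 2 1 3]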

Title: Matplotlib savefig into different pages of a PDF
Q_Id 12,938,568 | CreationDate 2012-10-17T16:07:00.000 | Q_Score 12 | AnswerCount 3 | ViewCount 16,939 | Topics: Data Science and Machine Learning
Tags: python,pdf,pagination,matplotlib
Question: I have a lengthy plot, composed of several horizontal subplots organized into a column. When I call fig.savefig('what.pdf'), the resulting output file shows all the plots crammed onto a single page. Question: is there a way to tell savefig to save on any number (possibly automatically determined) of pdf pages? I'd rathe...
A_Id 12,938,704 | is_accepted false | Score 0.066568 | Users Score 1 | Available Count 1
Answer: I suspect that there is a more elegant way to do this, but one option is to use tempfiles or StringIO to avoid making traditional files on the system, and then you can piece those together.
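
The more elegant way the answer suspects exists is matplotlib's PdfPages backend, which writes one page per savefig call; a minimal sketch:

    import matplotlib.pyplot as plt
    from matplotlib.backends.backend_pdf import PdfPages

    with PdfPages("what.pdf") as pdf:
        for k in range(3):
            fig, ax = plt.subplots()
            ax.plot(range(10), [x ** (k + 1) for x in range(10)])
            pdf.savefig(fig)    # one PDF page per call
            plt.close(fig)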

Title: Python: Perform an operation on each pixel of a 2-d array simultaneously
Q_Id 12,968,446 | CreationDate 2012-10-19T06:20:00.000 | Q_Score 0 | AnswerCount 2 | ViewCount 854 | Topics: Data Science and Machine Learning
Tags: python,image,filter,numpy,scipy
Question: I want to apply a 3x3 or larger image filter (gaussian or median) on a 2-d array. Though there are several ways of doing that, such as scipy.ndimage.gaussian_filter or applying a loop, I want to know if there is a way to apply a 3x3 or larger filter on each pixel of an mxn array simultaneously, because it would save a l...
A_Id 13,009,064 | is_accepted false | Score 0.099668 | Users Score 1 | Available Count 1
Answer: Even if Python did provide functionality to apply an operation to an NxM array without looping over it, the operation would still not be executed simultaneously in the background, since the number of instructions a CPU can handle per cycle is limited and thus no time could be saved. For your use case this might even be ...

Title: Panda3D and Python, render only one frame and other questions
Q_Id 12,981,607 | CreationDate 2012-10-19T19:58:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 2,225 | Topics: Data Science and Machine Learning
Tags: python,3d,rendering,panda3d
Question: I would like to use Panda3D for my personal project, but after reading the documentation and some example sourcecodes, I still have a few questions: How can I render just one frame and save it in a file? In fact I would need to render 2 different images: a single object, and a scene of multiple objects including the...
A_Id 15,449,900 | is_accepted false | Score 0.197375 | Users Score 2 | Available Count 1
Answer: You can use a buffer with setOneShot enabled to make it render only a single frame. You can start Panda3D without a window by setting the "window-type" PRC variable to "none", and then opening an offscreen buffer yourself. (Note: offscreen buffers without a host window may not be supported universally.) If you set "w...

Title: Data interpolation in python
Q_Id 12,990,315 | CreationDate 2012-10-20T16:17:00.000 | Q_Score 3 | AnswerCount 2 | ViewCount 2,501 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,graph,plot,interpolation
Question: I have four one-dimensional lists: X1, Y1, X2, Y2. X1 and Y1 each have 203 data points. X2 and Y2 each have 1532 data points. X1 and X2 are at different intervals, but both measure time. I want to graph Y1 vs Y2. I can plot just fine once I get the interpolated data, but can't think of how to interpolate data. I'...
A_Id 12,990,987 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: If you use matplotlib, you can just call plot(X1, Y1, 'bo', X2, Y2, 'r+'). Change the formatting as you'd like; it can cope with different lengths just fine, and you can provide more than two pairs without any issue.
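
For the interpolation itself (needed before plotting Y1 against Y2 pointwise), np.interp resamples one series onto the other's time base; a sketch with hypothetical data:

    import numpy as np

    X1 = np.linspace(0.0, 10.0, 203)     # coarse time base
    Y1 = np.sin(X1)
    X2 = np.linspace(0.0, 10.0, 1532)    # fine time base
    Y2 = np.cos(X2)

    # Linearly resample Y2 onto X1, so Y1 and Y2_on_X1 align pointwise.
    Y2_on_X1 = np.interp(X1, X2, Y2)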

Title: Calling Python from Stata
Q_Id 13,014,789 | CreationDate 2012-10-22T15:35:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 2,897 | Topics: Data Science and Machine Learning
Tags: python,merge,python-3.x,stata
Question: This is probably very easy, but after looking through documentation and possible examples online for the past several hours I cannot figure it out. I have a large dataset (a spreadsheet) that gets heavily cleaned by a DO file. In the DO file I then want to save certain variables of the cleaned data as a temp .csv run s...
A_Id 13,016,728 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: Type "help shell" in Stata. What you want to do is shell out from Stata, call Python, and then have Stata resume whatever you want it to do after the Python script has completed.

Title: NLTK: Document Classification with numeric score instead of labels
Q_Id 13,015,593 | CreationDate 2012-10-22T16:22:00.000 | Q_Score 8 | AnswerCount 2 | ViewCount 1,239 | Topics: Data Science and Machine Learning
Tags: python,nltk
Question: In the light of a project I've been playing with Python NLTK and Document Classification and the Naive Bayes classifier. As I understand from the documentation, this works very well if your different documents are tagged with either pos or neg as a label (or more than 2 labels) The documents I'm working with that are ...
A_Id 15,627,502 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: This is a very late answer, but perhaps it will help someone. What you're asking about is regression. Regarding Jacob's answer, linear regression is only one way to do it. However, I agree with his recommendation of scikit-learn.

Title: How to automatically detect if image is of high quality?
Q_Id 13,018,968 | CreationDate 2012-10-22T20:05:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 3,360 | Topics: Data Science and Machine Learning
Tags: python,image-processing
Question: I want an algorithm to detect if an image is of high professional quality or is done with poor contrast, low lighting, etc. How do I go about designing such an algorithm? I feel that it is feasible, since if I press a button in Picasa it tries to fix the lighting, contrast and color. Now I have seen that in good pictu...
A_Id 13,019,636 | is_accepted false | Score 0.291313 | Users Score 3 | Available Count 1
Answer: You are making this way too hard. I handled this in production code by generating a histogram of the image, throwing away outliers (1 black pixel doesn't mean that the whole image has lots of black; 1 white pixel doesn't imply a bright image), then seeing if the resulting distribution covered a sufficient range of brig...

Title: Using custom Pipeline for Cross Validation scikit-learn
Q_Id 13,057,113 | CreationDate 2012-10-24T20:24:00.000 | Q_Score 2 | AnswerCount 1 | ViewCount 1,601 | Topics: Data Science and Machine Learning
Tags: python,machine-learning,scikit-learn
Question: I would like to use GridSearchCV to determine the parameters of a classifier, and using pipelines seems like a good option. The application will be for image classification using Bag-of-Word features, but the issue is that there is a different logical pipeline depending on whether training or test examples are used....
A_Id 13,057,566 | is_accepted true | Score 1.2 | Users Score 3 | Available Count 1
Answer: You probably need to derive from the KMeans class and override the following methods to use your vocabulary logic: fit_transform will only be called on the train data; transform will be called on the test data. Maybe class derivation is not always the best option. You can also write your own transformer class that wraps...

Title: Fast algorithm to detect main colors in an image?
Q_Id 13,060,069 | CreationDate 2012-10-25T00:50:00.000 | Q_Score 8 | AnswerCount 3 | ViewCount 4,441 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,algorithm,colors,python-imaging-library
Question: Does anyone know a fast algorithm to detect main colors in an image? I'm currently using k-means to find the colors together with Python's PIL, but it's very slow. One 200x200 image takes 10 seconds to process. I have several hundred thousand images.
A_Id 13,062,863 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: K-means is a good choice for this task because you know the number of main colors beforehand. You need to optimize K-means. I think you can reduce your image size: just scale it down to 100x100 pixels or so. Find the size at which your algorithm works with acceptable speed. Another option is to use dimensionality reduction...

Title: Choice of technology for loading large CSV files to Oracle tables
Q_Id 13,061,800 | CreationDate 2012-10-25T04:54:00.000 | Q_Score 3 | AnswerCount 3 | ViewCount 2,011 | Topics: Database and SQL, Data Science and Machine Learning
Tags: python,csv,etl,sql-loader,smooks
Question: I have come across a problem and am not sure which would be the most suitable technology to implement it. Would be obliged if you guys can suggest some based on your experience. I want to load data from 10-15 CSV files, each of them being fairly large, 5-10 GBs. By load data I mean convert the CSV file to XML and then...
A_Id 14,449,025 | is_accepted false | Score 0.066568 | Users Score 1 | Available Count 2
Answer: Create a process/script that will call a procedure to load CSV files to an external Oracle table, and another script to load it to the destination table. You can also add cron jobs to call these scripts; they will keep track of incoming CSV files in the directory, process them and move each CSV file to an output/processed ...

Title: Choice of technology for loading large CSV files to Oracle tables
Q_Id 13,061,800 | CreationDate 2012-10-25T04:54:00.000 | Q_Score 3 | AnswerCount 3 | ViewCount 2,011 | Topics: Database and SQL, Data Science and Machine Learning
Tags: python,csv,etl,sql-loader,smooks
Question: (same as the previous record)
A_Id 13,062,737 | is_accepted true | Score 1.2 | Users Score 2 | Available Count 2
Answer: Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.

Title: Multiprocessing scikit-learn
Q_Id 13,068,257 | CreationDate 2012-10-25T12:10:00.000 | Q_Score 10 | AnswerCount 2 | ViewCount 11,435 | Topics: Data Science and Machine Learning
Tags: python,multithreading,numpy,machine-learning,scikit-learn
Question: I got LinearSVC working against a training set and test set using the load_file method, and I am trying to get it working in a multiprocessor environment. How can I get multiprocessing to work with LinearSVC().fit() and LinearSVC().predict()? I am not really familiar with scikit-learn's datatypes yet. I am also thinking about splitting sam...
A_Id 13,084,224 | is_accepted false | Score 1 | Users Score 11 | Available Count 1
Answer: For linear models (LinearSVC, SGDClassifier, Perceptron...) you can chunk your data, train independent models on each chunk and build an aggregate linear model (e.g. SGDClassifier) by sticking in it the average values of coef_ and intercept_ as attributes. The predict method of LinearSVC, SGDClassifier and Perceptron compu...

Title: Rotations in 3D
Q_Id 13,136,828 | CreationDate 2012-10-30T10:15:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 1,659 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,3d,geometry
Question: I have three vectors in 3D, a, b, c. Now I want to calculate a rotation r that, when applied to a, yields a result parallel to b. Then the rotation r needs to be applied to c. How do I do this in python? Is it possible to do this with numpy/scipy?
A_Id 13,242,515 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: I'll assume the "geometry library for python" part is already answered in the comments on the question. So once you have a transformation that takes 'a' parallel to 'b', you'll just apply it to 'c'. The vectors 'a' and 'b' uniquely define a plane. Each vector has a canonical representation as a point difference from the origin,...
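
As an aside, later SciPy versions (>= 1.4 or so) ship a helper for exactly this in scipy.spatial.transform; a hedged sketch - note that a single vector pair does not pin down the rotation uniquely, so align_vectors may warn, but any returned rotation takes a onto b:

    import numpy as np
    from scipy.spatial.transform import Rotation

    a = np.array([[1.0, 0.0, 0.0]])
    b = np.array([[0.0, 1.0, 0.0]])
    c = np.array([0.0, 0.0, 1.0])

    r, _ = Rotation.align_vectors(b, a)   # r.apply(a) is (near) parallel to b
    c_rotated = r.apply(c)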

Title: Is it possible to create a numpy matrix with 10 rows and 0 columns?
Q_Id 13,150,020 | CreationDate 2012-10-31T01:28:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 255 | Topics: Data Science and Machine Learning
Tags: python,numpy
Question: My objective is to start with an "empty" matrix and repeatedly add columns to it until I have a large matrix.
A_Id 13,150,059 | is_accepted false | Score 0.197375 | Users Score 2 | Available Count 1
Answer: Adding columns to an ndarray (or matrix) needs a full copy of the content, so you should use another method, such as a list or the array module, or create a large matrix first and fill the data in.

Title: Feature extraction for butterfly images
Q_Id 13,151,428 | CreationDate 2012-10-31T04:47:00.000 | Q_Score 5 | AnswerCount 2 | ViewCount 2,622 | Topics: Data Science and Machine Learning
Tags: python,image-processing,opencv,image-segmentation
Question: I have a set of butterfly images for training my system to segment a butterfly from a given input image. For this purpose, I want to extract features such as edges, corners, region boundaries, local maximum/minimum intensity, etc. I found many feature extraction methods like Harris corner detection and SIFT, but they di...
A_Id 13,162,109 | is_accepted true | Score 1.2 | Users Score 2 | Available Count 1
Answer: Are you willing to write your own image-processing logic? Your best option will likely be to optimize the segmentation/feature extraction for your problem, instead of using previous implementations like OpenCV meant for more general use cases. An option that I've found to work well in noisy/low-contrast environments i...

Title: Looking for a specific python gui module to perform the following task
Q_Id 13,151,907 | CreationDate 2012-10-31T05:45:00.000 | Q_Score 1 | AnswerCount 2 | ViewCount 244 | Topics: Data Science and Machine Learning
Tags: python,graph,matplotlib,tkinter,wxwidgets
Question: I am looking for a GUI python module that is best suited for the following job: I am trying to plot a graph with many columns (perhaps hundreds), each column representing an individual. The user should be able to drag the columns around and drop them onto different columns to switch the two. Also, there are going to be...
A_Id 13,156,356 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: You can do what you want with Tkinter, though there's no specific widget that does what you ask. There is a general purpose canvas widget that allows you to draw objects (rectangles, circles, images, buttons, etc), and it's pretty easy to add the ability to drag those items around.

Title: How to use BaseMap with chaco plots
Q_Id 13,190,187 | CreationDate 2012-11-02T06:08:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 156 | Topics: Data Science and Machine Learning
Tags: python,matplotlib-basemap,chaco
Question: I had developed scatter and lasso selection plots with Chaco. Now, I need to embed a BaseMap [with a few markers on a map] onto the plot area side by side. I created a BaseMap and tried to add it to the traits_view, but it is failing with errors. Please give me some pointers to achieve the same.
A_Id 16,198,408 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: Chaco and matplotlib are completely different tools. Basemap has been built on top of matplotlib, so it is not possible to add a Basemap map to a Chaco plot. I'm afraid I couldn't find any mapping layer to go with Chaco. Is there a reason you cannot use matplotlib for your plot?

Title: Forecast Package from R in Python
Q_Id 13,197,097 | CreationDate 2012-11-02T14:22:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 1,673 | Topics: Data Science and Machine Learning
Tags: python,time-series,forecasting
Question: I found the forecast package from R the best solution for time series analysis and forecasting. I want to use it in Python. Could I use rpy and then use the forecast package in Python?
A_Id 13,198,574 | is_accepted true | Score 1.2 | Users Score 5 | Available Count 1
Answer: Yes, you could use the [no longer developed or extended, but maintained] package RPy, or you could use the newer package RPy2, which is actively developed. There are other options too, e.g. headless network connections to Rserve.

Title: Big-O of list slicing
Q_Id 13,203,601 | CreationDate 2012-11-02T21:59:00.000 | Q_Score 61 | AnswerCount 3 | ViewCount 47,961 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,list,big-o
Question: Say I have some Python list, my_list, which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed as my_list[i_1:i_2], where a "slice" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation ...
A_Id 13,203,622 | is_accepted false | Score 1 | Users Score 10 | Available Count 1
Answer: For a list of size N and a slice of size M, the iteration is actually only O(M), not O(N). Since M is often << N, this makes a big difference. In fact, if you think about your explanation, you can see why: you're only iterating from i_1 to i_2, not from 0 to i_1 and then i_1 to i_2.

Title: Backward integration in time using scipy odeint
Q_Id 13,227,115 | CreationDate 2012-11-05T06:38:00.000 | Q_Score 2 | AnswerCount 3 | ViewCount 4,017 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python-2.7,scipy
Question: Is it possible to integrate any ordinary differential equation backward in time using scipy.integrate.odeint? If it is possible, could someone tell me what the argument 'time' in 'odeint' should be?
A_Id 13,229,534 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: You can make a change of variables s = t_0 - t, and integrate the differential equation with respect to s. odeint doesn't do this for you.
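
A worked sketch of that substitution for an illustrative ODE dy/dt = -y/2: since s = t_0 - t implies dy/ds = -f(y, t_0 - s), integrating forward in s runs t backward from t_0.

    import numpy as np
    from scipy.integrate import odeint

    def f(y, t):                    # example right-hand side, dy/dt = -y/2
        return -0.5 * y

    t0 = 10.0
    s = np.linspace(0.0, t0, 101)   # s runs forward while t = t0 - s runs backward

    def g(y, s_val):
        return -f(y, t0 - s_val)

    y_backward = odeint(g, 1.0, s)  # y evaluated at t = t0 down to t = 0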

Title: How to load only specific columns from csv file into a DataFrame
Q_Id 13,236,098 | CreationDate 2012-11-05T16:20:00.000 | Q_Score 9 | AnswerCount 2 | ViewCount 7,641 | Topics: Data Science and Machine Learning
Tags: python,pandas,csv
Question: Suppose I have a csv file with 400 columns. I cannot load the entire file into a DataFrame (won't fit in memory). However, I only really want 50 columns, and this will fit in memory. I don't see any built in Pandas way to do this. What do you suggest? I'm open to using the PyTables interface, or pandas.io.sql. T...
A_Id 13,236,277 | is_accepted true | Score 1.2 | Users Score 2 | Available Count 1
Answer: There's no default way to do this right now. I would suggest chunking the file and iterating over it and discarding the columns you don't want. So something like pd.concat([x.ix[:, cols_to_keep] for x in pd.read_csv(..., chunksize=200)])
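
For readers on pandas versions released since this 2012 answer: read_csv grew a usecols parameter that selects columns at parse time, which avoids the chunking detour (column names hypothetical).

    import pandas as pd

    cols_to_keep = ["col1", "col7", "col42"]
    df = pd.read_csv("data.csv", usecols=cols_to_keep)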

Title: How to keep leading zeros in a column when reading CSV with Pandas?
Q_Id 13,250,046 | CreationDate 2012-11-06T11:27:00.000 | Q_Score 76 | AnswerCount 6 | ViewCount 62,070 | Topics: Data Science and Machine Learning
Tags: python,pandas,csv,types
Question: I am importing study data into a Pandas data frame using read_csv. My subject codes are 6 numbers coding, among others, the day of birth. For some of my subjects this results in a code with a leading zero (e.g. "010816"). When I import into Pandas, the leading zero is stripped off and the column is formatted as int64. ...
A_Id 58,968,554 | is_accepted false | Score 0.099668 | Users Score 3 | Available Count 1
Answer: You can do this; it works on all versions of Pandas: pd.read_csv('filename.csv', dtype={'zero_column_name': object})
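
A self-contained demonstration of the dtype override (str behaves like object here):

    import pandas as pd
    from io import StringIO

    csv = StringIO("subject\n010816\n023504\n")
    print(pd.read_csv(csv)["subject"].iloc[0])     # 10816 - zero lost as int64

    csv.seek(0)
    df = pd.read_csv(csv, dtype={"subject": str})  # force a string column
    print(df["subject"].iloc[0])                   # '010816' - zero kept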

Title: Sort a list of ints and floats with negative and positive values?
Q_Id 13,318,611 | CreationDate 2012-11-10T02:17:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 5,933 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: sorting,python-2.7,absolute-value
Question: I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. I.e., [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work, as it seems to sort in descending order by absolute value. Any ideas?
A_Id 29,600,848 | is_accepted false | Score 0.197375 | Users Score 2 | Available Count 2
Answer: I had the same problem. The answer: Python will sort numbers as strings (which can look like sorting by absolute value) if you have them as strings. So in your key, make sure to include an int() or float() conversion. My working syntax was data = sorted(data, key=lambda x: float(x[0])) ... the lambda x part just gives a function which outputs the thing you...
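
A minimal demonstration of why the key matters when values arrive as strings:

    data = ["2", "-1", "1.0"]

    print(sorted(data))              # lexicographic string order
    print(sorted(data, key=float))   # numeric order: ['-1', '1.0', '2']

    print(sorted([2, -1, 1.0]))      # already numeric: [-1, 1.0, 2]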

Title: Sort a list of ints and floats with negative and positive values?
Q_Id 13,318,611 | CreationDate 2012-11-10T02:17:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 5,933 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: sorting,python-2.7,absolute-value
Question: (same as the previous record)
A_Id 65,985,248 | is_accepted false | Score 0 | Users Score 0 | Available Count 2
Answer: In addition to doublefelix's answer, the code below gives me absolute-value order from strings: siparis = sorted(siparis, key=lambda sublist: abs(float(sublist[1])))

Title: Creating a 5D array in Python
Q_Id 13,321,042 | CreationDate 2012-11-10T10:01:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 3,179 | Topics: Data Science and Machine Learning
Tags: python,numpy
Question: I have a gray image in which I want to map every pixel to N other matrices of size LxM. How do I initialize such a matrix? I tried result=numpy.zeros(shape=(i_size[0],i_size[1],N,L,M)), for which I get the ValueError 'array is too big'. Can anyone suggest an alternate method?
A_Id 13,345,287 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: If I understand correctly, every pixel in the gray image is mapped to a single pixel in N other images. In that case, the map array is numpy.zeros((i.shape[0], i.shape[1], N, 2), dtype=numpy.int32), since you need to store one x and one y coordinate into each of the N other arrays, not the full Nth array every time. Using integer...

Title: Smoothing in python NLTK
Q_Id 13,356,348 | CreationDate 2012-11-13T06:20:00.000 | Q_Score 4 | AnswerCount 1 | ViewCount 1,430 | Topics: Data Science and Machine Learning
Tags: python,nltk,smoothing
Question: I am using the Naive Bayes classifier in Python for text classification. Are there any smoothing methods to avoid zero probability for unseen words in Python NLTK? Thanks in advance!
A_Id 13,397,869 | is_accepted false | Score 0.379949 | Users Score 2 | Available Count 1
Answer: I'd suggest replacing all words with low frequency (especially frequency 1) with <unseen>, then training the classifier on this data. For classification, you should query the model with <unseen> in the case of a word that is not in the training data.

Title: Is it possible to use in Python the svm_model, generated in matlab?
Q_Id 13,383,684 | CreationDate 2012-11-14T17:09:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 77 | Topics: Data Science and Machine Learning
Tags: python,matlab,libsvm
Question: Is it possible to use in Python the svm_model generated in MATLAB? (I use libsvm)
A_Id 13,445,709 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: Normally you would just call a method in libsvm to save your model to a file. You can then use it in Python via their svm.py. So yes, you can - it's all saved in libsvm format.
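
Assuming the stock libsvm Python bindings (svmutil) and a model file written in libsvm's format from MATLAB, loading it might look like this sketch (the module path varies between libsvm distributions):

    # Hedged sketch; adjust the import to your libsvm packaging.
    try:
        from libsvm.svmutil import svm_load_model, svm_predict
    except ImportError:
        from svmutil import svm_load_model, svm_predict

    model = svm_load_model("trained_in_matlab.model")
    # Features as {index: value} dicts; the label list is only used for accuracy.
    labels, acc, vals = svm_predict([0], [{1: 0.5, 2: -0.2}], model)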

Title: How to expose an NLTK based ML(machine learning) Python Script as a Web Service?
Q_Id 13,394,969 | CreationDate 2012-11-15T09:51:00.000 | Q_Score 4 | AnswerCount 2 | ViewCount 1,090 | Topics: Web Development, Data Science and Machine Learning
Tags: python,machine-learning,cherrypy
Question: Let me explain what I'm trying to achieve. In the past, while working on the Java platform, I used to write Java code (say, to push or pull data from a MySQL database etc.), then create a WAR file which essentially bundles all the class files, supporting files etc., and put it under a servlet container like Tomcat, and this becom...
A_Id 13,399,425 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: An NLTK-based system tends to be slow in response per request, but good throughput can be achieved given enough RAM.

Title: Convert an image RGB->Lab with python
Q_Id 13,405,956 | CreationDate 2012-11-15T20:49:00.000 | Q_Score 49 | AnswerCount 5 | ViewCount 55,003 | Topics: Data Science and Machine Learning
Tags: python,numpy,scipy,python-imaging-library,color-space
Question: What is the preferred way of doing the conversion using PIL/Numpy/SciPy today?
A_Id 60,718,937 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: At the moment I haven't found a good package to do that. You have to bear in mind that RGB is a device-dependent colour space, so you can't convert accurately to XYZ or CIE Lab if you don't have a profile. So be aware that many solutions where you see converting from RGB to CIE Lab without specifying the colour space o...
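
One widely used option (with the answer's caveat in mind: it assumes the input is sRGB) is scikit-image's color module:

    import numpy as np
    from skimage import color

    rgb = np.random.rand(4, 4, 3)   # float RGB in [0, 1], assumed sRGB
    lab = color.rgb2lab(rgb)        # CIE L*a*b*, D65 white point by default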

Title: pandas dataframe, copy by value
Q_Id 13,419,822 | CreationDate 2012-11-16T15:43:00.000 | Q_Score 19 | AnswerCount 1 | ViewCount 17,836 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: I noticed a bug in my program, and the reason it is happening is that pandas seems to be copying a pandas dataframe by reference instead of by value. I know immutable objects will always be passed by reference, but a pandas dataframe is not immutable, so I do not see why it is passing by reference. Can anyone provid...
A_Id 13,420,016 | is_accepted true | Score 1.2 | Users Score 41 | Available Count 1
Answer: All function arguments in Python are passed by object reference; there is no "pass by value". If you want to make an explicit copy of a pandas object, try new_frame = frame.copy().
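
A small demonstration of the alias-versus-copy distinction:

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3]})
    alias = df            # same underlying object
    snapshot = df.copy()  # independent copy (deep by default)

    alias.loc[0, "a"] = 99
    print(df.loc[0, "a"])        # 99 - the alias shares data with df
    print(snapshot.loc[0, "a"])  # 1  - the copy is unaffected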

Title: Does performance differ between Python or C++ coding of OpenCV?
Q_Id 13,432,800 | CreationDate 2012-11-17T17:14:00.000 | Q_Score 92 | AnswerCount 5 | ViewCount 76,398 | Topics: Other, Data Science and Machine Learning
Tags: c++,python,performance,opencv
Question: I aim to start OpenCV little by little, but first I need to decide which API of OpenCV is more useful. I predict that the Python implementation is shorter, but the running time will be denser and slower compared to the native C++ implementations. Is there anyone who can comment on the performance and coding differences between th...
A_Id 66,955,473 | is_accepted false | Score 0.119427 | Users Score 3 | Available Count 2
Answer: Why choose? If you know both Python and C++, use Python for research using Jupyter notebooks and then use C++ for implementation. The Python stack of Jupyter, OpenCV (cv2) and NumPy provides for fast prototyping. Porting the code to C++ is usually quite straightforward.

Title: Does performance differ between Python or C++ coding of OpenCV?
Q_Id 13,432,800 | CreationDate 2012-11-17T17:14:00.000 | Q_Score 92 | AnswerCount 5 | ViewCount 76,398 | Topics: Other, Data Science and Machine Learning
Tags: c++,python,performance,opencv
Question: (same as the previous record)
A_Id 13,432,830 | is_accepted false | Score 1 | Users Score 6 | Available Count 2
Answer: You're right: Python is almost always significantly slower than C++, as it requires an interpreter, which C++ does not. However, C++ is statically typed, which leaves a much smaller margin for error. Some people prefer being made to code strictly, whereas others enjoy Python's inherent leniency. If yo...

Title: K-Means plus plus implementation
Q_Id 13,436,032 | CreationDate 2012-11-17T23:55:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 1,239 | Topics: Data Science and Machine Learning
Tags: python,colors,cluster-computing,k-means
Question: My goal was to get the most frequent color in an image, so I implemented a k-means algorithm. The algorithm works well, but the result is not the one I was expecting. So now I'm trying to make some improvements; the first I thought of was to implement k-means++, so I get a better position for the initial cluster centers. ...
A_Id 13,436,279 | is_accepted true | Score 1.2 | Users Score 0 | Available Count 1
Answer: You can use vector quantisation. You can make a list of each pixel and each adjacent pixel in the x+1 and y+1 directions, take the difference and plot it along a diagonal. Then you can calculate a Voronoi diagram, get the mean color and compute a feature vector. It's a bit more effective than using a simple grid ...

Title: Scipy / Numpy Riemann Sum Height
Q_Id 13,460,428 | CreationDate 2012-11-19T19:08:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 329 | Topics: Data Science and Machine Learning
Tags: python,numpy,scipy
Question: I am working on a visualization that models the trajectory of an object over a planar surface. Currently, the algorithm I have been provided with uses a simple trajectory function (where velocity and gravity are provided) and Runge-Kutta integration to check n points along the curve for a point where velocity becomes ...
A_Id 13,463,491 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: For computing Riemann sums you could look into numpy.cumsum(). I am not sure if you can do a surface or only an array with this method. However, you could always loop through all the rows of your terrain and store each row in a two-dimensional array as you go, leaving you with an array of all the terrain heights.

Title: Installing (and using) numpy without access to a compiler or binaries
Q_Id 13,466,939 | CreationDate 2012-11-20T05:08:00.000 | Q_Score 2 | AnswerCount 1 | ViewCount 435 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,numpy
Question: Assuming performance is not an issue, is there a way to deploy numpy in an environment where a compiler is unavailable and no pre-built binaries can be installed? Alternatively, is there a pure-python numpy implementation?
A_Id 13,467,084 | is_accepted true | Score 1.2 | Users Score 3 | Available Count 1
Answer: "A compiler is unavailable, and no pre-built binaries can be installed" - this makes numpy impossible. If you cannot install numpy binaries, and you cannot compile numpy source code, then you are left with no options.

Title: Numpy save file is larger than the original
Q_Id 13,491,731 | CreationDate 2012-11-21T10:55:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 280 | Topics: Data Science and Machine Learning
Tags: python,numpy
Question: I'm extracting a large CSV file (200Mb) that was generated using R with Python (I'm the one using python). I do some tinkering with the file (normalization, scaling, removing junk columns, etc.) and then save it again using numpy's savetxt with ',' as the data delimiter to keep the csv property. Thing is, the new file is almo...
A_Id 13,491,927 | is_accepted true | Score 1.2 | Users Score 2 | Available Count 1
Answer: Have you looked at the way floats are represented in text before and after? You might have a line "1.,2.,3." become "1.000000e+0, 2.000000e+0,3.000000e+0" or something like that; the two are both valid and both represent the same numbers. More likely, however, is that if the original file contained floats as values wi...
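
The text representation is controlled by savetxt's fmt parameter, so a compact format keeps the file small; a sketch:

    import numpy as np

    arr = np.random.rand(100, 5)
    # "%.6g" keeps six significant digits instead of the default
    # "%.18e", which is what typically inflates the output file.
    np.savetxt("out.csv", arr, delimiter=",", fmt="%.6g")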

Title: Peak detection in Python
Q_Id 13,520,319 | CreationDate 2012-11-22T21:37:00.000 | Q_Score 3 | AnswerCount 1 | ViewCount 1,640 | Topics: Data Science and Machine Learning
Tags: python,r,time-series
Question: In time series we can find peaks (min and max values). There are algorithms to find peaks. My question is: in Python, are there libraries for peak detection in time series data? Or something in R using RPy?
A_Id 13,520,565 | is_accepted false | Score 0.197375 | Users Score 1 | Available Count 1
Answer: Calculate the derivative of your sample points: for example, for every 5 points (THRESHOLD!) calculate the slope of the five points with the least-squares method (search the wiki if you don't know what it is; any linear regression function uses it). When this slope is almost (THRESHOLD!) zero, there is a peak.
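
SciPy versions released since this answer include scipy.signal.find_peaks, which answers the library question directly:

    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0, 10, 500)
    x = np.sin(t) + 0.1 * np.random.randn(500)

    # Indices of local maxima, filtered by height and minimum spacing.
    peaks, props = find_peaks(x, height=0.5, distance=20)
    print(t[peaks])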

Title: Tracking a multicolor object
Q_Id 13,526,654 | CreationDate 2012-11-23T09:57:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 475 | Topics: Data Science and Machine Learning
Tags: python,opencv,tracking
Question: I want to track a multicolored object (4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed, and finally adding up the filtered images. Then I filter the contours based on area. This method is quite stable most of the time, but when the external light varies a bit...
A_Id 13,530,687 | is_accepted true | Score 1.2 | Users Score 0 | Available Count 2
Answer: For a foolproof track you need to combine more than one method... the following are some hints... if you have prior knowledge of the object then you can use template matching... but template matching is a little processing-intensive... if you are using a GPU then you might get some benefit; from your write-up I presume exte...

Title: Tracking a multicolor object
Q_Id 13,526,654 | CreationDate 2012-11-23T09:57:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 475 | Topics: Data Science and Machine Learning
Tags: python,opencv,tracking
Question: (same as the previous record)
A_Id 13,534,342 | is_accepted false | Score 0 | Users Score 0 | Available Count 2
Answer: You might try having multiple or an infinite number of models of the object depending upon the light sources available, and then classifying your object as either the object with one of the light sources or not the object. Note: this is a machine learning-type approach to the problem. Filtering with a Kalman, extended...

Title: how to create random single source random acyclic directed graphs with negative edge weights in python
Q_Id 13,543,069 | CreationDate 2012-11-24T16:25:00.000 | Q_Score 5 | AnswerCount 4 | ViewCount 4,642 | Topics: Data Science and Machine Learning
Tags: python,random,graph,networkx,bellman-ford
Question: I want to do an execution-time analysis of the Bellman-Ford algorithm on a large number of graphs, and in order to do that I need to generate a large number of random DAGs with the possibility of having negative edge weights. I am using networkx in python. There are a lot of random graph generators in the networkx librar...
A_Id 13,544,567 | is_accepted true | Score 1.2 | Users Score 1 | Available Count 1
Answer: I noticed that the generated graphs always have exactly one sink vertex, which is the first vertex. You can reverse the direction of all edges to get a graph with a single source vertex.
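
A hedged networkx sketch of the whole recipe: keeping only edges u -> v with u < v from a random directed graph guarantees acyclicity, and reverse() flips every edge as the answer suggests.

    import random
    import networkx as nx

    g = nx.gnp_random_graph(20, 0.3, directed=True)
    dag = nx.DiGraph((u, v, {"weight": random.randint(-10, 10)})
                     for u, v in g.edges() if u < v)

    single_source_dag = dag.reverse(copy=True)  # lone sink becomes lone source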

Title: What is PyOpenGL's "context specific data"?
Q_Id 13,584,900 | CreationDate 2012-11-27T13:04:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 176 | Topics: GUI and Desktop Applications, Data Science and Machine Learning
Tags: python,opengl,ctypes,pyopengl
Question: PyOpenGL docs say: Because of the way OpenGL and ctypes handle, for instance, pointers to array data, it is often necessary to ensure that a Python data-structure is retained (i.e. not garbage collected). This is done by storing the data in an array of data-values that are indexed by a context-specific key. The fun...
A_Id 13,585,375 | is_accepted true | Score 1.2 | Users Score 1 | Available Count 1
Answer: To "Are there any scenarios I'm missing?": buffer mappings obtained through glMapBuffer.

Title: python pandas dataframe thread safe?
Q_Id 13,592,618 | CreationDate 2012-11-27T20:38:00.000 | Q_Score 22 | AnswerCount 2 | ViewCount 17,987 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,thread-safety,pandas
Question: I am using multiple threads to access and delete data in my pandas dataframe. Because of this, I am wondering: is the pandas dataframe threadsafe?
A_Id 13,593,942 | is_accepted true | Score 1.2 | Users Score 18 | Available Count 1
Answer: The data in the underlying ndarrays can be accessed in a threadsafe manner, and modified at your own risk. Deleting data would be difficult, as changing the size of a DataFrame usually requires creating a new object. I'd like to change this at some point in the future.

Title: numpy for 64 bit windows
Q_Id 13,594,953 | CreationDate 2012-11-27T23:15:00.000 | Q_Score 0 | AnswerCount 2 | ViewCount 380 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: windows,numpy,python-2.7,64-bit
Question: I have read several related posts about installing numpy for Python version 2.7 on a 64-bit Windows 7 OS. Before I try these, does anybody know if the 32-bit version will work on a 64-bit system?
A_Id 13,595,084 | is_accepted true | Score 1.2 | Users Score 0 | Available Count 2
Answer: It should work if you're using 32-bit Python. If you're using 64-bit Python, you'll need 64-bit Numpy.

Title: numpy for 64 bit windows
Q_Id 13,594,953 | CreationDate 2012-11-27T23:15:00.000 | Q_Score 0 | AnswerCount 2 | ViewCount 380 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: windows,numpy,python-2.7,64-bit
Question: (same as the previous record)
A_Id 33,553,807 | is_accepted false | Score 0 | Users Score 0 | Available Count 2
Answer: If you are getting it from pip and you want a 64-bit version of NumPy, you need MSVS 2008. pip needs to compile the NumPy module with the same compiler that the Python binary was compiled with. The last time I checked (this summer), Python's build.py on Windows only supported up to that version of MSVS. Probably because build.py ...

Title: Feature Selection and Reduction for Text Classification
Q_Id 13,603,882 | CreationDate 2012-11-28T11:21:00.000 | Q_Score 53 | AnswerCount 5 | ViewCount 30,760 | Topics: Data Science and Machine Learning
Tags: python,nlp,svm,sentiment-analysis,feature-extraction
Question: I am currently working on a project: a simple sentiment analyzer such that there will be 2 and 3 classes in separate cases. I am using a corpus that is pretty rich in unique words (around 200,000). I used the bag-of-words method for feature selection, and to reduce the number of unique features, an elimination ...
A_Id 13,615,685 | is_accepted false | Score 0.039979 | Users Score 1 | Available Count 1
Answer: A linear SVM is recommended for high-dimensional features. Based on my experience, the ultimate limitation of SVM accuracy depends on the positive and negative "features". You can do a grid search (or, in the case of a linear SVM, you can just search for the best cost value) to find the optimal parameters for maximum accuracy...

Title: Creating a haar classifier using opencv_traincascade
Q_Id 13,611,126 | CreationDate 2012-11-28T17:38:00.000 | Q_Score 1 | AnswerCount 1 | ViewCount 1,867 | Topics: Data Science and Machine Learning
Tags: python,opencv,machine-learning,computer-vision,object-detection
Question: I am having a little bit of trouble creating a haar classifier. I need to build a classifier to detect cars. At the moment I made a program in Python that reads in an image and I draw a rectangle around the area the object is in. Once the rectangle is drawn, it outputs the image name and the top-left and bottom-right coor...
A_Id 13,612,350 | is_accepted true | Score 1.2 | Users Score 1 | Available Count 1
Answer: It looks like you first need to determine what features you would like to train your classifier on, as the haar classifier benefits from those extra features. From there you will need to train the classifier; this requires you to get a lot of images that have cars and images that do not have cars in them, and ...

Title: ElasticSearch: EdgeNgrams and Numbers
Q_Id 13,636,419 | CreationDate 2012-11-29T23:05:00.000 | Q_Score 6 | AnswerCount 2 | ViewCount 2,671 | Topics: Data Science and Machine Learning
Tags: python,elasticsearch,django-haystack
Question: Any ideas on how EdgeNgram treats numbers? I'm running haystack with an ElasticSearch backend. I created an indexed field of type EdgeNgram. This field will contain a string that may contain words as well as numbers. When I run a search against this field using a partial word, it works how it's supposed to. But if I...
A_Id 13,637,244 | is_accepted true | Score 1.2 | Users Score 3 | Available Count 1
Answer: If you're using the edgeNGram tokenizer, then it will treat "EdgeNGram 12323" as a single token and then apply the edgeNGram'ing process on it. For example, if min_grams=1 and max_grams=4, you'll get the following tokens indexed: ["E", "Ed", "Edg", "Edge"]. So I guess this is not what you're really looking for - consider u...

Title: Fast Fourier Transform (fft) with Time Associated Data Python
Q_Id 13,636,758 | CreationDate 2012-11-29T23:33:00.000 | Q_Score 2 | AnswerCount 1 | ViewCount 1,901 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,numpy,scipy,fft
Question: I have data and a time 'value' associated with it (Tx and X). How can I perform a fast Fourier transform on my data? Tx is an array I have and X is another array I have. The lengths of both arrays are of course the same, and they are associated by Tx[i] with X[i], where i goes from 0 to len(X). How can I perform an fft o...
A_Id 13,645,588 | is_accepted false | Score 0.53705 | Users Score 3 | Available Count 1
Answer: If the data is not uniformly sampled (i.e. Tx[i]-Tx[i-1] is not constant), then you cannot do an FFT on it. Here's an idea: if you have a pretty good idea of the bandwidth of the signal, then you could create a resampled version of the DFT basis vectors R, i.e. the complex sinusoids evaluated at the Tx times. Then solve ...

Title: How can I filter lines on load in Pandas read_csv function?
Q_Id 13,651,117 | CreationDate 2012-11-30T18:38:00.000 | Q_Score 123 | AnswerCount 7 | ViewCount 95,461 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: How can I filter which lines of a CSV are loaded into memory using pandas? This seems like an option one should find in read_csv. Am I missing something? Example: we have a CSV with a timestamp column, and we'd like to load just the lines with a timestamp greater than a given constant.
A_Id 53,256,590 | is_accepted false | Score -1 | Users Score -4 | Available Count 2
Answer: You can specify the nrows parameter: import pandas as pd; df = pd.read_csv('file.csv', nrows=100). This code works well in version 0.20.3.

Title: How can I filter lines on load in Pandas read_csv function?
Q_Id 13,651,117 | CreationDate 2012-11-30T18:38:00.000 | Q_Score 123 | AnswerCount 7 | ViewCount 95,461 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: (same as the previous record)
A_Id 60,026,814 | is_accepted false | Score 0.113791 | Users Score 4 | Available Count 2
Answer: If the filtered range is contiguous (as it usually is with time(stamp) filters), then the fastest solution is to hard-code the range of rows. Simply combine skiprows=range(1, start_row) with nrows=end_row parameters. Then the import takes seconds where the accepted solution would take minutes. A few experiments with th...
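
A sketch of the skiprows/nrows combination from this answer (row numbers hypothetical); range(1, start_row) starts at 1 so row 0, the header, survives:

    import pandas as pd

    start_row, n_rows = 1000, 500
    df = pd.read_csv("big.csv",
                     skiprows=range(1, start_row),  # skip data rows, keep header
                     nrows=n_rows)                  # then read a bounded slice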

Title: How do I Pass a List of Series to a Pandas DataFrame?
Q_Id 13,653,030 | CreationDate 2012-11-30T20:54:00.000 | Q_Score 26 | AnswerCount 7 | ViewCount 51,167 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: I realize DataFrame takes a map of {'series_name': Series(data, index)}. However, it automatically sorts that map even if the map is an OrderedDict(). Is there a simple way to pass a list of Series(data, index, name=name) such that the order is preserved and the column names are the series.name? Is there an easy way i...
A_Id 13,852,311 | is_accepted false | Score 0.113791 | Users Score 4 | Available Count 1
Answer: Check out DataFrame.from_items too.

Title: Efficient Hadoop Word counting for large file
Q_Id 13,663,294 | CreationDate 2012-12-01T20:12:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 442 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,hadoop,hadoop-streaming
Question: I want to implement a hadoop reducer for word counting. In my reducer I use a hash table to count the words. But if my file is extremely large, the hash table will use an extreme amount of memory. How can I address this issue? (E.g. a file with 10 million lines where each reducer receives 100 million words; how can it count the word...
A_Id 13,663,566 | is_accepted true | Score 1.2 | Users Score 0 | Available Count 1
Answer: The most efficient way to do this is to maintain a hash map of word frequency in your mappers, and flush them to the output context when they reach a certain size (say 100,000 entries). Then clear out the map and continue (remember to flush the map in the cleanup method too). If you still truly have hundreds of millions of...
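
A hedged Hadoop Streaming sketch of that in-mapper combining pattern (threshold illustrative):

    #!/usr/bin/env python
    import sys

    FLUSH_AT = 100_000
    counts = {}

    def flush():
        for word, n in counts.items():
            print(f"{word}\t{n}")
        counts.clear()

    for line in sys.stdin:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
        if len(counts) >= FLUSH_AT:   # cap the map's memory footprint
            flush()

    flush()  # final flush, mirroring the cleanup step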

Title: boolean indexing on index (instead of dataframe)
Q_Id 13,701,035 | CreationDate 2012-12-04T10:39:00.000 | Q_Score 1 | AnswerCount 3 | ViewCount 148 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: When I have a pandas.DataFrame df with columns ["A", "B", "C", "D"], I can filter it using constructions like df[df["B"] == 2]. How do I do the equivalent of df[df["B"] == 2] if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby(["A", "B"]).mean() or df.set_index(["A", "B"]))
A_Id 13,701,036 | is_accepted false | Score 0 | Users Score 0 | Available Count 2
Answer: I see two ways of getting this, both of which look like a detour - which makes me think there must be a better way that I'm overlooking. 1. Converting the MultiIndex into columns: df[df.reset_index()["B"] == 2] 2. Swapping the name I want to use to the start of the MultiIndex and then using lookup by index: df.swaplevel(0, "...

Title: boolean indexing on index (instead of dataframe)
Q_Id 13,701,035 | CreationDate 2012-12-04T10:39:00.000 | Q_Score 1 | AnswerCount 3 | ViewCount 148 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: (same as the previous record)
A_Id 13,755,051 | is_accepted true | Score 1.2 | Users Score 1 | Available Count 2
Answer: I would suggest either: df.xs(2, level='B') or df[df.index.get_level_values('B') == val] I'd like to make the syntax for the latter operation a little nicer.
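
Both forms from the accepted answer, runnable end to end on a small grouped frame:

    import pandas as pd

    df = pd.DataFrame({"A": list("xxyy"), "B": [1, 2, 1, 2], "C": range(4)})
    g = df.groupby(["A", "B"]).mean()

    print(g.xs(2, level="B"))                     # cross-section at B == 2
    print(g[g.index.get_level_values("B") == 2])  # boolean mask on the level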

Title: Plotting data using Flot and MySQL
Q_Id 13,772,857 | CreationDate 2012-12-07T23:52:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 428 | Topics: Database and SQL, Data Science and Machine Learning
Tags: python,mysql,flot
Question: So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB. Table: sensors. The table contains 3 columns. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor d...
A_Id 13,774,224 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: 1. Install an httpd server. 2. Install PHP. 3. Write a PHP script to fetch the data from the database and render it as a webpage. This is a fairly elaborate request with relatively few details given; more information will allow us to give better answers.

Title: Finding the indices of the top three values via argmin() or min() in python/numpy without mutation of list?
Q_Id 13,783,071 | CreationDate 2012-12-08T23:33:00.000 | Q_Score 14 | AnswerCount 2 | ViewCount 11,034 | Topics: Data Science and Machine Learning
Tags: python,list,numpy,min
Question: So I have this list called sumErrors that's 16000 rows and 1 column, and this list is already presorted into 5 different clusters. And what I'm doing is slicing the list for each cluster and finding the index of the minimum value in each slice. However, I can only find the first minimum index using argmin(). I don't th...
A_Id 37,094,880 | is_accepted false | Score 0.379949 | Users Score 4 | Available Count 1
Answer: numpy.argpartition(cluster, 3) would be much more effective.
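
Spelling out the argpartition suggestion (data hypothetical): kth=3 places the three smallest values' indices, unordered, in the first three slots, without sorting or mutating the input.

    import numpy as np

    errors = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.2])
    idx = np.argpartition(errors, 3)[:3]   # indices of the 3 smallest
    print(idx, errors[idx])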

Title: Numpy error: Singular matrix
Q_Id 13,795,682 | CreationDate 2012-12-10T05:47:00.000 | Q_Score 13 | AnswerCount 2 | ViewCount 60,944 | Topics: Data Science and Machine Learning
Tags: python,numpy
Question: What does the error Numpy error: Matrix is singular mean specifically (when using the linalg.solve function)? I have looked on Google but couldn't find anything that made it clear when this error occurs.
A_Id 13,795,874 | is_accepted true | Score 1.2 | Users Score 25 | Available Count 1
Answer: A singular matrix is one that is not invertible. This means that the system of equations you are trying to solve does not have a unique solution; linalg.solve can't handle this. You may find that linalg.lstsq provides a usable solution.
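
A concrete illustration of the solve-versus-lstsq distinction on a rank-deficient system:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])   # rank 1, hence singular
    b = np.array([3.0, 6.0])

    # np.linalg.solve(A, b) raises LinAlgError("Singular matrix") here;
    # lstsq returns a least-squares / minimum-norm solution instead.
    x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)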

Title: FFT in Numpy (Python) when N is not a power of 2
Q_Id 13,841,296 | CreationDate 2012-12-12T13:52:00.000 | Q_Score 4 | AnswerCount 2 | ViewCount 6,896 | Topics: Data Science and Machine Learning
Tags: python,numpy,fft
Question: My question is about the algorithm which is used in Numpy's FFT function. The documentation of Numpy says that it uses the Cooley-Tukey algorithm. However, as you may know, this algorithm works only if the number N of points is a power of 2. Does numpy pad my input vector x[n] in order to calculate its FFT X[k]? (I do...
A_Id 19,329,962 | is_accepted false | Score 0.197375 | Users Score 2 | Available Count 1
Answer: In my experience the algorithms don't do automatic padding, or at least some of them don't. For example, running the scipy.signal.hilbert method on a signal that wasn't of length == a power of two took about 45 seconds. When I padded the signal myself with zeros to such a length, it took 100ms. YMMV but it's somethin...

Title: I want Python as front end, Fortran as back end. I also want to make fortran part parallel - best strategy?
Q_Id 13,852,646 | CreationDate 2012-12-13T03:51:00.000 | Q_Score 5 | AnswerCount 3 | ViewCount 929 | Topics: Python Basics and Environment, Other, Data Science and Machine Learning
Tags: python,arrays,parallel-processing,fortran,f2py
Question: I have a python script I hope to do roughly this: calls some particle positions into an array; runs an algorithm over all 512^3 positions to distribute them to an NxNxN matrix; feeds that matrix back to python; uses plotting in python to visualise the matrix (i.e. mayavi). First I have to write it in serial, but ideally I want to ...
A_Id 13,858,423 | is_accepted false | Score 0.132549 | Users Score 2 | Available Count 1
Answer: An alternative approach to VladimirF's suggestion could be to set up the two parts as a client-server construct, where your Python part talks to the Fortran part using sockets. Though this comes with the burden of implementing some protocol for the interaction, it has the advantage that you get a clean separation a...

Title: Efficient way to find number of distinct elements in a list
Q_Id 13,875,584 | CreationDate 2012-12-14T09:09:00.000 | Q_Score 0 | AnswerCount 2 | ViewCount 296 | Topics: Data Science and Machine Learning
Tags: python,python-3.x,k-means
Question: I'm trying to do K-Means Clustering using Kruskal's Minimum Spanning Tree Algorithm. My original design was to run the full-length Kruskal algorithm on the input and produce an MST, after which I delete the last k-1 edges (or equivalently the k-1 most expensive edges). Of course this is the same as running the Kruskal algorithm ...
A_Id 13,875,710 | is_accepted false | Score 0.099668 | Users Score 1 | Available Count 1
Answer: One way is to sort your list and then run over the elements, comparing each one to the previous one. If they are not equal, add 1 to your "distinct counter". The scan itself is O(n); for the sorting you can use the algorithm you prefer, such as quicksort or merge sort, but I guess there is an available sorting...
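
The sort-and-scan idea as runnable code (O(n log n) overall), next to the usual hashing alternative:

    def count_distinct_sorted(items):
        s = sorted(items)
        # Count elements that differ from their predecessor.
        return sum(1 for i, x in enumerate(s) if i == 0 or x != s[i - 1])

    data = [3, 1, 2, 3, 1]
    assert count_distinct_sorted(data) == 3
    assert len(set(data)) == 3   # hashing alternative, expected O(n)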

Title: Python - Dimension of Data Frame
Q_Id 13,921,647 | CreationDate 2012-12-17T20:27:00.000 | Q_Score 104 | AnswerCount 2 | ViewCount 147,970 | Topics: Data Science and Machine Learning
Tags: python,pandas
Question: New to Python. In R, you can get the dimension of a matrix using dim(...). What is the corresponding function in Python Pandas for their data frame?
A_Id 13,921,674 | is_accepted true | Score 1.2 | Users Score 165 | Available Count 1
Answer: df.shape, where df is your DataFrame.

Title: sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset?
Q_Id 13,982,983 | CreationDate 2012-12-21T01:14:00.000 | Q_Score 2 | AnswerCount 1 | ViewCount 265 | Topics: Data Science and Machine Learning
Tags: python,machine-learning,libsvm,scikit-learn,scikits
Question: sklearn.svm.SVC doesn't give the index of support vectors for a sparse dataset. Is there any hack/way to get the index of the SVs?
A_Id 13,986,712 | is_accepted true | Score 1.2 | Users Score 0 | Available Count 1
Answer: Not without going into the Cython code, I am afraid. This has been on the todo list for way too long. Any help with it would be much appreciated; it shouldn't be too hard, I think.

Title: Why are my Pylot graphs blank?
Q_Id 13,989,166 | CreationDate 2012-12-21T11:18:00.000 | Q_Score 0 | AnswerCount 1 | ViewCount 682 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,numpy,matplotlib,pylot
Question: I'm using Pylot 1.26 with Python 2.7 on Windows 7 64-bit, having installed Numpy 1.6.2 and Matplotlib 1.1.0. The test case executes and produces a report, but the response time graph is empty (no data) and the throughput graph is just one straight line. I've tried the 32-bit and 64-bit installers but the result is the sa...
A_Id 15,980,514 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: I had the same problem. I spent some time on it today debugging a few things, and I realized the problem for me was that the data collected to plot the charts wasn't correct and needed adjusting. What I did was just change the time from absolute to relative and dynamically adjust the range of the axis. I'm not t...

Title: update U V data for matplotlib streamplot
Q_Id 14,020,155 | CreationDate 2012-12-24T10:35:00.000 | Q_Score 4 | AnswerCount 1 | ViewCount 1,159 | Topics: Data Science and Machine Learning
Tags: python,matplotlib,scipy
Question: After plotting streamlines using matplotlib.streamplot, I need to change the U V data and update the plot. For imshow and quiver there are the functions 'set_data' and 'set_UVC', respectively. There does not seem to be any similar function for streamlines. Is there any way to still update, or get similar functionality?
A_Id 15,859,052 | is_accepted false | Score 0 | Users Score 0 | Available Count 1
Answer: I suspect the answer is no, because if you change the vectors, it would need to re-compute the streamlines. The objects returned by streamplot are line and patch collections, which know nothing about the vectors. To get this functionality would require writing a new class to wrap everything up and finding a sensible...

Title: OpenCV anonymous/guaranteed unique window
Q_Id 14,035,161 | CreationDate 2012-12-26T01:23:00.000 | Q_Score 2 | AnswerCount 2 | ViewCount 343 | Topics: Python Basics and Environment, Data Science and Machine Learning
Tags: python,opencv
Question: Quite new to OpenCV, so please bear with me: I need to open up a temporary window for user input, but I need to be certain it won't overwrite a previously opened window. Is there a way to open up either an anonymous window, or somehow create a guaranteed unique window name? Obviously a long random string would be pretty...
A_Id 14,048,691 | is_accepted false | Score 0.197375 | Users Score 2 | Available Count 1
Answer: In modules/highgui/src/window_w32.cpp (or in some other file if you are not using Windows - look at void cv::namedWindow( const string& winname, int flags ) in ...src/window.cpp) there is a function static CvWindow* icvFindWindowByName( const char* name ), which is probably what you need, but it's internal, so the authors of ...

Title: Calculating Point Density using Python
Q_Id 14,070,565 | CreationDate 2012-12-28T13:53:00.000 | Q_Score 7 | AnswerCount 4 | ViewCount 12,400 | Topics: Data Science and Machine Learning
Tags: python
Question: I have a list of X and Y coordinates from geodata of a specific part of the world. I want to assign each coordinate, a weight, based upon where it lies in the graph. For Example: If a point lies in a place where there are a lot of other nodes around it, it lies in a high density area, and therefore has a higher weight....
A_Id 14,070,812 | is_accepted false | Score 0.049958 | Users Score 1 | Available Count 1
Answer: Yes, you do have edges, and they are the distances between the nodes. In your case, you have a complete graph with weighted edges. Simply derive the distance from each node to each other node -- which gives you O(N^2) in time complexity --, and use both nodes and edges as input to one of these approaches you found. Hap...

Title: Performance/standard using 1d vs 2d vectors in numpy
Q_Id 14,126,201 | CreationDate 2013-01-02T17:14:00.000 | Q_Score 5 | AnswerCount 2 | ViewCount 1,655 | Topics: Data Science and Machine Learning
Tags: python,matlab,numpy,linear-algebra
Question: Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB, which represents vectors as 2d arrays.
A_Id 14,126,790 | is_accepted false | Score 0.099668 | Users Score 1 | Available Count 1
Answer: In MATLAB (for historical reasons, I would argue) the basic type is an M-by-N array (matrix), so that scalars are 1-by-1 arrays and vectors are either N-by-1 or 1-by-N arrays (memory layout is always Fortran-style). This "limitation" is not present in numpy: you have true scalars, and ndarrays can have as many dimensions as you...

Title: Test for statistically significant difference between two arrays
Q_Id 14,176,280 | CreationDate 2013-01-05T20:49:00.000 | Q_Score 0 | AnswerCount 4 | ViewCount 7,302 | Topics: Data Science and Machine Learning
Tags: python,arrays,numpy,statistics,scipy
Question: I have two 2-D arrays with the same shape (105,234), named A & B, essentially comprised of mean values from other arrays. I am familiar with Python's scipy package, but I can't seem to find a way to test whether or not the two arrays are statistically significantly different at each individual array index. I'm thinking...
A_Id 26,791,595 | is_accepted false | Score -0.099668 | Users Score -2 | Available Count 1
Answer: Go to MS Excel. If you don't have it, your work does; there are alternatives. Enter the arrays of numbers in an Excel worksheet. Run the formula in the entry field, =TTEST(array1,array2,tail). One tail is one, two tails is two... easy peasy. It's a simple Student's t, and I believe you may still need a t-table to interpre...
0
28,408,552
0
0
0
0
1
false
3
2013-01-07T07:14:00.000
1
4
0
Microarray hierarchical clustering and PCA with python
14,191,487
0.049958
python,bioinformatics,pca,biopython,hierarchical-clustering
I recommend using R Bioconductor and free software like Expander and MeV. A good, flexible choice is the Cluster software with TreeView. You can also run R and STATA or JMP from your Python code and completely automate your data management.
I'm trying to analyze microarray data using hierarchical clustering of the microarray columns (results from the individual microarray replicates) and PCA. I'm new to python. I have python 2.7.3, biopython, numpy, matplotlib, and networkx. Are there functions in python or biopython (similar to MATLAB's clustergram and ...
0
1
1,029
0
14,223,556
0
0
0
0
1
true
1
2013-01-07T17:50:00.000
1
1
0
heterogeneous data logging and analysis
14,201,284
1.2
python,logging,numpy,matplotlib
Another option for storage could be using hdf5 or pytables. Depending on how you structure the data, with pytables you can query the data at key "points". As noted in the comments, I don't think an off-the-shelf solution exists.
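A minimal sketch of the pytables route via pandas' HDFStore, keyed by frame number plus a descriptive name (the key scheme and file name are hypothetical):

import numpy as np
import pandas as pd

store = pd.HDFStore('log.h5')
frame_number = 42
edges = np.random.rand(4, 4)   # stand-in for an algorithm's output

# log under a two-part key: frame number plus a description
store['frame_%d/edges' % frame_number] = pd.DataFrame(edges)
store.close()

# later, query the data back at that key "point"
store = pd.HDFStore('log.h5')
logged = store['frame_42/edges'].values
store.close()

Plots would need to be rendered to arrays (or saved alongside) first, since HDF5 stores numeric data.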
I'm using python to prototype the algorithms of a computer vision system I'm creating. I would like to be able to easily log heterogeneous data, for example: images, numpy arrays, matplotlib plots, etc, from within the algorithms, and do that using two keys, one for the current frame number and another to describe the ...
0
1
281
1
14,233,016
0
0
0
0
1
false
0
2013-01-09T09:53:00.000
2
1
0
2D image projections to 3D Volume
14,232,451
0.379949
python,image-processing,3d,2d
There are several things you could mean -- none of which, I think, currently exists in free software (but I may be wrong about that) -- and they differ in how hard they are to implement: First of all, "a 3D volume" is not a clear definition of what you want. There is not one way to store this information. A usual way (for comp...
I am looking for a library, example or similar that allows me to loads a set of 2D projections of an object and then converts it into a 3D volume. For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it. The object I need to convert is very si...
0
1
1,288
0
14,236,501
0
0
0
0
1
false
1
2013-01-09T13:31:00.000
2
2
0
Advantage of metropolis hastings or MonteCarlo methods over a simple grid search?
14,236,371
0.197375
python,montecarlo
When the search space becomes larger, it can become infeasible to do an exhaustive search. So we turn to Monte Carlo methods out of necessity.
I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data. So far I have done a simple grid search in python, looping through all of the possible parameter combinations and returning those results...
0
1
983
0
14,242,912
0
1
0
0
1
false
1
2013-01-09T17:18:00.000
0
1
0
use / load new python module without installation
14,242,764
0
python,numpy,scipy,python-module
Use the --user option to easy_install or setup.py install to indicate where the installation is to take place. It should point to a directory where you have write access. Once the module has been built and installed, you then need to set the environment variable PYTHONPATH to point to that location. When you next run the pyt...
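As a sketch of the same idea done from inside Python, you can also prepend the install location to sys.path at the top of your script (the path below is hypothetical):

import sys
# directory where the --user / --prefix install landed
sys.path.insert(0, '/home/youruser/local/lib/python2.7/site-packages')

import numpy   # now resolvable from the user-writable location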
I am totally new to Python, and I have to use some modules in my code, like numpy and scipy, but I have no permission on my hosting to install new modules using easy_install or pip (and of course I don't know how to install new modules in a directory where I have permission [I have SSH access]). I have downloaded n...
0
1
1,476
0
34,036,255
0
0
0
0
3
false
76
2013-01-10T09:08:00.000
15
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
1
python,machine-learning,data-mining,classification,scikit-learn
The simple answer: multiply the results! It's the same. Naive Bayes is based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features - meaning you calculate the Bayes probability dependent on a specific feature without holding the others - which means that the algorithm multiplies e...
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age",...
0
1
29,596
0
69,929,209
0
0
0
0
3
false
76
2013-01-10T09:08:00.000
0
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
0
python,machine-learning,data-mining,classification,scikit-learn
You will need the following steps: Calculate the probability from the categorical variables (using the predict_proba method of BernoulliNB). Calculate the probability from the continuous variables (using the predict_proba method of GaussianNB). Multiply 1. and 2., and divide by the prior (either from BernoulliNB or from Gaus...
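A sketch of those steps with made-up data, where X_cat holds the binary/categorical columns and X_cont the continuous ones (both hypothetical):

import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB

X_cat = np.random.randint(0, 2, (100, 3))   # e.g. "Registered online"
X_cont = np.random.randn(100, 2)            # e.g. "Age", standardised
y = np.random.randint(0, 2, 100)

bnb = BernoulliNB().fit(X_cat, y)
gnb = GaussianNB().fit(X_cont, y)

# steps 1-3: multiply the two posteriors and divide out the prior
# (it is otherwise counted twice), then renormalise the rows
proba = bnb.predict_proba(X_cat) * gnb.predict_proba(X_cont) / gnb.class_prior_
proba /= proba.sum(axis=1, keepdims=True)
prediction = proba.argmax(axis=1)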
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age",...
0
1
29,596
0
14,255,284
0
0
0
0
3
true
76
2013-01-10T09:08:00.000
74
6
0
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
14,254,203
1.2
python,machine-learning,data-mining,classification,scikit-learn
You have at least two options: Transform all your data into a categorical representation by computing percentiles for each continuous variable and then binning the continuous variables using the percentiles as bin boundaries. For instance, for the height of a person, create the following bins: "very small", "small", "r...
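A sketch of the first option (percentile binning), with a made-up height variable:

import numpy as np

heights = np.random.normal(170, 10, 1000)      # a continuous feature

# percentiles as bin boundaries -> five bins of roughly equal size
boundaries = np.percentile(heights, [20, 40, 60, 80])
binned = np.digitize(heights, boundaries)      # integer categories 0..4

The integer categories can then be one-hot encoded and fed to a categorical Naive Bayes model alongside the other discrete features.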
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age",...
0
1
29,596
0
14,260,955
0
0
0
0
1
false
4
2013-01-10T15:06:00.000
18
8
0
How to randomly generate decreasing numbers in Python?
14,260,923
1
python,random,python-2.7,numbers
I would generate a list of n random numbers, then sort them highest to lowest.
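A one-line sketch of that idea:

import random

n, low, high = 6, 0, 100
numbers = sorted((random.randint(low, high) for _ in range(n)), reverse=True)
print(numbers)   # e.g. [96, 57, 43, 23, 9, 0]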
I'm wondering if there's a way to generate decreasing numbers within a certain range? I want to program to keep outputting until it reaches 0, and the highest number in the range must be positive. For example, if the range is (0, 100), this could be a possible output: 96 57 43 23 9 0 Sorry for the confusion from my ori...
0
1
5,706
0
59,647,574
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
-2
16
0
"Large data" workflows using pandas
14,262,433
-0.024995
python,mongodb,pandas,hdf5,large-data
At the moment I am working "like" you, just at a lower scale, which is why I don't have a PoC for my suggestion. However, I seem to find success in using pickle as a caching system and outsourcing execution of various functions into files - executing these files from my command / main file; for example, I use a prepare_u...
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I ...
0
1
341,120
0
29,910,919
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
21
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
One more variation: many of the operations done in pandas can also be done as a DB query (SQL, Mongo). Using an RDBMS or MongoDB allows you to perform some of the aggregations in the DB query (which is optimized for large data, and uses cache and indexes efficiently). Later, you can perform post-processing using pandas. Th...
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I ...
0
1
341,120
0
20,690,383
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
167
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
I think the answers above are missing a simple approach that I've found very useful. When I have a file that is too large to load in memory, I break up the file into multiple smaller files (either by rows or columns). Example: in the case of 30 days' worth of trading data of ~30GB size, I break it into one file per day of ~1GB si...
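A sketch of the split-then-aggregate loop, with hypothetical file and column names:

import pandas as pd

results = []
for day in range(1, 31):
    # each daily file is small enough to fit in memory on its own
    df = pd.read_csv('trades_day_%02d.csv' % day)
    # reduce immediately, keeping only the aggregate per file
    results.append(df.groupby('symbol')['price'].mean())
summary = pd.concat(results, axis=1)

pandas.read_csv's chunksize argument achieves a similar effect without physically splitting the file.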
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I ...
0
1
341,120
0
19,739,768
0
0
0
0
4
false
1,156
2013-01-10T16:20:00.000
72
16
0
"Large data" workflows using pandas
14,262,433
1
python,mongodb,pandas,hdf5,large-data
If your datasets are between 1 and 20GB, you should get a workstation with 48GB of RAM. Then pandas can hold the entire dataset in RAM. I know it's not the answer you're looking for here, but doing scientific computing on a notebook with 4GB of RAM isn't reasonable.
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I ...
0
1
341,120
0
14,271,696
0
1
0
0
1
false
1
2013-01-11T01:18:00.000
1
2
0
Pandas storing 1000's of dataframe objects
14,270,163
0.099668
python,object,pandas,dataframe,storage
Redis with redis-py is one solution. Redis is really fast and there are nice Python bindings. PyTables, as mentioned above, is a good choice as well. PyTables is built on HDF5 and is really, really fast.
I am working on a large project that does SPC analysis and have 1000's of different unrelated dataframe objects. Does anyone know of a module for storing objects in memory? I could use a python dictionary but would like it more elaborate and functional mechanisms like locking, thread safe, who has it and a waiting lis...
0
1
1,701
0
14,309,807
0
0
0
0
1
true
1
2013-01-13T14:27:00.000
0
1
0
Feature importance based on extremely randomize trees and feature redundancy
14,304,420
1.2
python-2.7,scikit-learn
Maybe you could extract the top n important features and then compute pairwise Spearman's or Pearson's correlations for those, in order to detect redundancy only among the top informative features, as it might not be feasible to compute all pairwise feature correlations (quadratic in the number of features). There might ...
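A sketch of that idea with synthetic data (the top-n cutoff and correlation threshold are arbitrary):

import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import ExtraTreesClassifier

X = np.random.rand(200, 50)
y = np.random.randint(0, 2, 200)
clf = ExtraTreesClassifier(n_estimators=100).fit(X, y)

top = np.argsort(clf.feature_importances_)[::-1][:10]  # top-10 features
rho, p = spearmanr(X[:, top])                          # pairwise correlations
# pairs of top features that look nearly redundant
redundant_pairs = np.argwhere(np.triu(np.abs(rho) > 0.95, k=1))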
I am using the Scikit-learn Extremely Randomized Trees algorithm to get info about the relative feature importances and I have a question about how "redundant features" are ranked. If I have two features that are identical (redundant) and important to the classification, the extremely randomized trees cannot detect the...
0
1
203
0
14,309,992
0
0
0
0
1
true
0
2013-01-13T22:23:00.000
2
1
0
How to save memory for a large python array?
14,308,889
1.2
python,arrays
Given your description, a sparse representation may not be very useful to you. There are many other options, though: Make sure your values are represented using the smallest data type possible. The example you show above is best represented as single-byte integers. Reading into a numpy array or python array will give...
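A sketch of the first suggestion: at 20332 x 17009 cells, single-byte integers need roughly 0.35 GB instead of the ~2.8 GB that default float64 would take (the file name is hypothetical):

import numpy as np

# parse directly into single-byte integers rather than float64
data = np.loadtxt('data.csv', delimiter=',', dtype=np.int8)
print(data.nbytes / 1e9, 'GB')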
I read in a large python array from a csv file (20332 * 17009) using a Windows 7 64-bit machine with 12 GB of RAM. The array has values in about half of its places, like the example below. I only need the parts of the array that have values for analysis, rather than the whole array. [0 0 0 0 0 0 0 0 0 3 8 0 0 4 2 7 0 0 0 0 5 2 0 0 0 0 1 0 0 ...
0
1
733
0
44,592,825
0
0
0
0
1
false
18
2013-01-16T16:57:00.000
1
3
0
Python Pandas - Deleting multiple series from a data frame in one command
14,363,640
0.066568
python,pandas
You can also specify a list of columns to keep with the usecols option in pandas.read_table. This speeds up the loading process as well.
In short ... I have a Python Pandas data frame that is read in from an Excel file using 'read_table'. I would like to keep a handful of the series from the data, and purge the rest. I know that I can just delete what I don't want one-by-one using 'del data['SeriesName']', but what I'd rather do is specify what to kee...
0
1
30,755
0
14,369,860
0
0
0
0
2
false
3
2013-01-16T23:21:00.000
0
3
0
How do I make large datasets load quickly in Python?
14,369,696
0
python,performance,data-mining,pdb,large-data
Write a script that does the selects, the object-relational conversions, then pickles the data to a local file. Your development script will start by unpickling the data and proceeding. If the data is significantly smaller than physical RAM, you can memory map a file shared between two processes, and write the pickled ...
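A minimal sketch of the pickle-cache pattern, where expensive_load() stands in for your existing slow loader and the cache file name is hypothetical:

import os
import pickle

CACHE = 'dataset.pkl'

def expensive_load():
    # stand-in for the slow SQLite/CSV loading you already have
    return list(range(10 ** 6))

def load_data():
    if os.path.exists(CACHE):           # fast path on every rerun
        with open(CACHE, 'rb') as f:
            return pickle.load(f)
    data = expensive_load()             # slow path, runs once
    with open(CACHE, 'wb') as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    return data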
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds waiting for data to load. Loading data streams (e.g. from a SQLite database) s...
0
1
806
0
63,300,344
0
0
0
0
2
false
3
2013-01-16T23:21:00.000
0
3
0
How do I make large datasets load quickly in Python?
14,369,696
0
python,performance,data-mining,pdb,large-data
Jupyter notebook allows you to load a large data set into a memory resident data structure, such as a Pandas dataframe in one cell. Then you can operate on that data structure in subsequent cells without having to reload the data.
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds waiting for data to load. Loading data streams (e.g. from a SQLite database) s...
0
1
806
0
14,386,145
0
0
0
0
1
true
6
2013-01-17T18:42:00.000
1
2
0
Scipy Binary Closing - Edge Pixels lose value
14,385,921
1.2
python,image,image-processing,numpy,scipy
Operations that involve information from neighboring pixels, such as closing, will always have trouble at the edges. In your case, this is very easy to get around: just process subimages that are slightly larger than your tiling, and keep the good parts when stitching together.
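A sketch of the overlap-and-stitch idea, assuming the padding exceeds the structuring element's radius (tile and pad sizes are illustrative):

import numpy as np
from scipy import ndimage

image = np.random.rand(1000, 1000) > 0.5   # stand-in binary image
tile, pad = 250, 10                        # pad > structuring-element radius

out = np.zeros_like(image)
for i in range(0, image.shape[0], tile):
    for j in range(0, image.shape[1], tile):
        # process a tile slightly larger than the target region...
        r0, c0 = max(i - pad, 0), max(j - pad, 0)
        closed = ndimage.binary_closing(image[r0:i + tile + pad, c0:j + tile + pad])
        # ...then keep only the interior when stitching back together
        out[i:i + tile, j:j + tile] = closed[i - r0:i - r0 + tile, j - c0:j - c0 + tile]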
I am attempting to fill holes in a binary image. The image is rather large so I have broken it into chunks for processing. When I use the scipy.ndimage.morphology.binary_fill_holes functions, it fills larger holes that belong in the image. So I tried using scipy.ndimage.morphology.binary_closing, which gave the desir...
0
1
2,670
0
14,389,347
0
0
1
0
1
true
1
2013-01-17T22:26:00.000
5
2
0
Compress large python objects
14,389,279
1.2
python,memory,numpy,compression
Incremental (de)compression should be done with zlib.{de,}compressobj() so that memory consumption can be minimized. Additionally, higher compression ratios can be attained for most data by using bz2 instead.
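A sketch of the compressobj() approach, compressing an already-serialized file in bounded memory (paths and chunk size are illustrative; it assumes the object has first been written out, e.g. via pickle):

import zlib

CHUNK = 16 * 1024 * 1024   # 16 MB chunks keep peak memory bounded

def compress_file(src_path, dst_path, level=9):
    comp = zlib.compressobj(level)
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(comp.compress(chunk))
        dst.write(comp.flush())   # emit whatever is still buffered

bz2.BZ2Compressor offers the same incremental interface if you switch codecs for a better ratio.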
I am trying to compress a huge python object (~15G) and save it to disk. Due to requirement constraints I need to compress this file as much as possible. I am presently using zlib.compress(9). My main concern is that the memory taken exceeds what I have available on the system (32G) during compression, and going forward the...
0
1
1,086