Dataset schema (column: dtype, observed range):
- GUI and Desktop Applications: int64, 0 to 1
- A_Id: int64, 5.3k to 72.5M
- Networking and APIs: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- Other: int64, 0 to 1
- Database and SQL: int64, 0 to 1
- Available Count: int64, 1 to 13
- is_accepted: bool, 2 classes
- Q_Score: int64, 0 to 1.72k
- CreationDate: string, length 23
- Users Score: int64, -11 to 327
- AnswerCount: int64, 1 to 31
- System Administration and DevOps: int64, 0 to 1
- Title: string, length 15 to 149
- Q_Id: int64, 5.14k to 60M
- Score: float64, -1 to 1.2
- Tags: string, length 6 to 90
- Answer: string, length 18 to 5.54k
- Question: string, length 49 to 9.42k
- Web Development: int64, 0 to 1
- Data Science and Machine Learning: int64, 1 to 1
- ViewCount: int64, 7 to 3.27M

Title: How to import a dataset from S3 into Cassandra?
Q_Id: 28,417,806 | A_Id: 28,419,293 | Tags: python,cassandra,datastax-enterprise | CreationDate: 2015-02-09T19:34:00.000
Q_Score: 0 | Users Score: 1 | Score: 1.2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,657 | is_accepted: true
Topics: Database and SQL; Data Science and Machine Learning
Question: I launched a Spark/Cassandra cluster with DataStax DSE in the AWS cloud, so my dataset is stored in S3, but I don't know how to transfer the data from S3 to my Cassandra cluster. Please help me.
Answer: The details depend on your file format and C* data model, but it might look something like this: read the file from S3 into an RDD with val rdd = sc.textFile("s3n://mybucket/path/filename.txt.gz"), manipulate the RDD, then write it to a Cassandra table with rdd.saveToCassandra("test", "kv", SomeColumns("key", "value")).

Title: Getting an error with softmax and cross entropy in Theano
Q_Id: 28,418,823 | A_Id: 34,094,065 | Tags: python,theano,softmax | CreationDate: 2015-02-09T20:34:00.000
Q_Score: 1 | Users Score: 0 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,358 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I'm implementing a DNN with Theano. At the last layer of the DNN I'm using a softmax as the nonlinear function, from theano.tensor.nnet.softmax. As a loss function I'm using cross entropy, from T.nnet.binary_crossentropy, but I get a strange error: "The following error happened while compiling the node', GpuDnnSoftmaxGrad{tenso...
Answer: Solved. I had to use T.nnet.categorical_crossentropy, since my target variable is an integer vector.

Title: Check whether non-index column sorted in Pandas
Q_Id: 28,419,877 | A_Id: 57,704,035 | Tags: python,pandas | CreationDate: 2015-02-09T21:37:00.000
Q_Score: 33 | Users Score: 52 | Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 17,253 | is_accepted: false
Topics: Data Science and Machine Learning
Question: Is there a way to test whether a dataframe is sorted by a given column that's not an index (i.e., is there an equivalent to is_monotonic() for non-index columns) without calling a sort all over again, and without converting the column into an index?
Answer: Meanwhile, since 0.19.0, there are pandas.Series.is_monotonic_increasing, pandas.Series.is_monotonic_decreasing, and pandas.Series.is_monotonic.

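The monotonicity properties from the answer can be checked directly on any non-index column; a minimal sketch (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2, 5], "b": [4, 3, 2, 1]})

# Per-column monotonicity checks: no sorting, no index conversion
a_sorted = df["a"].is_monotonic_increasing   # non-strict, so repeats are allowed
b_sorted = df["b"].is_monotonic_increasing
b_desc = df["b"].is_monotonic_decreasing
```

These are O(n) properties, so they are much cheaper than re-sorting just to compare.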
Title: numpy.loadtxt: how to ignore comma delimiters that appear inside quotes?
Q_Id: 28,444,272 | A_Id: 64,579,432 | Tags: python,csv,numpy | CreationDate: 2015-02-11T00:08:00.000
Q_Score: 7 | Users Score: 0 | Score: 0 | AnswerCount: 5 | Available Count: 1 | ViewCount: 5,651 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a csv file where a line of data might look like this: 10,"Apple, Banana",20,... When I load the data in Python, the extra comma inside the quotes shifts all my column indices around, so my data is no longer a consistent structure. While I could probably write a complex algorithm that iterates through each row an...
Answer: While there is no such parameter in numpy.loadtxt to ignore quoted or otherwise escaped commas, one alternative that has not been suggested yet is the following: perform a find-and-replace in a text editor to replace commas with tabs, or save the file in Excel as tab-delimited. When you use numpy.loadtx...

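An alternative to the tab-conversion workaround: Python's standard csv module honors quoting, so the embedded comma never splits the field. A minimal sketch with inline data:

```python
import csv
import io

raw = '10,"Apple, Banana",20\n11,"Cherry",30\n'

# csv.reader respects the quotes, so each row keeps exactly 3 fields
rows = list(csv.reader(io.StringIO(raw)))
```

The parsed rows can then be handed to numpy with a consistent column count.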
Title: Networkx duplicate edges
Q_Id: 28,488,559 | A_Id: 28,488,596 | Tags: python,networkx | CreationDate: 2015-02-12T21:54:00.000
Q_Score: 8 | Users Score: 12 | Score: 1.2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 13,392 | is_accepted: true
Topics: Data Science and Machine Learning
Question: If the same edge is added twice to the networkx edge data structure, will it then have two edges between the nodes or still just one? For example, would a spring layout show the nodes converge more with edges [(a,b),(a,b),(a,b),(a,b)] than with [(a,b),(a,b)]? And if I want to weight the edge, how would I go about it?
Answer: You can test it pretty quickly: it only adds them once. Edges and nodes are represented as dictionaries inside the graph structure, and they are only added if they don't already exist. For already existing edges, adding them again has no effect.

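The behavior the answer describes is easy to verify; a small sketch (assumes networkx is installed):

```python
import networkx as nx

G = nx.Graph()
G.add_edge("a", "b")
G.add_edge("a", "b")           # duplicate: no effect on a plain Graph
n_edges = G.number_of_edges()  # still 1

# Weighting an edge is just an attribute on it
G.add_edge("a", "b", weight=4)
w = G["a"]["b"]["weight"]
```

If parallel edges are actually wanted, nx.MultiGraph keeps each addition as a separate edge.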
Title: Retrieve string version of document by ID in Gensim
Q_Id: 28,488,714 | A_Id: 40,033,364 | Tags: python,gensim | CreationDate: 2015-02-12T22:04:00.000
Q_Score: 10 | Users Score: 4 | Score: 0.379949 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,031 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I am using Gensim for some topic modelling and I have gotten to the point where I am doing similarity queries using the LSI and tf-idf models. I get back the set of IDs and similarities, e.g. (299501, 0.64505910873413086). How do I get the text document that is related to the ID, in this case 299501? I have looked at t...
Answer: Sadly, as far as I can tell, you have to start from the very beginning of the analysis knowing that you'll want to retrieve documents by the ids. This means you need to create your own mapping between ids and the original documents and make sure the ids gensim uses are preserved throughout the process. As is, I don't...

Title: Data exchange format: OCaml to Python numpy or pandas
Q_Id: 28,510,059 | A_Id: 28,511,785 | Tags: python,numpy,ocaml,export-to-csv,hdf5 | CreationDate: 2015-02-13T22:48:00.000
Q_Score: 0 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 245 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I'm generating time series data in OCaml, basically long lists of floats, from a few kB to hundreds of MB. I would like to read, analyze and plot them using the Python numpy and pandas libraries. Right now I'm thinking of writing them to csv files. A binary format would probably be more efficient? I'd use HD...
Answer: First of all, I would like to mention that there are actually HDF5 bindings for OCaml. But when I was faced with the same problem, I didn't find one that suited my purposes and was mature enough, so I wouldn't suggest you use it; then again, maybe today there is something more decent. In my experience, the...

Title: Sorting algorithm times using sorting methods
Q_Id: 28,523,247 | A_Id: 28,523,302 | Tags: python,algorithm,sorting,time-complexity,computation-theory | CreationDate: 2015-02-15T05:19:00.000
Q_Score: 1 | Users Score: 8 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 297 | is_accepted: true
Topics: Data Science and Machine Learning
Question: So I just learned about sorting algorithms: bubble, merge, insertion, etc. They all seem to be very similar in their methods of sorting, with what seems to me minimal changes in their approach. So why do they produce such different sorting times, i.e. O(n^2) vs O(n log n), for example?
Answer: The "similarity" that you see is completely illusory. The elementary, O(N squared) approaches repeat their work over and over, without taking any advantage, for the "next step", of any work done on the "previous step". So the first step takes time proportional to N, the second one to N-1, and so on -- and t...

Title: How to cluster a series of right-skewed integers
Q_Id: 28,545,060 | A_Id: 28,546,908 | Tags: python,cluster-analysis | CreationDate: 2015-02-16T15:41:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 80 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a series of integers. What I would like to do is split them into 5 discrete categories. I tried z-scores with bounds (-oo, -2), [-2, -1), [-1, +1], (+1, +2], (+2, +oo) but it doesn't seem to work, probably because of the right-skewed data. So I thought it might work with some sort of clustering. Any ideas?
Answer: On skewed data, it can help a lot to go into logspace. You may first want to understand the distribution better, then split it. Have you tried visualizing the values to identify appropriate splitting points? One-dimensional data can be visualized well, and the results of a manual approach are often better than those of so...

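One hedged way to realize the logspace suggestion: log-transform the values, then cut at quantiles to get 5 roughly equal categories (the skewed data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=3.0, sigma=1.0, size=1000).astype(int) + 1  # right-skewed positive ints

logs = np.log(x)                                 # work in log space
edges = np.quantile(logs, [0.2, 0.4, 0.6, 0.8])  # 4 cut points -> 5 categories
labels = np.digitize(logs, edges)                # category index 0..4 per value
```

Quantile cuts guarantee balanced categories; fixed-width cuts in log space would instead give categories of equal multiplicative span.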
Title: How to multiply two vectors and get a matrix?
Q_Id: 28,578,302 | A_Id: 55,585,713 | Tags: python,numpy,matrix,vector,matrix-multiplication | CreationDate: 2015-02-18T07:33:00.000
Q_Score: 43 | Users Score: 2 | Score: 0.132549 | AnswerCount: 3 | Available Count: 1 | ViewCount: 49,959 | is_accepted: false
Topics: Data Science and Machine Learning
Question: In numpy, I have two vectors, say vector A is 4x1 and vector B is 1x5. If I do A x B, it should result in a matrix of size 4x5. But I have tried many times, with many kinds of reshapes and transposes, and they all either raise an error saying the shapes are not aligned or return a single value. How should I get the output product of mat...
Answer: If you are using numpy: first, make sure you have two vectors, for example vec1.shape = (10,) and vec2.shape = (26,); for a 1-D numpy array, row vector and column vector are the same thing. Second, do res_matrix = vec1.reshape(10, 1) @ vec2.reshape(1, 26). Finally, you should have res_matrix.shape = (10, 26). numpy docume...

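The reshape-and-matmul recipe from the answer, plus np.outer, which computes the same outer product without any reshaping; a minimal sketch:

```python
import numpy as np

a = np.array([1, 2, 3, 4])         # conceptually 4x1
b = np.array([5, 6, 7, 8, 9])      # conceptually 1x5

m1 = np.outer(a, b)                      # outer product, shape (4, 5)
m2 = a.reshape(4, 1) @ b.reshape(1, 5)   # same result via matrix multiplication
```

Plain `a * b` would fail here because broadcasting cannot align shapes (4,) and (5,); the explicit (4, 1) x (1, 5) shapes are what make the 4x5 result well-defined.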
Title: How to get the content of a row of a Numpy array?
Q_Id: 28,608,320 | A_Id: 28,608,797 | Tags: arrays,python-3.x,numpy | CreationDate: 2015-02-19T14:02:00.000
Q_Score: 1 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 41 | is_accepted: true
Topics: Data Science and Machine Learning
Question: How do I get the content of a row of a Numpy array? For example, I have a Numpy array with 3 rows, color = np.array([[255,0,0],[255,255,0],[0,255,0]]), and I want to retrieve the content of the first row, [255,0,0].
Answer: Use array indexing, as below: color[0]

Title: Unable to install scikit-learn on Python 2.7.9 in Windows?
Q_Id: 28,614,874 | A_Id: 35,262,231 | Tags: python-2.7,pip,scikit-learn | CreationDate: 2015-02-19T19:11:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 2 | Available Count: 1 | ViewCount: 771 | is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have Python 2.7.9 (which comes with pip already installed), and I have numpy 1.8.2 and scipy 0.15.1 installed as well. When I try to install scikit-learn, I get the following error: pip install -U scikit-learn SyntaxError: invalid syntax. What am I doing wrong? Or is there another way to install scikit-learn on Wi...
Answer: Changing the directory worked in my case. Suppose your Python 2.7.9 is on the C drive; then set your directory and write the command like this: C:\python27\scripts> pip install -U scikit-learn

Title: Read a file line by line from S3 using boto?
Q_Id: 28,618,468 | A_Id: 58,636,713 | Tags: python,amazon-web-services,amazon-s3,boto | CreationDate: 2015-02-19T22:42:00.000
Q_Score: 35 | Users Score: 20 | Score: 1 | AnswerCount: 10 | Available Count: 1 | ViewCount: 83,468 | is_accepted: false
Topics: Networking and APIs; Data Science and Machine Learning
Question: I have a csv file in S3 and I'm trying to read the header line to get the size (these files are created by our users, so they could be almost any size). Is there a way to do this using boto? I thought maybe I could use a Python BufferedReader, but I can't figure out how to open a stream from an S3 key. Any suggestions wo...
Answer: I know it's a very old question, but as of now we can just use s3_conn.get_object(Bucket=bucket, Key=key)['Body'].iter_lines()

Title: numpy.fft(): what is the return value, amplitude + phase shift or angle?
Q_Id: 28,618,591 | A_Id: 28,618,872 | Tags: python,numpy,fft | CreationDate: 2015-02-19T22:54:00.000
Q_Score: 12 | Users Score: 3 | Score: 0.197375 | AnswerCount: 3 | Available Count: 1 | ViewCount: 29,182 | is_accepted: false
Topics: Data Science and Machine Learning
Question: np.fft.fft() returns a complex array... what is the meaning of the complex numbers? I suppose the real part is the amplitude! Is the imaginary part the phase shift? The phase angle? Or something else? I figured out that the position in the array represents the frequency.
Answer: The magnitude, r, at a given frequency represents the amount of that frequency in the original signal. The complex argument represents the phase angle, theta: x + i*y = r * exp(i*theta), where x and y are the numbers that the numpy FFT returns.

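The polar decomposition the answer describes can be demonstrated with np.abs and np.angle; a small sketch:

```python
import numpy as np

x = np.array([1.0, 0.0, 0.0, 0.0])
X = np.fft.fft(x)             # complex array, one entry per frequency bin

r = np.abs(X)                 # magnitude: "amount" of each frequency
theta = np.angle(X)           # phase angle at each frequency

# x + i*y = r * exp(i*theta) reconstructs the complex values exactly
recon = r * np.exp(1j * theta)
```

So neither the real nor the imaginary part alone is the amplitude; amplitude and phase come from the modulus and argument of each complex entry.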
Title: What is python's equivalent of R's NA?
Q_Id: 28,654,325 | A_Id: 28,665,731 | Tags: python,numpy,pandas,scikit-learn,data-scrubbing | CreationDate: 2015-02-22T03:53:00.000
Q_Score: 33 | Users Score: 9 | Score: 1.2 | AnswerCount: 3 | Available Count: 1 | ViewCount: 54,508 | is_accepted: true
Topics: Data Science and Machine Learning
Question: What is Python's equivalent of R's NA? To be more specific: R has NaN, NA, NULL, Inf and -Inf. NA is generally used when there is missing data. What is Python's equivalent? How do libraries such as numpy and pandas handle missing values? How does scikit-learn handle missing values? Is it different for Python 2.7 and Pytho...
Answer: Scikit-learn doesn't handle missing values currently. For most machine learning algorithms, it is unclear how to handle missing values, so we rely on the user to handle them prior to giving the data to the algorithm. Numpy doesn't have a "missing" value. Pandas uses NaN, but inside numeric algorithms that might lead ...

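For the numpy side of the answer: NaN plus the NaN-aware helpers cover most of what R's NA does for numeric data; a minimal sketch:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

mask = np.isnan(a)       # boolean mask of the missing entries
total = np.nansum(a)     # NaN-aware reduction: treats NaN as missing
plain = a.sum()          # ordinary reductions propagate NaN instead
```

Note NaN only exists for floats, which is one reason pandas upcasts integer columns with missing values to float64.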
Title: How to count the occurrence of certain item in an ndarray?
Q_Id: 28,663,856 | A_Id: 37,332,201 | Tags: python,numpy,multidimensional-array,count | CreationDate: 2015-02-22T22:05:00.000
Q_Score: 558 | Users Score: 7 | Score: 1 | AnswerCount: 31 | Available Count: 2 | ViewCount: 883,665 | is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: In Python, I have an ndarray y that is printed as array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]). I'm trying to count how many 0s and how many 1s there are in this array, but when I type y.count(0) or y.count(1), it says numpy.ndarray object has no attribute count. What should I do?
Answer: y.tolist().count(val), with val 0 or 1. Since a Python list has a native count function, converting to a list before using that function is a simple solution.

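Besides tolist().count(), numpy itself can do the counting without leaving array land; a small sketch reproducing the array from the question:

```python
import numpy as np

y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1])

zeros = np.count_nonzero(y == 0)   # count via a boolean mask
ones = np.count_nonzero(y == 1)
counts = np.bincount(y)            # counts for every non-negative int at once
as_list = y.tolist().count(0)      # the list-based approach from the answer
```

For large arrays the vectorized versions avoid the Python-level loop that tolist().count() implies.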
Title: How to count the occurrence of certain item in an ndarray?
Q_Id: 28,663,856 | A_Id: 59,595,030 | Tags: python,numpy,multidimensional-array,count | CreationDate: 2015-02-22T22:05:00.000
Q_Score: 558 | Users Score: 0 | Score: 0 | AnswerCount: 31 | Available Count: 2 | ViewCount: 883,665 | is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: In Python, I have an ndarray y that is printed as array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]). I'm trying to count how many 0s and how many 1s there are in this array, but when I type y.count(0) or y.count(1), it says numpy.ndarray object has no attribute count. What should I do?
Answer: Here is something through which you can count the number of occurrences of a particular number: count_of_zero = list(y[y == 0]).count(0); print(count_of_zero). The comparison y == 0 produces boolean values, and the True entries select the zeros, so counting over the selected list returns the number of 0s.

Title: Normalizing constant of mixture of dirichlet distribution goes unbounded
Q_Id: 28,689,687 | A_Id: 28,703,110 | Tags: python,machine-learning,statistics,probability,dirichlet | CreationDate: 2015-02-24T06:45:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 492 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I need to calculate PDFs of a mixture of Dirichlet distributions in Python. But each mixture component has a normalizing constant, the inverse beta function, whose numerator contains the gamma function of the sum of the hyper-parameters. So even for a sum of hyper-parameters of size 60 it goes unbounded. Pl...
Answer: Some ideas. (1) To calculate the normalizing factor exactly, maybe you can rewrite the gamma function via gamma(a_i + 1) = a_i * gamma(a_i) (a_i need not be an integer; let the base case be a_i < 1); then you'll have sum(a_i, i, 1, n) terms in the numerator and denominator, and you can reorder them so that you divide t...

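A common practical version of this idea is to work in log space: math.lgamma gives the log-gamma directly, so the normalizing constant never overflows a float. A hedged sketch (the hyper-parameter values are illustrative):

```python
from math import lgamma

alpha = [2.0] * 60  # gamma(sum(alpha)) = gamma(120) would overflow in linear space

# log of the Dirichlet normalizing constant 1/B(alpha):
#   log(1/B) = lgamma(sum(a_i)) - sum(lgamma(a_i))
log_norm = lgamma(sum(alpha)) - sum(lgamma(a) for a in alpha)
```

The mixture PDF can then be evaluated by summing component log-densities with a log-sum-exp, only exponentiating at the very end.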
Title: Numpy fft.pack vs FFTW vs Implement DFT on your own
Q_Id: 28,752,126 | A_Id: 28,753,451 | Tags: python,numpy,fft,fftw | CreationDate: 2015-02-26T20:28:00.000
Q_Score: 0 | Users Score: 7 | Score: 1.2 | AnswerCount: 3 | Available Count: 1 | ViewCount: 4,655 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I currently need to run an FFT on a 1024-sample-point signal. So far I have implemented my own DFT algorithm in Python, but it is very slow. If I use numpy's fftpack, or even move to C++ and use FFTW, do you think it would be better?
Answer: If you implement the DFT entirely within Python, your code will run orders of magnitude slower than either package you mentioned; not just because those libraries are written in much lower-level languages, but also because (FFTW in particular) they are so heavily optimized, taking advantage of cache locality, v...

Title: Obtain optimal number of boosting iterations in GradientBoostingClassifier using grid search
Q_Id: 28,790,032 | A_Id: 28,871,022 | Tags: python,scikit-learn | CreationDate: 2015-03-01T04:17:00.000
Q_Score: 0 | Users Score: 0 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,119 | is_accepted: true
Topics: Data Science and Machine Learning
Question: With GradientBoostingClassifier, suppose I set n_estimators to 2000 and use GridSearchCV to search across learning_rate in [0.01, 0.05, 0.10] - how do I know the number of boosting iterations that produced the optimal result - is the model always going to fit 2000 trees for each value of learning_rate or is it going to ...
Answer: Currently there is no way to directly get the optimum number of estimators from GradientBoostingClassifier. If you also pass n_estimators in the parameter grid to GridSearchCV it will only try the exact values you give it, and return one of these. We are looking to improve this, by searching over the number of estimato...

Title: Create splitting criterion for sklearn trees
Q_Id: 28,829,805 | A_Id: 28,948,827 | Tags: python,scikit-learn | CreationDate: 2015-03-03T10:46:00.000
Q_Score: 0 | Users Score: -1 | Score: -0.197375 | AnswerCount: 1 | Available Count: 1 | ViewCount: 601 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I need splitting criteria for a decision tree other than the provided 'gini' and 'entropy', but I want to use the wonderful sklearn package as a base. Is there a way to get around the C implementation of the tree-building process, as in implementing the criterion in Python and letting the TreeBuilder work with it?
Answer: As Andreas said above, splitting criteria are coded in Cython.

Title: How to do formatting with a combination of pandas dataframe.to_excel and xlsxwriter?
Q_Id: 28,839,976 | A_Id: 28,862,593 | Tags: python,pandas,xlsxwriter | CreationDate: 2015-03-03T19:10:00.000
Q_Score: 1 | Users Score: 2 | Score: 0.379949 | AnswerCount: 1 | Available Count: 1 | ViewCount: 100 | is_accepted: false
Topics: Database and SQL; Data Science and Machine Learning
Question: Is it possible to write nicely formatted Excel files with the dataframe.to_excel/xlsxwriter combo? I am aware that it is possible to format cells when writing with pure xlsxwriter, but dataframe.to_excel takes so much less space. I would like to adjust cell width and add some colors to column names. What other alternatives w...
Answer: I found xlwings. It's intuitive and does all the things I want to do. Also, it works well with all pandas data types.

Title: opencv FaceRecognition during login
Q_Id: 28,850,205 | A_Id: 28,854,271 | Tags: python,windows,visual-studio,opencv,login | CreationDate: 2015-03-04T08:57:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 211 | is_accepted: false
Topics: Data Science and Machine Learning
Question: Maybe some of you can point me in the right direction. I've been playing around with OpenCV FaceRecognition for some time, using Eigenfaces to let it learn to recognize faces. Now I would like to let it run during Windows logon. Precisely, I want to take snapshots of faces when I log into a user account, so after the software ha...
Answer: You can store the snapshots in an array, run your recognition on each image and see if the user is recognized as one of the users you have trained your model on. If not, prompt the user for their name; if the name matches one of the users you trained your model on, add these snapshots to their training set and re-t...

Title: Converting a dataframe from hex to binary in Python
Q_Id: 28,854,821 | A_Id: 28,858,326 | Tags: python,pandas,binary,hex | CreationDate: 2015-03-04T12:42:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 778 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I am pretty new to Python and the pandas library; I just learned how to read a csv file using pandas. My data is actually raw packets I captured from sensor networks, to analyze corrupt packets. What I have now is thousands of rows and hundreds of columns, literally, and the values are all in hex. I need to convert all t...
Answer: If I understood correctly, column 1 holds 00, column 2 holds 55, and so on. If so, you first need to concatenate the columns into one string, value = str(col1) + str(col2) + str(col3), and then use the standard method to convert it to binary.

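The concatenate-then-convert recipe from the answer can be done with int(h, 16) and format(); a minimal sketch (the helper name is illustrative):

```python
# Convert a hex string such as "5A" to its fixed-width binary representation.
def hex_to_bits(h):
    # 4 bits per hex digit; zero-pad so leading zeros survive
    return format(int(h, 16), "0{}b".format(4 * len(h)))

b = hex_to_bits("5A")               # one byte
joined = hex_to_bits("00" + "55")   # concatenate columns first, then convert
```

Applied to a pandas frame, this function could be mapped over each cell (or over row-wise joined strings) to convert the whole table.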
Title: python-igraph pickling efficiency
Q_Id: 28,885,814 | A_Id: 28,891,088 | Tags: python-2.7,igraph | CreationDate: 2015-03-05T19:13:00.000
Q_Score: 1 | Users Score: 1 | Score: 0.099668 | AnswerCount: 2 | Available Count: 2 | ViewCount: 650 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I am a beginner with igraph. I have graph data of 60000 nodes and 900K edges, and I could successfully create the graph using python-igraph and write it to disk. My machine has 3G of memory. When I wrote the graph to disk in graphml format, the memory usage was around 19%; with write_pickle, the usage went up to 50% and took si...
Answer: Pickle is a serializer from the Python standard library. These guesses seem quite likely to me: when igraph was started, they did not want to create their own file format, so they used pickle; now the default behavior for saving graphs is not pickle but their own format. When saving objects with igraph in graphml, the libr...

Title: python-igraph pickling efficiency
Q_Id: 28,885,814 | A_Id: 28,895,245 | Tags: python-2.7,igraph | CreationDate: 2015-03-05T19:13:00.000
Q_Score: 1 | Users Score: 1 | Score: 1.2 | AnswerCount: 2 | Available Count: 2 | ViewCount: 650 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I am a beginner with igraph. I have graph data of 60000 nodes and 900K edges, and I could successfully create the graph using python-igraph and write it to disk. My machine has 3G of memory. When I wrote the graph to disk in graphml format, the memory usage was around 19%; with write_pickle, the usage went up to 50% and took si...
Answer: Pickling is a generic format to store arbitrary objects, which may reference other objects, which may in turn also reference other objects. Therefore, when Python is pickling an object, it must keep track of all the objects that it has "seen" and serialized previously to avoid getting stuck in an infinite loop. That's ...

Title: Identify Data Vectors with New Attributes and/or Values
Q_Id: 28,908,468 | A_Id: 28,910,572 | Tags: python,scikit-learn | CreationDate: 2015-03-06T22:04:00.000
Q_Score: 0 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 34 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I am setting up a classification system using scikit-learn. After training a classifier I would like to save it for reuse along with the necessary transforms such as the DictVectorizer. I am looking for a way to filter the incoming stream of unclassified data that will feed into the feature transforms and classifier...
Answer: Not with anything built into scikit-learn, as removing rows is something that is not easily done in the current API. It should be quite easy to write a custom function / class that does that based on the output of DictVectorizer.

Title: How can I find the optimal buy and sell points for a stock if I have a transaction cost?
Q_Id: 28,933,388 | A_Id: 28,933,536 | Tags: python,algorithm | CreationDate: 2015-03-09T00:01:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 717 | is_accepted: false
Topics: Data Science and Machine Learning
Question: If I have a list of prices, say [4,2,5,6,9,3,1,2,5], a transaction cost of $2, and I am able to buy and short sell, then the optimal strategy is to buy at 2, switch positions at 9, and switch again at 1. So the optimal buy indices are [1,6] and the optimal sell indices are [4]. How can this be solved programmati...
Answer: It seems that the optimal selling indices are those i such that price[i-1] < price[i] and price[i+1] <= price[i], and, for some j > i, price[i] - price[j] > 2. I don't know a name for an algorithm like that, but list comprehensions and the function any should be enough.

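For the long-only variant of this problem (no short selling) there is a standard O(n) dynamic program over two states, "holding" and "not holding", with the fee charged on each sale; a hedged sketch, not the full buy-and-short strategy from the question:

```python
def max_profit(prices, fee):
    """Best achievable profit with a fixed cost per round trip, long positions only."""
    cash = 0                  # best profit while not holding a share
    hold = float("-inf")      # best profit while holding a share
    for p in prices:
        cash = max(cash, hold + p - fee)  # sell today, paying the fee
        hold = max(hold, cash - p)        # buy today
    return cash

# Question's prices: buy at 2/sell at 9 and buy at 1/sell at 5
# net (9 - 2 - 2) + (5 - 1 - 2) = 7
profit = max_profit([4, 2, 5, 6, 9, 3, 1, 2, 5], 2)
```

Allowing short sales would add a third "short" state with the symmetric transitions, but the same one-pass structure applies.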
Title: Not losing the quality of pictures saved with cv2.imwrite()
Q_Id: 28,940,711 | A_Id: 28,940,905 | Tags: python,opencv,colors,python-2.x | CreationDate: 2015-03-09T11:18:00.000
Q_Score: 1 | Users Score: 5 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 2,752 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I am wondering about the effects of OpenCV's cv2.imwrite() function. I noticed that when I read pictures with cv2.imread() and save them again with cv2.imwrite(), their quality is no longer the same to the human eye. How can I keep the quality of the image the same as the original aft...
Answer: JPEG is a lossy format; you need to save your images as PNG, as it is a lossless format.

Title: Color a pixel in python opencv
Q_Id: 28,985,490 | A_Id: 49,022,627 | Tags: python,opencv,image-processing | CreationDate: 2015-03-11T11:30:00.000
Q_Score: 4 | Users Score: 6 | Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 24,622 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I need to color a pixel in an image. I use OpenCV and Python. I tried img[x,y] = [255, 255, 255] to color the pixel (x,y), but it won't work. Is there a mistake in this? Can you suggest another method? Thanks in advance.
Answer: img[x,y] = [255, 255, 255] is wrong because an OpenCV image is a matrix, so the indexing order is (row, column), i.e. (y, x). The mistake is in the order of x and y: if you want to change the color of the point (x,y), use img[y,x] = color.

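The row/column order from the answer, demonstrated on a plain numpy array (which is exactly what an OpenCV image is on the Python side):

```python
import numpy as np

# A 4-row by 6-column 3-channel "image"; OpenCV indexes it as img[row, col] = img[y, x]
img = np.zeros((4, 6, 3), dtype=np.uint8)

x, y = 5, 2
img[y, x] = [255, 255, 255]   # note the (y, x) order when setting pixel (x, y)
```

With the axes swapped, img[x, y] here would even raise an IndexError, since x = 5 exceeds the 4 rows.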
Title: Document Clustering in python using SciKit
Q_Id: 28,994,857 | A_Id: 28,997,147 | Tags: python,machine-learning,scikit-learn,cluster-analysis,unsupervised-learning | CreationDate: 2015-03-11T18:44:00.000
Q_Score: 4 | Users Score: 0 | Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 6,083 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I recently started working on document clustering using the SciKit module in python. However, I am having a hard time understanding the basics of document clustering. What I know: document clustering is typically done using TF/IDF, which essentially converts the words in the documents to a vector space model which is then ...
Answer: For the large matrix after TF/IDF transformation, consider using a sparse matrix. You could try different k values. I am not an expert in unsupervised clustering algorithms, but I bet with such algorithms and different parameters, you could also end up with a varied number of clusters.

Title: How to convert a sparse dict to scipy.sparse matrix in python?
Q_Id: 29,018,843 | A_Id: 29,026,455 | Tags: python,numpy,matrix,scipy | CreationDate: 2015-03-12T19:40:00.000
Q_Score: 1 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,646 | is_accepted: true
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a very large dictionary of the following format: {str: [0, 0, 1, 2.5, 0, 0, 0, ...], str: [0, 0, 0, 1.1, 0, 0, ...], ...}. The number of elements for each str key can be very big, so I need an effective way to store and make calculations over this data. For example, right now my dict of str keys has 100 keys. Each...
Answer: With standard dict methods you can get a list of the keys, and another list of the values. Pass the second to numpy.array and you should get a 100 x 7000 array. The keys list could also be made into an array, but it might not be any more useful than the list. The values array can be turned into a sparse matrix. But its...

Title: Pandas Time-Series: Find previous value for each ID based on year and semester
Q_Id: 29,049,985 | A_Id: 29,050,296 | Tags: python,pandas,time-series | CreationDate: 2015-03-14T14:23:00.000
Q_Score: 1 | Users Score: 1 | Score: 0.066568 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,352 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I realize this is a fairly basic question, but I couldn't find what I'm looking for through searching (partly because I'm not sure how to summarize what I want). In any case: I have a dataframe that has the following columns: * ID (each one represents a specific college course) * Year * Term (0 = fall semester, 1 = spr...
Answer: Use this function to create the new column: DataFrame.shift(periods=1, freq=None, axis=0, **kwds) - shift the index by the desired number of periods, with an optional time freq.

Title: Change the type of a numpy ndarray float element to string
Q_Id: 29,056,302 | A_Id: 29,058,672 | Tags: python,numpy,casting | CreationDate: 2015-03-15T01:35:00.000
Q_Score: 1 | Users Score: 0 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 117 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I have an arff file as input. I read the arff file and put the element values in a numpy ndarray. Now, my arff file contains some '?' characters as some of the elements. Basically these are property values of matrices calculated by anamod; whichever values anamod cannot calculate, it plugs in a '?' character. I want to do ...
Answer: You can't change the type of parts of an ordinary ndarray. An ndarray requires all elements in the array to have the same numpy type (the dtype), so that mathematical operations can be done efficiently. The only way to do this is to change the dtype to object, which allows you to store arbitrary types in each element...

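The object-dtype route from the answer, sketched on a small array (values are illustrative):

```python
import numpy as np

a = np.array([1.5, 2.5, 3.5])

obj = a.astype(object)   # object dtype can hold mixed Python objects
obj[1] = "?"             # now a string sits alongside floats

as_str = a.astype(str)   # alternatively, convert every element to a string
```

The usual caveat applies: object arrays lose numpy's fast vectorized math, so mapping '?' to np.nan in a float array is often the more workable choice.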
Title: a value too large for dtype('float64')
Q_Id: 29,060,962 | A_Id: 29,061,597 | Tags: python,numpy,scikit-learn | CreationDate: 2015-03-15T13:11:00.000
Q_Score: 9 | Users Score: 8 | Score: 1 | AnswerCount: 1 | Available Count: 1 | ViewCount: 26,758 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I'm using numpy to read an arff file and I'm getting the following error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). I used np.isnan(X2.any()) and np.isfinite(X2.all()) to check whether it's a NaN or infinity case, but it's neither. This means it's the third case, which is...
Answer: OK, I got it. After using Imputer(missing_values='NaN', strategy='median', axis=1) and imp.fit(X2), I also had to write X2 = imp.fit_transform(X2), because sklearn.preprocessing.Imputer.fit_transform returns a new array; it doesn't alter the argument array.

Title: Is there a way to profile an OpenCL or a pyOpenCL program?
Q_Id: 29,068,229 | A_Id: 42,962,569 | Tags: python,opencl,pyopencl | CreationDate: 2015-03-16T01:12:00.000
Q_Score: 5 | Users Score: 0 | Score: 0 | AnswerCount: 5 | Available Count: 1 | ViewCount: 2,168 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I am trying to optimize a pyOpenCL program. For this reason I was wondering if there is a way to profile the program and see where most of the time is spent. Do you have any idea how to approach this problem? Thanks in advance, Andi. EDIT: For example, nvidia's nvprof for CUDA would do the trick for pyCuda; however, n...
Answer: CodeXL from AMD works very well.

Title: Shared file access between Python and Matlab
Q_Id: 29,085,298 | A_Id: 29,085,388 | Tags: python,windows,matlab,file,shared | CreationDate: 2015-03-16T19:24:00.000
Q_Score: 2 | Users Score: 0 | Score: 1.2 | AnswerCount: 3 | Available Count: 1 | ViewCount: 212 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I have a MATLAB application that writes to a .csv file and a Python script that reads from it. These operations happen concurrently, each at its own period (not necessarily the same). All of this runs on Windows 7. I wish to know: would the OS inherently provide some sort of locking mechanism so that o...
Answer: I am not sure about Windows' API for locking files. Here's a possible solution: while MATLAB has the file open, create an empty file called "data.lock" or something to that effect. When Python tries to read the file, it will check for the lock file, and if it is there, sleep for a given interval. When M...

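The lock-file protocol from the answer, sketched on the Python side (file names and timings are illustrative; note that check-then-read is not atomic, so this reduces but does not eliminate the race window):

```python
import os
import time

LOCK = "data.lock"  # created by the MATLAB writer while the csv is open

def read_when_unlocked(path, poll=0.1, attempts=50):
    """Poll until the writer's lock file disappears, then read the csv."""
    for _ in range(attempts):
        if not os.path.exists(LOCK):
            with open(path) as f:
                return f.read()
        time.sleep(poll)
    raise TimeoutError("writer held the lock too long")
```

A more robust variant has the writer produce a temporary file and rename it into place, since renames on the same volume are atomic.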
Title: Logo recognition in OpenCV
Q_Id: 29,088,095 | A_Id: 29,088,394 | Tags: python,opencv | CreationDate: 2015-03-16T22:27:00.000
Q_Score: 2 | Users Score: 0 | Score: 0 | AnswerCount: 1 | Available Count: 1 | ViewCount: 1,164 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I am currently making an application with OpenCV and a web server that finds certain car brands, as part of an ongoing game in my family. However, I don't know where to start. I googled it, but all I found was a post on finding a yellow ball. I want to find a car logo in a picture (which could be angled or glaring), so ...
Answer: You could probably use Haar cascades in OpenCV to do this. You will need to train Haar detectors with both positive and negative samples of the logo, but there are already utilities in OpenCV to help you with this. Just read up about Haar cascades in the OpenCV documentation.

Title: Sklearn-GMM on large datasets
Q_Id: 29,095,769 | A_Id: 60,111,168 | Tags: python,scikit-learn,bigdata,mixture-model | CreationDate: 2015-03-17T09:42:00.000
Q_Score: 3 | Users Score: 0 | Score: 0 | AnswerCount: 4 | Available Count: 3 | ViewCount: 3,292 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a large data set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Answer: As Andreas Mueller mentioned, GMM doesn't have partial_fit yet, which would allow you to train the model in an iterative fashion. But you can make use of warm_start by setting its value to True when you create the GMM object. This allows you to iterate over batches of data and continue training the model from where you ...

Title: Sklearn-GMM on large datasets
Q_Id: 29,095,769 | A_Id: 36,488,496 | Tags: python,scikit-learn,bigdata,mixture-model | CreationDate: 2015-03-17T09:42:00.000
Q_Score: 3 | Users Score: 0 | Score: 0 | AnswerCount: 4 | Available Count: 3 | ViewCount: 3,292 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a large data set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Answer: I think you can set init_params to the empty string '' when you create the GMM object; then you might be able to train on the whole data set.

Title: Sklearn-GMM on large datasets
Q_Id: 29,095,769 | A_Id: 29,109,730 | Tags: python,scikit-learn,bigdata,mixture-model | CreationDate: 2015-03-17T09:42:00.000
Q_Score: 3 | Users Score: 2 | Score: 0.099668 | AnswerCount: 4 | Available Count: 3 | ViewCount: 3,292 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a large data set (I can't fit the entire data in memory). I want to fit a GMM on this data set. Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?
Answer: fit will always forget previous data in scikit-learn. For incremental fitting, there is the partial_fit function. Unfortunately, GMM doesn't have a partial_fit (yet), so you can't do that.

Title: How to calculate inverse using cramer's rule in python?
Q_Id: 29,119,880 | A_Id: 29,120,093 | Tags: python,numpy,matrix-inverse | CreationDate: 2015-03-18T10:43:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 2,359 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I'm trying to generate the inverse matrix using the numpy package in Python. Unfortunately I'm not getting the answers I expected. Original matrix: ([17 17 5] [21 18 21] [2 2 19]). Inverting the original matrix by Cramer's rule gives: ([4 9 15] [15 17 6] [24 0 17]). Apparently using numpy.linalg.inv() gives -3.1948881...
Answer: linalg is right and you are wrong; the matrix it gave you is indeed the inverse. However, if you are using np.array instead of np.matrix, then the multiplication operator doesn't work as expected, since it calculates the component-wise product. In that case you have to do mat.dot(inv(mat)). In any case, what you will ge...

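The answer's point about element-wise `*` versus the matrix product can be checked on the question's matrix; a minimal sketch:

```python
import numpy as np

mat = np.array([[17, 17,  5],
                [21, 18, 21],
                [ 2,  2, 19]])

inv = np.linalg.inv(mat)

# With np.array, * is element-wise; use @ (or .dot) for the matrix product
ident = mat @ inv   # should be (numerically) the 3x3 identity
```

Up to floating-point round-off, mat @ inv matches np.eye(3), confirming that linalg's result is the true inverse even though its entries look nothing like the hand-computed (likely modular-arithmetic) matrix in the question.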
Title: Generating high dimensional datasets with Scikit-Learn
Q_Id: 29,148,746 | A_Id: 29,149,537 | Tags: python,scikit-learn,cluster-analysis,mean-shift | CreationDate: 2015-03-19T15:32:00.000
Q_Score: 1 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 681 | is_accepted: true
Topics: Data Science and Machine Learning
Question: I am working with the Mean Shift clustering algorithm, which is based on the kernel density estimate of a dataset. I would like to generate a large, high-dimensional dataset, and I thought the Scikit-Learn function make_blobs would be suitable. But when I try to generate a 1 million point, 8-dimensional dataset, I end u...
Answer: The standard deviation of the clusters isn't 1. You have 8 dimensions, each of which has a stddev of 1, so the total standard deviation is sqrt(8) or something like that. Kernel density estimation does not work well in high-dimensional data because of bandwidth problems.

Title: MPI_Sendrecv with operation on recvbuf?
Q_Id: 29,157,039 | A_Id: 29,256,366 | Tags: python,c,mpi,mpi4py | CreationDate: 2015-03-19T23:50:00.000
Q_Score: 2 | Users Score: 1 | Score: 1.2 | AnswerCount: 1 | Available Count: 1 | ViewCount: 127 | is_accepted: true
Topics: System Administration and DevOps; Data Science and Machine Learning
Question: I use the MPI_Sendrecv MPI function to communicate arrays of data between processes. I do this in Python using mpi4py, but I'm pretty sure my question is independent of the language used. What I really want is to add an array residing on another process to an existing local array. This should be done for all processes,...
Answer: MPI_Recvreduce is what you're looking for. Unfortunately, it doesn't exist yet. It's something that the MPI Forum has been looking at adding to a future version of the standard, but hasn't yet been adopted and won't be in the upcoming MPI 3.1.

Title: Classifying users by demographic using incomplete data
Q_Id: 29,183,178 | A_Id: 29,191,059 | Tags: python,statistics,scipy,scikit-learn,scikits | CreationDate: 2015-03-21T13:18:00.000
Q_Score: 1 | Users Score: 1 | Score: 0.197375 | AnswerCount: 1 | Available Count: 1 | ViewCount: 74 | is_accepted: false
Topics: Data Science and Machine Learning
Question: I have some data containing usernames and their respective genders. For example, an entry in my data list may look like: {User: 'abc123', Gender: 'M'}. For each username, I am also given a bag of text, images, and locations attached to each of them, although it's not necessary that a user has at least one text, one imag...
Answer: I think you can make use of a "naive Bayes" classifier here. In that case, the class (M or F) probability is a product of terms, one term for each available feature set, and you just ignore (exclude from the product) any feature set that is missing. Here is the justification. Let's say the feature sets are X1, X2, X3. ...

0
61,935,681
0
0
0
0
1
false
4
2015-03-23T08:07:00.000
0
2
0
How to save a bokeh gridplot as single file
29,205,574
0
python,bokeh
First, save your grid in an object, let's say grid, and confirm that it is a grid. Example: grid = bly.gridplot(graficas, ncols=3) # Here you save your grid. bpl.show(grid) # Here you show your grid. from bokeh.io import export_png # You need this library to export. Export the grid: export_png(grid, filename="your pat...
I am using bokeh (0.8.1) in combination with the ipython notebook to generate GridPlots. I would like to automatically store the generated GridPlots to a single png (hopefully pdf one day) on disk, but when using "Preview/Save" it cycles through all of the figures asking me to store them separately. Is there a more eff...
0
1
1,852
0
29,212,482
0
0
0
0
1
false
1
2015-03-23T14:08:00.000
1
2
0
Polyval(p,x) to evaluate polynomials
29,212,385
0.099668
python,numpy
Your p array is in the wrong order. You should start with the coefficient of the highest exponent. Try with p=[1,0,2,1].
I don't know much about Python and I'm trying to use it to do some simple polynomial interpolation, but there's something I'm not understanding about one of the built-in functions. I'm trying to use polyval(p,x) to evaluate a polynomial p at x. I made an example polynomial p(x) = 1 + 2x + x^3, I created an array p = ...
0
1
5,765
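The coefficient-order fix from the answer above can be sketched directly; highest power first:

```python
import numpy as np

# numpy.polyval expects coefficients ordered from highest power to lowest.
# For p(x) = 1 + 2x + x^3 that is [1, 0, 2, 1]: x^3, x^2, x^1, x^0.
p = [1, 0, 2, 1]
value = np.polyval(p, 2)  # 8 + 0 + 4 + 1 = 13
print(value)
```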
0
29,240,968
0
0
0
0
2
false
0
2015-03-24T19:05:00.000
0
2
0
Using OpenCV in Parse Cloud Code
29,240,840
0
android,python,opencv,parse-platform
No. JavaScript is the only language currently supported for writing Cloud Code.
This will be my new post if I go wrong please don't judge me hard :) I'm developing an OpenCV project with Python and also I'm developing its mobile interface in Android. My purpose is to compare plant pictures and decide their species.Researchers who use Android application will take plant photos and upload them (such...
0
1
89
0
29,246,105
0
0
0
0
2
true
0
2015-03-24T19:05:00.000
0
2
0
Using OpenCV in Parse Cloud Code
29,240,840
1.2
android,python,opencv,parse-platform
You can have Parse Cloud Code call out to your Python code using HTTP if you want. Just as you can do the same from the Android app. This code can tell the web hook what images to download and process based on some condition (such as a researcher has uploaded a photo to be processed). Purely up to you how you trigger t...
This will be my new post if I go wrong please don't judge me hard :) I'm developing an OpenCV project with Python and also I'm developing its mobile interface in Android. My purpose is to compare plant pictures and decide their species.Researchers who use Android application will take plant photos and upload them (such...
0
1
89
0
29,287,544
0
0
0
0
1
false
347
2015-03-26T19:27:00.000
14
4
0
Pandas read in table without headers
29,287,224
1
python,pandas
Make sure you pass header=None and add usecols=[3, 6] for the 4th and 7th columns.
How can I read in a .csv file (with no headers) and when I only want a subset of the columns (say 4th and 7th out of a total of 20 columns), using pandas? I cannot seem to be able to do usecols
0
1
539,397
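The header=None plus usecols combination from the answer above can be sketched with an in-memory CSV (the sample data here is made up):

```python
import io
import pandas as pd

# A headerless CSV with 8 columns; we only want the 4th and 7th
# (0-based positions 3 and 6).
raw = io.StringIO("1,2,3,4,5,6,7,8\n10,20,30,40,50,60,70,80\n")
df = pd.read_csv(raw, header=None, usecols=[3, 6])
print(df[3].tolist(), df[6].tolist())  # [4, 40] [7, 70]
```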
0
60,968,639
0
0
0
0
1
false
1
2015-03-27T12:19:00.000
-1
1
0
matplotlib UserWarning When log axis is used in some cases
29,300,510
-0.197375
python,matplotlib
not sure if this issue is still open, but I had a similar problem and updating seaborn fixed the issue for me. For you it might be matplotlib, depending which package you used for creating the graph.
Basically I am running some optimisation algorithms that I have created using Numpy and I want to plot the log of the error against the number of iterations. Having done this with linear regression and having had no issues, it is very strange that I seem to get issues when doing the exact same thing with logistic regre...
0
1
1,303
0
56,953,319
0
0
0
0
1
false
18
2015-03-27T23:56:00.000
1
3
0
Is there a Python equivalent to the smooth.spline function in R
29,312,005
0.066568
python,r,smoothing,splines
From research on Google, I concluded that, by contrast, the smooth.spline in R allows having knots at all the x values, without necessarily having a wiggly curve that hits all the points -- the penalty comes from the second derivative.
The smooth.spline function in R allows a tradeoff between roughness (as defined by the integrated square of the second derivative) and fitting the points (as defined by summing the squares of the residuals). This tradeoff is accomplished by the spar or df parameter. At one extreme you get the least squares line, and ...
0
1
4,069
0
45,366,940
0
1
0
0
3
false
5
2015-03-28T02:26:00.000
1
6
0
Cannot install ggplot with anaconda
29,312,985
0.033321
python,windows,anaconda,python-ggplot
I ran across the same issue when installing ggplot. None of the methods worked; eventually I reinstalled Anaconda, and then everything worked smoothly.
I want to be able to use geom_smooth in ggplot. However, when I typed conda install ggplot, I get the error no packages found in current win-32 channels matching ggplot. Anyone know what is going on?
0
1
19,615
0
41,456,234
0
1
0
0
3
false
5
2015-03-28T02:26:00.000
0
6
0
Cannot install ggplot with anaconda
29,312,985
0
python,windows,anaconda,python-ggplot
As of Jan 2016, ggplot comes installed by default if you are using the Anaconda distribution, so you can just use it. I'm new to Python, so this is still tripping me up.
I want to be able to use geom_smooth in ggplot. However, when I typed conda install ggplot, I get the error no packages found in current win-32 channels matching ggplot. Anyone know what is going on?
0
1
19,615
0
29,313,575
0
1
0
0
3
false
5
2015-03-28T02:26:00.000
4
6
0
Cannot install ggplot with anaconda
29,312,985
0.132549
python,windows,anaconda,python-ggplot
I think ggplot is simply not packaged for Anaconda, as conda search ggplot doesn't find anything. However, it can be easily installed via pip: pip install ggplot.
I want to be able to use geom_smooth in ggplot. However, when I typed conda install ggplot, I get the error no packages found in current win-32 channels matching ggplot. Anyone know what is going on?
0
1
19,615
0
54,869,627
0
1
0
0
1
false
29
2015-03-29T18:08:00.000
22
3
0
What does NN VBD IN DT NNS RB means in NLTK?
29,332,851
1
python,nlp,nltk,text-parsing,pos-tagger
Even though the above links cover all kinds, I hope this is still helpful for someone; I've added a few that are missed in the other links. CC: Coordinating conjunction CD: Cardinal number DT: Determiner EX: Existential there FW: Foreign word IN: Preposition or subordinating conjunction JJ: Adjective VP: Verb Phrase JJR: Adjec...
when I chunk text, I get lots of codes in the output like NN, VBD, IN, DT, NNS, RB. Is there a list documented somewhere which tells me the meaning of these? I have tried googling nltk chunk code nltk chunk grammar nltk chunk tokens. But I am not able to find any documentation which explains what these codes mean.
0
1
25,538
0
46,616,236
0
1
0
0
1
false
67
2015-03-30T21:05:00.000
50
5
0
Plot inline or a separate window using Matplotlib in Spyder IDE
29,356,269
1
python,matplotlib,spyder
Go to Tools >> Preferences >> IPython console >> Graphics >> Backend:Inline, change "Inline" to "Automatic", click "OK" Reset the kernel at the console, and the plot will appear in a separate window
When I use Matplotlib to plot some graphs, it is usually fine for the default inline drawing. However, when I draw some 3D graphs, I'd like to have them in a separate window so that interactions like rotation can be enabled. Can I configure in Python code which figure to display inline and which one to display in a new...
0
1
218,892
0
29,516,550
0
0
0
0
1
false
0
2015-03-31T00:14:00.000
0
1
0
Get the count of each key in each Mapper or globally in Spark MapReduce model
29,358,494
0
java,python,hadoop,mapreduce,apache-spark
You can use the pairRDD.countByKey() function to count the rows according to their keys.
We need to get the count of each key (the keys are not known before executing), and do some computation dynamically in each Mapper. The key count could be global or only in each Mapper. What is the best way to implement that? In Hadoop this is similar to an aggregator function. The accumulator in Spark needs to be defi...
0
1
110
0
29,385,747
0
0
0
0
2
false
0
2015-04-01T06:46:00.000
0
3
0
The intersection between a trajectory and the circles in the same area
29,384,494
0
python,geometry,intersection
In general I would recommend to first make your algorithm work and then make it faster if you need to. You would be amazed by how fast Python in combination with a set of carefully selected libraries can be. So for your problem, I would do the following: 1.) Install a set of libraries that makes your life easier: - ...
I am new in coding. Now I have a question. I have an object who keep moving in an rectangle area. And I also have a lot of circle in this area too. I want to get all the intersection point between the trajectory and the all the circle. As the object is moving step by step, so was thinking that I can calculate the dista...
0
1
1,435
0
29,388,615
0
0
0
0
2
true
0
2015-04-01T06:46:00.000
1
3
0
The intersection between a trajectory and the circles in the same area
29,384,494
1.2
python,geometry,intersection
Let a be a number somewhere between the radius and diameter of the larger circles (if they have different radii). Generate a grid of square tiles of side length a, so that grid(i,k) is the square from (i*a,k*a) to ((i+1)*a, (k+1)*a). Each tile of the grid contains a list with pointers to circles or indices into the ci...
I am new in coding. Now I have a question. I have an object who keep moving in an rectangle area. And I also have a lot of circle in this area too. I want to get all the intersection point between the trajectory and the all the circle. As the object is moving step by step, so was thinking that I can calculate the dista...
0
1
1,435
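The tile-grid index described in the accepted answer above can be sketched as follows; the names build_grid and candidates are illustrative, not from the original, and the grid stores circle indices per tile so that a point query only has to test nearby circles:

```python
from collections import defaultdict

def build_grid(circles, a):
    """circles: list of (cx, cy, r) tuples. Tile side `a` should be between
    the radius and diameter of the largest circle. Returns a dict mapping
    tile (i, k) -> set of circle indices whose bounding box touches it."""
    grid = defaultdict(set)
    for idx, (cx, cy, r) in enumerate(circles):
        for i in range(int((cx - r) // a), int((cx + r) // a) + 1):
            for k in range(int((cy - r) // a), int((cy + r) // a) + 1):
                grid[(i, k)].add(idx)
    return grid

def candidates(grid, a, x, y):
    """Indices of circles that might intersect near point (x, y)."""
    return grid.get((int(x // a), int(y // a)), set())

circles = [(1.0, 1.0, 0.5), (5.0, 5.0, 0.5)]
grid = build_grid(circles, a=1.0)
print(candidates(grid, 1.0, 1.1, 0.9))  # {0}
```

Each trajectory step then only checks the candidate set of its current tile instead of every circle.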
0
45,776,219
0
0
0
0
1
false
1
2015-04-02T20:42:00.000
0
1
0
Machine Learning -Issues with big dataset
29,422,093
0
python,machine-learning,scikit-learn,pca,logistic-regression
You can segment your data across a few models, with each model's output feeding into the next model, which gives you the result. Basically it's an RNN architecture. Putting such massive data into one network is just not possible due to memory limitations.
I am trying to apply Machine Learning to a Kaggle.com dataset. The dimension of my dataset is 244768 x 34756. Now at this size none of the scikit algorithms work. I thought i would apply PCA , but even that doesnt scale up to this dataset. Is there anyway i can reduce redundant data from my training dataset? I can red...
0
1
110
0
29,477,966
0
0
0
0
1
false
2
2015-04-03T11:04:00.000
0
1
0
Storing a large table on disk, with fast retrieval of a specified subset to np.ndarray
29,430,979
0
python,sqlite,python-3.x,numpy,hdf5
You could create a region reference dataset where each element relates to one of the ~2000 identifiers. Then the Python code to reference a particular identifier would look like this: reg_ref = reg_ref_dset[identifier]; mysub = data_dset[reg_ref]
I need to store a table on disk, and be able to retrieve a subset of that table into a numpy.ndarray very fast. What's the best way to do that? I don't mind spending the time to preprocess this dataset before storing it on disk, since it won't be changed once it's created. I'd prefer not to write any C code, and instea...
0
1
179
0
29,460,131
0
0
0
0
1
true
0
2015-04-04T11:38:00.000
1
1
0
graph of multiple y axes in plotly
29,445,943
1.2
python,graph,plotly
Full disclosure, I work for Plotly. Here's my shot at summarizing your problem in general, you've got 4 dimensions for each country (year, exports, gdp, standard of living). You might be able to use either or both of these solutions: visualize this in two dimensions using x-value, y-value, marker-size, and marker-line...
I have 3 sets of comparison data(y axes) which needs to be plotted against a target source values. I'm comparing exports, gdp, standard of living values of different countries against a target countries values for different years. But values of each category are haphazard i.e exports in millions of dollars, gdp in perc...
0
1
925
0
29,469,527
0
1
0
0
1
false
0
2015-04-06T10:28:00.000
0
1
0
random.random or random.choice for Monte Carlo simulation?
29,469,458
0
python,montecarlo
Do you want a uniform distribution of these 3 values? If so, random.choice will give you exactly that.
I want to apply a simple Monte Carlo simulation on a variable that has three distinct values. Should I use random.random and assign the float to a variable value, or use random.choice(["a", "b", "c"])?
0
1
143
0
35,924,390
0
0
0
0
1
false
0
2015-04-07T00:15:00.000
0
1
0
How to use DecisionTreeRegressor() for Categorical Segmentation?
29,481,698
0
python,decision-tree
For this kind of decision tree you need to use DecisionTreeClassifier(). It appears that DecisionTreeRegressor only works with numerical predictor data. DecisionTreeClassifier() only works with class predictor data. I really wanted one that does both, but it doesn't appear possible.
I have used the python's DecisionTreeRegressor() to segment data based on a Predictor that is continuous, and it works well. In the present project I have been asked to use Categorical data as Predictor. Predictor - Industry Domain, Response - Revenue. On using DecisionTreeRegressor() it threw error "Cannot change st...
0
1
385
0
29,547,417
0
1
0
0
1
true
5
2015-04-09T13:20:00.000
0
3
0
Is there a pre-existing implementation of the General Number Field Sieve (GNFS) in Python?
29,539,678
1.2
python
The Perl wrapper for GGNFS (a C implementation) was rewritten in Python by Brian Gladman. Look for factmsieve.py.
Is there any inbuilt or online Implementation of GNFS factoring in Python? I need a version that can easily be used to factor integers in other programs so I would need to import and preferably is comparable with or only needs minimal change to work with Python 3. I need this to factor (multiple) numbers of over 90 dig...
0
1
2,960
0
29,991,069
0
0
0
0
1
false
2
2015-04-10T03:30:00.000
3
1
1
Which will give the best performance Hive or Pig or Python Mapreduce with text file and oracle table as source?
29,552,853
0.53705
python,hadoop,mapreduce,hive,apache-pig
Python Map Reduce or anything using Hadoop Streaming interface will most likely be slower. That is due to the overhead of passing data through stdin and stdout and the implementation of the streaming API consumer (in your case python). Python UDF's in Hive and Pig do the same thing. You might not want to compress data ...
I have the below requirements and confused about which one to choose for high performance. I am not java developer. I am comfort with Hive, Pig and Python. I am using HDP2.1 with tez engine. Data sources are text files(80 GB) and Oracle table(15GB). Both are structured data. I heard Hive will suite for structure data ...
0
1
2,382
0
29,593,497
0
0
0
0
1
true
1
2015-04-11T06:52:00.000
1
1
0
Is there a Python wrapper for Stanford Neural Net based dependency parser?
29,575,034
1.2
python,parsing,nlp,neural-network,stanford-nlp
I don't know of any such wrapper at the moment, and there are no plans at Stanford to build one. (Maybe the NLTK developers would be up for the challenge?)
I know about the Python wrappers for Stanford CoreNLP package but this package does not seem to contain neural net based dependency parser model. Rather it is present in Stanford-parser-full-****-- package for which I can't find any Python wrapper. My Question: Is there a Python wrapper that would parse using Stanford ...
0
1
207
0
29,637,968
0
0
0
0
1
false
1
2015-04-13T06:05:00.000
0
1
0
Moving Window Average Convolution with different Radii - Python
29,598,769
0
python,arrays,numpy
Since you are attempting a rather customized moving window average convolution, it is unlikely that you will find it in an existing library. Instead you can implement this in a straightforward way with loops. Then use Cython, f2py or numba, etc to speed this up to a level comparable with a native C/Fortran implementat...
I would like to perform a basic moving average convolution of an array where each pixel is replaced by the average of its surrounding pixels. But my problem scenario goes like this : I have two arrays valueArray and radiiArray. Both the arrays have the same shape. I need to apply the moving average to the valueArra...
0
1
267
0
29,605,232
0
0
0
0
1
false
0
2015-04-13T06:12:00.000
1
2
0
How to convert RGB to Intensity in Python 2.7 using Opencv
29,598,848
0.099668
python,opencv
Try to use BGR2GRAY (and so on: BGR2HLS, etc.) instead of RGB2GRAY; OpenCV usually uses BGR channel order, not RGB.
Here i have one RGB image where i need want extract plane of intensity. I have tried HSL, in this i took L Luminosity but its not similar with Intensity, and tried RGB2GRAY but this also little bit similar but not actual. so is there any special code to get intensity of the image? or is there any calculation of Intensi...
0
1
954
0
68,004,057
0
0
0
1
1
false
25
2015-04-13T14:01:00.000
0
2
0
Update existing row in database from pandas df
29,607,222
0
python,postgresql,pandas
For the SQLAlchemy case of reading a table as a df, changing the df, then updating table values based on the df, I found df.to_sql to work with name=<table_name>, index=False, if_exists='replace'. This should replace the old values in the table with the ones you changed in the df.
I have a PostgreSQL db. Pandas has a 'to_sql' function to write the records of a dataframe into a database. But I haven't found any documentation on how to update an existing database row using pandas when im finished with the dataframe. Currently I am able to read a database table into a dataframe using pandas read_sq...
0
1
11,070
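The whole-table replace approach from the answer above can be sketched with an in-memory SQLite database (table and column names here are made up). For true row-level UPDATEs you would fall back to SQL, but rewriting the table is often good enough:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"id": [1, 2], "value": [10, 20]})
df.to_sql("scores", conn, index=False, if_exists="replace")

# Change the dataframe, then overwrite the old table contents.
df.loc[df["id"] == 2, "value"] = 99
df.to_sql("scores", conn, index=False, if_exists="replace")

back = pd.read_sql("SELECT value FROM scores WHERE id = 2", conn)
print(back["value"].iloc[0])  # 99
```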
0
29,663,998
0
1
0
0
1
false
0
2015-04-16T01:30:00.000
0
2
0
Python (instantiate) locks to multiple outputs
29,663,838
0
python,multithreading,locking,queue
It sounds like you will have potentially more than one thread writing to a single output file, so you want to make the writes thread-safe, while allowing another output file to be created and written to if a subject is added. I would recommend having each thread simply lock the output file during the write. This would ...
I am working on Python multi-threading application. The scenario is: The source data(thousands of small files per hour) contains data about many subjects(range 1-100). Each row starts with "subject1|col1|col2|...|coln|". Right now users are interested in only 10(example) subjects. But in future they can add(or remove)...
0
1
184
0
29,692,821
0
1
0
0
1
false
0
2015-04-17T07:11:00.000
4
2
0
How to convert string like 1.424304064E9 to datetime in pandas dataframe?
29,692,575
0.379949
python,pandas
use datetime.datetime.fromtimestamp(float("1.424304064E9"))
My data is in this format - "1.424304064E9" I have tried pandas.to_datetime(df['ts']) but no success. What am I missing?
0
1
298
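The conversion suggested above can also be done in pandas itself; the values are epoch seconds stored as strings in scientific notation, so convert to float first and parse with unit="s" (the sample series is made up):

```python
import datetime
import pandas as pd

ts = pd.Series(["1.424304064E9"])
parsed = pd.to_datetime(ts.astype(float), unit="s")
print(parsed.iloc[0])  # 2015-02-19 00:01:04

# The pure-stdlib equivalent (note: interpreted in local time):
dt = datetime.datetime.fromtimestamp(float("1.424304064E9"))
```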
0
29,699,268
0
1
0
0
1
false
0
2015-04-17T12:13:00.000
0
3
0
Dijkstra's algorithm on adjacency matrix in python
29,698,896
0
python,algorithm
When I had to implement Dijkstra's algorithm in PHP to find the shortest path between 2 tables of a database, I constructed the matrix with 3 values: 0 if the 2 points are the same, 1 if they are linked by an edge, -1 otherwise. After that the algorithm just worked as intended.
How can I use Dijkstra's algorithm on an adjacency matrix with no costs for edges in Python? It has 1 if there is an edge between 2 vertices and 0 otherwise. The explanations that I've found on the internet are all for graphs with costs.
0
1
2,624
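A minimal sketch of Dijkstra's algorithm over a 0/1 adjacency matrix like the one described above, treating every 1 as an edge of cost 1 (with unit costs a plain BFS would also work, but this form generalises to weighted graphs):

```python
import heapq

def dijkstra(adj, source):
    n = len(adj)
    dist = [float("inf")] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in range(n):
            if adj[u][v] == 1 and d + 1 < dist[v]:
                dist[v] = d + 1
                heapq.heappush(heap, (dist[v], v))
    return dist

# Graph: 0 -- 1 -- 2, and 0 -- 3
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 0],
       [1, 0, 0, 0]]
print(dijkstra(adj, 0))  # [0, 1, 2, 1]
```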
0
30,233,155
0
1
0
0
1
true
2
2015-04-19T20:57:00.000
0
2
0
RandomForestClassifier import
29,735,766
1.2
python,scikit-learn,random-forest
The problem was that I had the 64bit version of Anaconda and the 32bit sklearn.
I've installed Anaconda Python distribution with scikit-learn. While importing RandomForestClassifier: from sklearn.ensemble import RandomForestClassifier I have the following error: File "C:\Anaconda\lib\site-packages\sklearn\tree\tree.py", line 36, in <module> from . import _tree ImportError: cannot import name _...
0
1
10,431
0
29,762,672
0
0
0
0
1
false
1
2015-04-20T23:18:00.000
1
1
0
FreqDist().plot() as an histogram
29,760,119
0.197375
python,python-2.7,nltk
It seems NLTK has a tabulate() method, which gives you the numeric data. From there on you could use pylab to generate the hist() function (or bar() for a bar plot).
I am using NLTK and FreqDist().plot() . But for curiosity, it's there a way to transform the line graph into an histogram? and how I can put labels in both cases? I've searched in the documentation, but sadly it isn't detailed for it. Thanks in advance
0
1
1,774
0
29,786,975
0
0
0
0
1
false
1
2015-04-21T06:23:00.000
0
1
0
spark 1.3.1: Dataframe breaking MLib API
29,764,424
0
python,apache-spark,apache-spark-sql
I think the community is going to patch this. But for now, we can use Dataframe.rdd in ALS.train (or any other place where we see only RDDs are allowed).
I am trying to use Spark SQL and MLib together to create a recommendation program (extending movie recommendation program) in python. It was working fine with 1.2.0. However, in 1.3.1, by default spark create Dataframe objects instead of SchemaRDD objects as output of a SQL. hence, mlib.ALS.train method is failing with...
0
1
208
0
29,785,304
0
1
0
0
1
false
0
2015-04-21T23:43:00.000
0
3
0
get next value in list Pandas
29,785,134
0
python,list,pandas
It appears that printing the list wouldn't work, and you haven't provided us with any code to work with, or an example of what your datetime looks like. My best suggestion is to use the sort function: dataframe.sort(). If I wanted a specific date printed, I would have to print it by index number once you ...
I have a list of unique dates in chronological order. I have a dataframe with dates in it. I want to use the list of dates in the dataframe to get the NEXT date in the list (find the date in dataframe in the list, return the date to the right of it ( next chronological date). Any ideas?
0
1
366
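Since the list of dates is already in chronological order, the "next date to the right" lookup can be sketched with the stdlib bisect module (function name and sample dates are illustrative):

```python
import bisect
import datetime as dt

def next_date(sorted_dates, d):
    """Return the first date strictly after d, or None if d is the last."""
    i = bisect.bisect_right(sorted_dates, d)
    return sorted_dates[i] if i < len(sorted_dates) else None

dates = [dt.date(2015, 1, 1), dt.date(2015, 2, 1), dt.date(2015, 3, 1)]
print(next_date(dates, dt.date(2015, 2, 1)))  # 2015-03-01
print(next_date(dates, dt.date(2015, 3, 1)))  # None
```

Applying next_date over the dataframe column (e.g. with Series.map) then yields the shifted dates.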
0
56,717,998
0
1
0
0
3
false
7
2015-04-22T12:40:00.000
0
4
0
Cannot import cv2 in PyCharm
29,797,893
0
python,opencv,pycharm
Do the following steps: Download and install the OpenCV executable. Add OpenCV in the system path(%OPENCV_DIR% = /path/of/opencv/directory) Go to C:\opencv\build\python\2.7\x86 folder and copy cv2.pyd file. Go to C:\Python27\DLLs directory and paste the cv2.pyd file. Go to C:\Python27\Lib\site-packages directory and p...
I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline wi...
0
1
8,253
0
44,804,084
0
1
0
0
3
false
7
2015-04-22T12:40:00.000
0
4
0
Cannot import cv2 in PyCharm
29,797,893
0
python,opencv,pycharm
Have you selected the right version of Python? Or rather, when you installed OpenCV with brew, the latter probably installed a new version of Python that you can find in the Cellar directory. You can see this immediately; from the main window of PyCharm select: Configure -> Preferences -> Project Interpreter ...
I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline wi...
0
1
8,253
0
39,482,840
0
1
0
0
3
false
7
2015-04-22T12:40:00.000
0
4
0
Cannot import cv2 in PyCharm
29,797,893
0
python,opencv,pycharm
I got the same situation under Win7 x64 with PyCharm version 2016.1.1. After a quick glimpse into the stack frame, I think it is a bug! PyCharm's IPython patches the import action for loading QT, matplotlib, ..., and finally sys.path loses its way! Anyway, there is a workaround: copy Lib/site-packages/cv2.pyd or cv2.so to $PYT...
I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline wi...
0
1
8,253
0
38,987,705
0
1
0
0
1
false
0
2015-04-23T00:28:00.000
0
2
0
OpenCV with standalone python executable (py2exe/pyinstaller)
29,811,423
0
python,opencv,py2exe,pyinstaller
I guess I will go ahead and post an answer for this, but the solution was provided by @otterb in the comments to the question. I am pasting the text here: "py2exe is not perfect so will often miss some libraries or dll, pyd etc needed. Most likely you are missing opencv_highgui249.dll and opencv_ffmpeg249.dll etc. I woul...
I have a python program that uses OpenCV to get frames from a video file for processing. I then create a standalone executable using py2exe (also tried pyinstaller and got same error). My computer and the target computer are both Windows 7, but the target computer does not have python installed. I use OpenCV to read th...
0
1
6,352
0
29,826,612
0
0
0
0
1
false
6
2015-04-23T14:30:00.000
2
2
0
Python float precision float
29,826,523
0.197375
python,floating-point,double,precision
You could try the c_float type from the ctypes standard library. Alternatively, if you are capable of installing additional packages you might try the numpy package. It includes the float32 type.
I need to implement a Dynamic Programming algorithm to solve the Traveling Salesman problem in time that beats Brute Force Search for computing distances between points. For this I need to index subproblems by size and the value of each subproblem will be a float (the length of the tour). However holding the array in m...
0
1
4,158
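The memory saving from the float32 suggestion above is easy to demonstrate: a float32 array uses half the bytes of NumPy's default float64, which matters when a TSP dynamic-programming table gets large (array sizes here are illustrative):

```python
import numpy as np

table64 = np.zeros(1_000_000)                    # default float64, 8 bytes/entry
table32 = np.zeros(1_000_000, dtype=np.float32)  # 4 bytes/entry
print(table64.nbytes, table32.nbytes)  # 8000000 4000000
```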
0
32,484,856
0
0
0
0
1
true
7
2015-04-23T16:08:00.000
4
2
0
Ripley's K Function (Second order intensity function) Python
29,828,922
1.2
python,spatial
Solved my problem; this is for others looking to do the same analysis. I definitely recommend using R for spatial analysis. A transfer from Python is simple because all you need is the coordinates of your point pattern. Write a CSV of the x, y and z coordinates of your points using Python. R has good functionality for reading CSV usin...
I am looking for Ripley's k function implementation in Python. But so far haven't been able to find any spatial modules implementing this in scipy or elsewhere. I have created Voronoi tessellation of a fibre composite and need to perform analysis using Ripley's K and pair distribution functions compared to a Poisson d...
0
1
2,325
0
29,892,429
0
0
0
0
1
false
1
2015-04-24T01:27:00.000
0
2
0
Python Pandas Large Row Processing
29,837,153
0
python,pandas
Row by row. Pandas is not the ideal tool for this. I would suggest you look into Map/Reduce. It is designed for exactly this. Streaming is the key to row by row processing.
I have a lot of time series data. Almost 3 GB of csv files. The dimensions are 50k columns with 6000 rows. Now I need to process them row by row. They are time ordered and its important that for each row, I look at each column. Would importing this in to pandas as a pivot table and iterating them over row by row effic...
0
1
662
0
29,841,705
0
0
0
0
1
false
0
2015-04-24T06:15:00.000
0
1
0
Is it better to store temp data in arrays or save it to file for access later?
29,840,006
0
python,performance,numpy,save
Try to make the data obsolete as fast as possible by further processing/accumulating, e.g. plotting immediately. You did not give details about the memory/storage needed. For sparse matrices there are efficient representations. If your matrices are not sparse, there are roughly 500k entries per matrix and therefore 5G en...
This is a broad question. I am running a very long simulation (in Python) that generates a sizeable amount of data (about 10,000 729*729 matrices). I only need the data to plot a couple of graphs and then I'm done with it. At the moment I save the data in (numpy) arrays. When the simulation is complete I plot the data....
0
1
626
0
29,870,156
0
0
0
0
1
false
1
2015-04-25T19:56:00.000
0
1
0
Ball-Line Segment Collision on End-Point of Line
29,870,031
0
python,math,vector,line,collision
Possible solutions: Instead of using a single 1D 'line', you could construct a 2D rectangle (that is as thick as you want/need it to be) --- composed of 4 separate 'lines'. I.e. you can have collisions with any of the 4 faces of the rectangle object. Would that work? Do some sort of corner collision -- if the ball is...
So I have a program where a ball subject to gravity bounces off of lines created by a user with mouse clicks. These lines are normally sloped. My collision bounces work perfectly EXCEPT in the case where ball does approximately this: ->O ------ My code works by finding the normal vector of the line such that the scala...
0
1
157
0
29,883,739
0
0
0
0
1
false
14
2015-04-25T22:57:00.000
6
3
0
Python multi dimensional sparse array
29,871,669
1
python,scipy
scipy.sparse has a number of formats, though only a couple have an efficient set of numeric operations. Unfortunately, those are the harder ones to extend. dok uses a tuple of the indices as dictionary keys. So that would be easy to generalize from 2d to 3d or more. coo has row, col, data attribute arrays. Conceptu...
I am working on a project where I need to deal with 3 dimensional large array. I was using numpy 3d array but most of my entries are going to be zero, so it's lots of wastage of memory. Scipy sparse seems to allow only 2D matrix. Is there any other way I can store 3D sparse array?
0
1
9,257
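The answer above notes that dok's tuple-key scheme generalises easily from 2D to 3D. A minimal dict-of-keys 3D sparse array can be sketched like this (the class name Sparse3D is made up, not a scipy API):

```python
class Sparse3D:
    """Dict-of-keys sparse 3-D array: keys are (i, j, k), missing keys are 0."""

    def __init__(self, shape):
        self.shape = shape
        self.data = {}

    def __setitem__(self, idx, value):
        if value:
            self.data[idx] = value
        else:
            self.data.pop(idx, None)  # never store zeros, stay truly sparse

    def __getitem__(self, idx):
        return self.data.get(idx, 0.0)

a = Sparse3D((1000, 1000, 1000))
a[1, 2, 3] = 4.5
print(a[1, 2, 3], a[0, 0, 0], len(a.data))  # 4.5 0.0 1
```

Memory use scales with the number of nonzero entries only, at the cost of slower elementwise arithmetic than a dense ndarray.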
0
41,026,037
0
0
0
0
1
false
3
2015-04-26T18:40:00.000
0
2
0
Arrow pointing to a point on a curve
29,881,872
0
python,matplotlib,plot
The inverted arrowhead is due to a negative sign of the head_length variable. Probably you are scaling it using a negative value. Using head_length= abs(value)*somethingelse should take care of your problem.
I am trying to plot arrows pointing at a point on a curve in python using matplotlib. On this line i need to point vertical arrows at specific points. This is for indicating forces acting on a beam, so their direction is very important. Where the curve is the beam and the arrow is the force. I know the coordinate of sa...
0
1
1,773
0
39,548,461
0
1
0
0
1
false
11
2015-04-26T21:20:00.000
1
5
0
Trouble installing scipy via pyCharm windows 8 - no lapack / blas resources found
29,883,690
0.039979
python,pycharm,lapack,blas
I had the same issue, and downloading Anaconda, and switching the project interpreter in PyCharm to \Anaconda3\python.exe helped solve this. Good luck!
I'm currently having trouble installing scipy via PyCharm's package manager. I have installed numpy successfully and do have the Microsoft Visual Studio C/C++ compiler in the System Variables. However, when it's time to install scipy in PyCharm, the following error occurs: Executed Command: pip install scipy Error occu...
0
1
16,017
0
29,889,993
0
0
0
0
1
false
29
2015-04-27T05:58:00.000
1
9
0
How to visualize a neural network
29,888,233
0.022219
python,image,neural-network
Draw the network with nodes as circles connected with lines. The line widths must be proportional to the weights. Very small weights can be displayed even without a line.
I want to draw a dynamic picture for a neural network to watch the weights changed and the activation of neurons during learning. How could I simulate the process in Python? More precisely, if the network shape is: [1000, 300, 50], then I wish to draw a three layer NN which contains 1000, 300 and 50 neurons respectiv...
0
1
35,856
1
29,999,330
0
0
0
0
1
false
1
2015-04-28T09:02:00.000
0
1
0
pyQT4 native file dialog remembering last directory
29,914,909
0
windows,python-3.x,pyqt4
The QFileDialog.saveState() and QFileDialog.restoreState() methods can save and restore the current directory of the dialog box.
I have a PyQt4 application where the user is asked for a save file (QFileDialog and all that...). One annoyance is that it does not remember the last directory, so multiple calls always default to the working directory of the application (or whatever I set the 3rd argument to). If I set the option to not use the native file bro...
0
1
305
0
29,927,508
0
0
0
0
1
false
0
2015-04-28T16:12:00.000
0
2
0
Scipy - Multiplying large sparse matrix causes segmentation fault?
29,924,590
0
python,numpy,segmentation-fault,scipy,sparse-matrix
Resolved the issue; it turns out this is a memory problem. I ran the operation on another machine and received a MemoryError (whereas my machine gives a segfault), and when given more memory it turns into a "negative dimensions are not allowed" error a long way into the computation, which I presume is an integer overflow in the calculatio...
I have a CSR sparse matrix in scipy of size 444075 x 444075. I wish to multiply it by its transpose. However, when I do m * m.T it causes a segmentation fault 11 error. Is this a memory issue, and if so, is there a way to allocate more memory to the program? Is there a clever workaround/hack using subroutines other rou...
0
1
850
0
29,929,834
0
0
0
0
1
false
0
2015-04-28T18:05:00.000
0
1
0
putting headers into an array, python
29,926,772
0
python,csv,numpy
Assuming I have understood what you mean by headers (it would be easier to tell with a few complete lines, even if you had to scale it down from your actual file)... I would first read the irregular lines with plain Python, then, on the regular lines, use genfromtxt with skip_header and usecols (make a tuple like (i fo...
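A small sketch of that approach, using an invented file layout (three metadata lines, a header row, then numeric columns) held in a StringIO instead of a real file:

```python
import io
import numpy as np

# Hypothetical file layout: 3 metadata lines, then a header row, then data.
# Column 0 is the x data; the remaining columns are the y series.
raw = io.StringIO(
    "experiment: demo\n"
    "date: 2015-04-28\n"
    "units: mm\n"
    "time,sensor_a,sensor_b\n"
    "0.0,1.0,2.0\n"
    "0.1,1.5,2.5\n"
)

# Grab the header row with plain Python and drop the x header (column 0).
header_line = raw.readlines()[3].strip()
y_headers = header_line.split(",")[1:]

# Re-read the numeric block, skipping metadata + header, keeping only y columns.
raw.seek(0)
data = np.genfromtxt(raw, delimiter=",", skip_header=4, usecols=(1, 2))
```

With a real file you would open it twice (or seek back to the start, as above) so the header pass and the genfromtxt pass each see the whole file.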
I have a set of data that is below some metadata. I'm looking to put the headers into a numpy array to be used later. However, the first header needs to be ignored, as that is the x-data header; the other columns are the y headers. How do I read this?
0
1
118
0
29,930,257
0
0
0
0
1
false
2
2015-04-28T21:21:00.000
0
3
0
Performing Decomposition on Sparse Matrices in Python
29,930,160
0
python,scipy,scikit-learn,sparse-matrix,pca
Even if the input matrix is sparse, the decomposition output will not be a sparse matrix. If the system cannot hold a dense matrix of that size, it will not be able to hold the results either.
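If plain dimensionality reduction is acceptable, one decomposition in scikit-learn that does accept scipy sparse input directly is TruncatedSVD. A minimal sketch (the matrix size and density here are arbitrary stand-ins for the real data):

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# Arbitrary sparse test matrix; the real one would come from your own data.
X = sparse_random(1000, 500, density=0.01, format="csc", random_state=0)

svd = TruncatedSVD(n_components=10, random_state=0)
X_reduced = svd.fit_transform(X)  # sparse input is accepted directly
```

Note that X_reduced itself is a dense array, which is exactly the point of the answer above: the reduced output is small, but it is no longer sparse.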
I'm trying to decomposing signals in components (matrix factorization) in a large sparse matrix in Python using the sklearn library. I made use of scipy's scipy.sparse.csc_matrix to construct my matrix of data. However I'm unable to perform any analysis such as factor analysis or independent component analysis. The on...
0
1
1,097
0
30,708,149
0
0
0
0
1
true
1
2015-04-29T11:44:00.000
0
1
0
Python: flood filling of multidimensional image
29,942,739
1.2
python,image-processing,flood-fill
Hell yeah! The scipy.ndimage.measurements module helps!
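A minimal sketch of what that module gives you, on a made-up 3-D binary image with two blobs:

```python
import numpy as np
from scipy import ndimage

# A made-up 3-D binary image containing two separate blobs of True voxels.
img = np.zeros((5, 5, 5), dtype=bool)
img[0:2, 0:2, 0:2] = True   # blob 1 (2x2x2 = 8 voxels)
img[3:5, 3:5, 3:5] = True   # blob 2

labels, n = ndimage.label(img)        # connected-component labelling
boxes = ndimage.find_objects(labels)  # bounding box (tuple of slices) per region
coords = [np.argwhere(labels == k + 1) for k in range(n)]  # voxel list per region
```

label() does the flood-fill-style grouping, find_objects() returns each region's bounding box, and argwhere() recovers the full coordinate list for each region, which covers all three requirements in the question.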
I have a binary multidimensional image, and I want an implementation of flood fill that will give me the following: a list of connected regions (of adjacent pixels with value True). For each region I want to get its bounding box and a list of the pixel coordinates of all pixels in the interconnected region. Is someth...
0
1
897
0
40,606,389
0
1
0
0
1
false
63
2015-05-04T12:51:00.000
9
5
0
In python, what is the difference between random.uniform() and random.random()?
30,030,659
1
python,random,uniform
In random.random() the output lies between 0 and 1, and it takes no input parameters, whereas random.uniform() takes parameters with which you can set the range of the random number. e.g. import random as ra; print(ra.random()); print(ra.uniform(5, 10)) OUTPUT: 0.672485369423 7.9237539416
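The same contrast as a self-contained sketch (seeded purely so the ranges are reproducible; the exact values don't matter):

```python
import random

random.seed(42)  # fixed seed, only for reproducibility

r = random.random()        # no arguments: a float in [0.0, 1.0)
u = random.uniform(5, 10)  # caller picks the range: a float between 5 and 10
```

So random.uniform(0, 1) and random.random() draw from the same distribution; uniform() is just the general form where you choose the endpoints.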
In Python, for the random module, what is the difference between random.uniform() and random.random()? They both generate pseudo-random numbers: random.uniform() generates numbers from a uniform distribution over a given range, and random.random() generates the next random number. What is the difference?
0
1
131,400
0
30,125,166
0
0
0
0
1
false
0
2015-05-04T16:31:00.000
0
1
0
Plot 2d line diagram in mayavi
30,035,123
0
python,matplotlib,mayavi
Mayavi is not really good at plotting 2D diagrams. You can cheat a little by setting your camera position parallel to a 2D image. If you want to plot 2D diagrams, try using matplotlib.
I have a dataset of a tennis game. This dataset contains the ball positions in each rally and the current score. I already 3D-visualized the game and ball positions in mayavi. Now I want to plot 2D line diagrams in mayavi that visualize the score development after specific events (such as after: a break, a set-win, s...
0
1
783
0
30,048,240
0
0
0
0
1
false
0
2015-05-05T08:03:00.000
1
1
0
TfidfVectorizer does not use the whole set of words in all documents?
30,047,341
0.197375
python,nlp,tf-idf
Did you check the stop_words and max_features parameters? If you provide values for either of these two, some words will be excluded.
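A small sketch showing the effect (the toy documents are invented; vocabulary_ is used because it is available across scikit-learn versions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog ate the bone"]  # toy corpus

v_all = TfidfVectorizer(min_df=1)  # keep every distinct token
v_all.fit(docs)

v_stop = TfidfVectorizer(min_df=1, stop_words="english")  # drop stop words
v_stop.fit(docs)

n_all = len(v_all.vocabulary_)    # columns with default settings
n_stop = len(v_stop.vocabulary_)  # fewer columns once stop words are removed
```

Also note that the default token_pattern silently drops single-character tokens, which is another common reason the vocabulary comes out shorter than the raw word set.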
I am trying to build a TF-IDF model with TfidfVectorizer. The feature name list (that is, the number of columns of the sparse matrix) is shorter than the size of the set of words across the documents, even though I set min_df to 1. What happened?
0
1
525
0
30,059,056
0
0
0
0
1
false
0
2015-05-05T09:25:00.000
0
1
0
Python: np.sort VS array.argsort()
30,049,051
0
python,arrays,sorting
I figured out why it didn't work with np.sort(): I misused the structured array feature. With the following dtype, I had created my array with the line: Data = np.zeros((78000,11), dtype=dtype2). I thought that I had to create one row for each structured field. WRONG! The right line is: Data = np.zeros((78000,1),...
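A minimal sketch of the corrected usage on a tiny structured array (two invented fields stand in for the eleven in the question; the array is 1-D here, one record per element, which np.sort(order=...) handles directly):

```python
import numpy as np

# Two fields stand in for the eleven in the question.
dtype2 = [("nodenumber", "<f8"), ("pressure", "<f8")]

# One structured record per element: the array is 1-D, not (n_rows, n_cols).
data = np.zeros(5, dtype=dtype2)
data["nodenumber"] = [3, 1, 5, 2, 4]
data["pressure"] = [30, 10, 50, 20, 40]

by_node = np.sort(data, order="nodenumber")  # sorted copy of the records
idx = data["nodenumber"].argsort()           # index order, same permutation
```

Sorting with order= reorders whole records at once, and argsort on one field gives the same permutation as indices, so the two agree once the array shape matches the structured dtype.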
I'm facing something strange: the function sort and the attribute argsort don't give me the same results. I have a Data array (CFD results) with the following structure: dtype([('nodenumber', '<f8'), (' x-coordinate', '<f8'), (' y-coordinate', '<f8'), (' z-coordinate', '<f8'), (' pressure', '<f8'), (' t...
0
1
906