GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 41,861,621 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2014-10-28T00:40:00.000 | 2 | 5 | 0 | Merge CSVs in Python with different columns | 26,599,137 | 0.07983 | python,csv,merge | For those of us using 2.7, this adds an extra linefeed between records in "out.csv". To resolve this, just change the file mode from "w" to "wb". | I have hundreds of large CSV files that I would like to merge into one. However, not all CSV files contain all columns. Therefore, I need to merge files based on column name, not column position.
Just to be clear: in the merged CSV, values should be empty for a cell coming from a line which did not have the column of t... | 0 | 1 | 16,477 |
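The merge-by-column-name approach discussed above can be sketched with the standard library alone. This is a minimal illustration, not the asker's actual code; in Python 3 the 2.7-era "wb" trick from the answer becomes newline='' on open().

```python
import csv
import io

def merge_csvs(streams):
    """Merge CSV streams by column name; rows missing a column get empty cells."""
    rows, fieldnames = [], []
    for stream in streams:
        reader = csv.DictReader(stream)
        for name in reader.fieldnames:
            if name not in fieldnames:
                fieldnames.append(name)
        rows.extend(reader)
    out = io.StringIO()
    # With real files in Python 3, pass newline='' to open() instead of mode "wb".
    writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

merged = merge_csvs([io.StringIO("a,b\n1,2\n"), io.StringIO("b,c\n3,4\n")])
```

restval="" is what leaves a cell empty when a row's source file lacked that column.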
0 | 26,640,860 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-10-28T18:29:00.000 | -2 | 1 | 0 | Multiple networks in Theano | 26,615,835 | 1.2 | python,theano | In a rather simplified way I've managed to find a nice solution. The trick was to create one model, define its function and then create the other model and define the second function. Works like a charm | I'd like to have 2 separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure but that would be a real mess in the entire forward pass (and probably won't even work because of the shared variables etc.)
The problem... | 0 | 1 | 111 |
0 | 62,499,396 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2014-10-30T15:39:00.000 | 0 | 14 | 0 | Installing NumPy and SciPy on 64-bit Windows (with Pip) | 26,657,334 | 0 | python,numpy,scipy,windows64 | Follow these steps:
Open CMD as administrator
Enter this command : cd..
cd..
cd Program Files\Python38\Scripts
Download the package you want and put it in Python38\Scripts folder.
pip install packagename.whl
Done
You can write your python version instead of "38" | I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything.
I tried to install everything via Pip and most things worked. But when I came to SciPy, it co... | 0 | 1 | 131,918 |
0 | 44,685,941 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2014-10-30T15:39:00.000 | 0 | 14 | 0 | Installing NumPy and SciPy on 64-bit Windows (with Pip) | 26,657,334 | 0 | python,numpy,scipy,windows64 | for python 3.6, the following worked for me
launch cmd.exe as administrator
pip install numpy-1.13.0+mkl-cp36-cp36m-win32
pip install scipy-0.19.1-cp36-cp36m-win32 | I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything.
I tried to install everything via Pip and most things worked. But when I came to SciPy, it co... | 0 | 1 | 131,918 |
0 | 26,727,002 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-11-04T02:49:00.000 | -2 | 2 | 0 | Summation of every row, column and diagonal in a 3x3 matrix numpy | 26,726,950 | -0.197375 | python,numpy,matrix,indexing,pygame | Set a bool that checks every turn whether someone has won. If it returns true, then whoever's turn it is has won.
So, for instance, it is X's turn: he plays the winning move, the bool check returns true, you print out that the player whose turn it is has won, and you end the game. | My assignment is Tic-Tac-Toe using pygame and numpy. I have almost all of the program done. I just need help understanding how to determine whether a winner has been found. A winner is found if the summation of ANY row, column, or diagonal is equal to 3.
I have two 3x3 matrices filled with 0's. Let's call them xPlayer and oPlayer. Th... | 0 | 1 | 1,887 |
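The win test described in the question, any row, column, or diagonal of a player's 0/1 matrix summing to 3, is a few lines of numpy. A sketch with an illustrative function name:

```python
import numpy as np

def has_won(board):
    """board: 3x3 array of 0/1 marks for one player."""
    sums = list(board.sum(axis=0)) + list(board.sum(axis=1))
    sums.append(np.trace(board))             # main diagonal
    sums.append(np.trace(np.fliplr(board)))  # anti-diagonal
    return 3 in sums

x_player = np.zeros((3, 3), dtype=int)
x_player[0, 0] = x_player[1, 1] = x_player[2, 2] = 1  # X completes a diagonal
```

np.fliplr reverses the columns, so its trace is the anti-diagonal sum.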
0 | 29,713,740 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2014-11-04T10:57:00.000 | 2 | 1 | 0 | Processing musical genres using K-nn algorithm, how to deal with extracted feature? | 26,733,418 | 0.379949 | python,algorithm,classification,extraction | One approach would be to take the least RMS energy value of the signal as a parameter for classification.
You should use a music segment, rather than using the whole music file for classification.Theoretically, the part of the music of 30 sec, starting after the first 30 secs of the music, is best representative for ge... | I'm developing a little tool which is able to classify musical genres. To do this, I would like to use a K-nn algorithm (or another one, but this one seems to be good enough) and I'm using python-yaafe for the feature extraction.
My problem is that, when I extract a feature from my song (example: mfcc), as my songs are... | 0 | 1 | 449 |
0 | 26,763,840 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-05T17:30:00.000 | 0 | 1 | 0 | Can CryptGenRandom generate all possible permutations? | 26,763,448 | 0 | python,random | You are almost correct: you need a generator not with a period of 400!, but with an internal state of more than log2(400!) bits (which will also have a period larger than 400!, but the latter condition is not sufficient). So you need at least 361 bytes of internal state. CryptGenRandom doesn't qualify, but it ought to ... | I would like to shuffle a relatively long array (length ~400). While I am not a cryptography expert, I understand that using a random number generator with a period of less than 400! will limit the space of the possible permutations that can be generated.
I am trying to use Python's random.SystemRandom number generator... | 0 | 1 | 431 |
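The 361-byte figure in the answer is easy to verify, since Python integers are arbitrary precision; and random.SystemRandom draws from os.urandom, so it is not limited by a fixed-size internal state:

```python
import math
import random

# State needed to reach every permutation of a 400-element sequence:
bits_needed = math.log2(math.factorial(400))   # roughly 2886 bits
bytes_needed = math.ceil(bits_needed / 8)      # 361 bytes

# SystemRandom is backed by os.urandom rather than a small fixed-size state.
deck = list(range(400))
random.SystemRandom().shuffle(deck)
```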
0 | 26,769,689 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-11-05T20:48:00.000 | 0 | 1 | 0 | Multiindex or dictionaries | 26,766,803 | 0 | python,pandas,hierarchy,multi-index | In general in my experience is more difficult to compare different Data Frames, so I would suggest to use one.
With some practical example I can try to give better advice.
However, personally I prefer to use an extra column instead of many Multiindex levels, but it's just my personal opinion. | I am trying to analyze results from several thermal building simulations. Each simulation produces hourly data for several variables and for each room of the analyzed building. Simulations can be repeated for different scenarios and each one of these scenarios will produce a different hourly set of data for each room a... | 0 | 1 | 173 |
0 | 26,826,242 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 0.066568 | python,cluster-analysis,k-means | K-means is indeed sensitive to noise BUT investigate your data!
Have you pre-processed your "real-data" before applying the distance measure on it?
Are you sure your distance metric represents proximity as you expected?
There are a lot of possible "bugs" that may cause this scenario... not necessarily k-means' fault. | I use the k-means algorithm to cluster a set of documents.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (... | 0 | 1 | 272 |
0 | 26,817,383 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 0.066568 | python,cluster-analysis,k-means | K-means is highly sensitive to noise!
Noise, which is farther away from the data, becomes even more influential when you square its deviations. This makes k-means really sensitive to it.
Produce a data set, with 50 points distributed N(0;0.1), 50 points distributed N(1;0.1) and 1 point at 100. Run k-means with k=2, ... | I use the k-means algorithm to clustering set of documents.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (... | 0 | 1 | 272 |
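The experiment proposed in the answer above can be reproduced with a few lines of numpy. This is a toy 1-D k-means (not scikit-learn's), seeded for repeatability; the single point at 100 captures one of the two centroids all by itself, leaving every other point in one big cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.1, 50),
                       rng.normal(1, 0.1, 50),
                       [100.0]])

# Toy 1-D k-means with k=2, initialized at the data extremes.
centroids = np.array([data.min(), data.max()])
for _ in range(20):
    labels = np.abs(data[:, None] - centroids[None, :]).argmin(axis=1)
    centroids = np.array([data[labels == j].mean() for j in range(2)])

cluster_sizes = sorted(int(n) for n in np.bincount(labels, minlength=2))
```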
0 | 26,793,992 | 0 | 0 | 0 | 0 | 3 | true | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 1.2 | python,cluster-analysis,k-means | K-means clustering attempts to minimize sum of distances between each point and a centroid of a cluster each point belongs to. Therefore, if 90% of your points are close together the sum of distances between those points and the cluster centroid is fairly small, Therefore, the k-means solving algorithm puts a centroid ... | I use the k-means algorithm to clustering set of documents.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (... | 0 | 1 | 272 |
0 | 36,050,176 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2014-11-07T06:39:00.000 | 11 | 3 | 0 | Output 50 samples closest to each cluster center using scikit-learn.k-means library | 26,795,535 | 1 | python,scikit-learn,k-means | One correction to @snarly's answer.
after performing d = km.transform(X)[:, j],
d has elements of distances to centroid(j), not similarities.
so in order to give closest top 50 indices, you should remove '-1', i.e.,
ind = np.argsort(d)[:50]
(normally, d has sorted score of distance in ascending order.)
Also, ... | I have fitted a k-means algorithm on 5000+ samples using the python scikit-learn library. I want to have the 50 samples closest to a cluster center as an output. How do I perform this task? | 0 | 1 | 9,095 |
0 | 26,883,907 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2014-11-12T09:49:00.000 | 1 | 1 | 0 | Import error when using scipy.io module | 26,883,835 | 0.197375 | python,scipy | I would take a guess and say your Python doesn't know where you installed scipy.io. Add the scipy path to PYTHONPATH. | I'm involved in a raspberry pi project and I use python language. I installed scipy, numpy, matplotlib and other libraries correctly. But when I type
from scipy.io import wavfile
it gives error as "ImportError: No module named scipy.io"
I tried to re-install them, but when I type the sudo command, it says already the ne... | 0 | 1 | 1,955 |
0 | 26,904,535 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-12T23:08:00.000 | 4 | 1 | 0 | Fastest Count Vectorizer Implementation | 26,898,410 | 1.2 | python,machine-learning,nlp,scikit-learn,vectorization | Have you tried HashingVectorizer? It's slightly faster (up to 2X if I remember correctly). Next step is to profile the code, strip the features of CountVectorizer or HashingVectorizer that you don't use and rewrite the remaining part in optimized Cython code (after profiling again).
Vowpal Wabbit's bare-bone feature pr... | I'm looking for an implementation of n-grams count vectorization that is more efficient than scikit-learn's CountVectorizer. I've identified the CountVectorizer.transform() call as a huge bottleneck in a bit of software, and can dramatically increase model throughput if we're able to make this part of the pipeline mor... | 0 | 1 | 1,295 |
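The appeal of HashingVectorizer mentioned above is that feature hashing needs no vocabulary and no fit pass. A stdlib-only toy version, not scikit-learn's implementation (which also handles n-grams, hash signs, and sparse output):

```python
import hashlib
from collections import Counter

def hash_vectorize(text, n_features=16):
    """Map a document straight to a fixed-length count vector via token hashing."""
    vec = [0] * n_features
    for token, count in Counter(text.lower().split()).items():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % n_features
        vec[bucket] += count
    return vec

v1 = hash_vectorize("the cat sat on the mat")
v2 = hash_vectorize("the cat sat on the mat")
```

Identical documents always hash to identical vectors, and distinct tokens may collide into the same bucket, which is the memory/accuracy trade-off of the technique.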
0 | 26,917,183 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-13T19:45:00.000 | 2 | 2 | 1 | What is the most efficient way to write 3GB of data to datastore? | 26,917,114 | 0.197375 | python-2.7,google-app-engine,google-cloud-datastore | If you need to store each row as a separate entity, it does not matter how you create these entities - you can improve the performance by batching your requests, but it won't affect the costs.
The costs depend on how many indexed properties you have in each entity. Make sure that you only index the properties that you ... | I have a 3Gb csv file. I would like to write all of the data to GAE datastore. I have tried reading the file row by row and then posting the data to my app, but I can only create around 1000 new entities before I exceed the free tier and start to incur pretty hefty costs. What is the most efficient / cost effective way... | 1 | 1 | 102 |
0 | 26,942,545 | 0 | 1 | 0 | 0 | 1 | false | 52 | 2014-11-15T04:16:00.000 | 5 | 10 | 0 | Reading csv zipped files in python | 26,942,476 | 0.099668 | python-2.7,csv,zip | Yes. You want the module 'zipfile'
You open the archive itself with zipfile.ZipFile(filename[, mode]) (zipfile.ZipInfo is the per-member metadata class, not the opener)
You can then use ZipFile.infolist() to enumerate each file within the zip, and extract it with ZipFile.open(name[, mode[, pwd]]) | I'm trying to get data from a zipped csv file. Is there a way to do this without unzipping the whole files? If not, how can I unzip the files and read them efficiently? | 0 | 1 | 70,401 |
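A sketch of the streaming approach; the archive is built in memory here only to make the example self-contained:

```python
import csv
import io
import zipfile

# Build a small archive in memory to stand in for the real zipped CSV.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", "name,score\nalice,3\nbob,5\n")

rows = []
with zipfile.ZipFile(buf) as zf:
    for info in zf.infolist():                  # enumerate members
        with zf.open(info.filename) as raw:     # stream without extracting to disk
            reader = csv.reader(io.TextIOWrapper(raw, encoding="utf-8"))
            rows.extend(reader)
```

ZipFile.open returns a binary file-like object, so TextIOWrapper bridges it to the text-mode csv reader without unzipping anything to disk.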
0 | 55,484,853 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2014-11-15T19:23:00.000 | 4 | 2 | 0 | Pass pandas dataframe into class | 26,949,755 | 0.379949 | python,class,pandas | I would think you could create the dataframe in the first instance with
a = MyClass(my_dataframe)
and then just make a copy
b = a.copy()
Then b is independent of a | I would like to create a class from a pandas dataframe that is created from csv. Is the best way to do it, by using a @staticmethod? so that I do not have to read in dataframe separately for each object | 0 | 1 | 37,394 |
0 | 26,963,180 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-11-16T09:45:00.000 | 0 | 1 | 0 | How to use sklearn's DBSCAN with a spherical metric? | 26,955,646 | 1.2 | python,scikit-learn,dbscan,metric | Have you tried metric="precomputed"?
Then pass the distance matrix to the DBSCAN.fit function instead of the data.
From the documentation:
X array [n_samples, n_samples] or [n_samples, n_features] :
Array of distances between samples, or a feature array. The array is treated as a feature array unless the metric is giv... | I have a set of data distributed on a sphere and I am trying to understand what metrics must be given to the function DBSCAN distributed by scikit-learn. It cannot be the Euclidean metrics, because the metric the points are distributed with is not Euclidean. Is there, in the sklearn packet, a metric implemented for suc... | 0 | 1 | 816 |
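For points on a sphere, the precomputed route means building a pairwise great-circle (haversine) distance matrix first and then calling DBSCAN(metric='precomputed').fit(D). A sketch of the matrix part (inputs in radians; the DBSCAN call itself is omitted here):

```python
import numpy as np

def great_circle_matrix(lat, lon, radius=1.0):
    """Pairwise great-circle (haversine) distances for points on a sphere."""
    lat = np.asarray(lat)[:, None]
    lon = np.asarray(lon)[:, None]
    dlat, dlon = lat - lat.T, lon - lon.T
    a = np.sin(dlat / 2) ** 2 + np.cos(lat) * np.cos(lat.T) * np.sin(dlon / 2) ** 2
    return 2 * radius * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))

# Two points on the equator, 90 degrees of longitude apart:
D = great_circle_matrix([0.0, 0.0], [0.0, np.pi / 2])
```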
0 | 26,958,901 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-11-16T15:45:00.000 | 2 | 2 | 0 | Using isinstance() versus duck typing | 26,958,759 | 0.197375 | python,matplotlib,duck-typing,isinstance | Why not write two separate functions, one that treats its input as a color map, and another that treats its input as a color? This would be the simplest way to deal with the problem, and would both avoid surprises, and leave you room to expand functionality in the future. | I'm writing an interface to matplotlib, which requires that lists of floats are treated as corresponding to a colour map, but other types of input are treated as specifying a particular colour.
To do this, I planned to use matplotlib.colors.colorConverter, which is an instance of a class that converts the other types o... | 0 | 1 | 888 |
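For comparison, the isinstance route might look like the sketch below. It also shows why the answer's advice is sound: a plain 3-float sequence is ambiguous between "three colormap values" and "one RGB colour", so two separate functions avoid guessing. The function name is illustrative.

```python
def looks_like_colormap_data(value):
    """Heuristic: a non-empty sequence of floats is treated as colormap input."""
    return (isinstance(value, (list, tuple))
            and len(value) > 0
            and all(isinstance(v, float) for v in value))

kind_a = looks_like_colormap_data([0.1, 0.5, 0.9, 0.2])  # True
kind_b = looks_like_colormap_data("red")                 # False
```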
0 | 26,967,730 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-17T06:03:00.000 | 5 | 1 | 0 | Is there a way to alter the edge opacity in Python igraph? | 26,966,487 | 1.2 | python,igraph,opacity | Edge opacity can be altered with the color attribute of the edge or with the edge_color keyword argument of plot(). The colors that you specify there are passed through the color_name_to_rgba function so you can use anything that color_name_to_rgba understands there; the easiest is probably an (R, G, B, A) tuple or the... | I know that you can adjust a graphs overall opacity in the plot function (opacity = (0 to 1)), but I cannot find anything in the manual or online searches that speak of altering the edge opacity (or transparency)? | 0 | 1 | 1,146 |
0 | 27,003,691 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-18T20:44:00.000 | 0 | 3 | 0 | numpy arrays will not concatenate | 27,003,660 | 1.2 | python,numpy | try np.hstack((a.reshape(1496, 1), b.reshape(1496, 1), c)). To be more general, it is np.hstack((a.reshape(a.size, 1), b.reshape(b.size, 1), c)) | I have three arrays a, b, c.
The are the shapes (1496,) (1496,) (1496, 1852). I want to join them into a single array or dataframe.
The first two arrays are single column vector, where the other has several columns. All three have 1496 rows.
My logic is to join them into a single array by df=np.concatenate((a,b,c))
But the... | 0 | 1 | 47 |
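The reshape in the accepted answer can also be written with -1, letting numpy infer the row count. A small sketch with made-up shapes matching the question's layout:

```python
import numpy as np

a = np.arange(4.0)        # shape (4,)
b = np.arange(4.0, 8.0)   # shape (4,)
c = np.ones((4, 3))       # shape (4, 3)

# reshape(-1, 1) turns each vector into a single column, whatever its length.
joined = np.hstack((a.reshape(-1, 1), b.reshape(-1, 1), c))
```

Plain np.concatenate fails here because a and b are 1-D while c is 2-D; hstack after reshaping makes all pieces 2-D with matching row counts.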
0 | 27,011,549 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-11-19T07:39:00.000 | 2 | 2 | 0 | Shoud I use numpy for a image manipulation program? why | 27,011,456 | 1.2 | python,arrays,image-processing,numpy | Well I think you could do that, but maybe less convenient. The reasons could be:
numpy supports all the matrix manipulations and since it is optimized, could be much faster (You can also switch to OpenBLAS to make it amazingly faster). For image-processing problems, in some cases where images become larger, it could b... | Is there any reason why I should use numpy to represent pixels in an image manipulation program as opposed to just storing the values in my own array of numbers? Currently I am doing the latter but I see lots of people talking about using numpy for representing pixels as multidimensional arrays. Other than that are the... | 0 | 1 | 291 |
0 | 27,019,410 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-19T14:24:00.000 | 1 | 1 | 0 | Convert String containing letters to Int efficiently - Apache Spark | 27,019,270 | 0.197375 | java,python,scala,apache-spark | If you just want any matchable String to an int - String.hashCode(). However you will have to deal with possible hash collisions. Alternatively you'd have to convert each character to its int value and append (not add) all of these together. | I am working with a dataset that has users as Strings (ie. B000GKXY4S). I would like to convert each of these users to int, so I can use Rating(user: Int, product: Int, rating: Double) class in Apache Spark ALS. What is the most efficient way to do this? Preferably using Spark Scala functions or python native function... | 0 | 1 | 735 |
0 | 27,033,373 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2014-11-20T06:34:00.000 | 2 | 3 | 0 | Import csv into QGIS using Python | 27,033,261 | 0.132549 | python,csv,qgis | There is a parenthesis missing from the end of your --6 line of code. | I am attempting to import a file into QGIS using a python script. I'm having a problem getting it to accept the CRS. Code so far
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from qgis.core import *
from qgis.utils import iface
----1 Set file name here
InFlnm='Input.CSV'
---2 Set pathname here
InDrPth='G:/test... | 0 | 1 | 8,411 |
0 | 27,035,506 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2014-11-20T08:50:00.000 | 1 | 3 | 0 | X=sm.add_constant(X, prepend=True) is not working | 27,035,257 | 1.2 | python,regression,linear-regression | If sm is a defined object in statsmodels, you need to invoke it as statsmodels.sm, or use from statsmodels import sm, after which you can invoke sm directly. | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | 0 | 1 | 8,510 |
0 | 61,634,875 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-11-20T08:50:00.000 | 5 | 3 | 0 | X=sm.add_constant(X, prepend=True) is not working | 27,035,257 | 0.321513 | python,regression,linear-regression | Try importing statsmodels.api
import statsmodels.api as sm | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | 0 | 1 | 8,510 |
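If the attribute error persists, what add_constant(X, prepend=True) does can be reproduced by hand with numpy: it just prepends a column of ones, whose fitted coefficient becomes the intercept. A sketch assuming a 2-D X:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0]])

# Equivalent of sm.add_constant(X, prepend=True): a leading column of ones.
X_const = np.column_stack([np.ones(len(X)), X])
```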
0 | 27,050,808 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-11-20T21:38:00.000 | 6 | 2 | 0 | Is there a reason that scikit-learn only allows access to clf.coef_ with linear svms? | 27,050,055 | 1.2 | python,machine-learning,scikit-learn,svm | They simply don't exist for kernels that are not linear: The kernel SVM is solved in the dual space, so in general you only have access to the dual coefficients.
In the linear case this can be translated to primal feature space coefficients. In the general case these coefficients would have to live in the feature space... | I would like to calculate the primal variables w with a polynomial kernel svm, but to do this i need to compute clf.coef_ * clf.support_vectors_. Access is restricted to .coef_ on all kernel types except for linear - is there a reason for this, and is there another way to derive w in that case? | 0 | 1 | 1,416 |
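For the linear case, the translation the answer mentions is a single matrix product, w = dual_coef_ @ support_vectors_ (plus intercept_ for the bias term). A numpy sketch with made-up fitted values, laid out in scikit-learn's shapes:

```python
import numpy as np

# Hypothetical fitted values in scikit-learn's layout (binary classification):
dual_coef = np.array([[0.5, -0.25, -0.25]])   # alpha_i * y_i, shape (1, n_SV)
support_vectors = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [1.0, 1.0]])      # shape (n_SV, n_features)

w = dual_coef @ support_vectors               # primal weights, linear kernel only
```

For a polynomial or RBF kernel the same product is not the primal w, because the support vectors would need to be mapped into the (implicit, possibly infinite-dimensional) feature space first.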
0 | 27,070,113 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-21T20:36:00.000 | 0 | 1 | 0 | medium datasets under source control | 27,069,898 | 1.2 | python,git,svn,csv | If you're asking whether it would be efficient to put your datasets under version control, based on your description of the data, I believe the answer is yes. Both Mercurial and Git are very good at handling thousands of text files. Mercurial might be a better choice for you, since it is written in python and is easie... | This is more of a general question about how feasible is it to store data sets under source control.
I have 20 000 csv files with number data that I update every day. The overall size of the directory is 100Mbytes or so, that are stored on a local disk on ext4 partition.
Each day changes should be diffs of about 1kbyte... | 0 | 1 | 50 |
0 | 27,140,986 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-11-23T07:50:00.000 | 2 | 2 | 0 | Finding the most similar documents (nearest neighbours) from a set of documents | 27,086,753 | 0.197375 | python,scikit-learn,nltk | You should learn about hashing mechanisms that can be used to calculate similarity between documents.
Typical hash functions are designed to minimize collisions, mapping near duplicates to very different hash keys. In cryptographic hash functions, if the data is changed by one bit, the hash key will be changed to a co...
0 | 27,095,449 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-11-23T16:30:00.000 | 0 | 1 | 0 | Using cross-validation to find the right value of k for the k-nearest-neighbor classifier | 27,091,319 | 0 | ipython,classification,decision-tree,nearest-neighbor,cross-validation | I assume here that you mean the value of k that returns the lowest error in your wine quality model.
I find that a good k can depend on your data. Sparse data might prefer a lower k whereas larger datasets might work well with a larger k. In most of my work, a k between 5 and 10 have been quite good for problems with... | I am working on a UCI data set about wine quality. I have applied multiple classifiers and k-nearest neighbor is one of them. I was wondering if there is a way to find the exact value of k for nearest neighbor using 5-fold cross validation. And if yes, how do I apply that? And how can I get the depth of a decision tre... | 0 | 1 | 185 |
0 | 27,152,696 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2014-11-23T17:15:00.000 | 3 | 1 | 0 | Reading time from analog clock using Hough Line Transform in Python (OpenCV) | 27,091,836 | 1.2 | python,opencv,hough-transform | I've managed to solve my problem.
I've been trying to use Hough Line Transform where I was supposed to use Hough Probabilistic Transform. The moment I got it, I grouped lines drawn along similar functions, sorted them by length, and used arcsine as well as locations of their ends to find precise degrees at which hands
The cv2.HoughLines function returns angles at which lines lay (measuring from the top of the image) and their distance from upper-left corner of the im... | 0 | 1 | 2,056 |
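The final step, turning a detected hand segment into clock time, reduces to one atan2 per hand. A sketch assuming image coordinates (y grows downward) and a known clock centre; the names are illustrative:

```python
import math

def hand_angle(cx, cy, x, y):
    """Clockwise angle in degrees from 12 o'clock of the hand tip (x, y)."""
    return math.degrees(math.atan2(x - cx, cy - y)) % 360

def minutes_from_angle(deg):
    return round(deg / 6) % 60   # 360 degrees == 60 minutes

straight_up = minutes_from_angle(hand_angle(100, 100, 100, 40))    # 12 o'clock
quarter_past = minutes_from_angle(hand_angle(100, 100, 160, 100))  # 3 o'clock
```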
0 | 27,368,753 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-25T23:31:00.000 | 1 | 1 | 0 | naive bayes feature vectors in pmml | 27,138,752 | 1.2 | python,machine-learning,scikit-learn,pmml | Since the PMML representation of the Naive Bayes model implements representing joint probabilities via the "PairCounts" element, one can simply replace that ratio with the probabilities output (not the log probability). Since the final probabilities are normalized, the difference doesn't matter. If the requirements inv... | I am trying to build my own pmml exporter for Naive Bayes model that I have built in scikit learn. In reading the PMML documentation it seems that for each feature vector you can either output the model in terms of count data if it is discrete or as a Gaussian/Poisson distribution if it is continous. But the coeffici... | 0 | 1 | 170 |
0 | 27,156,673 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-26T09:43:00.000 | 0 | 4 | 0 | How to find biggest sum of items not exceeding some value? | 27,145,789 | 0 | python,algorithm,mathematical-optimization,knapsack-problem,greedy | This problem can be phrased as a zero-one assignment problem, and solved with a linear programming package, such as GLPK, which can handle integer programming problems. The problem is to find binary variables x[i] such that the sum of x[i]*w[i] is as large as possible, and less than the prescribed limit, where w[i] are... | How to find biggest sum of items not exceeding some value? For example I have 45 values like this: 1.0986122886681098, 1.6094379124341003, 3.970291913552122, 3.1354942159291497, 2.5649493574615367. I need to find biggest possible combination not exceeding 30.7623.
I can't use brute force to find all combinations, as the amou... | 0 | 1 | 1,217 |
0 | 27,156,595 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-26T15:14:00.000 | 1 | 1 | 0 | Plot two images side by side with skimage | 27,152,624 | 1.2 | python-2.7,image-processing,plot,scikit-image | See skimage.feature.plot_matches, pass empty list of keypoints and matches if you only want to plot the images without points. | Looking up at different feature matching tutorials I've noticed that it's tipical to illustrate how the matching works by plotting side by side the same image in two different version (one normal and the other one rotated or distorted). I want to work on feature matching by using two distinct images (same scene shot fr... | 0 | 1 | 722 |
0 | 27,238,940 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-26T19:10:00.000 | 0 | 1 | 0 | Append data to end of human-readable file Python | 27,157,087 | 0 | python,numpy,save,append | Thanks for your thoughts. These two options came to my mind too but I need the mixture of both: My specific use case requires the file to be human readable - as far as I know pickling does not provide that and saving to a dictionary destroys the order. I need the data to be dropped as they need to be manipulated in oth... | In one run my python script calculates and returns the results for the variables A, B, C.
I would like to append the results run by run, row by row to a human-readable file.
After the runs i, I want to read the data back as numpy.arrays of the columns.
i | A B C
1 | 3 4 6
2 | 4 6 7
And maybe even access the row ... | 0 | 1 | 105 |
0 | 27,162,750 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-11-27T03:42:00.000 | 1 | 3 | 0 | Python: How to check that two CSV files with header rows contain same information disregarding row and column order? | 27,162,717 | 0.066568 | python,unit-testing,csv | Store the first file in a dictionary, with the corresponding CSV header field as key and the row values as values.
Then read the second file and check it against the dictionary. | For unit testing a method, I want to compare a CSV file generated by that method (the actual result) against a manually created CSV (the expected result).
The files are considered equal, if the fields of the first row are exactly the same (i.e. the headers), and if the remaining row contain the same information.
The f... | 0 | 1 | 1,472 |
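The comparison described above, same header fields and same rows with row and column order ignored, fits in a few stdlib lines. A sketch; duplicate rows are respected by counting:

```python
import csv
import io
from collections import Counter

def csvs_equal(text_a, text_b):
    """True if two CSV texts hold the same rows (as header->value maps)."""
    def as_multiset(text):
        reader = csv.DictReader(io.StringIO(text))
        return Counter(frozenset(row.items()) for row in reader)
    return as_multiset(text_a) == as_multiset(text_b)

same = csvs_equal("a,b\n1,2\n3,4\n", "b,a\n4,3\n2,1\n")  # reordered, still equal
diff = csvs_equal("a,b\n1,2\n", "a,b\n1,5\n")
```

Because each row becomes a frozenset of (header, value) pairs, both column order and row order drop out of the comparison automatically.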
0 | 27,177,167 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-11-27T08:43:00.000 | 1 | 2 | 0 | Elastic Search query filtering | 27,166,357 | 0.099668 | python,search,curl,elasticsearch | The above search example looks correct. Try lowercasing "Data Analyst" to "data analyst".
If that doesn't help, post your mappings, the query you are firing, and the response you are getting. | I have uploaded some data into Elastic server as " job id , job place , job req , job desc ". My index is my_index and doctype = job_list.
I need to write a query to find a particular term say " Data Analyst " and it should give me back matching results with a specified field like " job place " .
ie, Data Analyst term ... | 0 | 1 | 5,426 |
0 | 27,378,733 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-11-27T19:37:00.000 | 1 | 1 | 0 | Training a LDA model with gensim from some external tf-idf matrix and term list | 27,177,721 | 0.197375 | python-3.x,tf-idf,lda,topic-modeling,gensim | id2word must map each id (integer) to term (string).
In other words, it must support id2word[123] == 'koala'.
A plain Python dict is the easiest option. | I have a tf-idf matrix already, with rows for terms and columns for documents. Now I want to train a LDA model with the given terms-documents matrix. The first step seems to be using gensim.matutils.Dense2Corpus to convert the matrix into the corpus format. But how to construct the id2word parameter? I have the list of... | 0 | 1 | 505 |
1 | 27,192,613 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-28T16:11:00.000 | 3 | 1 | 0 | Generate a random point in space (x, y, z) with a boundary | 27,192,467 | 1.2 | python,random,spatial,coordinate | There's a lot that's unspecified in your question, such as what distribution you want to use. For the sake of this answer, I'll assume a uniform distribution.
The straightforward way to handle an arbitrary volume uniform distribution is to choose three uniformly random numbers as coordinates in the range of the boundin... | I would like to generate a uniformly random coordinate that is inside a convex bounding box defined by its (at least) 4 vertices (for the case of a tetrahedron).
Can someone suggest an algorithm that I can use?
Thanks!
If a point is generated in a bounding box, how do you detect whether or not it is outside the geomet... | 0 | 1 | 1,463 |
0 | 27,195,171 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-28T16:35:00.000 | 1 | 1 | 1 | To run python script in apache spark/Storm | 27,192,852 | 0.197375 | python,hadoop,apache-spark | First and foremost what are you trying to achieve? What does running on Hadoop technology mean to you? If the goal is to work with a lot of data, this is one thing, if it's to parallelize the algorithm, it's another. My guess is you want both.
First thing is: is the algorithm parallelizable? Can it run on multiple piec... | I am having an algorithm written in python (not hadoop compatible i.e. not mapper.py and reducer.py) and it is running perfectly in local system (not hadoop). My objective is to run this in hadoop.
Option 1: Hadoop streaming. But, I need to convert this python script into mapper and reducer. Any other way?
Option 2:... | 0 | 1 | 1,217 |
0 | 27,239,565 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2014-11-30T16:04:00.000 | 4 | 1 | 0 | Integrating exisiting Python Library to Anaconda | 27,215,170 | 1.2 | python,anaconda | There is no need to remove your system Python. Anaconda sits alongside it. When it installs, it adds a line to your .bashrc that adds the Anaconda directory first in your PATH. This means that whenever you type python or ipython in the terminal, it will use the Anaconda Python (and the Anaconda Python will automaticall... | I've been installing few Library/Toolkit for Python like NLTK, SciPy and NumPy on my Ubuntu. I would like to try to use Anaconda distribution though. Should I remove my existing libraries before installing Anaconda? | 0 | 1 | 3,390 |
0 | 27,259,240 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-30T19:08:00.000 | 0 | 1 | 0 | network animation with static nodes in python or even webgl | 27,217,051 | 1.2 | python,opengl,webgl,ipython,vispy | This looks like a good use-case for Vispy indeed. You'd need to use a PointVisual for the nodes, and a LineVisual for the edges. Then you can update the edges in real time as the simulation is executed.
The animation would also work in the IPython notebook with WebGL.
Note that other graphics toolkits might also work f... | So I have a particular task I need help with, but I was not sure how to do it. I have a model for the formation of ties between a fixed set of network nodes. So I want to set up a window or visualization that shows the set of all nodes on some sort of 2-dimensional or 3-dimensional grid. Then for each timestep, I want ... | 0 | 1 | 308 |
0 | 27,256,151 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-12-02T17:36:00.000 | 1 | 1 | 0 | Training a Machine Learning predictor | 27,255,560 | 1.2 | python,machine-learning,language-features,feature-selection | You either need to under-sample the bigger class (take a small random sample to match the size of the smaller class), over-sample the smaller class (bootstrap sample), or use an algorithm that supports unbalanced data - and for that you'll need to read the documentation.
You need to turn your words into a word vector. ... | I have been trying to build a prediction model using a user’s data. Model’s input is documents’ metadata (date published, title etc) and document label is that user’s preference (like/dislike). I would like to ask some questions that I have come across hoping for some answers:
There are way more liked documents than d... | 0 | 1 | 232 |
0 | 27,308,244 | 0 | 1 | 0 | 0 | 2 | true | 6 | 2014-12-05T03:25:00.000 | 8 | 2 | 1 | /usr/bin/python vs /opt/local/bin/python2.7 on OS X | 27,308,234 | 1.2 | python,macos,python-2.7,numpy,matplotlib | Points to keep in mind about Python
If a script foobar.py starts with #!/usr/bin/env python, then you will always get the OS X Python. That's the case even though MacPorts puts /opt/local/bin ahead of /usr/bin in your path. The reason is that MacPorts uses the name python2.7. If you want to use env and yet use MacPort... | Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts?
While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer... | 0 | 1 | 10,161 |
0 | 27,400,616 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2014-12-05T03:25:00.000 | 0 | 2 | 1 | /usr/bin/python vs /opt/local/bin/python2.7 on OS X | 27,308,234 | 0 | python,macos,python-2.7,numpy,matplotlib | May I also suggest using Continuum Analytics "anaconda" distribution. One benefit in doing so would be that you won't then need to modify he standard OS X python environment. | Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts?
While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer... | 0 | 1 | 10,161 |
0 | 27,312,169 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-12-05T08:45:00.000 | 1 | 1 | 0 | How to convert numpy distribution to an array? | 27,311,941 | 1.2 | python,arrays,numpy | Just putting a list(...) call around your call to normal will turn it into a regular Python list. | I am using the function numpy.random.normal(0,0.1,20) to generate some numbers. Given below is the output I get from the function. The problem is I want these numbers to be in an array format.
[ 0.13500488 0.11023982 0.09908623 -0.01437589 0.00619559 -0.17200946
-0.00501746 0.07422642 0.1226481 -0.01422786 -0.0... | 0 | 1 | 330 |
0 | 27,370,090 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2014-12-06T01:02:00.000 | 0 | 2 | 0 | OS X not using most recent NumPY version | 27,327,104 | 1.2 | python,macos,numpy | The new NumPY version would install (via pip) into the System path, where it wasn't being recognized by Python. To solve this I ran pip install --user numpy==1.7.1 to specify I want NumPY version 1.7.1 on my Python (user) path.
:) | Trying to update NumPY by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (old version). Python is importing numpy via the path '/System/Library/Frameworks/Python... | 0 | 1 | 708 |
0 | 27,328,371 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2014-12-06T01:02:00.000 | 0 | 2 | 0 | OS X not using most recent NumPY version | 27,327,104 | 0 | python,macos,numpy | You can remove the old version of numpy from
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy
.
Just delete the numpy package from there and then try to import numpy from the python shell. | Trying to update NumPY by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (old version). Python is importing numpy via the path '/System/Library/Frameworks/Python... | 0 | 1 | 708 |
0 | 27,384,466 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2014-12-09T16:26:00.000 | 1 | 2 | 0 | fuzzy matching lots of strings | 27,383,896 | 1.2 | python,sql,r,fuzzy-search,fuzzy-logic | That is exactly what I am facing at my new job daily (but lines counts are few million). My approach is to:
1) find a set of unique strings by using p = unique(a)
2) remove punctuation, split strings in p by whitespaces, make a table of words' frequencies, create a set of rules and use gsub to "recover" abbreviations... | I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems:
REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH
COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNS... | 0 | 1 | 963 |
0 | 27,385,088 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-12-09T16:26:00.000 | 1 | 2 | 0 | fuzzy matching lots of strings | 27,383,896 | 0.099668 | python,sql,r,fuzzy-search,fuzzy-logic | You can also use agrep() in R for fuzzy name matching, by giving a percentage of allowed mismatches. If you pass it a fixed dataset, then you can grep for matches out of your database. | I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems:
REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH
COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNS... | 0 | 1 | 963 |
0 | 27,387,097 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2014-12-09T16:53:00.000 | 3 | 1 | 0 | OpenCV python on raspberry | 27,384,395 | 1.2 | python,opencv,raspberry-pi | Check the API docs for 3.0. Some python functions return more parameters or in a different order.
example: cv2.cv.CV_HAAR_SCALE_IMAGE was replaced with cv2.CASCADE_SCALE_IMAGE
or
(cnts, _) = cv2.findContours(...) now returning the modified image as well
(modImage, cnts, _) = cv2.findContours(...) | I've installed on my raspberry opencv python module and everything was working fine. Today I've compiled a C++ version of OpenCV and now when I want to run my python script i get this error:
Traceback (most recent call last):
File "wiz.py", line 2, in
import cv2.cv as cv
ImportError: No module named cv | 0 | 1 | 1,523 |
0 | 27,412,604 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-12-10T04:58:00.000 | 2 | 3 | 0 | binary document classification | 27,393,613 | 0.132549 | python,machine-learning,nlp,nltk | I generally recommend using Scikit as Slater suggested. It's more scalable than NLTK. For this task, a Naive Bayes Classifier or a Support Vector Machine is your best bet. You are dealing with binary classification, so you don't have multiple classes.
As for the features that you should extract, try unigrams, bigrams, tri... | I know this is a very vague question but I'm trying to figure out the best way to do document classification. I have two sets training and testing. The training set is a set of documents each labeled 1 or 0. The documents are labeled 1 if is it a informative summary and a 0 if it is not. I'm trying to create a supervis... | 0 | 1 | 211 |
0 | 38,023,538 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2014-12-10T16:43:00.000 | 5 | 3 | 0 | import sklearn not working in PyCharm | 27,406,345 | 0.321513 | python,scikit-learn,pycharm | This worked for me:
In my PyCharm Community Edition 5.0.4, Preference -> Project Interpreter -> check whether sklearn package is installed for the current project interpreter, if not, install it. | I installed numpy, scipy and scikit-learn using pip on mac os. However in PyCharm, all imports work except when i try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this?
Also, not sure if it is relevant, but i installed scikit-learn last.
The error I receiv... | 0 | 1 | 10,664 |
0 | 27,422,973 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2014-12-10T16:43:00.000 | 6 | 3 | 0 | import sklearn not working in PyCharm | 27,406,345 | 1 | python,scikit-learn,pycharm | I managed to figure it out, i had to go to the project interpreter and change the python distribution as it had defaulted the OS installed Python rather than my own installed distribution. | I installed numpy, scipy and scikit-learn using pip on mac os. However in PyCharm, all imports work except when i try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this?
Also, not sure if it is relevant, but i installed scikit-learn last.
The error I receiv... | 0 | 1 | 10,664 |
0 | 27,453,175 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-12-11T02:35:00.000 | 1 | 1 | 0 | Stop Spyder from importing modules like `numpy`, `pandas`, etc | 27,414,466 | 1.2 | python,spyder | (Spyder dev here) This is not possible. If Pandas is installed on the same Python installation where Spyder is, then Spyder will import Pandas to: a) report to its users the minimal version needed to view DataFrames in the Variable Explorer and b) import csv files as DataFrames.
The only solution I can suggest you is t... | When I start Spyder, it automatically imports pandas and numpy. Is it possible to have Spyder ignore these modules?
I see these are imported in multiple Spyderlib files. For example, pandas gets imported in spyderlib/widgets/importwizard.py, spyderlib/baseconfig.py, etc.
(I'm trying to debug something in pandas and I... | 0 | 1 | 651 |
0 | 30,337,118 | 0 | 0 | 0 | 0 | 1 | false | 44 | 2014-12-14T15:13:00.000 | 25 | 4 | 0 | How to use Gensim doc2vec with pre-trained word vectors? | 27,470,670 | 1 | python,nlp,gensim,word2vec,doc2vec | Note that the "DBOW" (dm=0) training mode doesn't require or even create word-vectors as part of the training. It merely learns document vectors that are good at predicting each word in turn (much like the word2vec skip-gram training mode).
(Before gensim 0.12.0, there was the parameter train_words mentioned in anothe... | I recently came across the doc2vec addition to Gensim. How can I use pre-trained word vectors (e.g. found in word2vec original website) with doc2vec?
Or is doc2vec getting the word vectors from the same sentences it uses for paragraph-vector training?
Thanks. | 0 | 1 | 40,470 |
0 | 27,522,080 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-12-17T09:09:00.000 | 1 | 1 | 0 | Which object in Numpy Python is good for matrix manipulation? numpy.array or numpy.matrix? | 27,521,836 | 0.197375 | python,numpy | Objects of type numpy.array are n-dimensional, meaning they can represent 2-dimensional matrices, as well as 3D, 4D, 5D, etc.
The numpy.matrix, however, is designed specifically for the purpose of 2-dimensional matrices. As part of this specialisation, some of the operators are modified, for example * refers to matrix ... | It seems like we can have an n-dimensional array via numpy.array;
also, numpy.matrix is exactly the matrix I want.
Which one is generally used? | 0 | 1 | 83 |
0 | 27,580,194 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2014-12-19T15:18:00.000 | 1 | 2 | 0 | Install Python 2.7.9 over 2.7.6 | 27,568,886 | 0.099668 | python,python-2.7,opencv,numpy,upgrade | Upgrading to a new version can give you a more stable and more fully featured installation. Usually this is the case - version 2.7 is mature and stable. I think you do not need to re-install/reconfigure the packages because of this stability (2.7.6 and 2.7.9 are both 2.7 anyway). Problems are hardly possible, although they may be in very... | I'm using Python for my research. I have both version of Python on my system: 3.3.2 and 2.7.6. However due to the compatibility with the required packages (openCV, Numpy, Scipy, etc.) and the legacy code, I work most of the time with Python 2.7.6.
It took me quite a lot of effort at the beginning to set up the environm... | 0 | 1 | 11,082 |
0 | 27,592,508 | 0 | 1 | 0 | 0 | 1 | true | 92 | 2014-12-21T18:32:00.000 | 119 | 5 | 0 | Floor or ceiling of a pandas series in python? | 27,592,456 | 1.2 | python,pandas,series,floor,ceil | You can use NumPy's built in methods to do this: np.ceil(series) or np.floor(series).
Both return a Series object (not an array) so the index information is preserved. | I have a pandas series series. If I want to get the element-wise floor or ceiling, is there a built in method or do I have to write the function and use apply? I ask because the data is big so I appreciate efficiency. Also this question has not been asked with respect to the Pandas package. | 0 | 1 | 103,550 |
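A minimal sketch of the accepted approach (assuming pandas and NumPy are installed; the sample values and index labels are my own):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.2, 2.7, -0.5], index=["a", "b", "c"])

floored = np.floor(s)  # element-wise floor; the index "a", "b", "c" is preserved
ceiled = np.ceil(s)    # element-wise ceiling, also returned as a Series
```

Because NumPy ufuncs applied to a Series return a Series, no `apply` call is needed, which is what makes this efficient on large data.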
0 | 27,604,701 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-12-22T14:16:00.000 | 0 | 3 | 0 | how to print finite number of digits USING the scientific notation | 27,604,441 | 0 | python | %f stands for Fixed Point and will force the number to show relative to the number 1 (1e-3 is shown as 0.001). %e stands for Exponential Notation and will give you what you want (1e-3 is shown as 1e-3). | I have some values that I need to print in scientific notation (values of the order of 10^-8, -9)
But I would like to avoid printing a long number, keeping only two digits after the decimal point,
something like:
9.84e-08
and not
9.84389879870496809597e-08
How can I do it? I tried to use
"%.2f" % a
where 'a' is the number containing the v... | 0 | 1 | 93 |
0 | 27,604,491 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2014-12-22T14:16:00.000 | 2 | 3 | 0 | how to print finite number of digits USING the scientific notation | 27,604,441 | 1.2 | python | try with this :
print "%.2e"%9.84389879870496809597e-08 #'9.84e-08' | I have some values that I need to print in scientific notation (values of the order of 10^-8, -9)
But I would like to avoid printing a long number, keeping only two digits after the decimal point,
something like:
9.84e-08
and not
9.84389879870496809597e-08
How can I do it? I tried to use
"%.2f" % a
where 'a' is the number containing the v... | 0 | 1 | 93 |
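The `"%.2e"` format from the answers, written for Python 3's `print` function (the accepted answer uses Python 2 syntax); the `str.format` and f-string forms are equivalents I have added:

```python
a = 9.84389879870496809597e-08

print("%.2e" % a)          # old-style formatting
print("{:.2e}".format(a))  # str.format equivalent
print(f"{a:.2e}")          # f-string equivalent (Python 3.6+)
```

All three produce `9.84e-08`: the `e` presentation type forces scientific notation, and `.2` limits the mantissa to two digits after the decimal point (which is why `%.2f` did not work: `f` is fixed-point, not exponential).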
0 | 27,689,079 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-12-22T18:02:00.000 | 1 | 1 | 0 | OpenCV: how to get image format if reading from buffer? | 27,608,053 | 1.2 | python,opencv,image-processing | There is a standard Python function, imghdr.what. It rulez!
^__^ | I read an image (of unknown format, most frequent are PNGs or JPGs) from a buffer.
I can decode it with cv2.imdecode, I can even check if it is valid (imdecode returns non-None).
But how can I reveal the image type (PNG, JPG, something else) of the buffer I've just read? | 0 | 1 | 2,355 |
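Note that `imghdr` was deprecated in Python 3.11 and removed in 3.13. A minimal hand-rolled fallback that checks the magic bytes of the two formats the question names (the function name is my own, and this sketch covers only PNG and JPEG):

```python
def sniff_image_type(buf: bytes):
    """Return 'png', 'jpeg', or None based on the buffer's leading magic bytes."""
    if buf.startswith(b"\x89PNG\r\n\x1a\n"):  # 8-byte PNG signature
        return "png"
    if buf.startswith(b"\xff\xd8\xff"):       # JPEG SOI marker plus first segment byte
        return "jpeg"
    return None
```

You can run this on the raw buffer before handing it to `cv2.imdecode`, since OpenCV never reports the container format back to you.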
0 | 45,621,086 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2014-12-23T22:57:00.000 | -1 | 1 | 0 | Intel MKL Error with Gaussian Fitting in Python? | 27,629,227 | -0.197375 | python,scipy,least-squares,intel-mkl | You could try Intel's python distribution. It includes a pre-built scipy optimized with MKL. | I'm doing a Monte Carlo simulation in Python in which I obtain a set of intensities at certain 2D coordinates and then fit a 2D Gaussian to them. I'm using the scipy.optimize.leastsq function and it all seems to work well except for the following error:
Intel MKL ERROR: Parameter 6 was incorrect on entry to DGELSD.
Th... | 0 | 1 | 860 |
0 | 27,637,837 | 0 | 0 | 0 | 0 | 1 | true | 69 | 2014-12-24T12:53:00.000 | 27 | 8 | 0 | What are Python pandas equivalents for R functions like str(), summary(), and head()? | 27,637,281 | 1.2 | python,r,pandas | summary() ~ describe()
head() ~ head()
I'm not sure about the str() equivalent. | I'm only aware of the describe() function. Are there any other functions similar to str(), summary(), and head()? | 0 | 1 | 53,740 |
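A quick sketch of the mapping (assuming pandas is installed; the toy DataFrame is mine, and `DataFrame.info()` is a close analogue of R's `str()` in that it shows dtypes and non-null counts):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})

df.head()      # ~ head(df): first rows (5 by default)
df.describe()  # ~ summary(df): count/mean/std/quantiles of numeric columns
df.info()      # ~ str(df): column dtypes, non-null counts, memory usage
```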
0 | 27,641,772 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2014-12-24T19:51:00.000 | 1 | 1 | 0 | Better way to store a set of files with arrays? | 27,641,616 | 1.2 | python,database,numpy,dataset,storage | Reading 500 files in Python should not take much time, as the overall file size is only a few MB. Your data structure in the file chunks is plain and simple; it'll not even take much time to parse, I guess.
If the actual slowness is because of opening and closing files, then there may be an OS-related issue (it may have very... | I've accumulated a set of 500 or so files, each of which has an array and header that stores metadata. Something like:
2,.25,.9,26 #<-- header, which is actually cryptic metadata
1.7331,0
1.7163,0
1.7042,0
1.6951,0
1.6881,0
1.6825,0
1.678,0
1.6743,0
1.6713,0
I'd like to read these arrays into memory selectively. W... | 0 | 1 | 66 |
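A minimal stdlib sketch of reading one such file chunk: the first line is treated as the metadata header, the rest as rows of floats. The file layout is taken from the question; the function name and the in-memory `StringIO` stand-in for a real file are illustrative.

```python
import io

raw = """2,.25,.9,26
1.7331,0
1.7163,0
1.7042,0
"""

def read_chunk(fh):
    """Parse one file: first line as header floats, remaining lines as data rows."""
    header = [float(v) for v in fh.readline().strip().split(",")]
    rows = [tuple(float(v) for v in line.split(","))
            for line in fh if line.strip()]
    return header, rows

header, rows = read_chunk(io.StringIO(raw))
```

Doing this parse once and storing the result in a single HDF5 or npz file (with the headers as attributes) would make the selective re-reads cheap afterwards.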
0 | 27,688,141 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-12-28T17:47:00.000 | 0 | 1 | 0 | Python/Cassandra: insert vs. CSV import | 27,678,990 | 0 | python,cassandra,load-testing | For a few million, I'd say just use CSV (assuming rows aren't huge); and see if it works. If not, inserts it is :)
For more heavy duty stuff, you might want to create sstables and use sstable loader. | I am generating load test data in a Python script for Cassandra.
Is it better to insert directly into Cassandra from the script, or to write a CSV file and then load that via Cassandra?
This is for a couple million rows. | 0 | 1 | 364 |
0 | 27,717,883 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-12-31T00:22:00.000 | 3 | 2 | 0 | how to do the sum of pixels with Python and OpenCV | 27,714,535 | 0.291313 | python,opencv,pixel,integral | The sumElems function in OpenCV will help you find the sum of the pixels of the whole image in Python. If you want to find only the sum of a particular portion of an image, you will have to select the ROI of the image on which the sum is to be calculated.
As a side note, if you had found out the integral image, the... | I have an image and want to find the sum of a part of it and then compared to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | 0 | 1 | 17,785 |
0 | 27,738,842 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-12-31T00:22:00.000 | 5 | 2 | 0 | how to do the sum of pixels with Python and OpenCV | 27,714,535 | 0.462117 | python,opencv,pixel,integral | np.sum(img[y1:y2, x1:x2, c1:c2]) Where c1 and c2 are the channels. | I have an image and want to find the sum of a part of it and then compared to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | 0 | 1 | 17,785 |
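The second answer's slicing approach as a runnable sketch (assuming NumPy; `cv2` is not needed just to sum a rectangle, and the toy image, rectangle, and threshold value are my own):

```python
import numpy as np

img = np.arange(24, dtype=np.uint32).reshape(4, 6)  # toy single-channel "image"

y1, y2, x1, x2 = 1, 3, 2, 5               # rectangle: rows [1, 3), cols [2, 5)
roi_sum = int(np.sum(img[y1:y2, x1:x2]))  # sum only the pixels inside the rectangle

threshold = 50
bright = roi_sum > threshold              # compare against the threshold
```

For a color image you would add a third slice for the channels, e.g. `img[y1:y2, x1:x2, c1:c2]`, as the second answer notes.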
0 | 27,810,170 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-01-05T21:58:00.000 | 0 | 1 | 0 | opencv_traincascade.exe error, "Please empty the data folder"? | 27,788,609 | 1.2 | python,opencv,classification,cascade | I solved it!
I downloaded opencv and all other required programs on another computer and tried running train classifier on another set of pictures. After I verified that it worked in the other computer I copied all files back to my computer and used them. | I have been successful at training a classifier before but today I started getting errors.
Problem:
When I try to train a classifier using opencv_traincascade.exe I get the following message:
"Training parameters are loaded from the parameter file in data folder!
Please empty the data folder if you want to use your o... | 0 | 1 | 359 |
0 | 55,962,515 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2015-01-06T16:50:00.000 | 0 | 2 | 0 | Is there a way to get a numpy-style view to a slice of an array stored in a hdf5 file? | 27,803,331 | 0 | python,hdf5,pytables,h5py | Copying that section of the dataset to memory is unavoidable.
The reason is simply that you are requesting the entire section, not just a small part of it.
Therefore, it must be copied completely.
So, as h5py already allows you to use HDF5 datasets in the same way as NumPy arrays, you will have to change... | I have to work on large 3D cubes of data. I want to store them in HDF5 files (using h5py or maybe pytables). I often want to perform analysis on just a section of these cubes. This section is too large to hold in memory. I would like to have a numpy style view to my slice of interest, without copying the data to memory... | 0 | 1 | 568 |
0 | 27,820,207 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-07T11:58:00.000 | 2 | 1 | 0 | Distinct 0 and 1 on histogram with logscale | 27,819,021 | 0.379949 | python,matplotlib,scale,histogram,logarithm | So I assume that you want to have a logscale on the y axis from what you have written.
Obviously, what you want to achieve won't be possible: log(0) is NaN because the logarithm of zero is not defined mathematically. You could, in theory, set ylim to a very small number close to 0, but that wouldn't help you either. Your y axis woul... | Is there any way to plot a histogram in matplotlib with a log scale that includes 0?
plt.ylim( ymin = 0 ) doesn't work because log(0) is NaN and matplotlib removes it... :( | 0 | 1 | 45 |
0 | 27,829,029 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-01-07T14:24:00.000 | 5 | 2 | 0 | Working with rasters in file geodatabase (.gdb) with GDAL | 27,821,571 | 1.2 | python,gdal | Currently both FileGDB and OpenFileGDB drivers handle only vector datasets. Raster support is not part of Esri's FGDB API.
You will need to use Esri tools to export the rasters to another format, such as GeoTIFF. | I'm working on a tool that converts raster layers to arrays for processing with NumPy, and ideally I would like to be able to work with rasters that come packaged in a .gdb without exporting them all (especially if this requires engaging ArcGIS or ArcPy).
Is this possible with the OpenFileGDB driver? From what I can... | 0 | 1 | 2,796 |
0 | 27,961,586 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2015-01-11T03:49:00.000 | 2 | 1 | 0 | Do scipy.sparse functions release the GIL? | 27,883,769 | 1.2 | python,numpy,scipy,sparse-matrix,gil | They do, for Scipy versions >= 0.14.0 | Question
Do scipy.sparse functions, like csr._mul_matvec release the GIL?
Context
Python functions that wrap foreign code (like C) often release the GIL during execution, enabling parallelism with multi-threading. This is common in the numpy codebase. Is it also common in scipy.sparse? If so which operations release... | 0 | 1 | 209 |
0 | 27,961,800 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-15T10:39:00.000 | 0 | 3 | 0 | Loading data file with too many commas in Python | 27,961,552 | 0 | python,numpy,comma,data-files | I don't know if this is an option, but you could pre-process it using tr -s ',' < file.txt (tr reads from stdin, so redirect the file into it). This is a shell command, so you'd have to do it either before calling Python or via os.system. The latter might not be the best way since dragon2fly solved the issue using a Python function.
0,0,,-2235
1,100,,-2209
2,200,,-2209
All I want is to load the data and remove the ... | 0 | 1 | 192 |
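A pure-Python alternative to the `tr` pre-processing: drop the empty fields while parsing. The sample lines come from the question; keeping this sketch stdlib-only (no numpy) is my choice.

```python
lines = ["0,0,,-2235", "1,100,,-2209", "2,200,,-2209"]

# Split on commas and discard the empty field left by the doubled comma.
data = [[float(v) for v in line.split(",") if v != ""] for line in lines]
```

Note that, like `tr -s`, this silently removes the missing column rather than keeping a placeholder; if the column position matters, substitute a NaN instead of filtering.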
1 | 27,987,246 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-01-16T05:07:00.000 | 0 | 1 | 0 | Paraview glyph uniform distribution does not work on my dataset | 27,977,626 | 0 | python,paraview | Uniform distribution works by picking a set of random locations in space and finding the data points closest to those locations to glyph. Try playing with the Seed to see if that helps pick different random locations that yield better results.
If you could share the data, that'd make it easier to figure out what could be go... | I'm running Paraview 4.2 on Linux. Here's what's happening:
I load my XDMF/hdf5 data into PV, which contains vector data.
I apply a glyph filter to the loaded data, and hit apply (thereby using the default mode of Uniform Spatial Distribution).
No glyphs appear on screen, and the information tab shows that the filter ... | 0 | 1 | 653 |
0 | 28,009,442 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-18T11:56:00.000 | 6 | 4 | 0 | randomly select 3 numbers whose sum is 356 and each of these 3 is more than 30 | 28,009,390 | 1 | python,python-2.7 | This question is rather subjective to the definition of random, and the distribution you wish to replicate.
The simplest solution:
Choose one random number, rand1 : [30, 296]
Choose a second random number, rand2 : [30, (326 - rand1)]
Then the third cannot be random due to the constraint, so calculate it via 356 - (rand1 + rand2) | please how can I randomly select 3 numbers whose sum is 356 and each of these 3 is more than 30?
So output should be for example [100, 34, 222]
(but not [1,5,350])
I would like to use random module to do this. thank you! | 0 | 1 | 131 |
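The answer's recipe as code, using the random module as the question asks. One assumption to flag: "more than 30" is read here as `>= 30` (the answer's ranges imply the same); the function name and parameters are mine.

```python
import random

def three_numbers(total=356, low=30, rng=random):
    r1 = rng.randint(low, total - 2 * low)   # [30, 296]: leave room for two more
    r2 = rng.randint(low, total - low - r1)  # [30, 326 - r1]: leave room for one more
    r3 = total - r1 - r2                     # forced by the sum constraint, always >= low
    return [r1, r2, r3]

random.seed(0)
nums = three_numbers()
```

A caveat on the design: fixing `r1` first makes the triples non-uniform over all valid combinations (small `r1` values leave more choices for `r2`), which is fine unless the question's "randomly" demands a uniform distribution.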
0 | 28,024,414 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-01-19T12:00:00.000 | 0 | 1 | 0 | Using precomputed Gram matrix in sklearn linear models (Lasso, Lars, etc) | 28,024,191 | 0 | python,machine-learning,scikit-learn | (My answer is based on the usage of svm.SVC, Lasso may be different.)
I think that you are supposed to pass the Gram matrix instead of X to the fit method.
Also, the Gram matrix has shape (n_samples, n_samples) so it should also be too large for memory in your case, right? | I'm trying to train a linear model on a very large dataset.
The feature space is small but there are too many samples to hold in memory.
I'm calculating the Gram matrix on-the-fly and trying to pass it as an argument to sklearn Lasso (or other algorithms) but, when I call fit, it needs the actual X and y matrices.
A... | 0 | 1 | 1,183 |
0 | 28,057,921 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2015-01-20T23:59:00.000 | 0 | 2 | 0 | Store large dictionary to file in Python | 28,057,407 | 0 | python,dictionary,storage,store,pickle | With 60,000 dimensions do you mean 60,000 elements? If this is the case and the numbers are 1..10, then a reasonably compact but still efficient approach is to use a dictionary of Python array.array objects with 1 byte per element (type 'B').
The size in memory should be about 60,000 entries x 60,000 bytes, totaling 3.3... | I have a dictionary with many entries and a huge vector as values. These vectors can be 60.000 dimensions large and I have about 60.000 entries in the dictionary. To save time, I want to store this after computation. However, using a pickle led to a huge file. I have tried storing to JSON, but the file remains extremel... | 0 | 1 | 6,975 |
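A sketch of the layout the answer suggests: a dict mapping keys to `array.array('B')` vectors, one unsigned byte per element (stdlib only; the vector length is shortened from 60,000 for the demo, and the key names are mine):

```python
from array import array

n = 1000  # stand-in for the 60,000 dimensions in the question
vectors = {"doc0": array("B", [0] * n), "doc1": array("B", [0] * n)}

vectors["doc0"][42] = 9  # values 1..10 fit easily in one unsigned byte
size_bytes = vectors["doc0"].itemsize * len(vectors["doc0"])  # exactly n bytes
```

Because each vector is a flat byte buffer, dumping to disk is also compact: `array.tobytes()` / `array.frombytes()` round-trip the raw bytes without pickle or JSON overhead.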
0 | 57,910,696 | 0 | 0 | 0 | 0 | 2 | false | 189 | 2015-01-21T10:17:00.000 | 2 | 7 | 0 | Random state (Pseudo-random number) in Scikit learn | 28,064,634 | 0.057081 | python,random,scikit-learn | If no random_state is provided, the system will use a random_state that is generated internally. So, when you run the program multiple times you might see different train/test data points, and the behavior will be unpredictable. In case you have an issue with your model, you will not be able to recreate it as you do ...
I also could not understand what a pseudo-random number is. | 0 | 1 | 236,863 |
0 | 50,672,222 | 0 | 0 | 0 | 0 | 2 | false | 189 | 2015-01-21T10:17:00.000 | 23 | 7 | 0 | Random state (Pseudo-random number) in Scikit learn | 28,064,634 | 1 | python,random,scikit-learn | If you don't specify the random_state in your code, then every time you run (execute) your code, a new random value is generated and the train and test datasets will have different values each time.
However, if a fixed value is assigned like random_state = 42 then no matter how many times you execute your code the resul... | I want to implement a machine learning algorithm in scikit learn, but I don't understand what this parameter random_state does? Why should I use it?
I also could not understand what a pseudo-random number is. | 0 | 1 | 236,863 |
0 | 28,078,626 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-21T23:07:00.000 | 0 | 1 | 0 | Detecting similar objects using OpenCV | 28,078,555 | 0 | python,opencv,image-processing | Very open question, but OpenCV is where to be looking. Your best bet would probably be building Haar cascade classifiers. Plenty of reading material on the topic, somewhat overwhelming at first but that is what I would be looking into. | I've been looking into this for a while and was wondering the feasibility of using something like feature detection in OpenCV to do the following:
I'm working on a project that requires identifying items within a grocery store that do not have barcodes (i.e. produce). I want to build a local database of the various ite... | 0 | 1 | 243 |
0 | 28,083,896 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-01-22T07:19:00.000 | 0 | 1 | 0 | How to install matplotlib on windows | 28,083,203 | 0 | python,matplotlib | Yes the matplotlib site above will do the job!
You will have to follow the same procedure to install numpy, which I guess you will also need. | I just started using Python and definitely need matplotlib. I'm confused by the fact that there is not even a clear explanation of the basic ideas behind installing a lib/package in Python generally. Anyway, I'm using Windows and have installed Python 3.4.2 downloaded from the official website; how should I install th... | 0 | 1 | 986 |
0 | 28,105,601 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-01-22T15:27:00.000 | 0 | 1 | 0 | How to display a matplotlib figure object | 28,092,518 | 0 | python,matplotlib,figures | Figures need a canvas to draw on.
Try fig.draw() | I am working with a matplotlib-based routine that returns a figure and, as separate objects, the axes that it contains. Is there any way, that I can display these things and edit them (annotate, change some font sizes, things like that)? "fig.show()" doesn't work, just returns an error. Thanks. | 0 | 1 | 1,408 |
0 | 47,223,192 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2015-01-23T11:49:00.000 | 0 | 4 | 0 | How do I install Numpy for Python 2.7 on Windows? | 28,109,268 | 0 | python,windows,numpy | Wasted a lot of time trying to install on Windows from various binaries and installers, which all seemed to install a broken version, until I found that this worked: navigate to the python install directory and do python .\site-packages\pip install numpy | I am trying to install numpy for python 2.7, I've downloaded the zip, unzipped it and was expecting a Windows download file (.exe), but there isn't one.
Which of these files do I use to install it?
I tried running the setup.py file but don't seem to be getting anywhere.
Thanks!!! | 0 | 1 | 21,104 |
0 | 28,249,829 | 0 | 0 | 0 | 0 | 2 | true | 6 | 2015-01-27T10:00:00.000 | 10 | 4 | 0 | seeking convergence with optimize.fmin on scipy | 28,167,648 | 1.2 | python,optimization,scipy | There is actually no need to see your code to explain what is happening. I will answer point by point quoting you.
My problem is, when I start the minimization, the value printed decreases
until it reaches a certain point (the value 46700222.800). There it
continues to decrease by very small bits, e.g.,
467002... | I have a function I want to minimize with scipy.optimize.fmin. Note that I force a print when my function is evaluated.
My problem is, when I start the minimization, the value printed decreases until it reaches a certain point (the value 46700222.800). There it continues to decrease by very small bits, e.g., 46700222... | 0 | 1 | 13,104 |
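For reference, xtol and ftol are passed straight through to scipy.optimize.fmin; a minimal sketch of how they control termination (the quadratic objective is made up for illustration, not the asker's function):

```python
from scipy.optimize import fmin

def objective(x):
    # Stand-in convex objective with its minimum at x = 3
    return (x[0] - 3.0) ** 2

# Tightening xtol/ftol makes the simplex keep shrinking longer before fmin stops
xopt = fmin(objective, x0=[0.0], xtol=1e-8, ftol=1e-8, disp=False)
```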
0 | 28,219,470 | 0 | 0 | 0 | 0 | 2 | false | 6 | 2015-01-27T10:00:00.000 | 0 | 4 | 0 | seeking convergence with optimize.fmin on scipy | 28,167,648 | 0 | python,optimization,scipy | Your question is a bit ambiguous. Are you printing the value of your function, or the point where it is evaluated?
My understanding of xtol and ftol is as follows. The iteration stops
when the change in the value of the function between iterations is less than ftol
AND
when the change in x between successive iterati... | I have a function I want to minimize with scipy.optimize.fmin. Note that I force a print when my function is evaluated.
My problem is, when I start the minimization, the value printed decreases until it reaches a certain point (the value 46700222.800). There it continues to decrease by very small bits, e.g., 46700222... | 0 | 1 | 13,104 |
0 | 28,189,659 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-01-28T07:56:00.000 | 2 | 2 | 0 | Rotated Paraboloid Surface Fitting | 28,187,233 | 0.197375 | python,matlab,curve-fitting,least-squares,surface | Don't use any toolboxes, GUIs, or special functions for this problem. Your problem is very common, and the equation you provided may be solved in a very straightforward manner. The solution to the linear least squares problem can be outlined as:
The basis of the vector space is x^2, y^2, z^2, xy, yz, zx, x, y, z, 1. The... | I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a... | 0 | 1 | 1,508 |
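The outline in the accepted answer (stack the basis x^2, y^2, z^2, xy, yz, zx, x, y, z, 1 into a matrix, then take the right singular vector for the smallest singular value) can be sketched with NumPy's SVD; points on the unit sphere stand in for real measurement data:

```python
import numpy as np

def fit_quadric(pts):
    """Fit A..L in Ax^2+By^2+Cz^2+Dxy+Gyz+Hzx+Ix+Jy+Kz+L=0 by homogeneous least squares."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones_like(x)])
    # The coefficient vector is the right singular vector for the smallest singular value
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

# Synthetic data: points on the unit sphere x^2 + y^2 + z^2 - 1 = 0
rng = np.random.default_rng(0)
u = rng.normal(size=(50, 3))
pts = u / np.linalg.norm(u, axis=1, keepdims=True)
c = fit_quadric(pts)
c = c / c[0]  # normalise so the x^2 coefficient is 1
```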
0 | 28,188,683 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-01-28T07:56:00.000 | 0 | 2 | 0 | Rotated Paraboloid Surface Fitting | 28,187,233 | 0 | python,matlab,curve-fitting,least-squares,surface | Do you have enough data points to fit all 10 parameters - you will need at least 10?
I also suspect that 10 parameters are too many to describe a general paraboloid, meaning that some of the parameters are dependent. My feeling is that a translated and rotated paraboloid needs 7 parameters (although I'm not really sure) | I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a... | 0 | 1 | 1,508 |
0 | 28,198,700 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-01-28T16:31:00.000 | 3 | 2 | 0 | Python predict_proba class identification | 28,197,444 | 1.2 | python,machine-learning,scikit-learn | Column 0 corresponds to the class 0, column 1 corresponds to the class 1. | Suppose my labeled data has two classes 1 and 0. When I run predict_proba on the test set it returns an array with two columns. Which column corresponds to which class ? | 0 | 1 | 655 |
0 | 56,207,791 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-01-28T16:31:00.000 | 0 | 2 | 0 | Python predict_proba class identification | 28,197,444 | 0 | python,machine-learning,scikit-learn | You can check that by printing the classes with print(estimator.classes_). The array will have the same order like the output. | Suppose my labeled data has two classes 1 and 0. When I run predict_proba on the test set it returns an array with two columns. Which column corresponds to which class ? | 0 | 1 | 655 |
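Concretely, column i of predict_proba corresponds to classes_[i]; a tiny sketch with made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)

# clf.classes_ holds the sorted labels; column 0 is P(class 0), column 1 is P(class 1)
```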
0 | 28,202,075 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-01-28T16:50:00.000 | 2 | 1 | 0 | statsmodels: Method used to generate condifence intervals for quantile regression coefficients? | 28,197,813 | 1.2 | python,statsmodels | Inference for parameters is the same across models and is mostly inherited from the base classes.
Quantile regression has a model-specific covariance matrix of the parameters.
tvalues, pvalues, confidence intervals, t_test and wald_test are all based on the assumption of an asymptotic normal distribution of the estimat... | I am using the statsmodels.formulas.api.quantreg() for quantile regression in Python. I see that when fitting the quantile regression model, there is an option to specify the significance level for confidence intervals of the regression coefficients, and the confidence interval result appears in the summary of the fit.... | 0 | 1 | 843 |
0 | 28,225,707 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-01-29T22:10:00.000 | 3 | 1 | 0 | skimage.io.imsave "destroys" grayscale image? | 28,225,600 | 1.2 | python,image,image-processing,matplotlib,scipy | I think I've figured out why. By convention, floats in skimage are supposed to be in the range [0, 1]. | I have an array of a grayscale image read in from a color one. If I use matplotlib to imshow the grayscale image, it looks just fine. But when I io.imsave it, it's ruined (by an outrageous amount of noise). However, if I numpy.around it first before io.imsave-ing, then it's significantly better, but black and white are st... | 0 | 1 | 998 |
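A numpy-only sketch of getting a float image back into skimage's expected [0, 1] range before saving (the sample values are made up; skimage's own img_as_ubyte does the uint8 conversion for you):

```python
import numpy as np

gray = np.array([[0.0, 127.5], [63.75, 255.0]])  # float image in the wrong range

gray01 = gray / 255.0                       # floats in [0, 1]: safe for io.imsave
gray_u8 = np.around(gray).astype(np.uint8)  # or hand imsave integers instead
```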
0 | 28,232,764 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-01-30T09:13:00.000 | 5 | 1 | 0 | RandomForestClassifier differ from BaggingClassifier | 28,232,551 | 0.761594 | python-3.x,scikit-learn,random-forest | The RandomForestClassifier introduces randomness externally (relative to the individual tree fitting) via bagging as BaggingClassifier does.
However it injects randomness also deep inside the tree construction procedure by sub-sampling the list of features that are candidate for splitting: a new random set of features ... | How is using a BaggingClassifier with baseestimator=RandomForestClassifier differ from a RandomForestClassifier in sklearn? | 0 | 1 | 668 |
0 | 28,238,935 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2015-01-30T15:11:00.000 | 0 | 1 | 0 | Retain Excel Settings When Adding New CSV | 28,238,830 | 0 | python,excel,csv | Try importing it as a csv file, instead of opening it directly on excel. | I've written a python/webdriver script that scrapes a table online, dumps it into a list and then exports it to a CSV. It does this daily.
When I open the CSV in Excel, it is unformatted, and there are fifteen (comma-delimited) columns of data in each row of column A.
Of course, I then run 'Text to Columns' and get ev... | 0 | 1 | 24 |
0 | 30,613,008 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-02-01T01:23:00.000 | 1 | 1 | 0 | Color Perceptual Image Hashing | 28,258,468 | 0.197375 | python,image-processing,hash | I found a couple of ways to do this.
I ended up using a Mean Squared Error function that I wrote myself:
def mse(reference, query):
    return (((reference).astype("double")-(query).astype("double"))**2).mean()
Until, upon later tinkering, I found a function that seemed to do something similar (compare image similarity... | I've been trying to write a fast(ish) image matching program which doesn't match rotated or scale-deformed images, in Python.
The goal is to be able to find small sections of an image that are similar to other images in color features, but dissimilar if rotated or warped.
I found out about perceptual image hashing, ... | 0 | 1 | 852 |
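The mse function quoted in the answer, made runnable with a toy pair of "images" (the arrays are made up):

```python
import numpy as np

def mse(reference, query):
    # Cast to float first so uint8 subtraction cannot wrap around
    return (((reference).astype("double") - (query).astype("double")) ** 2).mean()

a = np.zeros((2, 2), dtype=np.uint8)
b = np.full((2, 2), 2, dtype=np.uint8)
print(mse(a, a))  # 0.0
print(mse(a, b))  # 4.0
```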
0 | 28,270,527 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-02-02T02:42:00.000 | 1 | 1 | 0 | pyplot - Is there a way to explicitly specify the x and y axis numbering? | 28,270,435 | 0.197375 | python,matplotlib | Aha, one needs to use the "extent" argument, as in:
plt.imshow(H, cmap='gray', extent=[-5, 3, 6, 9]) | I'm displaying an image and want to specify the x and y axis numbering rather than having row and column numbers show up there. Any ideas? | 0 | 1 | 31 |
0 | 28,287,768 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2015-02-02T04:06:00.000 | 6 | 2 | 0 | Scikitlearn - order of fit and predict inputs, does it matter? | 28,270,967 | 1.2 | python,scikit-learn | Yes, you need to reorder them. Imagine a simpler case, Linear Regression. The algorithm will calculate the weights for each of the features, so for example if feature 1 is unimportant, it will get assigned a close to 0 weight.
If at prediction time the order is different, an important feature will be multiplied by thi... | Just getting started with this library... having some issues (I've read the docs but didn't get clarity) with RandomForestClassifiers
My question is pretty simple: say I have a training data set like
A B C
1 2 3
Where A is the dependent variable (y) and B-C are the independent variables (x). Let's say th... | 0 | 1 | 2,722 |
0 | 28,315,175 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-02-02T16:54:00.000 | 0 | 2 | 0 | Scikit-learn RandomForestClassifier output of predict_proba | 28,282,706 | 0 | python,scikit-learn,random-forest | classifier.predict_proba() returns the class probabilities. The n dimension of the array will vary depending on how many classes there are in the subset you train on | I have a dataset that I split in two for training and testing a random forest classifier with scikit learn.
I have 87 classes and 344 samples. The output of predict_proba is, most of the times, a 3-dimensional array (87, 344, 2) (it's actually a list of 87 numpy.ndarrays of (344, 2) elements).
Sometimes, when I pick ... | 0 | 1 | 2,825 |
0 | 63,671,324 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-02-06T04:03:00.000 | 3 | 2 | 0 | Is it possible to mask an image in Python Imaging Library (PIL)? | 28,358,379 | 0.291313 | python,image,image-processing,python-imaging-library,mask | You can use the PIL library to mask the images. Add the alpha parameter to img2, as you can't just paste this image over img1: without an alpha value you won't see what is underneath.
img2.putalpha(128)  # 0 would make it completely transparent, so keep the image opaque
Then you can mask both the i... | I have some traffic camera images, and I want to extract only the pixels on the road. I have used remote sensing software before where one could specify an operation like
img1 * img2 = img3
where img1 is the original image and img2 is a straight black-and-white mask. Essentially, the white parts of the image would ev... | 0 | 1 | 10,471 |
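The img1 * img2 = img3 operation the question describes is just an element-wise multiply once the 0/255 mask is rescaled to 0/1; a numpy sketch with made-up pixel values (the PIL putalpha route above composites rather than masks):

```python
import numpy as np

img1 = np.array([[10, 20], [30, 40]], dtype=np.uint8)  # original image
mask = np.array([[255, 0], [0, 255]], dtype=np.uint8)  # white = road, black = not road

# Black mask pixels zero the image out; white pixels pass it through
img3 = img1 * (mask // 255)
```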
0 | 53,429,718 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-02-07T16:26:00.000 | 1 | 6 | 0 | Create a numpy array (10x1) with zeros and fives | 28,384,481 | 0.033321 | python,arrays,numpy | Just do the following.
import numpy as np
arr = np.zeros(10)
arr[:3] = 5 | I'm having trouble figuring out how to create a 10x1 numpy array with the number 5 in the first 3 elements and the other 7 elements with the number 0. Any thoughts on how to do this efficiently? | 0 | 1 | 5,701 |
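An equivalent single expression, if you prefer building the two pieces and joining them:

```python
import numpy as np

# Three fives followed by seven zeros
arr = np.concatenate([np.full(3, 5.0), np.zeros(7)])
print(arr)  # [5. 5. 5. 0. 0. 0. 0. 0. 0. 0.]
```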