| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 24,598,296 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2014-07-06T16:54:00.000 | 3 | 1 | 0 | Uninstall opencv 2.4.9 and install 3.0.0 | 24,598,160 | 1.2 | python,opencv,ubuntu,uninstallation | The procedure depends on whether you built OpenCV from source with CMake or snatched it from a repository.
From repository
sudo apt-get purge libopencv* will cleanly remove all traces. Substitute libopencv* as appropriate in case you were using an unofficial ppa.
From source
If you still have the files generate... | I'm using OpenCV on Ubuntu 14.04, but some of the functions that I require, particularly in the cv2 library (cv2.drawMatches, cv2.drawMatchesKnn), do not work in 2.4.9. How do I uninstall 2.4.9 and install 3.0.0 from their git? I know the procedure for installing 3.0.0, but how do I make sure that 2.4.9 gets completely re... | 0 | 1 | 18,476 |
0 | 24,616,160 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-07-07T16:46:00.000 | 0 | 3 | 0 | Genetic Algorithm in Optimization of Events | 24,615,687 | 0 | python,artificial-intelligence,genetic-algorithm | To start, let's make sure I understand your problem.
You have a set of sample data, each element containing a time series of a binary variable (we'll call it V). When V is set to True, a function (A, B, or C) is applied which returns V to its False state. You would like to apply a genetic algorithm to determine which ... | I'm a data analysis student and I'm starting to explore Genetic Algorithms at the moment. I'm trying to solve a problem with GA but I'm not sure about the formulation of the problem.
Basically I have a state of a variable being 0 or 1 (0 it's in the normal range of values, 1 is in a critical state). When the state is... | 0 | 1 | 335 |
0 | 24,616,007 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-07-07T16:46:00.000 | 1 | 3 | 0 | Genetic Algorithm in Optimization of Events | 24,615,687 | 0.066568 | python,artificial-intelligence,genetic-algorithm | I'm unsure of what your question is, but here are the elements you need for any GA:
A population of initial "genomes"
A ranking function
Some form of mutation, crossing over within the genome, and reproduction.
If a critical event is always the same, your GA should work very well. That being said, if you have a differe... | I'm a data analysis student and I'm starting to explore Genetic Algorithms at the moment. I'm trying to solve a problem with GA but I'm not sure about the formulation of the problem.
Basically I have a state of a variable being 0 or 1 (0 it's in the normal range of values, 1 is in a critical state). When the state is... | 0 | 1 | 335 |
0 | 28,782,077 | 0 | 1 | 0 | 0 | 1 | true | 8 | 2014-07-08T17:26:00.000 | 2 | 1 | 0 | Embedded charts in PyCharm IPython console | 24,638,043 | 1.2 | python,matplotlib,pycharm | It doesn't look like you can do it: PyCharm does not use the 'qtconsole' of ipython, but either a plain text console (when you open the "Python console" tab in PyCharm) or ipython notebook (when you open a *.ipynb file). Moreover, PyCharm is done in Java, while to have an interactive plot Matplotlib needs to have a dir... | Is there a way to allow embedded Matplotlib charts in the IPython console that is activated within PyCharm? I'm looking for similar behavior to what can be done with the QT console version of IPython, i.e. ipython qtconsole --matplotlib inline | 0 | 1 | 2,490 |
0 | 54,384,472 | 0 | 0 | 0 | 0 | 1 | false | 43 | 2014-07-09T07:12:00.000 | 5 | 7 | 0 | What is the best stemming method in Python? | 24,647,400 | 0.141893 | python,nltk,stemming | Stemming is all about removing suffixes(usually only suffixes, as far as I have tried none of the nltk stemmers could remove a prefix, forget about infixes).
So we can clearly call stemming a dumb / not-so-intelligent program. It doesn't check whether a word has a meaning before or after stemming.
For example, if you try to stem... | I tried all the nltk methods for stemming but they give me weird results with some words.
Examples
It often cuts the end of words when it shouldn't:
poodle => poodl
article => articl
or doesn't stem very well:
easily and easy are not stemmed to the same word
leaves, grows, fairly are not stemmed
Do you know other ... | 0 | 1 | 74,285 |
0 | 24,663,901 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-07-09T21:07:00.000 | 1 | 1 | 0 | How to filter out all grayscale pixels in an image? | 24,663,825 | 0.197375 | python,opencv,image-processing | filter out greyscale or filter in the allowed colors
I don't know whether the range of colors or the range of greyscale is larger, but maybe whitelisting instead of blacklisting is helpful here | I am working on a project which involves using a thermal video camera to detect objects of a certain temperature. The output I am receiving from the camera is an image where the pixels of interest (within the specified temperature range) are colored yellow-orange depending on intensity, and all other pixels are graysc... | 0 | 1 | 1,335 |
0 | 30,565,768 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2014-07-10T16:14:00.000 | 0 | 1 | 0 | Locally save data from remote iPython notebook | 24,681,509 | 1.2 | python,ipython-notebook | Maybe some combination of cPickle and bash magic for scp? | I'm using an ipython notebook that is running on a remote server. I want to save data from the notebook (e.g. a pandas dataframe) locally.
Currently I'm saving the data as a .csv file on the remote server and then move it over to my local machine via scp. Is there a more elegant way directly from the notebook? | 0 | 1 | 671 |
0 | 24,687,460 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-07-10T22:19:00.000 | 0 | 1 | 0 | Reading Large File from non local disk in Python | 24,687,248 | 0 | python,memory,local-storage,large-files | Copying a file is sequentially reading it and saving in another place.
The performance of application might vary depending on the data access patterns, computation to I/O time, network latency and network bandwidth.
If you execute your script once, and read through it sequentially it's the same as copying the file, exc... | Sorry if the topic was already approached, I didn't find it.
I am trying to read with Python a bench of large csv files (>300 MB) that are not located in a local drive.
I am not an expert in programming but I know that if you copy it into a local drive first it should take less time than reading it (or am I wrong?).
T... | 0 | 1 | 214 |
0 | 42,065,440 | 0 | 0 | 0 | 0 | 1 | false | 31 | 2014-07-11T10:03:00.000 | 48 | 2 | 0 | python equivalent of qnorm, qf and qchi2 of R | 24,695,174 | 1 | python,r,scipy | The equivalent of the R pnorm() function is: scipy.stats.norm.cdf() with python
The equivalent of the R qnorm() function is: scipy.stats.norm.ppf() with python | I need the quantile of some distributions in python. In r it is possible to compute these values using the qf, qnorm and qchi2 functions.
Is there any python equivalent of these R functions?
I have been looking on scipy but I did not find anything. | 0 | 1 | 32,670 |
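To make the mapping concrete, here is a short sketch covering all three R functions; in SciPy, ppf is the inverse CDF, i.e. the quantile function (the probability values chosen below are just illustrative):

```python
from scipy import stats

# R qnorm(0.975) -> standard normal quantile
q = stats.norm.ppf(0.975)             # ~1.96
# R qchi2(0.95, df=3) -> chi-squared quantile
c = stats.chi2.ppf(0.95, df=3)        # ~7.81
# R qf(0.95, df1=2, df2=10) -> F-distribution quantile
f = stats.f.ppf(0.95, dfn=2, dfd=10)  # ~4.10
```

The same pattern works for any scipy.stats distribution: cdf corresponds to R's p-functions, ppf to the q-functions.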
0 | 24,708,214 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-07-11T22:58:00.000 | 3 | 1 | 0 | Evaluating convergence of SGD classifier in scikit learn | 24,707,836 | 1.2 | python,scikit-learn | This a known limitation of the current implementation of scikit-learn's SGD classifier, there is currently no automated convergence check on that model. You can set verbose=1 to get some feedback when running though. | Is there any automated way to evaluate convergence of the SGDClassifier?
I'm trying to run an elastic net logit in python and am using scikit learn's SGDClassifier with log loss and elastic net penalty. When I fit the model in python, I get all zeros for my coefficients. When I run glmnet in R, I get significant non-ze... | 0 | 1 | 822 |
0 | 66,810,359 | 0 | 0 | 0 | 0 | 2 | false | 71 | 2014-07-12T16:54:00.000 | 1 | 5 | 0 | Can sklearn random forest directly handle categorical features? | 24,715,230 | 0.039979 | python,scikit-learn,random-forest,one-hot-encoding | Maybe you can use 1~4 to replace these four colors, that is, use the number rather than the color name in that column. Then the numeric column can be used in the models | Say I have a categorical feature, color, which takes the values
['red', 'blue', 'green', 'orange'],
and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically... | 0 | 1 | 72,780 |
0 | 35,471,754 | 0 | 0 | 0 | 0 | 2 | false | 71 | 2014-07-12T16:54:00.000 | 16 | 5 | 0 | Can sklearn random forest directly handle categorical features? | 24,715,230 | 1 | python,scikit-learn,random-forest,one-hot-encoding | You have to make the categorical variable into a series of dummy variables. Yes, I know it's annoying and seems unnecessary, but that is how sklearn works.
If you are using pandas, use pd.get_dummies; it works really well.
['red', 'blue', 'green', 'orange'],
and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically... | 0 | 1 | 72,780 |
0 | 24,736,966 | 0 | 0 | 0 | 0 | 1 | false | 28 | 2014-07-13T01:08:00.000 | 1 | 6 | 0 | PySpark Drop Rows | 24,718,697 | 0.033321 | python,apache-spark,pyspark | Personally I think just using a filter to get rid of this stuff is the easiest way. But per your comment I have another approach. Glom the RDD so each partition is an array (I'm assuming you have 1 file per partition, and each file has the offending row on top) and then just skip the first element (this is with the sca... | how do you drop rows from an RDD in PySpark? Particularly the first row, since that tends to contain column names in my datasets. From perusing the API, I can't seem to find an easy way to do this. Of course I could do this via Bash / HDFS, but I just want to know if this can be done from within PySpark. | 0 | 1 | 49,327 |
0 | 24,732,678 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-07-14T08:09:00.000 | 0 | 1 | 0 | Program gets stuck at finding the Contour while using Open CV | 24,732,112 | 0 | python,opencv,camera,detection,hsv | You can use a while loop and check if the blob region is not null and then find contours!
It would be helpful if you posted your code; we could explain the answer in a better way then. | I recently started using Python and I've been working on an Open CV based project for over a month now.
I am using Simple Thresholding to detect a coloured blob and I have thresholded the HSV values to detect the blob. All works well, but when the blob goes out of the FOV of the camera, the program gets stuck. I was wo... | 0 | 1 | 111 |
0 | 24,967,653 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2014-07-14T14:52:00.000 | 10 | 2 | 0 | Updated Bokeh to 0.5.0, now plots all previous versions of graph in one window | 24,739,390 | 1 | python,plot,bokeh | as of 0.5.1 there is now bokeh.plotting.reset_output that will clear all output_modes and state. This is especially useful in situations where a new interpreter is not started in between executions (e.g., Spyder and the notebook) | Before I updated, I would run my script and output the html file. There would be my one plot in the window. I would make changes to my script, run it, output the html file, look at the new plot. Then I installed the library again to update it using conda. I made some changes to my script, ran it again, and the output f... | 0 | 1 | 2,333 |
0 | 24,757,540 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-07-14T19:33:00.000 | 2 | 1 | 0 | Updating a NaiveBayes Classifier (in scikit-learn) over time | 24,744,409 | 1.2 | python,scikit-learn | Use the partial_fit method on the naive Bayes estimator. | I'm building a NaiveBayes classifier using scikit-learn, and so far things are going well if I have a set body of data to train. However, for the particular project I'm working on, there will be new data coming in every day that ideally would be part of the training set.
I'm aware that you can pickle the classifier to ... | 0 | 1 | 346 |
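The incremental API works as sketched below; the toy count features, labels, and batch split are made up for illustration, and the classes argument is required on the first call so the estimator knows every label it will ever see:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB()
classes = np.array([0, 1])

# day 1: first batch must declare all classes up front
X1 = np.array([[2, 1], [0, 3]]); y1 = np.array([0, 1])
clf.partial_fit(X1, y1, classes=classes)

# day 2: update the fitted model without retraining from scratch
X2 = np.array([[3, 0], [1, 4]]); y2 = np.array([0, 1])
clf.partial_fit(X2, y2)

pred = clf.predict(np.array([[4, 0]]))
```

Between batches the estimator can still be pickled as usual, so the daily data only needs to be held in memory one batch at a time.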
0 | 24,765,554 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-07-14T19:49:00.000 | 1 | 1 | 0 | Convert CudaNdarraySharedVariable to TensorVariable | 24,744,701 | 1.2 | python,machine-learning,neural-network,gpu,theano | For a plain CudaNdarray variable, something like this should work:
x = CudaNdarray...
x_new = theano.tensor.TensorVariable(CudaNdarrayType([False] * tensor_dim))
f = theano.function([x_new], x_new)
converted_x = f(x) | I'm trying to convert a pylearn2 GPU model to a CPU compatible version for prediction on a remote server -- how can I convert CudaNdarraySharedVariable's to TensorVariable's to avoid an error calling cuda code on a GPU-less machine? The experimental theano flag unpickle_gpu_to_cpu seems to have left a few CudaNdarrayS... | 0 | 1 | 505 |
0 | 24,762,074 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-07-15T15:09:00.000 | 0 | 2 | 0 | Manipulating Large Amounts of Image Data in Python | 24,761,787 | 0 | python,image | It is possible, if you us NumPy and especially numpy.memmap to store the image data. That way the image data looks as if it were in memory but is on the disk using the mmap mechanism. The nice thing is that the numpy.memmap arrays are not more difficult to handle than ordinary arrays.
There is some performance overhead... | I have a large number of images of different categories, e.g. "Cat", "Dog", "Bird". The images have some hierarchical structure, like a dict. So for example the key is the animal name and the value is a list of animal images, e.g. animalPictures[animal][index].
I want to manipulate each image (e.g. compute histogram) a... | 0 | 1 | 258 |
0 | 24,785,891 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2014-07-16T16:20:00.000 | 0 | 5 | 0 | Converting a folder of Excel files into CSV files/Merge Excel Workbooks | 24,785,824 | 0 | python,csv,xlrd,xlsxwriter | Look at openoffice's python library. Although, I suspect openoffice would support MS document files.
Python has no native support for Excel file. | I have a folder with a large number of Excel workbooks. Is there a way to convert every file in this folder into a CSV file using Python's xlrd, xlutiles, and xlsxWriter?
I would like the newly converted CSV files to have the extension '_convert.csv'.
OTHERWISE...
Is there a way to merge all the Excel workbooks in the... | 0 | 1 | 2,934 |
0 | 24,797,554 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-07-17T01:51:00.000 | 1 | 1 | 0 | How to find and graph the intersection of 3+ circles with Matplotlib | 24,793,636 | 0.197375 | python,graph,matplotlib,geometry,intersection | Maybe you should try something more analytical? It should not be very difficult:
Find the circle pairs whose distance is less than the sum of their radii; they intersect.
Calculate the intersection angles by simple trigonometry.
Draw a polygon (path) by using a suitably small delta angle in both cases (half of the pol... | I'm working on a problem that involves creating a graph which shows the areas of intersection of three or more circles (each circle is the same size). I have many sets of circles, each set containing at least three circles. I need to graph the area common to the interior of each and every circle in the set, if it even ... | 0 | 1 | 1,241 |
0 | 26,639,275 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-07-21T21:21:00.000 | 0 | 1 | 0 | Produce a PMML file for the Nnet model in python | 24,875,008 | 0 | python,neural-network,pmml | Finally I found my own solution. I wrote my own PMML Parser and scorer . PMML is very much same as XML so its easy to build and retrieve fields accordingly. If anyone needs more information please comment below.
Thanks ,
Raghu. | I have a model(Neural Network) in python which I want to convert into a PMML file . I have tried the following:
1.)py2pmml -> Not able to find the source code for this
2.)in R -> PMML in R works fine but my model is in Python.(Cant run the data in R to generate the same model in R) . Does not work for my dataset.
... | 0 | 1 | 382 |
0 | 24,902,112 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-07-22T17:44:00.000 | 1 | 2 | 0 | What activation function to use or modifications to make when neural network gives same output on regression with PyBrain? | 24,894,231 | 1.2 | python,machine-learning,neural-network,pybrain | Neural nets are not stable when fed input data on arbitrary scales (such as between approximately 0 and 1000 in your case). If your output units are tanh they can't even predict values outside the range -1 to 1 or 0 to 1 for logistic units!
You should try recentering/scaling the data (making it have mean zero and unit ... | I have a neural network with one input, three hidden neurons and one output. I have 720 input and corresponding target values, 540 for training, 180 for testing.
When I train my network using Logistic Sigmoid or Tan Sigmoid function, I get the same outputs while testing, i.e. I get same number for all 180 output values... | 0 | 1 | 815 |
0 | 36,859,613 | 0 | 1 | 0 | 0 | 1 | false | 14 | 2014-07-22T19:07:00.000 | 2 | 4 | 0 | Is it possible to create grouping of input cells in IPython Notebook? | 24,895,714 | 0.099668 | python,ipython,ipython-notebook | Latest version of Ipython/Jupyter notebook allows selection of multiple cells using shift key which can be useful for batch operations such as copy, paste, delete, etc. | When I do data analysis on IPython Notebook, I often feel the need to move up or down several adjacent input cells, for better flow of the analysis story.
I'd expected that once I'd created a heading, all cells under that heading would move together if I moved the heading. But this is not the case.
Any way I can do this... | 0 | 1 | 6,408 |
0 | 24,897,058 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2014-07-22T19:31:00.000 | 5 | 2 | 0 | sklearn: Have an estimator that filters samples | 24,896,178 | 0.462117 | python,scikit-learn | The scikit-learn transformer API is made for changing the features of the data (in nature and possibly in number/dimension), but not for changing the number of samples. Any transformer that drops or adds samples is, as of the existing versions of scikit-learn, not compliant with the API (possibly a future addition if d... | I'm trying to implement my own Imputer. Under certain conditions, I would like to filter some of the train samples (that I deem low quality).
However, since the transform method returns only X and not y, and y itself is a numpy array (which I can't filter in place to the best of my knowledge), and moreover - when I us... | 0 | 1 | 2,829 |
0 | 24,912,790 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-07-23T13:59:00.000 | 0 | 2 | 0 | How to convert standardized outputs of a neural network back to original scale? | 24,912,521 | 1.2 | python,machine-learning,neural-network | Assuming the mean and standard deviation of the targets are mu and sigma, the normalized value of a target y should be (y-mu)/sigma. In that case if you get an output y', you can move it back to original scale by converting y' -> mu + y' * sigma. | In my neural network, I have inputs varying from 0 to 719, and targets varying from 0 to 1340. So, I standardize the inputs and targets by standard scaling such that the mean is 0 and variance is 1. Now, I calculate the outputs using back-propagation. All my outputs lie between -2 and 2. How do I convert these outputs ... | 0 | 1 | 1,506 |
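A minimal numpy sketch of that round trip (the target values below are made up; in practice mu and sigma are computed on the training targets and reused at prediction time):

```python
import numpy as np

targets = np.array([0.0, 335.0, 670.0, 1005.0, 1340.0])
mu, sigma = targets.mean(), targets.std()

scaled = (targets - mu) / sigma   # what the network is trained against
outputs = scaled                  # pretend these came out of the network
restored = mu + outputs * sigma   # back to the original scale
```

The key point is to save mu and sigma from the training set; the same inverse transform is then applied to every new network output.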
0 | 25,251,918 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-07-24T13:33:00.000 | 0 | 1 | 0 | How to overcome version incompatibility with Abaqus and Numpy (Python's library)? | 24,935,230 | 0 | python,numpy,nlopt,abaqus | I have similar problems. As an (annoying) work around I usually write out important data in text files using the regular python. Afterwards, using a bash script, I start a second python (different version) to further analyse the data (matplotlib etc). | I want to run an external library of python called NLopt within Abaqus through python. The issue is that the NLopt I found is compiled against the latest release of Numpy, i.e. 1.9, whereas Abaqus 6.13-2 is compiled against Numpy 1.4. I tried to replace the Numpy folder under the site-packages under the Abaqus installa... | 0 | 1 | 318 |
0 | 25,012,484 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-07-26T02:41:00.000 | 3 | 2 | 0 | One-hot encoding of large dataset with scikit-learn | 24,966,984 | 0.291313 | python,scikit-learn | There is no way around finding out which possible values your categorical features can take, which probably implies that you have to go through your data fully once in order to obtain a list of unique values of your categorical variables.
After that it is a matter of transforming your categorical variables to integer v... | I have a large dataset which I plan to do logistic regression on. It has lots of categorical variables, each having thousands of features which I am planning to use one hot encoding on. I will need to deal with the data in small batches. My question is how to make sure that one hot encoding sees all the features of eac... | 0 | 1 | 3,004 |
0 | 24,971,512 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-07-26T13:17:00.000 | 1 | 1 | 0 | When should I use numpy? | 24,971,400 | 1.2 | python,numpy | Long answer short, when you need do huge mathematical operations, like vector multiplications and so on which requires writing lots of loops and what not, yet your codes gets unreadable yet not efficient you should use Numpy.
Few key benefits:
NumPy arrays have a fixed size at creation, unlike Python lists (which can ... | I'm a newbee of python. And recently I heard some people say that numpy is a good module for dealing with huge data.
I'm curious what numpy can do for us in daily work.
As far as I know, most of us are not scientists or researchers; in what circumstances can numpy bring us benefits?
Can you share a good practice with me... | 0 | 1 | 669 |
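To make the benefit concrete, here is a small sketch of my own comparing a pure-Python loop with the vectorized NumPy equivalent; both compute the same sum of squares, but the array version runs as one C-level operation:

```python
import numpy as np

n = 1_000_000
data = list(range(n))
arr = np.arange(n, dtype=np.int64)

# pure-Python loop over a list
total_py = sum(x * x for x in data)

# vectorized: one operation over the whole array, no Python-level loop
total_np = int((arr * arr).sum())
```

On typical hardware the vectorized version is one to two orders of magnitude faster, which is the "huge data" advantage in a nutshell.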
0 | 24,985,330 | 0 | 1 | 0 | 0 | 1 | true | 8 | 2014-07-27T20:05:00.000 | 3 | 3 | 0 | Efficiently grouping a list of coordinates points by location in Python | 24,985,127 | 1.2 | python,algorithm,grid | You can hash all coordinate points (e.g. using dictionary structure in python) and then for each coordinate point, hash the adjacent neighbors of the point to find pairs of points that are adjacent and "merge" them. Also, for each point you can maintain a pointer to the connected component that that point belongs to (u... | Given a list of X,Y coordinate points on a 2D grid, what is the most efficient algorithm to create a list of groups of adjacent coordinate points?
For example, given a list of points making up two non-adjacent squares (3x3) on a grid (15x15), the result of this algorithm would be two groups of points corresponding to t... | 0 | 1 | 7,135 |
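The hashing idea described above can be sketched as a flood fill over a set of points; 4-adjacency and the two-squares test data are my own assumptions for illustration:

```python
from collections import deque

def group_adjacent(points):
    """Group grid points into connected components of 4-adjacent neighbors.

    Set membership gives O(1) neighbor lookups, so the whole pass is
    roughly linear in the number of points.
    """
    remaining = set(points)
    groups = []
    while remaining:
        start = remaining.pop()
        queue = deque([start])
        group = {start}
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    group.add(nb)
                    queue.append(nb)
        groups.append(group)
    return groups

# two non-adjacent 3x3 squares on a 15x15 grid, as in the example
squares = [(x, y) for x in range(3) for y in range(3)] + \
          [(x, y) for x in range(10, 13) for y in range(10, 13)]
groups = group_adjacent(squares)
```

Switching to 8-adjacency only means extending the neighbor tuple with the four diagonal offsets.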
0 | 25,005,318 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-07-28T21:21:00.000 | 1 | 1 | 0 | Parallel exact matrix diagonalization with Python | 25,004,564 | 1.2 | python,numpy,scipy,linear-algebra,numerical-methods | For symmetric sparse matrix eigenvalue/eigenvector finding, you may use scipy.sparse.linalg.eigsh. It uses ARPACK behind the scenes, and there are parallel ARPACK implementations. AFAIK, SciPy can be compiled with one if your scipy installation uses the serial version.
However, this is not a good answer, if you need al... | Is anyone aware of an implemented version (perhaps using scipy/numpy) of parallel exact matrix diagonalization (equivalently, finding the eigensystem)? If it helps, my matrices are symmetric and sparse. I would hate to spend a day reinventing the wheel.
EDIT:
My matrices are at least 10,000x10,000 (but, preferably, at ... | 0 | 1 | 3,153 |
0 | 25,036,779 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2014-07-29T16:35:00.000 | 6 | 1 | 0 | How can sklearn select categorical features based on feature selection | 25,020,482 | 1.2 | python,scikit-learn,feature-selection | You can't. The feature selection routines in scikit-learn will consider the dummy variables independently of each other. This means they can "trim" the domains of categorical variables down to the values that matter for prediction. | My question is i want to run feature selection on the data with several categorical variables. I have used get_dummies in pandas to generate all the sparse matrix for these categorical variables. My question is how sklearn knows that one specific sparse matrix actually belongs to one feature and select/drop them all? F... | 0 | 1 | 3,078 |
0 | 25,032,965 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2014-07-29T20:32:00.000 | -1 | 1 | 0 | Sorting multiple columns in excel via xlwt for python | 25,024,437 | -0.197375 | python,excel,xlwt | You will get data from queries, right? Then you will write them to an Excel file with xlwt. Just before writing, you can sort them. If you can show us your code, then maybe I can optimize it. Otherwise, you have to follow wnnmaw's advice and do it in a more complicated way. | I'm using python to write a report which is put into an excel spreadsheet.
There are four columns, namely:
Product Name | Previous Value | Current Value | Difference
When I am done putting in all the values I then want to sort them based on Current Value. Is there a way I can do this in xlwt? I've only seen examples of... | 0 | 1 | 1,410 |
0 | 25,069,634 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-07-31T10:52:00.000 | 0 | 1 | 0 | Performing a rolling vector auto regression with two variables X and Y, with time lag on X in Python | 25,057,063 | 0 | python,signals,filtering,time-series | There are some rather quick ways.
I assume you are only interested in the slope and average of the signal Y. In order to calculate these, you need to have:
sum(Y)
sum(X)
sum(X.X)
sum(X.Y)
All sums are over the samples in the window. When you have these, the average is:
sum(Y) / n
and the slope:
(sum(X.Y) - sum(X) sum... | I have to perform linear regressions on a rolling window on Y and a time lagged version of X, ie finding Y(t) = aX(t-1) + b. The window size is fixed at 30 samples. I want to return a numpy array of all the beta coefficients. Is there a quick way of doing this? I read about the Savitsky-Golay filter, but it regresses o... | 0 | 1 | 149 |
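The running-sums approach described above can be sketched in pure Python; the slope uses the standard closed-form OLS expression (n*sum(XY) - sum(X)*sum(Y)) / (n*sum(XX) - sum(X)^2), and each window update is O(1) because only the entering and leaving samples touch the sums (variable names are my own):

```python
def rolling_slope_mean(x, y, n=30):
    """Rolling OLS slope of y on x and rolling mean of y, window size n."""
    sx = sum(x[:n]); sy = sum(y[:n])
    sxx = sum(v * v for v in x[:n])
    sxy = sum(a * b for a, b in zip(x[:n], y[:n]))
    out = []
    for i in range(n, len(x) + 1):
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        out.append((slope, sy / n))
        if i < len(x):
            # slide the window: drop sample i-n, add sample i
            sx += x[i] - x[i - n]
            sy += y[i] - y[i - n]
            sxx += x[i] ** 2 - x[i - n] ** 2
            sxy += x[i] * y[i] - x[i - n] * y[i - n]
    return out

# sanity check on an exactly linear series y = 2x + 1
xs = list(range(100))
ys = [2 * v + 1 for v in xs]
results = rolling_slope_mean(xs, ys, n=30)
```

For the time-lagged regression Y(t) = aX(t-1) + b, simply pair y[1:] with x[:-1] before feeding the two series in.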
0 | 32,809,719 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2014-08-01T19:11:00.000 | 1 | 4 | 0 | 2D Interpolation with periodic boundary conditions | 25,087,111 | 0.049958 | python,interpolation | Another function that could work is scipy.ndimage.interpolation.map_coordinates.
It does spline interpolation with periodic boundary conditions.
It does not directly provide derivatives, but you could calculate them numerically. | I'm running a simulation on a 2D space with periodic boundary conditions. A continuous function is represented by its values on a grid. I need to be able to evaluate the function and its gradient at any point in the space. Fundamentally, this isn't a hard problem -- or to be precise, it's an almost already solved pr... | 0 | 1 | 3,228 |
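A minimal sketch of periodic interpolation along these lines; note the answer names the old scipy.ndimage.interpolation path, while in current SciPy the function lives at scipy.ndimage.map_coordinates, and mode='grid-wrap' (SciPy >= 1.6) is the strictly periodic boundary mode. The grid values are made up; derivatives would be finite-differenced separately:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# 4x4 grid of function values on a periodic domain
grid = np.arange(16, dtype=float).reshape(4, 4)

# sample at fractional (row, col) coordinates with periodic wrapping
coords = np.array([[3.5, 0.0],   # row coordinates of the two sample points
                   [1.0, 2.5]])  # column coordinates
vals = map_coordinates(grid, coords, order=1, mode='grid-wrap')
```

With order=3 the same call does cubic spline interpolation, still with periodic boundaries.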
0 | 25,110,368 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-08-02T15:00:00.000 | 1 | 3 | 0 | What is the data structure in python that can contain multiple pandas data frames? | 25,096,357 | 0.066568 | python,pandas | I haven't done much with Panels, but what exactly is the functionality that you need? Is there a reason a simple python list wouldn't work? Or, if you want to refer by name and not just by list position, a dictionary? | I want to write a function to return several data frames (different dims) and put them into a larger "container" and then select each from the "container" using indexing. I think I want to find some data structure like list in R, which can have different kinds of objects.
What can I use to do this? | 0 | 1 | 104 |
0 | 25,123,660 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-08-03T02:31:00.000 | 1 | 1 | 0 | Spyder, Python IDE startup code crashing GUI | 25,101,081 | 0.197375 | python,numpy,scipy,ipython,spyder | You may find Spyder's array editor better suited for large arrays than the qt console. | I am using Spyder from the Anaconda scientific package set (3.x) and consistently work with very large arrays. I want to be able to see these arrays in my console window so I use these two commands:
set_printoptions(linewidth=1000)
to set the maximum characters displayed on a single line to 1000 and:
set_printoptions(... | 0 | 1 | 604 |
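For clarity, the two commands mentioned are numpy's set_printoptions; a small self-contained sketch (the 200-element array is just an illustration):

```python
import numpy as np

# allow up to 1000 characters per printed line so wide rows are not wrapped
np.set_printoptions(linewidth=1000)

arr = np.arange(200)
print(arr)  # fits on a single line instead of wrapping
```

The threshold option controls the companion behavior of summarizing very large arrays with "..." instead of printing every element.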
0 | 25,122,607 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-08-03T23:40:00.000 | 0 | 1 | 0 | Benefits of Pytables / databases over file system for data organization? | 25,110,089 | 0 | python,csv,organization,pytables | First of all, I am a big fan of Pytables, because it helped me manage huge data files (20GB or more per file), which I think is where Pytables plays out its strong points (fast access, built-in querying etc.). If the system is also used for archiving, the compression capabilities of HDF5 will reduce space requirements ... | I'm currently in the process of trying to redesign the general workflow of my lab, and am coming up against a conceptual roadblock that is largely due to my general lack of knowledge in this subject.
Our data currently is organized in a typical file system structure along the lines of:
Date\Cell #\Sweep #
where for a... | 0 | 1 | 332 |
0 | 25,288,486 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2014-08-04T16:07:00.000 | -1 | 1 | 0 | Using Pickle vs database for loading large amount of data? | 25,122,947 | 1.2 | python,database,computer-vision,pickle | Use a database because it allows you to query faster. I've done this before. I would suggest against using cPickle. What specific implementation are you using? | I have previously saved a dictionary which maps image_name -> list of feature vectors, with the file being ~32 Gb. I have been using cPickle to load the dictionary in, but since I only have 8 GB of RAM, this process takes forever. Someone suggested using a database to store all the info, and reading from that, but woul... | 0 | 1 | 811 |
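One hedged sketch of the database route suggested above: keep each image's feature vectors as a row in SQLite so a single lookup never loads the whole 32 GB dict into RAM (stdlib only; table and file names are illustrative, and :memory: would be a real file path in practice):

```python
import pickle
import sqlite3

# store feature vectors keyed by image name
conn = sqlite3.connect(":memory:")  # e.g. "features.db" on disk
conn.execute("CREATE TABLE features (image_name TEXT PRIMARY KEY, vecs BLOB)")

vectors = {"cat.jpg": [[0.1, 0.2], [0.3, 0.4]], "dog.jpg": [[0.5, 0.6]]}
for name, vecs in vectors.items():
    conn.execute("INSERT INTO features VALUES (?, ?)",
                 (name, pickle.dumps(vecs)))
conn.commit()

# fetch one image's vectors on demand instead of unpickling everything
row = conn.execute("SELECT vecs FROM features WHERE image_name = ?",
                   ("cat.jpg",)).fetchone()
loaded = pickle.loads(row[0])
```

The PRIMARY KEY index makes per-image lookups fast, and memory use stays bounded by the size of one record rather than the whole mapping.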
0 | 25,142,706 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-08-05T12:11:00.000 | 0 | 2 | 0 | How to check a random 3d object surface if is flat in python | 25,138,508 | 0 | python,image,3d,transform | Firstly, all lines in 3d correspond to an equation; secondly, all lines in 3d that lie on a particular plane for part of their length correspond to equations that belong to a set of linear equations that share certain features, which you would need to determine. The first thing you should do is identify the four corner... | I used micro CT (it generates a kind of 3D image object) to evaluate my samples which were shaped like a cone. However the main surface which should be flat can not always be placed parallel to the surface of image stacks. To perform the transform, first of all, I have to find a way to identify the flat surface. Theref... | 0 | 1 | 435 |
0 | 67,782,295 | 0 | 0 | 0 | 0 | 1 | false | 42 | 2014-08-05T18:09:00.000 | -2 | 4 | 0 | TFIDF for Large Dataset | 25,145,552 | -0.099668 | python,lucene,nlp,scikit-learn,tf-idf | The lengths of the documents
The number of terms in common
Whether the terms are common or unusual
How many times each term appears | I have a corpus which has around 8 million news articles, I need to get the TFIDF representation of them as a sparse matrix. I have been able to do that using scikit-learn for relatively lower number of samples, but I believe it can't be used for such a huge dataset as it loads the input matrix into memory first and th... | 0 | 1 | 28,481 |
0 | 25,178,533 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-08-06T18:55:00.000 | 0 | 1 | 0 | Python pandas module openpxyl version issue | 25,168,058 | 0 | python,pandas,openpyxl,versions | The best thing would be to remove the version of openpyxl you installed and let Pandas take care. | My installed version of the python(2.7) module pandas (0.14.0) will not import. The message I receive is this:
UserWarning: Installed openpyxl is not supported at this time. Use >=1.6.1 and <2.0.0.
Here's the problem - I already have openpyxl version 1.8.6 installed so I can't figure out what the problem might be! Does... | 0 | 1 | 114 |
0 | 25,170,275 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-08-06T20:22:00.000 | 1 | 2 | 0 | error in import pandas after installing it using pip | 25,169,506 | 0.099668 | python,pandas | Try to locate your pandas lib in /python*/lib/site-packages, add dir to your sys.path file. | I installed pandas using pip and get the following message "pip install pandas
Requirement already satisfied (use --upgrade to upgrade): pandas in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
Cleaning up..."
When I load up python and try to import pandas, it says module not found. Pleas... | 0 | 1 | 1,176 |
0 | 25,414,604 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-08-07T20:56:00.000 | 0 | 1 | 0 | How to test the default NLTK NER chunker's accuracy on own corpus? | 25,192,029 | 0 | python,nltk | Read in the chunked portion of your corpus and convert it into the format that the NLTK expects, i.e. as a list of shallow Trees. Once you have it in this form, you can pass it to the evaluate() method just like you would pass the "gold standard" examples.
The evaluate method will strip off the chunks, run your text t... | How to test the default NLTK NER chunker's accuracy on own corpus?
I've tagged a percentage of my own corpus. I'm curious if it's possible to use the default NLTK tagger to see accuracy rate on this corpus?
I already know about the ne_chunker.evaluate() function, but it's not immediately clear to me how to input in my... | 0 | 1 | 269 |
0 | 25,233,675 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-08-10T22:16:00.000 | 1 | 2 | 0 | Large Array of binary data | 25,233,539 | 0.099668 | python | Another possibility is to represent the last axis of 20 bits as a single 32 bit integer. This way a 5000x5000 array would suffice. | I'm working with a large 3 dimensional array of data that is binary, each value is one of two possible values. I currently have this data stored in the numpy array as int32 objects that are either 1 or 0.
It works fine for small arrays but eventually i will need to make the array 5000x5000x20, which I can't even get c... | 0 | 1 | 119 |
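One way to realize the bit-per-value idea from the answers is NumPy's `packbits`/`unpackbits` (a sketch; shapes are scaled down from the 5000x5000x20 in the question):

```python
import numpy as np

# A boolean array already uses 1 byte per value (vs 4 for int32)
a = np.zeros((50, 50, 20), dtype=bool)
a[0, 0, 3] = True

# packbits squeezes the last axis down to 1 bit per value:
# 20 bits -> ceil(20 / 8) = 3 bytes per (i, j) cell
packed = np.packbits(a, axis=-1)
print(packed.shape)  # (50, 50, 3)

# Round-trip: unpack and trim the padding bits
restored = np.unpackbits(packed, axis=-1)[:, :, :20].astype(bool)
print(np.array_equal(a, restored))  # True
```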
0 | 25,242,915 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-08-11T07:36:00.000 | 0 | 1 | 0 | Python hierarchical clustering visualization dump [scipy] | 25,238,028 | 1.2 | python,scipy,cluster-analysis,hierarchical-clustering | The solution was that scipy had it's own built in function to turn linkage matrix to binary tree. The function name is scipy.to_tree(matrix) | Recently I was visualizing my datasets using python modules scikit and scipy hierarchical clustering and dendrogram. Dendrogram method drawing me a graph and now I need to export this tree as a graph in my code. I am wondering is there any way to get this data. Any help would be really appreciated. Thanks. | 0 | 1 | 682 |
0 | 25,298,846 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-08-13T13:52:00.000 | 1 | 1 | 0 | Python NLTK tokenizing text using already found bigrams | 25,288,032 | 1.2 | python,python-2.7,nlp,nltk | The way topic modelers usually pre-process text with n-grams is to connect them by underscore (say, topic_modeling or white_house). You can do that when identifying the bigrams themselves. And don't forget to make sure that your tokenizer does not split by underscore (Mallet does if not setting token-regex explicitl... | Background: I got a lot of text that has some technical expressions, which are not always standard.
I know how to find the bigrams and filter them.
Now, I want to use them when tokenizing the sentences. So words that should stay together (according to the calculated bigrams) are kept together.
I would like to know if... | 0 | 1 | 317 |
0 | 63,317,500 | 0 | 0 | 0 | 0 | 1 | false | 374 | 2014-08-17T17:52:00.000 | 0 | 9 | 0 | How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML? | 25,351,968 | 0 | python,html,pandas | For those who like to reduce typing (i.e., everyone!): pd.set_option('max_colwidth', None) does the same thing | I converted a Pandas dataframe to an HTML output using the DataFrame.to_html function. When I save this to a separate HTML file, the file shows truncated output.
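A sketch of the option-based fix from the answers, assuming pandas is installed (newer pandas spells the option `display.max_colwidth`; `None` disables truncation, older versions used `-1`):

```python
import pandas as pd

pd.set_option("display.max_colwidth", None)  # no cell truncation in to_html

long_text = "The film was an excellent effort in deconstructing the complexities"
df = pd.DataFrame({"TEXT": [long_text]})

html = df.to_html()
print(long_text in html)  # True -- the full string survives, no "..."
```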
For example, in my TEXT column,
df.head(1) will show
The film was an excellent effort...
instead of
The film was an excellent effort in deconstructing the com... | 1 | 1 | 438,196 |
1 | 44,932,618 | 0 | 1 | 0 | 0 | 1 | false | 8 | 2014-08-24T03:57:00.000 | 1 | 4 | 0 | Using Anaconda Python 3.4 with PyQt5 | 25,468,397 | 0.049958 | matplotlib,anaconda,python-3.4,pyqt5 | I use Anaconda with Python v2.7.X, and qt5 doesn't work. The work-around I found was
Tools -> Preferences -> Python console -> External modules -> Library: PySide | I have an existing PyQt5/Python3.4 application that works great, and would now like to add "real-time" data graphing to it. Since matplotlib installation specifically looks for Python 3.2, and NumPy / ipython each have their own Python version requirements, I thought I'd use a python distribution to avoid confusion. ... | 0 | 1 | 15,580 |
0 | 71,228,716 | 0 | 0 | 0 | 0 | 2 | false | 27 | 2014-08-25T12:23:00.000 | 0 | 7 | 0 | How to convert a 16-bit to an 8-bit image in OpenCV? | 25,485,886 | 0 | python,numpy,opencv,image-processing | This is the simplest way I found: img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U) | I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python? | 0 | 1 | 66,029 |
0 | 63,122,556 | 0 | 0 | 0 | 0 | 2 | false | 27 | 2014-08-25T12:23:00.000 | 1 | 7 | 0 | How to convert a 16-bit to an 8-bit image in OpenCV? | 25,485,886 | 0.028564 | python,numpy,opencv,image-processing | Yes you can in Python. To get the expected result, choose a method based on what you want the values mapped from say uint16 to uint8 be.
For instance,
if you do img8 = (img16/256).astype('uint8') values below 256 are mapped to 0
if you do img8 = img16.astype('uint8') values above 255 are mapped to 0
In the LUT me... | I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python? | 0 | 1 | 66,029 |
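The two mappings contrasted in this answer can be seen with plain NumPy, no OpenCV required (a sketch):

```python
import numpy as np

img16 = np.array([[0, 255, 256, 65535]], dtype=np.uint16)

# Scale: divide by 256 so the full 16-bit range maps onto 0..255
img8_scaled = (img16 // 256).astype(np.uint8)
print(img8_scaled)  # [[  0   0   1 255]]

# Truncate: keep only the low byte -- values above 255 wrap around
img8_trunc = img16.astype(np.uint8)
print(img8_trunc)   # [[  0 255   0 255]]
```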
0 | 25,508,739 | 0 | 0 | 0 | 0 | 1 | false | 32 | 2014-08-26T14:34:00.000 | 4 | 4 | 0 | Fastest way to parse large CSV files in Pandas | 25,508,510 | 0.197375 | python,pandas | One thing to check is the actual performance of the disk system itself. Especially if you use spinning disks (not SSD), your practical disk read speed may be one of the explaining factors for the performance. So, before doing too much optimization, check if reading the same data into memory (by, e.g., mydata = open('my... | I am using pandas to analyse large CSV data files. They are around 100 megs in size.
Each load from csv takes a few seconds, and then more time to convert the dates.
I have tried loading the files, converting the dates from strings to datetimes, and then re-saving them as pickle files. But loading those takes a few sec... | 0 | 1 | 36,848 |
0 | 25,533,547 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-08-27T16:32:00.000 | 0 | 3 | 0 | Many independent pseudorandom graphs each with same arbitrary y for any input x | 25,532,502 | 0 | python,algorithm,random,python-3.4 | Well you're probably going to need to come up with some more detailed requirements but yes, there are ways:
pre-populate a dictionary with however many terms in the series you require for a given seed and then at run-time simply look the nth term up.
if you're not fussed about the seed values and/or do not require s... | By 'graph' I mean 'function' in the mathematical sense, where you always find one unchanging y value per x value.
Python's random.Random class's seed behaves as the x-coordinate of a random graph and each new call to random.random() gives a new random graph with all new x-y mappings.
Is there a way to directly refer to... | 0 | 1 | 172 |
0 | 25,554,002 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-08-28T16:35:00.000 | 1 | 2 | 0 | Numpy array element-by-element comparison optimization | 25,553,781 | 0.099668 | python,optimization,numpy | It computes max(a) once, then it compares the (scalar) result against each (scalar) element in a, and creates a bool-array for the result. | Let a be a numpy array of length n.
Does the statement
a == max(a)
calculate the expression max(a) n-times or just one? | 0 | 1 | 70 |
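The single-evaluation behaviour is easy to confirm (a sketch; `a.max()` is preferred over the Python builtin `max(a)` for NumPy arrays):

```python
import numpy as np

a = np.array([3, 1, 4, 1, 5])

# a.max() is evaluated once; the scalar result is then broadcast
# against every element, yielding a boolean array.
mask = a == a.max()
print(mask)  # [False False False False  True]
```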
0 | 25,562,736 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-08-29T06:09:00.000 | 6 | 2 | 0 | How to return 'negative' of a value in pandas dataframe? | 25,562,570 | 1.2 | python,pandas | Just use the negative sign on the column directly. For instance, if your DataFrame has a column "A", then -df["A"] gives the negatives of those values. | In pandas, is there any function that returns the negative of the values in a column? | 0 | 1 | 2,315 |
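A sketch of the unary-minus approach from the answer (the column name `A` is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, -2, 3]})
df["neg_A"] = -df["A"]       # element-wise negation of the column
print(df["neg_A"].tolist())  # [-1, 2, -3]
```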
0 | 25,594,611 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-08-31T16:14:00.000 | 1 | 4 | 0 | Generate random matrix with every number 0..k | 25,593,876 | 1.2 | python,algorithm | You want to generate a random n*m matrix of integers 1..k with every integer used, and no integer used twice in any row. And you want to do it efficiently.
If you just want to generate a reasonable answer, reasonably quickly, you can generate the rows by taking a random selection of elements, and putting them into a r... | Given an integer k, I am looking for a pythonic way to generate a nxm matrix (or nested list) which has every integer from 0..k-1 but no integer appears more than once in each row.
Currently I'm doing something like this
random.sample(list(combinations(xrange(k), m)), n)
but this does not guarantee every number from 0.... | 0 | 1 | 407 |
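A stdlib sketch of one simple way to add the missing coverage guarantee: sample each row without replacement, then retry until every integer 0..k-1 appears somewhere (not the most efficient scheme, but easy to verify):

```python
import random

def random_matrix(n, m, k):
    """n x m matrix, each row sampled without replacement from range(k),
    retried until every integer 0..k-1 appears somewhere."""
    if n * m < k or m > k:
        raise ValueError("cannot cover 0..k-1 with these dimensions")
    while True:
        rows = [random.sample(range(k), m) for _ in range(n)]
        if len({x for row in rows for x in row}) == k:
            return rows

mat = random_matrix(n=6, m=4, k=8)
assert all(len(set(row)) == len(row) for row in mat)      # no repeats per row
assert {x for row in mat for x in row} == set(range(8))   # full coverage
```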
0 | 25,607,452 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-09-01T08:54:00.000 | 1 | 1 | 0 | Multiple files or single files into HDFStore | 25,602,155 | 0.197375 | python,pandas,hdfstore | These are the differences:
multiple files
when using multiple files you can only corrupt a single file when writing (eg you have a power failure when writing)
you can parallelize writing with multiple files (note - never, ever try to parallelize with a single file a this will corrupt it!!!)
single file
grouping if ... | I am converting 100 csv files into dataframes and storing them in an HDFStore.
What are the pros and cons of
a - storing the csv file as 100 different HDFStore files?
b - storing all the csv files as separate items in a single HDFStore?
Other than performance issues, I am asking the question as I am having stability is... | 0 | 1 | 202 |
0 | 25,691,635 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-09-02T15:30:00.000 | 0 | 1 | 0 | Installing matplotlib via pip on Ubuntu 12.04 | 25,627,100 | 1.2 | python,matplotlib,ubuntu-12.04 | Okay, the problem was in gcc version. During building and creating wheel of package pip uses system gcc (which version is 4.7.2). I'm using python from virtualenv, which was built with gcc 4.4.3. So version of libstdc++ library is different in IPython and one that pip used.
As always there are two solutions (or even m... | I'm trying to use matplotlib on Ubuntu 12.04. So I built a wheel with pip:
python .local/bin/pip wheel --wheel-dir=wheel/ --build=build/ matplotlib
Then successfully installed it:
python .local/bin/pip install --user --no-index --find-links=wheel/ --build=build/ matplotlib
But when I'm trying to import it in ipython Im... | 0 | 1 | 520 |
0 | 48,203,281 | 0 | 1 | 0 | 0 | 1 | false | 164 | 2014-09-03T13:53:00.000 | 35 | 6 | 0 | Python: Convert timedelta to int in a dataframe | 25,646,200 | 1 | python,pandas,timedelta | Timedelta objects have read-only instance attributes .days, .seconds, and .microseconds. | I would like to create a column in a pandas data frame that is an integer representation of the number of days in a timedelta column. Is it possible to use 'datetime.days' or do I need to do something more manual?
timedelta column
7 days, 23:29:00
day integer column
7 | 0 | 1 | 343,961 |
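A sketch using the `.dt.days` accessor, the vectorized counterpart of the `.days` attribute mentioned in the answer:

```python
import pandas as pd

td = pd.Series(pd.to_timedelta(["7 days 23:29:00", "1 days 00:00:01"]))
days = td.dt.days     # integer number of whole days, fractional part dropped
print(days.tolist())  # [7, 1]
```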
0 | 25,750,072 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-09-05T20:56:00.000 | 1 | 1 | 0 | Inspecting or turning off Numpy/SciPy Parallelization | 25,693,870 | 1.2 | python,numpy,parallel-processing,scipy,scikit-learn | Indeed BLAS, or in my case OpenBLAS, was performing the parallelization.
The solution was to set the environment variable OMP_NUM_THREADS to 1.
Then all is right with the world. | I am running some K-Means clustering from the sklearn package.
Although I am setting the parameter n_jobs = 1 as indicated in the sklearn documentation, and although a single process is running, that process will apparently consume all the CPUs on my machine. That is, in top, I can see the python job is using, say 400... | 0 | 1 | 525 |
0 | 25,735,046 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-09-06T19:09:00.000 | 0 | 2 | 0 | Find the 'shape' of a list of numbers (straight-line/concave/convex, how many humps) | 25,703,792 | 0 | python,statistics,classification,computer-science,differentiation | How about differencing the data (i.e., x[i+1] - x[i]) repeatedly until all the results are the same sign? For example, if you difference it twice and all the results are nonnegative, you know it's convex. Otherwise difference again and check the signs. You could set a limit, say 10 or so, beyond which you figure t... | This is a bit hard to explain. I have a list of integers. So, for example, [1, 2, 4, 5, 8, 7, 6, 4, 1] - which, when plotted against element number, would resemble a convex graph. How do I somehow extract this 'shape' characteristic from the list? It doesn't have to be particularly accurate - just the general shape, conve... | 0 | 1 | 2,040 |
0 | 25,714,220 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2014-09-07T19:40:00.000 | 1 | 3 | 0 | does numpy asarray() refer to original list | 25,714,046 | 0.066568 | python,numpy | Yes, it is safe to delete it if your input data consists of a list. From the documentation No copy is performed (ONLY) if the input is already an ndarray. | I have a very long list of list and I am converting it to a numpy array using numpy.asarray(), is it safe to delete the original list after getting this matrix or does the newly created numpy array will also be affected by this action? | 0 | 1 | 1,721 |
0 | 25,714,754 | 0 | 1 | 0 | 0 | 1 | false | 21 | 2014-09-07T20:36:00.000 | 2 | 4 | 0 | Find rhyme using NLTK in Python | 25,714,531 | 0.099668 | python,nltk | Use soundex or double metaphone to find out if they rhyme. NLTK doesn't seem to implement these but a quick Google search showed some implementations. | I have a poem and I want the Python code to just print those words which are rhyming with each other.
So far I am able to:
Break the poem sentences using wordpunct_tokenize()
Clean the words by removing the punctuation marks
Store the last word of each sentence of the poem in a list
Generate another list using cmudic... | 0 | 1 | 14,269 |
0 | 25,824,846 | 1 | 0 | 0 | 0 | 1 | true | 0 | 2014-09-12T23:14:00.000 | 1 | 1 | 0 | What big data solution can I use to process a huge number of input files? | 25,818,198 | 1.2 | python,amazon-ec2,bigdata,amazon-sqs | The problem with Hadoop is that when you get a very large number of files that you do not combine with CombineFileInputFormat, it makes the job less efficient.
Spark doesn't seem to have a problem with this, though; I've had jobs run without problems with 10s of 1000s of files and output 10s of 1000s of files. Not tried to really... | I am currently searching for the best solution + environment for a problem I have. I'm simplifying the problem a bit, but basically:
I have a huge number of small files uploaded to Amazon S3.
I have a rule system that matches any input across all file content (including file names) and then outputs a verdict classify... | 1 | 1 | 148 |
0 | 25,824,686 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-09-13T15:00:00.000 | 2 | 1 | 0 | How do I raise an exception if all elements of a numpy array are not floats? | 25,824,415 | 1.2 | python,numpy | All numbers in a numpy array have the same dtype. So you can quickly check what dtype the array has by looking at array.dtype. If this is float or float64 then every single item in the array will be of type float.
Numpy can also create arrays with mixed dtypes similar to normal python lists but then array.dtype=np.obje... | Just as the title says, I want to raise an exception when I send in an input A that should be an array containing floats. That is, if A contains at least one item that is not a float it should return an TypeError. | 0 | 1 | 828 |
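A sketch of the dtype-based check this answer describes: one comparison against the array's dtype instead of testing every element (the function name is illustrative):

```python
import numpy as np

def require_floats(A):
    A = np.asarray(A)
    # All elements share one dtype, so checking it once covers the array.
    # Mixed-type inputs get a non-float dtype and are rejected here too.
    if not np.issubdtype(A.dtype, np.floating):
        raise TypeError(f"expected an array of floats, got dtype {A.dtype}")
    return A

require_floats([1.0, 2.5])      # fine: dtype float64
try:
    require_floats([1, 2])      # int dtype -> rejected
except TypeError as e:
    print(e)
```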
0 | 25,830,628 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-09-14T06:08:00.000 | 2 | 2 | 0 | Print from specific row of csv file | 25,830,569 | 0.197375 | python,csv | You need to come up with a way to detect the start and end of the relevant section of the file; the csv module does not contain any built-in mechanism for doing this by itself, because there is no general and unambiguous delimiter for the beginning and end of a particular section.
I have to question the wisdom of jammi... | my csv file has multiple tables in a single file for example
name age gender
n1 10 f
n2 20 m
n3 30 m
city population
city1 10
city2 20
city3 30
How can I print from city row to city3 row.using python csv module | 0 | 1 | 764 |
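A stdlib sketch of the section-detection idea from the answer: scan rows until the header of the wanted sub-table appears, then keep everything from there on (the header word `city` and the space delimiter come from the question):

```python
import csv
import io

raw = """name age gender
n1 10 f
n2 20 m
n3 30 m
city population
city1 10
city2 20
city3 30
"""

section, capturing = [], False
for row in csv.reader(io.StringIO(raw), delimiter=" "):
    if row and row[0] == "city":  # header of the wanted sub-table
        capturing = True
    if capturing and row:
        section.append(row)
print(section)
# [['city', 'population'], ['city1', '10'], ['city2', '20'], ['city3', '30']]
```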
0 | 25,836,165 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-09-14T17:43:00.000 | 1 | 1 | 0 | Why is a list of cumulative frequency sums required for implementing a random word generator? | 25,836,133 | 0.197375 | python,algorithm,random,cumulative-sum,cumulative-frequency | Your approach is (also) correct, but it uses space proportional to the input text size. The approach suggested by the book uses space proportional only to the number of distinct words in the input text, which is usually much smaller. (Think about how often words like "the" appear in English text.) | I'm working on exercise 13.7 from Think Python: How to Think Like a Computer Scientist. The goal of the exercise is to come up with a relatively efficient algorithm that returns a random word from a file of words (let's say a novel), where the probability of the word being returned is correlated to its frequency in the... | 0 | 1 | 242 |
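A stdlib sketch of why the cumulative sums help: with a sorted cumulative list, one `bisect` call (O(log n) per draw) picks a word with probability proportional to its frequency, instead of scanning the whole histogram for every draw (the tiny histogram is illustrative):

```python
import bisect
import random

hist = {"the": 3, "cat": 1, "sat": 2}   # word -> frequency
words = list(hist)
cum, total = [], 0                       # running totals: [3, 4, 6]
for w in words:
    total += hist[w]
    cum.append(total)

def random_word():
    r = random.random() * total          # point in [0, total)
    return words[bisect.bisect(cum, r)]  # O(log n) lookup

print(random_word())
```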
0 | 40,950,835 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-09-15T11:49:00.000 | 2 | 1 | 0 | Numba vectorize maxing out all processors | 25,847,411 | 0.379949 | python,vectorization,numba | You can limit the number of threads that target=parallel will use by setting the NUMBA_NUM_THREADS envvar. Note that you can't change this after Numba is imported, it gets set when you first start it up. You can check whether it works by examining the value of numba.config.NUMBA_DEFAULT_NUM_THREADS | Does anyone know if there is a way to configure anaconda such that @vectorize does not take all the processors available in the machine? For example, if I have an eight core machine, I only want @vectorize to use four cores. | 0 | 1 | 430 |
0 | 25,902,242 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-09-17T19:58:00.000 | 0 | 2 | 0 | Signal feature identification | 25,899,286 | 0 | python,audio,machine-learning,signal-processing | Your points 1 and 2 are not very different: 1) is the end result of a classification problem and 2) is the feature that you give for classification. What you need is a good classifier (SVM, decision trees, hierarchical classifiers etc.) and a good set of features (pitch, formants etc. that you mentioned). | I am trying to identify phonemes in voices using a training database of known ones.
I'm wondering if there is a way of identifying common features within my training sample and using that to classify a new one.
It seems like there are two paths:
Give the process raw/normalised data and it will return similar ones... | 0 | 1 | 401 |
0 | 25,961,702 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-09-20T14:23:00.000 | 0 | 2 | 0 | asymmetric regularization in machine learning libraries (e.g. scikit ) in python | 25,949,733 | 0 | python,machine-learning,scikit-learn,asymmetric,regularized | Depending on the amount of data you have and the classifier you would like to use, it might be easier to implement the loss and then use a standard solver like lbfgs or newton, or do stochastic gradient descent, if you have a lot of data.
Using a simple custom solver will most likely be much slower than using scikit-le... | The problem requires me to regularize weights of selected features while training a linear classifier. I am using python SKlearn.
Having googled a lot about incorporating asymmetric regularization for classifiers in SKlearn, I could not find any solution. The core library function that performs this task is provided as... | 0 | 1 | 206 |
0 | 26,011,654 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-09-21T20:21:00.000 | 0 | 1 | 1 | "Text file busy" error for the mapper in a Hadoop streaming job execution | 25,963,463 | 1.2 | python,hadoop,mapreduce,streaming | Can you please try stopping all the daemons using 'stop-all' first and then rerun your MR job after restarting the daemons (using 'start-all')?
Let's see if it helps! | I have an application that creates text files with one line each and dumps them to hdfs.
This location is in turn being used as the input directory for a hadoop streaming job.
The expectation is that the number of mappers will be equal to the "input file split" which is equal to the number of files in my case. Somehow a... | 0 | 1 | 317 |
0 | 25,996,185 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-09-22T02:34:00.000 | 0 | 1 | 0 | Pybrain Reinforcement Learning dynamic output | 25,965,953 | 0 | python,pybrain,reinforcement-learning | It is certainly possible to train a neural network (based on pybrain or otherwise) to make predictions of this sort that are better than a coin toss.
However, weather prediction is a very complex art, even for people who do it as their full-time profession and have been for decades. Those weather forecasters have much... | Can you use Reinforcement Learning from Pybrain on dynamic changing output. For example weather: lets say you have 2 attributes Humidity and Wind and the output will be either Rain or NO_Rain ( and all attributes are either going to have a 1 for true or 0 for false in the text file i am using). can you use Reinforcemen... | 0 | 1 | 241 |
0 | 26,202,109 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-09-24T08:11:00.000 | 1 | 1 | 0 | Tell IPython Parallel to use Pickle again after Dill has been activated | 26,011,787 | 1.2 | ipython,pickle,ipython-parallel,dill | I'm the dill author. I don't know if IPython does anything unusual, but you can revert to pickle if you like through dill directly with dill.extend(False)… although this is a relatively new feature (not yet in a stable release).
If IPython doesn't have a dv.use_pickle() (it doesn't at the moment), it should… and could... | I'm developing a distributed application using IPython parallel. There are several tasks which are carried out one after another on the IPython cluster engines.
One of these tasks inevitably makes use of closures. Hence, I have to tell IPython to use Dill instead of Pickle by calling dv.use_dill(). Though this should b... | 0 | 1 | 186 |
0 | 26,047,701 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2014-09-25T20:07:00.000 | -1 | 3 | 0 | import anaconda packages to IDLE? | 26,047,185 | -0.066568 | python,anaconda | You should try starting IDLE with the anaconda interpreter instead. AFAIK it's too primitive an IDE to let you configure which interpreter to use. So if anaconda doesn't ship one, use a different IDE instead, such as PyCharm, PyDev, Eric, Sublime2, Vim, Emacs.
import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages a... | 0 | 1 | 7,062 |
0 | 26,067,690 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2014-09-25T20:07:00.000 | 2 | 3 | 0 | import anaconda packages to IDLE? | 26,047,185 | 0.132549 | python,anaconda | You need to add those directories to PATH, not PYTHONPATH, and it should not include the pkgs directory. | I installed numpy, scipy, matplotlib, etc through Anaconda. I set my PYTHONPATH environment variable to include C://Anaconda; C://Anaconda//Scripts; C://Anaconda//pkgs;.
import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages a... | 0 | 1 | 7,062 |
0 | 26,089,798 | 1 | 0 | 0 | 0 | 1 | false | 2 | 2014-09-28T05:16:00.000 | 0 | 2 | 0 | How to convert xml file of stack overflow dump to csv file | 26,081,880 | 0 | python-3.x,data-dump | Use one of the python xml modules to parse the .xml file. Unless you have much more that 27GB ram, you will need to do this incrementally, so limit your choices accordingly. Use the csv module to write the .csv file.
Your real problem is this. Csv files are lines of fields. They represent a rectangular table. Xml fi... | I have stack overflow data dump file in .xml format,nearly 27GB and I want to convert them in .csv file. Please somebody tell me, tools to convert xml to csv file or python program | 0 | 1 | 704 |
0 | 26,099,751 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2014-09-29T09:06:00.000 | 3 | 1 | 0 | python scipy ode dopri5 'larger nmax needed' | 26,096,209 | 0.53705 | python,scipy | nmax refers to the maximum number of internal steps that the solver will take. The default is 500. You can change it with the nsteps argument of the set_integrator method. E.g.
ode(f).set_integrator('dopri5', nsteps=1000)
(The Fortran code calls this NMAX, and apparently the Fortran name was copied to the error messa... | While using scipy 0.13.0, ode(f).set_integrator('dopri5'), I get the error message -
larger nmax is needed
I looked for nmax in the ode.py but I can't see the variable. I guess that the number call for integration exceeds the allowed default value.
How can I increase the nmax value? | 0 | 1 | 2,600 |
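A sketch of the `nsteps` fix from the answer (SciPy assumed installed; the ODE is a toy y' = -y):

```python
import math
from scipy.integrate import ode

def f(t, y):
    return -y

r = ode(f).set_integrator("dopri5", nsteps=10000)  # raise the internal step cap
r.set_initial_value(1.0, 0.0)
r.integrate(1.0)
print(abs(r.y[0] - math.exp(-1)) < 1e-5)  # True
```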
0 | 54,617,685 | 0 | 0 | 0 | 0 | 2 | false | 53 | 2014-09-29T11:21:00.000 | 12 | 9 | 0 | Rename unnamed column pandas dataframe | 26,098,710 | 1 | python,pandas,csv | The solution can be improved as data.rename( columns={0 :'new column name'}, inplace=True ). There is no need to use 'Unnamed: 0', simply use the column number, which is 0 in this case and then supply the 'new column name'. | My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''. | 0 | 1 | 116,651 |
0 | 58,729,636 | 0 | 0 | 0 | 0 | 2 | false | 53 | 2014-09-29T11:21:00.000 | 5 | 9 | 0 | Rename unnamed column pandas dataframe | 26,098,710 | 0.110656 | python,pandas,csv | Try the below code,
df.columns = ['A', 'B', 'C', 'D'] | My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''. | 0 | 1 | 116,651 |
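A sketch of the rename idea from the answers: `read_csv` labels a blank first header cell `'Unnamed: 0'`, which you can rename either by that generated name or by position (the name `label` is illustrative):

```python
import io
import pandas as pd

csv_text = ",score\na,1\nb,2\n"   # first header cell is empty
df = pd.read_csv(io.StringIO(csv_text))
print(df.columns.tolist())        # ['Unnamed: 0', 'score']

# Option 1: rename by the generated name
df = df.rename(columns={"Unnamed: 0": "label"})

# Option 2 (equivalent): replace by position
# df.columns = ["label"] + df.columns[1:].tolist()
print(df.columns.tolist())        # ['label', 'score']
```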
0 | 26,111,537 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-09-29T16:21:00.000 | 0 | 2 | 0 | method for implementing regression tree on raster data - python | 26,104,434 | 0 | python,regression,weka,raster,landsat | I have had some experience using LandSat Data for the prediction of environmental properties of soil, which seems to be somewhat related to the problem that you have described above. Although I developed my own models at the time, I could describe the general process that I went through in order to map the predicted d... | I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do:
My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each re... | 0 | 1 | 709 |
0 | 26,123,179 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-09-29T16:21:00.000 | 0 | 2 | 0 | method for implementing regression tree on raster data - python | 26,104,434 | 0 | python,regression,weka,raster,landsat | It sounds like you are not using any spatial information to build your tree
(such as information on neighboring pixels), just reflectance. So, you can
apply your decision tree to the pixels as if the pixels were all in a
one-dimensional list or array.
A 600-branch tree for a 6000 point training data file seems like it... | I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do:
My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each re... | 0 | 1 | 709 |
0 | 38,604,808 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-26T00:00:00.000 | 0 | 2 | 0 | Way to compute the value of the loss function on data for an SGDClassifier? | 26,178,035 | 0 | python,machine-learning,scikit-learn | The above answer was too short, outdated and might be misleading.
Using the score method only gives accuracy (it's in BaseEstimator). If you want the loss function, you can either call the private function _get_loss_function (defined in BaseSGDClassifier), or access the BaseSGDClassifier.loss_functions cl... | I'm using an SGDClassifier in combination with the partial fit method to train with lots of data. I'd like to monitor when I've achieved an acceptable level of convergence, which means I'd like to know the loss every n iterations on some data (possibly training, possibly held-out, maybe both).
I know this information i... | 0 | 1 | 1,528 |
0 | 26,199,110 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-05T02:30:00.000 | 0 | 1 | 0 | Is there a ways to take out the artist and song from the array? | 26,199,088 | 0 | python,python-2.7 | Assuming this array is a string array of x strings, you can create a substring from the index of the second instance of '|' to the end of the string. | Suppose you have this type of array (Sonny Rollins|Who Cares?|Sonny Rollins And Friends|Jazz|
Various|, Westminster Philharmonic Orchestra conducted by Kenneth Alwyn|Curse of the Werewolf: Prelude|Horror!|Soundtrack|1996). Is there any possible way to only take out Sonny Rollins and Who cares from the array? | 1 | 1 | 27 |
0 | 29,718,799 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-10-06T07:04:00.000 | 0 | 1 | 0 | preclassified trained twitter comments for categorization | 26,211,308 | 0 | python,twitter,machine-learning,classification,nltk | The thing is that how do I even generate/create a training data for
such a huge data
I would suggest finding a training data set that could help you with the categories you are interested in. So let's say price related articles, you might want to find a training data set that is all about price related articles and ... | So I have some 1 million lines of twitter comments data in csv format. I need to classify them in certain categories like if somebody is talking about : "product longevity", "cheap/costly", "on sale/discount" etc.
As you can see I have multiple classes to classify these tweets data into.
The thing is that how do I ev... | 0 | 1 | 149 |
0 | 68,602,469 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2014-10-07T21:59:00.000 | 0 | 4 | 0 | Python: DBSCAN in 3 dimensional space | 26,246,015 | 0 | python,cluster-analysis,dbscan | Why not just flatten the data to 2 dimensions with PCA and use DBSCAN with only 2 dimensions? Seems easier than trying to custom build something else. | I have been searching around for an implementation of DBSCAN for 3 dimensional points without much luck. Does anyone know I library that handles this or has any experience with doing this? I am assuming that the DBSCAN algorithm can handle 3 dimensions, by having the e value be a radius metric and the distance between ... | 0 | 1 | 15,341 |
0 | 26,331,726 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-10-08T13:41:00.000 | 1 | 2 | 0 | Can lambdify return an array with dtype np.float128? | 26,258,420 | 0.099668 | python,numpy,sympy | If you need a lot of precision, you can try using SymPy floats, or mpmath directly (which is part of SymPy), which provides arbitrary precision. For example, sympy.Float('2.0', 100) creates a float of 2.0 with 100 digits of precision. You can use something like sympy.sin(2).evalf(100) to get 100 digits of sin(2) for in... | I am solving a large non-linear system of equations and I need a high degree of numerical precision. I am currently using sympy.lambdify to convert symbolic expressions for the system of equations and its Jacobian into vectorized functions that take ndarrays as inputs and return an ndarray as outputs.
By default, lambd... | 0 | 1 | 667 |
0 | 26,286,979 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-09T14:19:00.000 | 0 | 3 | 0 | How can I find the break frequencies/3dB points from a bandpass filter frequency sweep data in python? | 26,280,838 | 0 | python,math,signal-processing | Assuming that you've loaded multiple readings of the PSD from the signal analyzer, try averaging them before attempting to find the bandedges. If the signal isn't changing too dramatically, the averaging process might smooth away any peaks and valleys and noise within the passband, making it easier to find the edges. T... | The data that i have is stored in a 2D list where one column represents a frequency and the other column is its corresponding dB. I would like to programmatically identify the frequency of the 3db points on either end of the passband. I have two ideas on how to do this but they both have drawbacks.
Find maximum point ... | 0 | 1 | 1,113 |
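Locating the band edges from the (frequency, dB) pairs in the row above can be sketched with interpolation; the data here is synthetic, and the 3 dB points are measured relative to the passband peak:

```python
import numpy as np

# Synthetic (frequency, dB) sweep of a bandpass filter.
freq = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
db = np.array([-20.0, -3.0, 0.0, -3.0, -20.0])

peak = db.max()
target = peak - 3.0
i = db.argmax()

# Interpolate the -3 dB crossing on each side of the peak;
# np.interp needs its x-coordinates in increasing order.
low_edge = np.interp(target, db[:i + 1], freq[:i + 1])
high_edge = np.interp(target, db[i:][::-1], freq[i:][::-1])
print(low_edge, high_edge)  # 20.0 40.0
```

Averaging several sweeps before doing this, as the answer suggests, keeps in-band noise from producing spurious crossings.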
0 | 26,303,687 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-10T12:55:00.000 | 1 | 2 | 0 | Algorithm for matching objects | 26,299,978 | 0.099668 | python,algorithm,pattern-matching,cluster-analysis,data-mining | If your comparison works with "create a sum of all features and find those with the closest sum", there is a simple trick to get close objects:
Put all objects into an array
Calculate all the sums
Sort the array by sum.
If you take any index, the objects close to it will now have a close index as well. So to find th... | I have 1,000 objects, each object has 4 attribute lists: a list of words, images, audio files and video files.
I want to compare each object against:
a single object, Ox, from the 1,000.
every other object.
A comparison will be something like:
sum(words in common+ images in common+...).
I want an algorithm that will ... | 0 | 1 | 543 |
0 | 26,714,084 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-10T16:00:00.000 | 1 | 3 | 0 | Spotfire column title colors | 26,303,493 | 0.066568 | python,colors,crosstab,tibco,spotfire | One can color by category using Properties->Color->Add Rule, where you can see many conditions to apply to your visualization. | Spotfire 5.5
Hi, I am looking for a way to color code or group columns together in a Spotfire cross-table. I have three categories (nearest, any, all) and three columns associated with each category. Is there a way I can visually group these columns with their corresponding category.
Is there a way to change column he... | 0 | 1 | 3,241 |
0 | 26,310,892 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-11T03:41:00.000 | -1 | 1 | 0 | Half-integer indices | 26,310,822 | -0.197375 | python,indexing | You can define __getitem__ so that it takes arbitrary objects as indices (and floating-point numbers in particular). | I am working on a fluid dynamics simulation tool in Python. Traditionally (at least in my field), integer indices refer to the center of a cell. Some quantities are stored on the faces between cells, and in the literature are denoted by half-integer indices. In codes, however, these are shifted to integers to fit in... | 0 | 1 | 149 |
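A minimal sketch of the `__getitem__` idea: a wrapper that accepts half-integer floats and maps them onto an internal face array (the storage layout and class name here are assumptions, not from the question):

```python
class StaggeredGrid:
    """Cell-centered values at integer indices, face values at half-integers."""
    def __init__(self, centers, faces):
        self.centers = list(centers)  # values at i = 0, 1, 2, ...
        self.faces = list(faces)      # values at i = -0.5, 0.5, 1.5, ...

    def __getitem__(self, i):
        # __getitem__ may take arbitrary objects, so floats work fine.
        if float(i).is_integer():
            return self.centers[int(i)]
        if (float(i) * 2).is_integer():
            return self.faces[int(i + 0.5)]
        raise IndexError(f"index {i} is neither integer nor half-integer")

g = StaggeredGrid(centers=[10, 20, 30], faces=[1, 2, 3, 4])
print(g[1])     # 20  (cell center)
print(g[0.5])   # 2   (face between cells 0 and 1)
```

This keeps the literature's half-integer notation in the code while the underlying storage stays plain integer-indexed.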
0 | 26,340,459 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-13T12:17:00.000 | 0 | 3 | 0 | How can I subset a data frame based on dates, when my dates column is not the index in Python? | 26,339,828 | 0 | python,date,subset | Assuming you're using Pandas.
dfQ1 = df[(df.date > Qstartdate) & (df.date < Qenddate)] | I have a large dataset with a date column (which is not the index) with the following format %Y-%m-%d %H:%M:%S.
I would like to create quarterly subsets of this data frame i.e. the data frame dfQ1 would contain all rows where the date was between month [1 and 4], dfQ2 would contain all rows where the date was between m... | 0 | 1 | 1,077 |
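A runnable sketch of that boolean-mask approach (the column contents and quarter boundaries are illustrative); parsing the string column with `pd.to_datetime` first makes the comparisons reliable:

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2014-01-15 10:00:00", "2014-02-20 08:30:00", "2014-05-05 12:00:00"],
    "value": [1, 2, 3],
})
# Convert the string column so comparisons work on real timestamps.
df["date"] = pd.to_datetime(df["date"], format="%Y-%m-%d %H:%M:%S")

# Q1: months 1-3; the same mask pattern works for any quarter.
dfQ1 = df[(df["date"] >= "2014-01-01") & (df["date"] < "2014-04-01")]
print(len(dfQ1))  # 2
```

pandas also exposes `df["date"].dt.quarter`, which turns the quarter into a one-line group key instead of four hand-written masks.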
0 | 26,362,076 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-10-14T13:24:00.000 | 0 | 3 | 0 | Numpy's loadtxt(): OverflowError: Python int too large to convert to C long | 26,362,010 | 0 | python,numpy | You need to use a compound dtype, with a separate type per column. Or you can use np.genfromtxt without specifying any dtype, and it will be determined automatically, per each column, which may give you what you need with less effort (but perhaps slightly less performance and less error checking too). | I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error:
OverflowError: Python int too large to convert to C long
The code:
X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 )
The problem is that my matrix has only integers and I can't use a float becaus... | 0 | 1 | 2,553 |
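Both suggestions from the answers can be sketched with an in-memory file for illustration: a compound dtype (one type per column) for `loadtxt`, and `genfromtxt` with `dtype=None` to infer a type per column:

```python
import io
import numpy as np

data = "1 2.5\n3 4.5\n"

# Compound dtype: one (name, type) pair per column.
X = np.loadtxt(io.StringIO(data), dtype=[("a", np.int64), ("b", np.float64)])
print(X["a"])  # [1 3]

# Or let genfromtxt infer a separate type for each column.
Y = np.genfromtxt(io.StringIO(data), dtype=None, encoding=None)
print(Y.dtype.names)  # ('f0', 'f1')
```

Either way the result is a structured array, so the integer column keeps full 64-bit precision instead of being squeezed through a float.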
0 | 26,362,482 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-10-14T13:24:00.000 | 0 | 3 | 0 | Numpy's loadtxt(): OverflowError: Python int too large to convert to C long | 26,362,010 | 0 | python,numpy | Your number looks like it would fit in the uint64_t type, which is available if you have C99. | I'm trying to load a matrix from a file using numpy. When I use any dtype other than float I get this error:
OverflowError: Python int too large to convert to C long
The code:
X = np.loadtxt(feats_file_path, delimiter=' ', dtype=np.int64 )
The problem is that my matrix has only integers and I can't use a float becaus... | 0 | 1 | 2,553 |
0 | 26,368,952 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-10-14T14:51:00.000 | 1 | 1 | 0 | Testing against NumPy/SciPy sane version pairs | 26,363,853 | 1.2 | python,testing,numpy,scipy,integration-testing | This doesn't completely answer your question, but I think the policy of scipy release management since 0.11 or earlier has been to support all of the numpy versions from 1.5.1 up to the numpy version in development at the time of the scipy release. | Testing against NumPy/SciPy includes testing against several versions of them, since there is the need to support all versions since Numpy 1.6 and Scipy 0.11.
Testing all combinations would explode the build matrix in continuous integration (like travis-ci). I've searched the SciPy homepage for notes about version comp... | 0 | 1 | 69 |
0 | 26,393,748 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-10-15T22:33:00.000 | 0 | 1 | 0 | Why do we calculate the pseudo-inverse instead of the inverse | 26,393,254 | 0 | python,matrix,scipy | Short answer
A positive semi-definite matrix does not have to have full rank, and thus might not be invertible using the ordinary inverse.
Long answer
If cov does not have full rank, it has some eigenvalues equal to zero and its inverse is not defined (because the corresponding eigenvalues of the inverse would be infinitely large). Thus, i... | I was looking into the "scipy.stats.multivariate_normal" function; there they mentioned that they are using the pseudo-inverse and pseudo-determinant.
The covariance matrix cov must be a (symmetric) positive semi-definite matrix. The determinant and inverse of cov are computed as the pseudo-determinant and pseudo-inverse... | 0 | 1 | 239 |
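The point can be illustrated with numpy: on a rank-deficient covariance matrix `np.linalg.inv` fails outright, while the pseudo-inverse is still defined (the matrix here is a made-up example):

```python
import numpy as np

# Rank-1 (positive semi-definite) covariance matrix: one zero eigenvalue.
cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])

singular = False
try:
    np.linalg.inv(cov)
except np.linalg.LinAlgError:
    singular = True  # the ordinary inverse does not exist

# The pseudo-inverse simply drops the zero-eigenvalue direction.
pinv = np.linalg.pinv(cov)
print(singular, pinv[0, 0])  # True 0.25
```

Intuitively, `pinv` inverts the matrix on the subspace where it acts non-trivially and maps the degenerate directions to zero instead of infinity.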
0 | 26,565,108 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-10-18T17:10:00.000 | 1 | 2 | 0 | Sampling parts of a vector from gaussian mixture model | 26,442,403 | 0.099668 | python,numpy,random-sample,normal-distribution,mixture-model | Since for sampling only the relative proportion of the distribution matters, the scaling prefactor can be thrown away. For a diagonal covariance matrix, one can just use the covariance submatrix and mean subvector that have the dimensions of the missing data. For a covariance with off-diagonal elements, the mean and std dev of a sampling ... | I want to sample only some elements of a vector from a sum of gaussians that is given by their means and covariance matrices.
Specifically:
I'm imputing data using gaussian mixture model (GMM). I'm using the following procedure and sklearn:
impute with mean
get means and covariances with GMM (for example 5 components)... | 0 | 1 | 1,040 |
0 | 26,509,674 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-10-22T14:02:00.000 | 2 | 1 | 0 | How to convert numpy array into libsvm format | 26,509,319 | 1.2 | python,arrays,numpy,svm,libsvm | The svmlight format is tailored to classification/regression problems. Therefore, the array X is a matrix with as many rows as data points in your set, and as many columns as features. y is the vector of instance labels.
For example, suppose you have 1000 objects (images of bicycles and bananas, for example), featurize... | I have a numpy array for an image and am trying to dump it into the libsvm format of LABEL I0:V0 I1:V1 I2:V2..IN:VN. I see that scikit-learn has a dump_svmlight_file and would like to use that if possible since it's optimized and stable.
It takes parameters of X, y, and file output name. The values I'm thinking about ... | 0 | 1 | 2,922 |
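The row format itself is simple enough to sketch by hand; this illustrates the layout `scikit-learn`'s `dump_svmlight_file` writes, not its internals (the helper name is made up, and note that the svmlight convention is 1-based indices with zero values omitted):

```python
def to_svmlight(X, y, zero_based=False):
    """Format rows of X with labels y as svmlight lines: LABEL idx:value ..."""
    lines = []
    for label, row in zip(y, X):
        start = 0 if zero_based else 1
        feats = " ".join(f"{i}:{v}" for i, v in enumerate(row, start) if v != 0)
        lines.append(f"{label} {feats}")
    return "\n".join(lines)

print(to_svmlight([[0, 2, 3], [4, 0, 0]], [1, -1]))
# 1 2:2 3:3
# -1 1:4
```

For a real image matrix, flattening each image to one row (e.g. `img.reshape(1, -1)`) gives exactly this rows-by-features shape.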
0 | 29,172,738 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-10-23T12:24:00.000 | 0 | 2 | 0 | Python statsmodels.api logistic regression (Logit) | 26,528,019 | 0 | python,statistics,statsmodels,logistic-regression | If the response is on the unit interval interpreted as a probability, in addition to loss considerations, the other perspective which may help is looking at it as a Binomial outcome, as a count instead of a Bernoulli. In particular, in addition to the probabilistic response in your problem, is there any counterpart to... | So I'm trying to do a prediction using python's statsmodels.api to do logistic regression on a binary outcome. I'm using Logit as per the tutorials.
When I try to do a prediction on a test dataset, the output is in decimals between 0 and 1 for each of the records.
Shouldn't it be giving me zero and one? or do I have to... | 0 | 1 | 10,827 |
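What the asker is seeing is expected: a fitted Logit model's `predict` returns P(y=1), and thresholding (0.5 is the common but arbitrary cutoff) turns those probabilities into class labels. A sketch with plain numpy, without needing statsmodels installed:

```python
import numpy as np

# Hypothetical predicted probabilities, as a Logit model would return.
probs = np.array([0.12, 0.47, 0.51, 0.93])

# Threshold at 0.5 to obtain hard 0/1 class labels.
labels = (probs >= 0.5).astype(int)
print(labels)  # [0 0 1 1]
```

Keeping the probabilities around is usually preferable, since the cutoff can then be tuned to the costs of false positives versus false negatives.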
0 | 26,559,152 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-10-24T14:50:00.000 | 0 | 1 | 0 | efficient different sized list comparisons | 26,550,430 | 0 | python,algorithm,memory-efficient | Nothing is going to be superfast, and there's a lot of data there (half a million results, to start with), but the following should fit in your time and space budget on modern hardware.
If possible, start by sorting the lists by length, from longest to shortest. (I don't mean sort each list; the order of elements in t... | I wish to compare around 1000 lists of varying size. Each list might have thousands of items. I want to compare each pair of lists, so potentially around 500000 comparisons. Each comparison consists of counting how many of the smaller list exists in the larger list (if same size, pick either list).Ultimately I want to ... | 0 | 1 | 159 |
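The core comparison ("how many of the smaller list's items exist in the larger list") becomes cheap if each list is converted to a set once up front; a small sketch of that idea, with made-up data:

```python
from itertools import combinations

lists = {"a": [1, 2, 3, 4], "b": [2, 4, 6], "c": [1, 9]}

# Convert each list to a set once, so every pairwise comparison
# costs roughly O(len(smaller)) instead of a nested scan.
sets = {name: set(v) for name, v in lists.items()}

def overlap(x, y):
    small, large = sorted((sets[x], sets[y]), key=len)
    return len(small & large)

results = {(x, y): overlap(x, y) for x, y in combinations(lists, 2)}
print(results)  # {('a', 'b'): 2, ('a', 'c'): 1, ('b', 'c'): 0}
```

With 1000 lists that is still ~500,000 intersections, but each one is set-based, which is what keeps the whole job inside a reasonable time budget.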
0 | 26,591,932 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-10-27T16:06:00.000 | 1 | 3 | 0 | how to store duration in a pandas column in minutes:second format that allows arithmetic? | 26,591,805 | 0.066568 | python,pandas | If you don't want to convert to datetime but still want to do math with them, you'd most likely be best off converting them to seconds in a different column while retaining the string format, or creating a function that converts to string and applying that after any computations. | Currently I am storing duration in a pandas column using strings.
For example '12:05' stands for 12 minutes and 5 seconds.
I would like to convert this pandas column from string to a format that allows arithmetic, while retaining the MM:SS format.
I would like to avoid storing day, hour, dates, etc. | 0 | 1 | 173 |
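The suggested approach can be sketched as a pair of conversion helpers: keep the `MM:SS` strings for display and do the arithmetic in plain seconds (the helper names are made up):

```python
def mmss_to_seconds(s):
    """'12:05' -> 725 (12 minutes, 5 seconds)."""
    minutes, seconds = s.split(":")
    return int(minutes) * 60 + int(seconds)

def seconds_to_mmss(total):
    """725 -> '12:05'; the format stays MM:SS with zero-padded seconds."""
    return f"{total // 60}:{total % 60:02d}"

a, b = mmss_to_seconds("12:05"), mmss_to_seconds("01:30")
print(seconds_to_mmss(a + b))  # 13:35
```

In pandas these would typically be applied column-wise (e.g. `df["secs"] = df["dur"].map(mmss_to_seconds)`), with the string column regenerated only after the math is done.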
0 | 26,625,834 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-10-27T19:39:00.000 | 0 | 1 | 0 | Using numpy with PyDev | 26,595,519 | 0 | python,eclipse,numpy,pydev | I recommend either using the setup.py from the downloaded archive or downloading the "superpack" executable for Windows, if you work on Windows anyway.
In PyDev, I overcame problems with new libraries by using the autoconfig button. If that doesn't work, another solution could be deleting and reconfiguring the pyt... | Although I've been doing things with python by myself for a while now, I'm completely new to using python with external libraries. As a result, I seem to be having trouble getting numpy to work with PyDev.
Right now I'm using PyDev in Eclipse, so I first tried to go to My Project > Properties > PyDev - PYTHONPATH > Ext... | 0 | 1 | 2,141 |