GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 14,396,884 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-01-18T10:13:00.000 | 1 | 1 | 0 | Setting the SVM discriminant value in PyML | 14,396,632 | 1.2 | python,svm,pyml | Roundabout way of doing it below:
Use the result.getDecisionFunction() method and choose according to your own preference.
Returns a list of values like:
[-1.0000000000000213, -1.0000000000000053, -0.9999999999999893]
Better answers still appreciated. | I'm using PyML's SVM to classify reads, but would like to set the discriminant to a higher value than the default (which I assume is 0). How do I do it?
Ps. I'm using a linear kernel with the liblinear-optimizer if that matters. | 0 | 1 | 80 |
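The thresholding step the answer describes can be sketched without PyML at all: given the list of decision values that `result.getDecisionFunction()` is said to return, apply your own discriminant cutoff instead of the default 0. The values below are illustrative; PyML itself is not used.

```python
# Classify using a custom discriminant threshold instead of the default 0.
# The decision values mimic what PyML's result.getDecisionFunction()
# reportedly returns; the threshold choice is up to you.

def classify_with_threshold(decision_values, threshold=0.0):
    """Label +1 where the decision value exceeds the threshold, else -1."""
    return [1 if v > threshold else -1 for v in decision_values]

values = [-1.0000000000000213, 0.25, 1.7, -0.4]
print(classify_with_threshold(values))        # default threshold 0
print(classify_with_threshold(values, 0.5))   # stricter threshold
```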
0 | 15,023,264 | 0 | 1 | 0 | 0 | 2 | true | 8 | 2013-01-18T14:27:00.000 | 2 | 3 | 0 | Spyder default module import list | 14,400,993 | 1.2 | python,import,module,spyder | The startup script for Spyder is in site-packages/spyderlib/scientific_startup.py.
Carlos' answer would also work, but this is what I was looking for. | I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list?
In case it makes a difference, I'm using the Spyder configuration provided by the Py... | 0 | 1 | 15,729 |
0 | 14,401,134 | 0 | 1 | 0 | 0 | 2 | false | 8 | 2013-01-18T14:27:00.000 | -2 | 3 | 0 | Spyder default module import list | 14,400,993 | -0.132549 | python,import,module,spyder | If Spyder is executed as a python script by the python binary, then you should be able to simply edit the Spyder python sources and include the modules you need. You should look into how it is actually executed upon start. | I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list?
In case it makes a difference, I'm using the Spyder configuration provided by the Py... | 0 | 1 | 15,729 |
0 | 14,443,106 | 0 | 0 | 0 | 0 | 1 | true | 10 | 2013-01-21T03:21:00.000 | 10 | 3 | 0 | Can we load pandas DataFrame in .NET ironpython? | 14,432,059 | 1.2 | python,.net,pandas,ironpython,python.net | No, Pandas is pretty well tied to CPython. Like you said, your best bet is to do the analysis in CPython with Pandas and export the result to CSV. | Can we load a pandas DataFrame in .NET space using iron python? If not I am thinking of converting pandas df into a csv file and then reading in .net space. | 0 | 1 | 8,863 |
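The CSV round-trip the answer suggests is a one-liner in pandas; an in-memory buffer stands in for a real file path here, and the column names are made up for illustration.

```python
# Export a pandas DataFrame to CSV so it can be read from .NET.
import io
import pandas as pd

df = pd.DataFrame({"ticker": ["AAA", "BBB"], "price": [1.5, 2.5]})
buf = io.StringIO()
df.to_csv(buf, index=False)   # in practice: df.to_csv("out.csv", index=False)
print(buf.getvalue())
```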
0 | 35,618,939 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2013-01-22T21:11:00.000 | 0 | 2 | 0 | Is stemming used when gensim creates a dictionary for tf-idf model? | 14,468,078 | 0 | python,nlp,gensim | I was also struggling with the same case. To overcome i first stammed documents using NLTK and later processed it with gensim. Probably it can be a easier and handy way to perform your task. | I am using Gensim python toolkit to build tf-idf model for documents. So I need to create a dictionary for all documents first. However, I found Gensim does not use stemming before creating the dictionary and corpus. Am I right ? | 0 | 1 | 940 |
0 | 23,838,980 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2013-01-25T15:55:00.000 | -9 | 4 | 0 | What's the maximum size of a numpy array? | 14,525,344 | -1 | arrays,numpy,python-2.7,size,max | It is indeed related to the system's maximum address length; simply put, whether it is a 32-bit or a 64-bit system. Here is an explanation for these questions, originally from Mark Dickinson:
Short answer: the Python object overhead is killing you. In Python 2.x on a 64-bit machine, a list of strings consumes 48 bytes per list... | I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. Is there any way I can increase the maximum array size?
a=np.arange(2708000000)
ValueError Traceback (most recent call last)
ValueError: Maximum allo... | 0 | 1 | 82,332 |
0 | 14,525,604 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2013-01-25T15:55:00.000 | 19 | 4 | 0 | What's the maximum size of a numpy array? | 14,525,344 | 1 | arrays,numpy,python-2.7,size,max | You're trying to create an array with 2.7 billion entries. If you're running 64-bit numpy, at 8 bytes per entry, that would be 20 GB in all.
So almost certainly you just ran out of memory on your machine. There is no general maximum array size in numpy. | I'm trying to create a matrix containing 2 708 000 000 elements. When I try to create a numpy array of this size it gives me a value error. Is there any way I can increase the maximum array size?
a=np.arange(2708000000)
ValueError Traceback (most recent call last)
ValueError: Maximum allo... | 0 | 1 | 82,332 |
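The answer's "20 GB" figure is easy to verify with back-of-the-envelope arithmetic: 2.708 billion entries at 8 bytes each (a 64-bit dtype).

```python
# Memory footprint of np.arange(2708000000) with a 64-bit dtype.
n_elements = 2_708_000_000
bytes_per_element = 8            # int64 or float64
total_gib = n_elements * bytes_per_element / 2**30
print(f"{total_gib:.1f} GiB")    # roughly 20 GiB
```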
0 | 14,535,721 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-01-26T09:36:00.000 | 1 | 2 | 0 | How would I go about finding all possible permutations of a 4x4 matrix with static corner elements? | 14,535,650 | 0.099668 | python,math,matrix,permutation,itertools | Just pull the placed numbers out of the permutation set. Then insert them into their proper position in the generated permutations.
For your example you'd take out 1, 16, 4, 13. Permute on (2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15), for each permutation, insert 1, 16, 4, 13 where you have pre-selected to place them. | So far I have been using python to generate permutations of matrices for finding magic squares. So what I have been doing so far (for 3x3 matrices) is that I find all the possible permutations of the set {1,2,3,4,5,6,7,8,9} using itertools.permutations, store them as a list and do my calculations and print my results.... | 0 | 1 | 2,502 |
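The suggested approach can be sketched as follows. Placing 1, 4, 13 and 16 at the four corners (flat indices 0, 3, 12, 15 of the 4x4 grid) is an assumption for illustration; since there are 12! permutations of the free numbers, only the first grid is shown.

```python
# Permute only the 12 non-corner numbers and merge each permutation
# with the fixed corner values (hypothetical placement: 1, 4, 13, 16).
from itertools import permutations

CORNERS = {0: 1, 3: 4, 12: 13, 15: 16}
FREE = [n for n in range(1, 17) if n not in CORNERS.values()]

def build_grid(perm):
    """Merge one permutation of the free numbers with the fixed corners."""
    grid, it = [], iter(perm)
    for idx in range(16):
        grid.append(CORNERS[idx] if idx in CORNERS else next(it))
    return grid

# 12! grids exist in total; just show the first one here.
first = build_grid(next(iter(permutations(FREE))))
print(first)
```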
0 | 17,913,278 | 0 | 0 | 0 | 0 | 1 | false | 17 | 2013-01-26T18:15:00.000 | 3 | 3 | 0 | Pandas Drop Rows Outside of Time Range | 14,539,992 | 0.197375 | python,pandas | You can also do:
import datetime
import numpy as np
import pandas as pd

rng = pd.date_range('1/1/2000', periods=24, freq='H')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts.between_time(datetime.time(10), datetime.time(14))
Out[4]:
2000-01-01 10:00:00 -0.363420
2000-01-01 11:00:00 -0.979251
2000-01-01 12:00:00 -0.896648
2000-01-01 13:00:00 -0.051159
2000-01-01 ... | I am trying to go through every row in a DataFrame index and remove all rows that are not between a certain time.
I have been looking for solutions but none of them separate the Date from the Time, and all I want to do is drop the rows that are outside of a Time range. | 0 | 1 | 19,605 |
0 | 14,544,871 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-01-27T05:55:00.000 | 4 | 1 | 0 | find a value other than a root with fsolve in python's scipy | 14,544,838 | 0.664037 | python,scipy,root,solver | This is easy if you change your definition of f(x): e.g., if you want f(x) = 5, define a new function g(x) = f(x) - 5 and find the root of g(x) = 0. | I know how I can solve for a root in python using scipy.optimize.fsolve.
I have a function defined
f = lambda x: -1*numpy.exp(-x**2) and I want to solve for x setting the function to a certain nonzero value. For instance, I want to solve for x using f(x) = 5.
Is there a way to do this with fsolve or would I need to use ano... | 0 | 1 | 380 |
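The g(x) = f(x) - c trick works directly with `scipy.optimize.fsolve`. One caveat with the question's example: f(x) = -exp(-x**2) stays in (-1, 0), so the value 5 is never attained; the sketch below therefore uses a reachable target of -0.5.

```python
# Solve f(x) == target by finding the root of g(x) = f(x) - target.
import numpy as np
from scipy.optimize import fsolve

f = lambda x: -1 * np.exp(-x**2)
target = -0.5                      # reachable, unlike the question's 5
g = lambda x: f(x) - target

root = fsolve(g, x0=1.0)[0]
print(root)                        # about 0.8326, i.e. sqrt(ln 2)
```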
0 | 14,545,631 | 0 | 0 | 0 | 0 | 2 | true | 3 | 2013-01-27T08:11:00.000 | 4 | 3 | 0 | numpy vs native Python - most efficient way | 14,545,602 | 1.2 | python,performance,numpy | In general, it probably matters most (efficiency-wise) to avoid conversions between the two. If you're mostly using non-numpy functions on data, then they'll be internally operating using standard Python data types, and thus using numpy arrays would be inefficient due to the need to convert back and forth.
Similarly, i... | For a lot of functions, it is possible to use either native Python or numpy to proceed.
This is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. ... | 0 | 1 | 1,039 |
0 | 14,545,646 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2013-01-27T08:11:00.000 | 1 | 3 | 0 | numpy vs native Python - most efficient way | 14,545,602 | 0.066568 | python,performance,numpy | When there's a choice between working with NumPy arrays and numeric lists, the former are typically faster.
I don't quite understand the second question, so won't try to address it. | For a lot of functions, it is possible to use either native Python or numpy to proceed.
This is the case for math functions, that are available with Python native import math, but also with numpy methods. This is also the case when it comes to arrays, with narray from numpy and pythons list comprehensions, or tuples. ... | 0 | 1 | 1,039 |
0 | 14,552,819 | 0 | 1 | 1 | 0 | 1 | false | 13 | 2013-01-27T19:48:00.000 | 3 | 3 | 0 | Embed R code in python | 14,551,472 | 0.197375 | python,r | When I need to do R calculations, I usually write R scripts, and run them from Python using the subprocess module. The reason I chose to do this was because the version of R I had installed (2.16 I think) wasn't compatible with RPy at the time (which wanted 2.14).
So if you already have your R installation "just the wa... | I need to make computations in a python program, and I would prefer to make some of them in R. Is it possible to embed R code in python ? | 0 | 1 | 13,823 |
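The subprocess approach the answer describes can be sketched like this. The `Rscript` binary, script name and argument are assumptions, and the call is guarded so the sketch still runs on machines without R installed.

```python
# Run an R script from Python via subprocess, as the answer suggests.
import subprocess

def build_r_command(script_path, *args):
    """Assemble the Rscript invocation (hypothetical script/args)."""
    return ["Rscript", script_path, *args]

cmd = build_r_command("analysis.R", "input.csv")
try:
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(out.stdout)
except (FileNotFoundError, subprocess.CalledProcessError):
    print("R not available; command would have been:", cmd)
```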
0 | 14,756,113 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-01-28T10:00:00.000 | 0 | 2 | 0 | Can a neural network recognize a screen and replicate a finite set of actions? | 14,559,547 | 0 | python,neural-network,image-recognition,online-algorithm | This is not entirely correct.
A 3-layer feedforward MLP can theoretically replicate any CONTINUOUS function.
If there are discontinuities, then you need a 4th layer.
Since you are dealing with pixelated screens and such, you probably would need to consider a fourth layer.
Finally, if you are looking at circular shap... | I learned, that neural networks can replicate any function.
Normally the neural network is fed with a set of descriptors to its input neurons and then gives out a certain score at its output neuron. I want my neural network to recognize certain behaviours from a screen. Objects on the screen are already preprocessed a... | 0 | 1 | 888 |
0 | 14,575,243 | 0 | 0 | 0 | 0 | 3 | true | 26 | 2013-01-29T00:28:00.000 | 38 | 3 | 0 | Python statistics package: difference between statsmodel and scipy.stats | 14,573,728 | 1.2 | python,scipy,scikits,statsmodels | Statsmodels has scipy.stats as a dependency. Scipy.stats has all of the probability distributions and some statistical tests. It's more like library code in the vein of numpy and scipy. Statsmodels on the other hand provides statistical models with a formula framework similar to R and it works with pandas DataFrames. T... | I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.
One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.sta... | 0 | 1 | 16,292 |
0 | 14,574,087 | 0 | 0 | 0 | 0 | 3 | false | 26 | 2013-01-29T00:28:00.000 | -1 | 3 | 0 | Python statistics package: difference between statsmodel and scipy.stats | 14,573,728 | -0.066568 | python,scipy,scikits,statsmodels | I think THE statistics package is numpy/scipy. It works also great if you want to plot your data using matplotlib.
However, as far as I know, matplotlib doesn't work with Python 3.x yet. | I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.
One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.sta... | 0 | 1 | 16,292 |
0 | 14,575,672 | 0 | 0 | 0 | 0 | 3 | false | 26 | 2013-01-29T00:28:00.000 | 5 | 3 | 0 | Python statistics package: difference between statsmodel and scipy.stats | 14,573,728 | 0.321513 | python,scipy,scikits,statsmodels | I try to use pandas/statsmodels/scipy for my work on a day-to-day basis, but sometimes those packages come up a bit short (LOESS, anybody?). The problem with the RPy module is (last I checked, at least) that it wants a specific version of R that isn't current---my R installation is 2.16 (I think) and RPy wanted 2.14. S... | I need some advice on selecting statistics package for Python, I've done quite some search, but not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats.
One thing that I know is those with scikits namespace are specific "branches" of scipy, and what used to be scikits.sta... | 0 | 1 | 16,292 |
0 | 14,600,682 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2013-01-29T10:25:00.000 | 1 | 1 | 0 | How to grab matplotlib plot as html in ipython notebook? | 14,580,684 | 1.2 | python,pandas,matplotlib,jupyter-notebook | Ok, if you go that route, this answer stackoverflow.com/a/5314808/243434 on how to capture >matplotlib figures as inline PNGs may help – @crewbum
To prevent duplication of plots, try running with pylab disabled (double-check your config >files and the command line). – @crewbum
--> this last requires a restart of the n... | I have an IPython Notebook that is using Pandas to back-test a rule-based trading system.
I have a function that accepts various scalars and functions as parameters and outputs a stats pack as some tables and a couple of plots.
For automation, I want to be able to format this nicely into a "page" and then call the func... | 0 | 1 | 2,790 |
0 | 15,783,554 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2013-01-30T22:20:00.000 | -2 | 6 | 0 | Easy way to implement a Root Raised Cosine (RRC) filter using Python & Numpy | 14,614,966 | -0.066568 | python,numpy,scipy,signal-processing | SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter/convolve functions. | SciPy/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well. | 0 | 1 | 13,131 |
0 | 14,635,675 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2013-01-31T21:33:00.000 | 3 | 1 | 0 | Python vector transformation (normalize, scale, rotate etc.) | 14,635,549 | 1.2 | python,math,vector-graphics | No, the standard is numpy. I wouldn't think of it as overkill; think of it as a very well written and tested library, even if you do just need a small portion of it. All the basic vector & matrix operations are implemented efficiently (falling back to C and Fortran) which makes it fast and memory efficient. Don't make y... | I'm about to write my very own scaling, rotation, normalization functions in python. Is there a convenient way to avoid this? I found NumPy, but it kind of seems like overkill for my little 2D needs.
Are there basic vector operations available in the std python libs? | 0 | 1 | 1,480 |
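For the 2D case the answer recommends numpy for, the basic operations are only a few lines each; here is a minimal sketch of normalize and rotate.

```python
# Minimal 2D vector helpers built on numpy.
import numpy as np

def normalize(v):
    """Scale v to unit length."""
    return v / np.linalg.norm(v)

def rotate(v, theta):
    """Rotate v counter-clockwise by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ v

v = np.array([3.0, 4.0])
print(normalize(v))                              # [0.6 0.8]
print(rotate(np.array([1.0, 0.0]), np.pi / 2))   # approximately [0 1]
```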
0 | 20,688,782 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-01-31T21:42:00.000 | 2 | 1 | 0 | GAE MapReduce, How to write Multiple Outputs | 14,635,693 | 1.2 | python,google-app-engine,mapreduce | I don't think such functionality exists (yet?) in the GAE Mapreduce library.
Depending on the size of your dataset, and the type of output required, you can small-time-investment hack your way around it by co-opting the reducer as another output writer. For example, if one of the reducer outputs should go straight bac... | I have a data set which I do multiple mappings on.
Assuming that I have 3 key-value pairs from the reduce function, how do I modify the output such that I have 3 blobfiles, one for each of the key-value pairs?
Do let me know if I can clarify further. | 0 | 1 | 139 |
0 | 42,075,989 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2013-01-31T23:16:00.000 | 1 | 4 | 0 | A-star search in numpy or python | 14,636,918 | 0.049958 | python,numpy,a-star | No, there is no A* search in Numpy. | i tried searching stackoverflow for the tags [a-star] [and] [python] and [a-star] [and] [numpy], but nothing. i also googled it but whether due to the tokenizing or its existence, i got nothing.
it's not much harder than your coding-interview tree traversals to implement. but, it would be nice to have a correct efficie... | 0 | 1 | 15,316 |
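Since, as the answer says, it is not much harder than an interview-style traversal, here is a small standard-library A* sketch on a grid (0 = free cell, 1 = wall, Manhattan heuristic); the grid layout is made up for illustration.

```python
# Grid A* with heapq: returns the shortest path as a list of (row, col).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
print(path)   # detours around the wall column via row 2
```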
0 | 14,655,846 | 0 | 0 | 0 | 0 | 3 | true | 0 | 2013-02-01T21:55:00.000 | 0 | 3 | 0 | Is there a way to generate a random variate from a non-standard distribution without computing CDF? | 14,655,681 | 1.2 | c++,python,algorithm,random,montecarlo | Acceptance/Rejection:
Find a function that is always higher than the pdf. Generate two random variates: scale the first one to propose a value, and use the second to decide whether to accept or reject that choice. Rinse and repeat until you accept a value.
Sorry I can't be more specific, but I haven't done it for ... | I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) rando... | 0 | 1 | 374 |
0 | 14,657,373 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2013-02-01T21:55:00.000 | 0 | 3 | 0 | Is there a way to generate a random variate from a non-standard distribution without computing CDF? | 14,655,681 | 0 | c++,python,algorithm,random,montecarlo | Indeed acceptance/rejection is the way to go if you know analytically your pdf. Let's call it f(x). Find a pdf g(x) such that there exist a constant c, such that c.g(x) > f(x), and such that you know how to simulate a variable with pdf g(x) - For example, as you work with a distribution with a finite support, a uniform... | I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) rando... | 0 | 1 | 374 |
0 | 18,890,513 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2013-02-01T21:55:00.000 | 0 | 3 | 0 | Is there a way to generate a random variate from a non-standard distribution without computing CDF? | 14,655,681 | 0 | c++,python,algorithm,random,montecarlo | If acceptance rejection is also too inefficient you could also try some Markov Chain MC method, they generate a sequence of samples each one dependent on the previous one, so by skipping blocks of them one can subsample obtaining a more or less independent set. They only need the PDF, or even just a multiple of it. Usu... | I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) rando... | 0 | 1 | 374 |
0 | 14,665,480 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-02-02T19:03:00.000 | 0 | 2 | 0 | Taking data from a text file and putting it into a numpy array | 14,665,379 | 0 | python,numpy | numpy.loadtxt() is the function you are looking for. It returns a two-dimensional array. | I need some help taking data from a .txt file and putting it into an array. I have a very rudimentary understanding of Python, and I have read through the documentation cited in threads relevant to my problem, but after hours of attempting to do this I still have not been able to get anywhere. The data in my file looks... | 0 | 1 | 132 |
0 | 14,727,117 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2013-02-03T21:55:00.000 | 1 | 2 | 0 | Counting Cars in OpenCV + Python | 14,677,763 | 1.2 | python,video,image-processing,opencv,computer-vision | I guess you are detecting the cars in each frame and creating a new bounding box each time a car is detected. This would explain the many increments of your variable.
You have to find a way to figure out if the car detected in one frame is the same car from the frame before (if you had a car detected in the previous fr... | I've got this big/easy problem that I need to solve but I can't..
What I'm trying to do is to count cars on a highway, and I actually can detect the moving cars and put bounding boxes on them... but when I try to count them, I simply can't. I tried making a variable (nCars) and incrementing it every time the program creates a bo... | 0 | 1 | 1,748 |
0 | 14,678,095 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-02-03T21:55:00.000 | 0 | 2 | 0 | Counting Cars in OpenCV + Python | 14,677,763 | 0 | python,video,image-processing,opencv,computer-vision | You could use an SQLite database to store the cars' information. | I've got this big/easy problem that I need to solve but I can't..
What I'm trying to do is to count cars on a highway, and I actually can detect the moving cars and put bounding boxes on them... but when I try to count them, I simply can't. I tried making a variable (nCars) and incrementing it every time the program creates a bo... | 0 | 1 | 1,748 |
0 | 25,715,719 | 0 | 0 | 0 | 0 | 1 | false | 127 | 2013-02-04T13:59:00.000 | 14 | 13 | 0 | Adding meta-information/metadata to pandas DataFrame | 14,688,306 | 1 | python,pandas | Just ran into this issue myself. As of pandas 0.13, DataFrames have a _metadata attribute on them that does persist through functions that return new DataFrames. Also seems to survive serialization just fine (I've only tried json, but I imagine hdf is covered as well). | Is it possible to add some meta-information/metadata to a pandas DataFrame?
For example, the instrument's name used to measure the data, the instrument responsible, etc.
One workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row! | 0 | 1 | 59,446 |
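Besides the `_metadata` hook mentioned in the answer, newer pandas versions (1.0 and later) expose a `DataFrame.attrs` dict intended for exactly this kind of meta-information. The keys and values below are hypothetical.

```python
# Attach instrument metadata to a DataFrame without wasting a column.
import pandas as pd

df = pd.DataFrame({"reading": [1.2, 3.4]})
df.attrs["instrument"] = "spectrometer-01"   # hypothetical names
df.attrs["operator"] = "J. Doe"
print(df.attrs)
```

Note that `attrs` propagation through operations is still experimental, so check it survives whichever transformations you rely on.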
0 | 14,718,635 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2013-02-05T22:55:00.000 | 2 | 1 | 0 | Natural Language Processing - Similar to ngram | 14,718,543 | 0.379949 | python,nlp,nltk,n-gram | You might want to look into word sense disambiguation (WSD), it is the problem of determining which "sense" (meaning) of a word is activated by the use of the word in a particular context, a process which appears to be largely unconscious in people. | I'm currently working on a NLP project that is trying to differentiate between synonyms (received from Python's NLTK with WordNet) in a context. I've looked into a good deal of NLP concepts trying to find exactly what I want, and the closest thing I've found is n-grams, but its not quite a perfect fit.
Suppose I am tr... | 0 | 1 | 1,365 |
0 | 14,734,299 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2013-02-06T16:06:00.000 | 2 | 3 | 0 | python changes format numbers ndarray many digits | 14,733,471 | 0.132549 | python,numpy,string-formatting,multidimensional-array | Try numpy.set_printoptions() -- there you can e.g. specify the number of digits that are printed and suppress the scientific notation. For example, numpy.set_printoptions(precision=8,suppress=True) will print 8 digits and no "...e+xx". | I'm a beginner in python and easily get stuck and confused...
When I read a file which contains a table of numbers with many digits, it is read in as a numpy.ndarray
Python is changing the display of the numbers.
For example:
In the input file i have this number: 56143.0254154
and in the output file the number is writt... | 0 | 1 | 1,536 |
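The answer's suggestion, applied to the question's number, looks like this; the second array element is added just to show that small values also stay out of scientific notation.

```python
# Suppress scientific notation and control printed precision.
import numpy as np

np.set_printoptions(precision=8, suppress=True)
arr = np.array([56143.0254154, 0.00001234])
print(np.array2string(arr))   # fixed notation, no "...e+xx"
```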
0 | 14,754,539 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-02-07T03:13:00.000 | 2 | 5 | 0 | Interpolation Function | 14,742,893 | 0.07983 | python,matlab,pandas,gps | What you can do is use the interp1 function. It interpolates your y-values onto a new series of x-points.
For example if you have
x=[1 3 5 6 10 12]
y=[15 20 17 33 56 89]
This means if you want to fill in for x1=[1 2 3 4 5 6 7 ... 12], you will type
y1=interp1(x,y,x1) | This is probably a very easy question, but all the sources I have found on interpolation in Matlab are trying to correlate two values, all I wanted to benefit from is if I have data which is collected over an 8 hour period, but the time between data points is varying, how do I adjust it such that the time periods are e... | 0 | 1 | 705 |
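Since the question is tagged python/pandas, the numpy equivalent of the MATLAB `interp1` call above is `np.interp`, using the same sample data:

```python
# np.interp does linear interpolation like MATLAB's interp1.
import numpy as np

x = [1, 3, 5, 6, 10, 12]
y = [15, 20, 17, 33, 56, 89]
x1 = np.arange(1, 13)        # the denser grid 1, 2, ..., 12
y1 = np.interp(x1, x, y)
print(y1)                    # e.g. y at x=2 is 17.5
```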
0 | 14,827,656 | 1 | 0 | 0 | 0 | 2 | false | 2 | 2013-02-12T05:48:00.000 | 0 | 4 | 0 | Search in Large data set | 14,826,245 | 0 | python,data-structures | I'd give you a code sample if I better understood what your current data structures look like, but this sounds like a job for a pandas dataframe groupby (in case you don't feel like actually using a database as others have suggested). | I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user.
I tried traversing lists but it is computationally very expensive. I am also trying to do it by... | 0 | 1 | 177 |
0 | 14,826,472 | 1 | 0 | 0 | 0 | 2 | false | 2 | 2013-02-12T05:48:00.000 | 0 | 4 | 0 | Search in Large data set | 14,826,245 | 0 | python,data-structures | Can you do something like this?
I'm assuming the number of friends of a user is relatively small, and that the events attended by a particular user are far fewer than the total number of events.
So keep a boolean vector of attended events for each friend of the user.
Take dot products; the friends with the maximum values will be the friends who most...
I tried traversing lists but it is computationally very expensive. I am also trying to do it by... | 0 | 1 | 177 |
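A compact way to get the top-k co-attending friends without the full traversal is a single pass over the events with `collections.Counter`. The data layout (dicts of sets) and the names below are assumptions for illustration.

```python
# Top-k friends who attend events together with a given user.
from collections import Counter

friends = {"alice": {"bob", "carol", "dave"}}
event_attendees = {
    "e1": {"alice", "bob", "carol"},
    "e2": {"alice", "bob"},
    "e3": {"carol", "dave"},          # alice absent, must not count
}

def top_k_coattending(user, k):
    counts = Counter()
    for attendees in event_attendees.values():
        if user in attendees:
            counts.update(attendees & friends[user])
    return counts.most_common(k)

print(top_k_coattending("alice", 2))   # bob twice, carol once
```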
0 | 14,864,547 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2013-02-13T21:06:00.000 | 22 | 2 | 0 | sklearn logistic regression with unbalanced classes | 14,863,125 | 1 | python,scikit-learn,classification | Have you tried passing class_weight="auto" to your classifier? Not all classifiers in sklearn support this, but some do. Check the docstrings.
Also you can rebalance your dataset by randomly dropping negative examples and / or over-sampling positive examples (+ potentially adding some slight gaussian feature noise). | I'm solving a classification problem with sklearn's logistic regression in python.
My problem is a general/generic one. I have a dataset with two classes/result (positive/negative or 1/0), but the set is highly unbalanced. There are ~5% positives and ~95% negatives.
I know there are a number of ways to deal with an u... | 0 | 1 | 18,285 |
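The rebalancing idea from the answer (randomly dropping negative examples) can be sketched with plain numpy; the 5%/95% split mirrors the question, and the feature matrix is a placeholder. Note that in recent scikit-learn releases the `class_weight="auto"` option mentioned above is spelled `class_weight="balanced"`.

```python
# Undersample the majority class until both classes have equal counts.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([1] * 5 + [0] * 95)           # ~5% positives, like the question
X = np.arange(len(y)).reshape(-1, 1)       # placeholder features

pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
keep = np.concatenate([pos, neg])
X_bal, y_bal = X[keep], y[keep]
print(y_bal.mean())                        # now 0.5
```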
0 | 14,877,368 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-02-14T06:40:00.000 | 1 | 1 | 0 | customize the default toolbar icon images of a matplotlib graph | 14,869,145 | 1.2 | python,matplotlib | I suspect that exactly what you will have to do will depend on your GUI toolkit. The code that you want to look at is in matplotlib/lib/matplotlib/backends, and you want to find the class that sub-classes NavigationToolbar2 in whichever backend you are using. | I want to change the default icon images of a matplotlib plot.
Even when I replaced the image with one of the same name and size at the image location,
i.e. C:\Python27\Lib\site-packages\matplotlib\mpl-data\images\home.png
it still plots the graphs with the same default images.
If I need to change the code of the image... | 0 | 1 | 888 |
0 | 14,877,671 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-02-14T13:01:00.000 | 4 | 2 | 0 | How to Train Single-Object Recognition? | 14,875,450 | 0.379949 | python,language-agnostic,machine-learning,object-recognition,pybrain | First, a note regarding the classification method to use.
If you intend to use the image pixels themselves as features, neural network might be a fitting classification method. In that case, I think it might be a better idea to train the same network to distinguish between the various objects, rather than using a sepa... | I was thinking of doing a little project that involves recognizing simple two-dimensional objects using some kind of machine learning. I think it's better that I have each network devoted to recognizing only one type of object. So here are my two questions:
What kind of network should I use? The two I can think of tha... | 0 | 1 | 2,529 |
0 | 14,884,277 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-02-14T21:18:00.000 | 1 | 1 | 0 | Read Vector from Text File | 14,884,214 | 1.2 | python,numpy | Just use loadtxt and reshape (or ravel) the resulting array. | I have a text file with a bunch of number that contains newlines every 32 entries. I want to read this file a a column vector using Numpy. How can I use numpy.loadtxt and ignore the newlines such that the generated array is of size 1024x1 and not 32x32? | 0 | 1 | 174 |
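The loadtxt-then-reshape step the answer describes can be sketched like this; an in-memory string with six numbers wrapped over two lines stands in for the 1024-entry file.

```python
# Read whitespace-separated numbers that wrap across lines, then flatten
# them to a column vector (reshape(-1, 1)); ravel() would give shape (n,).
import io
import numpy as np

data = io.StringIO("1 2 3\n4 5 6\n")   # stand-in for the text file
vec = np.loadtxt(data).reshape(-1, 1)
print(vec.shape)                       # (6, 1), not (2, 3)
```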
0 | 14,900,251 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2013-02-15T16:35:00.000 | 2 | 1 | 0 | how to find the integral of a matrix exponential in python | 14,899,139 | 0.379949 | python,matrix,numpy,integration,exponential | Provided A has the right properties, you could transform it to the diagonal form A0 by calculating its eigenvectors and eigenvalues. In the diagonal form, the solution is sol = [exp(A0*b) - exp(A0*a)] * inv(A0), where A0 is the diagonal matrix with the eigenvalues and inv(A0) just contains the inverse of the eigenvalue... | I have a matrix of the form, say e^(Ax) where A is a square matrix. How can I integrate it from a given value a to another value bso that the output is a corresponding array? | 0 | 1 | 1,283 |
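For an invertible A, the closed form behind the answer is integral over [a, b] of exp(A t) dt = inv(A) @ (expm(A b) - expm(A a)), which can be written directly with scipy's matrix exponential, no explicit eigendecomposition needed. A diagonal test matrix makes the result easy to check by hand.

```python
# Integral of the matrix exponential via the closed form for invertible A.
import numpy as np
from scipy.linalg import expm

def integrate_expm(A, a, b):
    """inv(A) @ (expm(A*b) - expm(A*a)); valid when A is invertible."""
    return np.linalg.inv(A) @ (expm(A * b) - expm(A * a))

A = np.array([[1.0, 0.0], [0.0, 2.0]])   # diagonal, so easy to verify
res = integrate_expm(A, 0.0, 1.0)
print(res)                               # diag(e - 1, (e**2 - 1) / 2)
```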
0 | 14,966,265 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-02-18T22:31:00.000 | 2 | 2 | 0 | Does the EPD Free distribution use MKL? | 14,946,512 | 0.197375 | lapack,blas,enthought,intel-mkl,epd-python | The EPD Free 7.3 installers do not include MKL. The BLAS/LAPACK libraries which they use are ATLAS on Linux & Windows and Accelerate on OSX. | According to the Enthought website, the EPD Python distribution uses MKL for numpy and scipy. Does EPD Free also use MKL? If not does it use another library for BLAS/LAPACK? I am using EPD Free 7.3-2
Also, what library does the windows binary installer for numpy that can be found on scipy.org use? | 0 | 1 | 538 |
0 | 14,999,516 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2013-02-21T09:20:00.000 | 1 | 1 | 0 | error in Python gradient measurement | 14,998,497 | 1.2 | python,scipy | The standard error of a linear regression is the standard deviation of the series obtained by subtracting the fitted model from your data points (the residuals). It indicates how well your data points can be fitted by a linear model. | I need to fit a straight line to my data to find out if there is a gradient.
I am currently doing this with scipy.stats.linregress.
I'm a little confused though, because one of the outputs of linregress is the "standard error", but I'm not sure how linregress calculated this, as the uncertainty of your data points is n... | 0 | 1 | 340 |
0 | 15,011,126 | 0 | 0 | 0 | 0 | 1 | true | 8 | 2013-02-21T17:44:00.000 | 6 | 2 | 0 | Artificial life with neural networks | 15,008,875 | 1.2 | python,artificial-intelligence,neural-network,artificial-life | If the environment is benign enough (e.g. it's easy enough to find food) then just moving randomly may be a perfectly viable strategy and reproductive success may be far more influenced by luck than anything else. Also consider unintended consequences: e.g. if offspring is co-sited with its parent then both are immediat... | I am trying to build a simple evolution simulation of agents controlled by neural network. In the current version each agent has feed-forward neural net with one hidden layer. The environment contains fixed amount of food represented as a red dot. When an agent moves, he loses energy, and when he is near the food, he g... | 0 | 1 | 2,480 |
0 | 15,016,437 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-02-22T03:09:00.000 | 1 | 2 | 0 | Pandas: Attaching Descriptive Dict() to Hierarchical Index (i.e. CountryCode and CountryName) | 15,016,187 | 0.099668 | python,pandas | I think the simplest solution is to split this into two columns in your DataFrame, one for country_code and one for country_name (you could name them something else).
When you print or graph you can select which column is used. | Is there anyway to attach a descriptive version to an Index Column?
For Example, I use ISO3 CountryCode's to merge from different data sources
'AUS' -> Australia etc. This is very convenient for merging different data sources, but when I want to print the data I would like the description version (i.e. Australia). I am... | 0 | 1 | 102 |
0 | 15,020,070 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2013-02-22T06:54:00.000 | 3 | 3 | 0 | Python SciPy convolve vs fftconvolve | 15,018,526 | 0.197375 | python,scipy,fft,convolution | FFT fast convolution via the overlap-add or overlap save algorithms can be done in limited memory by using an FFT that is only a small multiple (such as 2X) larger than the impulse response. It breaks the long FFT up into properly overlapped shorter but zero-padded FFTs.
Even with the overlap overhead, O(NlogN) will b... | I know generally speaking FFT and multiplication is usually faster than direct convolve operation, when the array is relatively large. However, I'm convolving a very long signal (say 10 million points) with a very short response (say 1 thousand points). In this case the fftconvolve doesn't seem to make much sense, sinc... | 0 | 1 | 10,751 |
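The overlap-add bookkeeping that the answer describes can be sketched in plain Python. In a real implementation each block would be zero-padded and convolved via FFT; a direct convolution stands in here so only the splitting-and-summing logic is shown:

```python
def conv_direct(signal, kernel):
    """Plain O(len(signal) * len(kernel)) full convolution."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def conv_overlap_add(signal, kernel, block=4):
    """Convolve block by block, adding each block's overlapping tail into place."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), block):
        # stand-in for the per-block FFT convolution
        for j, v in enumerate(conv_direct(signal[start:start + block], kernel)):
            out[start + j] += v
    return out

sig = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ker = [1.0, -1.0, 2.0]
```

By linearity of convolution, the block-wise sum equals the full convolution, while memory per step stays proportional to the block size rather than the whole signal.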
0 | 15,038,477 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2013-02-22T10:05:00.000 | 15 | 3 | 0 | How to encode a categorical variable in sklearn? | 15,021,521 | 1.2 | python,machine-learning,scikit-learn | DictVectorizer is the recommended way to generate a one-hot encoding of categorical variables; you can use the sparse argument to create a sparse CSR matrix instead of a dense numpy array. I usually don't care about multicollinearity and I haven't noticed a problem with the approaches that I tend to use (i.e. LinearSVC... | I'm trying to use the car evaluation dataset from the UCI repository and I wonder whether there is a convenient way to binarize categorical variables in sklearn. One approach would be to use the DictVectorizer of LabelBinarizer but here I'm getting k different features whereas you should have just k-1 in order to avoid... | 0 | 1 | 22,439 |
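The k-vs-(k-1) point is easy to demonstrate without scikit-learn. This is a hand-rolled sketch (my own helper, not DictVectorizer's API) where drop_first=True turns the first category into an all-zeros baseline, avoiding the perfectly collinear extra column:

```python
def one_hot(values, drop_first=False):
    """Map a list of categorical values to 0/1 indicator rows.

    With drop_first=True the first (sorted) category becomes the
    all-zeros baseline, giving k-1 columns instead of k."""
    categories = sorted(set(values))
    columns = categories[1:] if drop_first else categories
    rows = [[1 if v == c else 0 for c in columns] for v in values]
    return rows, columns

rows, cols = one_hot(["low", "high", "med", "low"], drop_first=True)
```

Here "high" sorts first and becomes the baseline, so only "low" and "med" get indicator columns.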
1 | 15,036,718 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-02-23T03:25:00.000 | 0 | 3 | 0 | 2D Python list will have random results | 15,036,694 | 0 | python,list | Your board is getting multiple references to the same array.
You need to replace the * 10 with another list comprehension. | I have created a 10 by 10 game board. It is a 2D list, with another list of 2 inside. I used
board = [[['O', 'O']] * 10 for x in range(1, 11)]. So it will produce something like
['O', 'O'] ['O', 'O']...
['O', 'O'] ['O', 'O']...
Later on I want to set a single cell to have 'C' I use board.gameBoard[animal.y][animal.x]... | 0 | 1 | 168 |
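The aliasing bug and its fix fit in a few lines — with `* 10`, row 0 below initially holds ten references to one and the same inner list:

```python
# Buggy: the * 10 copies ten references to the SAME inner list.
board = [[['O', 'O']] * 10 for _ in range(10)]
board[0][3][0] = 'C'
# every cell in row 0 changed, not just column 3
assert all(cell[0] == 'C' for cell in board[0])

# Fixed: build a fresh inner list for every cell.
board = [[['O', 'O'] for _ in range(10)] for _ in range(10)]
board[0][3][0] = 'C'
assert board[0][3] == ['C', 'O'] and board[0][4] == ['O', 'O']
```

The rows themselves were never shared (the comprehension builds a fresh row each iteration); only the cells within a row were.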
0 | 15,042,390 | 0 | 1 | 0 | 0 | 1 | false | 8 | 2013-02-23T14:34:00.000 | 0 | 2 | 0 | How to efficiently compute the cosine similarity between millions of strings | 15,041,647 | 0 | java,python,algorithm,divide-and-conquer,cosine-similarity | Work with the transposed matrix. That is what Mahout does on Hadoop to do this kind of task fast (or just use Mahout).
Essentially, computing cosine similarity the naive way is bad, because you end up computing a lot of 0 * something. Instead, you'd better work in columns and leave out all the 0s there. | I need to compute the cosine similarity between strings in a list. For example, I have a list of over 10 million strings, and each string has to have its similarity determined against every other string in the list. What is the best algorithm I can use to efficiently and quickly do such a task? Is the divide and conquer alg... | 0 | 1 | 2,043 |
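The column-wise idea can be sketched as an inverted index over the non-zero terms. This is a toy illustration with made-up weights, not Mahout's implementation — dot products are accumulated only where both vectors are non-zero:

```python
import math
from collections import defaultdict

def build_index(docs):
    """docs: list of sparse vectors as {term: weight} dicts.
    Returns an inverted index: term -> [(doc_id, weight), ...]."""
    index = defaultdict(list)
    for doc_id, doc in enumerate(docs):
        for term, w in doc.items():
            index[term].append((doc_id, w))
    return index

def similarities(query, docs, index):
    """Cosine similarity of `query` against every doc sharing a term."""
    dots = defaultdict(float)
    for term, qw in query.items():
        for doc_id, dw in index.get(term, ()):
            dots[doc_id] += qw * dw  # only non-zero * non-zero pairs
    qnorm = math.sqrt(sum(w * w for w in query.values()))
    return {d: dot / (qnorm * math.sqrt(sum(w * w for w in docs[d].values())))
            for d, dot in dots.items()}

docs = [{"a": 1.0, "b": 1.0}, {"c": 1.0}]
sims = similarities({"a": 1.0}, docs, build_index(docs))
```

Documents that share no term with the query never even appear in the result, which is exactly the saving the answer describes.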
0 | 15,068,825 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-02-25T13:59:00.000 | 0 | 3 | 0 | representation of a number as multiplication of its factors | 15,068,698 | 0 | python | I know one...
If you're using python, you can use dictionaries to simplify the storage...
You'll have to check for every prime less than square root of the number.
Now, suppose p^k divides your number n, your task, I suppose is to find k.
Here's the method:
int c = 0; int temp = n; while(temp!=0) { temp /= p; c+= temp... | I want to represent a number as the product of its factors.The number of factors that are used to represent the number should be from 2 to number of prime factors of the same number(this i s the maximum possible number of factors for a number).
for example taking the number 24:
representation of the number as two facto... | 0 | 1 | 1,491 |
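One way to enumerate such representations is a recursive search over non-decreasing factors. This is my own sketch, assuming factors of at least 2 and excluding the trivial single-factor form [n]:

```python
def factorizations(n, smallest=2):
    """All ways to write n as a non-decreasing product of factors >= 2,
    excluding the trivial one-factor form [n]."""
    result = []
    f = smallest
    while f * f <= n:
        if n % f == 0:
            rest = n // f
            result.append([f, rest])
            # split the cofactor further, never below f to avoid duplicates
            result.extend([f] + tail for tail in factorizations(rest, f))
        f += 1
    return result

reps_of_24 = factorizations(24)
```

For 24 this yields the six representations 2·12, 3·8, 4·6, 2·2·6, 2·3·4 and 2·2·2·3.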
0 | 15,107,442 | 0 | 0 | 0 | 0 | 1 | false | 17 | 2013-02-25T16:48:00.000 | 5 | 3 | 0 | keep/slice specific columns in pandas | 15,072,005 | 0.321513 | python,pandas | If your column names have information that you can filter for, you could use df.filter(regex='name*').
I am using this to filter between my 189 data channels from a1_01 to b3_21 and it works fine. | I know about these column slice methods:
df2 = df[["col1", "col2", "col3"]] and df2 = df.ix[:,0:2]
but I'm wondering if there is a way to slice columns from the front/middle/end of a dataframe in the same slice without specifically listing each one.
For example, a dataframe df with columns: col1, col2, col3, col4, col5... | 0 | 1 | 28,966 |
0 | 15,079,557 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2013-02-26T00:15:00.000 | 4 | 2 | 0 | Linear time algorithm to compute cartesian product | 15,079,069 | 0.379949 | python,algorithm,cartesian-product | There are mn results; the minimum work you have to do is write each result to the output. So you cannot do better than O(mn). | I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an effi... | 0 | 1 | 1,149 |
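The counting argument can be made concrete: the output itself contains m·n pairs, so no algorithm can do less than O(mn) work. A generator at least keeps memory per item constant:

```python
def cartesian(a, b):
    """Yield all m*n pairs lazily; since the output has m*n items,
    total work is necessarily O(m*n), but memory stays O(1) per pair."""
    for x in a:
        for y in b:
            yield (x, y)

pairs = list(cartesian([1, 2, 3], "ab"))
```

Materializing the generator, as above, recovers the full O(mn) list; consuming it lazily avoids ever holding all pairs at once.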
0 | 20,166,514 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2013-02-26T00:15:00.000 | 0 | 2 | 0 | Linear time algorithm to compute cartesian product | 15,079,069 | 0 | python,algorithm,cartesian-product | The question that comes to my mind reading this is, "Linear with respect to what?" Remember that in mathematics, all variables must be defined to have meaning. Big-O notation is no exception. Simply saying an algorithm is O(n) is meaningless if n is not defined.
Assuming the question was meaningful, and not a mistak... | I was asked in an interview to come up with a solution with linear time for cartesian product. I did the iterative manner O(mn) and a recursive solution also which is also O(mn). But I could not reduce the complexity further. Does anyone have ideas on how this complexity can be improved? Also can anyone suggest an effi... | 0 | 1 | 1,149 |
0 | 15,090,586 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-02-26T10:56:00.000 | 0 | 1 | 0 | contourf result differs when switching axes | 15,087,303 | 0 | python,matplotlib,axes | The problem was the sampling. Although the arrays have the same size, the stepsize in the plot is not equal for x and y axis. | I am plotting a contourmap. When first plotting I noticed I had my axes wrong. So I switched the axes and noticed that the structure of both plots is different. On the first plot the axes and assignments are correct, but the structure is messy. On the second plot it is the other way around.
Since it's a square matrix I... | 0 | 1 | 50 |
0 | 15,111,407 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2013-02-27T11:42:00.000 | 7 | 4 | 0 | what is a reason to use ndarray instead of python array | 15,111,230 | 1 | python,numpy,multidimensional-array | There are at least two main reasons for using NumPy arrays:
NumPy arrays require less space than Python lists. So you can deal with more data in a NumPy array (in-memory) than you can with Python lists.
NumPy arrays have a vast library of functions and methods unavailable
to Python lists or Python arrays.
Yes, you ca... | I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into nd... | 0 | 1 | 1,532 |
0 | 15,111,278 | 0 | 0 | 0 | 0 | 3 | true | 5 | 2013-02-27T11:42:00.000 | 8 | 4 | 0 | what is a reason to use ndarray instead of python array | 15,111,230 | 1.2 | python,numpy,multidimensional-array | NumPy and Python arrays share the property of being efficiently stored in memory.
NumPy arrays can be added together, multiplied by a number, you can calculate, say, the sine of all their values in one function call, etc. As HYRY pointed out, they can also have more than one dimension. You cannot do this with Python ar... | I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into nd... | 0 | 1 | 1,532 |
0 | 53,073,528 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2013-02-27T11:42:00.000 | 1 | 4 | 0 | what is a reason to use ndarray instead of python array | 15,111,230 | 0.049958 | python,numpy,multidimensional-array | Another great advantage of using NumPy arrays over built-in lists is the fact that NumPy has a C API that allows native C and C++ code to access NumPy arrays directly. Hence, many Python libraries written in low-level languages like C are expecting you to work with NumPy arrays instead of Python lists.
Reference: Pytho... | I build a class with some iteration over coming data. The data are in an array form without use of numpy objects. On my code I often use .append to create another array. At some point I changed one of the big array 1000x2000 to numpy.array. Now I have an error after error. I started to convert all of the arrays into nd... | 0 | 1 | 1,532 |
0 | 15,143,804 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2013-02-28T18:45:00.000 | 6 | 2 | 0 | Is there any documentation on the interdependencies between packages in the scipy, numpy, pandas, scikit ecosystem? Python | 15,143,253 | 1 | python,numpy,scipy,pandas,scikit-learn | AFAIK, here is the dependency tree (numpy is a dependency of everything):
numpy
scipy
scikit-learn
pandas | Is there any documentation on the interdependencies and relationships between packages in the scipy, numpy, pandas, scikit ecosystem? | 0 | 1 | 198 |
0 | 15,155,284 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2013-03-01T09:44:00.000 | 3 | 1 | 0 | svm.sparse.SVC taking a lot of time to get trained | 15,154,690 | 1.2 | python,svm,scikit-learn | Try using sklearn.svm.LinearSVC. This also has a linear kernel, but the underlying implementation is liblinear, which is known to be faster. With that in mind, your data set isn't very small, so even this classifier might take a while.
Edit after first comment:
In that I think you have several options, neither of whic... | I am trying to train svm.sparse.SVC in scikit-learn. Right now the dimension of the feature vectors is around 0.7 million and the number of feature vectors being used for training is 20k. I am providing input using csr sparse matrices as only around 500 dimensions are non-zero in each feature vector. The code is runnin... | 0 | 1 | 1,802 |
0 | 29,629,579 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-03-01T12:25:00.000 | -1 | 2 | 0 | Where do I add a scale factor to the Essential Matrix to produce a real world translation value | 15,157,756 | -0.099668 | python,opencv,image-processing | I have the same problem. I think the monocular camera may need an object with known 3D coordinates. That may help. | I'm working with OpenCV and python and would like to obtain the real world translation between two cameras. I'm using a single calibrated camera which is moving. I've already worked through feature matching, calculation of F via RANSAC, and calculation of E. To get the translation between cameras, I think I can use: w,... | 0 | 1 | 726 |
0 | 15,181,340 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-03-02T17:45:00.000 | 0 | 3 | 0 | Alternative to support vector machine classifier in python? | 15,177,490 | 0 | python,opencv,machine-learning,scikit-learn,classification | If the images that belong to the same class are results of transformations of some starting image, you can increase your training set size by applying transformations to your labeled examples.
For example, if you are doing character recognition, affine or elastic transformations can be used. P.Simard in Best Practices for ...
My images are divided into 10 classes.
Unfortunately I need at least 100 images per class to use a support vector machine. Is there any alternative? | 0 | 1 | 1,430 |
0 | 15,200,968 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2013-03-04T11:32:00.000 | 8 | 2 | 0 | Calculate inverse of a function--Library | 15,200,560 | 1 | python,c,math | As has already been mentioned, not all functions are invertible. In some cases imposing additional constraints helps: think about the inverse of sin(x).
Once you are sure your function has a unique inverse, solve the equation f(x) = y. The solution gives you the inverse, y(x).
In python, look for nonlinear solvers from... | Is there any library available to have inverse of a function? To be more specific, given a function y=f(x) and domain, is there any library which can output x=f(y)? Sadly I cannot use matlab/mathematics in my application, looking for C/Python library.. | 0 | 1 | 36,620 |
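Where the function is monotone on an interval, a plain bisection already inverts it numerically without any solver library. A sketch under that monotonicity assumption (increasing on [lo, hi]):

```python
def invert(f, y, lo, hi, tol=1e-10):
    """Solve f(x) = y by bisection; f must be monotone increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid   # solution lies to the right of mid
        else:
            hi = mid   # solution lies at or left of mid
    return (lo + hi) / 2.0

# inverse of x**3 evaluated at y = 8 should be close to 2
x = invert(lambda t: t ** 3, 8.0, 0.0, 10.0)
```

For a decreasing function, flip the comparison; for non-monotone functions you must first restrict the domain, as the sin(x) example suggests.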
0 | 15,319,789 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-03-07T21:42:00.000 | 0 | 1 | 0 | creating arabic corpus | 15,282,336 | 0 | python,nlp,nltk,sentiment-analysis,rapidminer | Well, I think that rapidminer is very interesting and can handle this task. It contains several operators dealing with text mining. Also, it allows the creation of new operators with high fluency. | I'm doing sentiment analysis for the Arabic language. I want to create my own corpus; to do that, I collected 300 statuses from Facebook and classified them into positive and negative. Now I want to do the tokenization of these statuses in order to obtain a list of words, and then generate unigrams and bigrams, trigr... | 0 | 1 | 1,057 |
0 | 15,306,292 | 0 | 1 | 0 | 0 | 2 | false | 5 | 2013-03-09T01:45:00.000 | 1 | 4 | 0 | How to calculate all interleavings of two lists? | 15,306,231 | 0.049958 | python,algorithm | As suggested by @airza, the itertools module is your friend.
If you want to avoid using encapsulated magical goodness, my hint is to use recursion.
Start playing the process of generating the lists in your mind, and when you notice you're doing the same thing again, try to find the pattern. For example:
Take the first... | I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists.
Input: Two lists that do not have to be equal in size.
Output: All possible interleavings between the two lists that preserve the original list's order.
Examp... | 0 | 1 | 1,387 |
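The recursive pattern hinted at above: every valid merge starts either with the head of the first list or the head of the second, and the rest is a smaller instance of the same problem. A sketch (one possible formulation, not the only one):

```python
def interleavings(a, b):
    """All merges of a and b that preserve each list's internal order."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    # either a's head comes first...
    with_a = [[a[0]] + rest for rest in interleavings(a[1:], b)]
    # ...or b's head does
    with_b = [[b[0]] + rest for rest in interleavings(a, b[1:])]
    return with_a + with_b

merges = interleavings([1, 2], [9])
```

For lists of lengths m and n there are C(m+n, m) interleavings, so the output grows quickly; lists of length 2 and 1 give C(3, 1) = 3 merges.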
0 | 15,307,850 | 0 | 1 | 0 | 0 | 2 | false | 5 | 2013-03-09T01:45:00.000 | 0 | 4 | 0 | How to calculate all interleavings of two lists? | 15,306,231 | 0 | python,algorithm | You can try something a little closer to the metal and more elegant(in my opinion) iterating through different possible slices. Basically step through and iterate through all three arguments to the standard slice operation, removing anything added to the final list. Can post code snippet if you're interested. | I want to create a function that takes in two lists, the lists are not guaranteed to be of equal length, and returns all the interleavings between the two lists.
Input: Two lists that do not have to be equal in size.
Output: All possible interleavings between the two lists that preserve the original list's order.
Examp... | 0 | 1 | 1,387 |
0 | 15,329,656 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-03-11T00:01:00.000 | 0 | 3 | 0 | How can I efficiently get all divisors of X within a range if I have X's prime factorization? | 15,329,256 | 0 | c++,python,algorithm,primes,prime-factoring | As Malvolio was (indirectly) getting at, I personally wouldn't find a use for prime factorization if you want to find factors in a range. I would start at int t = (int)(sqrt(n)) and then decrement until 1. t is a factor, or 2. t or n/t has reached the range (a flag) and then (both) have left the range
Or if your r... | So I have algorithms (easily searchable on the net) for prime factorization and divisor acquisition but I don't know how to scale it to finding those divisors within a range. For example all divisors of 100 between 23 and 49 (arbitrary). But also something efficient so I can scale this to big numbers in larger ranges. ... | 0 | 1 | 1,052 |
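Using the prime factorization as the question actually asks, you can expand all divisors as products of prime powers and then filter the range. A sketch with my own function name:

```python
def divisors_in_range(factorization, lo, hi):
    """factorization: {prime: exponent} for X. Expand every divisor as a
    product of prime powers, then keep those inside [lo, hi]."""
    divisors = [1]
    for p, k in factorization.items():
        # multiply every divisor built so far by p^0 .. p^k
        divisors = [d * p ** e for d in divisors for e in range(k + 1)]
    return sorted(d for d in divisors if lo <= d <= hi)

# 100 = 2^2 * 5^2; its only divisor between 23 and 49 is 25
in_range = divisors_in_range({2: 2, 5: 2}, 23, 49)
```

The divisor count is the product of (exponent + 1) over all primes, so this scales with the number of divisors rather than the width of the range — attractive when the number has few divisors but the range is large.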
0 | 15,330,625 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2013-03-11T03:01:00.000 | 2 | 3 | 0 | NumPy array size issue | 15,330,521 | 1.2 | python,numpy,scipy | To answer your second question:
Tuples in Python are n-dimensional. That is you can have a 1-2-3-...-n tuple. Due to syntax, the way you represent a 1-dimensional tuple is ('element',) where the trailing comma is mandatory. If you have ('element') then this is just simply the expression inside the parenthesis. So (3) +... | I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ).
Why does this happen?
Also, what does it mean for a tuple to have its last element unspecified? That is, ... | 0 | 1 | 1,032 |
0 | 15,330,600 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2013-03-11T03:01:00.000 | 2 | 3 | 0 | NumPy array size issue | 15,330,521 | 0.132549 | python,numpy,scipy | A tuple like this: (3, ) means that it's a tuple with a single element (a single dimension, in this case). That's the correct syntax - with a trailing , because if it looked like this: (3) then Python would interpret it as a number surrounded by parenthesis, not a tuple.
It'd be useful to see the actual code, but I'm g... | I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ).
Why does this happen?
Also, what does it mean for a tuple to have its last element unspecified? That is, ... | 0 | 1 | 1,032 |
0 | 15,330,640 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-03-11T03:08:00.000 | 0 | 2 | 0 | Searching multiple words in many blocks of data | 15,330,568 | 0 | c++,python | Why not consider using multithreading to compute the result? Make an array with size equal to the number of blocks; each thread counts the matches for one block, then writes the result to the corresponding entry in the array. Later on you sort the array in decreasing order and you get the result. | I have to search about 100 words in blocks of data (20000 blocks approximately) and each block consists of about 20 words. The blocks should be returned in the decreasing order of the number of matches. The brute force technique is very cumbersome because you have to search for all the 100 words one by one and then com... | 0 | 1 | 77 |
0 | 15,370,151 | 0 | 0 | 0 | 0 | 3 | false | 6 | 2013-03-12T19:07:00.000 | 0 | 5 | 0 | python - saving numpy array to a file (smallest size possible) | 15,369,985 | 0 | python,numpy,scipy | If you don't mind installing additional packages (for both python and c++), you can use [BSON][1] (Binary JSON). | Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.
What I would like to do is find a way to accomplish this same task, changi... | 0 | 1 | 7,265 |
0 | 15,370,191 | 0 | 0 | 0 | 0 | 3 | false | 6 | 2013-03-12T19:07:00.000 | 1 | 5 | 0 | python - saving numpy array to a file (smallest size possible) | 15,369,985 | 0.039979 | python,numpy,scipy | numpy.ndarray.tofile and numpy.fromfile are useful for direct binary output/input from python. std::ostream::write and std::istream::read are useful for binary output/input in c++.
You should be careful about endianness if the data are transferred from one machine to another.
What I would like to do is find a way to accomplish this same task, changi... | 0 | 1 | 7,265 |
0 | 19,226,920 | 0 | 0 | 0 | 0 | 3 | false | 6 | 2013-03-12T19:07:00.000 | 1 | 5 | 0 | python - saving numpy array to a file (smallest size possible) | 15,369,985 | 0.039979 | python,numpy,scipy | Use an hdf5 file; they are really simple to use through h5py and you can set a compression flag. Note that hdf5 also has a c++ interface.
What I would like to do is find a way to accomplish this same task, changi... | 0 | 1 | 7,265 |
0 | 15,425,560 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-03-15T05:37:00.000 | 3 | 3 | 0 | Pandas: More Efficient .map() function or method? | 15,425,492 | 0.197375 | python,pandas | There isn't, but if you want to only apply to unique values, just do that yourself. Get mySeries.unique(), then use your function to pre-calculate the mapped alternatives for those unique values and create a dictionary with the resulting mappings. Then use pandas map with the dictionary. This should be about as fast... | I am using a rather large dataset of ~37 million data points that are hierarchically indexed into three categories country, productcode, year. The country variable (which is the countryname) is rather messy data consisting of items such as: 'Austral' which represents 'Australia'. I have built a simple guess_country() t... | 0 | 1 | 2,544 |
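The pre-calculation described above fits in a few lines; guess_country below is a hypothetical stand-in for the expensive cleanup function, and the call counter just demonstrates that it runs once per distinct value rather than once per row:

```python
def map_unique(values, func):
    """Call func once per distinct value, then map everything via a dict."""
    cache = {v: func(v) for v in set(values)}
    return [cache[v] for v in values]

calls = []
def guess_country(code):
    """Hypothetical stand-in for an expensive cleanup/guessing function."""
    calls.append(code)
    return code.upper()

cleaned = map_unique(["austral", "us", "austral", "us", "uk"], guess_country)
```

With 37 million rows but far fewer distinct country spellings, the expensive function runs only once per spelling; the final mapping is a cheap dict lookup, which is what pandas' map with a dict does as well.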
0 | 18,670,288 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-03-17T16:29:00.000 | 1 | 1 | 0 | Choosing the threshold values for hysteresis | 15,463,191 | 0.197375 | python,opencv,image-processing,computer-vision | Such image statistics as mean, std etc. are not sufficient to answer the question, and canny may not be the best approach; it all depends on characteristics of the image. To learn about those characteristics and approaches, you may google for a survey of image segmentation / edge detection methods. And this kind of pro... | I'm trying to choose the best parameters for the hysteresis phase in the canny function of OpenCV. I found some similar questions in stackoverflow but they didn't solve my problem. So far I've found that there are two main approaches:
Compute mean and standard deviation and set the thresholds as: lowT = mean - std, hi... | 0 | 1 | 2,035 |
0 | 15,743,100 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-03-18T07:06:00.000 | 2 | 2 | 0 | Ensamble methods with scikit-learn | 15,471,372 | 1.2 | python,machine-learning,scikit-learn | Do you just want to do majority voting? This is not implemented afaik. But as I said, you can just average the predict_proba scores. Or you can use LabelBinarizer of the predictions and average those. That would implement a voting scheme.
Even if you are not interested in the probabilities, averaging the predicted prob... | Is there any way to combine different classifiers into one in sklearn? I find sklearn.ensamble package. It contains different models, like AdaBoost and RandofForest, but they use decision trees under the hood and I want to use different methods, like SVM and Logistic regression. Is it possible with sklearn? | 0 | 1 | 694 |
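Both schemes — hard majority voting and probability averaging — can be sketched without scikit-learn. Plain nested lists stand in here for the outputs of predict and predict_proba:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of class labels per classifier."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

def average_proba(probas):
    """probas: one list of per-sample probability rows per classifier."""
    return [[sum(col) / len(col) for col in zip(*rows)]
            for rows in zip(*probas)]

# three classifiers, two samples
labels = majority_vote([["a", "b"], ["a", "a"], ["b", "b"]])
# two classifiers, one sample, two classes
avg = average_proba([[[0.2, 0.8]], [[0.4, 0.6]]])
```

Averaged probabilities can then be turned into labels with an argmax, which implements soft voting.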
0 | 15,513,160 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-03-19T23:12:00.000 | 0 | 2 | 0 | How should I divide a large (~50Gb) dataset into training, test, and validation sets? | 15,512,276 | 0 | python,numpy,dataset | You could assign a unique sequential number to each row, then choose a random sample of those numbers, then serially extract each relevant row to a new file. | I have a large dataset. It's currently in the form of uncompressed numpy array files that were created with numpy.array.tofile(). Each file is approximately 100000 rows of 363 floats each. There are 192 files totalling 52 Gb.
I'd like to separate a random fifth of this data into a test set, and a random fifth of that t... | 0 | 1 | 756 |
0 | 15,514,922 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2013-03-20T03:15:00.000 | 1 | 2 | 0 | Finding points in space closer than a certain value | 15,514,641 | 0.099668 | python,performance,algorithm,numpy,kdtree | The first thing that comes to my mind is:
If we calculate the distance between each two atoms in the set it will be O(N^2) operations. It is very slow.
What about introducing a static orthogonal grid with some cell size (for example close to the distance you are interested in) and then determining the atoms belonging ...
0 | 15,514,859 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2013-03-20T03:15:00.000 | 5 | 2 | 0 | Finding points in space closer than a certain value | 15,514,641 | 1.2 | python,performance,algorithm,numpy,kdtree | Great question! Here is my suggestion:
Divide each coordinate by your "epsilon" value of 0.1/0.2/whatever and round the result to an integer. This creates a "quotient space" of points where distance no longer needs to be determined using the distance formula, but simply by comparing the integer coordinates of each poin... | In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). I need this for a graphic application and this search should be very fast (~1/10 of a second fo... | 0 | 1 | 772 |
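The quotient-space idea taken one step further into a working neighbour search: points are bucketed by their integer grid cell, and only points in the same or an adjacent cell are ever compared. This is a sketch for the simple "closer than eps" case; the annulus variant of the question ([0.1, 0.2]) would add a lower-bound check on the distance:

```python
import math
from collections import defaultdict
from itertools import product

def close_pairs(points, eps):
    """All index pairs at distance < eps, via an eps-sized grid:
    only points in the same or an adjacent cell can be that close."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(math.floor(c / eps) for c in p)].append(i)
    pairs = set()
    for cell, members in grid.items():
        for offset in product((-1, 0, 1), repeat=len(cell)):
            neighbour = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in grid.get(neighbour, ()):
                    if i < j and math.dist(points[i], points[j]) < eps:
                        pairs.add((i, j))
    return pairs

pairs = close_pairs([(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (1.0, 1.0, 1.0)], 0.1)
```

For roughly uniformly distributed points this is close to linear in the point count, since each cell holds only a handful of candidates.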
0 | 15,540,786 | 0 | 1 | 0 | 0 | 2 | false | 5 | 2013-03-21T06:10:00.000 | 0 | 2 | 0 | Are there any downsides to using virtualenv for scientific python and machine learning? | 15,540,640 | 0 | python-2.7,virtualenv,scientific-computing | There's no performance overhead to using virtualenv. All it's doing is using different locations in the filesystem.
The only "overhead" is the time it takes to set it up. You'd need to install each package in your virtualenv (numpy, pandas, etc.) | I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook. | 0 | 1 | 1,022 |
0 | 15,540,795 | 0 | 1 | 0 | 0 | 2 | true | 5 | 2013-03-21T06:10:00.000 | 3 | 2 | 0 | Are there any downsides to using virtualenv for scientific python and machine learning? | 15,540,640 | 1.2 | python-2.7,virtualenv,scientific-computing | Virtualenv is the best and easiest way to keep some sort of order when it comes to dependencies. Python is really behind Ruby (bundler!) when it comes to dealing with installing and keeping track of modules. The best tool you have is virtualenv.
So I suggest you create a virtualenv directory for each of your applicatio... | I have received several recommendations to use virtualenv to clean up my python modules. I am concerned because it seems too good to be true. Has anyone found downside related to performance or memory issues in working with multicore settings, starcluster, numpy, scikit-learn, pandas, or iPython notebook. | 0 | 1 | 1,022 |
0 | 15,578,952 | 0 | 0 | 0 | 0 | 1 | true | 18 | 2013-03-22T16:35:00.000 | 24 | 1 | 0 | How do you improve matplotlib image quality? | 15,575,466 | 1.2 | python,graph,matplotlib | You can save the images in a vector format so that they will be scalable without quality loss. Such formats are PDF and EPS. Just change the extension to .pdf or .eps and matplotlib will write the correct image format. Remember LaTeX likes EPS and PDFLaTeX likes PDF images. Although most modern LaTeX executables are PD... | I am using a python program to produce some data, plotting the data using matplotlib.pyplot and then displaying the figure in a latex file.
I am currently saving the figure as a .png file but the image quality isn't great. I've tried changing the DPI in matplotlib.pyplot.figure(dpi=200) etc but this seems to make litt... | 0 | 1 | 18,876 |
0 | 15,643,468 | 0 | 0 | 0 | 1 | 2 | false | 2 | 2013-03-23T22:45:00.000 | 0 | 3 | 0 | How to use NZ Loader (Netezza Loader) through Python Script? | 15,592,980 | 0 | python,netezza | You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your unix/linux application server. There's a detailed process for setting it all up, caching the passwords, etc - the sysadmin should be able to do that too.
Once it is set up, you can create N... | I have a huge csv file which contains millions of records and I want to load it into a Netezza DB using a python script. I have tried a simple insert query but it is very, very slow.
Can you point me to an example python script or give me some idea of how I can do the same?
Thank you | 0 | 1 | 4,583 |
0 | 17,522,337 | 0 | 0 | 0 | 1 | 2 | false | 2 | 2013-03-23T22:45:00.000 | 1 | 3 | 0 | How to use NZ Loader (Netezza Loader) through Python Script? | 15,592,980 | 0.066568 | python,netezza | You can use nz_load4 to load the data. This is a support utility in /nz/support/contrib/bin.
The syntax is the same as nzload; by default nz_load4 will load the data using 4 threads, and you can go up to 32 threads by using the -tread option
for more details use nz_load4 -h
This will create the log files based on the number of thr... | I have a huge csv file which contains millions of records and I want to load it into a Netezza DB using a python script. I have tried a simple insert query but it is very, very slow.
Can you point me to an example python script or give me some idea of how I can do the same?
Thank you | 0 | 1 | 4,583 |
0 | 15,623,721 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2013-03-25T16:24:00.000 | 1 | 1 | 0 | Matplotlib fullscreen not working | 15,619,825 | 1.2 | python,numpy,matplotlib,scipy | SOLVED - My problem was that I was not up to the latest version of Matplotlib. I did the following steps to get fullscreen working in Matplotlib with Ubuntu 12.10.
Uninstalled matplotlib with sudo apt-get remove python-matplotlib
Installed build dependencies for matplotlib sudo apt-get build-dep python-matplotlib
Inst... | I am trying desperately to make a fullscreen plot in matplotlib on Ubuntu 12.10. I have tried everything I can find on the web. I need my plot to go completely fullscreen, not just maximized. Has anyone ever gotten this to work? If so, could you please share how?
Thanks. | 0 | 1 | 1,451 |
0 | 15,638,779 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2013-03-26T12:11:00.000 | 2 | 2 | 0 | Efficient way to do a rolling linear regression | 15,636,796 | 1.2 | python,matlab,numpy,linear-regression,rolling-computation | No, there is NO function that will do a rolling regression, returning all the statistics you wish, doing it efficiently.
That does not mean you can't write such a function. To do so would mean multiple calls to a tool like conv or filter. This is how a Savitzky-Golay tool would work, which DOES do most of what you want... | I have two vectors x and y, and I want to compute a rolling regression for them, e.g. on (x(1:4),y(1:4)), (x(2:5),y(2:5)), ...
Is there already a function for that? The best algorithm I have in mind for this is O(n), but applying a separate linear regression to every subarray would be O(n^2).
I'm working with Matlab ... | 0 | 1 | 4,368 |
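The O(n) approach the answer hints at can be built from cumulative sums. Below is a hedged sketch in Python/NumPy (the question is Matlab-flavored, but the same cumsum trick applies there); `rolling_slope` is a made-up name, and it returns only the OLS slope per window, not the full set of regression statistics.

```python
import numpy as np

def rolling_slope(x, y, w):
    """O(n) rolling OLS slope over every length-w window via cumulative sums."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def window_sums(a):
        # sum of `a` over each sliding window of length w, from a single cumsum
        c = np.concatenate(([0.0], np.cumsum(a)))
        return c[w:] - c[:-w]

    sx, sy = window_sums(x), window_sums(y)
    sxx, sxy = window_sums(x * x), window_sums(x * y)
    # standard OLS slope formula, evaluated once per window
    return (w * sxy - sx * sy) / (w * sxx - sx * sx)
```

The same cumulative-sum bookkeeping extends to the intercept and residual sums if the full statistics are needed.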
0 | 15,638,712 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2013-03-26T13:44:00.000 | 6 | 2 | 0 | calculating mean and standard deviation of the data which does not fit in memory using python | 15,638,612 | 1 | python,statistics | Sounds like a math question. For the mean, you know that you can take the mean of a chunk of data, and then take the mean of the means. If the chunks aren't the same size, you'll have to take a weighted average.
For the standard deviation, you'll have to calculate the variance first. I'd suggest doing this alongside t... | I have a lot of data stored on disk in large arrays. I can't load everything into memory at once.
How could one calculate the mean and the standard deviation? | 0 | 1 | 6,151 |
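The chunked mean-of-means idea can also be done in one streaming pass with Welford's algorithm, so nothing beyond a few scalars is ever held in memory. A minimal sketch (`streaming_mean_std` is a made-up helper; it returns the population standard deviation):

```python
import math

def streaming_mean_std(values):
    """Single-pass (Welford) mean and population standard deviation.

    `values` can be any iterable, e.g. numbers streamed chunk by chunk
    from disk, so the data never needs to fit in memory at once.
    """
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n          # running mean update
        m2 += delta * (x - mean)   # running sum of squared deviations
    return mean, math.sqrt(m2 / n)
```

For the sample standard deviation, divide `m2` by `n - 1` instead of `n`.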
0 | 15,791,843 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2013-04-03T14:41:00.000 | 0 | 2 | 0 | Grouping data by frequency | 15,790,467 | 0 | python,group-by,timestamp,time-series | There are several ways to approach this, but you're effectively "binning" on the times. I would approach it in a few steps:
You don't want to parse the time yourself with string manipulation; it will blow up in your face, trust me! Parse out the timestamp into a datetime object (Google should give you a pretty good ans... | I wrote code which generates the random numbers below, and I save them in a CSV which looks like below. I am trying to play around and learn the group-by function. I would like, for instance, to do the sum or average of those grouped by timestamp. I am new to Python and cannot find anywhere to start. Ultimately I would like t... | 0 | 1 | 584 |
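A minimal sketch of the binning idea from the answer, assuming the CSV timestamps look like `YYYY-MM-DD HH:MM:SS` (both the format string and the `sum_by_minute` helper are assumptions, not taken from the question):

```python
from collections import defaultdict
from datetime import datetime

def sum_by_minute(rows):
    """Parse each timestamp with datetime (no string slicing) and sum values per minute."""
    bins = defaultdict(float)
    for ts, value in rows:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")  # assumed CSV format
        bins[t.replace(second=0)] += value               # zero out seconds = minute bin
    return dict(bins)
```

The same pattern works for averages by also counting rows per bin, and pandas `groupby`/`resample` does this in one call once the column is parsed as datetimes.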
0 | 16,271,420 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-04-03T15:52:00.000 | 0 | 1 | 0 | openstreet maps: dynamically retrieve shp files in python | 15,792,073 | 0 | python,openstreetmap,shapefile,mapnik | Dynamic retrieval of data from shapefiles is not recommended for large applications. The best practice is to dump the shapefile into a database like Postgres (shp2pgsql), generate the map using mapnik, and tile it using TileCache. | I managed to install mapnik for Python and am able to render maps using a provided shp file. Is there a possibility to dynamically retrieve the shapefile for the map I want to render (given coordinates) from Python, or do I need to download the whole OSM files and import them into my own database?
Thanks. | 0 | 1 | 286 |
0 | 15,847,067 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2013-04-05T11:12:00.000 | 0 | 3 | 0 | Cannot seem to install pandas for python 2.7 on windows | 15,832,445 | 0 | python,windows,installation,pandas | After you have installed python check to see if the appropriate path variables are set by typing the following at the command line:
echo %PATH%
if you do not see something like:
C:\Python27;C:\Python27\Scripts
on the output (probably with lots of other paths) then type this:
set PATH=%PATH%;C:\Python27\;C:\Python2... | Sorry if this has been answered somewhere already, I couldn't find the answer.
I have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states "Python 2.7 is required, which was not found in... | 0 | 1 | 3,609 |
0 | 25,210,272 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2013-04-05T11:12:00.000 | 2 | 3 | 0 | Cannot seem to install pandas for python 2.7 on windows | 15,832,445 | 0.132549 | python,windows,installation,pandas | I faced the same issue. Here is what worked:
Changed the PATH to include C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts\;
Uninstalled the 64-bit numpy and pandas
Installed the 32-bit Windows Python 2.7 builds of numpy and pandas
Also had to install dateutil and pytz
pandas and numpy now import fine | Sorry if this has been answered somewhere already, I couldn't find the answer.
I have installed python 2.7.3 onto a windows 7 computer. I then downloaded the pandas-0.10.1.win-amd64-py2.7.exe and tried to install it. I have gotten past the first window, but then it states "Python 2.7 is required, which was not found in... | 0 | 1 | 3,609 |
0 | 16,186,805 | 0 | 0 | 0 | 0 | 1 | true | 19 | 2013-04-05T15:29:00.000 | 8 | 2 | 0 | Making pyplot.hist() first and last bins include outliers | 15,837,810 | 1.2 | python,numpy,matplotlib | No. Looking at matplotlib.axes.Axes.hist and the direct use of numpy.histogram I'm fairly confident in saying that there is no smarter solution than using clip (other than extending the bins that you histogram with).
I'd encourage you to look at the source of matplotlib.axes.Axes.hist (it's just Python code, though adm... | pyplot.hist() documentation specifies that when setting a range for a histogram "lower and upper outliers are ignored".
Is it possible to make the first and last bins of a histogram include all outliers without changing the width of the bin?
For example, let's say I want to look at the range 0-3 with 3 bins: 0-1, 1-2, ... | 0 | 1 | 7,794 |
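Following the `clip` suggestion in the answer, a short sketch with made-up data for the 0-3 range with 3 bins: clipping pulls outliers onto the range edges, so they are counted in the first and last bins without changing the bin width.

```python
import numpy as np

data = np.array([-1.5, 0.2, 0.7, 1.1, 2.2, 2.9, 7.0])  # made-up sample with outliers
clipped = np.clip(data, 0, 3)                          # -1.5 -> 0.0, 7.0 -> 3.0
counts, edges = np.histogram(clipped, bins=3, range=(0, 3))
# the low outlier lands in the first bin, the high outlier in the last bin
```

Passing `clipped` to `pyplot.hist` instead of `numpy.histogram` gives the same bar counts.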
0 | 15,887,123 | 0 | 0 | 0 | 0 | 3 | false | 3 | 2013-04-07T21:34:00.000 | 0 | 3 | 0 | Feature Selection in dataset containing both string and numerical values? | 15,868,108 | 0 | python,machine-learning,weka,rapidminer,feature-selection | Feature selection algorithms assign weights to different features based on their impact on the classification. To the best of my knowledge, the feature types do not make a difference when computing the weights. I suggest converting string features to numerical values based on their ASCII codes or any other technique. Then yo... | Hi, I have a big dataset which has both strings and numerical values
ex.
User name (str), handset (str), number of requests (int), number of downloads (int), ...
I have around 200 such columns.
Is there a way/algorithm which can handle both strings and integers during feature selection?
Or how should I approach this ... | 0 | 1 | 1,683 |
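As a concrete alternative to the ASCII-code idea above, integer label encoding is a simple way to make string columns numeric before feature selection (`label_encode` is a made-up helper, not from any library mentioned here):

```python
def label_encode(values):
    """Map each distinct string to a small integer code (simple label encoding)."""
    codes = {}
    # setdefault assigns the next unused code the first time a value appears
    return [codes.setdefault(v, len(codes)) for v in values]

handsets = ["nokia", "iphone", "nokia", "htc"]  # toy stand-in for a string column
encoded = label_encode(handsets)
```

Label codes impose an arbitrary ordering, so one-hot encoding is often safer for algorithms that treat features as magnitudes.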
0 | 17,920,216 | 0 | 0 | 0 | 0 | 3 | false | 3 | 2013-04-07T21:34:00.000 | 0 | 3 | 0 | Feature Selection in dataset containing both string and numerical values? | 15,868,108 | 0 | python,machine-learning,weka,rapidminer,feature-selection | I've used Weka feature selection, and although the attribute evaluator methods I've tried can't handle string attributes, you can temporarily remove them via Preprocess > Filter > Unsupervised > Attribute > RemoveType, then perform the feature selection and, later, include the strings again to do the classification. | Hi, I have a big dataset which has both strings and numerical values
ex.
User name (str), handset (str), number of requests (int), number of downloads (int), ...
I have around 200 such columns.
Is there a way/algorithm which can handle both strings and integers during feature selection?
Or how should I approach this ... | 0 | 1 | 1,683 |
0 | 16,003,658 | 0 | 0 | 0 | 0 | 3 | false | 3 | 2013-04-07T21:34:00.000 | 0 | 3 | 0 | Feature Selection in dataset containing both string and numerical values? | 15,868,108 | 0 | python,machine-learning,weka,rapidminer,feature-selection | There is a set of operators you could use in the Attribute Weighting group within RapidMiner, for example Weight By Correlation or Weight By Information Gain.
These will assess how much weight to give an attribute based on its relevance to the label (in this case the download flag). The resulting weights can then be ... | Hi, I have a big dataset which has both strings and numerical values
ex.
User name (str), handset (str), number of requests (int), number of downloads (int), ...
I have around 200 such columns.
Is there a way/algorithm which can handle both strings and integers during feature selection?
Or how should I approach this ... | 0 | 1 | 1,683 |
0 | 15,892,422 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2013-04-08T01:28:00.000 | 1 | 2 | 0 | Random Forest - Predict using less estimators | 15,869,919 | 0.099668 | python,limit,scikit-learn,prediction,random-forest | Once trained, you can access these via the "estimators_" attribute of the random forest object. | I've trained a Random Forest (regressor in this case) model using scikit-learn (Python), and I would like to plot the error rate on a validation set based on the number of estimators used. In other words, is there a way to predict using only a portion of the estimators in your RandomForestRegressor?
Using predict(X) wi... | 0 | 1 | 1,415 |
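A hedged sketch of the idea behind both answers: average only the first k members of the ensemble. Plain callables stand in here for the fitted trees that a real `RandomForestRegressor` exposes via its `estimators_` attribute, and `predict_with_subset` is a made-up helper:

```python
def predict_with_subset(estimators, X, k):
    """Average the per-sample predictions of the first k estimators only."""
    preds = [est(X) for est in estimators[:k]]      # k lists of per-sample predictions
    return [sum(col) / k for col in zip(*preds)]    # column-wise mean across estimators

# toy stand-ins for fitted trees; a real forest's trees come from model.estimators_
trees = [lambda X: [x + 1 for x in X],
         lambda X: [x + 3 for x in X],
         lambda X: [x + 100 for x in X]]
partial = predict_with_subset(trees, [0, 10], k=2)  # the third tree is ignored
```

Sweeping k from 1 to the forest size and scoring each partial prediction gives the error-vs-estimators curve the question asks about.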
0 | 52,104,659 | 0 | 0 | 0 | 0 | 2 | false | 340 | 2013-04-08T12:41:00.000 | -2 | 5 | 0 | What is the difference between ndarray and array in numpy? | 15,879,315 | -0.07983 | python,arrays,numpy,multidimensional-array,numpy-ndarray | I think with np.array() you can only create C-ordered arrays: even when you specify the order, np.isfortran() says False. But with np.ndarray(), when you specify the order, it creates the array based on the order provided. | What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code? | 0 | 1 | 148,479 |
0 | 15,879,428 | 0 | 0 | 0 | 0 | 2 | false | 340 | 2013-04-08T12:41:00.000 | 66 | 5 | 0 | What is the difference between ndarray and array in numpy? | 15,879,315 | 1 | python,arrays,numpy,multidimensional-array,numpy-ndarray | numpy.array is a function that returns a numpy.ndarray. There is no object type numpy.array. | What is the difference between ndarray and array in Numpy? And where can I find the implementations in the numpy source code? | 0 | 1 | 148,479 |
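A quick check of that distinction, assuming NumPy is installed: `np.array` is the factory function and `np.ndarray` is the class it returns (calling `np.ndarray` directly creates an uninitialized array and is shown here only to inspect the type):

```python
import numpy as np

a = np.array([1, 2, 3])     # np.array is a factory function...
b = np.ndarray(shape=(3,))  # ...while np.ndarray is the class itself
print(type(a), type(b))     # both objects are numpy.ndarray instances
```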
0 | 15,904,277 | 0 | 1 | 0 | 0 | 1 | true | 75 | 2013-04-09T14:02:00.000 | 135 | 2 | 0 | matplotlib bar graph black - how do I remove bar borders | 15,904,042 | 1.2 | python,graph,matplotlib,border | Set the edgecolor to "none": bar(..., edgecolor = "none") | I'm using pyplot.bar but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together so that all you see is the borders (black). Is there a way to remove the bar borders so that I can see th... | 0 | 1 | 72,444 |
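A minimal sketch of that fix, using the headless Agg backend so it runs without a display; with hundreds of thin bars the black edges would otherwise dominate the fill color:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# many thin bars: without edgecolor="none" the black borders hide the fill
bars = ax.bar(range(200), range(200), width=1.0, edgecolor="none")
fig.savefig("bars.png")
```

Setting `linewidth=0` is an equivalent way to drop the borders.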
0 | 15,957,090 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2013-04-09T15:40:00.000 | 1 | 2 | 0 | how does ImageFilter in PIL normalize the pixel values between 0 and 255 after filtering with Kernel or mask | 15,906,368 | 0.099668 | image,image-processing,python-2.7,python-imaging-library | The above answer by Mark states his theory regarding what happens when a zero-summing kernel is used with a scale argument of 0 or None, or when it is not passed at all. Now, talking about how PIL handles calculated pixel values that fall outside the [0, 255] range after applying the kernel, scale, and offset: my theory about how it normalizes ... | How does ImageFilter in PIL normalize the pixel values (not the kernel) between 0 and 255 after filtering with a kernel or mask? (Especially a zero-summing kernel like (-1,-1,-1,0,0,0,1,1,1).)
my code was like:
import Image
import ImageFilter
Horiz = ImageFilter.Kernel((3, 3), (-1,-2,-1,0,0,0,1,2,1), scale=None, offset=0) #... | 0 | 1 | 1,691 |
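My reading of the PIL docs, sketched in plain Python for a single 3x3 grayscale neighborhood: the result is `sum(kernel * pixels) / scale + offset`, with `scale` defaulting to the sum of the kernel weights, and values outside [0, 255] simply clamped rather than renormalized. The `filtered_pixel` helper and the zero-sum guard are assumptions of this sketch, not PIL's actual source.

```python
def filtered_pixel(neighborhood, kernel, scale=None, offset=0):
    """Apply a 3x3 kernel to one 3x3 grayscale neighborhood, PIL-style.

    If scale is None it defaults to the sum of the kernel weights
    (treated as 1 here when that sum is zero); the result is clamped
    to [0, 255], not rescaled.
    """
    if scale is None:
        scale = sum(kernel) or 1
    value = sum(k * p for k, p in zip(kernel, neighborhood)) / scale + offset
    return min(255, max(0, int(round(value))))

horiz = (-1, -2, -1, 0, 0, 0, 1, 2, 1)  # the zero-summing kernel from the question
flat = [100] * 9                        # uniform patch: response is 0, clamped at 0
```

On a uniform patch the horizontal-edge kernel sums to zero, so the output pixel is 0; a strong bottom edge overshoots 255 and clamps there.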
0 | 15,915,785 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2013-04-10T01:08:00.000 | 0 | 2 | 0 | Creating many arrays at once | 15,915,255 | 0 | python,arrays,os.walk | Python's lists are dynamic; you can change their length on the fly, so just store them in a list. Or, if you want to reference them by name instead of number, use a dictionary, whose size can also change on the fly. | I am currently working with hundreds of files, all of which I want to read in and view as a numpy array. Right now I am using os.walk to pull all the files from a directory. I have a for loop that goes through the directory and will then create the array, but it is not stored anywhere. Is there a way to create arrays "... | 0 | 1 | 57 |
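A sketch of the dictionary approach from the answer: the container grows as `os.walk` finds files, and each loaded array can later be referenced by path. The `default_load` parser is a stand-in; swap in `numpy.loadtxt` or whatever actually builds the arrays.

```python
import os

def default_load(path):
    # stand-in parser: whitespace-separated tokens (replace with numpy.loadtxt etc.)
    with open(path) as f:
        return f.read().split()

def collect_arrays(top, load=default_load):
    """Walk `top` and keep one loaded 'array' per file, keyed by full path."""
    arrays = {}
    for root, _dirs, files in os.walk(top):
        for name in files:
            path = os.path.join(root, name)
            arrays[path] = load(path)  # dict grows dynamically, one entry per file
    return arrays
```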
0 | 15,925,696 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2013-04-10T11:03:00.000 | 0 | 1 | 0 | using python for opencv | 15,924,060 | 0 | python,opencv | You should have a look at Boost.Python. This might help you bind the C++ functions you need to Python. | There are many functions in OpenCV 2.4 not available using Python.
Please advise me how to convert the C++ functions so that I can use them in Python 2.7.
Thanks in advance. | 0 | 1 | 61 |
0 | 15,949,294 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2013-04-10T23:02:00.000 | 5 | 1 | 0 | How to Extend Scipy Sparse Matrix returned by sklearn TfIdfVectorizer to hold more features | 15,938,025 | 1.2 | python-2.7,scipy,sparse-matrix,scikit-learn | I think the easiest would be to create a new sparse matrix with your custom features and then use scipy.sparse.hstack to stack the features.
You might also find the "FeatureUnion" from the pipeline module helpful. | I am working on a text classification problem using scikit-learn classifiers and a text feature extractor, particularly the TfidfVectorizer class.
The problem is that I have two kinds of features, the first are captured by the n-grams obtained from TfidfVectorizer and the other are domain specific features that I extract fro... | 0 | 1 | 692 |
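A minimal sketch of the hstack suggestion, with tiny stand-in matrices in place of the real TfidfVectorizer output and the custom domain features:

```python
import numpy as np
from scipy import sparse

# stand-ins: 2 documents x 2 tf-idf features, plus 1 custom feature per document
tfidf = sparse.csr_matrix(np.array([[0.1, 0.0],
                                    [0.0, 0.3]]))
custom = sparse.csr_matrix(np.array([[1.0],
                                     [2.0]]))
combined = sparse.hstack([tfidf, custom]).tocsr()  # still sparse, now 2 x 3
```

Both blocks must have the same number of rows (one per document); the result feeds straight into any scikit-learn estimator that accepts sparse input.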