| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1,696,133 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2009-11-08T10:29:00.000 | 2 | 5 | 0 | Does WordNet have "levels"? (NLP) | 1,695,971 | 0.07983 | python,text,nlp,words,wordnet | In order to get levels, you need to predefine the content of each level. An ontology often defines these as the immediate IS_A children of a specific concept, but if that is absent, you need to develop such a method yourself. The next step is to put a priority on each concept, in case you want to present only one ca... | For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"...the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL. That is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain leve... | 0 | 1 | 2,585 |
0 | 1,717,952 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2009-11-08T10:29:00.000 | 0 | 5 | 0 | Does WordNet have "levels"? (NLP) | 1,695,971 | 0 | python,text,nlp,words,wordnet | WordNet's hypernym tree ends with a single root synset for the word "entity". If you are using WordNet's C library, then you can get a whole recursive structure for a synset's ancestors using traceptrs_ds, and you can get the whole synset tree by recursively following nextss and ptrlst pointers until you hit null point... | For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"...the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL. That is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain leve... | 0 | 1 | 2,585 |
0 | 1,698,380 | 0 | 0 | 0 | 0 | 4 | false | 6 | 2009-11-08T10:29:00.000 | 6 | 5 | 0 | Does WordNet have "levels"? (NLP) | 1,695,971 | 1 | python,text,nlp,words,wordnet | [Please credit Pete Kirkham; he first came up with the reference to SUMO, which may well answer the question asked by Alex, the OP] (I'm just providing a complement of information here; I started in a comment field but soon ran out of space and layout capabilities...) Alex: Most of SUMO is science or engineering? It does no... | For example... Chicken is an animal. Burrito is a food. WordNet allows you to do "is-a"...the hierarchy feature. However, how do I know when to stop travelling up the tree? I want a LEVEL. That is consistent. For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain leve... | 0 | 1 | 2,585 |
0 | 35,500,598 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2009-11-08T17:54:00.000 | 1 | 11 | 0 | Algorithm for solving Sudoku | 1,697,334 | 0.01818 | python,algorithm,sudoku | Not gonna write full code, but I did a sudoku solver a long time ago. I found that it didn't always solve it (the puzzles people get in a newspaper are incomplete!), but now I think I know how to do it. Setup: for each square, have a set of flags for each number showing the allowed numbers. Crossing out: just li... | I want to write code in Python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose? I read somewhere on the net about an algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes. From the row and column of k... | 0 | 1 | 147,751 |
0 | 1,697,407 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2009-11-08T17:54:00.000 | 5 | 11 | 0 | Algorithm for solving Sudoku | 1,697,334 | 0.090659 | python,algorithm,sudoku | I wrote a simple program that solved the easy ones. It took its input from a file, which was just a matrix with spaces and numbers. The data structure to solve it was just a 9 by 9 matrix of bit masks. The bit mask would specify which numbers were still possible in a certain position. Filling in the numbers from the fil... | I want to write code in Python to solve a sudoku puzzle. Do you guys have any idea about a good algorithm for this purpose? I read somewhere on the net about an algorithm which solves it by filling the whole box with all possible numbers, then inserts known values into the corresponding boxes. From the row and column of k... | 0 | 1 | 147,751 |
0 | 1,698,110 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2009-11-08T21:44:00.000 | 4 | 3 | 0 | Neural Networks in Python without using any readymade libraries...i.e., from first principles..help! | 1,698,017 | 0.26052 | python,scipy,neural-network | If you're familiar with Matlab, check out the excellent Python libraries numpy, scipy, and matplotlib. Together, they provide the most commonly used subset of Matlab functions. | I am trying to learn programming in python and am also working against a deadline for setting up a neural network which looks like it's going to feature multidirectional associative memory and recurrent connections among other things. While the mathematics for all these things can be accessed from various texts and sou... | 0 | 1 | 4,234 |
0 | 1,705,913 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2009-11-10T05:33:00.000 | 0 | 11 | 0 | Finding cycle of 3 nodes ( or triangles) in a graph | 1,705,824 | 0 | python,graph,geometry,cycle | Do you need to find 'all' of the 'triangles', or just 'some'/'any'? Or perhaps you just need to test whether a particular node is part of a triangle? The test is simple - given a node A, are there any two connected nodes B & C that are also directly connected? If you need to find all of the triangles - specifically, al... | I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, using a simple iterative solution (multiple "for" loops) is not very efficient. I am using Python for my programming; if there is some inbuilt module... | 0 | 1 | 18,355 |
0 | 1,705,866 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2009-11-10T05:33:00.000 | 1 | 11 | 0 | Finding cycle of 3 nodes ( or triangles) in a graph | 1,705,824 | 0.01818 | python,graph,geometry,cycle | Even though it isn't efficient, you may want to implement a solution, so use the loops. Write a test so you can get an idea as to how long it takes. Then, as you try new approaches you can do two things: 1) Make certain that the answer remains the same. 2) See what the improvement is. Having a faster algorithm that mi... | I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, using a simple iterative solution (multiple "for" loops) is not very efficient. I am using Python for my programming; if there is some inbuilt module... | 0 | 1 | 18,355 |
0 | 50,057,490 | 0 | 0 | 0 | 0 | 1 | false | 76 | 2009-11-10T12:57:00.000 | 3 | 13 | 0 | Call Python function from MATLAB | 1,707,780 | 0.046121 | python,matlab,language-interoperability | Like Daniel said, you can run Python commands directly from Matlab using the py. command. To run any of the libraries, you just have to make sure Matlab is running the Python environment where you installed the libraries. On a Mac: open a new terminal window; type: which python (to find out where the default version of ... | I need to call a Python function from MATLAB. How can I do this? | 0 | 1 | 132,733 |
0 | 1,730,684 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2009-11-13T08:43:00.000 | 1 | 3 | 0 | Just Curious about Python+Numpy to Realtime Gesture Recognition | 1,727,950 | 0.066568 | python,c,numpy,gesture-recognition | I think the answer depends on three things: how well you code in Matlab, how well you code in Python/Numpy, and your algorithm. Both Matlab and Python can be fast for number crunching if you're diligent about vectorizing everything and using library calls. If your Matlab code is already very good I would be surprised ... | I just finished a lab meeting with my advisor. The previous code is written in Matlab and runs in offline mode, not realtime mode, so I decided to convert it to Python+Numpy (an offline version). But after the lab meeting, my advisor raised an issue about the speed of realtime recognition, so I have doubts about the speed of Python+Numpy to do t... | 0 | 1 | 914 |
0 | 1,777,708 | 0 | 0 | 0 | 0 | 1 | false | 18 | 2009-11-21T18:24:00.000 | 13 | 8 | 0 | replacing Matlab with python | 1,776,290 | 1 | python,matlab | I've been programming with Matlab for about 15 years, and with Python for about 10. It usually breaks down this way: if you can satisfy the following conditions: 1. You primarily use matrices and matrix operations 2. You have the money for a Matlab license 3. You work on a platform that mathworks supports T... | I am an engineering student and I have to do a lot of numerical processing, plots, simulations, etc. The tool that I use currently is Matlab. I use it on my university computers for most of my assignments. However, I want to know what free options are available. I have done some research and many have said that pyth... | 0 | 1 | 15,393 |
0 | 1,816,714 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2009-11-23T15:03:00.000 | 0 | 5 | 0 | Any python Support Vector Machine library around that allows online learning? | 1,783,669 | 0 | python,artificial-intelligence,machine-learning,svm | Why would you want to train it online? Adding training instances would usually require re-solving the quadratic programming problem associated with the SVM. A way to handle this is to train an SVM in batch mode, and when new data is available, check if these data points are in the [-1, +1] margin of the hyperplane. If... | I do know there are some libraries that allow one to use Support Vector Machines from Python code, but I am looking specifically for libraries that allow one to teach it online (that is, without having to give it all the data at once). Are there any? | 0 | 1 | 5,533 |
0 | 72,299,692 | 0 | 1 | 0 | 0 | 1 | false | 10 | 2009-11-26T12:54:00.000 | 0 | 4 | 0 | replace the NaN value with zero after an operation with arrays | 1,803,516 | 0 | python | `import numpy; alpha = numpy.array([1, 2, 3, numpy.nan, 4]); n = numpy.nan_to_num(alpha); print(n)` prints `[1. 2. 3. 0. 4.]` | How can I replace the NaN values in an array with zero after an operation, so that results such as 0 / 0 = NaN are replaced by 0? | 0 | 1 | 32,496 |
0 | 1,881,867 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2009-12-10T15:41:00.000 | 0 | 7 | 0 | How to unpickle from C code | 1,881,851 | 0 | python,c | Take a look at the struct module? | I have Python code computing a matrix, and I would like to use this matrix (or array, or list) from C code. I wanted to pickle the matrix from the Python code and unpickle it from C code, but I could not find documentation or an example of how to do this. I found something about marshalling data, but nothing about unpi... | 0 | 1 | 1,690 |
0 | 1,897,910 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2009-12-13T21:17:00.000 | 0 | 5 | 0 | Test if point is in some rectangle | 1,897,779 | 0 | python,algorithm,point | Your R-tree approach is the best approach I know of (it's the approach I would choose over quadtrees, B+ trees, or BSP trees, as R-trees seem convenient to build in your case). Caveat: I'm no expert, even though I remember a few things from my senior-year university class on algorithmics! | I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point. Using R-trees seems to work, but they are really meant for rectan... | 0 | 1 | 13,398 |
0 | 1,897,962 | 0 | 0 | 0 | 0 | 2 | false | 11 | 2009-12-13T21:17:00.000 | 3 | 5 | 0 | Test if point is in some rectangle | 1,897,779 | 0.119427 | python,algorithm,point | For rectangles that are aligned with the axes, you only need two points (four numbers) to identify the rectangle - conventionally, the bottom-left and top-right corners. To establish whether a given point (Xtest, Ytest) overlaps with a rectangle (XBL, YBL, XTR, YTR), test both: Xtest >= XBL && Xtest <= XTR and Ytest >= ... | I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point. Using R-trees seems to work, but they are really meant for rectan... | 0 | 1 | 13,398 |
0 | 1,916,520 | 0 | 0 | 0 | 0 | 2 | false | 9 | 2009-12-15T20:02:00.000 | 2 | 4 | 0 | How do I add rows and columns to a NUMPY array? | 1,909,994 | 0.099668 | python,arrays,numpy,reshape | No matter what, you'll be stuck reallocating a chunk of memory, so it doesn't really matter if you use arr.resize(), np.concatenate, hstack/vstack, etc. Note that if you're accumulating a lot of data sequentially, Python lists are usually more efficient. | Hello, I have 1000 data series with 1500 points in each. They form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data. Now what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack, or is there a better way? I would want the dat... | 0 | 1 | 17,603 |
0 | 1,910,401 | 0 | 0 | 0 | 0 | 2 | true | 9 | 2009-12-15T20:02:00.000 | 3 | 4 | 0 | How do I add rows and columns to a NUMPY array? | 1,909,994 | 1.2 | python,arrays,numpy,reshape | If you want zeroes in the added elements, my_array.resize((1600, 1000)) should work. Note that this differs from numpy.resize(my_array, (1600, 1000)), in which previous lines are duplicated, which is probably not what you want. Otherwise (for instance if you want to avoid initializing elements to zero, which could be ... | Hello, I have 1000 data series with 1500 points in each. They form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data. Now what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack, or is there a better way? I would want the dat... | 0 | 1 | 17,603 |
0 | 1,950,575 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2009-12-23T03:30:00.000 | 0 | 5 | 0 | python lottery suggestion | 1,950,539 | 0 | python | The main shortcoming of software-based methods of generating lottery numbers is the fact that all random numbers generated by software are pseudo-random. This may not be a problem for your simple application, but you did ask about a 'specific mathematical philosophy'. You will have noticed that all commercial lottery s... | I know Python offers the random module to do some simple lottery. Let's say random.shuffle() is a good one. However, I want to build my own simple one. What should I look into? Are there any specific mathematical philosophies behind lotteries? Let's take the simplest situation: 100 names, and generate 20 names randomly. I don't wa... | 0 | 1 | 1,639 |
0 | 1,972,198 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2009-12-28T23:46:00.000 | 1 | 3 | 0 | Interpolating a scalar field in a 3D space | 1,972,172 | 0.066568 | python,algorithm,interpolation | Why not try quadrilinear interpolation? Extend trilinear interpolation by another dimension. As long as a linear interpolation model fits your data, it should work. | I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total. I would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points. The spatial mesh has constant spacing and surrounds the iso-energ... | 0 | 1 | 5,443 |
0 | 1,973,347 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2009-12-28T23:46:00.000 | 2 | 3 | 0 | Interpolating a scalar field in a 3D space | 1,972,172 | 0.132549 | python,algorithm,interpolation | Since you have a spatial mesh with constant spacing, you can identify all neighbors on opposite sides of the isosurface. Choose some form of interpolation (q.v. Reed Copsey's answer) and do root-finding along the line between each such pair of neighbors. | I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total. I would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points. The spatial mesh has constant spacing and surrounds the iso-energ... | 0 | 1 | 5,443 |
0 | 10,303,500 | 0 | 1 | 0 | 0 | 1 | false | 6 | 2010-01-12T10:02:00.000 | 2 | 4 | 0 | How do I plot a graph in Python? | 2,048,041 | 0.099668 | python,matplotlib | There is a very good book: Sandro Tosi, Matplotlib for Python Developers, Packt Pub., 2009. | I have installed Matplotlib, and I have created two lists, x and y. I want the x-axis to have values from 0 to 100 in steps of 10 and the y-axis to have values from 0 to 1 in steps of 0.1. How do I plot this graph? | 0 | 1 | 5,478 |
0 | 2,111,067 | 0 | 0 | 0 | 1 | 3 | false | 4 | 2010-01-21T16:22:00.000 | 1 | 5 | 0 | File indexing (using Binary trees?) in Python | 2,110,843 | 0.039979 | python,algorithm,indexing,binary-tree | If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well served by a database. Script the file data into a database, index as you see fit, and query the data in any complex way the database supports. That is unless you're looking... | Background: I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available/searchable. (Some options include RDBMS, NoSQL stuff, using grep/awk and friends, etc.) Proposal: In par... | 0 | 1 | 2,801 |
0 | 2,110,912 | 0 | 0 | 0 | 1 | 3 | false | 4 | 2010-01-21T16:22:00.000 | 1 | 5 | 0 | File indexing (using Binary trees?) in Python | 2,110,843 | 0.039979 | python,algorithm,indexing,binary-tree | The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time. To reduce the time spent waiting for I/O, your best bet is compression. Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more ... | Background: I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available/searchable. (Some options include RDBMS, NoSQL stuff, using grep/awk and friends, etc.) Proposal: In par... | 0 | 1 | 2,801 |
0 | 12,805,622 | 0 | 0 | 0 | 1 | 3 | false | 4 | 2010-01-21T16:22:00.000 | 1 | 5 | 0 | File indexing (using Binary trees?) in Python | 2,110,843 | 0.039979 | python,algorithm,indexing,binary-tree | sqlite3 is fast, small, part of Python (so nothing to install), and provides indexing of columns. It writes to files, so you wouldn't need to install a database system. | Background: I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available/searchable. (Some options include RDBMS, NoSQL stuff, using grep/awk and friends, etc.) Proposal: In par... | 0 | 1 | 2,801 |
0 | 2,124,356 | 0 | 1 | 0 | 0 | 1 | false | 30 | 2010-01-23T19:12:00.000 | 1 | 6 | 0 | how to generate permutations of array in python? | 2,124,347 | 0.033321 | python,permutation | You may want the itertools.permutations() function. Gotta love that itertools module! NOTE: New in 2.6 | I have an array of 27 elements, and I don't want to generate all permutations of the array (27!). I need 5000 randomly chosen permutations; any tip will be useful... | 0 | 1 | 38,205 |
0 | 2,126,656 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2010-01-24T08:07:00.000 | 2 | 1 | 0 | Machine learning issue for negative instances | 2,126,383 | 0.379949 | python,artificial-intelligence,machine-learning,data-mining | The question is very unclear, but assuming what you mean is that your machine learning algorithm is not working without negative examples and you can't give it every possible negative example, then it's perfectly alright to give it some negative examples. The point of data mining (a.k.a. machine learning) is to try com... | I had to build a concept analyzer for the computer science field, and for this I used machine learning, specifically the Orange library for Python. I have examples of concepts, where the features are lemma and part of speech, like algorithm|NN|concept. The problem is that any other word, that in fact is not a concept, is classified ... | 0 | 1 | 296 |
0 | 2,352,019 | 0 | 0 | 0 | 0 | 2 | false | 37 | 2010-02-02T23:50:00.000 | 5 | 4 | 0 | How can I detect and track people using OpenCV? | 2,188,646 | 0.244919 | python,opencv,computer-vision,motion-detection | Nick, what you are looking for is not people detection, but motion detection. If you tell us a lot more about what you are trying to solve/do, we can answer better. Anyway, there are many ways to do motion detection depending on what you are going to do with the results. The simplest one would be differencing followed by ... | I have a camera that will be stationary, pointed at an indoor area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles. I've looked at several of the built-in samples: No... | 0 | 1 | 45,934 |
0 | 2,190,799 | 0 | 0 | 0 | 0 | 2 | false | 37 | 2010-02-02T23:50:00.000 | 2 | 4 | 0 | How can I detect and track people using OpenCV? | 2,188,646 | 0.099668 | python,opencv,computer-vision,motion-detection | This is similar to a project we did as part of a Computer Vision course, and I can tell you right now that it is a hard problem to get right. You could use foreground/background segmentation, find all blobs, and then decide that they are a person. The problem is that it will not work very well, since people tend to go to... | I have a camera that will be stationary, pointed at an indoor area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles. I've looked at several of the built-in samples: No... | 0 | 1 | 45,934 |
0 | 2,405,587 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2010-02-03T21:07:00.000 | 3 | 4 | 0 | OpenCV 2.0 and Python | 2,195,441 | 1.2 | python,opencv | After Step 1 (Installer), just copy the content of C:\OpenCV2.0\Python2.6\Lib\site-packages to C:\Python26\Lib\site-packages (standard installation path assumed). That's all. If you have a webcam installed, you can try the camshift demo in C:\OpenCV2.0\samples\python. The deprecated stuff (C:\OpenCV2.0\samples\swig_python... | I cannot get the example Python programs to run. When executing the Python command "from opencv import cv" I get the message "ImportError: No module named _cv". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below. MS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0. Here's wh... | 0 | 1 | 16,728 |
0 | 2,204,122 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2010-02-04T23:45:00.000 | 1 | 3 | 1 | Minimal linear regression program | 2,204,087 | 0.066568 | c++,python,bash,linear-algebra | How about extracting the coefficients into a file, importing them to another machine, and then using Excel/Matlab/whatever other program that does this for you? | I am running some calculations on an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. On this machine I cannot install anything (it runs Linux) and it has only basic stuff installed: python, bash (of course), etc. I wonder what would be the best approach to use... | 0 | 1 | 2,428 |
0 | 2,204,124 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2010-02-04T23:45:00.000 | 3 | 3 | 1 | Minimal linear regression program | 2,204,087 | 0.197375 | c++,python,bash,linear-algebra | For a single, simple, known function (as in your case: a line) it is not hard to simply code a basic least squares routine from scratch (but it does require some attention to detail). It is a very common assignment in introductory numerical analysis classes. So, look up least squares on Wikipedia or MathWorld or in a text bo... | I am running some calculations on an external machine and at the end I get X, Y pairs. I want to apply linear regression and obtain A, B, and R2. On this machine I cannot install anything (it runs Linux) and it has only basic stuff installed: python, bash (of course), etc. I wonder what would be the best approach to use... | 0 | 1 | 2,428 |
0 | 2,223,165 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2010-02-08T04:01:00.000 | 1 | 4 | 0 | Matplotlib turn off antialias for text in plot? | 2,219,503 | 1.2 | python,matplotlib,antialiasing | It seems this is not possible. Some classes such as Line2D have a "set_antialiased" method, but Text lacks this. I suggest you file a feature request on the Sourceforge tracker, and send an email to the matplotlib mailing list mentioning the request. | Is there any way to turn off antialias for all text in a plot, especially the ticklabels? | 0 | 1 | 2,659 |
0 | 2,224,074 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2010-02-08T17:38:00.000 | 0 | 3 | 0 | Design pattern for ongoing survey analysis | 2,223,576 | 0 | python,design-patterns,statistics,matrix,survey | On the analysis, if your six questions have been posed in a way that would lead you to believe the answers will be correlated, consider conducting a factor analysis on the raw scores first. Often comparing the factors across regions or customer type has more statistical power than comparing across questions alone. ... | I'm doing an ongoing survey, every quarter. We get people to sign up (where they give extensive demographic info). Then we get them to answer six short questions with 5 possible values: much worse, worse, same, better, much better. Of course over time we will not get the same participants; some will drop out and some... | 0 | 1 | 1,157 |
0 | 2,247,284 | 0 | 1 | 0 | 0 | 1 | true | 6 | 2010-02-11T19:39:00.000 | 2 | 3 | 0 | Associative Matrices? | 2,247,197 | 1.2 | python,data-structures,matrix,d,associative-array | Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the column keys to column indices? You could make your own structure that would work this way fairly easily, I think. You just make a class that contains the matrix and the two dicti... | I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns. The obvious way to express this is wit... | 0 | 1 | 566 |
0 | 3,279,041 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2010-02-19T20:51:00.000 | 0 | 6 | 0 | How to quickly search through a .csv file in Python | 2,299,454 | 0 | python,dictionary,csv,large-files | My idea is to use the Python ZODB module to store dictionary-type data and then create a new csv file using that data structure. Do all your operations at that time. | I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary or should you perform a search every time? I tried loading it into a dictionary but that took a... | 0 | 1 | 20,439 |
0 | 2,443,606 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2010-02-19T20:51:00.000 | 1 | 6 | 0 | How to quickly search through a .csv file in Python | 2,299,454 | 0.033321 | python,dictionary,csv,large-files | You can't go directly to a specific line in the file because lines are variable-length, so the only way to know when line #n starts is to search for the first n newlines. And it's not enough to just look for '\n' characters, because CSV allows newlines in table cells, so you really do have to parse the file anyway. | I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary or should you perform a search every time? I tried loading it into a dictionary but that took a... | 0 | 1 | 20,439 |
0 | 2,352,875 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2010-02-28T20:18:00.000 | 3 | 5 | 0 | Calculating the area underneath a mathematical function | 2,352,499 | 0.119427 | python,polynomial-math,numerical-integration | It might be overkill to resort to general-purpose numeric integration algorithms for your special case... if you work out the algebra, there's a simple expression that gives you the area. You have a polynomial of degree 2: f(x) = ax^2 + bx + c. You want to find the area under the curve for x in the range [0, 1]. The antide... | I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1. Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions? I'm a little unclear... | 0 | 1 | 4,045 |
0 | 2,361,204 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2010-03-02T05:57:00.000 | 1 | 1 | 0 | Solving Sparse Linear Problem With Some Known Boundary Values | 2,361,176 | 1.2 | python,numpy,sparse-matrix,poisson | If I understand correctly, some elements of x are known, some are not, and you want to solve Ax = b for the unknown values of x, correct? Let Ax = [A1 A2][x1; x2] = b, where the vector x = [x1; x2], the vector x1 has the unknown values of x, and the vector x2 has the known values of x. Then A1 x1 = b - A2 x2. Therefore... | I'm trying to solve a Poisson equation on a rectangular domain which ends up being a linear problem like Ax = b, but since I know the boundary conditions, there are nodes where I have the solution values. I guess my question is... How can I solve the sparse system Ax = b if I know what some of the coordinates of x are... | 0 | 1 | 533 |
0 | 2,371,227 | 0 | 0 | 0 | 0 | 1 | false | 222 | 2010-03-03T07:42:00.000 | 2 | 12 | 0 | Generate a heatmap in MatPlotLib using a scatter data set | 2,369,492 | 0.033321 | python,matplotlib,heatmap,histogram2d | Make a 2-dimensional array that corresponds to the cells in your final image, called say heatmap_cells, and instantiate it as all zeroes. Choose two scaling factors that define the difference between each array element in real units, for each dimension, say x_scale and y_scale. Choose these such that all your datapoints... | I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap. I looked through the examples in MatPlotLib and they all seem to already start with heatmap cell values to generate the image. Is there a method that converts a bunch of x,y, all differen... | 0 | 1 | 284,427 |
0 | 2,392,026 | 0 | 0 | 0 | 1 | 1 | true | 8 | 2010-03-06T09:30:00.000 | 15 | 2 | 0 | SQLite or flat text file? | 2,392,017 | 1.2 | python,sql,database,r,file-format | If all the languages support SQLite - use it. The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways.
SQLite will also probably be substantially faster i... | I process a lot of text/data that I exchange between Python, R, and sometimes Matlab.
My go-to is the flat text file, but I also use SQLite occasionally to store the data and access it from each program (not Matlab yet though). I don't use GROUP BY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessari... | 0 | 1 | 3,563 |
0 | 2,433,626 | 0 | 0 | 1 | 0 | 2 | true | 1 | 2010-03-12T12:54:00.000 | 5 | 2 | 0 | OpenCV performance in different languages | 2,432,792 | 1.2 | c++,python,c,performance,opencv | You've answered your own question pretty well. Most of the expensive computations should be within the OpenCV library, and thus independent of the language you use.
If you're really concerned about efficiency, you could profile your code and confirm that this is indeed the case. If need be, your custom processing func... | I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main p... | 0 | 1 | 1,352 |
0 | 2,470,491 | 0 | 0 | 1 | 0 | 2 | false | 1 | 2010-03-12T12:54:00.000 | 0 | 2 | 0 | OpenCV performance in different languages | 2,432,792 | 0 | c++,python,c,performance,opencv | OpenCV used to utilize IPP, which is very fast. However, OpenCV 2.0 does not. You might customize your OpenCV using IPP, for example color conversion routines. | I'm doing some prototyping with OpenCV for a hobby project involving processing of real time camera data. I wonder if it is worth the effort to reimplement this in C or C++ when I have it all figured out or if no significant performance boost can be expected. The program basically chains OpenCV functions, so the main p... | 0 | 1 | 1,352 |
0 | 2,460,285 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2010-03-17T03:29:00.000 | 1 | 4 | 0 | Python KMeans clustering words | 2,459,739 | 0.049958 | python,cluster-analysis | Yeah I think there isn't a good implementation for what I need.
I have some crazy requirements, like distance caching etc.
So I think I will just write my own lib and release it as GPLv3 soon. | I am interested in performing kmeans clustering on a list of words with the distance measure being Levenshtein.
1) I know there are a lot of frameworks out there, including scipy and orange, that have a kmeans implementation. However they all require some sort of vector as the data, which doesn't really fit me.
2) I need a g... | 0 | 1 | 1,947 |
0 | 2,469,142 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2010-03-18T10:31:00.000 | 7 | 2 | 0 | Open-source implementation of Mersenne Twister in Python? | 2,469,031 | 1 | python,random,open-source,mersenne-twister | Mersenne Twister is the implementation used by the standard Python library. You can see it in the random.py file in your Python distribution.
On my system (Ubuntu 9.10) it is in /usr/lib/python2.6, on Windows it should be in C:\Python26\Lib | Is there any good open-source implementation of Mersenne Twister and other good random number generators in Python available? I would like to use it for teaching math and comp sci majors. I am also looking for the corresponding theoretical support.
Edit: Source code of Mersenne Twister is readily available in various ... | 0 | 1 | 5,857 |
0 | 2,489,898 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2010-03-21T20:55:00.000 | 1 | 2 | 0 | python parallel computing: split keyspace to give each node a range to work on | 2,488,670 | 0.099668 | python,algorithm,cluster-analysis,character-encoding | You should be able to treat your words as numerals in a strange base. For example, let's say you have a..z as your charset (26 characters), 4 character strings, and you want to distribute equally among 10 machines. Then there are a total of 26^4 strings, so each machine gets 26^4/10 strings. The first machine will g... | My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible.
I'm trying to code a cluster in python, which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and make various operations on them.
I'm searching how to c... | 0 | 1 | 260 |
0 | 2,497,667 | 0 | 1 | 0 | 0 | 1 | false | 11 | 2010-03-23T03:47:00.000 | 4 | 5 | 0 | Plot string values in matplotlib | 2,497,449 | 0.158649 | python,matplotlib | Why not just make the x value some auto-incrementing number and then change the label?
--jed | I am using matplotlib for a graphing application. I am trying to create a graph which has strings as the X values. However, the plot function expects a numeric value for X.
How can I use string X values? | 0 | 1 | 48,768 |
0 | 5,876,058 | 0 | 0 | 0 | 0 | 1 | false | 96 | 2010-03-25T00:24:00.000 | 78 | 22 | 0 | how to merge 200 csv files in Python | 2,512,386 | 1 | python,csv,merge,concatenation | Why can't you just sed 1d sh*.csv > merged.csv?
Sometimes you don't even have to use python! | Guys, I have 200 separate csv files here named from SH (1) to SH (200). I want to merge them into a single csv file. How can I do it? | 0 | 1 | 189,456 |
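If you would rather stay in Python than shell out to sed, a minimal sketch using the standard csv module (the merge_csv helper and file pattern are made up for illustration; it keeps the header from the first file only):

```python
import csv
import glob

def merge_csv(pattern, out_path):
    # Concatenate all matching CSV files, writing the header once.
    paths = sorted(glob.glob(pattern))
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for i, path in enumerate(paths):
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)      # consume this file's header
                if i == 0:
                    writer.writerow(header)
                writer.writerows(reader)   # copy the data rows
```

Note that sorted() is lexicographic, so names like "SH (1)" through "SH (200)" would need a numeric sort key if row order matters.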
1 | 3,120,753 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2010-03-25T07:45:00.000 | 1 | 1 | 0 | How to copy matplotlib figure? | 2,513,786 | 0.197375 | python,wxpython,copy,matplotlib | I'm not familiar with the inner workings, but could easily imagine how disposing of a frame damages the figure data. Is it expensive to draw? Otherwise I'd take the somewhat chickenish approach of simply redrawing it ;) | I have FigureCanvasWxAgg instance with a figure displayed on a frame. If user clicks on the canvas another frame with a new FigureCanvasWxAgg containing the same figure will be shown. By now closing the new frame can result in destroying the C++ part of the figure so that it won't be available for the first frame.
How... | 0 | 1 | 1,759 |
0 | 2,521,903 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2010-03-25T19:29:00.000 | 2 | 5 | 0 | A 3-D grid of regularly spaced points | 2,518,730 | 0.07983 | python,numpy | I would say go with meshgrid or mgrid, in particular if you need non-integer coordinates. I'm surprised that Numpy's broadcasting rules would be more efficient, as meshgrid was designed especially for the problem that you want to solve. | I want to create a list containing the 3-D coords of a grid of regularly spaced points, each as a 3-element tuple. I'm looking for advice on the most efficient way to do this.
In C++ for instance, I simply use three nested loops, one for each coordinate. In Matlab, I would probably use the meshgrid function (whi... | 0 | 1 | 1,877 |
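The meshgrid route suggested in the answer can be sketched like this (the 0.5 spacing and [0, 1) range are arbitrary assumptions):

```python
import numpy as np

# One 1-D axis per coordinate; meshgrid expands them to the full grid.
xs = np.arange(0.0, 1.0, 0.5)   # -> 0.0, 0.5
ys = np.arange(0.0, 1.0, 0.5)
zs = np.arange(0.0, 1.0, 0.5)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")

# Flatten into the requested list of 3-element tuples.
points = list(zip(X.ravel(), Y.ravel(), Z.ravel()))
```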
0 | 2,528,683 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2010-03-26T22:28:00.000 | 6 | 2 | 1 | Is Using Python to MapReduce for Cassandra Dumb? | 2,527,173 | 1.2 | python,mongodb,cassandra,couchdb,nosql | Cassandra supports map reduce since version 0.6. (Current stable release is 0.5.1, but go ahead and try the new map reduce functionality in 0.6.0-beta3) To get started I recommend to take a look at the word count map reduce example in 'contrib/word_count'. | Since Cassandra doesn't have MapReduce built in yet (I think it's coming in 0.7), is it dumb to try and MapReduce with my Python client or should I just use CouchDB or Mongo or something?
The application is stats collection, so I need to be able to sum values with grouping to increment counters. I'm not, but pretend I'... | 0 | 1 | 1,625 |
0 | 2,546,266 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2010-03-30T14:35:00.000 | 3 | 4 | 0 | How should I use random.jumpahead in Python | 2,546,039 | 0.148885 | python,random | jumpahead(1) is indeed sufficient (and identical to jumpahead(50000) or any other such call, in the current implementation of random -- I believe that came in at the same time as the Mersenne Twister based implementation). So use whatever argument fits in well with your programs' logic. (Do use a separate random.Rand... | I have a application that does a certain experiment 1000 times (multi-threaded, so that multiple experiments are done at the same time). Every experiment needs appr. 50.000 random.random() calls.
What is the best approach to get this really random? I could copy a random object to every experiment and then do a jumpahea... | 0 | 1 | 2,828 |
0 | 2,569,226 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2010-04-01T18:31:00.000 | 0 | 4 | 0 | MATLAB-like variable editor in Python | 2,562,697 | 0 | python,numpy,ipython | in ipython, ipipe.igrid() can be used to view tabular data. | Is there a data viewer in Python/IPython like the variable editor in MATLAB? | 0 | 1 | 2,415 |
0 | 28,463,696 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2010-04-01T18:31:00.000 | 0 | 4 | 0 | MATLAB-like variable editor in Python | 2,562,697 | 0 | python,numpy,ipython | Even PyCharm will be a good option if you are looking for a MATLAB-like editor. | Is there a data viewer in Python/IPython like the variable editor in MATLAB? | 0 | 1 | 2,415 |
0 | 2,564,787 | 0 | 0 | 1 | 0 | 2 | false | 10 | 2010-04-01T21:17:00.000 | 3 | 2 | 0 | Better use a tuple or numpy array for storing coordinates | 2,563,773 | 0.291313 | python,arrays,numpy,tuples,complex-numbers | A numpy array with an extra dimension is tighter in memory use than, and at least as fast as, a numpy array of tuples; complex numbers are at least as good or even better, including for your third question. BTW, you may have noticed that -- while questions asked later than yours were getting answers aplenty -- yours was la... | I'm porting a C++ scientific application to python, and as I'm new to python, some problems come to my mind:
1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy a... | 0 | 1 | 6,196 |
0 | 2,564,868 | 0 | 0 | 1 | 0 | 2 | true | 10 | 2010-04-01T21:17:00.000 | 7 | 2 | 0 | Better use a tuple or numpy array for storing coordinates | 2,563,773 | 1.2 | python,arrays,numpy,tuples,complex-numbers | In terms of memory consumption, numpy arrays are more compact than Python tuples.
A numpy array uses a single contiguous block of memory. All elements of the numpy array must be of a declared type (e.g. 32-bit or 64-bit float.) A Python tuple does not necessarily use a contiguous block of memory, and the elements of th... | I'm porting a C++ scientific application to python, and as I'm new to python, some problems come to my mind:
1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy a... | 0 | 1 | 6,196 |
0 | 2,569,232 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2010-04-02T19:23:00.000 | 0 | 7 | 0 | Does Python/Scipy have a firls( ) replacement (i.e. a weighted, least squares, FIR filter design)? | 2,568,707 | 0 | python,algorithm,math,matlab,digital-filter | It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm? | I am porting code from Matlab to Python and am having trouble finding a replacement for the firls( ) routine. It is used for least-squares linear-phase Finite Impulse Response (FIR) filter design.
I looked at scipy.signal and nothing there looked like it would do the trick. Of course I was able to replace my remez a... | 0 | 1 | 4,383 |
0 | 2,661,971 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2010-04-04T00:25:00.000 | 20 | 3 | 0 | What is the best interface from Python 3.1.1 to R? | 2,573,132 | 1.2 | python,r,interface,python-3.x | edit: Rewrite to summarize the edits that accumulated over time.
The current rpy2 release (2.3.x series) has full support for Python 3.3, while
no claim is made about Python 3.0, 3.1, or 3.2.
At the time of writing the next rpy2 release (under development, 2.4.x series) is only supporting Python 3.3.
History of Python ... | I am using Python 3.1.1 on Mac OS X 10.6.2 and need an interface to R. When browsing the internet I found out about RPy. Is this the right choice?
Currently, a program in Python computes a distance matrix and stores it in a file. I invoke R separately in an interactive way and read in the matrix for cluster analysis.... | 0 | 1 | 4,111 |
0 | 2,590,381 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2010-04-07T06:05:00.000 | 0 | 2 | 0 | Histogram in Matplotlib with input file | 2,590,328 | 0 | python,matplotlib,histogram | You can't directly tell matplotlib to make a histogram from an input file - you'll need to open the file yourself and get the data from it. How you'd do that depends on the format of the file - if it's just a file with a number on each line, you can just go through each line, strip() spaces and newlines, and use float(... | I wish to make a Histogram in Matplotlib from an input file containing the raw data (.txt). I am facing issues in referring to the input file. I guess it should be a rather small program. Any Matplotlib gurus, any help ?
I am not asking for the code, some inputs should put me on the right way ! | 0 | 1 | 14,040 |
0 | 2,597,749 | 0 | 1 | 0 | 0 | 1 | false | 9 | 2010-04-08T03:41:00.000 | 4 | 2 | 0 | random.randint(1,n) in Python | 2,597,444 | 0.379949 | python,random | No doubt you have a bounded amount of memory, and address space, on your machine; for example, for a good 64-bit machine, 64 GB of RAM [[about 2**36 bytes]] and a couple of TB of disk (usable as swap space for virtual memory) [[about 2**41 bytes]]. So, the "upper bound" of a Python long integer will be the largest one... | Most of us know that the command random.randint(1,n) in Python (2.X.X) would generate a number in random (pseudo-random) between 1 and n. I am interested in knowing what is the upper limit for n ? | 0 | 1 | 19,740 |
0 | 2,621,082 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2010-04-12T09:44:00.000 | 11 | 2 | 0 | random() in python | 2,621,055 | 1 | python,random | The [ indicates that 0.0 is included in the range of valid outputs. The ) indicates 1.0 is not in the range of valid outputs. | In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality? | 0 | 1 | 1,409 |
0 | 2,621,096 | 0 | 1 | 0 | 0 | 2 | true | 4 | 2010-04-12T09:44:00.000 | 13 | 2 | 0 | random() in python | 2,621,055 | 1.2 | python,random | 0.0 can be generated; 1.0 cannot (since it isn't within the range, hence the ) as opposed to [).
The probability of generating 0.0 is equal to the probability of generating any other number within that range, namely, 1/X where X is the number of different possible results. For a standard unsigned double-precision float... | In python the function random() generates a random float uniformly in the semi-open range [0.0, 1.0). In principle can it ever generate 0.0 (i.e. zero) and 1.0 (i.e. unity)? What is the scenario in practicality? | 0 | 1 | 1,409 |
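A quick empirical check of the semi-open range described in these answers — it cannot show 0.0 actually occurring (any single value is astronomically unlikely), but it does confirm every sample lands in [0.0, 1.0):

```python
import random

random.seed(0)  # fixed seed so the check is reproducible
samples = [random.random() for _ in range(100_000)]
lo, hi = min(samples), max(samples)
# Every sample satisfies 0.0 <= sample < 1.0
```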
0 | 2,661,789 | 0 | 0 | 0 | 0 | 1 | false | 50 | 2010-04-18T09:39:00.000 | 2 | 5 | 0 | tag generation from a text content | 2,661,778 | 0.07983 | python,tags,machine-learning,nlp,nltk | A very simple solution to the problem would be:
count the occurrences of each word in the text
consider the most frequent terms as the key phrases
have a black-list of 'stop words' to remove common words like the, and, it, is etc
I'm sure there are cleverer, stats based solutions though.
If you need a solution to use ... | I am curious if there is an algorithm/method exists to generate keywords/tags from a given text, by using some weight calculations, occurrence ratio or other tools.
Additionally, I will be grateful if you point out any Python-based solution / library for this.
Thanks | 0 | 1 | 28,812 |
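The frequency-counting recipe from the first answer can be sketched in a few lines (the tiny stop-word set and the suggest_tags name are made up for illustration; a real system would use a fuller stop list, stemming, and perhaps tf-idf weighting):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "and", "it", "is", "a", "of", "to", "in"}

def suggest_tags(text, n=3):
    # Count word occurrences, drop stop words, keep the n most frequent.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

tags = suggest_tags(
    "Python is great and the Python community makes Python libraries "
    "easy to find and easy to use.", n=2)
# tags -> ['python', 'easy']
```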
0 | 2,669,456 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2010-04-19T12:58:00.000 | 1 | 1 | 0 | Statistical analysis on large data set to be published on the web | 2,667,537 | 1.2 | php,python,postgresql,statistics | I think you can utilize your current combination (python/numpy/matplotlib) fully if the number of users is not too big. I do some similar work, and my data size is a little more than 10 GB. Data are stored in a few sqlite files, and I use numpy to analyze data, PIL/matplotlib to generate chart files (png, gif), cherrypy as ... | I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is a csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. some of the ...
0 | 2,726,598 | 0 | 1 | 1 | 0 | 2 | false | 3 | 2010-04-27T18:13:00.000 | 1 | 4 | 0 | List of objects or parallel arrays of properties? | 2,723,790 | 0.049958 | python,performance,data-structures,numpy | I think it depends on what you're going to be doing with them, and how often you're going to be working with (all attributes of one particle) vs (one attribute of all particles). The former is better suited to the object approach; the latter is better suited to the array approach.
I was facing a similar problem (althou... | The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties?
I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity... | 0 | 1 | 1,102 |
0 | 2,723,845 | 0 | 1 | 1 | 0 | 2 | false | 3 | 2010-04-27T18:13:00.000 | 2 | 4 | 0 | List of objects or parallel arrays of properties? | 2,723,790 | 0.099668 | python,performance,data-structures,numpy | Having an object for each ball in this example is certainly better design. Parallel arrays are really a workaround for languages that do not support proper objects. I wouldn't use them in a language with OO capabilities unless it's a tiny case that fits within a function (and maybe not even then) or if I've run out of ... | The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties?
I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity... | 0 | 1 | 1,102 |
0 | 2,744,657 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2010-04-30T12:41:00.000 | 3 | 4 | 0 | Python: Plot some data (matplotlib) without GIL | 2,744,530 | 0.148885 | python,matplotlib,parallel-processing,gil | I think you'll need to put the graph into a proper Windowing system, rather than relying on the built-in show code.
Maybe sticking the .show() in another thread would be sufficient?
The GIL is irrelevant - you've got a blocking show() call, so you need to handle that first. | my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)
But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).
I can only display the plot, wait ... | 0 | 1 | 737 |
0 | 2,744,604 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2010-04-30T12:41:00.000 | 3 | 4 | 0 | Python: Plot some data (matplotlib) without GIL | 2,744,530 | 0.148885 | python,matplotlib,parallel-processing,gil | This has nothing to do with the GIL, just modify your analysis code to make it update the graph from time to time (for example every N iterations).
Only then if you see that drawing the graph slows the analysis code too much, put the graph update code in a subprocess with multiprocessing. | my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)
But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).
I can only display the plot, wait ... | 0 | 1 | 737 |
0 | 2,744,906 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2010-04-30T12:41:00.000 | 2 | 4 | 0 | Python: Plot some data (matplotlib) without GIL | 2,744,530 | 0.099668 | python,matplotlib,parallel-processing,gil | It seems like the draw() method can circumvent the need for show().
The only reason left for .show() in the script is to let it do the blocking part so that the images don't disappear when the script reaches its end. | my problem is the GIL of course. While I'm analysing data it would be nice to present some plots in between (so it's not too boring waiting for results)
But the GIL prevents this (and this is bringing me to the point of asking myself if Python was such a good idea in the first place).
I can only display the plot, wait ... | 0 | 1 | 737 |
0 | 2,770,393 | 0 | 0 | 1 | 0 | 5 | false | 3 | 2010-05-05T01:24:00.000 | 1 | 6 | 0 | R or Python for file manipulation | 2,770,030 | 0.033321 | python,file,r,performance | What do you mean by "file manipulation"? Are you talking about moving files around, deleting, copying, etc., in which case I would use a shell, e.g., bash, etc. If you're talking about reading in the data, performing calculations, perhaps writing out a new file, etc., then you could probably use Python or R. unless mai... | I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r.
My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed... | 0 | 1 | 2,365 |
0 | 2,770,138 | 0 | 0 | 1 | 0 | 5 | false | 3 | 2010-05-05T01:24:00.000 | 1 | 6 | 0 | R or Python for file manipulation | 2,770,030 | 0.033321 | python,file,r,performance | Know where the time is being spent. If your R scripts are bottlenecked on disk IO (and that is very possible in this case), then you could rewrite them in hand-optimized assembly and be no faster. As always with optimization, if you don't measure first, you're just pissing into the wind. If they're not bottlenecked on ... | I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r.
My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed... | 0 | 1 | 2,365 |
0 | 2,770,071 | 0 | 0 | 1 | 0 | 5 | false | 3 | 2010-05-05T01:24:00.000 | 0 | 6 | 0 | R or Python for file manipulation | 2,770,030 | 0 | python,file,r,performance | My guess is that you probably won't see much of a speed-up in time. When comparing high-level languages, overhead in the language is typically not to blame for performance problems. Typically, the problem is your algorithm.
I'm not very familiar with R, but you may find speed-ups by reading larger chunks of data into... | I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r.
My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed... | 0 | 1 | 2,365 |
0 | 2,771,903 | 0 | 0 | 1 | 0 | 5 | false | 3 | 2010-05-05T01:24:00.000 | 0 | 6 | 0 | R or Python for file manipulation | 2,770,030 | 0 | python,file,r,performance | R data manipulation has rules for it to be fast. The basics are:
vectorize
use data.frames as little as possible (for example, in the end)
Search for R time optimization and profiling and you will find many resources to help you. | I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r.
My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed... | 0 | 1 | 2,365 |
0 | 2,770,354 | 0 | 0 | 1 | 0 | 5 | true | 3 | 2010-05-05T01:24:00.000 | 10 | 6 | 0 | R or Python for file manipulation | 2,770,030 | 1.2 | python,file,r,performance | I write in both R and Python regularly. I find Python modules for writing, reading and parsing information easier to use, maintain and update. Little niceties like the way python lets you deal with lists of items over R's indexing make things much easier to read.
I highly doubt you will gain any significant speed-up ... | I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r.
My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed... | 0 | 1 | 2,365 |
0 | 2,785,460 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2010-05-05T03:02:00.000 | 1 | 3 | 0 | Extract points within a shape from a raster | 2,770,356 | 0.066568 | python,arcgis,raster | You need a library that can read your raster. I am not sure how to do that in Python but you could look at GeoTools (especially with some of the new raster library integration) if you want to program in Java. If you are good with C I would recommend using something like GDAL.
If you want to look at a desktop tool you c... | I have a raster file (basically 2D array) with close to a million points. I am trying to extract a circle from the raster (and all the points that lie within the circle). Using ArcGIS is exceedingly slow for this. Can anyone suggest any image processing library that is both easy to learn and powerful and quick enough f... | 0 | 1 | 3,334 |
0 | 2,806,868 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2010-05-10T22:10:00.000 | 14 | 8 | 0 | Huge Graph Structure | 2,806,806 | 1 | python,memory,data-structures,graph | If that is 100-600 edges/node, then you are talking about 3.6 billion edges.
Why does this have to be all in memory?
Can you show us the structures you are currently using?
How much memory are we allowed (what is the memory limit you are hitting?)
If the only reason you need this in memory is because you need to be ... | I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation.
I have tried a memory map representation, arrays, dictionaries and strings to represent... | 0 | 1 | 6,997 |
0 | 2,806,891 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2010-05-10T22:10:00.000 | 4 | 8 | 0 | Huge Graph Structure | 2,806,806 | 0.099668 | python,memory,data-structures,graph | I doubt you'll be able to use a memory structure unless you have a LOT of memory at your disposal:
Assume you are talking about 600 directed edges from each node, with a node being 4 bytes (integer key) and a directed edge being JUST the destination node keys (4 bytes each).
Then the raw data about each node is 4 + 600... | I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation.
I have tried a memory map representation, arrays, dictionaries and strings to represent... | 0 | 1 | 6,997 |
0 | 2,806,909 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2010-05-10T22:10:00.000 | 0 | 8 | 0 | Huge Graph Structure | 2,806,806 | 0 | python,memory,data-structures,graph | Sounds like you need a database and an iterator over the results. Then you wouldn't have to keep it all in memory at the same time but you could always have access to it. | I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges per node) in memory. The edges representation will contain some attributes of the relation.
I have tried a memory map representation, arrays, dictionaries and strings to represent... | 0 | 1 | 6,997 |
0 | 53,035,704 | 0 | 0 | 0 | 0 | 1 | false | 24 | 2010-05-14T21:50:00.000 | 0 | 2 | 0 | Dimension Reduction in Categorical Data with missing values | 2,837,850 | 0 | python,r,statistics | 45% of the data have at least one missing value, you say. This is impressive. I would first check whether there is a pattern. You say they are missing at random. Have you tested for MAR? Have you tested for MAR for sub-groups?
Not knowing your data I would first look if there are not cases with many missing values and se... | I have a regression model in which the dependent variable is continuous but ninety percent of the independent variables are categorical(both ordered and unordered) and around thirty percent of the records have missing values(to make matters worse they are missing randomly without any pattern, that is, more that forty f... | 0 | 1 | 16,303 |
0 | 2,943,067 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2010-05-31T08:47:00.000 | 0 | 2 | 0 | python dictionary with constant value-type | 2,942,375 | 0 | python,arrays,data-structures,dictionary | You might try using std::map. Boost.Python provides a Python wrapping for std::map out-of-the-box. | I bumped into a case where I need a big (=huge) python dictionary, which turned out to be quite memory-consuming.
However, since all of the values are of a single type (long) - as well as the keys, I figured I can use python (or numpy, doesn't really matter) array for the values ; and wrap the needed interface (in: x ; ou... | 0 | 1 | 1,303 |
0 | 2,960,944 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2010-06-02T19:16:00.000 | 0 | 4 | 0 | merging in python | 2,960,855 | 0 | python | Add all keys and associated values from both sets of data to a single dictionary.
Get the items of the dictionary and sort them.
Print out the answer.
k1=[7, 2, 3, 5]
v1=[10,11,12,26]
k2=[0, 4]
v2=[20, 33]
d=dict(zip(k1,v1))
d.update(zip(k2,v2))
answer=d.items()
answer.sort()
keys=[k for (k,v) in answer]
values=[... | I have the following 4 arrays ( grouped in 2 groups ) that I would like to merge in ascending order by the keys array.
I can also use dictionaries as the structure if it is easier.
Does Python have any command or function to make this possible quickly?
Regards
MN
# group 1
[7, 2, 3, 5] #keys
[10,11,12,26] #values
[0, 4... | 0 | 1 | 125 |
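A Python 3 version of the dictionary approach shown in the answer (dict.items() returns a view in Python 3, so the result is sorted with sorted() rather than list.sort()):

```python
# group 1 and group 2 keys/values from the question
k1, v1 = [7, 2, 3, 5], [10, 11, 12, 26]
k2, v2 = [0, 4], [20, 33]

d = dict(zip(k1, v1))       # build the mapping from the first group
d.update(zip(k2, v2))       # merge in the second group
merged = sorted(d.items())  # ascending by key

keys = [k for k, _ in merged]    # [0, 2, 3, 4, 5, 7]
values = [v for _, v in merged]  # [20, 11, 12, 33, 26, 10]
```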
0 | 2,968,306 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2010-06-03T17:06:00.000 | 1 | 7 | 0 | How to share an array in Python with a C++ Program? | 2,968,172 | 0.028564 | c++,python,serialization | I would propose simply to use C arrays (via ctypes on the python side) and simply pull/push the raw data through a socket | I have two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this?
Thanks
Edit: It is likely that the array will only have 50 e... | 0 | 1 | 2,433 |
0 | 2,968,375 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2010-06-03T17:06:00.000 | 1 | 7 | 0 | How to share an array in Python with a C++ Program? | 2,968,172 | 0.028564 | c++,python,serialization | Serialization is one problem while IPC is another. Do you have the IPC portion figured out? (pipes, sockets, mmap, etc?)
On to serialization - if you're concerned about performance more than robustness (being able to plug more modules into this architecture) and security, then you should take a look at the struct modu... | I have two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this?
Thanks
Edit: It is likely that the array will only have 50 e... | 0 | 1 | 2,433 |
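A sketch of the struct-based route mentioned in the second answer. The layout here — two little-endian unsigned 32-bit dimensions followed by row-major doubles — and the helper names are made-up conventions, not a standard; a C++ reader would simply read the same fixed layout:

```python
import struct

def pack_2d(rows):
    # Header: nrows, ncols as little-endian uint32; body: row-major doubles.
    nrows, ncols = len(rows), len(rows[0])
    flat = [x for row in rows for x in row]
    return struct.pack(f"<II{len(flat)}d", nrows, ncols, *flat)

def unpack_2d(buf):
    nrows, ncols = struct.unpack_from("<II", buf)
    flat = struct.unpack_from(f"<{nrows * ncols}d", buf, 8)
    return [list(flat[i * ncols:(i + 1) * ncols]) for i in range(nrows)]

data = [[1.0, 2.5], [3.0, 4.75]]
payload = pack_2d(data)         # 8-byte header + 4 doubles = 40 bytes
roundtrip = unpack_2d(payload)
```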
0 | 2,969,618 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2010-06-03T20:40:00.000 | 5 | 4 | 0 | Generate n-dimensional random numbers in Python | 2,969,593 | 1.2 | python,random,n-dimensional | Numpy has multidimensional equivalents to the functions in the random module
The function you're looking for is numpy.random.normal | I'm trying to generate random numbers from a gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions?
For example, in two dimensions, the return value of this ... | 0 | 1 | 5,620 |
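The `numpy.random.normal` call named in the accepted answer takes a `size` argument, so n-dimensional draws are one line (assuming NumPy is available):

```python
import numpy as np

# Draw 1000 independent 2-D points from a Gaussian with mean 0, std 1.
# size=(1000, 2) gives one row per point, one column per dimension.
points = np.random.normal(loc=0.0, scale=1.0, size=(1000, 2))
print(points.shape)  # (1000, 2)
```

For correlated dimensions, `numpy.random.multivariate_normal` accepts a mean vector and covariance matrix instead.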
0 | 2,969,634 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2010-06-03T20:40:00.000 | 1 | 4 | 0 | Generate n-dimensional random numbers in Python | 2,969,593 | 0.049958 | python,random,n-dimensional | You need to properly decompose your multi-dimensional distribution into a composition of one-dimensional distributions. For example, if you want a point at a Gaussian-distributed distance from a given center and a uniformly-distributed angle around it, you'll get the polar coordinates for the delta with a Gaussian rho... | I'm trying to generate random numbers from a gaussian distribution. Python has the very useful random.gauss() method, but this is only a one-dimensional random variable. How could I programmatically generate random numbers from this distribution in n-dimensions?
For example, in two dimensions, the return value of this ... | 0 | 1 | 5,620 |
0 | 29,524,883 | 0 | 0 | 0 | 0 | 1 | false | 86 | 2010-06-03T21:19:00.000 | 117 | 5 | 0 | How do I add space between the ticklabels and the axes in matplotlib | 2,969,867 | 1 | python,matplotlib | If you don't want to change the spacing globally (by editing your rcParams), and want a cleaner approach, try this:
ax.tick_params(axis='both', which='major', pad=15)
or for just x axis
ax.tick_params(axis='x', which='major', pad=15)
or the y axis
ax.tick_params(axis='y', which='major', pad=15) | I've increased the font of my ticklabels successfully, but now they're too close to the axis. I'd like to add a little breathing room between the ticklabels and the axis. | 0 | 1 | 89,545 |
0 | 2,980,269 | 0 | 0 | 0 | 1 | 1 | true | 3 | 2010-06-05T12:08:00.000 | 1 | 1 | 0 | Efficient way to access a mapping of identifiers in Python | 2,980,257 | 1.2 | python,database,sqlite,dictionary,csv | As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer.
You are right that switching to an SQLite database is a good choice when the number of records gets very large. | I am writing an app to do a file conversion and part of that is replacing old account numbers with new account numbers.
Right now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as a dict, and when writing the new file grab the new account from the dict by k... | 0 | 1 | 109 |
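The dict-lookup approach the answer endorses is only a few lines. A minimal sketch — the column names and file contents here are made up, since the question's CSV format isn't shown:

```python
import csv
import io

# Stand-in for the real mapping file (hypothetical columns "old" and "new").
csv_text = "old,new\nA100,B900\nA101,B901\n"

# Load the old -> new account-number mapping into a dict.
mapping = {row["old"]: row["new"]
           for row in csv.DictReader(io.StringIO(csv_text))}

# Translate a batch of records via O(1) dict lookups.
records = ["A100", "A101", "A100"]
translated = [mapping[acct] for acct in records]
print(translated)  # ['B900', 'B901', 'B900']
```

At 30K entries the dict fits comfortably in memory; the SQLite suggestion only becomes relevant at much larger scales.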
0 | 2,991,030 | 0 | 0 | 0 | 1 | 1 | false | 3 | 2010-06-07T15:52:00.000 | 3 | 3 | 0 | How to save big "database-like" class in python | 2,990,995 | 0.197375 | python,serialization,pickle,object-persistence | Pickle (cPickle) can handle any (picklable) Python object. So as long as you're not trying to pickle a thread or a filehandle or something like that, you're OK. | I'm doing a project with a reasonably big DataBase. It's not a proper DB file, but a class with format as follows:
DataBase.Nodes.Data=[[] for i in range(1,1000)] f.e. this DataBase is altogether something like a few thousand rows. First question - is the way I'm doing efficient, or is it better to use SQL, or any othe... | 0 | 1 | 306 |
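A round trip through pickle for a class like the one in the question looks like this. The class below only mirrors the question's structure; its exact fields are an assumption:

```python
import pickle

# Minimal stand-in for the question's "database-like" class.
class DataBase:
    def __init__(self):
        # One empty data list per node, as in the question's snippet.
        self.nodes_data = [[] for _ in range(1000)]

db = DataBase()
db.nodes_data[0].append(42)

# dumps/loads shown here; in practice you'd pickle.dump to an open file.
blob = pickle.dumps(db)
restored = pickle.loads(blob)
print(restored.nodes_data[0])  # [42]
```

A few thousand rows is well within what pickle handles quickly; the SQL question is really about query needs, not size.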
0 | 2,994,438 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2010-06-08T02:17:00.000 | 2 | 1 | 0 | Extracting Information from Images | 2,994,398 | 1.2 | python,image,opencv,identification | Your question is difficult to answer without more clarification about the types of images you are analyzing and your purpose.
The tone of the post suggests that you are interested in tinkering -- that's fine. If you want to tinker, one example application might be iris identification using wavelet analysis. You can also t... | What are some fast and somewhat reliable ways to extract information about images? I've been tinkering with OpenCV and this seems so far to be the best route, plus it has Python bindings.
So to be more specific I'd like to determine what I can about what's in an image. So for example the haar face detection and full b... | 0 | 1 | 1,546 |
0 | 9,964,718 | 0 | 0 | 0 | 0 | 3 | false | 21 | 2010-06-10T06:34:00.000 | 0 | 4 | 0 | what changes when your input is giga/terabyte sized? | 3,012,157 | 0 | python,large-data-volumes,scientific-computing | The main assumptions are about the amount of cpu/cache/ram/storage/bandwidth you can have in a single machine at an acceptable price. There are lots of answers here at stackoverflow still based on the old assumptions of a 32 bit machine with 4G ram and about a terabyte of storage and 1Gb network. With 16GB DDR-3 ram mo... | I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.
I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and ... | 0 | 1 | 1,855 |
0 | 3,012,350 | 0 | 0 | 0 | 0 | 3 | false | 21 | 2010-06-10T06:34:00.000 | 1 | 4 | 0 | what changes when your input is giga/terabyte sized? | 3,012,157 | 0.049958 | python,large-data-volumes,scientific-computing | While some languages have naturally lower memory overhead in their types than others, that really doesn't matter for data this size - you're not holding your entire data set in memory regardless of the language you're using, so the "expense" of Python is irrelevant here. As you pointed out, there simply isn't enough a... | I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.
I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and ... | 0 | 1 | 1,855 |
0 | 3,012,599 | 0 | 0 | 0 | 0 | 3 | true | 21 | 2010-06-10T06:34:00.000 | 18 | 4 | 0 | what changes when your input is giga/terabyte sized? | 3,012,157 | 1.2 | python,large-data-volumes,scientific-computing | I'm currently engaged in high-performance computing in a small corner of the oil industry and regularly work with datasets of the orders of magnitude you are concerned about. Here are some points to consider:
Databases don't have a lot of traction in this domain. Almost all our data is kept in files, some of those f... | I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny.
I write Python, so I've spent the last few hours reading about HDF5, and Numpy, and ... | 0 | 1 | 1,855 |
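The file-oriented, don't-load-it-all-at-once style these answers describe can be sketched with NumPy's `memmap` (assuming NumPy is available). The shape here is deliberately scaled down from the question's 1600 × 48000 haplotype matrix, and the data is a stand-in:

```python
import os
import tempfile
import numpy as np

# Write a small on-disk array to memory-map (stand-in for a real data file).
path = os.path.join(tempfile.mkdtemp(), "haplotypes.dat")
data = np.memmap(path, dtype=np.int8, mode="w+", shape=(160, 4800))
data[:] = 1  # placeholder for real genotype codes
data.flush()

# Re-open read-only and process row by row: only the touched pages are
# paged into RAM, so the full array never has to fit in memory.
view = np.memmap(path, dtype=np.int8, mode="r", shape=(160, 4800))
total = sum(int(view[i].sum()) for i in range(0, 160, 40))
print(total)  # 4 sampled rows x 4800 ones each = 19200
```

HDF5 (via h5py or PyTables) offers the same chunked-access idea with richer metadata and compression.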
0 | 27,096,538 | 0 | 1 | 1 | 0 | 2 | false | 3 | 2010-06-10T15:58:00.000 | 0 | 5 | 0 | Save Workspace - save all variables to a file. Python doesn't have it) | 3,016,116 | 0 | python,serialization | I take issue with the statement that the saving of variables in Matlab is an environment function. The "save" statement in Matlab is a function and part of the Matlab language, not just a command. It is a very useful function, as you don't have to worry about the trivial minutiae of file I/O, and it handles all sorts of v... | I cannot understand it. Very simple, and obvious functionality:
You have code in any programming language; you run it. In this code you generate variables, then you save them (the values, names, namely everything) to a file, with one command. When it's saved you may open such a file in your code also with a simple comm... | 0 | 1 | 8,265 |
0 | 3,016,188 | 0 | 1 | 1 | 0 | 2 | false | 3 | 2010-06-10T15:58:00.000 | 2 | 5 | 0 | Save Workspace - save all variables to a file. Python doesn't have it) | 3,016,116 | 0.07983 | python,serialization | What you are describing is a Matlab environment feature, not a programming-language feature.
What you need is a way to store the serialized state of some object, which can easily be done in almost any programming language. In the Python world, pickle is the easiest way to achieve it, and if you could provide more details about the errors... | I cannot understand it. Very simple, and obvious functionality:
You have code in any programming language; you run it. In this code you generate variables, then you save them (the values, names, namely everything) to a file, with one command. When it's saved you may open such a file in your code also with a simple comm... | 0 | 1 | 8,265 |
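The pickle route the answer points at can approximate Matlab's `save`/`load` by dumping a dict of chosen variables. A minimal sketch — the variable names here are examples, and unlike Matlab you pick explicitly which names to save:

```python
import os
import pickle
import tempfile

# Some "workspace" variables to persist.
x, y, label = 3.14, [1, 2, 3], "experiment"

# Save: one pickle.dump of a name -> value dict, Matlab-save style.
path = os.path.join(tempfile.mkdtemp(), "workspace.pkl")
with open(path, "wb") as f:
    pickle.dump({"x": x, "y": y, "label": label}, f)

# Load: one pickle.load restores names and values together.
with open(path, "rb") as f:
    workspace = pickle.load(f)
print(workspace["label"])  # experiment
```

The standard-library `shelve` module wraps the same idea in a persistent dict, which is even closer to a saved workspace.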
0 | 5,926,995 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2010-06-14T04:59:00.000 | 0 | 2 | 0 | Can't import matplotlib | 3,035,028 | 0 | python,installation,numpy,matplotlib | Following Justin's comment ... here is the equivalent file for Linux:
/usr/lib/pymodules/python2.6/matplotlib/__init__.py
sudo edit that to fix the troublesome line to:
if not ((int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2):
Thanks Justin Peel! | I installed matplotlib using the Mac disk image installer for MacOS 10.5 and Python 2.5. I installed numpy then tried to import matplotlib but got this error: ImportError: numpy 1.1 or later is required; you have 2.0.0.dev8462. It seems to that version 2.0.0.dev8462 would be later than version 1.1 but I am guessing tha... | 0 | 1 | 1,283 |
0 | 3,098,439 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2010-06-23T01:31:00.000 | 4 | 5 | 0 | Method for guessing type of data currently represented as strings | 3,098,337 | 0.158649 | python,parsing,csv,input,types | ast.literal_eval() can get the easy ones. | I'm currently parsing CSV tables and need to discover the "data types" of the columns. I don't know the exact format of the values. Obviously, everything that the CSV parser outputs is a string. The data types I am currently interested in are:
integer
floating point
date
boolean
string
My current thoughts are to ... | 0 | 1 | 4,235 |
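The `ast.literal_eval()` trick from the answer lets Python's own parser classify the easy cases; anything it rejects stays a string. Dates and booleans spelled unusually would need a separate pass (e.g. `datetime.strptime`), which this sketch omits:

```python
import ast

def guess_type(text):
    """Guess the type of a CSV field given as a string."""
    try:
        # literal_eval safely parses ints, floats, booleans, etc.
        return type(ast.literal_eval(text)).__name__
    except (ValueError, SyntaxError):
        # Anything the parser rejects is treated as a plain string.
        return "str"

print([guess_type(v) for v in ["42", "3.5", "True", "hello"]])
# ['int', 'float', 'bool', 'str']
```

For a whole column, applying this to every cell and taking the most permissive type seen (e.g. int widening to float) gives a column-level guess.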
0 | 3,106,854 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2010-06-24T02:02:00.000 | 1 | 2 | 0 | Finding images with pure colours | 3,106,788 | 1.2 | python,image,image-processing,colors | How about doing this?
Blur the image using some fast blurring algorithm. (Search for stack blur or box blur)
Compute standard deviation of the pixels in RGB domain, once for each color.
Discard the image if the standard deviation is beyond a certain threshold. | I've read a number of questions on finding the colour palette of an image, but my problem is slightly different. I'm looking for images made up of pure colours: pictures of the open sky, colourful photo backgrounds, red brick walls etc.
So far I've used the App Engine Image.histogram() function to produce a histogram, ... | 1 | 1 | 381 |
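The blur-then-standard-deviation test the answer proposes is easy to sketch with NumPy on synthetic "images"; the threshold value is an assumption to tune against real photos, and the blur step is skipped here since the synthetic data doesn't need denoising:

```python
import numpy as np

rng = np.random.default_rng(0)
# A nearly uniform gray "image" and a noisy random one, shape (H, W, 3).
flat = np.full((32, 32, 3), 128.0) + rng.normal(0, 1, (32, 32, 3))
noisy = rng.uniform(0, 255, (32, 32, 3))

def is_pure(img, threshold=10.0):
    # Standard deviation of each RGB channel over all pixels; a pure-colour
    # image has small spread in every channel.
    return bool((img.reshape(-1, 3).std(axis=0) < threshold).all())

print(is_pure(flat), is_pure(noisy))  # True False
```

Running the check on a blurred copy (box or stack blur, as the answer suggests) makes it robust to sensor noise and fine texture.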
0 | 3,134,369 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2010-06-28T16:43:00.000 | 1 | 3 | 1 | Switch python distributions | 3,134,332 | 0.066568 | python,osx-snow-leopard,numpy,macports | You need to update your PATH so that the stuff from MacPorts is in front of the standard system directories, e.g., export PATH=/opt/local/bin:/opt/local/sbin:/opt/local/Library/Frameworks/Python.framework/Versions/Current/bin/:$PATH.
UPDATE: Pay special attention to the fact that /opt/local/Library/Frameworks/Python.fr... | I have a MacBook Pro with Snow Leopard, and the Python 2.6 distribution that comes standard. Numpy does not work properly on it. Loadtxt gives errors of the filename being too long, and getfromtxt does not work at all (no object in module error). So then I tried downloading the py26-numpy port on MacPorts. Of cours... | 0 | 1 | 645 |
0 | 3,166,594 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2010-07-01T23:48:00.000 | 1 | 4 | 0 | Unstructured Text to Structured Data | 3,162,450 | 0.049958 | python,nlp,structured-data | Possibly look at "Collective Intelligence" by Toby Segaran. I seem to remember that addressing the basics of this in one chapter. | I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the google calendar quick add button.
I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293"
to... | 0 | 1 | 7,405 |
0 | 3,177,235 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2010-07-01T23:48:00.000 | 0 | 4 | 0 | Unstructured Text to Structured Data | 3,162,450 | 0 | python,nlp,structured-data | If you are only working with cases like the example you cited, you are better off using a manual rule-based approach that is 100% predictable and covers 90% of the cases it might encounter in production.
You could enumerate lists of all possible brands and categories and detect which is which in an input string, because there's u... | I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the google calendar quick add button.
I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293"
to... | 0 | 1 | 7,405 |
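The enumerate-and-match idea from the answer can be sketched for the "Levi jeans size 32 A0b293" example. The lookup tables and the SKU pattern below are hypothetical — real data would need much larger lists and more careful patterns:

```python
import re

# Assumed lookup tables; in practice these would be enumerated from data.
BRANDS = {"levi", "wrangler"}
CATEGORIES = {"jeans", "shirt"}

def parse_item(text):
    """Classify each token of a product string by rule."""
    parsed = {}
    for token in text.split():
        low = token.lower()
        if low in BRANDS:
            parsed["brand"] = token
        elif low in CATEGORIES:
            parsed["category"] = token
        elif re.fullmatch(r"\d+", token):
            parsed["size"] = int(token)
        elif re.fullmatch(r"[A-Za-z]\d[A-Za-z]\d+", token):
            parsed["sku"] = token  # hypothetical SKU shape
    return parsed

print(parse_item("Levi jeans size 32 A0b293"))
# {'brand': 'Levi', 'category': 'jeans', 'size': 32, 'sku': 'A0b293'}
```

This is the 100%-predictable rule-based route; statistical NLP (sequence labelling) takes over when the input stops following fixed patterns.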