GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 59,199,255 | 0 | 0 | 0 | 0 | 2 | false | 26 | 2015-09-25T03:54:00.000 | -1 | 5 | 0 | Why does matplotlib give the error []? | 32,774,520 | -0.039979 | python,matplotlib | Had this problem. You just have to call the show() function to display the plot in a window: use pyplot.show() | I am using Python 2.7.9 on Win8. When I tried to plot using matplotlib, the following output showed up instead of a plot window:
from pylab import *
plot([1,2,3,4])
<matplotlib.lines.Line2D object at 0x0392A9D0>
I tried the test code "python simple_plot.py --verbose-helpful", and the following warning showed up:
$HOME=C:\Users\XX
matplot... | 0 | 1 | 61,800 |
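The fix above as a minimal sketch, assuming a non-interactive matplotlib backend where figures must be shown explicitly:
```python
import matplotlib.pyplot as plt

lines = plt.plot([1, 2, 3, 4])  # returns [<matplotlib.lines.Line2D ...>]; not an error
plt.show()                      # opens the figure window and blocks until it is closed
```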
0 | 62,611,119 | 0 | 0 | 0 | 0 | 2 | false | 26 | 2015-09-25T03:54:00.000 | 0 | 5 | 0 | Why does matplotlib give the error []? | 32,774,520 | 0 | python,matplotlib | When you run plt.plot() in Spyder, you will now receive the following notification:
Figures now render in the Plots pane by default. To make them also appear inline in the Console, uncheck "Mute Inline Plotting" under the Plots pane options menu.
I followed this instruction, and it works. | I am using python 2.7.9 on win8. When I tried to plot using matplotlib, the following error showed up:
from pylab import *
plot([1,2,3,4])
<matplotlib.lines.Line2D object at 0x0392A9D0>
I tried the test code "python simple_plot.py --verbose-helpful", and the following warning showed up:
$HOME=C:\Users\XX
matplot... | 0 | 1 | 61,800 |
0 | 32,776,321 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-09-25T06:31:00.000 | 5 | 1 | 0 | Why are Numpy and Pandas arrays consuming more memory than the source data? | 32,776,134 | 1.2 | python,numpy,pandas,bigdata | Memory consumption depends very much on the way data is stored. For example, the value 1 as a string takes only one byte, as a 16-bit int it takes two bytes, and as a double eight bytes. Then there is the overhead of wrapping it in an object of a DataFrame or Series. All this is done for efficient processing.
As a general rule of thumb data... | I am new to big data and I want to parse the whole dataset, so I can't split it. When I try to use a numpy array for processing 1 GB of data, it takes 4 GB of memory (in real work I am dealing with huge data). Is there any optimized way to use these arrays for this much data, or any special function to handle huge data? | 0 | 1 | 1,066 |
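A small sketch of the dtype point made above (the arrays are synthetic; .nbytes reports the raw buffer size):
```python
import numpy as np

n = 1000000
print(np.ones(n, dtype=np.int16).nbytes)    # 2 bytes per element
print(np.ones(n, dtype=np.float64).nbytes)  # 8 bytes per element

# Choosing the smallest dtype that fits your values cuts memory directly.
small = np.arange(n, dtype=np.int32)
```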
0 | 32,809,283 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2015-09-27T13:58:00.000 | 6 | 1 | 0 | Storing a Random state | 32,808,686 | 1.2 | python,random | You can save the state of the PRNG using random.getstate() (then, e.g., use pickle to save it to disk). Later, random.setstate(state) will return your PRNG to exactly the state it was in. | I'm designing a program which:
Includes randomness
Can stop executing and save its state at certain points (in XML)
Can start executing starting from a saved state
Is deterministic (so the program can run from the same state twice and produces the same result)
The problem here is saving the randomness. I can initial... | 0 | 1 | 2,249 |
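The accepted answer as a minimal sketch; the file name rng_state.pkl is a placeholder, not from the original:
```python
import pickle
import random

random.seed(42)
random.random()  # advance the PRNG a little

# Save the PRNG state at a checkpoint (e.g. alongside the XML state).
with open("rng_state.pkl", "wb") as f:
    pickle.dump(random.getstate(), f)

next_value = random.random()

# Later, or in a fresh process: restore and continue deterministically.
with open("rng_state.pkl", "rb") as f:
    random.setstate(pickle.load(f))

assert random.random() == next_value  # identical continuation
```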
0 | 32,859,613 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-09-30T06:41:00.000 | 0 | 1 | 0 | How to combine multiple feature sets in bag of words | 32,859,460 | 1.2 | python-2.7,machine-learning,scikit-learn,text-mining,text-classification | You can train individual classifiers for descriptions and merchants, and obtain a final score as a weighted combination: score = w1 * score_descriptions + w2 * score_merchants.
The values of w1 and w2 should be obtained using cross validation.
Alternatively, you can train a single multiclass classifier by combining the training dataset.
You will now ... | I have text classification data with predictions depending on categories, 'descriptions' and 'components'. I could do the classification using bag of words in python with scikit on 'descriptions'. But I want to get predictions using both categories in bag of words with weights to individual feature sets
x = descripti... | 0 | 1 | 713 |
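A sketch of the weighted combination on synthetic data; the classifier choice and all names (clf_a, w1, ...) are illustrative assumptions, not from the original answer:
```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-ins for the two feature sets (e.g. descriptions and components).
X_a, y = make_classification(n_samples=200, n_features=50, random_state=0)
X_b, _ = make_classification(n_samples=200, n_features=10, random_state=1)

clf_a = LogisticRegression().fit(X_a, y)
clf_b = LogisticRegression().fit(X_b, y)

w1, w2 = 0.7, 0.3  # weights to be tuned by cross-validation
score = (w1 * clf_a.predict_proba(X_a)[:, 1]
         + w2 * clf_b.predict_proba(X_b)[:, 1])
predictions = (score > 0.5).astype(int)
```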
0 | 33,103,479 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-09-30T14:56:00.000 | 0 | 1 | 0 | Generate orphan mesh in abaqus python | 32,869,355 | 0 | python,mesh,abaqus,orphan | In Abaqus you can only edit native meshes. In this case, as you said, you have an orphan mesh. The only way to edit this kind of mesh is to do it yourself with an external script. | I am trying to generate an orphan mesh on a part with python.
I have already defined the nodes by using a code giving by Tim in another post.
However, with the following command:
ListElem.append(myTrabPart.Element(nodes=tup, elemShape=HEX8))
I ended up with the message "there is no mesh to edit". It seems that the List... | 0 | 1 | 424 |
0 | 32,900,703 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-10-01T05:58:00.000 | 0 | 2 | 0 | PySpark - Combining Session Data without Explicit Session Key / Iterating over All Rows | 32,880,370 | 0 | python,apache-spark,pyspark,mapreduce,apache-spark-sql | Zero323's solution works great, but I wanted to post an RDD implementation as well. I think this will be helpful for people trying to translate streaming MapReduce to PySpark. My implementation basically maps keys (individuals in this case) to a list of lists for the streaming values that would associate with that key (are... | I am trying to aggregate session data without a true session "key" in PySpark. I have data where an individual is detected in an area at a specific time, and I want to aggregate that into a duration spent in each area during a specific visit (see below).
The tricky part here is that I want to infer the time someone exi... | 0 | 1 | 445 |
0 | 32,909,946 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-10-02T13:14:00.000 | 5 | 1 | 0 | Caffe: train, validation and test split | 32,908,025 | 0.761594 | python,machine-learning,neural-network,caffe,conv-neural-network | The distinction between validation and testing implies that hyperparameters may be tuned on the validation set, while nothing is ever fitted to the test set in any way.
Caffe doesn't optimize anything but the weights, and since the test set is only there for evaluation, it does exactly as expected.
Assuming you're tunin... | I've been using caffe for a while, with some success, but I have noticed in examples given that there is only ever a two way split on the data set with TRAIN and TEST phases, where the TEST set seems to act as a validation set.
Ideally I would like to have three sets, so that once the model is trained, I can save it an... | 0 | 1 | 4,078 |
0 | 33,026,758 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-10-02T17:23:00.000 | 0 | 1 | 0 | inserting training instances to scikit-learn dataset | 32,912,567 | 0 | python-2.7,numpy,scipy,scikit-learn | FIRST: I'm guessing the reason sparse data is giving a different answer than the same data converted to dense is that my sparse representation was starting feature indices from one rather than zero (because the oll library that I used previously required it). So my first column was all zero; when converted to dense it ... | I have a dataset of 15M+ training instances in the form of an svmlight dataset. I read these data using sklearn.datasets.load_svmlight_file(). The data itself is not sparse, so I don't mind converting it to any other dense representation (I would prefer that).
At some point in my program I need to add millions of new data reco... | 0 | 1 | 162 |
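A minimal sketch of the indexing fix ("train.svmlight" is a placeholder path); zero_based=False tells scikit-learn's loader that the file's feature indices start at 1, avoiding the spurious all-zero first column:
```python
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("train.svmlight", zero_based=False)
X_dense = X.toarray()  # dense representation, if memory allows
```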
0 | 33,177,976 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-10-04T17:18:00.000 | 3 | 1 | 0 | How do you free up gpu memory? | 32,936,166 | 0.53705 | python-2.7,gpu,gpgpu,theano | If borrow is set to true, garbage collection is on (the default: config.allow_gc=True), and the video card is not currently being used as a display device (doubtful, since you're using a mobile GPU), then the only other options are to reduce the parameters of the network or possibly the batch size of the model. The latter w... | When running theano, I get an error: not enough memory. See below.
What are some possible actions that can be taken to free up memory?
I know I can close applications etc, but I just want see if anyone has other ideas. For example, is it possible to reserve memory?
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 ... | 0 | 1 | 3,994 |
0 | 71,779,306 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-10-04T21:15:00.000 | 0 | 2 | 0 | DBSCAN (with metric only) in scikit-learn | 32,938,494 | 0 | python,scikit-learn,cluster-analysis,data-mining,dbscan | I wrote my own distance function following the top answer and, just as it says, it was extremely slow; the built-in distance code was much faster. I'm wondering how to speed it up. | I have objects and a distance function, and want to cluster these using the DBSCAN method in scikit-learn. My objects don't have a representation in Euclidean space. I know that it is possible to use a precomputed metric, but in my case it's very impractical, due to the large size of the distance matrix. Is there any way to overcome... | 0 | 1 | 6,418 |
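For reference, a sketch of passing a callable metric to DBSCAN (the distance function and the numeric encoding of the objects are assumptions). Note that scikit-learn may still evaluate many pairwise distances internally, so this mainly solves the API question, not the cost:
```python
import numpy as np
from sklearn.cluster import DBSCAN

def my_distance(a, b):
    # Placeholder for a domain-specific distance between two 1-D rows.
    return np.abs(a - b).sum()

X = np.random.rand(100, 3)  # each row encodes one object numerically
labels = DBSCAN(eps=0.3, min_samples=5, metric=my_distance).fit_predict(X)
```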
0 | 64,074,702 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2015-10-07T00:50:00.000 | 0 | 7 | 0 | Which columns are binary in a Pandas DataFrame? | 32,982,034 | 0 | python,numpy,pandas | You can just use the unique() function from pandas on each column in your dataset.
ex: df["colname"].unique()
This will return a list of all unique values in the specified column.
You can also use a for loop to traverse all the columns in the dataset.
ex: [df[cols].unique() for cols in df] | I have a pandas dataframe with a large number of columns and I need to find which columns are binary (with values 0 or 1 only) without looking at the data. Which function should be used? | 0 | 1 | 9,639 |
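A compact sketch that turns this into a single expression (a column counts as binary when its unique values fall within {0, 1}):
```python
import pandas as pd

df = pd.DataFrame({"a": [0, 1, 1], "b": [0.2, 0.5, 0.9], "c": [0, 0, 0]})

binary_cols = [col for col in df.columns
               if set(df[col].dropna().unique()) <= {0, 1}]
print(binary_cols)  # ['a', 'c']
```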
0 | 32,994,584 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-10-07T09:18:00.000 | 1 | 1 | 0 | NLP - Find which Verb is talking about the Noun in a sentence | 32,988,413 | 0.197375 | python,nlp,nltk | That's a good suggestion, I will try it with anaphora too.
For now, my problem is solved by the concept of noun phrase & verb phrase.
I extracted clause(s) from the sentence
identified verbs & nouns in each, and
related them through an iterative technique.
Thank you for the help. | Given a sentence, using Python NLTK, how can I know which verb is talking about which noun?
Eg: Cat sat on the mat.
Here "sat(verb)" is talking about "Cat(noun)".
Consider a complex sentence which has more nouns & verbs
Thank You. | 0 | 1 | 1,077 |
0 | 33,027,650 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-10-08T22:37:00.000 | 2 | 1 | 0 | What's the purpose of Series instead of lists in Pandas and Python? | 33,027,086 | 1.2 | python,pandas | This isn't going to be a very complete answer, but hopefully is an intuitive "general" answer.
Pandas doesn't use a list as the "core" unit that makes up a DataFrame because Series objects make assumptions that lists do not. A list in Python makes very few assumptions about what is inside; it could be pretty much an... | Why doesn't Pandas build DataFrames directly from lists? Why was such a thing as a Series created in the first place?
Or: If the data in a DataFrame is actually stored in memory as a collection of Series, why not just use a collection of lists?
Yet another way to ask the same question: what's the purpose of Series over... | 0 | 1 | 221 |
0 | 33,040,012 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-10-09T13:43:00.000 | 2 | 1 | 0 | MiniBatchKMeans Python | 33,039,884 | 1.2 | python,machine-learning,scikit-learn,cluster-computing | The batch size is defined by batch_size, period. Furthermore you can define init_size, which is the size of the sample taken to initialize the process; by default it is 3*batch_size. You can simply set batch_size=100 and init_size=10, and then 10 samples are used to perform initialization (k-means is not globally converge... | I am using the function MiniBatchKMeans() from scikit-learn. Well,
in its documentation there are:
batch_size : int, optional, default: 100
Size of the mini batches.
init_size : int, optional, default: 3 * batch_size
Number of samples to randomly sample for speeding up the initialization (sometimes at the expense ... | 0 | 1 | 879 |
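A minimal sketch of those parameters in use (synthetic blobs; the exact values are illustrative):
```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=10000, centers=5, random_state=0)

mbk = MiniBatchKMeans(n_clusters=5,
                      batch_size=100,   # size of each mini batch
                      init_size=300,    # samples used for the initialization
                      random_state=0)
labels = mbk.fit_predict(X)
```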
0 | 33,043,867 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-10-09T17:06:00.000 | 1 | 2 | 0 | How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive? | 33,043,704 | 0.099668 | python,hadoop,hive,hbase,bigdata | If it's already in CSV or any format on the Linux file system that Pig can understand, just do a hadoop fs -copyFromLocal to copy it into HDFS
If you want to read/process the raw H5 File format using Python on HDFS, look at hadoop-streaming (map/reduce)
Python can handle 2GB on a decent linux system- not sure if you need hadoop f... | I have downloaded a subset of million song data set which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folder they are all in several 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load then into HDFS so I can run some data ana... | 0 | 1 | 726 |
0 | 50,411,499 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-10-09T17:06:00.000 | 0 | 2 | 0 | How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive? | 33,043,704 | 0 | python,hadoop,hive,hbase,bigdata | Don't load such an amount of small files into HDFS. Hadoop doesn't handle lots of small files well. Each small file will incur overhead because the block size (usually 64 MB) is much bigger.
I want to do it myself, so I'm thinking of solutions. The million song dataset files don't have more than 1MB. My approach will be... | I have downloaded a subset of million song data set which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folder they are all in several 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load then into HDFS so I can run some data ana... | 0 | 1 | 726 |
0 | 67,288,557 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-10-10T13:51:00.000 | 0 | 3 | 0 | Subtracting Background From Image using Opencv in Python | 33,054,711 | 0 | python-2.7,opencv | replace foreground = np.absolute(frame - background)
with foreground = cv2.absdiff(frame, background) | The following program displays 'foreground' completely black and not 'frame'. I also checked that all the values in 'frame' is equal to the values in 'foreground'.
They have same channels,data type etc.
I am using python 2.7.6 and OpenCV version 2.4.8
import cv2
import numpy as np
def subtractBackground(f... | 0 | 1 | 3,991 |
0 | 38,753,037 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-10-10T15:35:00.000 | 0 | 1 | 0 | Cannot connect to Jupyter Notebook server in Azure HDInsight | 33,055,691 | 0 | python,azure,apache-spark,azure-hdinsight,jupyter | Just saw this question way too late, but I will venture that you are using an unsupported browser.
Please use Chrome to connect to Jupyter. | I am trying to run a Python module using a Jupyter Notebook on Azure HDInsight, but I continue to get the following error message: A connection to the notebook server could not be established. The notebook will continue trying to reconnect, but until it does, you will NOT be able to run code. Check your network connect... | 0 | 1 | 2,411 |
0 | 40,980,117 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-10-12T12:10:00.000 | 0 | 3 | 0 | importing csv file in python | 33,080,794 | 0 | python,csv,python-3.x | First of all, at the top of the code do import csv
After that you need to set a variable name; this is so you can open the CSV file. For example, data=open('CSV name', 'rt') You will need to fill in where it says CSV name. That's how you open it.
To read a CSV file, you set another variable. For example, data2=csv.rea... | I want to import CSV files in a python script. Column and row numbers are not fixed , first row contains name of the variables and next rows are values of those variables.
I am new to Python, any help is appreciated. thanks. | 0 | 1 | 4,532 |
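The answer above as a runnable sketch ("data.csv" is a placeholder path); the first row is treated as the variable names, the rest as values:
```python
import csv

with open("data.csv", "rt") as f:
    reader = csv.reader(f)
    header = next(reader)           # variable names from the first row
    rows = [row for row in reader]  # the remaining value rows

print(header)
print(rows[0])
```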
0 | 33,101,046 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-10-12T13:19:00.000 | 0 | 1 | 0 | OpenCV - using digital cameras. | 33,082,220 | 0 | python,opencv | Install Drivers for required camera, connect it, and use cv2.VideoCapture(int). Here, instead of 0, use a different integer according to the camera. By default, 0 is for the inbuilt webcam.
e.g.: cv2.VideoCapture(1) | The quality of video recording that is required for our project is not met by the webcams. Is it possible to use high megapixel digital cameras (Sony, Canon, Olympus) with OpenCV ?
How to talk to the digital cameras using OpenCV (and specifically using Python) | 0 | 1 | 1,805 |
0 | 33,087,239 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-10-12T17:16:00.000 | 3 | 1 | 0 | print a pandas dataframe to text with lines longer than 80 chars | 33,086,758 | 1.2 | python,pandas | Try changing pandas.options.display.width (it's 80 by default). | I want to print a DataFrame to a text file.
Say I have a table with 4 lines, and 12 columns.
It looks quite nice when I just use print df, with all the values of a column aligned to the right, however, when there are too many columns (8 in my case) it breaks the table down so that the last 4 columns are printed after 4... | 0 | 1 | 259 |
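A minimal sketch of that setting (the width of 200 is an arbitrary example):
```python
import pandas as pd

pd.options.display.width = 200  # default is 80; wide frames no longer wrap
# equivalently: pd.set_option('display.width', 200)

df = pd.DataFrame([[1.0] * 12] * 4)  # 4 rows x 12 columns
print(df)
```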
0 | 33,094,494 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2015-10-13T00:55:00.000 | 6 | 2 | 0 | What is meant by PCA preserving only large pairwise distances? | 33,092,493 | 1 | python,matplotlib,machine-learning,visualization,pca | Don't confuse PCA with dimensionality reduction.
PCA is a rotation transformation that aligns the data with the axes in such a way that the first dimension has maximum variance, the second maximum variance among the remainder, etc. Rotations preserve pairwise distances.
When you use PCA for dimensionality reduction, yo... | I am currently reading up on t-SNE visualization technique and it was mentioned that one of the drawback of using PCA for visualizing high dimension data is that it only preserves large pairwise distances between the points. Meaning points which are far apart in high dimension would also appear far apart in low dimens... | 0 | 1 | 1,245 |
0 | 66,142,281 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2015-10-13T00:55:00.000 | 0 | 2 | 0 | What is meant by PCA preserving only large pairwise distances? | 33,092,493 | 0 | python,matplotlib,machine-learning,visualization,pca | If I can re-phrase @Don Reba's comment:
The PCA transformation itself does not alter distances.
The 2-dimensional plot often used to visualise the PCA results takes into account only two dimensions, disregards all the other dimensions, and as such this visualisation provides a distorted representation of distances. | I am currently reading up on t-SNE visualization technique and it was mentioned that one of the drawback of using PCA for visualizing high dimension data is that it only preserves large pairwise distances between the points. Meaning points which are far apart in high dimension would also appear far apart in low dimens... | 0 | 1 | 1,245 |
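A quick numerical check of this point on synthetic data: keeping all components is a pure rotation (plus centering), so pairwise distances are preserved; truncating to 2 components distorts them:
```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

X = np.random.rand(50, 10)

X_full = PCA(n_components=10).fit_transform(X)  # rotation only
print(np.allclose(pdist(X), pdist(X_full)))     # True

X_2d = PCA(n_components=2).fit_transform(X)     # dimensionality reduction
print(np.allclose(pdist(X), pdist(X_2d)))       # False
```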
0 | 33,124,530 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-10-13T21:23:00.000 | 0 | 1 | 0 | Can Orange read in IF...THEN text format rule file and use it to score another dataset? | 33,112,874 | 0 | python,orange | No, there is no function to do this. Apparently nobody ever needed it.
You can do it yourself, but if you know some Python it should be easier to test a list of rules without using Orange. | I am wondering if Orange can read in a text format rule file and use it to score another dataset. For example, a rule.txt file was previously created in Orange through rule_to_string function and contains rules in this IF...THEN format:
"IF sex=['female'] AND status=['first'] THEN survived=yes". Can Orange read in the... | 0 | 1 | 86 |
0 | 33,152,516 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-10-15T14:05:00.000 | 2 | 1 | 0 | Caching Pandas Dataframe by Serialization or In-memory KV Store | 33,150,684 | 0.379949 | python,caching,pandas,redis | I have a DF of ~1 GB of plain text data. Assuming that dumping to disk is always slower than reading, I compared HDF5 write performance with pickle.
HDF5 took 35 sec while pickle did 190 sec. So, you could consider using HDF5 instead of pickle | Which method of caching pandas DataFrame objcts will provide the highest performance? By storing it to a flat file on disk using pickle, or by storing it in a key-value store like Redis? | 0 | 1 | 1,886 |
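Both options in sketch form (file names are placeholders; to_hdf needs the PyTables package installed):
```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000000, 5))

df.to_hdf("cache.h5", key="df", mode="w")  # HDF5
cached = pd.read_hdf("cache.h5", "df")

df.to_pickle("cache.pkl")                  # pickle, for comparison
cached = pd.read_pickle("cache.pkl")
```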
0 | 44,307,542 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2015-10-16T01:47:00.000 | 1 | 2 | 0 | Seaborn Restore marker edges | 33,161,270 | 0.099668 | python,matplotlib,seaborn | A solution to this is after importing seaborn do the following:
matplotlib.rcParams['lines.markeredgewidth'] = 1 | Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them.
e.g. plt.plot(x,y,marker='s',markerfacecolor='none')
results in a plot without markers.
Is there a way to get the edges back?
markeredgecolor='k' has no effect. | 0 | 1 | 3,009 |
0 | 43,644,522 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2015-10-16T01:47:00.000 | 1 | 2 | 0 | Seaborn Restore marker edges | 33,161,270 | 0.099668 | python,matplotlib,seaborn | Give edgecolor='k' a try. This worked for me in a similar scatter plot. | Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them.
e.g. plt.plot(x,y,marker='s',markerfacecolor='none')
results in a plot without markers.
Is there a way to get the edges back?
markeredgecolor='k' has no effect. | 0 | 1 | 3,009 |
0 | 33,162,435 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-10-16T03:56:00.000 | 0 | 2 | 0 | Use Python to Change csv Data Column Format | 33,162,320 | 0 | python,excel,csv,pandas | This is a time formatting problem/philosophy by Excel. For some reason, Microsoft prefers to hide seconds and sub-seconds on user displays: even MSDOS's dir command omitted seconds.
If I were you, I'd use Excel's format operation and set it to display seconds, then save the spreadsheet as CSV and see if it put anyth... | I am using python pandas to read csv file. The csv file has a datetime column that has second precisions "9/1/2015 9:25:00 AM", but if I open in excel, it has only minute precisions "9/1/15 9:25". Moreover, when I use the pd.read_csv() function, it only shows up to minute precision. Is there any way that I could solve... | 0 | 1 | 1,815 |
0 | 33,167,366 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-10-16T08:21:00.000 | 2 | 2 | 0 | Fast Kalman Filter | 33,165,668 | 1.2 | python,cython,kalman-filter | The size of the covariance matrix is driven by the size of your state. Another question relates to the assumptions on your model and if this can bring up significant optimizations (obviously, optimizing implies reworking the "standard KF").
From my POV, your situation roughly depends on the value (number_of_states² * ... | I wonder if anyone can give me a pointer to really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions), and many input (cues; say, between tens to hundred thousands). Thus,... | 0 | 1 | 1,111 |
0 | 33,264,437 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-10-16T08:21:00.000 | 1 | 2 | 0 | Fast Kalman Filter | 33,165,668 | 0.099668 | python,cython,kalman-filter | If you have many measurements per update, you should look at the information form of the Kalman filter. Each additional measurement is just addition. The tradeoff is a more complex predict step, and the cost of inverting the information matrix whenever you want to get your state out. | I wonder if anyone can give me a pointer to really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions), and many input (cues; say, between tens to hundred thousands). Thus,... | 0 | 1 | 1,111 |
0 | 33,170,242 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-10-16T10:59:00.000 | 1 | 1 | 0 | Implementing online learning with time series | 33,168,836 | 0.197375 | python,r,machine-learning,scikit-learn | If you have to make predictions at each time stamp, then this doesn't become a time series problem (unless you plan to use the sequence of previous observations to make your next prediction, in which case you will need to train a sequence-based model). Assuming you can only train a model based on the final data you o... | I have a classification problem with time series data.
Each example has 10 variables which are measured at irregular intervals and in the end the object is classified into 1 of the 2 possible classes (binary classification).
I have only the final class of the example to learn from during training. But when given a new ... | 0 | 1 | 317 |
0 | 33,175,701 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-10-16T13:45:00.000 | 1 | 1 | 0 | Speed up Python MST calculation using Delaunay Triangulation | 33,172,090 | 1.2 | python,algorithm,performance,minimum-spanning-tree,delaunay | NB: this assumes we're working in 2-d
I suspect that what you are doing now is feeding all point-to-point distances to the MST library. There are on the order of N^2 of these distances and the asymptotic runtime of Kruskal's algorithm on such an input is N^2 * log N.
Most algorithms for Delaunay triangulation take N lo... | I have a code that makes Minimum Spanning Trees of many sets of points (about 25000 data sets containing 40-10000 points in each set) and this is obviously taking a while. I am using the MST algorithm from scipy.sparse.csgraph.
I have been told that the MST is a subset of the Delaunay Triangulation, so it was suggested... | 0 | 1 | 659 |
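A sketch of the suggested speed-up on synthetic 2-D points: build only the O(N) Delaunay edges and hand those to the sparse MST routine already in use:
```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import Delaunay

pts = np.random.rand(1000, 2)
tri = Delaunay(pts)

edges = set()
for simplex in tri.simplices:          # each simplex is a triangle (3 vertices)
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

rows, cols = zip(*edges)
weights = np.linalg.norm(pts[list(rows)] - pts[list(cols)], axis=1)
graph = coo_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))

mst = minimum_spanning_tree(graph)     # sparse matrix holding the MST edges
```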
0 | 33,197,300 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-10-18T05:51:00.000 | 0 | 1 | 0 | Resampling an irregular distributed 1-D signal in python | 33,194,779 | 1.2 | python,arrays,numpy,resampling | Following Warren Weckesser's comment, the answer is using scipy.interpolate.interp1d | I've a nx2 ndarray which represent a height profile of the form h(x), with x being a non-negative real number and h(x) the height value in x. The x-values are irregular distributed, meaning:
x[i] - x[i - 1] != x[i + 1] - x[i]
I would like to take my array and create a new one with evenly spaced x-values with the corres... | 0 | 1 | 142 |
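The interp1d answer as a runnable sketch (the sample profile is hypothetical):
```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 0.7, 1.1, 2.5, 3.0])  # irregularly spaced positions
h = np.array([1.0, 1.3, 0.9, 1.8, 2.0])  # heights h(x)

f = interp1d(x, h)                          # linear interpolation by default
x_even = np.linspace(x.min(), x.max(), 50)  # evenly spaced grid
h_even = f(x_even)                          # resampled profile
```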
0 | 33,197,079 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-10-18T09:36:00.000 | 0 | 1 | 0 | How to Calculate width of the middle 98% mass of the gray level histogram of a image | 33,196,427 | 1.2 | python-2.7,image-processing,histogram,contrast | Let the total mass of the histogram be M.
Accumulate the mass in the bins, starting from index zero, until you pass 0.01 M. You get an index Q01.
Accumulate the mass in the bins, starting from the maximum index and moving downward, until you pass 0.01 M. You get an index Q99.
These indexes are the so-called first and last percentiles. The... | I need to calculate the contrast of an color image, so the steps that was given to me are,
compute the histogram for each RGB channel separately and combine them as Histogram = histOfRedC + histOfBlueC + histOfGreenC.
normalize it to unit length, as each image is of a different size.
The contrast quality is equal ... | 0 | 1 | 140 |
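The percentile recipe from the answer, sketched on a synthetic histogram with a cumulative sum instead of explicit loops:
```python
import numpy as np

hist = np.random.randint(0, 50, size=256).astype(float)  # placeholder histogram
cdf = np.cumsum(hist) / hist.sum()

q01 = np.searchsorted(cdf, 0.01)  # first bin past 1% of the mass
q99 = np.searchsorted(cdf, 0.99)  # first bin past 99% of the mass
width_middle_98 = q99 - q01
```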
0 | 33,282,334 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-10-19T13:55:00.000 | 1 | 1 | 0 | extracting the data through python script in paraview | 33,216,350 | 1.2 | python,paraview | Usually plots are made by plotting one data array versus another. You can often obtain that data directly from the filter/source that produced it and save it to a CSV file.
To do this, select the filter/source in the Pipeline Browser and choose File -> Save ... | How can I extract data from a Plot Data filter in ParaView through a Python script? I want to get the data from which ParaView is drawing the graph.
If anyone knows the answer, please help.
Thank you | 0 | 1 | 628 |
0 | 33,228,345 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-10-20T03:15:00.000 | 0 | 1 | 0 | Pandas' version of numpy.resize for efficient matrix resizing | 33,227,369 | 0 | python,arrays,numpy,pandas,resize | If you want to stay within 'Pandas', I would suggest one of the following:
df.unstack() which would result in shape (len(index2), maxlen * num_columns) following your notation; here columns will be stored as a MultiIndex.
Alternatively, you can use df.to_panel(); Panel is a natural Pandas data structure used for 3 dime... | I have a dataframe with two indexes. (Both timestamps but thats probably not relevant).
I need to get out a numpy matrix with shape (len(first_index), maxlen, num_columns).
maxlen is some number (likely the max of all of the len(second_index)) or just something simple like 1000.
I can do this with arr = df.as_matrix(.... | 0 | 1 | 1,511 |
0 | 42,767,296 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2015-10-20T10:35:00.000 | 3 | 3 | 0 | Access pixel values within a contour boundary using OpenCV in Python | 33,234,363 | 0.197375 | python,image,opencv,image-processing,opencv-contour | The answer from @rayryeng is excellent!
One small note from my implementation:
np.where() returns a tuple, which contains an array of row indices and an array of column indices. So, pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which corres... | I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video.... | 0 | 1 | 32,084 |
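The mask-based recipe this note refers to, sketched on a synthetic image; the [-2] indexing hedges against the differing findContours return signatures across OpenCV versions:
```python
import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 40, 255, -1)  # a filled blob to track

# OpenCV 3.x returns (image, contours, hierarchy); 2.x/4.x return (contours, hierarchy).
contours = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

mask = np.zeros_like(img)
cv2.drawContours(mask, contours, 0, 255, -1)  # fill the first contour

pts = np.where(mask == 255)     # pts[0]: row indices, pts[1]: column indices
values = img[pts[0], pts[1]]    # pixel values inside the contour boundary
```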
0 | 33,290,142 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-10-21T02:23:00.000 | 0 | 1 | 0 | About learning curves | 33,249,904 | 1.2 | python,machine-learning,scikit-learn | If the gap between the training and cross-validation accuracy is increasing then this is an indication that your model is overfitting on the training data.
With every iteration (supplying additional training data) your model is better able to capture the training data, however it is no longer able to better generalise ... | I am trying to plot the learning curves for my SVC classifier with sklearn.learning_curve. From the plot, I find that both of my training scores and test scores increases simultaneously. But the gap between the training curve and cross-validation curve becomes larger with the increasing number of the samples. As I know... | 0 | 1 | 959 |
0 | 33,261,884 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-10-21T13:54:00.000 | 0 | 2 | 0 | Python multiple rows to one row | 33,261,261 | 0 | python,pandas,group-by | You need to make a dictionary, where the key is the id. Each value of that is going to be another dictionary of outN to value.
Read a line. You get an id, outN, and a value. Check that you have a dict for that id first, and if not, create one. Then shove the value for that outN into the dict for that id.
Second step: You ... | I have a question per below - I need to transform multiple rows of ID into one row, and let the different "output"-values become columns with binary 1/0, like example.
Here is my table!
ID Output Timestamp
1 out1 1501
1 out2 1501
1 out5 1501
1 out9 1501
2 out3 ... | 0 | 1 | 1,859 |
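As an alternative to the hand-rolled dictionary approach, pandas can do this in one call (pd.crosstab counts ID/Output pairs; clip caps repeats at 1 for a binary result):
```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 1, 1, 2],
                   "Output": ["out1", "out2", "out5", "out9", "out3"],
                   "Timestamp": [1501, 1501, 1501, 1501, 1501]})

wide = pd.crosstab(df["ID"], df["Output"]).clip(upper=1).reset_index()
```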
0 | 34,088,151 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-10-22T22:50:00.000 | 0 | 1 | 0 | Spark: How to start remotely Jupyter in 'yarn_client' mode from a different user | 33,292,063 | 0 | hadoop,apache-spark,ipython,pyspark,jupyter | I have a working deployment of CDH5.5 + jupyter with pyspark and scala native spark. In my case I am using a dedicated user to start a jupyter server and then connecting to it from a client browser.
Before sharing some thoughts about your problem I would like to point out that if your fifth server is not close connecte... | Let's assume I've got a 4 nodes Hadoop cluster (Cloudera distro in my case) with a user named 'hadoop' on each node ('/home/hadoop'). Also, I've got a fifth server with installed on it, Jupyter and Anaconda with a user named 'ipython', but without hadoop installation.
Let's say I want to start Jupyter remotely from tha... | 0 | 1 | 1,327 |
0 | 33,339,510 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-10-24T23:31:00.000 | 2 | 1 | 0 | How to disable wheel_zoom in Bokeh? | 33,324,475 | 1.2 | python,bokeh | There is an open PR to improve this; it will be in the 0.11 release. | Usually I do plotting inside of IPython Notebook with pylab mode.
Whenever I use Bokeh, I like to enable output_notebook() to show my plot inside of the IPython notebook.
The most annoying part is that Bokeh enables wheel_zoom by default, which causes unintended zooming in the IPython notebook.
I know I can avoid this by passing com... | 0 | 1 | 513 |
0 | 42,668,700 | 0 | 0 | 0 | 0 | 1 | false | 131 | 2015-10-26T13:08:00.000 | 1 | 5 | 0 | What is the difference between size and count in pandas? | 33,346,591 | 0.039979 | python,pandas,numpy,nan,difference | When we are dealing with normal dataframes, the only difference is the handling of NaN values: count does not include NaN values when counting rows.
But if we are using these functions with groupby then, to get correct results from count(), we have to associate a numeric field with the groupby to get... | What is the difference between groupby("x").count() and groupby("x").size() in pandas?
Does size just exclude NaN? | 0 | 1 | 60,325 |
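A small demonstration of the difference on toy data:
```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": ["a", "a", "b"], "y": [1.0, np.nan, 2.0]})

print(df.groupby("x").size())        # counts rows, NaN included:  a -> 2, b -> 1
print(df.groupby("x")["y"].count())  # counts non-NaN values only: a -> 1, b -> 1
```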
0 | 52,785,994 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2015-10-26T15:55:00.000 | 0 | 2 | 0 | Preventing PyTables (in Pandas) from printing "Closing remaining open files..." | 33,350,153 | 0 | python,pandas,pytables | You really have to close the open store manually. There is no other way.
Why? PyTables uses a file registry to track open files. A destructor for this file registry is registered with Python's atexit module, which is called when the Python interpreter exits. If this destructor method is called, it will print out t... | Is there a way to prevent PyTables from printing out
Closing remaining open files:path/to/store.h5...done?
I want to get rid of it just because it is clogging up the terminal.
I'm using pandas.HDFStore if that matters. | 0 | 1 | 1,063 |
0 | 37,483,626 | 0 | 0 | 0 | 0 | 1 | false | 207 | 2015-10-27T03:59:00.000 | 4 | 7 | 0 | Random number between 0 and 1? | 33,359,740 | 0.113791 | python,random | random.randrange(0, 2) returns only the integers 0 or 1; for a float between 0 and 1, use random.random() - this works! | I want a random number between 0 and 1, like 0.3452. I used random.randrange(0, 1) but it is always 0 for me. What should I do? | 0 | 1 | 517,528 |
1 | 33,434,056 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-10-29T15:41:00.000 | 0 | 2 | 0 | tkinter opencv and numpy in windows with python2.7 | 33,418,678 | 0 | python,windows,opencv,numpy,tkinter | Finally did it with .whl files. Download them, copy to C:\python27\Scripts and then open "cmd" and navigate to that folder with "cd\" etc. Once there run:
pip install numpy-1.10.1+mkl-cp27-none-win_amd64.whl
for example.
In IDLE I then get:
import numpy
numpy.__version__
'1.10.1' | I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP f... | 0 | 1 | 170 |
1 | 33,441,221 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-10-29T15:41:00.000 | 0 | 2 | 0 | tkinter opencv and numpy in windows with python2.7 | 33,418,678 | 0 | python,windows,opencv,numpy,tkinter | small remark: WinPython has tkinter, as it's included by Python Interpreter itself | I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP f... | 0 | 1 | 170 |
0 | 33,421,040 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-10-29T17:14:00.000 | 0 | 1 | 0 | Padding python pivot tables with 0 | 33,420,633 | 0 | python,python-2.7,pandas,dataframe,pivot-table | I'm going to be general here, since there was no sample code or data provided. Let's say your original dataframe is called df and has columns Date and Sales.
I would try creating a list that has all dates from 01-01-2014 to 12-31-2015. Let's call this list dates. I would also create an empty list called sales (i.e. sal... | I have a pivot table which has an index of dates ranging from 01-01-2014 to 12-31-2015. I would like the index to range from 01-01-2013 to 12-31-2016 and do not know how without modifying the underlying dataset by inserting a row in my pandas dataframe with those dates in the column I want to use as my index for the pi... | 0 | 1 | 214 |
0 | 35,586,970 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-10-31T09:59:00.000 | 0 | 1 | 0 | Combining SVM Classifiers in MapReduce | 33,450,285 | 0 | python,mapreduce,scikit-learn,svm | Make sure that all of the required libraries (scikit-learn, NumPy, pandas) are installed on every node in your cluster.
Your mapper will process each line of input, i.e., your training row and emit a key that basically represents the fold for which you will be training your classifier.
Your reducer will collect the lin... | I've been tasked with solving a sentiment classification problem using scikit-learn, python, and mapreduce. I need to use mapreduce to parallelize the project, thus creating multiple SVM classifiers. I am then supposed to "average" the classifiers together, but I am not sure how that works or if it is even possible. Th... | 0 | 1 | 453 |
0 | 33,458,868 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2015-11-01T03:15:00.000 | 8 | 1 | 0 | how to make 1 by n dataframe from series in pandas? | 33,458,865 | 1.2 | python,pandas,dataframe,series | You can do df.ix[[n]] to get a one-row dataframe of row n. | I have a huge dataframe, and I index it like so:
df.ix[<integer>]
Depending on the index, sometimes this will have only one row of values. Pandas automatically converts this to a Series, which, quite frankly, is annoying because I can't operate on it the same way I can a df.
How do I either:
1) Stop pandas from con... | 0 | 1 | 1,239 |
0 | 33,479,441 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-11-01T16:16:00.000 | 2 | 1 | 0 | How exactly BIC in Augmented Dickey–Fuller test work in Python? | 33,464,294 | 1.2 | python,statsmodels | When we request automatic lag selection in adfulller, then the function needs to compare all models up to the given maxlag lags. For this comparison we need to use the same observations for all models. Because lagged observations enter the regressor matrix we loose observations as initial conditions corresponding to th... | This question is on Augmented Dickey–Fuller test implementation in statsmodels.tsa.stattools python library - adfuller().
In principle, AIC and BIC are supposed to compute information criterion for a set of available models and pick up the best (the one with the lowest information loss).
But how do they operate in the ... | 0 | 1 | 1,081 |
0 | 33,465,756 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-11-01T18:32:00.000 | 0 | 2 | 0 | What is a good way to implement several very similar functions? | 33,465,685 | 0 | python,oop | More information needs to be given to fully understand the context. But, in a general sense, I'd do a mix of all of them. Use helper functions for "shared" parts, and use conditional statements too. Honestly, a lot of it comes down to just what is easier for you to do? | I need several very similar plotting functions in python that share many arguments, but differ in some and of course also differ slightly in what they do. This is what I came up with so far:
Obviously just defining them one after the other and copying the code they share is a possibility, though not a very good one, I... | 0 | 1 | 444 |
0 | 33,504,368 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2015-11-02T02:01:00.000 | 7 | 2 | 0 | How to transform items using sklearn Pipeline? | 33,469,633 | 1 | python,machine-learning,scikit-learn | The reason why the results are different (and why calling transform even works) is that LinearSVC also has a transform (now deprecated) that does feature selection
If you want to transform using just the first step, pipeline.named_steps['tfidf'].transform([item]) is the right thing to do.
If you would like to transfor... | I have a simple scikit-learn Pipeline of two steps: a TfIdfVectorizer followed by a LinearSVC.
I have fit the pipeline using my data. All good.
Now I want to transform (not predict!) an item, using my fitted pipeline.
I tried pipeline.transform([item]), but it gives a different result compared to pipeline.named_steps['... | 0 | 1 | 7,410 |
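The named_steps recipe in sketch form, on a toy corpus:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

pipeline = Pipeline([("tfidf", TfidfVectorizer()), ("svc", LinearSVC())])
pipeline.fit(["good movie", "bad movie", "great film", "awful film"],
             [1, 0, 1, 0])

item = "a great movie"
vec = pipeline.named_steps["tfidf"].transform([item])  # first step only
```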
0 | 33,481,202 | 0 | 1 | 0 | 1 | 1 | true | 0 | 2015-11-02T14:14:00.000 | 0 | 1 | 0 | Importing data from text file and saving the same in excel | 33,479,646 | 1.2 | matlab,python-2.7,csv,export-to-csv | To read a text file in MATLAB you can use fscanf or textscan; then, to export to Excel, you can use xlswrite, which writes directly to the Excel file. | I am trying to read data from text files (the output given by Tesseract OCR) and save the same in an Excel file. The problem I am facing here is that the text files are in space-separated format, and there are multiple files. Now I need to read all the files and save the same in an Excel sheet.
I am using MATLAB to import and... | 0 | 1 | 212 |
0 | 34,476,701 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-11-03T11:09:00.000 | 1 | 1 | 0 | Why is wsgi looking for a library in /lib64 when the correct version is in the python distribution | 33,497,639 | 0.197375 | python-2.7,mod-wsgi | Copy all the files libz.so* to any path in your LD_LIBRARY_PATH
Long story short: I have Miniconda and was stuck on the same issue. I realised that conda prefers to search for libraries in LD_LIBRARY_PATH rather than its own libs.
Hence, you need to make missing library available in LD_LIBRARY_PATH, adding the whole conda lib direct... | I've created a flask application that I'm trying to deploy on an apache server. I've installed a conda distribution of python where I've downloaded associated modules, including flask, matplotlib and others. I'm using wsgi to launch the application.
The problem I'm having is when the server runs wsgi script it fails sa... | 0 | 1 | 725 |
0 | 34,154,972 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-11-05T03:30:00.000 | 7 | 2 | 0 | Testing the Keras sentiment classification with model.predict | 33,536,182 | 1.2 | python,sentiment-analysis,lstm,keras | So what you basically need to do is as follows:
Tokenize sequences: convert the string into words (features). For example: "hello my name is georgio" to ["hello", "my", "name", "is", "georgio"].
Next, you want to remove stop words (check Google for what stop words are).
This stage is optional, it may lead to faulty res... | I have trained the imdb_lstm.py on my PC.
Now I want to test the trained network by inputting some text of my own. How do I do it?
Thank you! | 0 | 1 | 2,818 |
0 | 33,553,902 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-11-05T18:55:00.000 | 0 | 1 | 0 | Does Eigenface method use unsupervised trainning | 33,552,557 | 0 | python,face-recognition | Eigenfaces require supervised learning. You generally supply several of each subject, classifying them by identifying the subject. The eigenface model then classifies later images (often real-time snapshots) as to identity. | Eigenface method is a powerful method in face recognition. It uses the training images to find the eigenfaces and then use these eigenfaces to represent a new test image. Do the images in training dataset need to be labeled, or it is unsupervised training? | 0 | 1 | 116 |
0 | 33,560,748 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-11-06T05:45:00.000 | 0 | 1 | 0 | Calculating the Angle Between Vectors by using a vector as a reference point: | 33,560,269 | 1.2 | python,cosine-similarity,trigonometry | That approach will only work for 2-D vectors. For higher dimensions any two vectors will define a hyperplane, and only if the third (reference) vector also lies within this hyperplane will your approach work. Unfortunately instead of only calculating n angles and subtracting, in order to determine the angles between ... | I have been trying to find a fast algorithm for calculating all the angles between n vectors of length x. For example if x=3 and n=4, my data would look something like this:
A: [1,2,3]
B: [2,3,4]
C: [...]
D: [...]
I was wondering if it is acceptable to find the angle between all of the vectors (A, B, C, D) with res... | 0 | 1 | 390 |
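For the general n-dimensional case, all pairwise angles can be computed directly without a reference vector (a numpy sketch on the sample data):
```python
import numpy as np

V = np.array([[1, 2, 3],
              [2, 3, 4],
              [0, 1, 0],
              [3, 2, 1]], dtype=float)  # n=4 vectors of length x=3

U = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit vectors
cos = np.clip(U.dot(U.T), -1.0, 1.0)              # all pairwise cosines
angles = np.arccos(cos)                           # n x n angle matrix (radians)
```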
0 | 33,596,513 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-11-07T21:04:00.000 | 0 | 1 | 0 | finding a local maximum in a 3d array (array of images) in python | 33,587,761 | 0 | python,opencv,image-processing,computer-vision | I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3d, then it is also in any 2d slice, therefore:
Dilate each image in the array with a 3x3 kernel, keep as candidate maxima the points whose intensity is unchanged.
Brute-force test the candidates again... | I'm trying to implement a blob detector based on LOG, the steps are:
creating an array of n levels of LOG filters
use each of the filters on the input image to create a 3d array of h*w*n where h = height, w = width and n = number of levels.
find a local maxima and circle the blob in the original image.
I already crea... | 0 | 1 | 1,030 |
0 | 33,729,058 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-11-09T06:03:00.000 | 1 | 2 | 0 | Create a "spotlight" in an image using Python | 33,603,304 | 1.2 | python,image-processing | I finally did it with ImageMagick, using Python to calculate the various coordinates, etc.
This command will create the desired circle (radius 400, centered at (600, 600)):
convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 -draw "translate 600,600 circle 0,0 400,0" drawn.png
This command will... | Here's what I'm trying to do:
I have an image.
I want to take a circular region in the image, and have it appear as normal.
The rest of the image should appear darker.
This way, it will be as if the circular region is "highlighted".
I would much appreciate feedback on how to do it in Python.
Manually, in Gimp, I wou... | 0 | 1 | 687 |
0 | 33,611,826 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2015-11-09T09:27:00.000 | 2 | 1 | 0 | Statsmodels Logistic Regression class imbalance | 33,605,979 | 1.2 | python,statistics,statsmodels | programmer's answer:
statsmodels Logit and other discrete models don't have weights yet. (*)
GLM Binomial has implicitly defined case weights through the number of successful and unsuccessful trials per observation. It would also allow manipulating the weights through the GLM variance function, but that is not official... | I'd like to run a logistic regression on a dataset with 0.5% positive class by re-balancing the dataset through class or sample weights. I can do this in scikit learn, but it doesn't provide any of the inferential stats for the model (confidence intervals, p-values, residual analysis).
Is this possible to do in statsm... | 0 | 1 | 3,765 |
0 | 33,617,441 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-11-09T11:55:00.000 | 2 | 1 | 0 | scikit learn mean shift clustering in one-dimensional array | 33,608,541 | 0.379949 | python,scikit-learn,cluster-analysis | It does not make sense to run mean-shift on one-dimensional data.
Do regular kernel density estimation instead. Locate the minima, and split the data set there.
Mean shift is for data that is too complex for proper KDE.
One dimensional data never is. | how can I run a mean shift clustering on a 1D array?
Here is my dataframe:
>>>df
INFO FREQ
R2 31 0.2468213
R5 27 0.003670532
UR 25 0.00337465
I need to apply the clustering on the "INFO" column.
With k-means I solved this problem using the reshape(-1,1) command:
k... | 0 | 1 | 1,483 |
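The KDE-based split suggested in the answer, as a sketch (the FREQ values are taken from the question; the choice of argrelextrema for locating the minima is an assumption):
```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

freq = np.array([0.2468213, 0.003670532, 0.00337465])

kde = gaussian_kde(freq)
grid = np.linspace(freq.min(), freq.max(), 500)
density = kde(grid)

minima = grid[argrelextrema(density, np.less)[0]]  # density valleys
clusters = np.digitize(freq, minima)               # cluster label per value
```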
0 | 33,621,420 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-11-09T17:12:00.000 | 1 | 3 | 0 | How to run multiple concurrent jobs in Spark using python multiprocessing | 33,614,453 | 0.066568 | python-2.7,apache-spark,hadoop-yarn,pyspark | How many CPUs do you have and how many are required per job? YARN will schedule the jobs and assign what it can on your cluster: if you require 8 CPUs for your job and your system has only 8 CPUs, then other jobs will be queued and run serially. | I have set up a Spark on YARN cluster on my laptop, and have a problem running multiple concurrent jobs in Spark, using python multiprocessing. I am running on yarn-client mode. I tried two ways to achieve this:
If you requested 4 per job then you would see 2 jobs run in parallel at any ... | I have setup a Spark on YARN cluster on my laptop, and have problem running multiple concurrent jobs in Spark, using python multiprocessing. I am running on yarn-client mode. I tried two ways to achieve this:
Setup a single SparkContext and create multiple processes to submit jobs. This method does not work, and the p... | 0 | 1 | 5,740 |
0 | 46,858,249 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-11-09T17:42:00.000 | 1 | 1 | 0 | Add markeredges in seaborn lmplot? | 33,614,947 | 0.197375 | python,seaborn,marker | As per the comment from @Sören, you can add the markeredges with the keyword scatter_kws. For example scatter_kws={'linewidths':1,'edgecolor':'k'} | sns.lmplot(x="size", y="tip", data=tips)
gives a scatter plot. By default the markers have no edges.
How can I add marker edges? Sometimes I prefer to use edges with a transparent facecolor, especially with dense data. However,
Neither markeredgewidth nor mew nor linewidths are accepted as keywords.
Does anyone know how to a... | 0 | 1 | 1,371 |
0 | 33,622,414 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2015-11-09T19:23:00.000 | 1 | 3 | 0 | How do I use distributed DNN training in TensorFlow? | 33,616,593 | 0.066568 | python,parallel-processing,deep-learning,tensorflow | Update
As you may have noticed, TensorFlow has supported distributed DNN training for quite some time. Please refer to its official website for details.
=========================================================================
Previous
No, it doesn't support distributed training yet, which is a little disappointi...
I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers.
Does it have distributed training functionality yet? | 0 | 1 | 4,046 |
0 | 44,216,923 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2015-11-10T11:13:00.000 | 15 | 2 | 0 | Python Opencv morphological closing gives src data type = 0 is not supported | 33,628,679 | 1 | python,opencv,mathematical-morphology | Make sure volume_start is dtype=uint8. You can convert it with volume_start = np.array(volume_start, dtype=np.uint8).
Or nicer:
volume_start = volume_start.astype(np.uint8) | I'm trying to morphologically close a volume with a ball structuring element created by the function SE3 = skimage.morphology.ball(8).
When using closing = cv2.morphologyEx(volume_start, cv2.MORPH_CLOSE, SE3) it returns TypeError: src data type = 0 is not supported
Do you know how to solve this issue?
Thank you | 0 | 1 | 16,369 |
0 | 33,675,492 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-11-11T01:53:00.000 | 0 | 1 | 0 | listen for ctf otf changes with traits in mayavi volume rendering | 33,642,997 | 1.2 | python-2.7,enthought,mayavi,traitsui | You are going to wade into dangerous territory. As you noted, the recorder has idiosyncratic behavior -- what that really means is that it uses features to programmatically "disable" the trait notifications while it is doing things.
You can probably figure out a way to do it that way, but most likely you'll have to dig de... | I would like to listen to changes in the transfer function in how the color and opacity (ctf/otf) of my data is represented.
Listening to sensible-sounding traits such as mayavi.modules.volume.Volume._ctf does not trigger my callback.
I would expect this to be changed by the user either through the "standard" mayavi pi... | 0 | 1 | 96 |
0 | 33,683,680 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-11-12T17:46:00.000 | 0 | 1 | 0 | Reverse-engineering a clustering algorithm from the clusters | 33,677,932 | 0 | python,scikit-learn,cluster-analysis,feature-selection | Are you sure it was done automatically?
It sounds to me as if you should be treating this as a classification problem: construct a classifier that does the same as the human did. | I have a clustering of data performed by a human based solely on their knowledge of the system. I also have a feature vector for each element. I have no knowledge about the meaning of the features, nor do I know what the reasoning behind the human clustering was.
I have complete information about which elements belong ... | 0 | 1 | 497 |
0 | 42,054,866 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-11-12T21:01:00.000 | 1 | 2 | 0 | matplotlib can't be used from python3 | 33,681,281 | 0.099668 | python,matplotlib,pip | You can install the package from your distro with:
sudo apt-get install python3-matplotlib
It will probably throw an error when you import matplotlib, but it is solved by installing the package tkinter with:
sudo apt-get install python3-tk | I have two Python interpreters on my Ubuntu 14.04 VM. I have installed matplotlib as
pip install matplotlib
But matplotlib cannot be used from python3. It can be used from python2.7.
If I use import matplotlib.pyplot as plt inside my script test.py and run it as
python3 test.py
I get the error
ImportError: No module na... | 0 | 1 | 1,128 |
0 | 33,713,612 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-11-14T21:17:00.000 | 1 | 2 | 0 | How to convert specific elements within a numpy array to integers? | 33,713,472 | 0.099668 | python,arrays,numpy | The strength of Numpy arrays is that many low-level operations can be quickly performed on the data because most (not all) types used by these arrays have a fixed-size in memory. For instance, the floats you are using probably require 8 bytes each. The most important thing in that case is that all datas share the same ... | I've written a script that gives me the result of dividing two variables ("A" and "B") -- and the output of each variable is a numpy array with 26 elements. Usually, with any two elements from "A" and "B," the result of the operation is a float, and the the element in the output array that corresponds to that operation... | 0 | 1 | 82 |
0 | 33,754,768 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-11-17T09:24:00.000 | 1 | 1 | 0 | Pickling/unpickling alternative (API-compatible) class implementations | 33,753,224 | 1.2 | python,c++,alias,pickle | Does it help to alias in another way (fast = normal) if there is no fast implementation available? Maybe this could be done only for the time of unpickling and then reversed, to avoid confusing checks in other code? | In a distributed computing project, we are using Pyro to pass objects over the wire between nodes; Pyro internally serializes and deserializes objects using pickle.
Some classes in the project have two implementations: one pure-Python (for ease of installation, especially for Windows users), one in c++/boost::python (m... | 0 | 1 | 109 |
0 | 33,757,187 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-11-17T12:29:00.000 | 0 | 2 | 0 | How can I blur or pixelate images in python by using matrices? | 33,756,970 | 0 | python,image-processing | To make it blurry, filter it using any low-pass filter (mean filter, Gaussian filter, etc.). | I already have a function that converts an image to a matrix, and back. But I was wondering how to manipulate the matrix so that the picture becomes blurry, or pixelated? | 0 | 1 | 161 |
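A sketch of a mean (low-pass) filter plus a block-average pixelation, assuming a 100x100 matrix and using scipy for the filtering:
```python
import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(100, 100)         # stand-in for your image matrix

blurred = uniform_filter(img, size=5)  # 5x5 mean filter = simple low-pass

block = 10                             # pixelate: average 10x10 blocks...
small = img.reshape(10, block, 10, block).mean(axis=(1, 3))
pixelated = np.kron(small, np.ones((block, block)))  # ...then blow them back up
```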
0 | 53,183,223 | 0 | 0 | 0 | 0 | 2 | false | 640 | 2015-11-17T14:37:00.000 | 3 | 28 | 0 | How to save/restore a model after training? | 33,759,623 | 0.021425 | python,tensorflow | Use tf.train.Saver to save a model. Remember, you need to specify the var_list if you want to reduce the model size. The var_list can be:
tf.trainable_variables or
tf.global_variables. | After you train a model in Tensorflow:
How do you save the trained model?
How do you later restore this saved model? | 0 | 1 | 468,965 |
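A minimal save/restore sketch using the graph-mode (TF1-style) API the answer refers to; under TensorFlow 2 the same names live under tf.compat.v1:
```python
import tensorflow as tf  # TF1-style graph mode

w = tf.Variable(tf.random_normal([3, 3]), name="w")
saver = tf.train.Saver()  # pass var_list=[w] to save only a subset

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")  # no initializer needed
    print(sess.run(w))
```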
0 | 33,763,208 | 0 | 0 | 0 | 0 | 2 | false | 640 | 2015-11-17T14:37:00.000 | 55 | 28 | 0 | How to save/restore a model after training? | 33,759,623 | 1 | python,tensorflow | There are two parts to the model, the model definition, saved by Supervisor as graph.pbtxt in the model directory and the numerical values of tensors, saved into checkpoint files like model.ckpt-1003418.
The model definition can be restored using tf.import_graph_def, and the weights are restored using Saver.
However, S... | After you train a model in Tensorflow:
How do you save the trained model?
How do you later restore this saved model? | 0 | 1 | 468,965 |
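And a matching restore sketch: tf.train.import_meta_graph rebuilds the graph definition from the .meta file before Saver.restore loads the tensor values (paths and the variable name are hypothetical, continuing the save sketch above):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("/tmp/model.ckpt.meta")  # graph definition
    saver.restore(sess, "/tmp/model.ckpt")                      # tensor values
    w = tf.get_default_graph().get_tensor_by_name("w:0")
    print(sess.run(w))
```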
0 | 60,106,544 | 0 | 0 | 0 | 0 | 1 | false | 69 | 2015-11-17T19:24:00.000 | 5 | 5 | 0 | Remove nodes from graph or reset entire default graph | 33,765,336 | 0.197375 | python,tensorflow | TensorFlow 2.0 compatible answer: in TensorFlow >= 2.0, the command to reset the entire default graph, when running in graph mode, is tf.compat.v1.reset_default_graph.
NOTE: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.compat.v1... | When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When working with TF interactively in IPython, I find myself having to restart the kernel repeatedly. I would like to be able to experiment with graphs more easily i... | 0 | 1 | 75,804 |
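A short sketch of the reset (the tf.compat.v1 spelling works in TF 2.x graph mode; plain tf.reset_default_graph() is the TF 1.x name):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(1.0)        # node added to the default graph
tf.reset_default_graph()    # start over with an empty default graph
b = tf.constant(2.0)        # `a` no longer lives in the new default graph
```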
0 | 42,991,702 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-11-18T11:24:00.000 | 2 | 2 | 0 | How to apply sklearn's EllipticEnvelope to find out top outliers in the given dataset? | 33,778,802 | 1.2 | python,scikit-learn,outliers | The right way to do this is:
Divide the data into normal and outliers.
Take a large sample from the normal data as normal_train for fitting the novelty detection model.
Create a test set with a sample from normal that is not used in training (say normal_test) and a sample from outlier (say outlier_test) in a way such that the distr... | I am using sklearn's EllipticEnvelope to find outliers in dataset. But I am not sure about how to model my problem? Should I just use all the data (without dividing into training and test sets) and apply fit? Also how would I obtain the outlyingness of each datapoint? Should I use predict on the same dataset? | 0 | 1 | 2,901 |
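A runnable sketch of the split described in that answer, with synthetic data standing in for the real set; decision_function gives the per-point outlyingness the question asks about:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
normal = rng.normal(size=(500, 2))            # stand-in "normal" data
outlier = rng.uniform(-6, 6, size=(25, 2))    # stand-in outliers

env = EllipticEnvelope(contamination=0.05).fit(normal[:400])  # normal_train
test = np.vstack([normal[400:], outlier])     # normal_test + outlier_test
print(env.predict(test))            # +1 = inlier, -1 = outlier
print(env.decision_function(test))  # lower (more negative) = more outlying
```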
0 | 68,254,814 | 0 | 0 | 0 | 0 | 1 | false | 46 | 2015-11-18T15:13:00.000 | 0 | 4 | 0 | How can I visualize the weights (variables) in cnn in Tensorflow? | 33,783,672 | 0 | python,tensorflow | Using the TensorFlow 2 API, there are several options:
Weights extracted using the get_weights() function.
weights_n = model.layers[n].get_weights()[0]
Bias extracted using the numpy() convert function.
bias_n = model.layers[n].bias.numpy() | After training the cnn model, I want to visualize the weight or print out the weights, what can I do?
I cannot even print out the variables after training.
Thank you! | 0 | 1 | 53,725 |
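A small sketch of plotting the extracted kernels, assuming `model` is a tf.keras CNN whose first layer is a conv layer with at least eight output filters:

```python
import matplotlib.pyplot as plt

kernels = model.layers[0].get_weights()[0]       # shape (kh, kw, in_ch, out_ch)
fig, axes = plt.subplots(1, 8, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(kernels[:, :, 0, i], cmap="gray")  # first input channel, filter i
    ax.axis("off")
plt.show()
```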
0 | 33,846,706 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2015-11-21T17:03:00.000 | 6 | 2 | 0 | When should I use fftshift(fft(fftshift(x))) and when fft(x)? | 33,846,123 | 1 | python,fft | fft(fftshift(x)) rotates the input vector so that the phase of the complex FFT result is relative to the center of the original data window. If the input waveform is not exactly integer periodic in the FFT width, phase relative to the center of the original window of data may make more sense than the phase relative to ...
I am using fftshift instead of ifftshift due to the even number of values in the vector x. | 0 | 1 | 13,324 |
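A quick numerical check of the difference: the inner fftshift is a circular shift of the input, which only changes the phase reference, so bin magnitudes match the plain FFT up to reordering.

```python
import numpy as np

x = np.random.rand(8)                  # even length, as in the question
X_plain = np.fft.fft(x)                # phase relative to x[0]
X_centered = np.fft.fftshift(np.fft.fft(np.fft.fftshift(x)))  # phase relative to the window centre

assert np.allclose(np.sort(np.abs(X_plain)), np.sort(np.abs(X_centered)))
```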
0 | 33,871,559 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-11-22T04:50:00.000 | 1 | 2 | 0 | Installing numpy and pandas for python 3.5 | 33,851,716 | 1.2 | python,numpy | I was corresponding with some people at python.org and they told me to use
py -3.5 -m pip install SomePackage
This works. | I've been trying to install numpy and pandas for python 3.5 but it keeps telling me that I have an issue.
Could it be because numpy can't run on python 3.5 yet? | 0 | 1 | 4,445 |
0 | 33,853,861 | 0 | 1 | 0 | 0 | 2 | false | 30 | 2015-11-22T10:31:00.000 | -5 | 7 | 0 | How do I close all pyplot windows (including ones from previous script executions)? | 33,853,801 | -1 | python,matplotlib,pycharm | On *nix you can use killall command.
killall app
closes every window belonging to a process named app.
You can also use the same command from inside your Python script.
You can use os.system("command") to run any shell command, including killall. | So I have some python code that plots a few graphs using pyplot. Every time I run the script new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script? I.e., closing windows that were opened during previous executions of the script?
In MatLab this can be... | 0 | 1 | 61,115 |
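A hedged sketch of both levels: plt.close('all') only reaches figures owned by the current interpreter, so windows surviving from previous runs have to be killed at the process level, as the killall answer suggests (the process name is hypothetical, and killing it stops those scripts too).

```python
import os
import matplotlib.pyplot as plt

plt.close("all")          # figures opened by *this* process only
# leftover windows from earlier runs belong to other processes; on *nix you
# could kill them by name (careful: this also kills the current interpreter)
# os.system("killall python")
```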
0 | 52,167,731 | 0 | 1 | 0 | 0 | 2 | false | 30 | 2015-11-22T10:31:00.000 | 0 | 7 | 0 | How do I close all pyplot windows (including ones from previous script executions)? | 33,853,801 | 0 | python,matplotlib,pycharm | As there seems to be no absolutely trivial solution to do this automatically from the script itself, possibly the simplest way to close all existing figures in PyCharm is to kill the corresponding processes (as jakevdp suggested in his comment):
Menu Run\Stop... (Ctrl-F2). You'll find the windows closed with a delay of a few s... | So I have some python code that plots a few graphs using pyplot. Every time I run the script new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script? I.e., closing windows that were opened during previous executions of the script?
In MatLab this can be... | 0 | 1 | 61,115 |
0 | 33,919,992 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2015-11-25T07:03:00.000 | 1 | 1 | 0 | datanitro - pass variables from one script to others | 33,910,358 | 1.2 | python,datanitro | There's no way to share the dataframe (each DataNitro script runs in its own process). You can read the frame each time, or, if reading is slow, you can have the first script store it somewhere the other scripts can access (e.g. as a csv or by pickling it). | Is there a way to have a variable (resulting from one script) accessible to other scripts while Excel is running?
I have tried from script1 import df but it runs script1 again to produce df. I have a script that runs when I first open the workbook and it reads a dataframe and I need that dataframe for other scripts (o... | 0 | 1 | 90 |
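A minimal sketch of the hand-off that answer describes, using pandas' own pickle helpers (the file name is hypothetical):

```python
import pandas as pd

# script1: compute once, persist for the other scripts
df = pd.DataFrame({"a": [1, 2, 3]})
df.to_pickle("shared_df.pkl")

# script2..n: load without re-running script1
df = pd.read_pickle("shared_df.pkl")
```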
0 | 33,918,217 | 0 | 1 | 0 | 0 | 2 | true | 10 | 2015-11-25T13:42:00.000 | 5 | 3 | 0 | Will casting an "integer" float to int always return the closest integer? | 33,918,043 | 1.2 | python | Casting a float to an integer truncates the value, so if you have 3.999998, and you cast it to an integer, you get 3.
The way to prevent this is to round the result. int(round(3.99998)) = 4, since the round function always returns a precisely integral value. | I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3... | 0 | 1 | 3,905 |
0 | 33,918,503 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2015-11-25T13:42:00.000 | 1 | 3 | 0 | Will casting an "integer" float to int always return the closest integer? | 33,918,043 | 0.066568 | python | I ended up using integer division (a//b) since I divided integers. Wouldn't have worked if I divided e.g. 3.5/0.5=7 though. | I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3... | 0 | 1 | 3,905 |
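A quick demonstration of the behaviours mentioned in the two answers:

```python
print(int(3.999998))          # 3   -- int() truncates toward zero
print(int(round(3.999998)))   # 4   -- round first, then cast
print(12 // 3)                # 4   -- integer division of ints yields an int
print(3.5 // 0.5)             # 7.0 -- floats stay floats, and rounding error can floor the wrong way
```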
0 | 33,926,859 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-11-25T21:28:00.000 | 0 | 2 | 0 | Converting coordinates vector to numpy 2D matrix | 33,926,704 | 0 | python,numpy,matrix,matplotlib,lidar | I am aware that I am not answering half of your questions but this is how I would do it:
Create a 2D array of the desired resolution,
The "leftmost" values correspond to the smallest values of x and so forth
Fill the array with the elevation value of the closest match in terms of x and y values
Smooth the result. | I have a set of 3D coordinate points: [lat,long,elevation] ([X,Y,Z]), derived from LIDAR data.
The points are not sorted and the step size between the points is more or less random.
My goal is to build a function that converts this set of points to a 2D numpy matrix of a constant number of pixels where each (X,Y) cel... | 0 | 1 | 2,868 |
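A hedged sketch of steps 1-3 from that answer using scipy's griddata with nearest-neighbour matching (`pts` is a hypothetical (N, 3) array of [x, y, elevation]):

```python
import numpy as np
from scipy.interpolate import griddata

pts = np.random.rand(1000, 3)  # stand-in for the unsorted LIDAR points

xi = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 500)   # fixed pixel count
yi = np.linspace(pts[:, 1].min(), pts[:, 1].max(), 500)
XI, YI = np.meshgrid(xi, yi)

grid = griddata(pts[:, :2], pts[:, 2], (XI, YI), method="nearest")
```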
0 | 33,954,610 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2015-11-27T09:41:00.000 | 1 | 2 | 0 | Using OpenCV with Django | 33,954,438 | 0.099668 | python,django,opencv | Am I right that you dream about a Django application able to capture video from your camera? This will not work (at least not in the way you expect).
Did you check any stack traces left by your web server (the one hosting the Django app, or the built-in one started by Django)?
I suggest you start playing with OpenCV a bit just fr... | I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library.
When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my b... | 1 | 1 | 4,890 |
0 | 35,443,792 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2015-11-27T09:41:00.000 | 2 | 2 | 0 | Using OpenCV with Django | 33,954,438 | 0.197375 | python,django,opencv | Use a separate thread for the cv2 function call and the app should work like a charm. From what I figure, the infinite loading is probably because the capture never stops recording, so the code further ahead is never reached, hence the infinitely loading page. Threads should probably do it.
:) :) | I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library.
When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my b... | 1 | 1 | 4,890 |
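A hedged sketch of that threading suggestion, so the capture never blocks the Django request cycle (the function and file names are hypothetical):

```python
import threading
import cv2  # assumes OpenCV is importable inside the Django process

def capture_one_frame(path="frame.jpg"):
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()                 # always release the camera handle
    if ok:
        cv2.imwrite(path, frame)

# inside the view: hand the blocking work to a background thread and return
threading.Thread(target=capture_one_frame, daemon=True).start()
```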
0 | 33,980,259 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-11-28T20:01:00.000 | 0 | 1 | 0 | Fourier series of time domain data | 33,975,835 | 0 | python,fft,dft,period | Before doing the FFT, you will need to resample or interpolate the data until you get a set of amplitude values equally spaced in time. | I spent a couple of days trying to solve this problem, but no luck, so I turn to you. I have a file with the photometry of a star, containing time and amplitude data. I'm supposed to use this data to find period changes. I used Lomb-Scargle from the pysca library, but I have to use Fourier analysis. I tried fft (dft) from scipy and numpy bu... | 0 | 1 | 243 |
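A rough sketch of the resample-then-FFT recipe from that answer; `t` and `amp` are hypothetical uneven-time photometry arrays, and linear interpolation is the simplest possible resampler:

```python
import numpy as np

t = np.sort(np.random.rand(256)) * 100.0   # stand-in uneven time stamps
amp = np.sin(2 * np.pi * t / 7.3)          # stand-in amplitudes, period 7.3

t_u = np.linspace(t.min(), t.max(), t.size)
amp_u = np.interp(t_u, t, amp)             # equally spaced in time

spec = np.fft.rfft(amp_u - amp_u.mean())
freqs = np.fft.rfftfreq(t_u.size, d=t_u[1] - t_u[0])
i = 1 + np.argmax(np.abs(spec[1:]))        # skip the zero-frequency bin
print("dominant period:", 1.0 / freqs[i])
```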
0 | 33,997,375 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-11-30T11:16:00.000 | 0 | 4 | 0 | How do I append rows to an array in Python? | 33,997,336 | 0 | python,arrays,list,matrix,append | Use .append(row) to append; note that this is a Python list method, and a NumPy array does not grow in place. | I have an array which is a 1X3 matrix, where:
column 1 = x coordinate
column 2 = y coordinate
column 3 = direction of vector.
I am tracking a series of points along a path.
At each point I want to store the x, y and direction back into the array, as a row.
So in the end, my array has grown vertically, with m... | 0 | 1 | 645 |
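A hedged sketch of both approaches; collecting rows in a plain list and converting once at the end avoids the quadratic cost of repeated vstack calls:

```python
import numpy as np

# repeated np.vstack works but copies the whole array every time
track = np.empty((0, 3))
track = np.vstack([track, [0.0, 0.0, 90.0]])   # one [x, y, direction] row

# cheaper: accumulate in a list, convert once
rows = []
for x, y, d in [(0.0, 0.0, 90.0), (1.0, 0.5, 85.0)]:
    rows.append([x, y, d])
track = np.array(rows)
```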
0 | 34,017,212 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2015-11-30T18:27:00.000 | 0 | 1 | 0 | Can't render latex in matplotlib.pyplot in python3 | 34,005,438 | 1.2 | python-3.x,matplotlib,pdflatex | I could solve the problem using rc('text', usetex=False), which apparently makes matplotlib use its internal mathtext instead of my default latex installation.
I still cannot figure out why my OS latex installation fails. | While something like matplotlib.pyplot.xlabel(r'Wavelenghth [$\mu$m]') works in python2, I get an error when I use it in python3
TypeError: startswith first arg must be str or a tuple of str, not
bytes
Does anyone know what the problem is? Is it from my latex installation?! | 0 | 1 | 623 |
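A small sketch of the workaround from the accepted answer; matplotlib's built-in mathtext handles $\mu$ without any system LaTeX:

```python
import matplotlib
matplotlib.rc('text', usetex=False)   # use matplotlib's built-in mathtext

import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.xlabel(r'Wavelength [$\mu$m]')
plt.show()
```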
0 | 34,321,420 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-12-01T16:38:00.000 | 0 | 1 | 0 | scoring="roc_auc" on GridSearchCV for RF and DT | 34,025,404 | 1.2 | python,scikit-learn | The answer is: It is possible.
However, as explained by @AndreasMueller, the feature is only available for binary classification in the setting described in the question.
Can I apply GridSearchCV having scoring="roc_auc" on Random Forest or Decision Trees without any drawback?
Thank you in advance for any clarification. | 0 | 1 | 456 |
0 | 34,044,627 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-12-01T22:13:00.000 | 1 | 1 | 0 | Find Subplot Number from Matplotlib Pick Event | 34,031,206 | 1.2 | python,matplotlib,interactive | Place the axes in a list or dictionary when creating them. Then, when a pick event occurs, match the event's axes object against the dictionary.
Thank you all. | So I have three matplotlib subplots. I can use a pick event to pull off and re-plot the data in any one of the subplot. Is it possible to read the pick event and to find out what subplot number was selected? | 0 | 1 | 531 |
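A runnable sketch of the accepted answer's dictionary approach; event.artist.axes recovers the subplot the picked artist lives in:

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(3, 1)
names = {ax: "subplot %d" % i for i, ax in enumerate(axes)}
for ax in axes:
    ax.plot(range(5), picker=True)      # make the lines pickable

def on_pick(event):
    print("picked in", names[event.artist.axes])

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```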
0 | 34,097,320 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-12-04T20:39:00.000 | 0 | 3 | 0 | Count since last occurrence in NumPy | 34,097,020 | 0 | python,numpy | Split the array based on the condition and use the lengths of the remaining pieces, together with the condition state of the first and last elements of the array. | Seemingly straightforward problem: I want to create an array that gives the count since the last occurrence of a given condition. Here, let the condition be that a > 0:
in: [0, 0, 5, 0, 0, 2, 1, 0, 0]
out: [0, 0, 0, 1, 2, 0, 0, 1, 2]
I assume step one would be something like np.cumsum(a > 0), but not sure... | 0 | 1 | 79 |
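A vectorised sketch (one approach among several) that reproduces the in/out pair above by carrying forward the index of the most recent match:

```python
import numpy as np

a = np.array([0, 0, 5, 0, 0, 2, 1, 0, 0])
idx = np.arange(a.size)
last = np.maximum.accumulate(np.where(a > 0, idx, -1))  # latest index where a > 0
out = np.where(last >= 0, idx - last, 0)
print(out)  # [0 0 0 1 2 0 0 1 2]
```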
0 | 34,097,344 | 0 | 0 | 0 | 0 | 3 | false | 274 | 2015-12-04T20:55:00.000 | 84 | 12 | 0 | Convert a tensor to numpy array in Tensorflow? | 34,097,281 | 1 | python,numpy,tensorflow | To convert back from tensor to numpy array you can simply run .eval() on the transformed tensor. | How to convert a tensor into a numpy array when using Tensorflow with Python bindings? | 0 | 1 | 668,313 |
0 | 65,860,219 | 0 | 0 | 0 | 0 | 3 | false | 274 | 2015-12-04T20:55:00.000 | 4 | 12 | 0 | Convert a tensor to numpy array in Tensorflow? | 34,097,281 | 0.066568 | python,numpy,tensorflow | You can convert a tensor in TensorFlow to a numpy array in the following ways.
First:
Use np.array(your_tensor)
Second:
Use your_tensor.numpy() | How to convert a tensor into a numpy array when using Tensorflow with Python bindings? | 0 | 1 | 668,313 |
0 | 63,803,837 | 0 | 0 | 0 | 0 | 3 | false | 274 | 2015-12-04T20:55:00.000 | 2 | 12 | 0 | Convert a tensor to numpy array in Tensorflow? | 34,097,281 | 0.033321 | python,numpy,tensorflow | There is also a method _numpy();
e.g., for an EagerTensor, simply call that method and you will get an ndarray. | How to convert a tensor into a numpy array when using Tensorflow with Python bindings? | 0 | 1 | 668,313 |
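A quick sketch of the conversions mentioned in the answers above, in TF 2.x eager mode:

```python
import numpy as np
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
print(t.numpy())        # eager tensors expose .numpy() directly
print(np.array(t))      # np.array() also works and always copies
# in TF 1.x graph mode you would instead run t.eval() inside a session
```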
0 | 34,116,773 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-12-06T07:26:00.000 | 0 | 2 | 1 | Running Octave tasks from Python | 34,115,098 | 0 | python,subprocess,octave,message-queue,oct2py | All three options are reasonable depending on your particular case.
I don't want to rely on maintenance of external libraries such as oct2py; I am in favor of option 3.
oct2py is implemented using option 3. You can reinvent what it already does or use it directly. oct2py is pure Python and it has a permissive license: ... | I have a pretty complex computation code written in Octave and a python script which receives user input, and needs to run the Octave code based on the user inputs. As I see it, I have these options:
Port the Octave code to python.
Use external libraries (i.e. oct2py) which enable you to run the Octave/Matlab engine f... | 0 | 1 | 1,133 |
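A minimal option-3 sketch, shelling out to the octave binary (assumes octave is on PATH; the flags shown are standard Octave CLI flags):

```python
import subprocess

result = subprocess.run(
    ["octave", "--no-gui", "--eval", "disp(sqrt(2))"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # 1.4142
```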
0 | 34,119,821 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-12-06T16:25:00.000 | 1 | 2 | 0 | how to sort list in python which has two numbers per index value? | 34,119,746 | 0.099668 | python-2.7,sorting | Try this: b = sorted(b, key = lambda i: (i[0], i[1])) | My code
b=[((1,1)),((1,2)),((2,1)),((2,2)),((1,3))]
for i in range(len(b)):
print b[i]
Obtained output:
(1, 1)
(1, 2)
(2, 1)
(2, 2)
(1, 3)
how do I sort this list by the first and/or second element at each index to get the output as:
(1, 1)
(1, 2)
(1, 3)
(2, 1)
(2, 2)
OR
(1, 1)
(2, 1)
(1, 2)
(2, ... | 0 | 1 | 48 |
0 | 34,122,819 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-12-06T20:36:00.000 | 1 | 1 | 0 | Multiclass NaiveBayes classification on a text dataset with changing prior probabilities | 34,122,417 | 0.197375 | python,machine-learning,nltk,naivebayes | If you know that the priors change, you should refit them periodically (by gathering a new training set representative of the new priors). In general, every ML method will lose accuracy if the priors change and you do not give this information to your classifier. You need at least some kind of feedback for ... | I've come across an issue using Naive Bayes for document classification into multiple classes.
Actually, I was wondering whether P(C), the prior probability of the classes that we have at hand initially, will keep changing over the course of time.
For instance for classes - [music, sports, news] initial probabili... | 0 | 1 | 211 |
0 | 34,132,511 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-07T11:12:00.000 | 1 | 2 | 0 | Python Pandas IDE that would "know" columns and types | 34,132,184 | 0.099668 | python,debugging,pandas,ide | I don't believe that something like that exists, but you can always use df.info(). | I'm doing some development in Python, mostly using a simple text editor (Sublime Text). I'm mostly dealing in databases that I fit in Pandas DataFrames. My issue is, I often lose track of the column names, and occasionally the column types as well. Is there some IDE / plug-in / debug tool that would allow me to look in... | 0 | 1 | 308 |
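A tiny illustration of that suggestion, showing what df.info() and df.dtypes report:

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "score": [1.5, 2.0]})
df.info()           # column names, dtypes, non-null counts, memory usage
print(df.dtypes)    # just the name -> dtype mapping
```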