GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 34,163,258 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-08T02:33:00.000 | 0 | 2 | 0 | Review data sentiment analysis, focusing on extracting negative sentiment? | 34,146,996 | 0 | python,machine-learning,nlp,sentiment-analysis | When you annotate the sentiment, don't annotate 'Positive', 'Negative', and 'Neutral'. Instead, annotate them as either "has negative" or "doesn't have negative". Then your sentiment classification will only be concerned with how strongly the features indicate negative sentiment, which appears to be what you want. | I am trying to do sentiment analysis on a review dataset. Since I care more about identifying (extracting) negative sentiments in reviews (unlabeled now but I try to manually label a few hundreds or use Alchemy API), if the review is overall neutral or positive but a part has negative sentiment, I'd like my model to co... | 0 | 1 | 268 |
0 | 34,154,600 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-08T10:46:00.000 | 0 | 1 | 0 | Python Selenium infinite loop | 34,153,844 | 0 | python,loops,selenium | Your attempts variable is always less than 5 because it is never incremented, so your loop is infinite. | I'm trying to study customer behavior. Basically, I have information on customers' loyalty-point activity data (e.g. how many points they have earned, how many points they have used, how recently they have used/earned points, etc.). I'm using R to conduct this analysis
I'm just wondering how I should go about segmenting ... | 0 | 1 | 436 |
0 | 34,457,756 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2015-12-09T08:17:00.000 | 1 | 1 | 0 | Is python ggplot still being developed? | 34,173,840 | 1.2 | python,plot,graphing,python-ggplot | Yes. They are currently doing a major rewrite. | Python ggplot is great, but is missing many customization options.
The commit history on GitHub for the past year does not look very promising...
Does anyone know if it is still being developed? | 0 | 1 | 154 |
0 | 61,137,400 | 0 | 0 | 0 | 0 | 2 | false | 19 | 2015-12-09T22:33:00.000 | 3 | 3 | 0 | What's the best way to refresh TensorBoard after new events/logs were added? | 34,190,298 | 0.197375 | python,tensorflow,tensorboard | I advise always starting TensorBoard with --reload_multifile True to force it to reload all event files. | What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted. | 0 | 1 | 21,777 |
0 | 44,359,908 | 0 | 0 | 0 | 0 | 2 | false | 19 | 2015-12-09T22:33:00.000 | 0 | 3 | 0 | What's the best way to refresh TensorBoard after new events/logs were added? | 34,190,298 | 0 | python,tensorflow,tensorboard | My issue is different. Each time I refresh 0.0.0.0:6006, it seems the new graph keeps appending to the old one, which is quite annoying.
After trying to kill the process and delete the old logs several times, I realized the issue comes from writer.add_graph(sess.graph), because I didn't reset the graph in the Jupyter notebook. After ... | What is the best way to quickly see the updated graph in the most recent event file in an open TensorBoard session? Re-running my Python app results in a new log file being created with potentially new events/graph. However, TensorBoard does not seem to notice those differences, unless restarted. | 0 | 1 | 21,777 |
0 | 44,128,902 | 0 | 0 | 0 | 0 | 2 | false | 349 | 2015-12-10T10:19:00.000 | 7 | 16 | 0 | How to prevent tensorflow from allocating the totality of a GPU memory? | 34,199,233 | 1 | python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan | Shameless plug: If you install the GPU-supported TensorFlow, the session will first allocate all GPUs, whether you set it to use only the CPU or the GPU. I may add my tip that even if you set the graph to use the CPU only, you should set the same configuration (as answered above :) ) to prevent the unwanted GPU occupation.
And in an inte... | I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enoug... | 0 | 1 | 237,456 |
0 | 52,828,871 | 0 | 0 | 0 | 0 | 2 | false | 349 | 2015-12-10T10:19:00.000 | 1 | 16 | 0 | How to prevent tensorflow from allocating the totality of a GPU memory? | 34,199,233 | 0.012499 | python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan | I tried to train U-Net on the VOC dataset, but because of the huge image size, memory runs out. I tried all of the above tips, even with batch size == 1, yet to no improvement. Sometimes the TensorFlow version also causes memory issues; try using
pip install tensorflow-gpu==1.8.0 | I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enoug... | 0 | 1 | 237,456 |
0 | 35,853,184 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-12-11T22:44:00.000 | 2 | 1 | 0 | Event files in Google Tensorflow | 34,233,767 | 0.379949 | python,tensorflow,tensorboard | The best solution from a TensorBoard perspective is to have a root directory for your experiment, e.g. ~/tensorflow/mnist_experiment, and then to create a new subdirectory for each run, e.g. ~/tensorflow/mnist_experiment/run1/...
Then run TensorBoard against the root directory, and every time you invoke your code, setu... | I am using Tensorflow to build up the Neural Network, and I would like to show training results on the Tensorboard. So far everything works fine. But I have a question on "event file" for the Tensorboard. I notice that every time when I run my python script, it generates different event files. And when I run my local s... | 0 | 1 | 1,651 |
0 | 44,353,399 | 0 | 0 | 0 | 0 | 1 | false | 24 | 2015-12-12T01:38:00.000 | 12 | 4 | 0 | Is it possible to modify an existing TensorFlow computation graph? | 34,235,225 | 1 | python,tensorflow | Yes, tf.Graph objects are built in an append-only fashion, as @mrry puts it.
But there's a workaround:
Conceptually you can modify an existing graph by cloning it and performing the modifications needed along the way. As of r1.1, TensorFlow provides a module named tf.contrib.graph_editor which implements the above idea as a set of c... | A TensorFlow graph is usually built gradually from inputs to outputs, and then executed. Looking at the Python code, the inputs lists of operations are immutable, which suggests that the inputs should not be modified. Does that mean that there is no way to update/modify an existing graph? | 0 | 1 | 13,045 |
0 | 34,277,060 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-14T19:30:00.000 | 0 | 2 | 0 | How are the points in a level curve chosen in pyplot? | 34,275,096 | 0 | python,math,matplotlib | The function is evaluated at every grid node, and compared to the iso-level. When there is a change of sign along a cell edge, a point is computed by linear interpolation between the two nodes. Points are joined in pairs by line segments. This is an acceptable approximation when the grid is dense enough. | I want to know how the contours levels are chosen in pyplot.contour. What I mean by this is, given a function f(x, y), the level curves are usually chosen by evaluating the points where f(x, y) = c, c=0,1,2,... etc. However if f(x, y) is an array A of nxn points, how do the level points get chosen? I don't mean how do ... | 0 | 1 | 976 |
0 | 35,905,473 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-14T21:43:00.000 | 0 | 1 | 0 | Matplotlib error in importing | 34,277,148 | 0 | python,python-2.7,matplotlib | I have the exact same problem. I'm not sure what the issue is but every once in a while, when trying to import matplotlib inside ipython I encounter this error and restarting the computer solves the issue. Maybe that would help in locating the issue? | I am using OSX El Capitan and trying to import matplotlib.pyplot
When I do that, I get a recursion error, and at the end it says "ValueError: insecure string pickle"
Here is the whole log:
--------------------------------------------------------------------------- ValueError Traceback (most ... | 0 | 1 | 704 |
0 | 34,300,675 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-12-15T08:37:00.000 | 1 | 2 | 0 | Classifying sentences with overlapping words | 34,284,385 | 0.099668 | python,twitter,nltk,document-classification | I wouldn't be so quick to write off Naive Bayes. It does fine in many domains where there are lots of weak clues (as in "overlapping words"), but no absolutes. It all depends on the features you pass it. I'm guessing you are blindly passing it the usual "bag of words" features, perhaps after filtering for stopwords. We... | I've this CSV file which has comments (tweets, comments). I want to classify them into 4 categories, viz.
Pre Sales
Post Sales
Purchased
Service query
Now the problems that I'm facing are these :
There is a huge number of overlapping words between each of the
categories, hence using NaiveBayes is failing.
The size... | 0 | 1 | 622 |
0 | 34,323,744 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-12-16T18:01:00.000 | 0 | 1 | 0 | Pandas: Read CSV: ValueError: could not convert string to float | 34,319,011 | 0 | python,csv,pandas | I found the mistake. The problem was a thousands separator.
When writing the CSV file, most numbers were below one thousand and were correctly written to the CSV file. However, this one value was greater than one thousand, and it was written as "1,123", which pandas recognized not as a number but as a string. | I'm trying to read a large and complex CSV file with pandas.read_csv.
The exact command is
pd.read_csv(filename, quotechar='"', low_memory=True, dtype=data_types, usecols=columns, true_values=['T'], false_values=['F'])
I am pretty sure that the data types are correct. I can read the first 16 million lines (setting nro... | 0 | 1 | 8,440 |
0 | 34,323,069 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-16T22:08:00.000 | 1 | 2 | 0 | Alternative to numpy's linalg.eig? | 34,323,027 | 0.099668 | python,python-2.7,numpy,pca | So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d).
That's completely normal. If v is an eigenvector of a matrix, then -v is an eigenvector with the same eigenvalue. | I have written a simple PCA code that calculates the covariance matrix and then uses linalg.eig on that covariance matrix to find the principal components. When I use scikit's PCA for three principal components I get almost the equivalent result. My PCA function outputs the third column of transformed data with flipped... | 0 | 1 | 593 |
0 | 34,340,617 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-12-17T15:06:00.000 | 1 | 1 | 1 | Location of tensorflow/models.. in Windows | 34,337,788 | 0.197375 | python,windows,docker,tensorflow | If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py.
If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source d... | I have installed tensorflow on Windows using Docker, and I want to go to the folder "tensorflow/models/image/imagenet" that contains the "classify_image.py" Python file.
Can someone please explain how to reach the mentioned path? | 0 | 1 | 1,104 |
0 | 70,073,546 | 0 | 1 | 0 | 0 | 1 | false | 36 | 2015-12-18T14:15:00.000 | 2 | 2 | 0 | Append 2D array to 3D array, extending third dimension | 34,357,617 | 0.197375 | python,arrays,numpy,append | using np.stack should work
but the catch is that both arrays should be of 2D form.
np.stack([A,B]) | I have an array A that has shape (480, 640, 3), and an array B with shape (480, 640).
How can I append these two as one array with shape (480, 640, 4)?
I tried np.append(A,B) but it doesn't keep the dimension, while the axis option causes the ValueError: all the input arrays must have same number of dimensions. | 0 | 1 | 65,088 |
0 | 34,361,488 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-12-18T16:46:00.000 | 0 | 1 | 0 | plotting 2D slice of arbitrary orientation through 3D data in matplotlib | 34,360,265 | 0 | python,matplotlib,3d | You can use numpy's roll function to rotate your plane and make it parallel to a base plane. Now you can choose your plane and plot. The only problem is that close to the edges, the value from one side will be added to the opposite side. | I have a 3D regular grid of data. I would like to write a routine allowing the user to specify a plane slicing through the data with arbitrary orientation and returning a contour plot of the data in the plane. Is there a ready-made way in matplotlib to do this? I couldn't find anything in the docs. | 0 | 1 | 494 |
0 | 34,377,289 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-12-20T01:43:00.000 | 1 | 1 | 0 | Adding and removing SVM parameters without having to totally retrain | 34,377,210 | 1.2 | python,scikit-learn,svm | If you are using SVC from sklearn then the answer is no. There is no way to do it, this implementation is purely batch training based. If you are training linear SVM using SGDClassifier from sklearn then the answer is yes as you can simply start the optimization from the previous solution (when removing feature - simpl... | I have a support vector machine trained on ~300,000 examples, and it takes roughly 1.5-2 hours to train this model, and I pickled(serialized) it. Currently, I want to add/remove a couple of the parameters of the model. Is there a way to do this without having to retrain the entire model? I am using sklearn in python. | 0 | 1 | 44 |
0 | 34,391,185 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-12-21T06:56:00.000 | 1 | 2 | 0 | What data science programming algorithm is like Naive Bayes for continuous variables? | 34,390,336 | 0.099668 | python,algorithm,machine-learning,naivebayes,data-science | For Naive Bayes you can discretize your continuous numerical properties.
For example, for "% Owner occupied housing" you split all 100% scale into ten partitions(0-10%, 10-20%, ..., 90-100%) and get the frequency table.
For some properties you can move to binary values: Unemployment rate < 30% - yes/no.
Good luck in le... | I am trying to build and train a machine learning data science algorithm that correctly predicts what presidential won in what county. I have the following information for training data.
Total population Median age % BachelorsDeg or higher Unemployment rate Per capita income Total households Average house... | 0 | 1 | 229 |
0 | 34,568,528 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2015-12-21T10:47:00.000 | 4 | 2 | 0 | Difference between local and dense layers in CNNs | 34,393,876 | 0.379949 | python,convolution,deep-learning,tensorflow,conv-neural-network | I am quoting user2576346's comments under the question:
As I understand, either it should be densely connected or be a convolutional layer ...
No this is not true. A more accurate way to phrase that statement would be that layers are either fully connected (dense) or locally connected.
A convolutional layer is an ex... | What is the difference between a "Local" layer and a "Dense" layer in a convolutional neural network? I am trying to understand the CIFAR-10 code in TensorFlow, and I see it uses "Local" layers instead of regular dense layers. Is there any class in TF that supports implementing "Local" layers? | 0 | 1 | 2,001 |
0 | 34,479,029 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-12-25T23:02:00.000 | 0 | 2 | 0 | Python: How to give participant the possibility to an answer | 34,467,177 | 0 | python,while-loop,psychopy | You probably placed your Answerrunning = False in the wrong place. And probably you need to put break at the end of each branch. Please explain in more detail what you want to do; I don't understand.
If you say you need to count tries, then I guess you should have something like number_of_tries = 0 and number_of_tries += 1 somew... | I am making an experiment, and the participant must have the possibility to correct himself when he has given the wrong answer.
The goal is that the experiment goes on to the next trial when the correct answer is given. When the wrong answer is given, you get another chance.
For the moment, the experiment crashes after... | 0 | 1 | 152 |
0 | 34,484,383 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-12-27T18:04:00.000 | 1 | 2 | 0 | 5D tensor in Theano | 34,483,277 | 1.2 | python,theano,symbolic-computation | Theano variables do not have explicit shape information since they are symbolic variables, not numerical. Even dtensor3 = T.tensor3(T.config.floatX) does not have an explicit shape. When you type dtensor3.shape you'll get an object Shape.0 but when you do dtensor3.shape.eval() to get its value you'll get an error.
For ... | I was wondering how to make a 5D tensor in Theano.
Specifically, I tried dtensor = T.TensorType('float32', (False,)*5). However, the only issue is that dtensor.shape returns: AttributeError: 'TensorType' object has no attribute 'shape'
Whereas if I used a standard tensor type like dtensor = T.tensor3('float32'), I don't ... | 0 | 1 | 506 |
0 | 34,493,557 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-12-28T07:14:00.000 | 1 | 1 | 0 | Using sklearn DictVectorizer in real-time systems | 34,493,556 | 1.2 | machine-learning,categorical-data,python,scikit-learn | It depends on the learning algorithm that you are using. If you are using a method that has been designed for sparse data sets (FTRL, FFM, linear SVM), one possible approach is the following (note that it will introduce collisions in the features and a lot of constant columns).
First allocate for each element of your ... | Any binary one-hot encoding is aware of only values seen in training, so features not encountered during fitting will be silently ignored. For real time, where you have millions of records in a second, and features have very high cardinality, you need to keep your hasher/mapper updated with the data.
How can we do an i... | 0 | 1 | 237 |
0 | 34,502,877 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-12-29T00:37:00.000 | 2 | 2 | 0 | Python pandas: determining which "group" has the most entries | 34,502,840 | 0.197375 | python,pandas | To sort by name: df.fruit.value_counts().sort_index()
To sort by counts: df.fruit.value_counts().sort_values() | Let's say that I have pandas DataFrame with a column called "fruit" that represents what fruit my classroom of kindergartners had for a morning snack. I have 20 students in my class. Breakdown would be something like this.
Oranges = 7, Grapes = 3, Blackberries = 4, Bananas = 6
I used sort to group each of these fruit t... | 0 | 1 | 38 |
0 | 34,515,890 | 0 | 0 | 0 | 0 | 1 | true | 6 | 2015-12-29T10:59:00.000 | 11 | 1 | 0 | spark.storage.memoryFraction setting in Apache Spark | 34,509,593 | 1.2 | java,python,apache-spark,hadoop-yarn | The Spark executor is divided into 3 regions.
Storage - Memory reserved for caching
Execution - Memory reserved for object creation
Executor overhead.
In Spark 1.5.2 and earlier:
spark.storage.memoryFraction sets the ratio of memory set for 1 and 2. The default value is .6, so 60% of the allocated executor memory is ... | According to Spark documentation
spark.storage.memoryFraction: Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size.
I found seve... | 0 | 1 | 7,480 |
0 | 34,560,260 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-12-30T08:55:00.000 | 0 | 1 | 0 | What is the PyMC3 equivalent of the 'pymc.rnormal' function? | 34,526,093 | 1.2 | python,pymc,pymc3 | Found it... a bit silly of me. pymc3.Normal(mu,sd).random(), which basically just calls scipy.stats.norm | Is there a PyMC3 equivalent to the pymc.rnormal function, or has it been dropped in favor of numpy.random.normal? | 0 | 1 | 93 |
0 | 34,558,742 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-01-01T13:19:00.000 | 10 | 1 | 0 | Regularization parameter and iteration of SGDClassifier in scikit-learn | 34,556,476 | 1.2 | python,machine-learning,scikit-learn | C and alpha both have the same effect. The difference is a choice of terminology. C is proportional to 1/alpha. You should use GridSearchCV to select either alpha or C the same way, but remember a higher C is more likely to overfit, where a lower alpha is more likely to overfit.
L2 will produce a model with many small ... | Python scikit-learn SGDClassifier() supports both l1, l2, and elastic, it seems to be important to find optimal value of regularization parameter.
I was advised to use SGDClassifier() with GridSearchCV() to do this, but SGDClassifier only serves the regularization parameter alpha.
If I use loss functions such as SVM o... | 0 | 1 | 4,643 |
0 | 34,572,201 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-01-03T00:21:00.000 | 0 | 1 | 0 | How do I quote a column in a dataframe that has a period? | 34,572,133 | 0 | python,matplotlib,dataframe | You can use the indexing notation: Iris['Petal.Length']. Using . (as in Iris.Species) only works if the column name is a valid Python identifier, but [] always works, even if the column name contains spaces or other symbols. | I could not find a particular way to phrase this into a google search that returned any solid results so asking StackOverflow. Would appreciate the help all!
I am using a CSV file, Iris, in Python to do some basic matplot plotting. Within Iris, I am looking to reference a particular column called Petal.Length.
Normally... | 0 | 1 | 79 |
0 | 34,574,445 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-01-03T07:43:00.000 | 1 | 1 | 0 | How to save the n-d numpy array data and read it quickly next time? | 34,574,396 | 1.2 | python,numpy | If you need to re-read it quickly into numpy you could just use the cPickle module.
This is going to be much faster than parsing it back from an ASCII dump (however, only the program will be able to re-read it). As a bonus, with just one instruction you could dump more than a single matrix (i.e. any data structure bu... | Here is my question:
I have a 3-d numpy array Data which is in the shape of (1000, 100, 100).
And I want to save it as a .txt or .csv file, how to achieve that?
My first attempt was to reshape it into a 1-d array with length 1000*100*100, transfer it into a pandas.DataFrame, and then save it as a .csv file.
When I... | 0 | 1 | 53 |
0 | 38,167,204 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-01-04T18:45:00.000 | 0 | 1 | 0 | Create 3D- polynomial via numpy etc. from given coordinates | 34,597,732 | 0 | python,numpy,curve-fitting,algebra | Numpy has functions for multi-variable polynomial evaluation in the polynomial package -- polyval2d, polyval3d -- the problem is getting the coefficients. For fitting, you need the polyvander2d, polyvander3d functions that create the design matrices for the least squares fit. The multi-variable polynomial coefficients ... | Given some coordinates in 3D (x-, y- and z-axes), what I would like to do is to get a polynomial (fifth order). I know how to do it in 2D (for example just in x- and y-direction) via numpy. So my question is: Is it possible to do it also with the third (z-) axes?
Sorry if I missed a question somewhere.
Thank you. | 0 | 1 | 260 |
0 | 34,614,120 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-01-05T14:24:00.000 | 0 | 1 | 0 | How to wrap text with in a column while writing in csv file using python | 34,613,958 | 0 | python,django,export-to-csv | CSV is not formatted. If you want to format the text in the cells, you should consider writing a proper Excel or PDF file.
Anyway, it looks like newline characters (\n or \r\n) can be used in CSV files if using a semicolon as the separator, but this may not be portable.
To write an Excel file, you can use libraries like o... | I am trying to export a file as csv. I need to wrap text for a particular column while writing in csv file. I have a too long string. i need to write it in a csv file using python. While trying to write, it doesn't write in a single cell. some of the lines are written into next rows. I need to write the whole string in... | 0 | 1 | 4,140 |
0 | 59,507,979 | 0 | 0 | 0 | 0 | 1 | false | 30 | 2016-01-06T03:09:00.000 | 3 | 4 | 0 | Is there easy way to grid search without cross validation in python? | 34,624,978 | 0.148885 | python,scikit-learn,random-forest,grid-search | Although the question was solved years ago, I just found a more natural way if you insist on using GridSearchCV() instead of other means (ParameterGrid(), etc.):
Create a sklearn.model_selection.PredefinedSplit(). It takes a parameter called test_fold, which is a list and has the same size as your input data. In... | There is absolutely helpful class GridSearchCV in scikit-learn to do grid search and cross validation, but I don't want to do cross validataion. I want to do grid search without cross validation and use whole data to train.
To be more specific, I need to evaluate my model made by RandomForestClassifier with "oob score"... | 0 | 1 | 20,780 |
0 | 34,653,721 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-01-07T07:41:00.000 | 3 | 2 | 0 | What the function apply() in scikit-learn can do? | 34,649,751 | 0.291313 | python,machine-learning,scikit-learn | From the Sci-Kit Documentation
apply(X) Apply trees in the ensemble to X, return leaf indices
This function will take input data X, and each data point (x) in it will be applied to each non-linear classifier tree. After application, data point x will have associated with it the leaf it ends up at for each decision tre... | In the new scikit-learn version, there is a new function called apply() in gradient boosting. I'm really confused about it.
Does it work like the GBDT + LR method that Facebook has used?
If so, how can we make it work like GBDT + LR? | 0 | 1 | 1,064 |
0 | 34,661,470 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2016-01-07T12:51:00.000 | 3 | 1 | 0 | How to handle class imbalance in sklearn random forests. Should I use sample weights or class weight parameter | 34,655,628 | 1.2 | python,scikit-learn,random-forest,supervised-learning | Class weights are what you should be using.
Sample weights allow you to specify a multiplier for the impact a particular sample has. Weighting a sample with a weight of 2.0 roughly has the same effect as if the point was present twice in the data (although the exact effect is estimator dependent).
Class weights have th... | I am trying to solve a binary classification problem with a class imbalance. I have a dataset of 210,000 records in which 92 % are 0s and 8% are 1s. I am using sklearn (v 0.16) in python for random forests .
I see there are two parameters sample_weight and class_weight while constructing the classifier. I am currentl... | 0 | 1 | 3,318 |
0 | 37,417,767 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-01-08T06:59:00.000 | 0 | 1 | 0 | How to change the sorting order of dates in the drop down property of spotfire | 34,671,202 | 0 | python,spotfire | You can apply custom sorting to STRING columns.
One way to achieve your goal is to create calculated columns for the Year and Month, and use these in your date axis. Then you can apply a custom sorting in Column Properties > Sort Order on your data table. | In Spotfire text area, for drop down property, by default sort order for date column is coming with the oldest to latest. We need to display the dates order from newest to oldest. Can you please advise.
Default order: 12/29/2015, 12/30/2015, 12/31/2015, 01/01/2016
Needed: 1/1/2016, 12/31/2015, 12/30/2015, 12/29/2015
T... | 0 | 1 | 975 |
0 | 36,937,080 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-01-08T21:11:00.000 | 2 | 1 | 0 | Unable to do bulk indexing for large file in elasticsearch | 34,686,119 | 1.2 | java,python,elasticsearch | You have to increase the maximum content upload length, which is 100mb by default.
Go to elasticsearch.yml in the config folder
and add/update -
http.max_content_length: 300mb | I am trying to do bulk indexing in elasticsearch using Python for a big file (~800MB). However, every time I try
[2016-01-08 15:06:49,354][WARN ][http.netty ] [Marvel Man] Caught exception while handling client http traffic, closing connection [id: 0x2d26baec, /0:0:0:0:0:0:0:1:58923 => /0:0:0:0:0:0:0:... | 1 | 1 | 2,316 |
0 | 56,676,656 | 0 | 0 | 0 | 0 | 1 | false | 92 | 2016-01-10T06:38:00.000 | 12 | 2 | 0 | Pandas: group by and Pivot table difference | 34,702,815 | 1 | python,pandas | It's more appropriate to use .pivot_table() instead of .groupby() when you need to show aggregates with both rows and column labels.
.pivot_table() makes it easy to create row and column labels at the same time and is preferable, even though you can get similar results using .groupby() with few extra steps. | I just started learning Pandas and was wondering if there is any difference between groupby() and pivot_table() functions. Can anyone help me understand the difference between them.
Help would be appreciated. | 0 | 1 | 55,623 |
0 | 34,736,355 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-01-12T03:17:00.000 | 2 | 2 | 0 | Choosing an sklearn pipeline for classifying user text data | 34,735,016 | 0.197375 | python,machine-learning,scikit-learn,feature-selection | Naive Bayes and MultinomialNB are the same algorithm. The difference you get is from the tfidf transformation, which penalises words that occur in lots of documents in your corpus.
My advice:
Use tfidf and tune the sublinear_tf and binary parameters and the normalization parameters of TfidfVectorizer for features.... | I'm working on a machine learning application in Python (using the sklearn module), and am currently trying to decide on a model for performing inference. A brief description of the problem:
Given many instances of user data, I'm trying to classify them into various categories based on relative keyword containment. It ... | 0 | 1 | 2,014 |
0 | 34,787,266 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-01-14T10:30:00.000 | 2 | 2 | 0 | meaning of "." after first element in numpy array | 34,787,213 | 0.197375 | python,numpy | It has nothing to do with the array. 1. means 1.0. 1. is a float, 1 is an int. | What is the difference between numpy.array([[1., 2], [3, 4], [5, 6]]) and numpy.array([[1, 2], [3, 4], [5, 6]])? I came across code using these two different types of declaration but could not find the meaning. | 0 | 1 | 38 |
0 | 34,788,487 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2016-01-14T11:02:00.000 | 1 | 2 | 0 | CSV format data manipulation: why use python scripts instead of MS excel functions? | 34,787,957 | 1.2 | python,excel,csv | Using Python is recommended for the scenarios below:
Repeated actions: performing a similar set of actions over a similar dataset repeatedly. For example, say you get monthly forecast data and you have to perform various slicing & dicing and plotting. Here the structure of the data and the steps of analysis are more or less the same, bu... | I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation t... | 0 | 1 | 236 |
0 | 34,788,093 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-01-14T11:02:00.000 | 0 | 2 | 0 | CSV format data manipulation: why use python scripts instead of MS excel functions? | 34,787,957 | 0 | python,excel,csv | After learning Python, you are more flexible. The operations you can do on the user interface of MS Excel are limited, whereas there are no limits if you use Python.
The benefit is also that you automate the modifications, e.g. you can re-use them or re-apply them to a different dataset. The speed depends heavily on... | I am currently working on large data sets in csv format. In some cases, it is faster to use excel functions to get the work done. However, I want to write python scripts to read/write csv and carry out the required function. In what cases would python scripts be better than using excel functions for data manipulation t... | 0 | 1 | 236 |
0 | 34,796,644 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-01-14T17:37:00.000 | 0 | 3 | 0 | How do you find the largest/smallest number amongst several array's? | 34,796,147 | 0 | python,arrays,numpy | Combine your arrays into one, then take the min/max along the new axis.
A = np.array([a1,a2, ... , an])
A.min(axis=0), A.max(axis=0) | I'm trying to get the largest/smallest number returned out of two or more numpy.array of equal length. Since max()/min() function doesn't work on multiple arrays, this is some of the best(worst) I've come up with:
max(max(a1), max(a2), max(a3), ...) / min(min(a1), min(a2), min(a3), ...)
Alternately one can use numpy's ... | 0 | 1 | 143 |
0 | 53,392,066 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2016-01-14T22:56:00.000 | 0 | 8 | 0 | tensorflow: how to rotate an image for data augmentation? | 34,801,342 | 0 | python,tensorflow | For rotating an image or a batch of images counter-clockwise by multiples of 90 degrees, you can use tf.image.rot90(image,k=1,name=None).
k denotes the number of 90 degrees rotations you want to make.
In case of a single image, image is a 3-D Tensor of shape [height, width, channels] and in case of a batch of images, i... | In tensorflow, I would like to rotate an image from a random angle, for data augmentation. But I don't find this transformation in the tf.image module. | 0 | 1 | 28,208 |
0 | 34,815,680 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2016-01-15T15:51:00.000 | 5 | 1 | 0 | Which one is faster? Logistic regression or SVM with linear kernel? | 34,814,891 | 1.2 | python-2.7,machine-learning,svm,logistic-regression | Faster is a bit of a weird question, in part because it is hard to compare apples to apples on this, and it depends on context. LR and SVM are very similar in the linear case. The TLDR for the linear case is that Logistic Regression and SVMs are both very fast and the speed difference shouldn't normally be too large, a... | I am doing machine learning with python (scikit-learn) using the same data but with different classifiers. When I use 500k of data, LR and SVM (linear kernel) take about the same time, SVM (with polynomial kernel) takes forever. But using 5 million data, it seems LR is faster than SVM (linear) by a lot, I wonder if thi... | 0 | 1 | 4,058 |
0 | 34,821,190 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-01-15T22:39:00.000 | 2 | 2 | 0 | Length of comprehensions in Python | 34,821,065 | 1.2 | python,list-comprehension | No, this is impossible with just the length of the inputs. You can use math to determine the length by computing common prime factors, but the work involved would not improve upon just computing the results and taking the len of that, and it requires knowledge of the set contents, not just their length.
After all, with... | New at Python, so please...
Just came across comprehensions and I understand that they are soon going to possibly ramify into perhaps dot products or matrix multiplications (although the fact that the result is a set makes them more interesting), but I at this point I want to ask whether there is any formula to determi... | 0 | 1 | 95 |
0 | 34,841,121 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-01-16T15:13:00.000 | 0 | 1 | 0 | Does the Matplotlib pgf backend support transparency? | 34,828,545 | 1.2 | python,matplotlib,latex,pgf | Yes. The .pgf backend does support transparency. If the *.png and *.pdf files come out as transparent but the *.pgf does not, then it may be a problem with your viewer or TeX packages.
For me it was the package "transparent", which enables transparent text on pictures but which I wasn't actually using, that clashed with p... | I'm currently creating graphics using the pgf backend for matplotlib. It works very well for integrating graphs generated in python in latex. However, transparency does not seem to be supported, even though I believe this should be possible in pgf. I am currently using version 1.5.1 of matplotlib. | 0 | 1 | 1,196 |
0 | 34,874,093 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-01-18T14:25:00.000 | 0 | 1 | 0 | work on distinct elements of RDD-pyspark | 34,857,074 | 0 | python,pyspark,spark-streaming,rdd | So what I did was define a function that checks if I have seen that name in the past, and then use .filter(myfunc) to only work with the names I want...
The problem now is that in each new streaming window the function is applied from the beginning, so if I have seen the name John in the first window 7 tim... | I am receiving data from Kafka into a Spark Streaming application. It comes in the format of Transformed DStreams. I then keep only the features I want.
features=data.map(featurize)
which gives me the "name","age","whatever".
I then want to keep only the name of all the data
features=data.map(featurize).map(lambda Name... | 0 | 1 | 184 |
0 | 34,868,531 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-01-18T17:07:00.000 | 3 | 1 | 0 | unable to run tensorflow on multiple GPUs | 34,860,281 | 1.2 | python,gpu,tensorflow | The problem disappeared after I installed an older version (352.55) of the Nvidia driver. | I am running the cifar10 multi-GPU example from the tensorflow repository. I am able to utilize more than one GPU. My ubuntu PC has two Titan X's, and I see memory is fully occupied by the process on both GPUs. However, only one GPU is actually computing. I obtain no speedup. I have tried tensorflow 0.5.0 and 0.6.0 pip b... | 0 | 1 | 749 |
0 | 34,869,483 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-01-19T05:21:00.000 | 0 | 1 | 0 | Python/OpenCV/Mac ImportError: numpy.core.multiarray failed to import | 34,869,018 | 1.2 | python,macos,opencv | I've found the reason: there is a file named time.py in the same folder. I'm sure that's the reason I failed to import numpy.
Plus, if I put the file time.py in the same folder and run python test.py, then I get the message "TypeError: 'module' object is not callable"
Next, without closing the console, I delete the ... | When I put my python code in the "~/Downloads/" folder, it works.
However, it failed and gave me the message "ImportError: numpy.core.multiarray failed to import" when I put the test.py file in a deep location like "/Git/Pyehon/....." Why?
I run this on Mac | 0 | 1 | 858 |
0 | 39,321,804 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-01-19T07:45:00.000 | 1 | 2 | 0 | Pandas dropna does not work as expected on a MultiIndex | 34,871,128 | 0.099668 | python,pandas | For me this actually worked:
df1 = df1[pd.notnull(df1['Column Name'])] | I have a Pandas DataFrame with a multiIndex. The index consists of a date and a text string. Some of the values are NaN and when I use dropna(), the row disappears as expected. However, when I look at the index using df.index, the dropped dates are still there. This is problematic as when I use the to_panel function, t... | 0 | 1 | 1,798 |
0 | 34,883,536 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-01-19T17:09:00.000 | 1 | 1 | 0 | How do I return a random element from 2 numpy array without repeats? | 34,882,862 | 1.2 | python,python-2.7,numpy | Without seeing any code, this is what I would try.
Make an identically sized 2D array with just Booleans all set to True (available) by default
When your code randomly generates an X,Y location in your 2D array, check the Availability array first:
If the value at that location is True (available), return that value... | I have a very big multidimensional array, for example 2D, inside a for loop.
I would like to return one element from this array at each iteration, and this element should not have been returned before. I mean, return each element only once across the iterations. | 0 | 1 | 38 |
0 | 46,003,443 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2016-01-20T21:48:00.000 | 0 | 1 | 0 | Is there a way to tell NLTK that a certain word isn't a proper noun but a noun? | 34,911,264 | 0 | python,nlp,nltk | Summing it up, you have the following options:
Correcting the tag in the post-processing - a bit ugly but quick and easy.
Employ an external Named Entity Recognizer (Stanford NER, as @Bob Dylan has thoughtfully suggested) - this one is more involved, particularly because Stanford NER is in Java and is not particularly f...
I'd like to use nltk to tell me that the noun of a sentence was multiple sclerosis. Problem is, doctors frequently refer to multiple sclerosis as MS which nltk picks up as a proper noun.
For example, this sentence, "His MS wa... | 0 | 1 | 436 |
0 | 34,931,076 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-01-21T17:59:00.000 | 0 | 1 | 0 | seaborn plots the same size? | 34,930,986 | 0 | python,matplotlib,size,seaborn | Seaborn sizing options vary by the plot type, which can be a bit confusing, so this is a useful universal approach.
First run this: import matplotlib.pyplot as plt
Then add the line plt.figure(figsize=(9, 9)) in the notebook cells for each of the plots. You can adjust the integer values as you see fit. | seaborn has a conveninent keyword named size=, that aims to make the plots a certain size. However, the plots significantly differ in size depending on the xy-ticks and the axis labels. What is the best way to generate plots with exactly the same dimensions regardless of ticks and axis labels? | 0 | 1 | 219 |
0 | 37,116,604 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2016-01-22T05:06:00.000 | 1 | 3 | 0 | Can Python's pickle/cpickle/dill speed up imports? | 34,939,388 | 1.2 | python,import,pickle,dill | The import latency is most likely due to loading the dependent shared objects of the GEOS-library.
Optimising this could maybe be done, but it would be very hard. One way would be to build a statically compiled custom Python interpreter with all DLLs and extension modules built in. But maintaining that would be a major PI... | Can pickle/dill/cpickle be used to pickle an imported module to improve import speed? The Shapely module for example takes 5 seconds on my system to find and load all of the required dependencies, which I'd really like to avoid.
Can I pickle my imports once, then reuse that pickle instead of having to do slow imports ... | 0 | 1 | 627 |
0 | 57,711,857 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-01-23T20:03:00.000 | 3 | 2 | 0 | counting the number of non-zero numbers in a column of a df in pandas/python | 34,968,223 | 0.291313 | python,numpy,pandas | Numpy's count_nonzero function is efficient for this.
np.count_nonzero(df["c"]) | I have a df that looks something like:
a b c d e
0 1 2 3 5
1 4 0 5 2
5 8 9 6 0
4 5 0 0 0
I would like to output the number of numbers in column c that are not zero. | 0 | 1 | 21,246 |
0 | 45,060,104 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-01-24T00:34:00.000 | 0 | 1 | 0 | Training, testing, and validation sets for bidirectional LSTM (BLSTM) | 34,970,818 | 1.2 | python,neural-network,time-series,keras,recurrent-neural-network | I think this has more to do with your particular dataset than Bi-LSTMs in general.
You're confusing splitting a dataset for training/testing vs. splitting a sequence in a particular sample. It seems like you have many different subjects, which constitute a different sample. For a standard training/testing split, you wo... | When it comes to normal ANNs, or any of the standard machine learning techniques, I understand what the training, testing, and validation sets should be (both conceptually, and the rule-of-thumb ratios). However, for a bidirectional LSTM (BLSTM) net, how to split the data is confusing me.
I am trying to improve predict... | 0 | 1 | 1,028 |
0 | 34,978,549 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-01-24T11:53:00.000 | 1 | 1 | 0 | igraph community detection result has too much overlap | 34,975,419 | 0.197375 | python-2.7,cluster-analysis,igraph,k-means | Your approach doesn't work because the fast greedy community detection expects similarities as weights, not distances.
(Actually, this is probably only one of the reasons. The other is that the community detection algorithms in igraph were designed for sparse graphs. If you have calculated all the distances between all... | I have a series of points (long, lat)
1) Found the haversine distance between all the points
2) Saved this to a csv file (source, destination, weight)
3) Read the csv file and generated weighted a graph (where weight is the haversine distance)
4) Used igraphs community detection algorithm - fastgreedy
I was expecting c... | 0 | 1 | 347 |
0 | 34,985,405 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2016-01-25T04:30:00.000 | 5 | 2 | 0 | py2exe: MKL FATAL ERROR: Cannot load mkl_intel_thread.dll | 34,985,134 | 0.462117 | python,matplotlib,py2exe | Never mind! I managed to solve it, by copying the required dll from inside numpy/core, into the dist folder that py2exe creates, not outside of it. | I'm trying to compile a python program in py2exe. It is returning a bunch of missing modules, and when I run the executable, it says: "MKL FATAL ERROR: Cannot load mkl_intel_thread.dll"
All my 'non-plotting' scripts work perfectly; just the scripts utilizing 'matplotlib' and 'pyqtgraph' don't work.
I've even found the fil... | 0 | 1 | 2,857 |
0 | 35,020,997 | 0 | 0 | 0 | 0 | 1 | true | 8 | 2016-01-25T10:40:00.000 | 8 | 1 | 0 | Azure Machine Learning Request Response latency | 34,990,561 | 1.2 | python,azure,azure-machine-learning-studio | First, I am assuming you are doing your timing test on the published AML endpoint.
When a call is made to AML, the first call must warm up the container. By default a web service has 20 containers. Each container is cold, and a cold container can cause a large (30 sec) delay. In the string returned by the AML endpoin... | I have made an Azure Machine Learning Experiment which takes a small dataset (12x3 array) and some parameters and does some calculations using a few Python modules (a linear regression calculation and some more). This all works fine.
I have deployed the experiment and now want to throw data at it from the front-end of ... | 0 | 1 | 1,088 |
0 | 34,992,250 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-01-25T11:25:00.000 | 1 | 1 | 0 | Optimizing scipy.spatial.Delaunay.find_simplex | 34,991,430 | 1.2 | python,search,scipy,triangulation,delaunay | You can try a point-location test, especially the Kirkpatrick algorithm/data structure. Basically you subdivide the mesh along both axes and re-triangulate it. A better and simpler solution is to give each triangle a color, draw a bitmap, and then check the color of the bitmap at the point. | I have a set of points in a plane where each point has an associated altitude. I'm thinking of using the scipy.spatial library to compute the Delaunay triangulation of the point set and then use the result to interpolate for the points in between.
The library implements a nice function that, given a point, finds the tr... | 0 | 1 | 651 |
0 | 35,028,696 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-01-25T15:01:00.000 | 0 | 1 | 0 | Matplotlib hexbin - get bin borders | 34,995,645 | 0 | python,matplotlib | I'm wishing to do something similar for small hexbins, thinking to:
(1) Get the hexbin centres:
hexobj_cen=hexobj.get_offsets()
lon_hex=hexobj_cen[:,0] #hexbin lon centre
lat_hex=hexobj_cen[:,1] #hexbin lat centre
(2) Run a for loop (for each hexbin centre) to find the Cartesian distance (N.hypo... | Is there a way to get the borders of a matplotlib.pyplot.hexbin plot?
Say I have a pd.DataFrame with spatial latitude and longitude values, which I plot in a hexbin plot. Afterwards I want to assign the corresponding bin of the hexbin grid to each instance of my DataFrame, by checking if the latitude and longitude val... | 0 | 1 | 399 |
0 | 35,013,791 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-01-25T17:10:00.000 | 1 | 2 | 0 | Broadcast large array in pyspark (~ 8GB) | 34,998,280 | 0.099668 | python,apache-spark,python-3.4,pyspark | This is not a problem of PySpark; this is a limitation of Spark's implementation.
Spark uses a Scala array to store the broadcast elements, and since the max Integer of Scala is 2*10^9, the total string size is 2*2*10^9 bytes = 4GB; you can view the Spark code. | In Pyspark, I am trying to broadcast a large numpy array of size around 8GB. But it fails with the error "OverflowError: cannot serialize a string larger than 4GiB". I have 15g in executor memory and 25g driver memory. I have tried using the default and Kryo serializers. Both did not work and show the same error.
Can anyone sug... | 0 | 1 | 3,217 |
0 | 35,015,910 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-01-25T20:04:00.000 | 0 | 3 | 0 | Cut selected data from daily precipitation CSV files | 35,001,306 | 0 | python,csv | Apart from extracting the data, the first thing you need to do is rearrange your data.
As it is now, 191 columns are added every day. To do that, the whole file needs to be parsed (probably in memory, data growing every day), data gets added to the end of each row, and everything has to be fully written to disk again.
... | I have a csv file containing daily precipitation data (253 rows and 191 columns per day), so for one year I have 191 * 365 columns.
I want to extract data for a certain row and column that are my area of interest, for example row 20 and column 40 for the first day; days 2, 3, 4 ... 365 have the same distance between the colum... | 0 | 1 | 91 |
0 | 35,004,791 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-01-25T23:46:00.000 | 2 | 1 | 0 | reading a large dataset in tensorflow | 35,004,619 | 1.2 | python,deep-learning,tensorflow | The amount of pre-fetching depends on your queue capacity. If you use string_input_producer for your filenames and batch for batching, you will have 2 queues - filename queue, and prefetching queue created by batch. Queue created by batch has default capacity of 32, controlled by batch(...,capacity=) argument, therefor... | I am not quite sure about how file-queue works. I am trying to use a large dataset like imagenet as input. So preloading data is not the case, so I am wondering how to use the file-queue. According to the tutorial, we can convert data to TFRecords file as input. Now we have a single big TFRecords file. So when we speci... | 0 | 1 | 2,670 |
0 | 38,405,970 | 0 | 0 | 0 | 0 | 3 | false | 90 | 2016-01-28T00:21:00.000 | 81 | 8 | 0 | How big should batch size and number of epochs be when fitting a model? | 35,050,753 | 1 | python,machine-learning,deep-learning | Since you have a pretty small dataset (~ 1000 samples), you would probably be safe using a batch size of 32, which is pretty standard. It won't make a huge difference for your problem unless you're training on hundreds of thousands or millions of observations.
To answer your questions on Batch Size and Epochs:
In gene... | My training set has 970 samples and validation set has 243 samples.
How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size? | 0 | 1 | 132,079 |
0 | 38,457,655 | 0 | 0 | 0 | 0 | 3 | false | 90 | 2016-01-28T00:21:00.000 | 11 | 8 | 0 | How big should batch size and number of epochs be when fitting a model? | 35,050,753 | 1 | python,machine-learning,deep-learning | I use Keras to perform non-linear regression on speech data. Each of my speech files gives me features that are 25000 rows in a text file, with each row containing 257 real valued numbers. I use a batch size of 100, epoch 50 to train Sequential model in Keras with 1 hidden layer. After 50 epochs of training, it conver... | My training set has 970 samples and validation set has 243 samples.
How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size? | 0 | 1 | 132,079 |
0 | 44,901,953 | 0 | 0 | 0 | 0 | 3 | false | 90 | 2016-01-28T00:21:00.000 | 7 | 8 | 0 | How big should batch size and number of epochs be when fitting a model? | 35,050,753 | 1 | python,machine-learning,deep-learning | I used Keras to perform non-linear regression for market mix modelling. I got the best results with a batch size of 32 and epochs = 100 while training a Sequential model in Keras with 3 hidden layers. Generally a batch size of 32 or 25 is good, with epochs = 100, unless you have a large dataset. In the case of a large dataset you can... | My training set has 970 samples and validation set has 243 samples.
How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size? | 0 | 1 | 132,079 |
0 | 35,064,268 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-01-28T14:17:00.000 | 1 | 1 | 0 | reading the last index from a csv file using pandas in python2.7 | 35,063,946 | 1.2 | python-2.7,csv,pandas,pandasql | Reading the entire index column still requires reading and parsing the whole file.
If no fields in the file are multiline, you could scan the file backwards to find the first newline (checking whether there is a trailing newline past the data). The value following that newline will be your last index.
Storing the last index i... | I have a .csv file on disk, formatted so that I can read it into a pandas DataFrame easily, to which I periodically write rows. I need this database to have a row index, so every time I write a new row to it I need to know the index of the last row written.
There are plenty of ways to do this:
I could read the entir... | 0 | 1 | 511 |
0 | 35,069,535 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-01-28T18:40:00.000 | 1 | 2 | 0 | Rearranging Data in Pandas | 35,069,440 | 0.099668 | python,pandas | Look into the DataFrame.pivot method | I've been looking through the documentation (and stack overflow) and am having trouble figuring out how rearrange a pandas data frame the way described below. I wish to have a row where there is a column name, a row name and the value of that specific row and column:
Input:
A B C
X 1 2 3
Y 4 5 6
Output:
X A 1
X B ... | 0 | 1 | 121 |
0 | 35,090,610 | 0 | 0 | 0 | 0 | 2 | true | 19 | 2016-01-28T23:33:00.000 | 37 | 6 | 0 | How to copy/paste a dataframe from iPython into Google Sheets or Excel? | 35,074,209 | 1.2 | python,excel,google-sheets,ipython,ipython-notebook | Try using the to_clipboard() method. E.g., for a dataframe, df: df.to_clipboard() will copy said dataframe to your clipboard. You can then paste it into Excel or Google Docs. | I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython.
I know how to convert results to c... | 0 | 1 | 17,289 |
0 | 66,239,699 | 0 | 0 | 0 | 0 | 2 | false | 19 | 2016-01-28T23:33:00.000 | 1 | 6 | 0 | How to copy/paste a dataframe from iPython into Google Sheets or Excel? | 35,074,209 | 0.033321 | python,excel,google-sheets,ipython,ipython-notebook | Paste the output into an IDE like Atom and then paste it into Google Sheets/Excel | I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython.
I know how to convert results to c... | 0 | 1 | 17,289 |
0 | 39,967,831 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-01-29T13:30:00.000 | 3 | 3 | 0 | Installing OpenCV 3.1 on OS X El Capitan using Python 3.5.1 | 35,085,809 | 0.197375 | python,opencv,python-3.5 | For me the only working way was using conda:
conda install --channel https://conda.anaconda.org/menpo opencv3
and then import it using import cv2 | I have looked for a proper way to install OpenCV, but all I can find are people fudging around with Python 2.old or virtualenv or other things that are utterly irrelevant. I just want to be able to run import cv2 without any import errors.
How do I install OpenCV on OS X 10.11 for use with Python 3.5.1? | 0 | 1 | 4,202 |
0 | 35,093,550 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-01-29T18:11:00.000 | 0 | 1 | 0 | Are there performance benchmarks of NumPy arrays in an IPython Notebook versus a .py script file? | 35,091,235 | 0 | arrays,python-3.x,numpy,ipython-notebook | I have never noticed any performance penalty (5-6 million x 8 arrays here) with IPython/Jupyter, but even if there is some small difference it is unlikely to be noticeable. A much greater speed increase with a similarly low effort would come from writing performance sensitive code in cython, adding type annotations in ... | I'm working with huge multidimensional NumPy arrays in an IPython notebook with Python3 and things are slow going.
Is it appreciably quicker to convert the .ipynb file into a .py file and run via the command line? | 0 | 1 | 155 |
0 | 35,121,242 | 0 | 0 | 0 | 0 | 1 | true | 39 | 2016-02-01T00:13:00.000 | 14 | 3 | 0 | Reading a pickle file (PANDAS Python Data Frame) in R | 35,121,192 | 1.2 | python,r,pandas,dataframe | Edit: If you can install and use the {reticulate} package, then this answer is probably outdated. See the other answers below for an easier path.
You could load the pickle in python and then export it to R via the python package rpy2 (or similar). Once you've done so, your data will exist in an R session linked to py... | Is there an easy way to read pickle files (.pkl) from Pandas Dataframe into R?
One possibility is to export to CSV and have R read the CSV but that seems really cumbersome for me because my dataframes are rather large. Is there an easier way to do so?
Thanks! | 0 | 1 | 36,680 |
0 | 35,149,501 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-02-01T04:59:00.000 | 2 | 2 | 0 | Word clustering in python | 35,123,248 | 0.197375 | python,machine-learning,cluster-analysis,word | Word clustering will be really disappointing because the computer does not understand language.
You could use levenshtein distance and then do hierarchical clustering.
But:
dog and fog have a distance of 1, i.e. are highly similar.
dog and cat have 3 out of 3 letters different.
So unless you can define a good measure... | How to cluster only words in a given set of data: I have been going through a few algorithms online, like the k-means algorithm, but they seem to be related to document clustering instead of word clustering. Can anyone suggest a way to cluster only the words in a given set of data?
Please note that I am new to Python. | 0 | 1 | 4,147 |
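A minimal sketch of the Levenshtein-plus-hierarchical-clustering idea from the answer above, with a hand-rolled edit distance and scipy; the word list and the distance cutoff of 2 are made up for illustration:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def levenshtein(a, b):
        # classic dynamic-programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    words = ["dog", "fog", "log", "cat", "cart"]
    dist = np.array([[levenshtein(a, b) for b in words] for a in words], dtype=float)
    links = linkage(squareform(dist), method="average")
    print(dict(zip(words, fcluster(links, t=2, criterion="distance"))))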
0 | 35,147,923 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-02-01T09:27:00.000 | 1 | 1 | 0 | How can I know if the epoch point is reached in seq2seq model? | 35,126,954 | 0.197375 | python,tensorflow,recurrent-neural-network | It looks like there is a difference between your dev and train data:
global step 374600 learning rate 0.0069 step-time 1.92 perplexity 1.02
eval: bucket 0 perplexity 137268.32
Your training perplexity is 1.02 -- the model is basically perfect on the data it receives for training. But your dev perplexity is enormous, ... | I have been training a seq2seq model for many days on a custom parallel corpus of about a million sentences, with default settings for the seq2seq model.
The following is the output log, which has crossed the 350k steps mentioned in the tutorial. I saw that the bucket perplexity has suddenly increased significantly and the overall tr... | 0 | 1 | 972 |
0 | 35,132,831 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-02-01T14:06:00.000 | 0 | 2 | 0 | Use a different estimator based on value | 35,132,569 | 0 | python,feature-selection,supervised-learning | I personally am new to Python, but I would use a list. I would then do a membership check against the list you just wrote: if the value is a member, run the RandomForest regressor; if not, run another regressor. | What I'm trying to do is build a regressor based on a value in a feature.
That is to say, I have some columns where one of them is more important (let's suppose it is gender) (of course it is different from the target value Y).
I want to say:
- If the gender is Male then use the randomForest regressor
- Else use ano... | 0 | 1 | 47 |
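One way to realize the per-value routing described above is simply to fit one estimator per group; a sketch on made-up data:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    df = pd.DataFrame({"gender": ["M", "F", "M", "F"],
                       "x1": [1.0, 2.0, 3.0, 4.0],
                       "y": [2.1, 3.9, 6.2, 8.1]})
    models = {}
    for value, group in df.groupby("gender"):
        est = RandomForestRegressor(n_estimators=10) if value == "M" else LinearRegression()
        models[value] = est.fit(group[["x1"]], group["y"])
    # at prediction time, route each row to models[that row's gender value]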
0 | 35,172,568 | 0 | 1 | 0 | 0 | 2 | false | 53 | 2016-02-02T06:21:00.000 | 1 | 4 | 0 | Tensorflow python : Accessing individual elements in a tensor | 35,146,444 | 0.049958 | python,python-2.7,tensorflow | You simply can't get the value of the 0th element of [[1,2,3]] without run()-ning or eval()-ing an operation that would fetch it, because before you 'run' or 'eval' you only have a description of how to get this inner element (TF uses symbolic graphs/calculations). So even if you would use tf.gather/tf.slice, you st... | This question concerns accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (this can be done using .eval() or sess.run()), but it takes longer when the size of the tensor is huge.
Is there any method to do the same faster?
Thanks in Advance. | 0 | 1 | 79,845 |
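When many elements are needed, it is usually far cheaper to fetch the whole tensor in a single run() and index the resulting numpy array, rather than paying one round trip per element; a sketch with the TF 1.x-era session API used in this thread:

    import tensorflow as tf

    t = tf.constant([[1, 2, 3]])
    with tf.Session() as sess:
        arr = sess.run(t)  # one fetch for the whole tensor
    print(arr[0])          # [1 2 3] -- plain numpy indexing from here on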
0 | 35,148,137 | 0 | 1 | 0 | 0 | 2 | false | 53 | 2016-02-02T06:21:00.000 | 1 | 4 | 0 | Tensorflow python : Accessing individual elements in a tensor | 35,146,444 | 0.049958 | python,python-2.7,tensorflow | I suspect it's the rest of the computation that takes time, rather than accessing one element.
Also, the result might require a copy from whatever memory it's stored in, so if it's on the graphics card it will need to be copied back to RAM first before you get access to your element. If this is the case you might skip i... | This question concerns accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (this can be done using .eval() or sess.run()), but it takes longer when the size of the tensor is huge.
Is there any method to do the same faster?
Thanks in Advance. | 0 | 1 | 79,845 |
0 | 35,153,245 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-02-02T11:21:00.000 | 0 | 2 | 0 | Take input of arbitrary size in theano | 35,152,052 | 0 | python-2.7,theano | You need to do some data formatting. The input size of a NN is constant, so if the images for your CNN have different sizes, you need to resize them to your input size before feeding them in. It's like a person being too close to or far away from a painting: your field of view is constant, in order to see everything clearly ... | I built a CNN in Theano. The inputs are many images, but their sizes differ, while the elements of a numpy.array must all have the same size.
How can I use them as the input?
Thanks a lot. | 0 | 1 | 61 |
0 | 35,159,902 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-02-02T17:16:00.000 | 1 | 2 | 0 | Python set number of arguments to capture | 35,159,748 | 1.2 | python | Is there a way to use the def foo(*x) notation to let Python know it needs a certain range of numbers of arguments?
Nope. Also, scipy.optimize.curve_fit ultimately gets its argument count information from f.__code__.co_argcount, not co_nlocals or n_locals (which doesn't exist). | I was working on a project where I was doing regressions, and I wanted to use scipy.optimize.curve_fit, which takes a function and tries to find the right parameters for it. The odd part was that it was never given how many parameters the function took. Eventually we guessed that it used foo.__code__.co_nlocals, but in the ... | 0 | 1 | 40 |
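A quick illustration of the co_argcount introspection mentioned above:

    def model(x, a, b):
        return a * x + b

    print(model.__code__.co_argcount)  # 3 -- the count curve_fit-style introspection sees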
0 | 50,763,868 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2016-02-02T21:04:00.000 | 0 | 4 | 0 | Theano Dimshuffle equivalent in Google's TensorFlow? | 35,163,789 | 0 | python,numpy,theano,tensorflow | tf.transpose is probably what you are looking for. It takes an arbitrary permutation. | I have seen that transpose and reshape together can help, but I don't know how to use them.
E.g. dimshuffle(0, 'x')
What is its equivalent using transpose and reshape? Or is there a better way?
Thank you. | 0 | 1 | 4,682 |
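A sketch of the two common dimshuffle patterns in TensorFlow: dimshuffle(0, 'x'), which adds a broadcastable axis, maps to tf.expand_dims, while pure permutations map to tf.transpose:

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0])       # shape (3,)
    col = tf.expand_dims(x, 1)             # shape (3, 1), like dimshuffle(0, 'x')

    m = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    mt = tf.transpose(m, perm=[1, 0])      # like dimshuffle(1, 0)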
0 | 36,043,728 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-02-03T17:16:00.000 | 0 | 1 | 1 | Creating Contingency Solution Output File for PSS/E using Python 2.7 | 35,183,538 | 1.2 | python,python-2.7 | @Magalhaes, the auxiliary files *.sub, *.mon and *.con are input files. You have to write them; PSSE doesn't generate them. Your recording shows that you defined a bus subsystem twice, generated a *.dfx from existing auxiliary files, ran an AC contingency solution, then generated an *.acc report. So when you did thi... | I'm using python to interact with PSS/E (siemens software) and I'm trying to create *.acc file for pss/e, from python. I can do this easily using pss/e itself:
1 - create *.sub, *.mon, *.con files
2 - create respective *.dfx file
3 - and finally create *.acc file
The idea is to perform all these 3 tasks automatically, ... | 0 | 1 | 1,112 |
0 | 35,185,050 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-02-03T18:27:00.000 | 2 | 2 | 0 | Shape Mismatch Numpy | 35,184,815 | 0.197375 | python,numpy | In numpy, (10, 1), (10,) are not the same at all:
(10, 1) is a two dimensional array, with a single column.
(10, ) is a one dimensional array
If you have an array a, and print out len(a.shape), you'll see the difference. | I am continuously getting the error:
"(shapes (10, 1), (10,) mismatch)"
when doing a NumPy operation and I am somewhat confused.
Wouldn't (10,1) and (10,) be identical shapes? And if for whatever reason this is not valid, is there a way to convert (10,1) to (10,)? I cannot seem to find it in the NumPy documentation.
Tha... | 0 | 1 | 1,521 |
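Converting between the two shapes is a one-liner; a quick sketch:

    import numpy as np

    a = np.zeros((10, 1))
    b = a.ravel()               # shape (10,); a.reshape(10) or a.squeeze() also work
    print(a.shape, b.shape)     # (10, 1) (10,)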
0 | 35,184,973 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-02-03T18:31:00.000 | 1 | 1 | 0 | Limiting the number of GB to read in read_csv in Pandas | 35,184,894 | 0.197375 | python-3.x,pandas | You can pass nrows=number_of_rows_to_read to your read_csv function to limit the lines that are read. | I often work with csv files that are 100s of GB in size. Is there any way to tell read_csv to only read a fixed number of MB from a csv file?
Update:
It looks like chunks and chunksize can be used for this, but the documentation looks a bit slim here. What would be an example of how to do this with a real csv file? (e... | 0 | 1 | 779 |
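A sketch of both options against a hypothetical big.csv; note read_csv limits by row count, not by MB, so you pick a row budget:

    import pandas as pd

    head = pd.read_csv("big.csv", nrows=100000)  # read only the first 100k rows

    for chunk in pd.read_csv("big.csv", chunksize=100000):
        pass  # process each 100k-row DataFrame chunk here; memory stays bounded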
0 | 35,213,773 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2016-02-04T23:19:00.000 | 5 | 4 | 0 | NumPy calculate square of norm 2 of vector | 35,213,592 | 0.244919 | python,numpy,inner-product | I don't know if the performance is any good, but (a**2).sum() calculates the right value and has the non-repeated argument you want. You can replace a with some complicated expression without binding it to a variable, just remember to use parentheses as necessary, since ** binds more tightly than most other operators: ... | I have vector a.
I want to calculate np.inner(a, a)
But I wonder whether there is a prettier way to compute it.
[The disadvantage of this approach is that if I want to calculate it for a - b or a slightly more complex expression, I need one more line: c = a - b and then np.inner(c, c), instead of something like somewhat(a - b).] | 0 | 1 | 38,727 |
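A sketch of the one-expression form next to np.inner for comparison:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([1.0, 0.0, 1.0])
    print(np.inner(a - b, a - b))  # works, but the expression appears twice
    print(((a - b) ** 2).sum())    # same value, expression appears once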
0 | 35,238,504 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-02-05T16:37:00.000 | 1 | 1 | 0 | Matplotlib spectrogram versus STFT | 35,229,136 | 1.2 | python,matplotlib,fft,spectrogram | The redundancy is because you input a strictly real signal to your FFT, thus the DFT result is complex conjugate (Hermitian) symmetric. This redundancy is due to the fact that all the imaginary components of strictly real input are zero. But the output of this DFT can include non-zero imaginary components to indicate... | I'm currently computing the spectrogram with the matplotlib. I specify NFFT=512 but the resulting image has a height of 257. I then tried to just do a STFT (short time fourier transform) which gives me 512 dimensional vectors (as expected). If I plot the result of the STFT I can see that half of the 512 values are just... | 0 | 1 | 1,806 |
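The 257 is NFFT // 2 + 1 unique bins for a real input frame; a quick check with numpy, assuming the same 512-point window:

    import numpy as np

    x = np.random.randn(512)      # real-valued frame
    print(np.fft.rfft(x).shape)   # (257,) == 512 // 2 + 1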
0 | 35,237,068 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-02-06T03:00:00.000 | 0 | 1 | 0 | tensorflow no module named example.tutorials.mnist.input_data | 35,236,851 | 0 | python-2.7,tensorflow | @mkarlovitz Looks like /Library/Python/2.7/site-packages/ is not in the list of paths python is looking for.
To see which paths Python uses to find packages, do the following (you can use the command line for this):
1. import sys
2. sys.path ( this prints the list of paths )
If the /Library/Python/2.7/site-packages... | I've installed TensorFlow on Mac OS X.
Successfully ran simple command line test.
Now trying the first tutorial.
Fail on the first python line:
[python prompt:]
import tensorflow.examples.tutorials.mnist.input_data
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named examples.tutoria... | 0 | 1 | 5,529 |
0 | 35,248,119 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2016-02-06T03:34:00.000 | 9 | 1 | 0 | What is the difference between xgboost, ExtraTreesClassifier, and RandomForestClassifier? | 35,237,044 | 1 | python,random-forest,xgboost,kaggle | Extra-trees (ET), aka extremely randomized trees, is quite similar to random forest (RF). Both methods are bagging methods aggregating fully grown decision trees. RF will only try to split by e.g. a third of the features, but evaluate any possible break point within these features and pick the best. However, ET will only... | I am new to all these methods and am trying to get a simple answer, or perhaps someone could direct me to a high-level explanation somewhere on the web. My googling only returned Kaggle sample code.
Are extra-trees and random forest essentially the same? And xgboost uses boosting when it chooses the featur... | 0 | 1 | 2,360 |
0 | 35,237,949 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-02-06T05:53:00.000 | 0 | 3 | 0 | Python Pandas largest number | 35,237,874 | 0 | python-2.7,pandas | If the operations are done in the pydata stack (numpy/pandas), you're limited to
fixed-precision numbers, up to 64-bit.
Arbitrary-precision numbers as strings, perhaps? | I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas DataFrame and then try to take the 15th power, I get a negative number.
Example,
1456 ** 15 = 280169351358921184433812095498240410552501272576L, h... | 0 | 1 | 112 |
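A sketch contrasting the fixed-width int64 wrap-around with an object-dtype column, a slightly different workaround from the string suggestion above, which keeps Python's arbitrary-precision ints:

    import numpy as np
    import pandas as pd

    s64 = pd.Series([1456], dtype=np.int64)
    print((s64 ** 15).iloc[0])   # overflows 64-bit and wraps, hence the bogus value

    sobj = pd.Series([1456], dtype=object)
    print((sobj ** 15).iloc[0])  # exact arbitrary-precision Python int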
0 | 35,237,988 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-02-06T05:53:00.000 | 0 | 3 | 0 | Python Pandas largest number | 35,237,874 | 0 | python-2.7,pandas | I was able to overcome this by changing the data type from int to float, as doing this gives the answer 290 ** 15 = 8.629189e+36, which is good enough for my exercise. | I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas DataFrame and then try to take the 15th power, I get a negative number.
Example,
1456 ** 15 = 280169351358921184433812095498240410552501272576L, h... | 0 | 1 | 112 |
0 | 45,540,101 | 1 | 0 | 0 | 0 | 1 | false | 4 | 2016-02-06T17:03:00.000 | 0 | 3 | 0 | calculate indegree centralization of graph with python networkx | 35,243,795 | 0 | python,networkx,graph-theory | This answer has been taken from a Google Groups thread on the issue (in the context of using R) that helps clarify the maths when taken along with the above answer:
Freeman's approach measures "the average difference in centrality
between the most central actor and all others".
This 'centralization' is exactly captured in the... | I have a graph and want to calculate its indegree and outdegree centralization. I tried to do this by using python networkx, but there I can only find a method to calculate indegree and outdegree centrality for each node. Is there a way to calculate in- and outdegree centralization of a graph in networkx? | 0 | 1 | 2,646 |
0 | 35,256,493 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-02-07T00:54:00.000 | 3 | 4 | 0 | Tensorflow compat modules issues? | 35,248,476 | 0.148885 | python,tensorflow | You're most likely using an older version of TensorFlow. I just noticed that some of our install docs still link to 0.5 -- try upgrading to 0.6 or to head.
I'll fix the docs soon, but in the meantime, if you installed via pip, you can just change the 0.5 to 0.6 in the path. If you're building from source, just check ... | Getting the following error when working through the ipython notebooks on Google's tensorflow udacity course:
AttributeError: 'module' object has no attribute 'compat'
Trying to call:
tf.compat.as_str(f.read(name)).split()
Running on Ubuntu 14.04 and wondering if this is an early TensorFlow bug or just me bei... | 0 | 1 | 4,727 |
0 | 35,255,569 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-02-07T06:47:00.000 | 0 | 1 | 0 | Processing array larger than memory for training a neural net in python | 35,250,611 | 1.2 | python,machine-learning,neural-network,large-data | What you are probably looking for is minibatching. In general many methods of training neural nets are gradient based, and as your loss function is a function of trianing set - so is the gradient. As you said - it may exceed your memory. Luckily, for additive loss functions (and most you will ever use - are additive) o... | I am trying to train a neural net (backprop + gradient descent) in python with features I am constructing on top of the google books 2-grams (English), it will end up being around a billion rows of data with 20 features each row. This will easily exceed my memory and hence using in-memory arrays such as numpy would not... | 0 | 1 | 597 |
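A minimal minibatch generator of the kind described; for arrays bigger than RAM, X could be an np.memmap so that only each requested slice is pulled into memory:

    import numpy as np

    def minibatches(X, y, batch_size=256, shuffle=True):
        idx = np.arange(len(X))
        if shuffle:
            np.random.shuffle(idx)
        for start in range(0, len(X), batch_size):
            sel = idx[start:start + batch_size]
            yield X[sel], y[sel]  # one small batch per gradient step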
0 | 47,066,621 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-02-09T04:02:00.000 | 0 | 2 | 0 | How to cluster a time series using KMeans in python | 35,283,654 | 0 | python,numpy,pandas,machine-learning,scikit-learn | You can add more features based on the raw data, using methods like RFM analysis. RFM = recency, frequency, monetary.
For example:
How often did the user log in?
When did the user last log in? | So I have data in the form [UID obj1 obj2..] x timestamp, and I want to cluster this data in Python using KMeans from sklearn. Where should I start?
EDIT:
So basically I'm trying to cluster users based on clickstream data, and classify them based on usage patterns. | 0 | 1 | 2,413 |
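A sketch of the suggested pipeline on a made-up per-user feature matrix (RFM-style columns), scaled before clustering:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(100, 3)  # hypothetical: recency, frequency, monetary per user
    Xs = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=4, random_state=0).fit_predict(Xs)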
0 | 35,295,661 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-02-09T15:11:00.000 | 3 | 1 | 0 | How to drop a pandas dataframe after storing in database | 35,295,491 | 0.53705 | python,pandas | del dataframe will unpollute your namespace and free your memory, while dataframe = None will only free your memory. Hope that helps! | How do I drop a pandas dataframes after I store them in a database. I can only find a way to drop columns or rows from a dataframe but how can I drop a complete data frame to free my computer memory? | 0 | 1 | 86 |
0 | 35,321,747 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-02-10T15:01:00.000 | 1 | 1 | 0 | Modify kmeans algorithm for 1d array where order matters | 35,318,602 | 1.2 | python,cluster-analysis,data-mining,k-means | K-means is about minimizing the least squares. Among its largest drawbacks (there are many) is that you need to know k. Why do you want to inherit this drawback?
Instead of hacking k-means into not ignoring the order, why don't you instead look at time series segmentation and change detection approaches that are much ... | I want to find groups in a one-dimensional array where order/position matters. I tried to use numpy's kmeans2, but it works only when I have numbers in increasing order.
I have to maximize the average difference between neighbouring sub-arrays.
For example: if I have the array [1,2,2,8,9,0,0,0,1,1,1] and I want to get 4 groups, the res... | 0 | 1 | 112 |
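In the spirit of the segmentation/change-detection suggestion above, a simple heuristic that cuts the sequence at the k-1 largest jumps between neighbours (ties broken arbitrarily):

    import numpy as np

    a = np.array([1, 2, 2, 8, 9, 0, 0, 0, 1, 1, 1])
    k = 4
    cuts = np.sort(np.argsort(np.abs(np.diff(a)))[-(k - 1):] + 1)
    print(np.split(a, cuts))  # here: [1 2 2], [8 9], [0 0 0], [1 1 1]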
0 | 35,329,441 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-02-10T22:29:00.000 | 1 | 2 | 0 | How do I calculate linear trend for a multi-dimensional array in Python | 35,327,272 | 0.099668 | python,arrays,scipy,trend | I would look into numpy.polyfit but I'm not sure what performance gain it has over scipy.stats.linregress.
It's pretty fast from my experience. You might have to do some math on your own to get r and p values from residuals and covariance matrix. | I've got a 3d array of shape (time,latitude,longitude). I'd like to calculate the linear trend at each lon/lat point.
I know I can simply loop over all points and use scipy.stats.linregress at each point. However, that gets quite slow for large arrays.
The scipy function "detrend" can calculate and remove linear tren... | 0 | 1 | 1,476 |
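One vectorized route: np.polyfit accepts a 2-D y and fits every column at once, so the grid can be flattened, fit, and reshaped back; a sketch on random data:

    import numpy as np

    nt, nlat, nlon = 120, 10, 20
    data = np.random.rand(nt, nlat, nlon)
    t = np.arange(nt)
    coeffs = np.polyfit(t, data.reshape(nt, -1), 1)  # one linear fit per grid point
    slopes = coeffs[0].reshape(nlat, nlon)           # trend at each lat/lon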
0 | 35,330,365 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-02-11T03:29:00.000 | 3 | 3 | 0 | Why Numpy sometimes omits the dimension of an array | 35,330,282 | 0.197375 | python,arrays,numpy | Numpy does not omit the dimension of an array. It is a library built for multidimensional arrays (not just 1d or 2d arrays), and so it makes very clear distinctions between arrays of different dimensions (it cannot assume that any array is just a degenerate form of a higher dimension array, because the number of dimens... | I am a beginner user of Python. I used to work with matlab intensively. Now I am shifting to python. I have a question about the dimension of an array.
I import NumPy.
I first create an array X, then I use some built-in function, like sum, to play with my array. Eventually, when I try to check the dimension of my array... | 0 | 1 | 2,639 |
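A quick illustration of the reduction behaviour described above, with keepdims as the way to retain the axis:

    import numpy as np

    X = np.arange(12).reshape(3, 4)
    print(X.sum(axis=0).shape)                 # (4,)  -- the reduced axis disappears
    print(X.sum(axis=0, keepdims=True).shape)  # (1, 4) -- kept as a length-1 axis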
0 | 35,343,669 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2016-02-11T14:09:00.000 | 1 | 1 | 0 | Raspberry pi matrix multiplication | 35,341,566 | 1.2 | python,c,raspberry-pi,matrix-multiplication,raspberry-pi2 | Mathematica is part of the standard Raspbian distribution. It should be able to multiply matrices. | What matrix multiplication library would you recommend for Raspberry Pi 2?
I am thinking about BLAS or NumPy. What do you think?
I'm wondering if there is an external hardware module for matrix multiplication available.
Thank you! | 0 | 1 | 357 |