GUI and Desktop Applications
int64
0
1
A_Id
int64
5.3k
72.5M
Networking and APIs
int64
0
1
Python Basics and Environment
int64
0
1
Other
int64
0
1
Database and SQL
int64
0
1
Available Count
int64
1
13
is_accepted
bool
2 classes
Q_Score
int64
0
1.72k
CreationDate
stringlengths
23
23
Users Score
int64
-11
327
AnswerCount
int64
1
31
System Administration and DevOps
int64
0
1
Title
stringlengths
15
149
Q_Id
int64
5.14k
60M
Score
float64
-1
1.2
Tags
stringlengths
6
90
Answer
stringlengths
18
5.54k
Question
stringlengths
49
9.42k
Web Development
int64
0
1
Data Science and Machine Learning
int64
1
1
ViewCount
int64
7
3.27M
0
41,795,121
0
0
0
0
2
true
5
2017-01-22T19:00:00.000
9
2
0
Adding global attribute using xarray
41,794,956
1.2
python,netcdf,python-xarray
In Xarray, directly indexing a Dataset like hndl_nc['variable_name'] pulls out a DataArray object. To get or set attributes, index .attrs like hndl_nc.attrs['global_attribute'] or hndl_nc.attrs['global_attribute'] = 25. You can access both variables and attributes using Python's attribute syntax like hndl_nc.variable_o...
Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable.
0
1
6,426
0
46,549,251
0
0
0
0
2
false
5
2017-01-22T19:00:00.000
6
2
0
Adding global attribute using xarray
41,794,956
1
python,netcdf,python-xarray
I would add here that both Datasets and DataArrays can have attributes, both called with .attrs e.g. ds.attrs['global attr'] = 25 ds.variable_2.attrs['variable attr'] = 10 ds.variable_2.attrs['variable attr'] = 10
Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable.
0
1
6,426
0
50,193,390
0
0
0
0
1
false
4
2017-01-22T22:54:00.000
1
3
0
'Inverse' cumprod in pandas
41,797,071
0.066568
python,pandas
Just in case anyone else ends up here, let me provide a more generic answer. Suppose your DataFrame column, Series, vector, whatever, X has n values. At an arbitrary position i you'd like to get (X[i])*(X[i+1])*...*(X[n]), which is equivalent to (X[1])*(X[2])*...*(X[n]) / (X[1])*(X[2])*...*(X[i-1]). Therefore, you ...
I have a data frame which contains dates as index and a value column storing growth percentage between consecutive dates (i.e. dates in the index). Suppose I want to compute 'real' values by setting a 100 basis at the first date of the index and then iteratively applying the % of growth. It is easy with the cumprod met...
0
1
2,303
0
42,111,038
0
0
0
0
1
false
4
2017-01-23T12:14:00.000
2
2
0
Tensorflow: Is preprocessing on TFRecord files faster than real-time data preprocessing?
41,806,128
0.197375
python,machine-learning,tensorflow,computer-vision,deep-learning
I have been wondering the same thing and have been disappointed with my during-training-time image processing performance. It has taken me a while to appreciate quite how big an overhead the image manipulation can be. I am going to make myself a nice fat juicy preprocessed/augmented data file. Run it overnight and then...
In Tensorflow, it seems that preprocessing could be done on either during training time, when the batch is created from raw images (or data), or when the images are already static. Given that theoretically, the preprocessing should take roughly equal time (if they are done using the same hardware), is there any practic...
0
1
2,142
0
41,815,502
0
0
0
0
1
false
7
2017-01-23T19:07:00.000
4
2
0
What are the relative advantages of extending NumPy in Cython vs Boost.Python?
41,813,799
0.379949
numpy,cython,boost-python
For small one shot problems, I tend to prefer cython, for larger integration with c++ code bases, prefer boost Python. In part, it depends on the audience for your code. If you're working with a team with significant experience in python, but little experience of using C++, Cython makes sense. If you have a fixed code...
I need to speed up some algorithms working on NumPy arrays. They will use std::vector and some of the more advanced STL data structures. I've narrowed my choices down to Cython (which now wraps most STL containers) and Boost.Python (which now has built-in support for NumPy). I know from my experience as a programmer t...
0
1
2,636
0
42,273,559
0
0
0
0
1
true
0
2017-01-24T01:05:00.000
0
2
1
Amazon device farm - wheel file from macosx platform not supported
41,818,382
1.2
python-2.7,opencv,numpy,aws-device-farm,python-appium
(numpy-1.12.0-cp27-cp27m-manylinux1_x86_64.whl) is numpy wheel for ubuntu. But still Amazon device farm throws error while configuring tests with this wheel. Basically, Device farm is validating if the .whl file has prefix -none-any.whl Just renaming the file to numpy-1.12.0-cp27-none-any.whl works in device farm. No...
I am facing the following error on configuring Appium python test in AWS device farm: There was a problem processing your file. We found at least one wheel file wheelhouse/numpy-1.12.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl specified a platform that...
0
1
290
0
41,834,290
0
0
0
0
1
true
0
2017-01-24T15:06:00.000
0
1
0
Stop images produced by pymc.Matplot.plot being saved
41,831,529
1.2
python,pymc
There is currently no way to plot them without being saved to disk. I would recommend only plotting a few diagnostic parameters, and specifying plot=False for the others. That would at least cut down on the volume of plots being generated. There probably should be a saveplot argument, however, I agree.
I recently started experimenting with pymc and only just realised that the images being produced by pymc.Matplot.plot, which I use to diagnose whether the MCMC has performed well, are being saved to disk. This results in images appearing wherever I am running my scripts from, and it is time consuming to clear them up. ...
0
1
60
0
41,849,001
0
1
0
0
1
false
0
2017-01-24T17:04:00.000
0
1
0
ipython can't load module when using magic %load, but succeed when loading interactively
41,834,141
0
python-3.x,ipython
Shame on me, it was just a typo: the correct module is named sklearn.ensemble.
When I launch ipython -i script_name or load the script with %load, it fails loading sklearn.ensamble. But it succeed in loading and I am able to use it when I launch ipython alone and then from sklearn.ensamble import *. Why?
0
1
21
0
41,836,728
0
0
0
0
1
true
1
2017-01-24T18:50:00.000
0
1
0
I have a DataFrame with some values of np.inf. How does .corr() work?
41,836,727
1.2
python,pandas
np.inf is treated the same way np.NaN. I replaced the all the values of np.inf with np.NaN and the results were exactly the same. If there is some subtle differences, please let me know. I was looking for an answer on this and couldn't find one anywhere so I figured I would post this here.
What will happen when I use df.corr()? Will np.inf effect my results some how?
0
1
72
0
41,839,189
0
0
0
0
1
false
0
2017-01-24T20:48:00.000
0
1
0
can't install model_selection on python 2.7.12
41,838,726
0
python-2.7,module,scikit-learn,grid-search
I just found the answer. the 0.18 sklearn has seen a number of updates. you may update your sklearn by typing "conda update scikit-learn" in your windows command line. If it still didn't code you might want to update your conda/Anaconda as well: "conda update conda" and "conda update Anaconda"
I try to run this line: from sklearn.model_selection import GridSearchCV but I get an ImportError (i.e. No module named model_selection) although I have installed sklearn and I can import other packages. here is my python version : 2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit ...
0
1
238
0
41,846,775
0
0
0
0
1
false
0
2017-01-25T08:26:00.000
0
2
0
Pass data between MATLAB R2011b and Python (Windows 7)
41,846,630
0
python,matlab,python-2.7,parameter-passing,language-interoperability
Depending on what you want to do and your type of data, you could write it to a file and read from it in the other language. You could use numpy.fromfile for that in the python part.
Hello Friends, I want to pass data between MATLAB and Python, One way would be to use matlab.engine in Python or Call Python Libraries from MATLAB. But this approach requires MATLAB 2014 Version unlike mine which is MATLAB R2011b. So I request you to please guide for a different Approach in order to comunicate between ...
0
1
110
0
41,851,421
0
1
0
0
1
false
0
2017-01-25T11:22:00.000
0
2
0
How class_weight emphasis a class in in scikit-learn
41,850,349
0
python,scikit-learn
I'm not sure if there is a single method of treating class_weight for all the algorithms. The way Decision Trees (and Forests) deals with this is by modifying the weights of each sample according to its class. You can consider weighting samples as a more general case of oversampling all the minority class samples (usin...
I would like to know how scikit-learn put more emphasis on a class when we use the parameter class_weight. Is it an oversampling of the minority sampling ?
0
1
420
0
41,864,069
0
0
0
0
1
false
15
2017-01-25T23:51:00.000
5
7
0
Is there a built-in KL divergence loss function in TensorFlow?
41,863,814
0.141893
python,statistics,tensorflow,entropy
I'm not sure why it's not implemented, but perhaps there is a workaround. The KL divergence is defined as: KL(prob_a, prob_b) = Sum(prob_a * log(prob_a/prob_b)) The cross entropy H, on the other hand, is defined as: H(prob_a, prob_b) = -Sum(prob_a * log(prob_b)) So, if you create a variable y = prob_a/prob_b, you could...
I have two tensors, prob_a and prob_b with shape [None, 1000], and I want to compute the KL divergence from prob_a to prob_b. Is there a built-in function for this in TensorFlow? I tried using tf.contrib.distributions.kl(prob_a, prob_b), but it gives: NotImplementedError: No KL(dist_a || dist_b) registered for dist_a...
0
1
17,205
0
41,911,859
0
0
0
0
2
false
2
2017-01-27T16:49:00.000
2
2
0
Real time data using sklearn
41,899,011
0.197375
python,machine-learning,scikit-learn,real-time
With most algorithms training is slow and predicting is fast. Therefore it is better to train offline using training data; and then use the trained model to predict each new case in real time. Obviously you might decide to train again later if you acquire more/better data. However there is little benefit in retraining ...
I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datase...
0
1
2,016
0
43,380,457
0
0
0
0
2
false
2
2017-01-27T16:49:00.000
2
2
0
Real time data using sklearn
41,899,011
0.197375
python,machine-learning,scikit-learn,real-time
It is feasible to train the model from a static dataset and predict classifications for incoming data with the model. Retraining the model with each new set of patient data not so much. Also breaks the train/test mode of testing a ML model. Trained models can be saved to file and imported in the code used for real time...
I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datase...
0
1
2,016
0
41,908,820
0
1
0
0
1
false
1
2017-01-27T17:41:00.000
0
2
0
sample entries from a matrix while satisfying a given requirement
41,899,930
0
python,algorithm
We can do it in the following manner: first get all the (x,y) tuples (indices) of the matrix A where A[x,y]=1. Let there be k such indices. Now roll a k-sided unbiased dice M times (we can simulate by using function randint(1,k) drawing sample from uniform distribution). If you want samples with replacements (same posi...
There is a 0-1 matrix, I need to sample M different entries of 1 value from this matrix. Are there any efficient Python implements for this kind of requirement? A baseline approach is having M iterations, during each iteration, randomly sample 1, if it is of value 1, then keep it and save its position, otherwise, conti...
0
1
49
0
41,947,311
0
0
0
0
1
false
0
2017-01-30T22:45:00.000
1
3
0
Pandas plot ONLY overlap between multiple data frames
41,946,758
0.066568
python,pandas,matplotlib,ipython,jupyter-notebook
To plot only the portion of df1 whose index lies within the index range of df2, you could do something like this: ax = df1.loc[df2.index.min():df2.index.max()].plot() There may be other ways to do it, but that's the one that occurs to me first. Good luck!
Found on S.O. the following solution to plot multiple data frames: ax = df1.plot() df2.plot(ax=ax) But what if I only want to plot where they overlap? Say that df1 index are timestamps that spans 24 hour and df2 index also are timestamps that spans 12 hours within the 24 hours of df1 (but not exactly the same as df1...
0
1
3,052
0
63,090,168
0
1
0
0
1
false
1
2017-01-31T06:38:00.000
0
2
0
Converting scientific notation in Series to commas and thousands separator
41,951,160
0
python,python-3.x,pandas
You can also use SeriesName.map('{:,}'.format)
I have a Series with Name as the index and a number in scientific notation such as 3.176154e+08. How can I convert this number to 317,615,384.61538464 with a thousands separator? I tried: format(s, ',') But it returns TypeError: non-empty format string passed to object.format There are no NaNs in the data. Thanks for ...
0
1
6,217
0
56,069,261
0
0
0
0
1
false
21
2017-01-31T13:14:00.000
0
4
0
Pruning in Keras
41,958,566
0
python-3.x,neural-network,keras,pruning
If you set an individual weight to zero won't that prevent it from being updated during back propagation? Shouldn't thatv weight remain zero from one epoch to the next? That's why you set the initial weights to nonzero values before training. If you want to "remove" an entire node, just set all of the weights on that ...
I'm trying to design a neural network using Keras with priority on prediction performance, and I cannot get sufficiently high accuracy by further reducing the number of layers and nodes per layer. I have noticed that very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in h...
0
1
9,448
0
41,967,371
0
0
0
0
1
false
1
2017-01-31T15:48:00.000
2
1
0
scaling glyphs in data units (not screen units)
41,961,680
0.379949
python-3.x,bokeh
Markers (e.g. Triangle) are really meant for use as "scatter" plot markers. With the exception of Circle, they only accept screen dimensions (pixles) for size. If you need triangular regions that scale with data space range changes, your options are to use patch or patches to draw the triangles as polygons (either one ...
I am plotting both wedges and triangles on the same figure. The wedges scale up as I zoom in (I like this), but the triangles do not (I wish they did), presumably because wedges are sized in data units (via radius property) and traingles are in screen units (via size property). Is it possible to switch the triangles t...
0
1
49
0
41,965,766
0
0
0
0
1
true
5
2017-01-31T18:48:00.000
3
1
1
Repeated task execution using the distributed Dask scheduler
41,965,253
1.2
python,dask
Correct, if a task is allocated to one worker and another worker becomes free it may choose to steal excess tasks from its peers. There is a chance that it will steal a task that has just started to run, in which case the task will run twice. The clean way to handle this problem is to ensure that your tasks are idempo...
I'm using the Dask distributed scheduler, running a scheduler and 5 workers locally. I submit a list of delayed() tasks to compute(). When the number of tasks is say 20 (a number >> than the number of workers) and each task takes say at least 15 secs, the scheduler starts rerunning some of the tasks (or executes them ...
0
1
864
0
41,968,970
0
0
0
0
1
true
0
2017-01-31T20:49:00.000
0
2
0
Database design for complex music analytics
41,967,226
1.2
python,data-modeling
I think that the hard part of the problem is that you'll probably want the stimulus (tune) data formatted differently for different queries. What I would think about doing is making a relatively simple data structure for your stimuli (tunes) and add a unique identifier to each unique tune. You could probably get away w...
I'm a researcher studying animal behavior, and I'm trying to figure out the best way to structure my data. I present short musical tunes to animals and record their responses. The Data Each tune consists of 1-10 notes randomly chosen from major + minor scales spanning several octaves. Each note is played for a fixed du...
0
1
65
0
47,134,942
0
0
0
0
1
false
1
2017-02-01T01:01:00.000
0
1
0
How to deal with name column in Scikitlearn randomforest classifier. python 3
41,970,230
0
python,scikit-learn,random-forest,countvectorizer
Well name is a unique thing and an id kind of use sklearn.preprocessing.LabelEncoder after storing the original to a separate list. It will automatically convert the names to a serial number. Also, note if it's a unique thing you should remove names during predicting.
I have a dataframe containing 13 columns. Among 13 three columns are string. One string column is simple male and female which I converted to 1 and 0 using pd.get_dummies() 2nd column contains three different types of string so, easily converted to array using from sklearn.feature_extraction.text import CountVector...
0
1
498
0
42,181,934
0
0
0
0
1
true
1
2017-02-02T16:38:00.000
1
1
0
Single object detection keras
42,007,591
1.2
python,keras,object-detection,training-data
Your task is a so-called binary classification. Make sure, that your final layer has got only one neuron (e.g. for Sequential model model.add(Dense(1, ... other parameters ... ))) and use the binary_crossentropy as loss function. Hope this helps.
I want to make a system that recognizes a single object using keras. In my case I will be detecting car wheels. How do I train my system just for 1 object? I did classification task before using cats and dogs, but now its a completely different task. Do I still "classify", with class 0= wheels, class = 1 non wheels(jus...
0
1
907
0
42,027,081
0
0
0
0
1
false
1
2017-02-03T14:04:00.000
-1
1
0
Finding out installed packages in Spark
42,026,072
-0.197375
python,apache-spark,pyspark
include the package anyway to be sure eg via spak submit: $SPARK_HOME/bin/spark-shell --packages graphframes:graphframes:0.1.0-spark1.6
I have been at it for some time and tried everything. I need to find out whether the package GraphFrames is included in the spark installation at my office cluster. I am using Spark version 1.5.0. Is there a way to list all the installed packages in Spark?
0
1
1,593
0
42,029,561
1
0
0
0
1
false
1
2017-02-03T16:51:00.000
0
1
0
Python: Shortest Weighted Path and Least Number of Edges
42,029,159
0
python,algorithm,graph
Instead of using floating points for weights, use tuples (weight, number_of_edges) with pairwise addition. The lowest weight path using these new weights will have the lowest weight, and in the case of a tie, be the shortest path. To define these weights I would make them a subclass of tuple with __add__ redefined. T...
I am using a networkx weighted graph in order to model a transportation network. I am attempting to find the shortest path in terms of the sum of weighted edges. I have used Dijkstra path in order to find this path. My problem occurs when there is a tie in terms of weighted edges. When this occurs I would always like t...
0
1
711
0
42,040,862
0
1
0
0
1
false
1
2017-02-04T13:15:00.000
-7
2
0
Why doesn’t 'array' have an in-place sort like list does?
42,040,813
-1
python,arrays,python-3.x,sorting
A list is a data structure that has characteristics which make it easy to do some things. An array is a very well understood standard data structure and isn't optimized for sorting. An array is basically a standard way of storing the product of sets of data. There hasn't ever been a notion of sorting it.
Why doesn’t the array class have a .sort()? I don't know how to sort an array directly. The class array.array is a packed list which looks like a C array. I want to use it because only numbers are needed in my case, but I need to be able to sort it. Is there some way to do that efficiently?
0
1
144
0
42,046,236
0
0
0
0
1
true
0
2017-02-04T13:50:00.000
1
1
0
numpy irfft by amplitude and phase spectrum
42,041,151
1.2
python,numpy,fft,ifft
If you have amplitude and phase vectors for a spectrum, you can convert them to a complex (IQ or Re,Im) vector by multiplying the cosine and sine of each phase value by its associated amplitude value (for each FFT bin with a non-zero amplitude, or vector-wise).
How to compute irfft if I have only amplitude and phase spectrum of signal? In numpy docs I've found only irfft which use fourier coefficients for this transformation.
0
1
411
0
42,046,234
0
0
0
0
1
false
0
2017-02-04T22:07:00.000
3
3
0
Define logarithmic power for NumPy
42,046,184
0.197375
python,numpy,logarithm
You can use ** for exponentiation: np.log(x/y) ** 2
I am trying to define ln2(x/y) in Python, within NumPy. I can define ln(x) as np.log(x) but how I can define ln2(x/y)? ln2(x/y); natural logarithm to the power of 2
0
1
235
0
42,048,793
0
0
0
0
2
false
0
2017-02-05T05:01:00.000
1
2
0
Text Classification - Label Pre Process
42,048,725
0.099668
python,r,nlp,preprocessor,text-classification
Manual annotation is a good option since you have a very good idea of an ideal document corresponding to your label. However, with the large dataset size, I would recommend that you fit an LDA to the documents and look at the topics generated, this will give you a good idea of labels that you can use for text classific...
I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what...
0
1
501
0
42,063,332
0
0
0
0
2
true
0
2017-02-05T05:01:00.000
1
2
0
Text Classification - Label Pre Process
42,048,725
1.2
python,r,nlp,preprocessor,text-classification
Text Pre-Processing: Convert all text to lower case, tokenize into unigrams, remove all stop words, use stemmer to normalize a token to it's base word. There are 2 approaches I can think of for classifying the documents a.k.a. the free text you spoke about. Each free text is a document: 1) Supervised classification Ta...
I have a data set of 1M+ observations of customer interactions with a call center. The text is free text written by the representative taking the call. The text is not well formatted nor is it close to being grammatically correct (a lot of short hand). None of the free text has a label on the data as I do not know what...
0
1
501
0
42,057,766
0
0
0
0
1
false
1
2017-02-05T21:42:00.000
1
1
0
How can we use MNIST dataset one class as an input using tensorflow?
42,057,667
0.197375
python,tensorflow
Not sure exactly what you are asking. I will answer about what I understood. In case you want to predict only one class for example digit 5 and rest of the digits. Then first you need to label your vectors in such a way that maybe label all those vectors as 'one' which has ground truth 5 and 'zero' to those vectors who...
I want to find the accuracy of one class in MMNIST dataset .So how can i split it on the basis of classes?
0
1
567
0
59,105,698
0
0
0
0
1
false
2
2017-02-06T01:03:00.000
0
1
0
how to predict only one class in tensorflow
42,059,103
0
python,tensorflow,one-hot-encoding
while preparing the data you can use numpy to set all the data points in class 5 as 1 and the others will be set to as 0 using . arr = np.where(arr!=5,arr,0) arr = np.where(arr=5,arr,1) and then you can create a binary classifier using Tensorflow to classifiy them while using a binary_crossentropy loss to optimize the...
In case you want to predict only one class. Then first you need to label your vectors in such a way that maybe label all those vectors as 'one' which has ground truth 5 and 'zero' to those vectors whose ground truth is not 5. how can I implement this in tensorflow using puthon
0
1
164
0
42,061,940
0
0
0
0
1
true
0
2017-02-06T06:39:00.000
1
1
0
Statistics: How to identify dependent and independent variables in my dataset?
42,061,730
1.2
python,statistics
In any given data set, labeling variables as dependent or independent is arbitrary -- there is no fundamental reason that one column should be independent and another should be dependent. That said, typically it's conventional to say that "causes" are independent variables and "effects" are dependent variables. But thi...
I am a little bit confused in the classification of dependent and independent variables in my dataset, on which I need to make a model for prediction. Any insights or how-to's would be very helpful here. Suppose my dataset have 40 variables. In this case, it would be very difficult to classify the variables as independ...
0
1
2,782
0
42,104,038
0
0
0
0
1
false
1
2017-02-06T14:28:00.000
0
1
0
Combining different names in a database
42,070,138
0
python,regex,database,chess
@Ev.Kounis solution is simple and effective, I've used it myself successfully. Most of the time, we only care the top chess players. That's what I did: Created a simple function like @Ev.Jounis suggests I also scanned the player rating. For example, there were several "Carlsen" players in my database, but they wouldn'...
I am studying a chess database with more than one million games. I am interested in identifying some characteristics of different players. The problem I have is that each single player appears with several identifications. For example, "Carlsen, M.", "Carlsen, Ma", "Carlsen, Magnus" and "Magnus Carlsen" all correspon...
0
1
78
0
69,476,803
0
0
0
0
3
false
5
2017-02-07T02:19:00.000
0
6
0
What is the best way to build and expose a Machine Learning model REST api?
42,080,598
0
java,python,rest,machine-learning,scikit-learn
I have been experimenting with this same task and would like to add another option, not using a REST API: The format of the Apache Spark models is compatible in both the Python and Jave implementations of the framework. So, you could train and build your model in Python (using PySpark), export, and import on the Java s...
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Mode...
1
1
5,677
0
46,918,647
0
0
0
0
3
false
5
2017-02-07T02:19:00.000
0
6
0
What is the best way to build and expose a Machine Learning model REST api?
42,080,598
0
java,python,rest,machine-learning,scikit-learn
I'm using Node.js as my rest service and I just call out to the system to interact with my python that holds the stored model. You could always do that if you are more comfortable writing your services in JAVA, just make a call to Runtime exec or use ProcessBuilder to call the python script and get the reply back.
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Mode...
1
1
5,677
0
42,127,532
0
0
0
0
3
false
5
2017-02-07T02:19:00.000
0
6
0
What is the best way to build and expose a Machine Learning model REST api?
42,080,598
0
java,python,rest,machine-learning,scikit-learn
Well it depends the situation you use python for ML. For classification models like randomforest,use your train dataset to built tree structures and export as nested dict.Whatever the language you uesd,transform the model object to a kind of data structure then you can ues it anywhere. BUT if your situation is a large ...
I have been working on designing REST api using springframework and deploying them on web servers like Tomcat. I have also worked on building Machine Learning model and use the model to make prediction using sklearn in Python. Now I have a use case where in I want to expose a REST api which builds Machine Learning Mode...
1
1
5,677
0
42,081,531
0
1
0
0
1
false
0
2017-02-07T03:27:00.000
0
2
0
Importing images Azure Machine Learning Studio
42,081,202
0
python,azure,azure-blob-storage,azure-machine-learning-studio
yes, you should be able to do that using Python. At the very least, straight REST calls should work.
Is it possible to import images from your Azure storage account from within a Python script module as opposed to using the Import Images module that Azure ML Studio provides. Ideally I would like to use cv2.imread(). I only want to read in grayscale data but the Import Images module reads in RGB. Can I use the BlockBl...
0
1
775
0
42,081,957
0
0
0
0
1
true
1
2017-02-07T04:33:00.000
2
1
0
Pandas dataframe: Listing amount of people per gender in each major
42,081,790
1.2
python,pandas,dataframe
altering @VaishaliGarg's answer a little, you can use df.groupby(['Qgender','Qmajor']).count() Also if needed a dataframe out of it, we need to add .reset_index() since it would be a groupbyObject. df.groupby(['Qgender','Qmajor']).count().reset_index()
Sorry about the vague title, but I didn't know how to word it. So I have a pandas dataframe with 3 columns and any amount of rows. The first column is a person's name, the second column is their major (six possible majors, always written the same), and the third column is their gender (always 'Male' or 'Female'). I w...
0
1
5,858
0
44,600,199
0
0
0
0
1
false
1
2017-02-07T09:32:00.000
1
1
0
pycharm cannot run script but can debug it
42,086,214
0.197375
python,tensorflow,pycharm
I have run into a similar error running caffe on pycharm. I think it's because of the version of Python. When I installed Python 2.7.13, it worked!
When I ran a script in PyCharm, it exited with: I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow/stream_executor/dso_loader.cc:128] successfully opened...
0
1
1,056
0
42,093,881
0
0
0
0
2
false
7
2017-02-07T14:32:00.000
6
3
0
Accuracy difference on normalization in KNN
42,092,448
1
python,machine-learning,scikit-learn,knn
That's a pretty good question, and is unexpected at first glance because usually a normalization will help a KNN classifier do better. Generally, good KNN performance usually requires preprocessing of data to make all variables similarly scaled and centered. Otherwise KNN will be often be inappropriately dominated by...
I had trained my model on KNN classification algorithm , and I was getting around 97% accuracy. However,I later noticed that I had missed out to normalise my data and I normalised my data and retrained my model, now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is...
0
1
8,817
0
42,093,691
0
0
0
0
2
false
7
2017-02-07T14:32:00.000
2
3
0
Accuracy difference on normalization in KNN
42,092,448
0.132549
python,machine-learning,scikit-learn,knn
If you use normalized feature vectors, the distances between your data points are likely to be different than when you used unnormalized features, particularly when the range of the features are different. Since kNN typically uses euclidian distance to find k nearest points from any given point, using normalized featur...
I had trained my model on KNN classification algorithm , and I was getting around 97% accuracy. However,I later noticed that I had missed out to normalise my data and I normalised my data and retrained my model, now I am getting an accuracy of only 87%. What could be the reason? And should I stick to using data that is...
0
1
8,817
0
42,110,000
0
0
0
0
1
false
0
2017-02-08T05:55:00.000
0
1
0
Integrate Spark SQL using Pyspark with python interpreter and pandas and Ipython notebook
42,105,716
0
python-3.x,pandas,matplotlib,pyspark,apache-spark-sql
Check out the hortonworks sandbox. It's a virtual machine with hadoop and all its components - such as spark ad hdfs - installed and configured. In a addition to that, there is a note book called Zeppelin notebook allowing you to write script in python or other languages. You're also free to install python libs and ac...
I want to know the which interpreter is good for Python to use features like Numpy, pandas and matplotlib with the feature of integrated Ipython note book. Also I want to integrate this with Apache Spark. Is it possible? My aim is I need to load different tables from different sources like Oracle, MS SQL, and HDFS fil...
0
1
186
0
42,115,789
0
0
0
0
1
false
0
2017-02-08T08:40:00.000
0
2
0
How to know the factor by which a feature affects a model's prediction
42,108,324
0
python,machine-learning,scikit-learn,decision-tree
In general - no. Decision trees work differently that that. For example it could have a rule under the hood that if feature X > 100 OR X < 10 and Y = 'some value' than answer is Yes, if 50 < X < 70 - answer is No etc. In the instance of decision tree you may want to visualize its results and analyse the rules. With RF ...
I have trained my model on a data set and i used decision trees to train my model and it has 3 output classes - Yes,Done and No , and I got to know the feature that are most decisive in making a decision by checking feature importance of the classifier. I am using python and sklearn as my ML library. Now that I have fo...
0
1
998
0
54,422,825
0
0
0
0
1
false
1
2017-02-08T10:16:00.000
0
3
0
How will I integrate MATLAB to TensorFlow?
42,110,293
0
python,matlab,tensorflow
I used a mex function for inference via the C++ API of TensorFlow once. That's pretty straight forward. I had to link the required TensorFlow libs statically from source though.
I want to integrate MATLAB and TensorFlow, although I can run TensorFlow native in python but I am required to use MATLAB for image processing. Can someone please help me out with this one?
0
1
1,628
0
42,120,658
0
0
0
0
1
false
1
2017-02-08T15:54:00.000
1
1
0
Keras with Theano BackEnd
42,117,777
0.197375
python,machine-learning,neural-network,theano,keras
In Keras < 1.0 (I believe), one would pass the show_accuracy argument to model.fit in order to display the accuracy during training. This method has been replaced by metrics, as you can now define custom metrics that can help you during training. One of the metrics is of course, accuracy. The changes to your code to ke...
im new to Keras in python, i got this warning message when after executing my code. I tried to search on google, but still didnt manage to solve this problem. Thank you in advance. UserWarning: he "show_accuracy" argument is deprecated, instead you should pass the "accurac " metric to the model at compile time: mo...
0
1
223
0
46,009,804
0
0
0
0
1
false
7
2017-02-08T16:42:00.000
0
2
0
How to retrieve the filename of an image with keras flow_from_directory shuffled method?
42,118,850
0
python,machine-learning,neural-network,generator,keras
I think the only option here is to NOT shuffle the files. I have been wondering this myself and this is the only thing I could find in the docs. Seems odd and not correct...
If I don't shuffle my files, I can get the file names with generator.filenames. But when the generator shuffles the images, filenames isn't shuffled, so I don't know how to get the file names back.
0
1
1,974
0
46,544,816
0
0
0
0
1
false
8
2017-02-09T07:18:00.000
5
2
0
Batch-major vs time-major LSTM
42,130,491
0.462117
python,tensorflow,deep-learning,lstm,recurrent-neural-network
There is no difference in what the model learns. At timestep t, RNNs need results from t-1, therefore we need to compute things time-major. If time_major=False, TensorFlow transposes batch of sequences from (batch_size, max_sequence_length) to (max_sequence_length, batch_size)*. It processes the transposed batch one r...
Do RNNs learn different dependency patterns when the input is batch-major as opposed to time-major?
0
1
4,154
0
42,133,075
0
1
0
0
1
true
2
2017-02-09T08:06:00.000
2
2
0
Django JSON file to Pandas Dataframe
42,131,205
1.2
python,json,django,pandas
You can also use pd.DataFrame.from_records() when you have json or dictonary df = pd.DataFrame.from_records([ json ]) OR df = pd.DataFrame.from_records([ dict. ]) or you need to provide iterables for pandas dataframe: e.g. df = pd.DataFrame({'column_1':[ values ],'column_2':[ values ]})
I have a simple json in Django. I catch the file with this command data = request.body and i want to convert it to pandas datarame JSON: { "username":"John", "subject":"i'm good boy", "country":"UK","age":25} I already tried pandas read_json method and json.loads from json library but it didn't work.
1
1
820
0
42,145,160
0
0
0
0
1
false
3
2017-02-09T19:17:00.000
0
2
0
Why do bokeh tutorials use explicit imports rather than aliases?
42,145,097
0
python,bokeh
Importing individual names from a library isn't really "contamination". What you want to avoid is doing from somelibrary import *. This is different because you don't know which names will be imported, so you can't be sure there won't be a name clash. In contrast, doing from numpy import linspace just creates one nam...
As I was checking out the Bokeh package I noticed that the tutorials use explicit import statements like from bokeh.plotting import figure and from numpy import linspace. I usually try to avoid these in favor of, e.g., import numpy as np, import matplotlib.pyplot as plt. I thought this is considered good practice as it...
0
1
545
0
42,148,230
0
0
0
0
1
true
2
2017-02-09T22:30:00.000
2
2
0
Using scipy routines outside of the GIL
42,148,101
1.2
python,scipy,cython,shared-memory,python-multithreading
Not safe. If CPython could safely run that kind of code without the GIL, we wouldn't have the GIL in the first place.
This is sort of a general question related to a specific implementation I have in mind, about whether it's safe to use python routines designed for use inside the GIL in a shared memory environment. Specifically what I'd like to do is use scipy.optimize.curve_fit on a large array inside a cython function. The data can ...
0
1
312
0
50,688,040
0
0
0
0
1
false
2
2017-02-10T00:59:00.000
0
1
0
How to plot data from different runs on one figure in Spyder
42,149,777
0
ipython,spyder
One way that I have figured out is to define a dictionary and then record the results you want individually. Apparently, this is not the most efficient way, but it works.
What I meant by the title is that I have two different programs and I want to plot data on one figure. In Matlab there is this definition for figure handle which eventually points to a specific plot. Let's say if I call figure(1) the first time, I get a figure named ''1'' created. The second I call figure(1), instead o...
0
1
54
0
56,578,297
0
0
0
0
1
false
18
2017-02-10T10:23:00.000
-1
6
0
How to update model parameters with accumulated gradients?
42,156,957
-0.033321
python,tensorflow,gradient
You can use Pytorch instead of Tensorflow as it allows the user to accumulate gradients during training
I'm using TensorFlow to build a deep learning model. And new to TensorFlow. Due to some reason, my model has limited batch size, then this limited batch-size will make the model has a high variance. So, I want to use some trick to make the batch size larger. My idea is to store the gradients of each mini-batch, for exa...
0
1
8,795
0
42,171,552
0
0
0
0
1
true
90
2017-02-11T02:24:00.000
183
5
0
Get current number of partitions of a DataFrame
42,171,499
1.2
python,scala,dataframe,apache-spark,apache-spark-sql
You need to call getNumPartitions() on the DataFrame's underlying RDD, e.g., df.rdd.getNumPartitions(). In the case of Scala, this is a parameterless method: df.rdd.getNumPartitions.
Is there any way to get the current number of partitions of a DataFrame? I checked the DataFrame javadoc (spark 1.6) and didn't found a method for that, or am I just missed it? (In case of JavaRDD there's a getNumPartitions() method.)
0
1
144,311
0
42,195,147
0
0
0
0
1
false
0
2017-02-12T22:00:00.000
0
3
0
Can't import legacy_seq2seq from tensorflow.contrib
42,193,779
0
python,tensorflow
I'm using tf.nn.seq2seq.sequence_loss_by_example - they've moved a lot of stuff from tf.contrib to main packages. This is because they updated their code, but not their examples - if you open github - you'll see a lot of requests to fix issues related to that!
I'm using tensorflow 0.12.1 on Python 3.5.2 on a Windows 10 64bit computer. For some reason, whenever I try to import legacy_seq2seq from tensorflow.contrib, it always occurs the error: ImportError: cannot import name 'legacy_seq2seq'. What causes the problem and how can I fix it?
0
1
2,843
0
42,225,141
0
0
0
0
1
false
0
2017-02-13T12:54:00.000
0
1
0
How to divide map-reduce tasks?
42,204,582
0
python,hadoop,mapreduce,hadoop-streaming
So, your have a table with 200 columns(say T), a separate list of entries(say L) to be picked from T and with the last 24-hours(from the timestamp in T). MapReduce, mapper does give entries from T sequentially. Before your mapper gets into map(), I.e in setup() put the block of code to read from the L and make it handy...
I have a table containing 200 columns out of which I need around 50 column mentioned in a list, and rows of last 24 months according to column 'timestamp'. I'm confused what comes under mapper and what under reducer? As it is just transformation, will it only have mapper phase, or filtering of rows to last 24 months wi...
0
1
165
0
60,818,333
0
0
0
0
2
false
11
2017-02-13T15:06:00.000
0
4
0
A simple way to insert a table of contents in a multiple page pdf generated using PdfPages
42,207,211
0
python,python-3.x,pandas,pdf,matplotlib
What I do sometimes is generate a HTML file with my tables as I want and after I convert to PDF file. I know this is a little harder but I can control any element on my documents. Logically, this is not a good solution if you want write many files. Another good solution is make PDF from Jupyter Notebook.
I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given da...
0
1
1,445
0
51,373,915
0
0
0
0
2
false
11
2017-02-13T15:06:00.000
0
4
0
A simple way to insert a table of contents in a multiple page pdf generated using PdfPages
42,207,211
0
python,python-3.x,pandas,pdf,matplotlib
It sounds like you want to generate fig{1, 2, ..., N}.pdf and then generate a LaTeX source file which mentions an \includegraphics for each of them, and produces a ToC. If you do scratch this particular itch, consider packaging it up for others to use, as it is a pretty generic use case.
I am using Pandas to read data from some datafiles and generate a multiple-page pdf using PdfPages, in which each page contains the matplotlib figures from one datafile. It would be nice to be able to get a linked table of contents or bookmarks at each page, so that i can easily find figures corresponding to a given da...
0
1
1,445
0
42,253,328
0
0
0
0
1
true
2
2017-02-14T11:11:00.000
0
1
0
Tensorflow import error after python update
42,224,655
1.2
python-2.7,import,tensorflow,protocol-buffers,importerror
I had a similar problem. Make sure that pip and python has the same path when typing which pipand which python. If they differ, change your ~.bash_profile so that the python path match the pip path, and use source ~\.bash_profile. If that doesn't work, I would try to reinstall pip and tensorflow. I installed pip using ...
I am using tensorflow with python 2.7. However, after updating python 2.7.10 to 2.7.13, I get an import error with tensorflow File "", line 1, in File "/Users/usrname/Library/Python/2.7/lib/python/site- packages/tensorflow/__init__.py", line 24, in from tensorflow.python import * File ...
0
1
781
0
43,529,327
0
1
0
0
1
false
1
2017-02-14T17:11:00.000
0
3
0
Import cv2: ImportError: DLL load failed: windows 7 Anaconda 4.3.0 (64-bit) Python 3.6.0
42,232,177
0
python,opencv,dll
use python 2.7.1.0 instead of python 3, cv2 worked and dll load error fixed after using python 2.7
I am using Anaconda 4.3.0 (64-bit) Python 3.6.0 on windows 7. I am getting the error "ImportError: DLL load failed: The specified module could not be found." for importing the package import cv2. I have downloaded the OpenCV package and copy paste cv2.pyd into the Anaconda site package and updated my system path to po...
0
1
2,068
0
42,255,630
0
0
0
0
1
false
0
2017-02-14T17:27:00.000
0
1
0
Using external pose estimates to improve stationary marker contour tracking
42,232,500
0
python,computer-vision,opencv3.0,robotics,pose-estimation
The obvious advantage of having a pose estimate is that it restricts the image region for searching your target. Next, if your problem is occlusion, you then need to model that explicitly, rather than just try to paper it over with image processing tricks: add to your detector objective function a term that expresses ...
Suppose that I have an array of sensors that allows me to come up with an estimate of my pose relative to some fixed rectangular marker. I thus have an estimate as to what the contour of the marker will look like in the image from the camera. How might I use this to better detect contours? The problem that I'm trying t...
0
1
68
0
42,240,221
0
1
0
0
1
false
1
2017-02-15T03:22:00.000
1
2
0
Add path in Python to a Notebook
42,240,124
0.099668
python,path,ipython-notebook
It's easy question, modify \C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code to C:\\Users\\User\\Desktop\\A Student's Guide to Python for Physical Modeling by KInder and Nelson\\code.
What am I doing wrong here? I cannot add a path to my Jupyter Notebook. I am stuck. Any of my attempts did not work at all. home_dir="\C:\Users\User\Desktop\" data_dir=home_dir + "\C:\Users\User\Desktop\A Student's Guide to Python for Physical Modeling by KInder and Nelson\code" data_set=np.loadtxt(data_dir + "HIVseri...
0
1
541
0
42,244,824
0
0
0
0
1
true
7
2017-02-15T04:56:00.000
6
1
0
TensorFlow - Text recognition in image
42,241,038
1.2
python,tensorflow,deep-learning,text-recognition
The difficulty is that you don't know where the text is. The solution is, given an image, you need to use a sliding window to crop different part of the image, then use a classifier to decide if there are texts in the cropped area. If so, use your character/digit recognizer to tell which characters/digits they really a...
I am new to TensorFlow and to Deep Learning. I am trying to recognize text in naturel scene images. I used to work with an OCR but I would like to use Deep Learning. The text has always the same format : ABC-DEF 88:88. What I have done is recognize every character/digit. It means that I cropped the image around every ...
0
1
13,369
0
45,752,337
0
0
0
0
1
false
4
2017-02-15T15:49:00.000
0
2
0
unable to import pyspark statistics module
42,253,981
0
python,pyspark
I have the same problem. The Python file stat.py does not seem to be in Spark 2.1.x but in Spark 2.2.x. So it seems that you need to upgrade Spark with its updated pyspark (but Zeppelin 0.7.x does not seem to work with Spark 2.2.x).
Python 2.7, Apache Spark 2.1.0, Ubuntu 14.04 In the pyspark shell I'm getting the following error: >>> from pyspark.mllib.stat import Statistics Traceback (most recent call last): File "", line 1, in ImportError: No module named stat Solution ? similarly >>> from pyspark.mllib.linalg import SparseVector Tracebac...
0
1
1,550
0
42,260,728
0
0
0
0
1
true
0
2017-02-15T21:36:00.000
3
2
0
Writing data into a CSV within a loop Python
42,260,538
1.2
python,loops,csv,time,export-to-csv
How do I best write the data of the polygons that I create into the csv? Do I open the csv at the beginning and then write each row into the file, as I iterate over classes and images? I suspect most folks would gather the data in a list or perhaps dictionary and then write it all out at the end. But if you don't need...
I am currently working on the Dstl satellite kaggle challenge. There I need to create a submission file that is in csv format. Each row in the csv contains: Image ID, polygon class (1-10), Polygons Polygons are a very long entry with starts and ends and starts etc. The polygons are created with an algorithm, for one c...
0
1
705
0
42,642,759
0
1
0
0
2
false
14
2017-02-16T10:07:00.000
0
7
0
How do I resolve these tensorflow warnings?
42,270,739
0
python,tensorflow
Those are simply warnings. They are just informing you if you build TensorFlow from source it can be faster on your machine. Those instructions are not enabled by default on the builds available I think to be compatible with more CPUs as possible.
I just installed Tensorflow 1.0.0 using pip. When running, I get warnings like the one shown below. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. I get 5 more similar warni...
0
1
10,737
0
42,539,825
0
1
0
0
2
false
14
2017-02-16T10:07:00.000
0
7
0
How do I resolve these tensorflow warnings?
42,270,739
0
python,tensorflow
It would seem that the PIP build for the GPU is bad as well as I get the warnings with the GPU version and the GPU installed...
I just installed Tensorflow 1.0.0 using pip. When running, I get warnings like the one shown below. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. I get 5 more similar warni...
0
1
10,737
0
42,284,733
0
0
0
0
1
false
1
2017-02-16T13:02:00.000
2
1
0
How to analyse 3d mesh data(in .stl) by TensorFlow
42,274,756
0.379949
python,machine-learning,3d,tensorflow,scikit-learn
You have to first extract "features" out of your dataset. These are fixed-dimension vectors. Then you have to define labels which define the prediction. Then, you have to define a loss function and a neural network. Put that all together and you can train a classifier. In your example, you would first need to extract a...
I try to write an script in python for analyse an .stl data file(3d geometry) and say which model is convex or concave and watertight and tell other properties... I would like to use and TensorFlow, scikit-learn or other machine learning library. Create some database with examples of objects with tags and in future add...
0
1
1,021
0
42,292,153
0
0
0
0
2
false
1
2017-02-17T05:36:00.000
1
2
0
How to give more than one labels with an image in tensorflow?
42,290,182
0.099668
python,tensorflow,neural-network
Do they have always two labels? If so try "label1-label2" as one label. Or simply build two networks, one for label 1 and the other for label 2. Are they hierarchical labels? Then, check out Hierarchical classifiers.
I want to implememt multitask Neural Network in tensorflow, for which I need my input as: [image label1 label2] which I can give to the neural network for training. My question is, how can I associate more than one label with image in TFRecord file? I currently was using build_image_data.py file of inception model fo...
0
1
249
0
42,298,590
0
0
0
0
2
false
1
2017-02-17T05:36:00.000
0
2
0
How to give more than one labels with an image in tensorflow?
42,290,182
0
python,tensorflow,neural-network
I got this working. For any one looking for reference, you can modify Example proto of build_image_data.py file and associate it with two labels. :)
I want to implememt multitask Neural Network in tensorflow, for which I need my input as: [image label1 label2] which I can give to the neural network for training. My question is, how can I associate more than one label with image in TFRecord file? I currently was using build_image_data.py file of inception model fo...
0
1
249
0
42,313,679
0
1
0
0
1
true
0
2017-02-17T15:32:00.000
1
1
0
Temporarily disable facets in Python's FacetedSearch
42,301,719
1.2
python,elasticsearch,elasticsearch-dsl,elasticsearch-py
So you just want to use the Search object's query, but not it's aggregations? In that case just call the object's search() method to get the Search object and go from there. If you want the aggregations, but just want to skip the python-level facets calculation just use the build_search method to get the raw Search obj...
I have created my own customised FacetedSearch class using Pythons Elasticsearch DSL library to perform search with additional filtering in def search(self). Now I would like to reuse my class to do some statistical aggregations. To stay DRY I want to reuse this class and for performance reason I would like to temporar...
0
1
103
0
42,317,702
0
0
0
0
1
true
1
2017-02-18T14:30:00.000
1
1
0
SVD using Scikit-Learn and Gensim with 6 million features
42,316,431
1.2
python,scikit-learn,gensim,svd
I don't really see why using sparks mllib SVD would improve performance or avoid memory errors. You simply exceed the size of your RAM. You have some options to deal with that: Reduce the dictionary size of your tf-idf (playing with max_df and min_df parameters of scikit-learn for example). Use a hashing vectorizer in...
I am trying to classify paragraphs based on their sentiments. I have training data of 600 thousand documents. When I convert them to Tf-Idf vector space with words as analyzer and ngram range as 1-2 there are almost 6 million features. So I have to do Singular value decomposition (SVD) to reduce features. I have tried ...
0
1
961
0
47,821,847
0
0
0
0
1
false
0
2017-02-18T16:54:00.000
-1
1
0
Select specific MNIST classes to train a neural network in TensorFlow
42,317,953
-0.197375
python,tensorflow,mnist
Found the answer i guess... one hot=True transformed the scalar into a one hot vector :) thanks for your time anyway!
currently i am looking for a way to filter specific classes out of my training dataset (MNIST) to train a neural network on different constellations, e.g. train a network only on classes 4,5,6 then train it on 0,1,2,3,4,5,6,7,8,9 to evaluate the results with the test dataset. I'd like to do it with an argument parser v...
0
1
1,514
0
42,521,000
0
0
0
0
1
true
1
2017-02-20T08:47:00.000
1
1
0
CNTK 2 sorted minibatch sources
42,339,941
1.2
python,cntk
You can create two minibatch sources, one for x and one for x_mask, both with randomize=False. Then the examples will be read in the order in which they are listed in the two map files. So as long as the map files are correct and the minibatch sizes are the same for both sources you will get the images and the masks in...
does anyone know how to create or use 2 minibatch sources or inputs a sorted way? My problem is the following: I have images named from 0 to 5000 and images named 0_mask to 5000_mask. For each image x the coressponding image x_mask is the regression image for a deconvolution output. So i need a way to tell cntk that ea...
0
1
65
0
42,718,191
0
0
0
0
1
false
22
2017-02-20T16:45:00.000
1
5
1
Unable to run pyspark
42,349,980
0.039979
python,pyspark
The Possible Issues faced when running Spark on Windows is, of not giving proper Path or by using Python 3.x to run Spark. So, Do check Path Given for spark i.e /usr/local/spark Proper or Not. Do set Python Path to Python 2.x (remove Python 3.x).
I installed Spark on Windows, and I'm unable to start pyspark. When I type in c:\Spark\bin\pyspark, I get the following error: Python 3.6.0 |Anaconda custom (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Traceback (...
0
1
21,967
1
51,124,327
0
0
0
0
1
false
2
2017-02-20T16:46:00.000
2
2
0
Homography and Lucas Kanade what is the difference?
42,350,006
0.197375
python-2.7,computer-vision,homography,opticalflow
Optical flow: detect motions from one frame to the next. This is either sparse (few positions of interest are tracked, such as in the LKDemo.cpp example) or dense (one motion per position for many positions(e.g. all pixels) such as Farneback demos in openCV). Regardless of whether you have dense or sparse flow, there a...
i am using optical flow to track some features i am a begineer and was tol to follow these steps Match good features to track Doing Lucas-Kanade Algorithm on them Find homography between 1-st frame and current frame Do camera calibration Decompose homography map Now what i don't understand is the homography part bec...
0
1
1,959
0
42,352,213
0
1
0
0
1
false
1
2017-02-20T18:21:00.000
0
5
0
Tensorflow installation on Windows 10, error 'Not a supported wheel on this platform'
42,351,728
0
windows,python-3.x,cmd,tensorflow
So are you sure you correctly downgraded your python? Run this command on command line pip -V. This should print the pip version and the python version.
This question is for a Windows 10 laptop. I'm currently trying to install tensorflow, however, when I run: pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl I get the following error: tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl is no...
0
1
3,732
0
45,426,365
0
0
0
0
1
false
3
2017-02-21T02:09:00.000
1
2
0
Scikit-learn non-negative matrix factorization (NMF) for sparse matrix
42,357,450
0.099668
python,scikit-learn,nmf
In your data matrix the missing values can be 0, but rather than storing a bunch of zeros for a very sparse matrix you would usually store a COO matrix instead, where each row is stored in CSR format. If you are using NMF for recommendations, then you would be factorising your data matrix X by finding W and H such that...
I am using Scikit-learn's non-negative matrix factorization (NMF) to perform NMF on a sparse matrix where the zero entries are missing data. I was wondering if the Scikit-learn's NMF implementation views zero entries as 0 or missing data. Thank you!
0
1
1,903
0
42,357,893
0
0
0
0
1
false
0
2017-02-21T02:51:00.000
0
1
0
Draw rectangle in opencv with length and breadth in cms?
42,357,801
0
python,opencv
Its dependent on the pixel-distance ratio. You can measure this by taking an image of a meter-stick and and measuring its pixel width (for this example say its 1000px). The ratio of pixels to distance is 1000px/100cm, or 10. You can now use this constant as a multiplier, so for a given length and width in cm., you will...
I know how to draw a rectangle in opencv. But can I choose the length and breadth to be in centi meters?
0
1
408
0
42,635,609
0
0
0
0
1
false
0
2017-02-21T05:33:00.000
0
2
0
SIFT Input to ANN
42,359,440
0
python,opencv,neural-network,classification,sift
It will be good if you apply Normalization on each image before getting the feature extractor.
I'm trying to classify images using an Artificial Neural Network and the approach I want to try is: Get feature descriptors (using SIFT for now) Classify using a Neural Network I'm using OpenCV3 and Python for this. I'm relatively new to Machine Learning and I have the following question - Each image that I analyse w...
0
1
578
0
42,386,754
0
0
0
0
1
false
2
2017-02-22T08:41:00.000
2
1
0
Tensorflow: concatenating 2 tensors of shapes containing None
42,386,493
0.379949
python,tensorflow
Before asking a question I should probably try to run the code :) Using tf.concat(values=[A, B], concat_dim=3) seems to be working.
My problem is the following: I have a tensor A of shape [None, None, None, 3] ([batch_size, height, width, num_channels]) and a tensor B. At runtime it is guaranteed that A and B will have the same shape. I would like to concatenate these two tensors along num_channels axis. PS. Note that I simplified my original probl...
0
1
857
0
42,403,483
0
0
0
0
1
false
2
2017-02-22T19:05:00.000
0
1
0
High exponent numbers with scipy.stats functions
42,400,159
0
python,pandas,scipy,double,precision
You could just write the expression for logsf directly using logs of gamma functions from scipy.special (gammaln, loggamma). And you could send a pull request implementing the logsf for the chi-square distribution.
I have a set of number that can get very small, from 1e-100, to 1e-700 and lower. The precision doesn't matter as much as the exponent. I can load such numbers just fine using Pandas by simply providing Decimal as a converter for all such numeric columns. The problem is, even if I use Python's Decimal, I just can't use...
0
1
105
0
58,574,666
0
0
0
0
1
false
2
2017-02-22T20:29:00.000
0
1
0
Shape not the same after dumping to libsvm a numpy sparse matrix
42,401,638
0
python,numpy,scikit-learn,libsvm,sklearn-pandas
I suspect your last two columns consist of only 0's. When loading an libsvm file, it generally doesn't have anything indicating the number of columns. It's a sparse format of col_num:val and will learn the maximum number of columns by the highest column number observed. If you only have 0's in the last two columns, the...
I have numpy sparse matrix that I dump in a libsvm format. VC was created using CountVectorizer where the size of the vocabulary is 85731 vc <1315689x85731 sparse matrix of type '<type 'numpy.int64'>' with 38911625 stored elements in Compressed Sparse Row format> But when I load libsvm file back I see that the shap...
0
1
251
0
42,406,043
0
0
0
1
1
false
0
2017-02-23T01:36:00.000
0
1
0
Selecting data from large MySQL database where value of one column is found in a large list of values
42,405,493
0
python,mysql,sql,python-3.x,pandas
I am not familiar with pandas but strictly speaking from a database point of view you could just have your panda values inserted in a PANDA_VALUES table and then join that PANDA_VALUES table with the table(s) you want to grab your data from. Assuming you will have some indexes in place on both PANDA_VALUES table and th...
I generally use Pandas to extract data from MySQL into a dataframe. This works well and allows me to manipulate the data before analysis. This workflow works well for me. I'm in a situation where I have a large MySQL database (multiple tables that will yield several million rows). I want to extract the data where one o...
0
1
352
0
42,419,250
0
0
0
0
1
true
1
2017-02-23T14:46:00.000
1
1
0
python lbp image classification
42,418,948
1.2
python,pattern-matching,classification,svm,image-recognition
If you want to use SVM as a classifier it does not make a lot of sense to make one average histogram for male and one for female because when you train you SVM classifier you can take all the histograms into account, but if you compute the average histograms you can use a nearest neighbor classifier instead.
I am working on a personal project: gender classification (male | female) in python. I'm beginner in this domain I computed histograms for every image in training data. Now, to test if a test image is male or female is possible to make an average histogram for male | female and compare test histograms? Or I must compar...
0
1
470
0
51,049,687
0
0
0
0
1
false
8
2017-02-24T17:30:00.000
0
3
0
How to turn off events.out.tfevents file in tf.contrib.learn Estimator
42,444,796
0
python,tensorflow
I had the same issue and was not able to find any resolution for this while the events file kept on growing in size. My understanding is that this file stores the events generated by tensorflow. I went ahead and deleted this manually. Interestingly, it never got created again while the other files are getting updated w...
When using estimator.Estimator in tensorflow.contrib.learn, after training and prediction there are these files in the modeldir: checkpoint events.out.tfevents.1487956647 events.out.tfevents.1487957016 graph.pbtxt model.ckpt-101.data-00000-of-00001 model.ckpt-101.index model.ckpt-101.meta When the graph is complicat...
0
1
4,050
0
42,461,595
0
0
0
0
1
true
0
2017-02-25T16:14:00.000
1
1
0
Is there any way to only import the MNIST images with 0's and 1's?
42,458,415
1.2
tensorflow,python-3.5,mnist
Assuming you are using from tensorflow.examples.tutorials.mnist import input_data No, there is no function or argument in that file... What you can do is load all data, and select only the ones and zeros.
I am just starting out with tensorflow and I want to test something only on the 0's and 1's from the MNIST images. Is there a way to import only these images?
0
1
98
0
53,147,988
0
1
0
0
2
false
0
2017-02-25T18:07:00.000
0
2
0
Why does my keras model terminate and freeze my notebook if I do more than one epoch?
42,459,726
0
python,keras
You probably have to look into more factors. Look at the system resources, e.g CPU, Memory, Disk IO. (If you use linux, run sar command) For me, I had other problem with frozen notebook, and it turns out to be the issue of low memory.
I did 1 nb_epoch with batch sizes of 10 and it successfully completed. The accuracy rate was absolutely horrible coming in at a whopping 27%. I want to make it run on more than one epoch to see if the accuracy will, ideally, be above 80% or so, but it keeps freezing my Jupyter Notebook if I try to make it do more than ...
0
1
519
0
42,460,023
0
1
0
0
2
false
0
2017-02-25T18:07:00.000
0
2
0
Why does my keras model terminate and freeze my notebook if I do more than one epoch?
42,459,726
0
python,keras
It takes time to run through the epochs and sometimes it looks like it freezes, but it still runs and if you wait long enough it will finish. Increasing the batch size makes it run through the epochs faster.
I did 1 nb_epoch with batch sizes of 10 and it successfully completed. The accuracy rate was absolutely horrible coming in at a whopping 27%. I want to make it run on more than one epoch to see if the accuracy will, ideally, be above 80% or so, but it keeps freezing my Jupyter Notebook if I try to make it do more than ...
0
1
519
0
42,461,103
0
0
0
0
1
false
6
2017-02-25T20:12:00.000
11
1
0
Subset pandas dataframe using values from two columns
42,461,086
1
python,pandas,dataframe,subset
I will answer my own question, hoping it will help someone. I tried this and it worked. df[(df['gold']>0) & (df['silver']>0)] Note that I have used & instead of and and I have used brackets to separate the different conditions.
I am trying to subset a pandas dataframe based on values of two columns. I tried this code: df[df['gold']>0, df['silver']>0, df['bronze']>0] but this didn't work. I also tried: df[(df['gold']>0 and df['silver']>0). This didn't work too. I got an error saying: ValueError: The truth value of a Series is ambiguous. Use ...
0
1
6,839
0
42,481,143
0
0
0
0
1
false
4
2017-02-27T07:24:00.000
3
1
0
Implement Gaussian Mixture Model using keras
42,479,954
0.53705
python-3.x,tensorflow,keras,gmm
Are you sure that it is what you want? you want to integrate a GMM into a neural network? Tensorflow and Keras are libraries to create, train and use neural networks models. The Gaussian Mixture Model is not a neural network.
I am trying to implement Gaussian Mixture Model using keras with tensorflow backend. Is there any guide or example on how to implement it?
0
1
2,619
0
43,356,794
0
0
0
0
1
false
1
2017-02-27T11:10:00.000
1
1
0
What is Non-Intrusive Load Monitoring or energy disaggregation or power signature analysis?
42,484,305
0.197375
python,github,dataset,energy
The aim of non-intrusive load monitoring is to obtain a breakdown of the net energy consumption of a building in terms of individual appliance consumption. There has been work on multiple algorithms so as to get this done ( with varying performance) and as always these can be written in any programming language. NILMT...
Does anybody know anything about NILM or power signature analysis? Can i do non-intrusive load monitoring using python? I got to know about one python toolkit known as NILMTK. But I need help for knowing about NILM. If anybody know about NILM, then please guide me. Thank you.
0
1
613
0
42,501,574
0
0
0
0
1
false
1
2017-02-28T03:56:00.000
1
2
0
How to compare if two images representing the same object if the pictures of the object belongs from two different sources - in OpenCV?
42,499,927
0.099668
python,opencv,image-processing,object-recognition
SIFT feature matching might produce better results than ORB. However, the main problem here is that you have only one image of each type (from the mobile camera and from the Internet. If you have a large number of images of this car model, then you can train a machine learning system using those images. Later you can s...
Suppose I have an image of a car taken from my mobile camera and I have another image of the same car taken downloaded from the internet. (For simplicity please assume that both the images contain the same side view projection of the same car.) How can I detect that both the images are representing the same object i.e....
0
1
1,709
0
42,510,284
0
0
0
0
1
true
0
2017-02-28T13:27:00.000
2
1
0
What algorithm to chose for binary image classification
42,510,042
1.2
python,image,binary,classification,svm
You should probably post this on cross-validated: But as a direct answer you should probably look into sequence to sequence learners as it has been clear to you SVM is not the ideal solution for this. You should look into Markov models for sequential learning if you dont wanna go the deep learning route, however, Ne...
Lets say I have two arrays in dataset: 1) The first one is array classified as (0,1) - [0,1,0,1,1,1,0.....] 2) And the second array costists of grey scale image vectors with 2500 elements in each(numbers from 0 to 300). These numbers are pixels from 50*50px images. - [[13 160 239 192 219 199 4 60..][....][....][....][....
0
1
695
0
42,522,820
0
0
0
0
1
true
1
2017-03-01T03:30:00.000
0
1
0
How Python data structure implemented in Spark when using PySpark?
42,522,654
1.2
python,python-2.7,apache-spark,pyspark
You can create traditional Python data objects such as array, list, tuple, or dictionary in PySpark. You can perform most of the operations using python functions in Pyspark. You can import Python libraries in Pyspark and use them to process data in Pyspark You can create a RDD and apply spark operations on them
I am currently self-learning Spark programming and trying to recode an existing Python application in PySpark. However, I am still confused about how we use regular Python objects in PySpark. I understand the distributed data structure in Spark such as the RDD, DataFrame, Datasets, vector, etc. Spark has its own trans...
0
1
854
0
52,912,100
0
1
0
0
2
false
0
2017-03-01T19:02:00.000
0
3
0
Why can i not import sklearn
42,539,906
0
python,scikit-learn
If someone is working with via bash here are the steps : For ubunutu : sudo apt-get install python-sklearn
Why am I not able to import sklearn? I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn , numpy and scipy in Command Prompt and it shows that it has already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module n...
0
1
7,012
0
42,574,660
0
1
0
0
2
false
0
2017-03-01T19:02:00.000
0
3
0
Why can i not import sklearn
42,539,906
0
python,scikit-learn
Problem solved! I didn't know that I was supposed to change my interpreter to Anaconda's interpreter(I am fairly new to Python). Thanks for the help!
Why am I not able to import sklearn? I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn , numpy and scipy in Command Prompt and it shows that it has already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module n...
0
1
7,012
0
42,558,220
0
1
0
0
1
true
1
2017-03-02T04:09:00.000
1
2
0
Best way to embed Jupyter/IPython notebook information
42,546,701
1.2
python,python-3.x,ipython-notebook,jupyter-notebook
A link to a gist is by far the superior option from those you have listed as that means helpers can run your code pretty easily and debug it from there. An alternative option is to post the code that creates your DataFrame (or at least a minimal example of it) so that we can recreate it. This is advantageous over a gis...
I just run across a problem when trying to ask help using Pandas DataFrames in Jupyper notebook. More specifically my problem is what is the best way to embed iPython notebook input and output to StackOverflow question? Simply copy&paste breaks DataFrame output formatting so bad it becomes impossible to read. Which ...
0
1
432
0
42,551,702
0
0
0
0
1
false
0
2017-03-02T09:02:00.000
0
1
0
Give relative path of file in csv python
42,550,910
0
python,csv,hyperlink
For HYPERLINK you need use only absolute url
I am creating a csv file in which i need to give hyperlinks to files in the same folder of csv file. I have tried with absolute url like =HYPERLINK("file:///home/user/Desktop/myfolder/clusters.py") and its working fine.But can i given the relative path like =HYPERLINK("file:///myfolder/clusters.py") because that is wha...
0
1
215