| GUI and Desktop Applications (int64) | A_Id (int64) | Networking and APIs (int64) | Python Basics and Environment (int64) | Other (int64) | Database and SQL (int64) | Available Count (int64) | is_accepted (bool) | Q_Score (int64) | CreationDate (string) | Users Score (int64) | AnswerCount (int64) | System Administration and DevOps (int64) | Title (string) | Q_Id (int64) | Score (float64) | Tags (string) | Answer (string) | Question (string) | Web Development (int64) | Data Science and Machine Learning (int64) | ViewCount (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 50,958,242 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2017-03-02T19:17:00.000 | 1 | 1 | 0 | How can you update a pyfile in the middle of a PySpark shell session? | 42,564,069 | 0.197375 | python,apache-spark,pyspark | I don't think it's feasible during an interactive session. You will have to restart your session to use the modified module. | Within an interactive pyspark session you can import python files via sc.addPyFile('file_location'). If you need to make changes to that file and save them, is there any way to "re-broadcast" the updated file without having to shut down your spark session and start a new one?
Simply adding the file again doesn't work.... | 0 | 1 | 1,140 |
0 | 42,592,884 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-03-04T06:12:00.000 | 0 | 3 | 0 | Data structure: Top K ordered dictionary keys by value | 42,592,803 | 0 | python,dictionary,data-structures,heap | If your data will not fit in memory, you need to be particularly mindful of how it's stored. Is it in a database, a flat file, a csv file, JSON, or what?
If it is in a "rectangular" file format, you might do well to simply use a standard *nix sorting utility, and then just read in the first k lines. | I have a very large dictionary with entries of the form {(Tuple) : [int, int]}. For example, dict = {(1.0, 2.1):[2,3], (2.0, 3.1):[1,4],...} that cannot fit in memory.
I'm only interested in the top K values in this dictionary sorted by the first element in each key's value. If there a data structure that would allow ... | 0 | 1 | 698 |
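A minimal sketch of the streaming approach described in that answer, assuming a hypothetical on-disk layout of `key1,key2,value1,value2` per line; `heapq.nlargest` only ever holds about k rows in memory:

```python
import csv
import heapq

def top_k_rows(path, k):
    """Return the k rows with the largest first value, streaming from disk."""
    with open(path) as f:
        reader = csv.reader(f)
        # heapq.nlargest keeps only ~k rows in memory at any time.
        return heapq.nlargest(k, reader, key=lambda row: int(row[2]))
```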
0 | 42,611,673 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T09:22:00.000 | 0 | 2 | 0 | Get the Foreground in opencv | 42,606,584 | 0 | python,opencv | There is no way that your camera or software will be able to look at a flat image and decide what is foreground and what is background. Is that parrot sitting on a perch and staring at the camera or is it a picture of a parrot on the wall?
In the past I've made a collection of frames from the camera and formed a refere... | I'm trying to create a program that removes the background and get the foreground in color. For example if a face appears in front of my webcam i need to get the face only. I tried using BackgroundSubtractorMOG in opencv 3. But that didn't solve my problem. Can anyone tell me where to look or what to use. I'm a newbie ... | 0 | 1 | 713 |
0 | 42,610,453 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T15:10:00.000 | 1 | 1 | 0 | model evaluation with "train_test_split" not static? | 42,610,000 | 0.197375 | python,machine-learning,scikit-learn | You can set the random_state parameter to some constant value to reproduce data splits. On the other hand, it's generally a good idea to test exactly what you are trying to know - i.e. run your training at least twice with different random states and compare the results. If they differ a lot it's a sign that something is ... | According to the resources online, the "train_test_split" function from the sklearn.cross_validation module returns data in a random state.
Does this mean if I train a model with the same data twice, I am getting two different models since the training data points used in the learning process is different in each case?
In prac... | 0 | 1 | 86 |
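A quick demonstration of the suggestion above: fixing `random_state` makes the split, and therefore the fitted model, reproducible.

```python
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older releases

X, y = np.arange(20).reshape(10, 2), np.arange(10)
# The same random_state yields the same split on every call.
X_a, _, _, _ = train_test_split(X, y, test_size=0.3, random_state=0)
X_b, _, _, _ = train_test_split(X, y, test_size=0.3, random_state=0)
assert (X_a == X_b).all()  # identical training data -> identical model
```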
0 | 62,075,227 | 0 | 0 | 0 | 0 | 1 | true | 11 | 2017-03-05T16:02:00.000 | 6 | 1 | 0 | Sklearn Model (Python) with NodeJS (Express): how to connect both? | 42,610,590 | 1.2 | python,node.js,scikit-learn,child-process | My recommendation: write a simple python web service (personally recommend flask) and deploy your ML model. Then you can easily send requests to your python web service from your node back-end. You wouldn't have a problem with the initial model loading; it is done once at app startup, and then you're good to go
DO ... | I have a web server using NodeJS - Express and I have a Scikit-Learn (machine learning) model pickled (dumped) in the same machine.
What I need is to demonstrate the model by sending/receiving data from it to the server. I want to load the model on startup of the web server and keep "listening" for data inputs. When re... | 1 | 1 | 3,643 |
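A minimal sketch of the recommended Flask service; `model.pkl` is a hypothetical pickled sklearn estimator, and the `/predict` route name is made up for illustration:

```python
from flask import Flask, request, jsonify
from sklearn.externals import joblib  # plain `import joblib` on newer sklearn

app = Flask(__name__)
model = joblib.load("model.pkl")  # loaded once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    return jsonify(prediction=model.predict([features]).tolist())

if __name__ == "__main__":
    app.run(port=5000)
```

The Node back-end then just POSTs JSON to http://localhost:5000/predict.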
0 | 42,655,243 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-05T18:43:00.000 | 0 | 1 | 0 | Python - Closeness of two values on a logarithmic scale | 42,612,439 | 0 | python,math | So the solution I have is:
linear_closeness = 1 - (difference / max_deviation)
exponential_closeness = 10^linear_closeness / 10
This is suitable for me. I am open to better solutions. | I have two, time value series (using pandas) and would like to represent the "closeness" of the last value in each series in regards to each other on a logarithmic scale between 0 and 1. 0 being very far away and 1 being the same.
I am not sure how to approach this and any help would be appreciated. | 0 | 1 | 101 |
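Putting the two formulas above into one function (a sketch; `max_deviation` is whatever difference you consider "very far away"):

```python
def closeness(a, b, max_deviation):
    linear = 1 - abs(a - b) / max_deviation   # 1 = identical, 0 = max_deviation apart
    return 10 ** linear / 10                  # log-scale version, ranging from 0.1 up to 1
```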
0 | 47,734,394 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-06T12:35:00.000 | 0 | 2 | 0 | How to combine a Self-organising map and a multilayer perceptron in python | 42,625,825 | 0 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | I have been wondering if there is any mileage to training a separate supervised neural network for the inputs which map to each node in the SOM. You'd then have separate supervised learning on the subset of the input data mapping to each SOM node. The networks attached to each node would perhaps be smaller and more eas... | I am working on a image recognition project in python. I have read in journals that if clustering performed by a self-organizing map (SOM) is input into a supervised neural network the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to pe... | 0 | 1 | 1,430 |
0 | 42,777,643 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-06T12:35:00.000 | 1 | 2 | 0 | How to combine a Self-organising map and a multilayer perceptron in python | 42,625,825 | 0.099668 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | Another way to use SOM is for vector quantisation. Rather than using the winning SOM coordinates, use the codebook values of the winning neuron. Not sure which articles you are reading, but I would have said that SOM into MLP will only provide better accuracy in certain cases. Also, you will need to choose parameters l... | I am working on a image recognition project in python. I have read in journals that if clustering performed by a self-organizing map (SOM) is input into a supervised neural network the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to pe... | 0 | 1 | 1,430 |
0 | 42,632,755 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-03-06T17:56:00.000 | 4 | 2 | 0 | Can Convolution2D work on rectangular images? | 42,632,411 | 1.2 | python,image,keras,convolution | No issues with a rectangular image... Everything will work properly, just as for square images. | Let's say I have a 360px by 240px image. Instead of cropping my (already small) image to 240x240, can I create a convolutional neural network that operates on the full rectangle? Specifically using the Convolution2D layer.
I ask because every paper I've read doing CNNs seems to have square input sizes, so I wonder if w... | 0 | 1 | 2,468 |
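A minimal check (assuming a TensorFlow backend with channels-last ordering) that Keras accepts the full 240x360 rectangle:

```python
from keras.models import Sequential
from keras.layers import Convolution2D

model = Sequential()
# 240 rows x 360 columns x 3 channels -- no cropping to a square needed.
model.add(Convolution2D(32, 3, 3, input_shape=(240, 360, 3), activation='relu'))
print(model.output_shape)  # (None, 238, 358, 32) with the default 'valid' padding
```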
0 | 43,152,282 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-07T06:29:00.000 | 1 | 1 | 0 | mod.predict gives more columns than expected | 42,641,657 | 1.2 | python,mxnet | Use "y = mod.predict(val_iter, num_batch=1)" instead of "y = mod.predict(val_iter)"; then you get the labels for only one batch. For example, if your batch_size is 10, then you will only get the 10 labels. | I am using MXNet on IRIS dataset which has 4 features and it classifies the flowers as -'setosa', 'versicolor', 'virginica'. My training data has 89 rows. My label data is a row vector of 89 columns. I encoded the flower names into number -0,1,2 as it seems mx.io.NDArrayIter does not accept numpy ndarray with string va... | 0 | 1 | 158 |
0 | 42,657,727 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-07T17:04:00.000 | 0 | 1 | 0 | Do I need to use a bin packing algorithm, or knapsack? | 42,654,075 | 1.2 | python,algorithm,dynamic-programming,knapsack-problem,bin-packing | This task can be reduced to solving several knapsack problems. To solve them, the principle of greedy search is usually used, and the number of cuts is the criterion of the search.
The first obvious step of the algorithm is checking the balance.
The second step is to arrange the arrays of bars and chocolate needs, whic... | Here's the problem statement:
I have m chocolate bars, of integer length, and n children who
want integer amounts of chocolate. Where the total chocolate needs of
the children are less than or equal to the total amount of chocolate
you have. You need to write an algorithm that distributes chocolate to
the chil... | 0 | 1 | 770 |
0 | 42,697,952 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-07T19:40:00.000 | 1 | 2 | 0 | How can I make matplotlib plot rendering faster | 42,656,915 | 0.099668 | python,matplotlib,plot,pyqt4 | Two possible solutions:
Don't show a scatter plot, but a hexbin plot instead.
Use blitting.
(In case someone wonders about the quality of this answer; mind that the questioner specifically asked for this kind of structure in the comments below the question.) | I want to work with a scatter plot within a FigureCanvasQTAgg. The scatter plot may have 50,000 or more data points. The user wants to draw a polygon in the plot to select the data points within the polygon. I've realized that by setting points via mouse clicks and connecting them with lines using Axis.plot(). When the us... | 0 | 1 | 1,682 |
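A sketch of the hexbin suggestion on synthetic data; one hexbin artist replaces 50,000 scatter points:

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.random.randn(2, 50000)
plt.hexbin(x, y, gridsize=60)          # gridsize trades detail for rendering speed
plt.colorbar(label='points per bin')
plt.show()
```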
0 | 42,658,405 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-07T21:05:00.000 | 1 | 1 | 0 | Speeding up exponential moving average in python | 42,658,330 | 1.2 | python,pandas | by definition, these are functions that are computationally intensive on huge datasets.
So there is very little hope to speed this up. Something you can try is to save the corresponding series as a .csv, do the smoothing in Pandas, and then merge back to your huge dataframe.
Sometimes that can help as carrying a larg... | I found pandas ewm function quite slow when applied to huge data. Is there any way to speed this up or use alternative functions for exponential weighted moving averages? | 0 | 1 | 416 |
0 | 42,665,677 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-08T06:32:00.000 | 0 | 1 | 0 | Can Tensorflow be used to detect if a particular feature exists in an image? | 42,664,493 | 1.2 | python,python-2.7,machine-learning,tensorflow | Yes and no. Tensorflow is a graph computation library mostly suited for neural networks.
You can create a neural network that determines if a face is in the image or not... You can even search for existing implementations that use Tensorflow...
There is no default Haar feature based cascade classifier in Tensorflow... | For example, using OpenCV and haarcascade_frontal_face.xml, we can predict if a face exists in an image. I would like to know if such a thing (detecting an object of interest) is possible with Tensorflow and if yes, how? | 0 | 1 | 540 |
0 | 42,684,721 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2017-03-08T08:20:00.000 | 4 | 2 | 0 | Sklearn K means Clustering convergence | 42,666,255 | 1.2 | python,scikit-learn,k-means | You have access to the n_iter_ field of the KMeans class, it gets set after you call fit (or other routines that internally call fit.
Not your fault for overlooking that, it's not part of the documentation, I just found it by checking the source code ;) | I am trying to construct clusters out of a set of data using the Kmeans algorithm from SkLearn. I want to know how one can determine whether the algorithm actually converged to a solution for one's data.
We feed in the tol parameter to define the tolerance for convergence but there is also a max_iter parameter that de... | 0 | 1 | 2,975 |
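To illustrate the n_iter_ check on toy data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
km = KMeans(n_clusters=3, max_iter=300, tol=1e-4).fit(X)
# n_iter_ < max_iter means the run stopped on the tol criterion, i.e. it converged.
print(km.n_iter_, km.n_iter_ < km.max_iter)
```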
0 | 42,676,591 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-08T15:48:00.000 | 0 | 1 | 0 | How to handle huge volume of data in limited RAM | 42,675,835 | 0 | python,memory-management | This question makes me remember the early 80's. Memory used to be expensive and we invented swapping. The (high level part of the) OS sees more memory than actually present, and pages are copied on disk. When one is needed another one is swapped off, and the page is copied back into memory. Performances are awful, but ... | I have to process a large volume of data ( feature maps of individual layers for around 4000 images) which sizes more than 50 GB after some point. The processing involves some calculation after which around 2MB file is written to the HDD.
Since the free ram is around 40GB my process crashes after some point. Can anyone... | 0 | 1 | 313 |
0 | 45,650,018 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-09T05:19:00.000 | 0 | 1 | 0 | How does graph_def store the shape info of input and output of a node | 42,687,327 | 1.2 | python,tensorflow,deep-learning | Each node in the graph_def doesn't contain the shape of its output tensor; after importing graph_def into memory (with tf.import_graph_def), the shape of each tensor in the graph is automatically determined | I downloaded the inception v3 model (a frozen graph) from the website and imported it into a session, then I found that the shape of inputs and outputs of all nodes in this graph_def are already fully known, but when I freeze my own graph containing tf.Examples queues as inputs, the batch_size info seems to be lost and replac... | 0 | 1 | 438 |
0 | 42,703,588 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-09T19:07:00.000 | 0 | 2 | 0 | overflow in exp(x) in python | 42,703,506 | 0 | python-3.x,numpy | What is happening is that you are overflowing the double-precision float: exp(710) exceeds the largest value a float64 can hold (about 1.8e308), so NumPy returns inf and raises the warning. You will need a different approach, since a wider datatype will most likely not be compatible with exp(.). You might need a custom function that works with 64-bit In... | I'm using the function numpy.log(1+numpy.exp(z))
for small values of z (1-705) it gives the identity result (1-705, as expected),
but for larger value of z from 710+ it gives infinity, and throw error "runtimeWarning: overflow encountered in exp" | 0 | 1 | 3,068 |
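A numerically stable alternative worth noting here: np.logaddexp computes log(e^a + e^b) without ever forming the exponentials, so log(1 + exp(z)) never overflows:

```python
import numpy as np

z = np.array([1.0, 500.0, 710.0, 1000.0])
# log(1 + e**z) ~ [1.31, 500., 710., 1000.] -- no overflow warning
print(np.logaddexp(0, z))
```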
0 | 42,713,556 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-10T00:40:00.000 | 0 | 2 | 0 | How to remove both duplicated values from two csv files in apache spark? | 42,708,129 | 0 | python,csv,apache-spark,pyspark | Thanks for @himanshulllTian's great answer. I want to say some more: if you have several columns in your file, then you just want to remove records based on the key column. Also, I don't know whether your csv files have the same schema. Here is one way to deal with this situation. Let me borrow the example from himanshull... | Newbie to apache spark. What I want to do is to remove both the duplicated keys from two csv files. I have tried dropDuplicates() and distinct() but all they do is remove one value. For example if key = 1010 appears in both the csv files, I want both of them gone. How do I do this? | 0 | 1 | 368 |
0 | 42,708,813 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-10T01:10:00.000 | 0 | 1 | 0 | Pandas read_csv fails | 42,708,388 | 0 | python,csv,pandas | Add this to the read_csv call: lineterminator=':' | I am using pandas read_csv to open a 1327x11 csv file. The first 265 rows are only 4 columns wide. Here are rows 1 to 5:
DWS_LENS1.converter,"-300.0,5.5; -0.1,5.5; 10.0,-5.5; 300.0,-5.5",(mass->volts),: DWS_LENS1.mass_dependent,false,:
DWS_LENS1.voltage.reading,-5.12642,V,:
DWS_LENS1.voltage.target,-4.95000,V,:
DWS_LENS... | 0 | 1 | 1,382 |
0 | 42,730,875 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-10T05:36:00.000 | 0 | 2 | 0 | Interpolate with lmfit? | 42,711,002 | 0 | python,interpolation,curve-fitting,lmfit | There is not a built-in way to automatically interpolate with lmfit. With a lmfit Model, you provide the array of independent values at which the Model should be evaluated, and an array of data to be compared to that model.
You're free to interpolate or smooth the data or perform some other transformation (I sometimes Fo... | I am trying to fit a curve with lmfit but the data set I'm working with does not contain a lot of points and this makes the resulting fit look jagged instead of curved.
I'm simply using the line:
out = mod.fit(SV, pars, x=VR)
were VR and SV are the coordinates of the points I'm trying to fit.
I've tried using scipy.int... | 0 | 1 | 424 |
0 | 42,711,798 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-10T06:00:00.000 | 2 | 2 | 0 | How to get the pseudo inverse of a huge diagonal matrix in python? | 42,711,310 | 1.2 | python,numpy,matrix | Just take the reciprocals of the nonzero elements. You can check with a smaller diagonal matrix that this is what pinv does. | If I have a diagonal matrix with diagonal 100Kx1 and how can I to get its pseudo inverse?
I won't be able to diagonalise the matrix and then get the inverse like I would do for a small matrix, so this won't work:
np.linalg.pinv(np.diag(D)) | 0 | 1 | 945 |
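Following the answer, a sketch that takes the reciprocals of the nonzero diagonal entries directly, without ever building the 100K x 100K matrix:

```python
import numpy as np

d = np.array([2.0, 0.0, 4.0])                        # stand-in for the 100K-long diagonal
d_pinv = np.where(d != 0, 1.0 / np.where(d != 0, d, 1.0), 0.0)
print(d_pinv)                                        # [0.5  0.   0.25]
# Matches np.linalg.pinv(np.diag(d)).diagonal() on small test cases.
```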
0 | 42,725,593 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-10T17:53:00.000 | 1 | 2 | 0 | Fastest way to compare every element with every other in np array of strings | 42,724,795 | 0.099668 | python,numpy,numpy-ufunc | tl;dr
You don't want that.
Details
First let's note that you're actually building a triangular matrix: for the first element, compare it to the rest of the elements, then repeat recursively to the rest. You don't use the triangularity, though. You just cut off the diagonal (each element is always equal to itself) and m... | I have a numpy array of strings, some duplicated, and I'd like to compare every element with every other element to produce a new vector of 1's and 0's indicating whether each pair (i,j) is the same or different.
e.g. ["a","b","a","c"] -> 12-element (4*3) vector [1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
Is there... | 0 | 1 | 2,678 |
0 | 42,750,534 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-12T17:11:00.000 | 0 | 1 | 0 | Understanding the None output when using pandas.DataFrame.apply | 42,750,479 | 1.2 | python,pandas,dataframe,apply | The None occurs because the print() function doesn't return any value and apply() expects the function to return something.
If you want to print the data frame, just use print(df), or if you need some other format, tell us what you are trying to get as the printed output. | I'm trying to use the pandas.DataFrame.apply function. My actual code performs similarly to the example below. At the end of the output it outputs "None" for each row in the dataframe. This behavior causes an error in the function I'm passing through apply.
df = pd.DataFrame({"one": range(0,5), "two": range(0,5)})
df.a... | 0 | 1 | 1,727 |
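To see the difference in a couple of lines:

```python
import pandas as pd

df = pd.DataFrame({"one": range(5), "two": range(5)})
out = df.apply(lambda row: row["one"] + row["two"], axis=1)
print(out)   # a Series of sums -- no Nones
# df.apply(print, axis=1) would instead yield a column of None,
# because print() itself returns nothing.
```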
0 | 42,768,183 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-13T10:28:00.000 | 0 | 1 | 0 | export graph prototxt from tensorflow summary | 42,761,359 | 0 | python,tensorflow | The graph is available by calling tf.get_default_graph. You can get it in GraphDef format by doing graph.as_graph_def(). | When using tensorflow, the graph is logged in the summary file, which I "abuse" to keep track of the architecture modifications.
But that means every time I need to use tensorboard to visualise and view the graph.
Is there a way to write out such a graph prototxt in code or export this prototxt from summary file from t... | 0 | 1 | 421 |
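With the TF 1.x API, the GraphDef can then be written to a prototxt file in one call (paths here are illustrative):

```python
import tensorflow as tf

graph_def = tf.get_default_graph().as_graph_def()
# as_text=True writes a human-readable .pbtxt; as_text=False writes binary .pb
tf.train.write_graph(graph_def, '/tmp/model', 'graph.pbtxt', as_text=True)
```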
0 | 42,765,548 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-13T13:11:00.000 | 2 | 2 | 1 | How to export PATH for sublime build tool? | 42,764,539 | 1.2 | python,tensorflow,sublimetext2,sublimetext3,sublimetext | Ok I got it:
The problem is that the LD_LIBRARY_PATH variable was missing. I only exported it in .bashrc.
When I add
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
to ~/.profile it's working (don't forget to restart).
It also works if I start sublime... | I wanted to create a new "build tool" for sublime text, so that I can run my python scripts with an anaconda env with tensorflow. On my other machines this works without a problem, but on my ubuntu machine with GPU support I get an error.
I think this is due to the missing paths. The path provided in the error message... | 0 | 1 | 668 |
0 | 42,772,141 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-13T19:31:00.000 | 0 | 2 | 0 | Holoviews: AttributeError: 'Image' object has no attribute 'set' | 42,771,938 | 0 | python,bokeh,holoviews | There are some changes in bokeh 0.12.4, which are incompatible with HoloViews 1.6.2. We will be releasing holoviews 1.7.0 later this month, until then you have the option to downgrading to bokeh 0.12.3 or upgrading to the latest holoviews dev release with:
conda install -c ioam/label/dev holoviews
or
pip install https... | I have tried to run the Holoviews examples from the Holoviews website.
I have:
bokeh 0.12.4.
holoviews 1.6.2 py27_0 conda-forge
However, following any of the tutorials I get an error such as the following and am unable to debug:
AttributeError: 'Image' object has no attribute 'set'.
Is anyone able to guide me as to h... | 0 | 1 | 997 |
0 | 50,535,897 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2017-03-14T03:46:00.000 | 5 | 3 | 0 | Why autocompletion options in Spyder 3.1 are not fully working in the Editor? | 42,777,430 | 0.321513 | python,autocomplete,spyder | Autocomplete was not working for me at all.
So, I tried Tools -> Reset Spyder to factory defaults and it worked. | Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing op... | 0 | 1 | 13,872 |
0 | 46,160,256 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2017-03-14T03:46:00.000 | 5 | 3 | 0 | Why autocompletion options in Spyder 3.1 are not fully working in the Editor? | 42,777,430 | 0.321513 | python,autocomplete,spyder | Autocompletion works correctly if there are NO white spaces in the project working directory path. | Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing op... | 0 | 1 | 13,872 |
0 | 42,932,979 | 0 | 0 | 0 | 0 | 2 | true | 66 | 2017-03-14T11:41:00.000 | 30 | 6 | 0 | tf.nn.conv2d vs tf.layers.conv2d | 42,785,026 | 1.2 | python,tensorflow | For convolution, they are the same. More precisely, tf.layers.conv2d (actually _Conv) uses tf.nn.convolution as the backend. You can follow the calling chain of: tf.layers.conv2d > Conv2D > Conv2D.apply() > _Conv > _Conv.apply() > _Layer.apply() > _Layer.__call__() > _Conv.call() > nn.convolution() ... | Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so. | 0 | 1 | 35,607 |
0 | 53,683,545 | 0 | 0 | 0 | 0 | 2 | false | 66 | 2017-03-14T11:41:00.000 | 7 | 6 | 0 | tf.nn.conv2d vs tf.layers.conv2d | 42,785,026 | 1 | python,tensorflow | All of these other replies talk about how the parameters are different, but actually, the main difference of tf.nn and tf.layers conv2d is that for tf.nn, you need to create your own filter tensor and pass it in. This filter needs to have the size of: [kernel_height, kernel_width, in_channels, num_filters]
Essentially,... | Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so. | 0 | 1 | 35,607 |
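A side-by-side sketch of the two APIs (TF 1.x), showing the hand-made filter tensor that tf.nn requires:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# tf.nn: you create the filter yourself, shaped
# [kernel_height, kernel_width, in_channels, num_filters].
w = tf.Variable(tf.truncated_normal([3, 3, 1, 32], stddev=0.1))
y_nn = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

# tf.layers: the filter variable is created and tracked for you.
y_layers = tf.layers.conv2d(x, filters=32, kernel_size=3, padding='same')
```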
0 | 42,902,517 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-03-15T12:48:00.000 | 0 | 1 | 0 | Python vs C++ Tensorflow inferencing | 42,810,240 | 0 | python,c++,tensorflow,benchmarking,inference | Write in the language that you are familiar with, in a way that you can maintain.
If it takes you a day longer to write it in the "faster" language, but only saves a minute of runtime, then it'll have to run 24*60 times to have caught up, and multiple times more than that to have been economical. | Is it really worth it to implement a C++ code for loading an already trained model and then fetch it instead of using Python?.
I was wondering this because as far as I understand, Tensorflow for python is C++ behind the scenes (as it is for numpy). So if one ends up basically having a Python program fetching the model... | 0 | 1 | 789 |
0 | 42,817,461 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-15T17:59:00.000 | 0 | 3 | 0 | How do I save numpy arrays such that they can be loaded later appropriately? | 42,817,337 | 0 | python,arrays,numpy,save | How about ndarray's .tofile() method? To read use numpy.fromfile(). | I have a code which outputs an N-length Numpy array at every iteration.
Eg. -- theta = [ 0, 1, 2, 3, 4 ]
I want to be able to save the arrays to a text file or .csv file dynamically such that I can load the data file later and extract appropriately which array corresponds to which iteration. Basically, it should be sa... | 0 | 1 | 741 |
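A sketch of the usual options; file names here are made up:

```python
import numpy as np

theta = np.array([0, 1, 2, 3, 4])
np.save('theta_iter042.npy', theta)          # binary, exact round-trip
loaded = np.load('theta_iter042.npy')

# Or collect one named array per iteration and write a single archive:
results = {'iter_%03d' % i: np.arange(5) + i for i in range(3)}
np.savez('all_iters.npz', **results)
print(np.load('all_iters.npz')['iter_002'])  # array for iteration 2
```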
0 | 42,821,370 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-15T20:24:00.000 | 1 | 1 | 0 | Python's XGBRegressor vs R's XGBoost | 42,819,987 | 1.2 | python,r,machine-learning,scikit-learn,xgboost | Since XGBoost uses decision trees under the hood it can give you slightly different results between fits if you do not fix random seed so the fitting procedure becomes deterministic.
You can do this via set.seed in R and numpy.random.seed in Python.
Noting Gregor's comment you might want to set nthread parameter to 1 t... | I'm using python's XGBRegressor and R's xgb.train with the same parameters on the same dataset and I'm getting different predictions.
I know that XGBRegressor uses 'gbtree' and I've made the appropriate comparison in R, however, I'm still getting different results.
Can anyone lead me in the right direction on how to ... | 0 | 1 | 1,321 |
0 | 42,841,898 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2017-03-16T13:44:00.000 | 3 | 2 | 0 | Which comes first in order of implementation: POS Tagging or Lemmatisation? | 42,835,852 | 1.2 | python,nlp,nltk,pos-tagger,lemmatization | Part of speech is important for lemmatisation to work, as some words have different meanings depending on their part of speech. Using this information, lemmatisation will return the base form or lemma. So, it would be better if the POS tagging implementation is done first.
The main idea behind lemmatisation is to group diff... | If I wanted to make a NLP Toolkit like NLTK, which features would I implement first after tokenisation and normalisation. POS Tagging or Lemmatisation? | 0 | 1 | 519 |
0 | 42,875,622 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-16T13:44:00.000 | 2 | 2 | 0 | Which comes first in order of implementation: POS Tagging or Lemmatisation? | 42,835,852 | 0.197375 | python,nlp,nltk,pos-tagger,lemmatization | Sure make the POS Tagger first. If you do lemmatisation first you could lose the best possible classification of words when doing the POS Tagger, especially in languages where ambiguity is commonplace, as it is in Portuguese. | If I wanted to make a NLP Toolkit like NLTK, which features would I implement first after tokenisation and normalisation. POS Tagging or Lemmatisation? | 0 | 1 | 519 |
0 | 42,857,362 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2017-03-17T05:02:00.000 | 4 | 1 | 0 | How to repartition a dataframe into fixed sized partitions? | 42,849,572 | 1.2 | python,dataframe,dask | Short answer is probably "no, there is no way to do this without looking at the data". The reason here is that the structure of the graph depends on the values of your lazy partitions. For example we'll have a different number of nodes in the graph depending on your total datasize. | I have a dask dataframe created from delayed functions which is comprised of randomly sized partitions. I would like to repartition the dataframe into chunks of size (approx) 10000.
I can calculate the correct number of partitions with np.ceil(df.size/10000) but that seems to immediately compute the result?
IIUC to com... | 0 | 1 | 298 |
0 | 42,853,495 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-17T06:06:00.000 | 0 | 2 | 0 | How can I get 'y' rather than 'yhat' from predicted data by using fbprophet? | 42,850,344 | 0 | python,facebook | If I'm not mistaken, 'y' is the data you're using to fit with, i.e. the input to prophet. 'yhat' is the mean (or median?) of the predicted distribution. | I can use fbprophet (in python) to get some predicted data, but it just includes 't', 'yhat', 'yhat_upper', 'yhat_lower' and or so rather than 'y' which I also want to acquire.
At present I think I can't get the value of 'y' from the predicted data because Prophet doesn't work for predicting the future value like 'y'.
... | 0 | 1 | 475 |
0 | 42,870,291 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-17T23:21:00.000 | 1 | 1 | 0 | Text recognition and detection using TensorFlow | 42,868,546 | 0.197375 | python,tensorflow,deep-learning,text-classification,text-recognition | To group elements on a page, like paragraphs of text and images, you can use some clustering algo, and/or blob detection with some tresholds.
You can use Radon transform to recognize lines and detect skew of a scanned page.
I think that for character separation you will have to mess with fonts. Some polynomial matching... | I a working on a text recognition project.
I have built a classifier using TensorFlow to predict digits but I would like to implement a more complex algorithm of text recognition by using text localization and text segmentation (separating each character) but I didn't find an implementation for those parts of the algor... | 0 | 1 | 2,116 |
0 | 43,602,549 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-03-18T23:34:00.000 | 0 | 2 | 0 | Cannot import name -> No matching distribution when trying to install | 42,881,154 | 0 | python,pip,installation,importerror | Met the same problem. Fixed by installing the pybrain from github:
pip install https://github.com/pybrain/pybrain/archive/0.3.3.zip | I have no experience with Python. Just trying to run a program I downloaded from GitHub. Had a lot of problems trying to run it. After adding PATH, and installing a few modules(?), I got stuck:
When I type in "python filename.py", I get the error:
ImportError: cannot import name: 'SequentialDataSet'
I got the same e... | 0 | 1 | 2,259 |
0 | 52,922,702 | 0 | 0 | 0 | 0 | 1 | false | 15 | 2017-03-19T03:29:00.000 | -2 | 3 | 0 | Efficient calculation of euclidean distance | 42,882,604 | -0.132549 | python,algorithm,python-3.x,euclidean-distance | I had the same issue before, and it worked for me once I normalized the values. So try to normalize the data before calculating the distance. | I have a MxN array, where M is the number of observations and N is the dimensionality of each vector. From this array of vectors, I need to calculate the mean and minimum euclidean distance between the vectors.
In my mind, this requires me to calculate M choose 2 distances, which is an O(n min(k, n-k)) algorithm. My M is ~10,... | 0 | 1 | 1,648 |
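For data this size, the condensed pairwise-distance vector from SciPy is usually fast enough, and normalizing first (as the answer suggests) is one line:

```python
import numpy as np
from scipy.spatial.distance import pdist

X = np.random.rand(5000, 4)                  # M observations x N dimensions
X = (X - X.mean(axis=0)) / X.std(axis=0)     # normalize as suggested above
d = pdist(X)                                 # all M*(M-1)/2 distances, computed in C
print(d.mean(), d.min())
```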
0 | 42,885,380 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-19T06:28:00.000 | 1 | 2 | 0 | PyCaffe output layer for testing binary classification model | 42,883,603 | 0.099668 | python,deep-learning,caffe,convolution,pycaffe | SigmoidWithLoss layer outputs a single number per batch representing the loss w.r.t the ground truth labels.
On the other hand, Sigmoid layer outputs a probability value for each input in the batch. This output does not require the ground truth labels to be computed.
If you are looking for the probability per input, yo... | I fine tunes vgg-16 for binary classification. I used sigmoidLoss layer as the loss function.
To test the model, I coded a python file in which I loaded the model with an image and got the output using :
out = net.forward()
My doubt is should I take the output from Sigmoid or SigmoidLoss layer.
And What is the differen... | 0 | 1 | 329 |
1 | 42,889,299 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-19T08:17:00.000 | 0 | 1 | 0 | Image Registration accuracy evaluation (Hausdorff distance) using SimpleITK without segmenting the image | 42,884,375 | 0 | python,image,itk,elastix,simpleitk | If I understand your question correctly, you want the impossible: to have Hausdorff distance measure as if the image were segmented, but without segmenting it because the segmentation is hard. | I have registered two images, let's say fixed and moving are Registered. After registration I want to measure overlap ratio etc.
The SimpleITK has overlap measure filters and to use overlap_measures_filter.Execute(fixed, moving) and hausdroff_measures_filter.Execute() we need to segment the image and we need labels i... | 0 | 1 | 363 |
0 | 62,689,216 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2017-03-19T11:56:00.000 | 1 | 6 | 0 | how to get opencv_contrib module in anaconda | 42,886,286 | 0.033321 | python,opencv,anaconda,conda | The question is old but I thought to update the answer with the latest information. My Anaconda version is 2019.10 and build channel is py_37_0 . I used pip install opencv-python==3.4.2.17 and pip install opencv-contrib-python==3.4.2.17. Now they are also visible as installed packages in Anaconda navigator and I am ab... | Can anyone tell me commands to get contrib module for anaconda
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also I am using Anacon... | 0 | 1 | 29,093 |
0 | 44,329,928 | 0 | 0 | 0 | 0 | 2 | false | 10 | 2017-03-19T11:56:00.000 | 14 | 6 | 0 | how to get opencv_contrib module in anaconda | 42,886,286 | 1 | python,opencv,anaconda,conda | I would recommend installing pip in your anaconda environment then just doing: pip install opencv-contrib-python. This comes will opencv and opencv-contrib. | Can anyone tell me commands to get contrib module for anaconda
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also I am using Anacon... | 0 | 1 | 29,093 |
0 | 42,890,197 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-19T17:58:00.000 | 0 | 1 | 0 | construct 3d diagonal tensor using 2d tensor | 42,890,105 | 0 | python,numpy,matrix,tensorflow,broadcast | Ok, I figured it out. tf.matrix_diag() does the trick... | Given A = [[1,2],[3,4],[5,6]]. How to use tf.diag() to construct a 3d tensor where each stack is a 2d diagonal matrix using the values from A? So the output should be B = [[[1,0],[0,2]],[[3,0],[0,4]],[[5,0],[0,6]]]. I want to use this as my Gaussian covariance matries. | 0 | 1 | 153 |
0 | 57,453,505 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2017-03-20T03:19:00.000 | 1 | 1 | 0 | faster numpy array copy; multi-threaded memcpy? | 42,895,292 | 0.197375 | python,arrays,multithreading,numpy,memcpy | If you are certain that the types/memory layout of both arrays are identical, this might give you a speedup: memoryview(A)[:] = memoryview(B) This should be using memcpy directly and skips any checks for numpy broadcasting or type conversion rules. | Suppose we have two large numpy arrays of the same data type and shape, of size on the order of GB's. What is the fastest way to copy all the values from one into the other?
When I do this using normal notation, e.g. A[:] = B, I see exactly one core on the computer at maximum effort doing the copy for several seconds,... | 0 | 1 | 1,300 |
0 | 43,516,507 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2017-03-20T17:18:00.000 | 0 | 3 | 0 | Can I use Cucumber to test an application that uses more than one language? | 42,909,952 | 0 | java,python,hadoop,cucumber,cucumber-jvm | To use cucumber to test desktop applications you can use specflow which uses a framework in visual studio called teststack.white. Just google on cucumber specflow, teststack.white, etc and you should be able to get on track | I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familia... | 0 | 1 | 994 |
0 | 42,931,219 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2017-03-20T17:18:00.000 | 0 | 3 | 0 | Can I use Cucumber to test an application that uses more than one language? | 42,909,952 | 0 | java,python,hadoop,cucumber,cucumber-jvm | The feature files would be written using Gherkin. Gherkin looks the same if you are using Java or Python. So in theory, you are able to execute the same specifications from both Java end Python. This would, however, not make any sense. It would just be a way to implement the same behaviour in two different languages an... | I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familia... | 0 | 1 | 994 |
0 | 42,915,731 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-20T22:38:00.000 | 1 | 1 | 0 | Week Number in python for regression | 42,915,249 | 1.2 | python,date,format,regression | You will need to convert the string describing the week into an integer you can use as the abscissa (x-coordinate, or independent variable). Pick a "zero point", such as FY2012 WK 52, so that FY2013 WK 01 translates to the integer 1.
I don't think DateTime handles this conversion; you might have to code the translation... | My current data set is by Fiscal Week. It is in this format "FY2013 WK 2". How can I format it, so that I can use a regression model on it and predict a value for let's say "FY2017 WK 2".
Should I treat Fiscal Week as a categorical value and use dmatrices? | 0 | 1 | 94 |
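A sketch of that conversion, assuming 52-week fiscal years and FY2012 WK 52 as the zero point (the function name is made up):

```python
def week_index(label, base_year=2012):
    fy, _, wk = label.split()                 # e.g. "FY2013 WK 2"
    return (int(fy[2:]) - base_year - 1) * 52 + int(wk)

print(week_index("FY2013 WK 1"))              # 1
print(week_index("FY2017 WK 2"))              # 210 -- usable as the regression's x
```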
0 | 43,289,954 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-21T12:26:00.000 | 0 | 1 | 0 | How to create an udf for hive using python with 3rd party package like sklearn? | 42,927,141 | 0 | python,hive,package,udf | I recently started looking into this approach and I feel like the problem is not about to get all the 'hive nodes' having sklearn on them (as you mentioned above), I feel like it is rather a compatibility issue than 'sklearn node availability' one. I think sklearn is not (yet) designed to run as a parallel algorithm su... | I know how to create a hive udf with transform and using, but I can't use sklearn because not all the node in hive cluster has sklearn.
I have an anaconda2.tar.gz with sklearn, What should I do ? | 0 | 1 | 314 |
0 | 42,931,849 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-21T15:30:00.000 | 0 | 1 | 0 | Use SVM model trained in Matlab for classification in python | 42,931,469 | 0 | python,matlab,scikit-learn,svm | I guess you understand how SVM works, so what I would do is to train the model again in python just on the support vectors you found rather than on all the original training data and the result should remain the same (as if you trained it on the full data), since the support vectors are the "interesting" vectors in the... | I have a SVM model trained in MATLAB (using 6 features) for which I have:
Support Vectors [337 x 6]
Alpha [337 x 1]
Bias
Kernel Function: @rbf_kernel
Kernel Function Args = 0.9001
GroupNames [781 x 1]
Support Vector Indices [337 x 1]
Scale Data containing:
shift [1 x 6]
scale factor [1 x 6]
These above are all dat... | 0 | 1 | 519 |
0 | 42,932,214 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-21T15:43:00.000 | 0 | 1 | 0 | How can I stop OpenMDAO evaluating at a given location early | 42,931,794 | 1.2 | python,openmdao | Depending on your setup, you can raise an error inside the component that will kill the run. Then you just change the input and start up the next run. Alternately, modify your wrapper for the subsequent code so that if it sees a NaN it skips running and just reports a garbage number that's easily identifiable. | I am using OpenMDAO 1.7.3 for an optimization problem on a map.
My parameters are the coordinates on this map. The first thing I do is interpolating the height at this location from a map in one component. Then some more complex calculations follow in other components.
If OpenMDAO chooses a location outside the boundar... | 0 | 1 | 55 |
0 | 42,968,126 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-23T04:51:00.000 | 3 | 3 | 0 | Adding a jar file to pyspark after context is created | 42,967,472 | 0.197375 | python,apache-spark,jar | sparksession._jsc.addJar does the job. | I am using pyspark from a notebook and I do not handle the creation of the SparkSession.
I need to load a jar containing some functions I would like to use while processing my rdds. This is something which you can easily do using --jars which I cannot do in my particular case.
Is there a way to access the spark scala ... | 0 | 1 | 4,578 |
0 | 62,708,019 | 0 | 1 | 0 | 0 | 1 | false | 47 | 2017-03-23T14:03:00.000 | -2 | 7 | 0 | Anaconda version with Python 3.5 | 42,978,349 | -0.057081 | python,anaconda | It is very simple, first, you need to be inside the virtualenv you created, then to install a specific version of python say 3.5, use Anaconda, conda install python=3.5
In general you can do this for any python package you want
conda install package_name=package_version | I want to install tensorflow with python 3.5 using anaconda but I don't know which anaconda version has python 3.5. When I go to anaconda download page am presented with Anaconda 4.3.1 which has either version 3.6 or 2.7 of python | 0 | 1 | 123,158 |
0 | 43,007,775 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-23T18:08:00.000 | 0 | 1 | 0 | Does updating one Bokeh ColumnDataSource affect the entire document? | 42,983,784 | 0 | python,bokeh | As of Bokeh 0.12.5 update messages are:
triggered immediately for a given property change
granular (are not batched in any way)
So, updating model.foo triggers an immediate message sent to the browser, and that message only pertains to the corresponding model.foo in the browser.
The Bokeh protocol allows for batc... | I have a Bokeh document with many plots/models, each of which has its own ColumnDataSource. If I update one ColumnDataSource, does that trigger updates to all of my models or only to the models to which the changed source is relevant?
I ask because I have a few models, some of which are complex and change slowly and ot... | 0 | 1 | 247 |
0 | 42,996,091 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-23T20:48:00.000 | 1 | 1 | 0 | Selectively Iterate over Tensor | 42,986,686 | 0.197375 | python,machine-learning,tensorflow,keras,conv-neural-network | The general approach is to use binary masks. Tensorflow provides several boolean functions such as tf.equal and tf.not_equal. For selecting only entries which are equal to a certain value, you could use tf.equal and then multiply the loss tensor by the obtained binary mask. | I am currently building a CNN with Keras and need to define a custom loss function. I would only like to consider specific parts of my data in the loss and ignore others based on a certain parameter value. But, I am having trouble iterating over the Tensor objects that the Keras loss function expects.
Is there a simple... | 0 | 1 | 707 |
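A sketch of the masking idea for a Keras loss, assuming entries equal to a sentinel value IGNORE should not contribute:

```python
from keras import backend as K

IGNORE = -1.0   # hypothetical sentinel marking entries to skip

def masked_mse(y_true, y_pred):
    mask = K.cast(K.not_equal(y_true, IGNORE), K.floatx())
    return K.sum(mask * K.square(y_true - y_pred)) / K.maximum(K.sum(mask), 1.0)

# model.compile(optimizer='adam', loss=masked_mse)
```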
0 | 42,995,646 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2017-03-24T08:49:00.000 | 1 | 2 | 0 | piecewise linear interpolation function in python | 42,995,027 | 0.099668 | python,interpolation,coefficients | If you're doing linear interpolation you can just use the fact that the line interpolating the points (x0, y0) and (x1, y1) is given by y - y0 = ((y0 - y1)/(x0 - x1)) * (x - x0). You can take 2-element slices of your list using the slice syntax; for example, to get [2.5, 3.4] you would use x[1:3].
Us... | I'm fairly new to programming and thought I'd try writing a piecewise linear interpolation function. (perhaps which is done with numpy.interp or scipy.interpolate.interp1d)
Say I am given data as follows: x= [1, 2.5, 3.4, 5.8, 6] y=[2, 4, 5.8, 4.3, 4]
I want to design a piecewise interpolation function that will give t... | 0 | 1 | 4,195 |
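For the evaluation side, np.interp already does exactly this:

```python
import numpy as np

x = [1, 2.5, 3.4, 5.8, 6]
y = [2, 4, 5.8, 4.3, 4]
# Evaluate the piecewise-linear interpolant at arbitrary query points.
print(np.interp([2.0, 4.0], x, y))
```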
0 | 42,999,562 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-24T10:55:00.000 | 1 | 1 | 0 | assign new dimension of length one | 42,997,677 | 1.2 | python,python-xarray | You can use xarray.concat to achieve this:
da = xarray.DataArray(0, coords={"x": 42})
xarray.concat((da,), dim="x") | I have a DataArray for which da.dims==(). I can assign a coordinate da.assign_coords(foo=42). I would like to add a corresponding dimension with length one, such that da.dims==("foo",) and the corresponding coordinate would be foo=[42]. I cannot use assign_coords(foo=[42]), as this results in the error message canno... | 0 | 1 | 281 |
0 | 43,012,808 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2017-03-24T14:12:00.000 | 2 | 3 | 0 | How should I interpret the output of numpy.fft.rfft2? | 43,001,729 | 0.132549 | python,numpy,fft | Also note the ordering of the coefficients in the fft output:
According to the doc: by default the 1st element is the coefficient for the 0-frequency component (effectively the sum or mean of the array); starting from the 2nd we have coefficients for the positive frequencies in increasing order, and starting from n/2+1 the... | Obviously the rfft2 function simply computes the discrete FFT of the input matrix. However, how do I interpret a given index of the output? Given an index of the output, which Fourier coefficient am I looking at?
I am especially confused by the sizes of the output. For an n by n matrix, the output seems to be an n by (n... | 0 | 1 | 5,101 |
0 | 43,018,809 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-25T16:15:00.000 | 0 | 4 | 0 | Python How to make array in array? | 43,018,769 | 0 | python,arrays,list,numpy,types | You can't make that array. Arrays in numpy are similar to matrices in math. They have to have m rows, each having n columns. Use a list of lists, or a list of np.arrays. | I have numpy arrays Z1, Z2, Z3:
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ..... | 0 | 1 | 1,326 |
0 | 43,039,088 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2017-03-25T16:15:00.000 | 0 | 4 | 0 | Python How to make array in array? | 43,018,769 | 0 | python,arrays,list,numpy,types | Thanks, everybody! The answers are a little different from what I want, but eventually I solved it without using 'for' or 'while'.
First, I made the numpy arrays Z1, Z2, Z3 and put them into a list Z, so the arrays sit inside a list.
Second, I converted the list Z into a numpy array Z. That is the array-in-array that I want. | I have numpy arrays Z1, Z2, Z3:
Z1 = [1,2,3]
Z2 = [4,5]
Z3 = [6,7,8,9]
I want new numpy array Z that have Z1, Z2, Z3 as array like:
Z = [[1,2,3],[4,5],[6,7,8,9]]
print(type(Z),type(Z[0]))
>>> <class 'numpy.ndarray'> <class 'numpy.ndarray'>
I used np.append, hstack, vstack, insert, concatenate ..... | 0 | 1 | 1,326 |
0 | 43,021,757 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2017-03-25T20:29:00.000 | 0 | 1 | 0 | Can training and evaluation sets be the same in predictive analytics? | 43,021,624 | 1.2 | python,scikit-learn,anaconda,data-mining,prediction | The training set and the evaluation set must be different. The whole point of having an evaluation set is guard against over-fitting.
In this case what you should do is take say 100,000 customers, picked at random. Then use the data to try and learn what is about customers that make them likely purchase A. Then use th... | I'm creating a model to predict the probability that customers will buy product A in a department store that sells product A through Z. The store has it's own credit card with demographic and transactional information of 140,000 customers.
There is a subset of customers (say 10,000) who currently buy A. The goal is to ... | 0 | 1 | 55 |
0 | 43,032,478 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-26T13:29:00.000 | 1 | 1 | 0 | How to get confidence levels(probabilities) for DNN Regressor in Tensorflow | 43,029,358 | 0.197375 | python,tensorflow | In terminal, type help(tf.contrib.learn.DNNRegressor. There you will see the object has methods such as predict() which returns predicted scores.
DNNRegressor does regression, not classification, so you don't get a probability distribution over classes. | For DNN Classifier there is a method predict_proba to get the probabilities, whereas for DNN Regressor it is not there. Please help. | 0 | 1 | 827 |
0 | 43,035,582 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2017-03-26T21:24:00.000 | 3 | 4 | 0 | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | 43,034,716 | 0.148885 | python,anaconda,package,python-packaging,canopy | This is a bit like asking "Why doesn't every motor come with a car around it?"
While a car without a motor is pretty useless, the inverse doesn't hold: Most motors aren't even used for cars. Of course one could try selling a complete car to people who want to have a generator, but they wouldn't buy it.
Also the peopl... | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from ad... | 0 | 1 | 1,040 |
0 | 43,035,255 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2017-03-26T21:24:00.000 | -1 | 4 | 0 | Why doesn't Python come pre-built with required libraries like pandas, numpy etc | 43,034,716 | -0.049958 | python,anaconda,package,python-packaging,canopy | PyPi currently has over 100,000 libraries available. I'm sure someone thinks each of these is important.
Why do you need or want to pre-load libraries, considering how easy a pip install is especially in a virtual environment? | What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of design to keep the 'core' separate from ad... | 0 | 1 | 1,040 |
0 | 44,992,636 | 0 | 1 | 0 | 0 | 3 | false | 7 | 2017-03-27T04:22:00.000 | 0 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 0 | python,scikit-learn | Just closed the Spyder editor and restarted. This Issue got fixed. | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model... | 0 | 1 | 18,428 |
0 | 55,497,890 | 0 | 1 | 0 | 0 | 3 | false | 7 | 2017-03-27T04:22:00.000 | 1 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 0.039979 | python,scikit-learn | The same error appeared when I tried to import hmm from hmmlearn, I reinstalled scipy and it worked. Hope this can be helpful.(I have tried updating all of the packages involved to solve the problem, but did not work. My computer system is ubuntu 16.04, with anaconda3 installed.) | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model... | 0 | 1 | 18,428 |
0 | 43,158,642 | 0 | 1 | 0 | 0 | 3 | true | 7 | 2017-03-27T04:22:00.000 | 10 | 5 | 0 | ImportError: cannot import name 'logsumexp' when importing sklearn.model_selection | 43,037,903 | 1.2 | python,scikit-learn | I came across exactly the same problem just now. After I updated scikit-learn and tried to import sklearn.model_selection, the ImportError appeared.
I just restarted anaconda and ran it again.
It worked. Don't know why. | I was trying to import sklearn.model_selection with Jupiter Notebook under anaconda environment with python 3.5, but I was warned that I didn't have "model_selection" module, so I did conda update scikit-learn.
After that, I received a message of ImportError: cannot import name 'logsumexp' when importing sklearn.model... | 0 | 1 | 18,428 |
0 | 53,089,089 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-27T08:56:00.000 | 0 | 3 | 0 | Using quantopian for data analysis | 43,041,964 | 0 | python-3.x,zipline | You can get data for non-NYSE stocks as well like Nasdaq securities. Screens are also available by fundamentals(market, exchange, market cap). These screens can limit stocks analyzed from the broad universe. | I want to know were Quantopian gets data from?
If I want to do an analysis on a stock market other than NYSE, will I get the data? If not, can I manually upload the data so that I can run my algorithms on it. | 0 | 1 | 312 |
0 | 43,051,434 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-27T13:42:00.000 | 0 | 2 | 0 | How to determine size in bytes of H2O frame in Python? | 43,048,126 | 0 | python,h2o | This refers to 2-4 times the size of the file on disk, so rather than looking at the memory in Python, look at the original file size. Also, the 2-4x recommendation varies by algorithm (GLM & DL will requires less memory than tree-based models). | I am loading Spark dataframes into H2O (using Python) for building machine learning models. It has been recommended to me that I should allocate an H2O cluster with RAM 2-4x as big as the frame I will be training on, so that the analysis fits comfortably within memory. But I don't know how to precisely estimate the siz... | 0 | 1 | 699 |
0 | 43,056,493 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-27T14:50:00.000 | 0 | 1 | 0 | Best way to save a CNN's weights in order to reuse them | 43,049,673 | 0 | python,tensorflow,deep-learning | Tensorflow provides a way to save your model: tensorflow.org/api_docs/python/tf/train/Saver. Your friend should then also use Tensorflow to load them. The language you load / save with doesn't affect how it is saved - if you save them in Tensorflow in Python they can be read in Tensorflow in C++. | I would like to save weights (and biases) from a CNN that I implemented and trained from scratch using Tensorflow (Python API).
Now I would like to save these weights in a file and share it with someone so he can use my network. But since I have a lot of weights I don't know. How can/should I do that? Is there a form... | 0 | 1 | 383 |
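A minimal TF 1.x sketch of the save side (the variable and paths are placeholders for your CNN's weights):

```python
import tensorflow as tf

w = tf.Variable(tf.truncated_normal([3, 3, 1, 32]), name='conv1/weights')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training ...
    saver.save(sess, '/tmp/model.ckpt')   # writes the checkpoint files

# The recipient rebuilds the same graph and calls saver.restore(sess, path).
```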
0 | 43,058,780 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-28T00:36:00.000 | 0 | 1 | 0 | Slow datetime parsing in Pandas | 43,058,703 | 0 | python,performance,csv,pandas,datetime | and 2. In my experience, if processing time is not critical for your study (say you process the data once and then you run your analysis) then I would recommend you parse the dates using pd.to_datetime() and others after you have read in the data.
anything that will help Pandas reduce the set of possibilities about the... | These questions about datetime parsing in pandas.read_csv() are all related.
Question 1
The parameter infer_datetime_format is False by default. Is it safe to set it to True? In other words, how accurately can Pandas infer date formats? Any insight into its algorithm would be helpful.
Question 2
Loading a CSV file with... | 0 | 1 | 527 |
0 | 44,095,963 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-03-28T04:50:00.000 | 1 | 5 | 0 | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 43,060,827 | 0.039979 | python,tensorflow | Adding reuse = tf.get_variable_scope().reuse to BasicLSTMCell is OK to me. | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the... | 0 | 1 | 3,069 |
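A minimal sketch of the fix from this answer, for the TF 1.0 contrib RNN API; the cell size is an assumption:
import tensorflow as tf

def lstm_cell(num_units=200):  # num_units is an assumption
    # Pass the enclosing scope's reuse flag so a second instantiation
    # reuses the existing weights instead of raising the ValueError.
    return tf.contrib.rnn.BasicLSTMCell(
        num_units, forget_bias=0.0, state_is_tuple=True,
        reuse=tf.get_variable_scope().reuse)

cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(2)])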
0 | 45,557,687 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-03-28T04:50:00.000 | 0 | 5 | 0 | ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights | 43,060,827 | 0 | python,tensorflow | You can try adding scope='lstmrnn' to your tf.nn.dynamic_rnn() call. | I run ptb_word_lm.py provided by Tensorflow 1.0, but it shows this message:
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights:
'Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell'; and the cell was
not constructed as BasicLSTMCell(..., reuse=True). To share the... | 0 | 1 | 3,069 |
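A sketch of the scope-based variant from this answer; the shapes and names are assumptions:
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 35, 200])  # [batch, time, features]
cell = tf.contrib.rnn.BasicLSTMCell(200)

# Giving the call its own variable scope keeps its weights from
# colliding with another RNN's variables in the same graph.
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32,
                                   scope='lstmrnn')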
0 | 43,080,891 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-28T22:00:00.000 | 2 | 1 | 0 | Sklearn predict function | 43,080,722 | 1.2 | python,machine-learning,scikit-learn,scikits | The i'th output is the prediction for the i'th input. Whatever you passed to .predict is a collection of objects, and the ordering of predictions is the same as the ordering of the data passed in. | I am using sklearn's Linear Regression ML model in Python to predict. The predict function returns an array with a lot of floating point numbers (which is correct), but I don't quite understand what the floating point numbers represent. Is it possible to map them back?
For context, I am trying to predict sales of a pro... | 0 | 1 | 4,905 |
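A small sketch of mapping predictions back to their inputs; X_train, y_train, X_test and test_ids are assumed to exist:
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(X_train, y_train)
preds = model.predict(X_test)

# preds[i] is the forecast for X_test[i]; pair them up explicitly:
for row_id, pred in zip(test_ids, preds):
    print(row_id, pred)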
0 | 43,098,199 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-03-29T11:40:00.000 | 5 | 3 | 0 | How to add new nodes / neurons dynamically in tensorflow | 43,092,454 | 1.2 | python,machine-learning,tensorflow,neural-network,artificial-intelligence | Instead of creating a whole new graph you might be better off creating a graph which initially has more neurons than you need and masking the extras off by multiplying by a non-trainable variable of ones and zeros. You can then change the value of this mask variable to effectively allow new neurons to act for the first tim... | If I want to add new nodes to one of my tensorflow layers on the fly, how can I do that?
For example if I want to change the amount of hidden nodes from 10 to 11 after the model has been training for a while. Also, assume I know what value I want the weights coming in and out of this node/neuron to be.
I can create a wh... | 0 | 1 | 2,690 |
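A minimal sketch of the masking idea, with all sizes assumed:
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, [None, 100])
W = tf.Variable(tf.truncated_normal([100, 20]))  # room for up to 20 units
mask = tf.Variable(np.r_[np.ones(10), np.zeros(10)].astype(np.float32),
                   trainable=False)              # only 10 units active
hidden = tf.nn.relu(tf.matmul(x, W)) * mask      # masked units output zero

# "Add" an 11th neuron mid-training by flipping its mask entry to one:
activate_unit_10 = tf.scatter_update(mask, [10], [1.0])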
0 | 43,114,845 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-29T16:27:00.000 | 1 | 1 | 0 | Apache Spark: Prerequisite questions | 43,099,139 | 0.197375 | java,python,scala,ubuntu,hadoop | You need to install Hadoop 2.7 separately, in addition to whatever you are installing.
Java version is fine.
The mentioned configuration should work with scala 2.12.1. | I am about to install Apache Spark 2.1.0 on Ubuntu 16.04 LTS. My goal is a standalone cluster, using Hadoop, with Scala and Python (2.7 is active)
Whilst downloading I get the choice: Prebuilt for Hadoop 2.7 and later (File is spark-2.1.0-bin-hadoop2.7.tgz)
Does this package actually include HADOOP 2.7 or does it need... | 0 | 1 | 730 |
0 | 43,239,777 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-03-29T17:29:00.000 | 1 | 1 | 1 | undefined symbol: cudnnCreate in ubuntu google cloud vm instance | 43,100,290 | 0.197375 | python,tensorflow,ubuntu-16.04,cudnn | Answering my own question: the issue was not that the library was not installed; the installed library was the wrong version, hence it could not be found. In this case it was cudnn 5.0. However, even after installing the right version it still didn't work due to incompatibilities between versions of driver, CUDA and cudnn... | I'm trying to run a tensorflow python script in a google cloud vm instance with GPU enabled. I have followed the process for installing GPU drivers, cuda, cudnn and tensorflow. However, whenever I try to run my program (which runs fine in a super computing cluster) I keep getting:
undefined symbol: cudnnCreate
I have ... | 0 | 1 | 304 |
0 | 43,103,742 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-29T19:28:00.000 | 0 | 2 | 0 | why argument random_state in cross_validation.train_test_split is integer not boolean | 43,102,532 | 0 | python,machine-learning,cross-validation,sklearn-pandas | To expand a bit further on Kelvin's answer, if you want a random train-test split, then don't specify the random_state parameter. If you do not want a random train-test split (i.e. you want an identically-reproducible split each time), specify random_state with an integer of your choice. | I need to know why the argument random_state in cross_validation.train_test_split is an integer, not a Boolean, since its role is to flag random allocation or not?
0 | 43,103,553 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-03-29T19:28:00.000 | 2 | 2 | 0 | why argument random_state in cross_validation.train_test_split is integer not boolean | 43,102,532 | 0.197375 | python,machine-learning,cross-validation,sklearn-pandas | random_state is not only a flag for randomness or not, but determines which random seed to use. If you choose random_state = 3 you will "randomly" split the dataset, but you are able to reproduce the same split each time. I.e. each call with the same dataset will yield the same split, which is not the case if you don't specify the... | I need to know why the argument random_state in cross_validation.train_test_split is an integer, not a Boolean, since its role is to flag random allocation or not? | 0 | 1 | 280 |
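A short sketch of the reproducibility point; X and y are assumed to exist:
from sklearn.model_selection import train_test_split
# (sklearn.cross_validation in older versions, as in the question)

# The same integer seed gives an identical split on every run:
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

# Omitting random_state (or passing None) gives a different split each time.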
0 | 43,107,623 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2017-03-29T22:15:00.000 | 3 | 1 | 0 | How are variables shared between concurrent `session.run(...)` calls in tensorflow? | 43,105,148 | 1.2 | python,multithreading,tensorflow | After doing some experimentation it appears that each call to sess.run(...) does indeed see a consistent point-in-time snapshot of the variables.
To test this I performed 2 big matrix multiply operations (taking about 10 sec each to complete), and updated a single, dependent, variable before, between, and after. In an... | If you make two concurrent calls to the same session, sess.run(...), how are variables concurrently accessed in tensorflow?
Will each call see a snapshot of the variables as of the moment run was called, consistent throughout the call? Or will they see dynamic updates to the variables and only guarantee atomic updates... | 0 | 1 | 646 |
0 | 43,189,429 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-03-30T16:24:00.000 | 1 | 1 | 0 | Difference of two dataframes in python | 43,123,378 | 0.197375 | python | import pandas as pd
# stack both frames, then keep only the rows that appear exactly once
df = pd.concat([a, b])
df = df.reset_index(drop=True)
df_gpby = df.groupby(list(df.columns))
# a group of size 1 means the row occurs in only one of the frames
idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1]
df1 = df.reindex(idx) | I have two dataframes
ex:
test_1
name1 name2
a1 b1
a1 b2
a2 b1
a2 b2
a2 b3
test_2
name1 name2
a1 b1
a1 b2
a2 b1
I need the difference of two dataframes like
name1 name2
a2 b2
a2 b3 | 0 | 1 | 63 |
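Reproducing the example frames from the question and applying the answer's snippet (a sketch; the column values are copied from the question):
import pandas as pd

a = pd.DataFrame({'name1': ['a1', 'a1', 'a2', 'a2', 'a2'],
                  'name2': ['b1', 'b2', 'b1', 'b2', 'b3']})
b = pd.DataFrame({'name1': ['a1', 'a1', 'a2'],
                  'name2': ['b1', 'b2', 'b1']})

df = pd.concat([a, b]).reset_index(drop=True)
idx = [g[0] for g in df.groupby(list(df.columns)).groups.values() if len(g) == 1]
print(df.reindex(idx))  # the rows (a2, b2) and (a2, b3)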
0 | 43,141,030 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-03-31T11:36:00.000 | 0 | 2 | 0 | How can I create a deep neural network which has a capability to take a decision for hypothesis? | 43,139,718 | 1.2 | python,machine-learning,statistics,deep-learning | If you are confident that the alternative-hypothesis data come from a different distribution than the null hypothesis, you can try an unsupervised learning algorithm; e.g. k-means or a GMM with the right number of clusters could yield a great separation of the data. You can then assign a label to the second-class data and tr... | Basically, I am interested in solving a hypothesis problem, where I am only aware of the data distribution of a null hypothesis and don't know anything about the alternative case.
My concern is how should I train my deep neural network so that it can classify or recognise whether a particular sample data has a similar ... | 0 | 1 | 103 |
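A sketch of the GMM idea with scikit-learn; X_null (null-hypothesis samples), X_new and the threshold are assumptions:
from sklearn.mixture import GaussianMixture

# Fit a mixture on null-hypothesis data only, then score new samples:
null_gmm = GaussianMixture(n_components=5, random_state=0).fit(X_null)
log_lik = null_gmm.score_samples(X_new)  # low values => unlike the null

# Flag samples below some likelihood threshold as "alternative":
is_alternative = log_lik < -10.0  # threshold is an assumption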
0 | 43,141,287 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-03-31T12:51:00.000 | 0 | 2 | 0 | How to convert a column of a dataframe from char to ascii integers? [Pandas] | 43,141,160 | 0 | python-3.x,pandas | Use ord() to get a character's ASCII code, e.g. "a" = 97 in ASCII:
print(ord("a"))
The answer would be 97. | I have a dataframe in which one column called 'label' holds values like 'b', 'm', 'n' etc.
I want 'label' to instead hold the ascii equivalent of the letter.
How do I do it? | 0 | 1 | 3,205 |
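A column-wise version of the ord() idea for the dataframe case (a sketch; the toy frame mirrors the question):
import pandas as pd

df = pd.DataFrame({'label': ['b', 'm', 'n']})
df['label'] = df['label'].apply(ord)  # 'b' -> 98, 'm' -> 109, 'n' -> 110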
0 | 43,158,973 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-03-31T16:17:00.000 | 1 | 3 | 0 | numpy array of zeros or empty | 43,145,332 | 0.066568 | python,arrays,numpy | It is better to create an array of zeros and fill it using if-else. Even though conditionals slow your code down, reshaping an empty array or concatenating it with new vectors on each loop iteration is a far slower operation, because each time a new array of the new size is created and the old array is copied into it together with the new vector val... | I am writing code and efficiency is very important.
Actually I need a 2d array that I am filling with 0s and 1s in a for loop. What is better and why?
Make an empty array and fill it with "0" and "1". (It's pseudocode; my array will be much bigger.)
Make an array filled with zeros, use an if(), and if not zero - put a one.
So I need ... | 0 | 1 | 5,613 |
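A minimal sketch of the recommended pattern; the sizes and the predicate some_condition are assumptions:
import numpy as np

rows, cols = 1000, 1000
arr = np.zeros((rows, cols), dtype=np.int8)  # allocate once, all zeros

for i in range(rows):
    for j in range(cols):
        if some_condition(i, j):  # hypothetical predicate
            arr[i, j] = 1         # only the ones need writing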
0 | 43,160,280 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-01T17:56:00.000 | 0 | 2 | 0 | in Python 3.6 and numpy, what does the comma mean or do in "predictors[training_indices,:]" | 43,160,202 | 0 | python,python-3.x,numpy,comma | In your code predictors is a two dimensional array. You're taking a slice of the array. Your output will be all the values with training_indices as their index in the first axis. The : is slice notation, meaning to take all values along the second axis.
This kind of indexing is not common in Python outside of numpy, bu... | I am in an online course, and I find I do not understand this expression:
predictors[training_indices,:]
predictors is an np.array of floats.
training_indices is a list of integers known to be indices of predictors, so 0 <= i < len(predictors).
Is this a special numpy expression?
Thanks! | 0 | 1 | 802 |
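A tiny demonstration of this indexing:
import numpy as np

predictors = np.arange(20, dtype=float).reshape(5, 4)  # toy 5x4 array
training_indices = [0, 2, 4]

subset = predictors[training_indices, :]  # rows 0, 2 and 4, every column
print(subset.shape)                       # (3, 4)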
0 | 59,593,711 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-04-02T08:48:00.000 | 1 | 2 | 0 | How to read a csv file as series instead of dataframe in pandas? | 43,166,420 | 0.099668 | python,pandas | There are 2 options to read a series from a csv file:
pd.Series.from_csv('File_name.csv')
pd.read_csv('File_name.csv', squeeze=True)
My preference is using squeeze=True with read_csv. | When I try to use x = pandas.Series.from_csv('File_name.csv', header = None)
It throws an error saying IndexError: single positional indexer is out-of-bounds.
However, If I read it as dataframe and then extract series, it works fine.
x = pandas.read_csv('File_name.csv', header = None)[0]
What could be wrong with firs... | 0 | 1 | 2,872 |
0 | 43,170,260 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-04-02T14:54:00.000 | 2 | 1 | 0 | Using your own Data in Tensorflow | 43,169,766 | 0.379949 | python,tensorflow,neural-network,dataset | I suggest you use the OpenCV library. Whether you use MNIST data or PIL, once loaded, they're all just NumPy arrays. If you want to make MNIST datasets fit with your trained model, here's how I did it:
1. Use cv2.imread to load all the images you want to act as training datasets.
2. Use cv2.cvtColor to conve...
PS. I won't install NLTK. I... | 0 | 1 | 1,168 |
0 | 43,214,452 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2017-04-03T01:21:00.000 | 3 | 1 | 0 | check if tensorflow placeholder is filled | 43,175,272 | 1.2 | python,tensorflow,deep-learning | You can create a third placeholder variable of type boolean to select which branch to use and feed that in at run time.
The logic behind it is that since you are feeding in the placeholders at runtime anyway, you can determine outside of tensorflow which placeholders will be fed. | Suppose I have two placeholder quantities in tensorflow: placeholder_1 and placeholder_2. Essentially I would like the following computational functionality: "if placeholder_1 is defined (ie is given a value in the feed_dict of sess.run()), compute X as f(placeholder_1), otherwise, compute X as g(placeholder_2)." Think... | 0 | 1 | 1,190 |
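A sketch of the boolean-selector idea with tf.cond; f and g are stand-ins here:
import tensorflow as tf

use_first = tf.placeholder(tf.bool)      # the extra selector placeholder
placeholder_1 = tf.placeholder(tf.float32)
placeholder_2 = tf.placeholder(tf.float32)

X = tf.cond(use_first,
            lambda: placeholder_1 * 2.0,   # stand-in for f(placeholder_1)
            lambda: placeholder_2 + 1.0)   # stand-in for g(placeholder_2)

with tf.Session() as sess:
    # the unused placeholder still needs a dummy value in the feed
    print(sess.run(X, {use_first: True,
                       placeholder_1: 3.0, placeholder_2: 0.0}))  # 6.0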
0 | 43,182,382 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-04-03T04:42:00.000 | 2 | 1 | 1 | spark consume from stream -- considering data for longer period | 43,176,607 | 1.2 | python-3.x,apache-spark,pyspark | Your streaming job is not supposed to calculate the Daily count/Avg.
Approach 1 :
You can store the data consumed from Kafka into persistent storage like a DB/HBase/HDFS, and then run a daily batch which will calculate all the statistics for you, like the daily count or average.
Approach 2 :
In order to get that informat... | We have a spark job running which consumes data from a Kafka stream, does some analytics and stores the result.
Since data is consumed as it is produced to Kafka, if we want to get a
count for the whole day, a count for an hour, or an average for the whole
day,
that is not possible with this approach. Is there any way... | 0 | 1 | 35 |
0 | 43,355,585 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-04-03T07:32:00.000 | 0 | 1 | 0 | How to use hmmlearn to classify English text? | 43,178,966 | 0 | python-3.x,text-classification,markov-models,hmmlearn | hmmlearn is designed for unsupervised learning of HMMs, while your problem is clearly supervised: given examples of English and random strings, learn to distinguish between the two. Also, as you've correctly pointed out, the notion of hidden states is tricky to define for text data, therefore for your problem plain ... | I want to implement a classic Markov model problem: Train MM to learn English text patterns, and use that to detect English text vs. random strings.
I decided to use hmmlearn so I don't have to write my own. However I am confused about how to train it. It seems to require the number of components in the HMM, but what i... | 0 | 1 | 764 |
0 | 43,199,249 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-04-04T05:42:00.000 | 4 | 2 | 0 | Regression vs Classifier predict_proba | 43,199,108 | 1.2 | python,machine-learning,scikit-learn,classification,regression | Generally, for a qualitative problem, that is, classifying between categories or classes, we prefer classification.
For example: to identify whether it is night or day.
For quantitative problems, we prefer regression to solve the problem.
For example: to estimate a continuous value, such as the probability of belonging to class 0 or class 1.
But in a special case, when w... | Just a quick question, if I want to classify objects into either 0 or 1 but I would like the model to return me a 'likeliness' probability for example if an object is 0.7, it means it has 0.7 chance of being in class 1, do I do a regression or stick to classifiers and use the predict_proba function?
How is regression ... | 0 | 1 | 2,562 |
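A sketch of the predict_proba route with a classifier; X_train, y_train and X_test are assumed to exist:
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression().fit(X_train, y_train)
proba = clf.predict_proba(X_test)  # shape (n_samples, 2)

# proba[i, 1] is the probability that sample i belongs to class 1,
# e.g. 0.7 means a 0.7 chance of being in class 1.
likely_positive = proba[:, 1] >= 0.5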
0 | 46,034,678 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2017-04-04T08:56:00.000 | 8 | 1 | 0 | gensim KeydVectors dimensions | 43,202,548 | 1 | python-3.x,gensim | kv.vector_size still works; I'm using gensim 2.3.0, which is the latest as I write. (I am assuming kv is your KeyedVectors object.) It appears object properties are not documented on the API page, but auto-complete suggests it, and there is no deprecated warning or anything.
Your question helped me answer my own, whic... | In gensim's latest version, loading trained vectors from a file is done using KeyedVectors, and doesn't require instantiating a new Word2Vec object. But now my code is broken because I can't use the model.vector_size property. What is the alternative to that? I mean something better than just kv[kv.index2word[0]].size. | 0 | 1 | 4,245 |
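A quick sketch; the vector file path and format are assumptions:
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)
print(kv.vector_size)  # dimensionality of the loaded vectors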
0 | 43,210,008 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-04-04T10:22:00.000 | 8 | 2 | 1 | Broken Pipe Error Redis | 43,204,496 | 1.2 | python,sockets,redis,redis-py | Redis' String data type can be at most 512MB. | We are trying to SET pickled object of size 2.3GB into redis through redis-py package. Encountered the following error.
BrokenPipeError: [Errno 32] Broken pipe
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
I would like to understand the root cause. Is it due to input/o... | 0 | 1 | 9,490 |
0 | 43,208,268 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-04-04T12:26:00.000 | -2 | 3 | 0 | Find inverse polynomial in python in GF2 | 43,207,222 | -0.132549 | python,numpy,scipy,polynomials,inverse | Try using the mathematical package Sage. | I'm fairly new to Python and I have a question related to polynomials.
Let's take a high-degree polynomial in GF(2), for example:
x^n + x^m + ... + 1, where n, m could be up to 10000.
I need to find the inverse polynomial of this one. What will be the fastest way to do that in Python (probably using numpy)?
Thanks | 0 | 1 | 3,247 |
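If you would rather stay in plain Python, here is a minimal sketch (my own suggestion, not from the answer above) of the extended Euclidean algorithm over GF(2), with polynomials encoded as Python ints (bit i = the coefficient of x^i), so even degree-10000 polynomials are just big integers:
def gf2_divmod(a, b):
    # polynomial division over GF(2); returns (quotient, remainder)
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_mul(a, b):
    # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_inverse(f, mod):
    # inverse of f modulo `mod` over GF(2), via extended Euclid
    r0, r1 = mod, f
    s0, s1 = 0, 1
    while r1 > 1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, s0 ^ gf2_mul(q, s1)
    if r1 != 1:
        raise ValueError("f is not invertible modulo mod")
    return s1

# example: invert x modulo x^3 + x + 1; expect x^2 + 1
print(bin(gf2_inverse(0b10, 0b1011)))  # 0b101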
0 | 43,324,614 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-04T12:35:00.000 | 0 | 2 | 0 | Why OpenCV face detection recognition the faces for untrained face? | 43,207,422 | 0 | python-2.7,opencv3.0,face-detection,face-recognition,opencv3.1 | i guess here in your problem you are not actually referring to detection ,but recognition ,you must know the difference between these two things:
1-detection does not distinguish between persons, it just detects the facial shape of a person based on the haarcascade previously trained
2-recognition is the case where u ... | I trained 472 unique images for a person A for Face Recognition using "haarcascade_frontalface_default.xml".
While trying to detect the face of the same person A in the same images I trained on, I am getting 20% to 80% confidence; that's fine for me.
But, I am also getting 20% to 80% confidence for person B which ... | 0 | 1 | 402 |
0 | 43,216,616 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-04-04T13:48:00.000 | 0 | 1 | 0 | Given a sparse matrix with shape (num_samples, num_features), how do I estimate the co-occurrence matrix? | 43,209,135 | 0 | python,machine-learning,data-mining | This can be solved reasonably easily if you go to a transposed matrix.
For any two features (now rows, originally columns) you compute the intersection. If it's larger than 50, you have a frequent co-occurrence.
If you use an appropriate sparse encoding (now of rows, but originally of columns - so you probably need not o... | The sparse matrix has only 0 and 1 at each entry (i,j) (1 stands for sample i has feature j). How can I estimate the co-occurrence matrix for each feature given this sparse representation of data points? Especially, I want to find pairs of features that co-occur in at least 50 samples. I realize it might be hard to pro... | 0 | 1 | 101 |
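A sketch of the intersection idea with scipy sparse matrices: for a 0/1 matrix X (samples x features), X.T.dot(X) holds exactly these intersection counts. The toy data here is an assumption:
from scipy import sparse

X = sparse.random(1000, 500, density=0.01, format='csr')  # toy samples x features
X.data[:] = 1.0                                           # binarize

cooc = X.T.dot(X).tocoo()  # entry (j, k) = samples having both features
pairs = [(j, k) for j, k, c in zip(cooc.row, cooc.col, cooc.data)
         if c >= 50 and j < k]  # pairs co-occurring in at least 50 samples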
0 | 52,570,244 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2017-04-04T18:55:00.000 | 1 | 2 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 43,215,443 | 0.099668 | python,influxdb,grafana | I believe this is currently available via kapacitor, but assume a more elegant solution will be readily accomplished using FluxQL.
Consuming the influxdb measurements into kapacitor will allow you to force equivalent time buckets and present the data once normalized. | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels ... | 0 | 1 | 569 |
0 | 43,306,424 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2017-04-04T18:55:00.000 | 0 | 2 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 43,215,443 | 0 | python,influxdb,grafana | I can confirm from my grafana instance that it's not possible to add a shift to one timeseries and not the other in one panel.
To change the timestamp, I'd just do it the obvious way: load a few thousand entries at a time into Python, change the timestamps, and write them to a new measure (and indicate the sh... | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels ... | 0 | 1 | 569 |