GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 39,280,341 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-01T20:21:00.000 | 0 | 2 | 0 | Pandas - how to remove spaces in each column in a dataframe? | 39,280,278 | 0 | python,pandas | data[c] does not return a single value; it returns a Series (a whole column of data).
You can apply the strip operation to an entire column via the vectorized .str accessor or with df.apply. | I'm trying to remove spaces, apostrophes, and double quotes in each column's data using this for loop
for c in data.columns:
data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()
but I keep getting this error:
AttributeError: 'Series' object has no attribute 'strip'
data is the dat... | 0 | 1 | 5,363 |
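A minimal sketch of the vectorized approach described above: keep every string method on the .str accessor (the asker's final .strip() fails because it is called on the Series itself, not on its strings).

```python
import pandas as pd

data = pd.DataFrame({"a": [" x,'1' ", ' y"2" '], "b": [" z ", "w"]})

# Chain everything through .str so each operation stays element-wise
for c in data.columns:
    data[c] = (data[c].str.strip()
                      .str.replace(",", "")
                      .str.replace("'", "")
                      .str.replace('"', ""))

print(data)
```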
0 | 39,290,812 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-01T21:30:00.000 | 0 | 2 | 0 | Solving matrix equation A B = C. with B(n* 1) and C(n *1) | 39,281,149 | 0 | python,matrix,equation-solving | If you're solving for the matrix, there are infinitely many solutions (assuming that B is nonzero). Here's one of the possible solutions:
Choose a nonzero element of B, Bi. Now construct a matrix A such that the ith column is C / Bi, and the other columns are zero.
It should be easy to verify that multiplying th... | I am trying to solve a matrix equation such as A.B = C. A is the unknown matrix and I must find it.
I have B(n*1) and C(n*1), so A must be n*n.
I used the B.T * A.T = C.T method (numpy.linalg.solve(B.T, C.T)).
But it produces an error:
LinAlgError: Last 2 dimensions of the array must be square.
So the problem is t... | 0 | 1 | 339 |
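A small numpy sketch of the construction in the answer above (synthetic B and C; the i-th column of A is C / B_i, all other columns zero):

```python
import numpy as np

B = np.array([[2.0], [0.0], [4.0]])   # shape (n, 1)
C = np.array([[6.0], [8.0], [10.0]])  # shape (n, 1)

n = B.shape[0]
i = int(np.flatnonzero(B)[0])         # index of a nonzero element of B

A = np.zeros((n, n))
A[:, i] = (C / B[i, 0]).ravel()       # i-th column is C / B_i, rest stay zero

assert np.allclose(A @ B, C)          # A @ B reproduces C exactly
```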
0 | 39,334,690 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-05T14:36:00.000 | 0 | 1 | 0 | Failure to import sknn.mlp / Theano | 39,332,901 | 0 | python,installation,attributes,scikit-learn,theano | Apparently it was caused by some issue with Visual Studio. The import worked when I reinstalled VS and restarted the computer.
Thanks @super_cr7 for the prompt reply! | I'm trying to use scikit-learn's neural network module in iPython... running Python 3.5 on a Win10, 64-bit machine.
When I try from sknn.mlp import Classifier, Layer, I get back the following AttributeError: module 'theano' has no attribute 'gof' ...
The command line highlighted for the error is class Discon... | 0 | 1 | 331 |
0 | 39,395,872 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-08T04:42:00.000 | 1 | 1 | 0 | Saving Python list containing Tensorflow Sparsetensors to file for later access? | 39,382,725 | 0.197375 | python,json,tensorflow | A Tensor in TensorFlow is a node in the graph which, when run, will produce a tensor. So you can't save the SparseTensor directly because it's not a value (though you can serialize the graph). If you do evaluate the SparseTensor, you get a SparseTensorValue object back, which can be serialized as it's just a tuple. | I'm creating a list of Sparsetensors in Tensorflow. I want to access them in later sessions of my program. I've read online that you can store Python lists as json files, but how do I save a list of Sparsetensors to a json file and then use that later on?
Thanks in advance | 0 | 1 | 70 |
0 | 44,253,561 | 0 | 0 | 0 | 0 | 2 | false | 149 | 2016-09-08T06:03:00.000 | 21 | 10 | 0 | Show distinct column values in pyspark dataframe | 39,383,557 | 1 | python,apache-spark,pyspark,apache-spark-sql | You can use df.dropDuplicates(['col1','col2']) to get only the rows that are distinct with respect to the columns listed in the array. | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in ... | 0 | 1 | 344,799 |
0 | 60,578,769 | 0 | 0 | 0 | 0 | 2 | false | 149 | 2016-09-08T06:03:00.000 | 1 | 10 | 0 | Show distinct column values in pyspark dataframe | 39,383,557 | 0.019997 | python,apache-spark,pyspark,apache-spark-sql | If you want to select all columns' data as distinct from a DataFrame (df), then
df.select('*').distinct().show(10,truncate=False) | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in ... | 0 | 1 | 344,799 |
0 | 52,673,944 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2016-09-09T13:35:00.000 | 20 | 4 | 0 | pandas.read_html not support decimal comma | 39,412,829 | 1 | python,pandas,decimal,xlm | This did not start working for me until I used both decimal=',' and thousands='.'
Pandas version: 0.23.4
So try to use both decimal and thousands:
i.e.:
pd.read_html(io="http://example.com", decimal=',', thousands='.')
Before I would only use decimal=',' and the number columns would be saved as type str with the numbe... | I was reading an xlm file using pandas.read_html and works almost perfect, the problem is that the file has commas as decimal separators instead of dots (the default in read_html).
I could easily replace the commas with dots in one file, but I have almost 200 files with that configuration.
with pandas.read_csv you can ... | 1 | 1 | 4,626 |
0 | 39,433,909 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-09-11T05:16:00.000 | 1 | 1 | 0 | Find when integral of an interpolated function is equal to a specific value (python) | 39,433,108 | 1.2 | python,scipy | One simple way is to use the CubicSpline class instead. Then it's CubicSpline(x, y).antiderivative().solve(0.05*M) or thereabouts. | I have arrays t_array and dMdt_array of x and y points. Let's call M = trapz(dMdt_array, t_array). I want to find at what value of t the integral of dM/dt vs t is equal to a certain value -- say 0.05*M. In python, is there a nice way to do this?
I was thinking something like F = interp1d(t_array, dMdt_array). Then some... | 0 | 1 | 113 |
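A sketch of the suggested CubicSpline approach, with synthetic stand-ins for t_array and dMdt_array:

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_array = np.linspace(0.0, 10.0, 50)
dMdt_array = np.exp(-t_array)          # synthetic dM/dt samples

M = np.trapz(dMdt_array, t_array)      # total integral, as in the question

spline = CubicSpline(t_array, dMdt_array)
antideriv = spline.antiderivative()    # cumulative integral, zero at t_array[0]

# Find every t where the running integral equals 5% of M
t_hit = antideriv.solve(0.05 * M, extrapolate=False)
print(t_hit)                           # close to -ln(0.95) ~ 0.0513 here
```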
0 | 39,446,832 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-12T01:34:00.000 | 0 | 1 | 0 | Training a neural network with two groups of spacial coordinates per observation? | 39,442,327 | 0 | python,machine-learning,scikit-learn,neural-network,regression | If I got this right, you are basically trying to encode categorical variables in your input, and this is basically done by adding an input variable for each possible class (in your case "group 1" and "group 2") that holds binary values (1 if the sample belongs to the group, 0 if it doesn't). Whether or not you... | I'm trying to predict an output (regression) where multiple groups have spatial (x,y) coordinates. I've been using scikit-learn's neural network packages (MLPClassifier and MLPRegressor), which I know can be trained with spatial data by inputting a 1-D array per observation (ex. the MNIST dataset).
I'm trying to figur... | 0 | 1 | 244 |
0 | 39,447,667 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-09-12T08:34:00.000 | 2 | 1 | 0 | Python: Binary image segmentation | 39,446,262 | 1.2 | python-2.7,image-segmentation,binary-image | To my mind, this is exactly what can be done using scipy.ndimage.measurements.label and scipy.ndimage.measurements.find_objects
You have to specify what "touching" means. If it means edge-sharing, then the default structure of ndimage.measurements.label is the one you need so you just need to pass your array. If touch... | Is there a easy way to implement the segmentation of a binary image in python?
My 2d-"images" are numpy arrays. The used values are 1.0 and 0.0. I would require a list of all objects with the value 1.0. Every black pixel is a pixel of an object. An object may contain many touching pixels with the value 1.0.
I can use n... | 0 | 1 | 1,440 |
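A short sketch of that approach, assuming edge-sharing ("touching") connectivity, which is the default structure:

```python
import numpy as np
from scipy import ndimage

img = np.array([[1., 1., 0., 0.],
                [0., 1., 0., 1.],
                [0., 0., 0., 1.]])

labels, n_objects = ndimage.label(img)   # default 4-connectivity
print(n_objects)                         # 2 objects in this toy image

# One list of pixel coordinates per object with value 1.0
objects = [np.argwhere(labels == k) for k in range(1, n_objects + 1)]
print(objects[0])
```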
0 | 39,451,022 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-12T11:53:00.000 | 0 | 1 | 0 | Python 2.7 opencv Yuv/ YPbPr | 39,449,728 | 1.2 | python-2.7,opencv,yuv | You should take care of how YUV is arranged in memory. There are various formats involved, the most common being YUV NV12 and NV21. In general, data is stored as unsigned bytes. While the range of Y is 0~255, it is -128~127 for U and V. As both U and V approach 0, you have less saturation and approach grayscale. I... | So I've heard that the YUV and YPbPr colour systems are essentially the same.
When I convert BGR to YUV, presumably with the COLOR_BGR2YUV OpenCV flag, what are the ranges for the values that are returned for Y, U and V? Because on Colorizer.org the values seem to be decimals, but I haven't seen OpenCV spit out any decimal p...
0 | 39,481,787 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-13T09:34:00.000 | 2 | 2 | 0 | Use of Scaler with LassoCV, RidgeCV | 39,466,671 | 0.197375 | python,machine-learning,scikit-learn | I got the answer through the scikit-learn mailing list so here it is:
'There is no way to use the "efficient" EstimatorCV objects with pipelines.
This is an API bug and there's an open issue and maybe even a PR for that.'
Many thanks to Andreas Mueller for the answer. | I would like to use scikit-learn LassoCV/RidgeCV while applying a 'StandardScaler' on each fold training set. I do not want to apply the scaler before the cross-validation to avoid leakage but I cannot figure out how I am supposed to do that with LassoCV/RidgeCV.
Is there a way to do this ? Or should I create a pipeli... | 0 | 1 | 726 |
0 | 39,477,667 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-09-13T18:47:00.000 | 1 | 4 | 1 | Error "mach-o, but wrong architecture" after installing anaconda on mac | 39,477,023 | 0.049958 | python,macos,python-2.7 | You are mixing 32-bit and 64-bit versions of Python.
Probably you installed a 64-bit Python version on a 32-bit computer.
Go ahead and uninstall Python, then reinstall it with the right configuration.
I am getting an architecture error while importing any package. I understand my Python might not be compatible, but I can't understand why.
`MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportEr... | 0 | 1 | 7,799 |
0 | 70,210,511 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-09-13T18:47:00.000 | 3 | 4 | 1 | Error "mach-o, but wrong architecture" after installing anaconda on mac | 39,477,023 | 0.148885 | python,macos,python-2.7 | Below steps resolved this problem for me.
Quit the terminal.
Go to Finder => Apps
Right Click on Terminal
Get Info
Check the checkbox Open using Rosetta
Now, open the terminal and try again.
PS: Rosetta allows Mac with M1 architecture to use apps built for Mac with Intel chip. Most of the times the reason behind most... | I am getting an architecture error while importing any package, i understand my Python might not be compatible, can't understand it.
Current Python Version - 2.7.10
`MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportEr... | 0 | 1 | 7,799 |
0 | 46,030,298 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-14T13:06:00.000 | 0 | 1 | 0 | Matplotlib imshow() | 39,491,258 | 0 | python,matplotlib | Without more detail on your specific problem, it's hard to guess the best way to represent your data. I am going to give an example; hopefully it is relevant.
Suppose we are collecting the height and weight of a group of people. Maybe the index of the person is your first dimension, and the height and weight depend... | I am stuck with Python and matplotlib's imshow(). The aim is to show a two-dimensional color map which represents three dimensions.
My x-axis is represented by an array 'TG' (93 entries). My y-axis is a set of arrays dependent on my 'TG'. To be precise, we have 93 different arrays with a length of 340. My z-axis is also a se... | 0 | 1 | 343 |
0 | 39,493,833 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-09-14T14:59:00.000 | 3 | 2 | 0 | Python how does == work for float/double? | 39,493,732 | 1.2 | python,pandas,floating-point | The same string representation will become the same float representation when put through the same parse routine. The float inaccuracy issue occurs either when mathematical operations are performed on the values or when high-precision representations are used, but equality on low-precision values is no reason to worry. | I know using == for float is generally not safe. But does it work for the below scenario?
Read from csv file A.csv, save first half of the data to csv file B.csv without doing anything.
Read from both A.csv and B.csv. Use == to check if data match everywhere in the first half.
These are all done with Pandas. The colu... | 0 | 1 | 717 |
0 | 39,517,206 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-09-15T17:29:00.000 | 0 | 3 | 0 | Finding closest value in a dictionary | 39,517,040 | 0 | python,python-2.7,loops,dictionary,iteration | Since the values for a given a are strictly increasing with successive i values, you can do a binary search for the value that is closest to your target.
While it's certainly possible to write your own binary search code on your dictionary, I suspect you'd have an easier time with a different data structure. If you use... | I have a dictionary, T, with keys in the form k,i with an associated value that is a real number (float). Let's suppose I choose a particular key a,b from the dictionary T with corresponding value V1—what's the most efficient way to find the closest value to V1 for a key that has the form a+1,i, where i is an integer t... | 0 | 1 | 2,461 |
0 | 62,178,663 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-15T21:18:00.000 | 0 | 4 | 0 | How to measure the memory footprint of importing pandas? | 39,520,532 | 0 | python,pandas | After introducing pandas to my script and loading a dataframe with 0.8MB of data, I ran the script and was surprised to see the memory usage increase from 13MB to 49MB. I suspected my existing script had some memory leak, so I used a memory profiler to check what was consuming so much memory, and the culprit turned out to be pandas. Just ... | I am running Python on a low memory system.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | 0 | 1 | 493 |
0 | 39,520,649 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-15T21:18:00.000 | 3 | 4 | 0 | How to measure the memory footprint of importing pandas? | 39,520,532 | 0.148885 | python,pandas | You may also want to use a Memory Profiler to get an idea of how much memory is allocated to your Pandas objects. There are several Python Memory Profilers you can use (a simple Google search can give you an idea). PySizer is one that I used a while ago. | I am running Python on a low memory system.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | 0 | 1 | 493 |
0 | 39,530,209 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-16T06:41:00.000 | 0 | 2 | 1 | Configurate Spark by given Cluster | 39,525,214 | 0 | java,python,scala,apache-spark,pyspark | Your question is unclear. If the data are on your local machine, you should first copy your data to the cluster on the HDFS filesystem. Spark can work in 3 modes with YARN (are you using YARN or Mesos?): cluster, client and standalone. What you are looking for is client mode or cluster mode. But if you want to start the ap... | I have to send some applications in Python to an Apache Spark cluster. There is a given cluster manager and some worker nodes with the addresses to send the applications to.
My question is: how do I set up and configure Spark on my local computer to send those requests, along with the data to be processed, to the cluster?
I am w... | 0 | 1 | 35 |
0 | 39,576,294 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-16T11:10:00.000 | 0 | 1 | 0 | Is it possible to create a polynomial through Numpy's C API? | 39,530,054 | 1.2 | python,c++,numpy,swig | Numpy's polynomial package is largely a collection of functions that can accept array-like objects as the polynomial. Therefore, it is sufficient to convert to a normal ndarray, where the value at index n is the coefficient for the term with exponent n. | I'm using SWIG to wrap a C++ library with its own polynomial type. I'd like to create a typemap to automatically convert that to a numpy polynomial. However, browsing the docs for the numpy C API, I'm not seeing anything that would allow me to do this, only numpy arrays. Is it possible to typemap to a polynomial? | 0 | 1 | 42 |
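A small illustration of the conversion the answer describes, assuming the wrapped library hands over a plain coefficient array (index n holding the coefficient of x**n):

```python
import numpy as np

coeffs = np.array([1.0, 0.0, 2.0])       # represents 1 + 2*x**2

print(np.polynomial.polynomial.polyval(3.0, coeffs))  # 19.0
poly = np.polynomial.Polynomial(coeffs)  # numpy's polynomial class
print(poly(3.0))                         # 19.0
```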
0 | 39,538,477 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-09-16T18:52:00.000 | 0 | 3 | 0 | Given an acyclic directed graph, return a collection of collections of nodes "at the same level"? | 39,538,363 | 0 | python,algorithm,graph,graph-theory,networkx | Why would BFS not solve it? A BFS algorithm is a breadth-first traversal algorithm, i.e. it traverses the tree level-wise. This also means all nodes at the same level are traversed at once, which is your desired output.
As pointed out in a comment, this will, however, assume a starting point in the graph. | Firstly, I am not sure what such an algorithm is called, which is the primary problem, so the first part of the question is: what is this algorithm called?
Basically I have a DiGraph() into which I insert the nodes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and the edges ([1,3],[2,3],[3,5],[4,5],[5,7],[6,7],[7,8],[7,9],[7,10])
From th... | 0 | 1 | 661 |
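A minimal sketch of grouping the example graph level-wise, starting (as the answer notes one must) from the nodes with no predecessors:

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([(1, 3), (2, 3), (3, 5), (4, 5), (5, 7),
                  (6, 7), (7, 8), (7, 9), (7, 10)])

layer = {n for n in g if g.in_degree(n) == 0}   # starting points
seen = set(layer)
levels = []
while layer:
    levels.append(sorted(layer))
    # A node joins the next layer once all of its predecessors are placed
    layer = {s for n in layer for s in g.successors(n)
             if all(p in seen for p in g.predecessors(s))}
    seen |= layer

print(levels)   # [[1, 2, 4, 6], [3], [5], [7], [8, 9, 10]]
```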
0 | 39,809,660 | 0 | 1 | 0 | 0 | 1 | false | 17 | 2016-09-16T22:06:00.000 | 10 | 2 | 0 | Google Cloud Vision - Numbers and Numerals OCR | 39,540,741 | 1 | python,ocr,google-cloud-platform,google-cloud-vision,text-recognition | I am unable to tell you why this works, perhaps it has to do with how the language is read, o vs 0, l vs 1, etc. But whenever I use OCR and I am specifically looking for numbers, I have read to set the detection language to "Korean". It works exceptionally well for me and has influenced the accuracy greatly. | I've been trying to implement an OCR program with Python that reads numbers with a specific format, XXX-XXX. I used Google's Cloud Vision API Text Recognition, but the results were unreliable. Out of 30 high-contrast 1280 x 1024 bmp images, only a handful resulted in the correct output, or at least included the correct... | 0 | 1 | 5,635 |
0 | 39,543,370 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2016-09-17T00:17:00.000 | 1 | 1 | 0 | Setting default histtype in matplotlib? | 39,541,655 | 1.2 | python,matplotlib,data-analysis | Thank you for prompting me to look at this, as I much prefer 'step' style histograms too! I solved this problem by going into the matplotlib source code. I use anaconda, so it was located in anaconda/lib/site-packages/python2.7/matplotlib.
To change the histogram style I edited two of the files. Assuming that the curre... | Is there a way to configure the default argument for histtype of matplotlib's hist() function? The default behavior is to make bar-chart type histograms, which I basically never want to look at, since it is horrible for comparing multiple distributions that have significant overlap.
In case it's somehow relevant, the d... | 0 | 1 | 602 |
0 | 39,577,466 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-18T07:09:00.000 | 0 | 1 | 0 | Can I safely do inference from another thread while training the network? | 39,555,060 | 0 | python,multithreading,thread-safety,locking,tensorflow | Do your inference calls need to be on an up-to-date version of the graph? If you don't mind some delay, you could make a copy of the graph by calling sess.graph.as_graph_def on the training thread, and then create a new session on the inference thread using that graph_def periodically. | I have several threads that either update the weights of my network or run inference on it. I use the use_locking parameter for the optimizer to prevent concurrent updates of the weights.
Inference should always use a recent, and importantly, consistent, version of the weights. In other words, I want to prevent using a... | 0 | 1 | 67 |
1 | 39,557,406 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-09-18T07:38:00.000 | 2 | 1 | 0 | Calling a C++ CUDA device function from a Python kernel | 39,555,235 | 1.2 | python,cuda,cython,numba,pycuda | As far as I am aware, this isn't possible in either language. Neither exposes the necessary toolchain controls for separate compilation or APIs to do runtime linking of device code. | I'm working on a project that involves creating CUDA kernels in Python. Numba works quite well (what these guys have accomplished is quite incredible), and so does PyCUDA.
My problem is that I want to call a C device function from my Python generated kernel. I couldn't find a way to accomplish this. Numba can call CFFI... | 0 | 1 | 604 |
0 | 39,563,394 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-18T21:40:00.000 | 0 | 1 | 0 | predict external dataset with models from random forest | 39,562,939 | 0 | python | First of all, you should not save the result of cross validation. Cross validation is not a training method, it is an evaluation scheme. You should build a single model on your whole dataset and use it to predict.
If, for some reason, you can no longer train your model, you can still use these 5 models' predictions by averaging... | I used joblib.dump in Python to save models from 5-fold cross-validation modelling using random forest. As a result I have 5 models for each dataset, saved as: MDL_1.pkl, MDL_2.pkl, MDL_3.pkl, MDL_4.pkl, MDL_5.pkl. Now I want to use these models for prediction on an external dataset using predict_proba, where the final predi...
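If you do keep the five fold models, a hedged sketch of the averaging idea (synthetic stand-ins are trained in place of loading MDL_1.pkl ... MDL_5.pkl):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X = np.random.rand(100, 4)
y = np.random.randint(0, 2, 100)

# Stand-ins for the five saved fold models
models = [RandomForestClassifier(n_estimators=50, random_state=k).fit(X[tr], y[tr])
          for k, (tr, _) in enumerate(KFold(n_splits=5).split(X))]

X_external = np.random.rand(10, 4)       # the external dataset

# Average the class-probability matrices over the five models
proba = np.mean([m.predict_proba(X_external) for m in models], axis=0)
print(proba.argmax(axis=1))              # final hard predictions
```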
0 | 57,812,535 | 0 | 0 | 0 | 0 | 1 | false | 34 | 2016-09-19T06:39:00.000 | 5 | 2 | 0 | Writing Dask partitions into single file | 39,566,809 | 0.462117 | python,dask | You can convert your dask dataframe to a pandas dataframe with the compute function and then use to_csv, something like this:
df_dask.compute().to_csv('csv_path_file.csv') | New to dask: I have a 1GB CSV file; when I read it into a dask dataframe it creates around 50 partitions, and after my changes to the file, when I write, it creates as many files as there are partitions.
Is there a way to write all partitions to a single CSV file, and is there a way to access the partitions?
Thank you. | 0 | 1 | 14,834 |
0 | 39,581,743 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-19T13:35:00.000 | 0 | 1 | 0 | Is there any K-means++ implementation outside of scikit-learn for Python 2.7? | 39,574,567 | 0 | python-2.7,scipy,scikit-learn,k-means | So, the situation as of today is: there is no distributed Python implementation of KMeans++ other than in scikit-learn. That situation may change if a good implementation finds its way into scipy. | I have nothing against scikit-learn, but I had to install anaconda to get it, which is a bit obtrusive. | 0 | 1 | 72 |
0 | 39,583,123 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-19T22:21:00.000 | 1 | 1 | 0 | regarding the tensor shape is (?,?,?,1) | 39,582,974 | 1.2 | python,tensorflow | This output means that TensorFlow's shape inference has only been able to infer a partial shape for the mask tensor. It has been able to infer (i) that mask is a 4-D tensor, and (ii) its last dimension is 1; but it does not know statically the shape of the first three dimensions.
If you want to get the actual shape of ... | During debuging the Tensorflow code, I would like to output the shape of a tensor, say, print("mask's shape is: ",mask.get_shape()) However, the corresponding output is mask's shape is (?,?,?,1) How to explain this kind of output, is there anyway to know the exactly value of the first three dimensions of this tensor? | 0 | 1 | 569 |
0 | 39,605,142 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2016-09-20T22:51:00.000 | 6 | 2 | 0 | What does (n,) mean in the context of numpy and vectors? | 39,604,918 | 1 | python,numpy,machine-learning,neural-network | (n,) is a tuple of length 1, whose only element is n. (The syntax isn't (n) because that's just n instead of making a tuple.)
If an array has shape (n,), that means it's a 1-dimensional array with a length of n along its only dimension. It's not a row vector or a column vector; it doesn't have rows or columns. It's jus... | I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's Neural Networks and Deep Learning, where he says "It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector."
At fir... | 0 | 1 | 2,722 |
0 | 39,607,825 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-09-21T04:42:00.000 | 0 | 4 | 0 | Number of shortest paths | 39,607,721 | 0 | python,algorithm,chess | Try something. Draw boards of the following sizes: 1x1, 2x2, 3x3, 4x4, and a few odd ones like 2x4 and 3x4. Starting with the smallest board and working to the largest, start at the bottom left corner and write a 0, then find all moves from zero and write a 1, find all moves from 1 and write a 2, etc. Do this until the... | Here is the problem:
Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on ... | 0 | 1 | 3,995 |
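A sketch of the BFS labelling the answer describes, which also counts the number of shortest paths; knight moves are assumed purely for illustration:

```python
from collections import deque

KNIGHT = ((1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1))

def count_shortest(n, x, moves=KNIGHT):
    """BFS from (0, 0): write distance labels layer by layer, and add up
    the number of distinct shortest routes into each square."""
    dist, ways = {(0, 0): 0}, {(0, 0): 1}
    q = deque([(0, 0)])
    while q:
        cx, cy = q.popleft()
        for dx, dy in moves:
            p = (cx + dx, cy + dy)
            if 0 <= p[0] < n and 0 <= p[1] < x:
                if p not in dist:                  # first visit: next layer
                    dist[p] = dist[cx, cy] + 1
                    ways[p] = ways[cx, cy]
                    q.append(p)
                elif dist[p] == dist[cx, cy] + 1:  # another shortest route
                    ways[p] += ways[cx, cy]
    goal = (n - 1, x - 1)
    return dist.get(goal), ways.get(goal, 0)

print(count_shortest(4, 5))
```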
0 | 39,608,395 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-09-21T04:42:00.000 | 0 | 4 | 0 | Number of shortest paths | 39,607,721 | 0 | python,algorithm,chess | My approach to this question would be backtracking as the number of squares in the x-axis and y-axis are different.
Note: Backtracking algorithms can be slow for certain cases and fast for others.
Create a 2-D array for the chessboard. You know the starting index and the final index. To reach the final index you n...
Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on ... | 0 | 1 | 3,995 |
0 | 39,613,813 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-09-21T10:09:00.000 | 0 | 1 | 0 | N-grams - not in memory | 39,613,555 | 0 | python,n-gram,language-model | Sounds like you need to store the intermediate frequency counts on disk rather than in memory. Luckily most databases can do this, and python can talk to most databases. | I have 3 milion abstracts and I would like to extract 4-grams from them. I want to build a language model so I need to find the frequencies of these 4-grams.
My problem is that I can't extract all these 4-grams in memory. How can I implement a system that it can estimate all frequencies for these 4-grams? | 0 | 1 | 83 |
0 | 39,621,200 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-21T14:08:00.000 | 0 | 1 | 0 | Deprecated Scikit-learn module prevents joblib from loading it | 39,618,985 | 0 | python,scikit-learn,joblib | After reverting to scikit-learn 0.16.x, I just needed to install OpenBlas for Ubuntu. It appears that the problem was more a feature of the operating system rather than Python. | I have a Hidden Markov Model that has been pickled with joblib using the sklearn.hmm module. Apparently, in version 0.17.x this module has been deprecated and moved to hmmlearn. I am unable to load the model and I get the following error:
ImportError: No module named 'sklearn.hmm'
I have tried to revert back to versi... | 0 | 1 | 576 |
0 | 39,620,443 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-09-21T15:03:00.000 | 1 | 2 | 0 | sklearn Logistic Regression with n_jobs=-1 doesn't actually parallelize | 39,620,185 | 0.099668 | python,python-2.7,parallel-processing,scikit-learn,logistic-regression | The parallel processing backend also depends on the solver method. If you want to utilize multiple cores, the multiprocessing backend is needed.
But a solver like 'sag' can only use the threading backend.
Also, it can often appear blocked due to a lot of pre-processing. | I'm trying to train a huge dataset with sklearn's logistic regression.
I've set the parameter n_jobs=-1 (also have tried n_jobs = 5, 10, ...), but when I open htop, I can see that it still uses only one core.
Does it mean that logistic regression just ignores the n_jobs parameter?
How can I fix this? I really need this... | 0 | 1 | 2,100 |
0 | 39,640,835 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2016-09-21T23:01:00.000 | 2 | 2 | 0 | Create Folder with Numpy Savetxt | 39,627,787 | 0.197375 | python,numpy | Actually, in order to create all intermediate directories as needed, use os.makedirs(path, exist_ok=True). If the directories already exist, the command will not throw an error.
Is there a way to have np.savetxt creating the folders I need as well?
Thanks | 0 | 1 | 6,937 |
0 | 39,628,096 | 0 | 1 | 0 | 0 | 2 | true | 2 | 2016-09-21T23:01:00.000 | 3 | 2 | 0 | Create Folder with Numpy Savetxt | 39,627,787 | 1.2 | python,numpy | savetxt just does a open(filename, 'w'). filename can include a directory as part of the path name, but you'll have to first create the directory with something like os.mkdir. In other words, use the standard Python directory and file functions. | I'm trying loop over many arrays and create files stored in different folders.
Is there a way to have np.savetxt creating the folders I need as well?
Thanks | 0 | 1 | 6,937 |
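Combining the two answers into one hedged sketch (Python 3, for exist_ok; the output path is hypothetical):

```python
import os
import numpy as np

arr = np.arange(6).reshape(2, 3)
path = "results/run_01/data.txt"    # hypothetical output location

os.makedirs(os.path.dirname(path), exist_ok=True)  # create folders as needed
np.savetxt(path, arr)               # savetxt itself only does open(path, 'w')
```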
0 | 39,664,586 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-22T15:34:00.000 | 0 | 2 | 0 | Is it possible to group tensorflow FLAGS by type and generate a string from them? | 39,643,256 | 0 | python,machine-learning,computer-vision,tensorflow | I'm guessing that you're wanting to automatically store the hyper-parameters as part of the file name in order to organize your experiments better? Unfortunately there isn't a good way to do this with TensorFlow, but you can look at some of the high-level frameworks built on top of it to see if they offer something sim... | Is it possible to group tensorflow FLAGS by type?
E.g.
Some flags are system related (e.g. # of threads) while others are model hyperparams.
Then, is it possible to use the model hyperparams FLAGS, in order to generate a string? (the string will be used to identify the model filename)
Thanks | 0 | 1 | 79 |
0 | 39,650,350 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-09-22T23:24:00.000 | 3 | 1 | 0 | Convert a numpy array into an array of signs with 0 as positive | 39,650,312 | 1.2 | python,numpy | If x is the array, you could use 2*(x >= 0) - 1.
x >= 0 will be an array of boolean values (i.e. False and True), but when you do arithmetic with it, it is effectively cast to an array of 0s and 1s.
You could also do np.sign(x) + (x == 0). (Note that np.sign(x) returns floating point values, even when x is an integer ... | I have a large numpy array with positive data, negative data and 0s. I want to convert it to an array with the signs of the current values such that 0 is considered positive. If I use numpy.sign it returns 0 if the current value is 0 but I want something that returns 1 instead. Is there an easy way to do this? | 0 | 1 | 2,087 |
0 | 39,652,742 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-23T04:16:00.000 | 0 | 2 | 1 | How to know which .whl module is suitable for my system with so many? | 39,652,553 | 0 | python,python-wheel,python-install | You don't have to know. Use pip - it will select the most specific wheel available. | We have so many versions of wheels.
How can we know which version should be installed on my system?
I remember there is a certain command which can check my system environment.
Or is there any other ways?
---------------------Example Below this line -----------
scikit_learn-0.17.1-cp27-cp27m-win32.whl
scikit_learn... | 0 | 1 | 1,458 |
0 | 39,668,864 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-09-23T19:24:00.000 | 1 | 2 | 0 | homography and image scaling in opencv | 39,668,174 | 0.099668 | python,opencv,coordinate-transformation,homography | The way I see it, the problem is that homography applies a perspective projection which is a non linear transformation (it is linear only while homogeneous coordinates are being used) that cannot be represented as a normal transformation matrix. Multiplying such perspective projection matrix with some other transformat... | I am calculating an homography between two images img1 and img2 (the images contain mostly one planar object, so the homography works well between them) using standard methods in OpenCV in python. Namely, I compute point matches between the images using sift and then call cv2.findHomography.
To make the computation fas... | 0 | 1 | 2,276 |
0 | 39,671,924 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-24T01:34:00.000 | 0 | 3 | 0 | Is my understanding of Hashsets correct?(Python) | 39,671,661 | 0 | python,algorithm,data-structures | The lookup time wouldn't be O(n) because not all items need to be searched, it also depends on the number of buckets. More buckets would decrease the probability of a collision and reduce the chain length.
The number of buckets can be kept as a constant factor of the number of entries by resizing the hash table as need... | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and th... | 0 | 1 | 1,427 |
0 | 39,671,749 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-24T01:34:00.000 | 0 | 3 | 0 | Is my understanding of Hashsets correct?(Python) | 39,671,661 | 0 | python,algorithm,data-structures | For your first question - why is the average time complexity of a lookup O(1)? - this statement is in general only true if you have a good hash function. An ideal hash function is one that causes a nice spread on its elements. In particular, hash functions are usually chosen so that the probability that any two element... | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and th... | 0 | 1 | 1,427 |
0 | 39,672,721 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-24T04:07:00.000 | 0 | 1 | 0 | Organizing records to classes | 39,672,376 | 0 | python,class | I would say yes. Basically I want to:
Take the unique set of data
Filter it so that just a subset is considered (filter parameters can be time of recording for example)
Use a genetic algorithm on the filtered data to match a target on average.
Step 3 is irrelevant to the post, I just wanted to give the big picture in or... | I'm planning to develop a genetic algorithm for a series of acceleration records in a search to find optimum match with a target.
At this point my data is array-like with a unique ID column, X,Y,Z component info in the second, time in the third etc...
That being said each record has several "attributes". Do you think i... | 0 | 1 | 17 |
0 | 54,791,471 | 0 | 0 | 0 | 0 | 2 | false | 206 | 2016-09-25T21:12:00.000 | 1 | 9 | 0 | Ordering of batch normalization and dropout? | 39,691,902 | 0.022219 | python,neural-network,tensorflow,conv-neural-network | The correct order is: Conv > Normalization > Activation > Dropout > Pooling | The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.
When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried... | 0 | 1 | 130,653 |
0 | 63,051,525 | 0 | 0 | 0 | 0 | 2 | false | 206 | 2016-09-25T21:12:00.000 | 0 | 9 | 0 | Ordering of batch normalization and dropout? | 39,691,902 | 0 | python,neural-network,tensorflow,conv-neural-network | Conv/FC - BN - Sigmoid/tanh - Dropout.
If the activation function is ReLU or similar, the order of normalization and dropout depends on your task | The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.
When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried... | 0 | 1 | 130,653 |
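A hedged Keras sketch of the Conv > Normalization > Activation > Dropout > Pooling ordering recommended above:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, padding="same", input_shape=(28, 28, 1)),
    layers.BatchNormalization(),    # normalize the pre-activations
    layers.Activation("relu"),
    layers.Dropout(0.25),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```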
0 | 39,714,032 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-26T23:18:00.000 | 0 | 2 | 0 | Need some clarification on Kruskals and Union-Find | 39,713,798 | 0 | python,graph,kruskals-algorithm | Actually the running time of the algorithm is O(E log(V)).
The key to its performance lies in your point 4; more specifically, the implementation of determining, for a light edge e = (a, b), whether 'a' and 'b' belong to the same set and, if not, performing the union of their respective sets.
For more clarifications on the t... | Please help me fill any gaps in my knowledge (teaching myself):
So far I understand that, given a graph of N vertices and edges, we want to form an MST that will have N-1 edges
We order the edges by their weight
We create a set of subsets where each vertex is given its own subset. So if we have {A,B,C,D} as our initial ... | 0 | 1 | 329 |
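A compact union-find sketch (path compression plus union by rank), the piece that makes the cycle check in Kruskal's algorithm effectively constant time:

```python
class DisjointSet:
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.rank = {v: 0 for v in vertices}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path compression
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False              # same subset: edge would form a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # union by rank
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

edges = sorted([(1, "A", "B"), (3, "B", "C"), (2, "A", "C")])  # by weight
ds = DisjointSet("ABC")
mst = [e for e in edges if ds.union(e[1], e[2])]
print(mst)   # [(1, 'A', 'B'), (2, 'A', 'C')]
```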
0 | 39,716,115 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-09-27T03:22:00.000 | 0 | 2 | 0 | How to put an overlay on a video | 39,715,472 | 0 | python,opencv,video,overlay | What you need are 2 Mat objects- one to stream the camera (say Mat_cam), and the other to hold the overlay (Mat_overlay).
When you draw on your main window, save the line and Rect objects on Mat_overlay, and make sure that it is not affected by the streaming video
When the next frame is received, Mat_cam will be update... | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top o... | 0 | 1 | 1,160 |
0 | 39,721,387 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-09-27T03:22:00.000 | 0 | 2 | 0 | How to put an overlay on a video | 39,715,472 | 0 | python,opencv,video,overlay | I am not sure that I have understood your question properly. What I got from your question is that you want the overlay to remain on your frame, streamed from VideoCapture. For that, one simple solution is to declare your "Mat_cam" (camera streaming variable) outside the loop that is used to capture frames, so that "Mat_c... | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top o...
0 | 39,743,987 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-28T05:27:00.000 | 0 | 2 | 0 | How do you get a probability of all classes to predict without building a classifier for each single class? | 39,738,703 | 0 | python,machine-learning,scikit-learn | Random forests do indeed give P(Y/x) for multiple classes. In most cases
P(Y/x) can be taken as:
P(Y/x)= the number of trees which vote for the class/Total Number of trees.
However you can play around with this, for example in one case if the highest class has 260 votes, 2nd class 230 votes and other 5 classes 10 votes... | Given a classification problem, sometimes we do not just predict a class, but need to return the probability that it is a class.
i.e. P(y=0|x), P(y=1|x), P(y=2|x), ..., P(y=C|x)
Without building a new classifier to predict y=0, y=1, y=2... y=C respectively. Since training C classifiers (let's say C=100) can be quite sl... | 0 | 1 | 1,945 |
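A minimal sketch showing that one multiclass model already returns the full probability vector per sample, with no per-class classifiers needed:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

proba = clf.predict_proba(X[:2])   # one row per sample, one column per class
print(clf.classes_)                # the column order
print(proba)                       # each row sums to 1.0
```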
0 | 39,747,938 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-28T12:42:00.000 | 5 | 2 | 0 | The efficient way of Array transformation by using numpy | 39,747,900 | 0.462117 | python,numpy | Just numpy.transpose(U) or U.T. | How to change the ARRAY U(Nz,Ny, Nx) to U(Nx,Ny, Nz) by using numpy? thanks | 0 | 1 | 79 |
0 | 47,186,550 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-28T16:13:00.000 | 1 | 2 | 0 | MPlot3D Image Manipulation in IPython | 39,752,700 | 0.099668 | python,ipython,spyder,mplot3d | I initially faced the same issue:
Everything seemed to be alright, but I couldn't rotate the picture.
After toggling between graphical and automatic in
Tools > preferences > IPython console > Graphics > Graphics backend > Backend: ....
I could rotate the image | I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles.
So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. ... | 0 | 1 | 472 |
0 | 46,331,190 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-28T16:13:00.000 | 4 | 2 | 0 | MPlot3D Image Manipulation in IPython | 39,752,700 | 0.379949 | python,ipython,spyder,mplot3d | Yes, you can rotate and interact with Mplot3d plots in Spyder, you just have to change the setting so that plots appear in a separate window, rather than in the IPython console. Just change the inline setting to automatic:
Tools > preferences > IPython console > Graphics > Graphics backend > Backend: Automatic
Then cli... | I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles.
So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. ... | 0 | 1 | 472 |
0 | 39,778,818 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-29T00:44:00.000 | 0 | 2 | 0 | manually building installing python packages in linux so they are recognized | 39,759,680 | 0 | python | I think I figured it out. Apparently SLES 11.4 does not include the development headers for numpy 1.8 in the default install from their SDK.
And of course they don't offer matplotlib along with a bunch of common Python packages.
The Python packages from the SLES SDK are the system default and are located under /usr/lib64/pyth... | My system is SLES 11.4, which has Python 2.6.9.
I know little about Python and have not found where to download RPMs that provide the Python packages I need.
I acquired numpy 1.4 and 1.11 and I believe I did a successful python setup.py build followed by python setup.py install on numpy.
Going from memory I think this installed... | 0 | 1 | 44 |
0 | 39,813,792 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-29T20:27:00.000 | 0 | 2 | 0 | how to export data to unix system location using python | 39,779,412 | 1.2 | python,unix | I have found the solution. It might be because I am using Spyder from Anaconda. As long as I use "/" instead of "\", Python can recognize the location. | I am trying to write the file to my company's project folder, which is a Unix system, and the location is /department/projects/data/. So I used the following code
df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0)
The error message shows it cannot find the location. How do I specify the file location in U... | 0 | 1 | 43 |
0 | 39,817,575 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-02T13:33:00.000 | 1 | 1 | 0 | Python sorted plot | 39,817,545 | 1.2 | python,sorting,plot,bar-chart,seaborn | Do you save the changes from pd.sort_values? If not, you probably have to add the inplace keyword:
mydf.sort_values(['myValueField'], ascending=False, inplace=True) | I want to use seaborn to perform a sns.barplot where the values are ordered e.g. in ascending order.
In case the order parameter of seaborn is set the plot seems to duplicate the labels for all non-NaN labels.
Trying to pre-sort the values like mydf.sort_values(['myValueField'], ascending=False) does not change the res... | 0 | 1 | 144 |
0 | 39,833,244 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-10-03T13:23:00.000 | 4 | 2 | 0 | Convert Pandas Dataframe Date Index and Column to Numpy Array | 39,832,735 | 1.2 | python,arrays,pandas,numpy,dataframe | If A is dataframe and col the column:
import pandas as pd
output = pd.np.column_stack((A.index.values, A.col.values)) | How can I convert 1 column and the index of a Pandas dataframe with several columns to a Numpy array with the dates lining up with the correct column value from the dataframe?
There are a few issues here with data type and its driving my nuts trying to get both the index and the column out and into the one array!!
Help... | 0 | 1 | 4,044 |
0 | 39,856,855 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2016-10-04T05:30:00.000 | 13 | 4 | 1 | "'CXXABI_1.3.8' not found" in tensorflow-gpu - install from source | 39,844,772 | 1 | python,tensorflow | I solved this problem by copying the libstdc++.so.6 file which contains version CXXABI_1.3.8.
Try run the following search command first:
$ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8
If it returns CXXABI_1.3.8. Then you can do the copying.
$ cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/j... | I have re-installed Anaconda2.
And I got the following error when 'python -c 'import tensorflow''
ImportError: /home/jj/anaconda2/bin/../lib/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/jj/anaconda2/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
environment
CUDA8.0
cuDNN... | 0 | 1 | 24,320 |
0 | 39,858,096 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-04T15:29:00.000 | 1 | 1 | 0 | How to visualize DNNs dependent of the output class in TensorFlow? | 39,856,291 | 0.197375 | python-2.7,tensorflow,deep-learning | The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing w.r.t. the parameters of the network, you optimize w.r.t the input (which has to be a variable with the shape of your input image). Your optimization target is the negative (because you want to ... | In TensorFlow it is pretty straight forward to visualize filters and activation layers given a single input.
But I'm more interested in the opposite way: feeding a class (as one-hot vector) to the output layer and see something like the optimal input image for that specific class.
Is there a way to do so or to run the ... | 0 | 1 | 95 |
0 | 39,892,009 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2016-10-05T07:45:00.000 | 0 | 1 | 0 | DBF Table Join without using Arcpy? | 39,868,163 | 0 | python-2.7,dbf,arcpy,pyshp | Not exactly a programmatic solution to my problem, but a practical one:
My shapefile is always static, only the attributes of the features will change. So I copy my original shapefile (only the basic files with endings .shp, .shx, .prj) to my output folder and rename it to the name I want.
Then I create my CSV-File ... | I have created a rather large CSV file (63000 rows and around 40 columns) and I want to join it with an ESRI Shapefile.
I have used ArcPy, but the whole process takes 30 (!) minutes. If I make the join with the original (small) CSV file, join it with the Shapefile and then make my calculations with ArcPy and continuously ad... | 0 | 1 | 377 |
0 | 39,883,453 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-10-05T19:56:00.000 | 2 | 1 | 0 | How would you store a pyramidal image representation in Python? | 39,882,632 | 1.2 | python,numpy,image-processing | You could just use a list of numpy arrays. Assuming a scale factor of two, for the i,jth pixel at scale n:
The indices of its "parent" pixel at scale n-1 will be (i//2, j//2)
Its "child" pixels at scale n+1 can be indexed by (slice(2*i, 2*(i+1)), slice(2*j, 2*(j+1))) | Suppose I have N images which are a multiresolution representation of a single image (the Nth image being the coarsest one). If my finest scale is a 16x16 image, the next scale is a 8x8 image and so on.
How should I store such data to fastly be able to access, at a given scale and for a given pixel, its unique parent ... | 0 | 1 | 46 |
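A tiny sketch of the list-of-arrays layout with that index arithmetic (here level 0 is the coarsest scale):

```python
import numpy as np

# pyramid[k] is the 2**k x 2**k level; the last entry is the finest scale
pyramid = [np.random.rand(2 ** k, 2 ** k) for k in range(5)]

i, j, n = 6, 3, 3                                   # a pixel at scale n
parent = pyramid[n - 1][i // 2, j // 2]             # unique parent at n - 1
children = pyramid[n + 1][2 * i:2 * (i + 1),        # 2x2 child block at n + 1
                          2 * j:2 * (j + 1)]
print(parent, children.shape)                       # scalar, (2, 2)
```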
0 | 40,723,826 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-07T02:27:00.000 | 1 | 1 | 0 | Is it possible to use label spreading scikit algorithm on edgelist? | 39,908,430 | 1.2 | python,scikit-learn | To use Label Spreading you should follow these steps:
1. create a vector of labels (y), where all the unlabeled instances are set to -1.
2. fit the model using your feature data (X) and y.
3. create predict_entropies vector using stats.distributions.entropy(yourmodelname.label_distributions_.T)
4. create an uncertainty... | I have a network edgelist and I want to use the Label Spreading/Label Propagation algorithm from scikit-learn. I have a set of nodes that are labeled and want to spread the labels on the unlabeled portion of the network. I can generate the adjacency matrix or confusion matrix if needed.
Can someone point me in the rig... | 0 | 1 | 620 |
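A hedged sketch of steps 1-3 with scikit-learn, assuming a node-feature matrix X and -1 marking the unlabeled nodes:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.semi_supervised import LabelSpreading

X = np.random.rand(20, 5)          # e.g. rows of an adjacency/feature matrix
y = -np.ones(20, dtype=int)        # step 1: every unlabeled node gets -1
y[:4] = [0, 1, 0, 1]               # the few labeled nodes

model = LabelSpreading(kernel="knn", n_neighbors=5)
model.fit(X, y)                    # step 2

# step 3: per-node uncertainty from the spread label distributions
uncertainty = entropy(model.label_distributions_.T)
print(model.transduction_[:8], uncertainty[:8])
```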
0 | 39,926,150 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-07T20:29:00.000 | 0 | 3 | 0 | NLP Code Mixed : Code Switching | 39,925,325 | 0 | python-3.x,machine-learning,nlp,stanford-nlp,opennlp | I think easy solution is remove navbar-inverse class and place this css.
.navbar {
background-color: blue;
} | I am engaged in a competition where we have to build a system using given data set. I am trying to learn the proceedings in linguistics research.
The main goal of this task is to identify the sentence level sentiment polarity of the code-mixed dataset of Indian languages pairs. Each of the sentences is annotated with l... | 0 | 1 | 165 |
0 | 50,913,862 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2016-10-08T09:46:00.000 | 4 | 4 | 0 | Cannot import keras after installation | 39,930,952 | 0.197375 | python,ubuntu,tensorflow,anaconda,keras | I had pip referring by default to pip3, which made me download the libraries for Python 3. By contrast, I launched the shell as python (which opened Python 2), where the library obviously wasn't installed.
Once I matched the names (pip3 -> python3, pip -> python 2), everything worked. | I'm trying to set up the Keras deep learning library for Python 3.5 on Ubuntu 16.04 LTS and use TensorFlow as a backend. I have Python 2.7 and Python 3.5 installed. I have installed Anaconda and, with its help, TensorFlow, numpy, scipy and pyyaml. Afterwards I installed keras with the command
sudo python setup.py install
Alth... | 0 | 1 | 129,794 |
0 | 55,900,347 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2016-10-08T09:46:00.000 | 0 | 4 | 0 | Cannot import keras after installation | 39,930,952 | 0 | python,ubuntu,tensorflow,anaconda,keras | First, check the list of installed Python packages with:
pip list | grep -i keras
If keras is shown there, then install it with:
pip install keras --upgrade --log ./pip-keras.log
Now check the log; if any pending dependencies are present, they will affect your installation, so remove the dependencies and then again in... | I'm trying to set up the Keras deep learning library for Python 3.5 on Ubuntu 16.04 LTS and use TensorFlow as a backend. I have Python 2.7 and Python 3.5 installed. I have installed Anaconda and, with its help, TensorFlow, numpy, scipy and pyyaml. Afterwards I installed keras with the command
sudo python setup.py install
Alth... | 0 | 1 | 129,794 |
0 | 41,820,410 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2016-10-08T20:19:00.000 | 0 | 1 | 0 | Feature detection for embedded platform OpenCV | 39,936,967 | 0 | python,algorithm,opencv,raspberry-pi,computer-vision | I did a similar project in my Master's degree.
I used a Raspberry Pi 3 because it is faster than the Pi 2 and has more resources for image processing.
I used the KNN algorithm in OpenCV for number detection. It was fast and had good efficiency.
The main advantage of the KNN algorithm is that it is very lightweight. | I'm trying to do object recognition in an embedded environment, and for this I'm using a Raspberry Pi (specifically version 2).
I'm using OpenCV Library and as of now I'm using feature detection algorithms contained in OpenCV.
So far I've tried different approaches:
I tried different keypoint extraction and description ... | 0 | 1 | 406 |
0 | 39,973,094 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-10-10T11:53:00.000 | 0 | 2 | 0 | store multiple images efficiently in Python data structure | 39,957,657 | 0 | python,image,algorithm,opencv,image-processing | I used several lists and list.append() for storing the images.
For finding the white regions in the black & white images I used cv2.findNonZero(). | I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixe... | 0 | 1 | 783 |
0 | 39,973,408 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-10-10T11:53:00.000 | 1 | 2 | 0 | store multiple images efficiently in Python data structure | 39,957,657 | 1.2 | python,image,algorithm,opencv,image-processing | PIL and Pillow are only marginally useful for this type of work.
The basic algorithm used for "finding and counting" objects like you are trying to do goes something like this: 1. Conversion to grayscale 2. Thresholding (either automatically via Otsu method, or similar, or by manually setting the threshold values) 3. ... | I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixe... | 0 | 1 | 783 |
0 | 39,971,118 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-10-11T04:29:00.000 | 0 | 3 | 0 | Python: Plot a sparse matrix | 39,970,515 | 0 | python,matplotlib | It seems to me a heatmap is the best candidate for this type of plot. imshow() will return a colored matrix with a color scale legend.
I don't get your stretched ellipses problem; shouldn't it be a colored square for each data point?
You can try a log color scale if it is sparse. Also plot the 12 classes separately to analyze... | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Atte... | 0 | 1 | 2,877 |
0 | 40,127,976 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-10-11T04:29:00.000 | 0 | 3 | 0 | Python: Plot a sparse matrix | 39,970,515 | 0 | python,matplotlib | plt.matshow also turned out to be a feasible solution. I could also plot a heatmap with colorbars and all that. | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Atte... | 0 | 1 | 2,877 |
0 | 39,972,738 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-10-11T07:25:00.000 | 0 | 1 | 0 | How to update pandas when python is installed as part of ArcGIS10.4, or another solution | 39,972,261 | 0 | python,pandas,upgrade,arcmap | I reinstalled python again directly from python.org and then installed pandas which seems to work.
I guess this might stop the ArcMap version of python working properly but since I'm not using python with ArcMap at the moment it's not a big problem. | I recently installed ArcGIS10.4 and now when I run python 2.7 programs using Idle (for purposes unrelated to ArcGIS) it uses the version of python attached to ArcGIS.
One of the programs I wrote needs an updated version of the pandas module. When I try to update the pandas module in this version of Python (by opening co... | 0 | 1 | 212 |
0 | 44,608,692 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-12T15:14:00.000 | 1 | 2 | 0 | fit_transform with the training data and transform with the testing | 40,002,232 | 0.099668 | python,scikit-learn | If you use fit only on the training and transform on the test data, you won't get the correct result.
When using fit_transform on the training data, the machine learns the parameters of the feature space and also transforms (scales) the training data. On the other hand, you should only use tr... | As the title says, I am using fit_transform with the CountVectorizer on the training data, and then I am using transform only with the testing data. Will this give me the same result as using fit only on the training data and transform only on the testing data? | 0 | 1 | 2,610 |
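A small sketch of the pattern under discussion, with hypothetical toy documents; the test matrix reuses the training vocabulary, so both share the same columns:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["the cat sat", "the dog ran"]   # hypothetical data
test_docs = ["the cat ran"]

vec = CountVectorizer()
X_train = vec.fit_transform(train_docs)  # learns the vocabulary AND transforms
X_test = vec.transform(test_docs)        # reuses the training vocabulary only
print(X_train.shape, X_test.shape)       # same number of columns for both
```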
0 | 40,025,421 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-12T17:01:00.000 | 0 | 1 | 0 | Putting a Regression Line When Using Pandas scatter_matrix | 40,004,334 | 0 | python-2.7,pandas | I think this is a misleading question/thought process.
If you think of data in strictly 2 dimensions, then a regression line on a scatter plot makes sense. But let's say you have 5 dimensions of data you are plotting in your scatter matrix. In this case the regression for each pair of dimensions is not an accurate repre... | I'm using scatter_matrix for correlation visualization and calculating correlation values using corr(). Is it possible to have the scatter_matrix visualization draw the regression line in the scatter plots? | 0 | 1 | 674 |
0 | 40,007,828 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-10-12T20:27:00.000 | 0 | 1 | 0 | Digital Image Processing via Python | 40,007,759 | 1.2 | python,image-processing | You can search for these libraries: dlib, PIL (Pillow), OpenCV and scikit-image. These are image processing libraries for Python.
Hope it helps. | I am starting a new project with a friend of mine; we want to design a system that would alert the driver if the car is diverting from its original path and it's dangerous.
So, in a nutshell, we have to design a real-time algorithm that would take pictures from the camera and process them. All of this will be done in Pyth... | 0 | 1 | 124 |
0 | 57,003,384 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2016-10-13T03:36:00.000 | 1 | 7 | 0 | NLTK vs Stanford NLP | 40,011,896 | 0.028564 | python,nlp,nltk,stanford-nlp | NLTK can be used for the learning phase and for performing natural language processing from scratch at a basic level.
Stanford NLP gives you high-level flexibility to get tasks done very quickly and easily.
If you want speed and production use, you can go for Stanford NLP. | I have recently started to use the NLTK toolkit for creating a few solutions using Python.
I hear a lot of community activity regarding using Stanford NLP.
Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw ... | 0 | 1 | 15,755 |
0 | 50,968,392 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2016-10-13T03:36:00.000 | 1 | 7 | 0 | NLTK vs Stanford NLP | 40,011,896 | 0.028564 | python,nlp,nltk,stanford-nlp | I would add to this answer that if you are looking to parse date/time events StanfordCoreNLP contains SuTime which is the best datetime parser available. The support for arbitrary texts like 'Next Monday afternoon' is not present in any other package. | I have recently started to use NLTK toolkit for creating few solutions using Python.
I hear a lot of community activity regarding using Stanford NLP.
Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw ... | 0 | 1 | 15,755 |
0 | 40,021,035 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2016-10-13T12:17:00.000 | 2 | 1 | 0 | Add a library in Spark in Bluemix & connect MongoDB , Spark together | 40,020,767 | 1.2 | python,apache-spark,ibm-cloud,ibm-cloud-plugin | In a Python notebook:
!pip install <package>
and then
import <package> | 1) I have Spark on Bluemix platform, how do I add a library there ?
I can see the preloaded libraries but can't add a library that I want.
Any command line argument that will install a library?
pip install --package is not working there
2) I have Spark and Mongo DB running, but I am not able to connect both of them.
... | 0 | 1 | 106 |
0 | 40,046,686 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2016-10-14T03:12:00.000 | 2 | 1 | 0 | Using compression library to estimate information complexity of an english sentence? | 40,034,334 | 1.2 | python,scala,compression,information-theory | All this is going to do is tell you whether the words in the sentence, and maybe phrases in the sentence, are in the dictionary you supplied. I don't see how that's complexity. More like grade level. And there are better tools for that. Anyway, I'll answer your question.
Yes, you can preset the zlib compressor with a dictio... | I'm trying to write an algorithm that can work out the 'unexpectedness' or 'information complexity' of a sentence. More specifically I'm trying to sort a set of sentences so the least complex come first.
My thought was that I could use a compression library, like zlib?, 'pre train' it on a large corpus of text in the ... | 0 | 1 | 101 |
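A sketch of the preset-dictionary idea from the answer, using zlib's zdict support (Python 3.3+). corpus.txt is a hypothetical file of representative English; zlib dictionaries only look at the last 32 KB:

```python
import zlib

# Hypothetical corpus file; only the final 32 KB of a zdict is used.
corpus = open("corpus.txt", "rb").read()[-32768:]

def compressed_size(sentence):
    # A compressor preset with a representative dictionary spends fewer bits
    # on "expected" text, so a larger output hints at a more surprising sentence.
    c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                         zlib.DEF_MEM_LEVEL, zlib.Z_DEFAULT_STRATEGY, corpus)
    return len(c.compress(sentence.encode("utf-8")) + c.flush())

sentences = ["The cat sat on the mat.", "Quantum chromodynamics baffles felines."]
print(sorted(sentences, key=compressed_size))  # least complex first
```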
0 | 40,049,831 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2016-10-14T17:36:00.000 | 3 | 1 | 0 | Importing PMML models into Python (Scikit-learn) | 40,048,987 | 1.2 | python,r,scikit-learn,pmml | You can't connect different specialized representations (such as R and Scikit-Learn native data structures) over a generalized representation (such as PMML). You may have better luck trying to translate R data structures to Scikit-Learn data structures directly.
XGBoost is really an exception to the above rule, because... | There seem to be a few options for exporting PMML models out of scikit-learn, such as sklearn2pmml, but a lot less information going in the other direction. My case is an XGboost model previously built in R, and saved to PMML using r2pmml, that I would like to use in Python. Scikit normally uses pickle to save/load m... | 0 | 1 | 2,878 |
0 | 40,065,428 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-15T23:49:00.000 | 0 | 1 | 0 | Tensorflow: Converting a tensor [B, T, S] to a list of B tensors shaped [T, S] | 40,065,396 | 1.2 | python,tensorflow,deep-learning,lstm | It sounds like you want tf.unpack() | Tensorflow: Converting a tensor [B, T, S] to a list of B tensors shaped [T, S], where B, T, S are whole positive numbers ...
How can I convert this? I can't do eval because no session is running at the time I want to do this. | 0 | 1 | 76 |
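A sketch against the 2016-era TensorFlow API the answer refers to (tf.unpack was later renamed tf.unstack); the concrete shape is a hypothetical stand-in for B, T, S. Note this is pure graph construction, so no session is needed:

```python
import tensorflow as tf

# x has static shape [B, T, S]; unpacking along axis 0 yields a Python list
# of B tensors, each shaped [T, S], without evaluating anything.
x = tf.placeholder(tf.float32, shape=[4, 10, 8])
tensors = tf.unpack(x)                        # tf.unstack in later TF releases
print(len(tensors), tensors[0].get_shape())   # 4, (10, 8)
```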
0 | 40,087,542 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-17T11:36:00.000 | 2 | 1 | 0 | Which scipy function should I use to interpolate from a rectangular grid to regularly spaced rectangular grid in 2D? | 40,085,367 | 1.2 | python,numpy,scipy,grid | RectBivariateSpline
Imagine your grid as a canyon, where the high values are peaks and the low values are valleys. The bivariate spline is going to try to fit a thin sheet over that canyon to interpolate. This will still work on irregularly spaced input, as long as the x and y arrays you supply are also irregularly spac... | I'm pretty new to Python, and I'm looking for the most efficient pythonic way to interpolate from one grid to another.
The original grid is a structured grid (the terms regular or rectangular grid are also used), and the spacing is not uniform.
The new grid is a regularly spaced grid. Both grids are 2D. For now it's ok... | 0 | 1 | 401 |
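A minimal sketch of going from a non-uniformly spaced rectangular grid to a regularly spaced one with RectBivariateSpline; the grid coordinates and values are hypothetical:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Structured but non-uniformly spaced source grid (hypothetical values).
x = np.array([0.0, 0.4, 0.9, 1.7, 2.6, 3.0])
y = np.array([0.0, 0.5, 1.2, 2.0, 3.1, 4.0])
z = np.random.rand(x.size, y.size)       # z[i, j] = value at (x[i], y[j])

spline = RectBivariateSpline(x, y, z)

# Evaluate on the new, regularly spaced grid.
xi = np.linspace(0.0, 3.0, 50)
yi = np.linspace(0.0, 4.0, 80)
zi = spline(xi, yi)                       # shape (50, 80)
```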
0 | 40,091,765 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-10-17T16:01:00.000 | 2 | 2 | 0 | Check failed: error == cudaSuccess (2 vs. 0) out of memory | 40,090,892 | 0.197375 | python,gpu,caffe | This happens when you run out of memory in the GPU. Are you sure you stopped the first script properly? Check the running processes on your system (ps -A in ubuntu) and see if the python script is still running. Kill it if it is. You should also check the memory being used in your GPU (nvidia-smi). | I am trying to run a neural network with pycaffe on gpu.
This works when I call the script for the first time.
When I run the same script for the second time, CUDA throws the error in the title.
Batch size is 1, image size at this moment is 243x322, the gpu has 8gb RAM.
I guess I am missing a command that resets the me... | 0 | 1 | 2,965 |
0 | 40,099,554 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-18T03:26:00.000 | 1 | 1 | 0 | how to install previous version of xarray | 40,099,001 | 0.197375 | python-xarray | Use "conda install xarray==0.8.0" if you're using anaconda, or "pip install xarray==0.8.0" otherwise. | I am reading someone else's pickle file that may have data types based on xarray. Now I cannot read in the pickle file; it fails with the error "No module named core.dataset".
I guess this may be an xarray issue. My collaborator asked me to change my version to his version and try again.
My version is 0.8.2, and his version 0.8.0. So how ... | 0 | 1 | 999 |
0 | 40,109,662 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-18T13:16:00.000 | 0 | 1 | 0 | How do I link python 3.4.3 to opencv? | 40,109,379 | 0 | python,python-2.7,python-3.x,opencv,numpy | You can try:
Download the OpenCV module
Copy the ./opencv/build/python/3.4/x64/cv2.pyd file to the Python installation directory path: ./Python34/Lib/site-packages.
I hope this helps | So I have OpenCV on my computer all sorted out, I can use it in C/C++ and the Python 2.7.* that came with my OS.
My computer runs on Linux Deepin and whilst I usually use OpenCV on C++, I need to use Python 3.4.3 for some OpenCV tasks.
Problem is, I've installed python 3.4.3 now but whenever I try to run an OpenCV prog... | 0 | 1 | 1,213 |
0 | 40,116,907 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-18T13:56:00.000 | 0 | 2 | 0 | Python, scipy, curve_fit, bounds: How can I constrain a param by two intervals? | 40,110,260 | 1.2 | python,scipy,curve-fitting | No, least_squares (hence curve_fit) only supports box constraints. | I'm using scipy.optimize.curve_fit for fitting a sigmoidal curve to data. I need to bound one of the parameters to [-3, 0.5] or [0.5, 3.0].
I tried fitting the curve without bounds; then, if the parameter came out lower than zero, I fit once more with bounds [-3, 0.5], and otherwise with [0.5, 3.0].
Is it possible to bound function cu... | 0 | 1 | 569 |
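Since only a single box is supported, the two-interval constraint can be emulated by fitting once per interval and keeping whichever fit leaves the smaller residual. A sketch with a hypothetical two-parameter sigmoid and synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b):                    # hypothetical two-parameter model
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

rng = np.random.RandomState(0)
xdata = np.linspace(-5, 5, 100)
ydata = sigmoid(xdata, 2.0, 0.3) + 0.02 * rng.randn(xdata.size)

# Fit once per allowed interval for `a`, keep the smaller-residual fit.
fits = []
for lo, hi in [(-3.0, 0.5), (0.5, 3.0)]:
    try:
        popt, _ = curve_fit(sigmoid, xdata, ydata, p0=[(lo + hi) / 2, 0.0],
                            bounds=([lo, -np.inf], [hi, np.inf]))
        rss = np.sum((sigmoid(xdata, *popt) - ydata) ** 2)
        fits.append((rss, tuple(popt)))
    except RuntimeError:
        pass                             # a fit may fail to converge
best_rss, best_popt = min(fits)
```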
0 | 51,822,131 | 0 | 0 | 0 | 0 | 1 | false | 20 | 2016-10-18T19:11:00.000 | 17 | 4 | 0 | xgboost sklearn wrapper value 0for Parameter num_class should be greater equal to 1 | 40,116,215 | 1 | python,scikit-learn,xgboost | In my case, the same error was thrown during a regular fit call. The root of the issue was that the objective was manually set to multi:softmax, but there were only 2 classes. Changing it to binary:logistic solved the problem. | I am trying to use the XGBClassifier wrapper provided by sklearn for a multiclass problem. My classes are [0, 1, 2], the objective that I use is multi:softmax. When I am trying to fit the classifier I get
xgboost.core.XGBoostError: value 0for Parameter num_class should be greater equal to 1
If I try to set the num_c... | 0 | 1 | 14,175 |
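A minimal reproduction of the fix from the answer, with hypothetical random data standing in for the real training set:

```python
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)   # two classes only

# With only two classes, a manually set multi:softmax objective triggers the
# num_class error shown above; the binary objective avoids it.
clf = XGBClassifier(objective='binary:logistic')
clf.fit(X, y)
print(clf.predict(X[:5]))
```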
0 | 45,497,131 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2016-10-19T10:28:00.000 | 1 | 4 | 1 | How to install openCV 2.4.13 for Python 2.7 on Ubuntu 16.04? | 40,128,751 | 0.049958 | python-2.7,opencv,ubuntu | sudo apt-get install build-essential cmake git pkg-config
sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python... | I have tried a lot of online posts to install opencv but they are not working for Ubuntu 16.04. May anyone please give me the steps to install openCV 2.4.13 on it? | 0 | 1 | 29,525 |
0 | 41,708,567 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-10-19T23:10:00.000 | 5 | 1 | 0 | Add text to scatter point using python gmplot | 40,142,959 | 0.761594 | python | Just looking for the answer to this myself. gmplot was updated in June 2016 to include hovertext functionality for the marker method, but unfortunately this isn't available for the scatter method. The enthusiastic user will find that the scatter method simply calls the marker method over and over, and could modify t...
I am unable to find any documentation or example that shows how to do this.
Any pointers are appreciated. | 0 | 1 | 8,147 |
0 | 40,163,636 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-10-20T00:36:00.000 | 4 | 1 | 0 | fuzzy match between 2 columns (Python) | 40,143,675 | 0.664037 | python,python-3.x,pandas,fuzzywuzzy | Thanks everyone for your inputs. I have solved my problem! The link that "agg3l" provided was helpful. The "TypeError" I saw was because either the "url_entrance" or "company_name" has some floating types in certain rows. I converted both columns to string using the following scripts, re-ran the fuzz.ratio script and g... | I have a pandas dataframe called "df_combo" which contains columns "worker_id", "url_entrance", "company_name". I am trying to produce an output column that would tell me if the URLs in "url_entrance" column contains any word in "company_name" column. Even a close match like fuzzywuzzy would work.
For example, if the ... | 0 | 1 | 3,169 |
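A sketch of the astype(str)-then-score fix described in the reply, with hypothetical rows; the NaN/float cells are what raise the TypeError before the cast:

```python
import pandas as pd
from fuzzywuzzy import fuzz

df = pd.DataFrame({                      # hypothetical df_combo columns
    "url_entrance": ["http://acme-corp.com", None],
    "company_name": ["Acme Corp", "Beta LLC"],
})

# Cast both columns to str first, then score each row pair.
df["url_entrance"] = df["url_entrance"].astype(str)
df["company_name"] = df["company_name"].astype(str)
df["match_score"] = df.apply(
    lambda r: fuzz.partial_ratio(r["company_name"].lower(),
                                 r["url_entrance"].lower()),
    axis=1)
print(df)
```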
0 | 40,695,844 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-10-21T00:03:00.000 | 2 | 1 | 0 | spyder can't load tensorflow | 40,166,386 | 0.379949 | python-2.7,ubuntu,tensorflow,spyder | Enter the environment
source activate tensorflow
install spyder
conda install spyder
Run spyder
spyder
| I built and installed tensorflow on my Ubuntu 16.04 with GPU. On the command line I can easily activate the tensorflow environment, but when I try to run the code through Spyder it shows this: "No module named tensorflow.examples.tutorials.mnist"
how can I run my python code from spyder with tensorflow? | 0 | 1 | 771 |
0 | 40,188,461 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-10-22T04:16:00.000 | 3 | 1 | 0 | Difference between numpy.zeros(n) and numpy.zeros(n,1) | 40,188,251 | 0.53705 | python,numpy | The first argument indicates the shape of the array. A scalar argument implies a "flat" array (vector), whereas a tuple argument is interpreted as the dimensions of a tensor. So if the argument is the tuple (m,n), numpy.zeros will return a matrix with m rows and n columns. In your case, it is returning a matrix with n ... | What is the difference between
numpy.zeros(n)
and
numpy.zeros(n,1)?
The output for the first statement is
[0 0 ..... n times]
whereas the second one is
([0]
[0]
.... n rows) | 0 | 1 | 1,550 |
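A quick check of the two shapes; note that the literal call np.zeros(n, 1) would actually raise a TypeError (the second positional argument is the dtype), so the tuple form is presumably what the question means:

```python
import numpy as np

a = np.zeros(3)          # flat vector, shape (3,)  -> [0. 0. 0.]
b = np.zeros((3, 1))     # column matrix, shape (3, 1)
print(a.shape, b.shape)  # (3,) (3, 1)

# np.zeros(3, 1) -- without the tuple -- raises TypeError,
# because 1 is not interpretable as a dtype.
```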
0 | 65,651,852 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2016-10-22T14:38:00.000 | 0 | 8 | 0 | How to check if a CSV has a header using Python? | 40,193,388 | 0 | python,python-2.7,csv | I think the best way to check this is to simply read the first line from the file and then match your string, instead of using any library. | I have a CSV file and I want to check if the first row has only strings in it (i.e. a header). I'm trying to avoid using any extras like pandas etc. I'm thinking I'll use an if statement like "if row[0] is a string, print this is a CSV", but I don't really know how to do that :-S Any suggestions? | 0 | 1 | 24,553 |
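A sketch of the read-the-first-line check from the answer; the stdlib's csv.Sniffer().has_header is a built-in heuristic worth knowing for the same job:

```python
import csv

def has_string_header(path):
    with open(path) as f:
        first_row = next(csv.reader(f))

    def non_numeric(cell):
        try:
            float(cell)
            return False
        except ValueError:
            return True

    # If no cell parses as a number, treat the first row as a header.
    return all(non_numeric(cell) for cell in first_row)

# Stdlib alternative:
# with open(path) as f:
#     print(csv.Sniffer().has_header(f.read(2048)))
```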
0 | 40,236,052 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-24T18:22:00.000 | 0 | 1 | 0 | Python machine learning sklearn.linear_model vs custom code | 40,224,973 | 0 | python,machine-learning | I recommend you use the functions provided by sklearn (or another ML library; I like TensorFlow) as much as possible. That's because it's very difficult to match the performance of these libraries: they compute at a low level close to the operating system, whereas common users don't implement the computational actions ou... | I am new to machine learning and Python. I am trying to understand when to use the functions in sklearn.linear_model (LinearRegression and LogisticRegression) and when to implement my own code for the same. Any suggestions or references will be highly appreciated.
regards
Souvik | 0 | 1 | 72 |
0 | 48,947,334 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2016-10-25T03:25:00.000 | 6 | 1 | 0 | CountVectorizer and Out-Of-Vocabulary (OOV) tokens? | 40,230,865 | 1 | python,scikit-learn | There is no inbuilt way in scikit-learn to do this, you need to write some additional code to be able to do this. However you could use the vocabulary_ attribute of CountVectorizer to achieve this.
Cache the current vocabulary
Call fit_transform
Compute the diff with the new vocabulary and the cached vocabulary | Right now I'm using CountVectorizer to extract features. However, I need to count words not seen during fitting.
During transforming, the default behavior of CountVectorizer is to ignore words that were not observed during fitting. But I need to keep a count of how many times this happens!
How can I do this?
Thanks! | 0 | 1 | 1,618 |
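One way to implement the vocabulary-diff idea from the answer: reuse the fitted vectorizer's own analyzer so the OOV count matches its tokenization exactly (the training corpus here is a hypothetical toy):

```python
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer()
vec.fit(["the cat sat on the mat"])           # hypothetical training corpus

analyze = vec.build_analyzer()                # same tokenization as transform
vocab = set(vec.vocabulary_)

def count_oov(doc):
    # Count tokens the fitted vectorizer would silently drop.
    return sum(1 for tok in analyze(doc) if tok not in vocab)

print(count_oov("the dog sat on the rug"))    # 2 ("dog", "rug")
```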
0 | 40,231,181 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-25T03:56:00.000 | 0 | 5 | 0 | add a dummy dimension for a multi-dimensional array | 40,231,128 | 0 | python,numpy,scipy | Sure, no problem. Use 'reshape'. Assuming A1 is a numpy array
A1 = A1.reshape([1,255,255,3])
This will reshape your matrix.
If A1 isn't a numpy array then use
A1 = numpy.array(A1).reshape([1,255,255,3]) | There is an nd array A with shape [100,255,255,3], which corresponds to 100 255*255 images. I would like to iterate over this multi-dimensional array, getting one image per iteration. This is what I do: A1 = A[i,:,:,:]. The resulting A1 has shape [255,255,3]. However, I would like to enforce it to have the shape [1,255,255,3... | 0 | 1 | 1,297 |
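Besides reshape, slicing and np.newaxis give the same [1,255,255,3] result without hard-coding the dimensions — a small sketch:

```python
import numpy as np

A = np.zeros((100, 255, 255, 3))
i = 0

A1 = A[i, :, :, :]            # shape (255, 255, 3) -- the axis is dropped
A1 = A[i:i+1]                 # shape (1, 255, 255, 3) -- the axis is kept
A1 = A[i][np.newaxis, ...]    # equivalent, via an explicit new leading axis
print(A1.shape)               # (1, 255, 255, 3)
```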
0 | 40,236,282 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-25T08:58:00.000 | 2 | 1 | 0 | Python: Blur specific region in an image | 40,235,643 | 1.2 | python,opencv,image-processing,scikit-image | What was the result in the first case? It sounds like a good approach. What did you expect and what you get?
You can also try something like that:
Either create a copy of a whole image or just slightly bigger ROI (to include samples that will be used for blurring)
Apply blur on the created image
Apply masks on two ima... | I'm trying to blur around specific regions in a 2D image (the data is an array of size m x n).
The points are specified by an m x n mask. cv2 and scikit are available.
I tried:
Simply applying blur filters to the masked image. But that isn't working.
Extracting the points to blur by setting the rest to np.nan, blurring and reasse... | 0 | 1 | 1,655 |
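A sketch of the blur-a-copy-then-composite approach from the answer, assuming a boolean m x n mask with the data here as random stand-ins:

```python
import numpy as np
import cv2

# image: (m, n) data array; mask: boolean (m, n), True where blur is wanted.
image = np.random.rand(64, 64).astype(np.float32)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True

# Blur a full copy (so border samples come from real neighbours),
# then composite: blurred values inside the mask, originals outside.
blurred = cv2.GaussianBlur(image, (11, 11), 0)
result = np.where(mask, blurred, image)
```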
0 | 40,268,763 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-26T14:17:00.000 | 1 | 1 | 0 | Generate random sparse matrix filled with values greater than 1 python | 40,264,741 | 0.197375 | python-3.x,scipy,sparse-matrix | sparse.rand calls sparse.random. random adds an optional data_rvs argument.
I haven't used data_rvs. It can probably emulate the dense randint, but its definition is more complicated.
Another option is to generate the random floats and then convert them with a bit of math to the desired integers. You have to be a little careful s... | The method available in python scipy sps.rand() generates sparse matrix of random values in the range (0,1). How can we generate discrete random values greater than 1 like 2, 3,etc. ? Any method in scipy, numpy ? | 0 | 1 | 258 |
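A sketch of using data_rvs to emulate the dense randint, as suggested above; note that the nonzero values come back in a float array (2.0 / 3.0 here) unless you cast afterwards:

```python
import numpy as np
import scipy.sparse as sps

rng = np.random.RandomState(0)

# data_rvs supplies the nonzero values, here integers drawn from {2, 3}.
M = sps.random(1000, 1000, density=0.01,
               random_state=rng,
               data_rvs=lambda n: rng.randint(2, 4, n))
print(M.nnz, M.data[:5])
```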
0 | 44,125,243 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-26T15:30:00.000 | 0 | 1 | 0 | Java heap size error when running Spark from Python | 40,266,372 | 1.2 | java,python,apache-spark,pyspark | This is because you're setting the maximum available heap size (128M) to be smaller than the initial heap size, which causes the error. Check the _JAVA_OPTIONS parameter that you're passing or setting in the configuration file. Also, note that the changes in the spark.driver.memory won't have any effect because the Worker actually lies w... | I'm trying to run a Python script with the pyspark library.
I create a SparkConf() object using the following commands:
conf = SparkConf().setAppName('test').setMaster(<spark-URL>)
When I run the script, that line runs into an error:
Picked up _JAVA_OPTIONS: -Xmx128m
Picked up _JAVA_OPTIONS: -Xmx128m
Error occurred d... | 0 | 1 | 1,509 |
0 | 47,528,448 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-27T10:38:00.000 | 0 | 1 | 0 | what's the equivalent of the python zip function in dataflow? | 40,282,480 | 1.2 | python,google-cloud-dataflow | As jkff pointed out in the above comment, the code is indeed correct and the procedure is the recommended way of programming a tensorflow algorithm. The DoFn applied to each element was the bottleneck. | I'm using the python apache_beam version of dataflow. I have about 300 files with an array of 4 million entries each. The whole thing is about 5Gb, stored on a gs bucket.
I can easily produce a PCollection of the arrays {x_1, ... x_n} by reading each file, but the operation I now need to perform is like the python zip ... | 0 | 1 | 283 |