GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 45,639,855 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-12-03T16:34:00.000 | 0 | 2 | 0 | how to define an issue with neural networks | 40,949,988 | 0 | python,machine-learning,computer-vision,neural-network | It is good that you have created your own program. I would suggest you to keep experimenting with basic problems, such as MNIST by adding more hidden layers, plotting variation of loss with training iterations using different learning rates, etc.
In general, the learning rate should not be kept high initially when the ... | I have built a system where the neural network can change size (number of and size of hidden layers, etc). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problem (binary inputs, etc). Works really well.
When I then make the struc... | 0 | 1 | 24 |
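The answer's advice about not keeping the learning rate high initially shows up even on the simplest objective. A self-contained sketch (illustrative, not the asker's network): gradient descent on f(w) = w² converges when the per-step factor stays below 1 in magnitude and diverges otherwise.

```python
def descend(lr, steps=50, w=1.0):
    # plain gradient descent on f(w) = w**2 (gradient is 2*w)
    for _ in range(steps):
        w -= lr * 2 * w
    return w

small = descend(0.4)   # |1 - 2*lr| < 1 -> converges toward the minimum
large = descend(1.5)   # |1 - 2*lr| > 1 -> each step overshoots further
```

The same overshooting is what makes a deeper or wider network suddenly fail to learn at a learning rate that worked for a smaller one.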
0 | 40,950,086 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-12-03T16:34:00.000 | 0 | 2 | 0 | how to define an issue with neural networks | 40,949,988 | 0 | python,machine-learning,computer-vision,neural-network | A quick advice may be to solve an intermediate task (e.g. to use your own 5x5 ASCII "pictures" of digits), to have more neurons in the hidden layer, to reduce the data set for quicker simulation, to compare your implementation to other custom implementations in your programming language. | I have built a system where the neural network can change size (number of and size of hidden layers, etc). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problem (binary inputs, etc). Works really well.
When I then make the struc... | 0 | 1 | 24 |
0 | 40,960,903 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-04T13:34:00.000 | 0 | 1 | 0 | In TensorFlow, does the embedding matrix remain unchanged? | 40,959,177 | 1.2 | python,tensorflow,deep-learning | The embedding matrix is similar to any other variable. If you set the trainable flag to True it will be trained (see tf.Variable). | In TensorFlow, we may see code like this:
embeddings=tf.Variable(tf.random_uniform([vocabulary_size,embedding_size],-1.0,1.0))
embed=tf.nn.embedding_lookup(embeddings,train_inputs)
When TensorFlow is training, does the embedding matrix remain unchanged?
In a blog it is said that the embedding matrix can update. I wonder how does it ... | 0 | 1 | 94 |
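The answer's point, sketched framework-agnostically in numpy (variable names mirror the question's snippet, but the gradient is a made-up placeholder): an optimizer step updates only the embedding rows that were looked up in the batch.

```python
import numpy as np

rng = np.random.default_rng(0)
vocabulary_size, embedding_size = 10, 4
embeddings = rng.uniform(-1.0, 1.0, (vocabulary_size, embedding_size))

train_inputs = np.array([2, 7])        # ids looked up in this batch
embed = embeddings[train_inputs]       # what embedding_lookup returns

before = embeddings.copy()
# pretend the loss gradient w.r.t. the looked-up rows is all ones
grad = np.ones_like(embed)
embeddings[train_inputs] -= 0.1 * grad # only the looked-up rows change
```

Setting trainable=False on the variable is what would freeze the matrix instead.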
0 | 40,962,551 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-04T14:28:00.000 | 0 | 2 | 0 | Python Pandas - Only showing rows in DF for the MAX values of a column | 40,959,626 | 0 | python,pandas | If your index is unique and you are OK with returning one row (in the case of multiple rows having the same max value) then you can use the idxmax method.
df.loc[df['money'].idxmax()]
And if you want to add some flare you can highlight the max value in each column with:
df.loc[df['money'].idxmax()].style.highlight_max(... | searched for this, but cannot find an answer.
Say I have a dataframe (apologies for formatting):
a Dave $400
a Dave $400
a Dave $400
b Fred $220
c James $150
c James $150
d Harry $50
And I want to filter the dataframe so it only shows the rows where the third column is the MAXIMUM value, cou... | 0 | 1 | 327 |
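A runnable sketch of the answer's idxmax suggestion, with a tie included to show the difference between returning one row and all maximal rows (data loosely mirrors the question):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Dave", "Dave", "Fred", "James"],
                   "money": [400, 400, 220, 150]})

top = df.loc[df["money"].idxmax()]               # first of the tied maxima
all_top = df[df["money"] == df["money"].max()]   # every row tied for the max
```

The boolean-mask form is the one to use when duplicates of the maximum should all be kept.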
0 | 41,774,573 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-04T15:42:00.000 | -1 | 2 | 0 | Python Decision Tree Regressor Pruning | 40,960,357 | -0.099668 | python,tree,regression,cart | You can't; use matlab. Struggling with this at the moment. Using a python based home-cooked decision tree is also an option. However, there is no guarantee it will work properly (lots of places you can screw up). And you need to implement with numpy if you want any kind of reasonable run-time (also struggling with t... | I'm using scikit-learn to construct regression trees, using tree.DecisionTreeRegression().
I'm giving 56 data samples and it constructs me a Tree with 56 nodes (pruning=0).
How can I implement some pruning to the tree? Any help is appreciated! | 0 | 1 | 1,207 |
0 | 40,977,897 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-05T14:46:00.000 | 0 | 1 | 0 | Access key value from JSON array of objects Python | 40,976,901 | 0 | python,json,csv | turns out it was the json.dumps(), should've read more into what it does! Thanks. | I've been researching the past few days on how to achieve this, to no avail.
I have a JSON file with a large array of json objects like so:
[{
"tweet": "@SHendersonFreep @realDonaldTrump watch your portfolios go to the Caribbean banks and on to Switzerland. Speculation without regulation",
"user": "DGregsonRN"
... | 0 | 1 | 1,675 |
0 | 40,995,578 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-06T12:20:00.000 | 0 | 1 | 0 | How can I update npz file in python? | 40,995,291 | 0 | python,numpy,hdf5 |
locals().update(npzfile)
a # and/or b
In the Ipython session, locals() is a large dictionary with variables the you've defined, the input history lines, and various outputs. Update adds the dictionary values of npzfile to that larger one.
By the way, you can also load and save MATLAB .mat files. Use scipy.io... | I have a large data-set that I want save them in a npz file. But because the size of file is big for memory I cant save them in npz file.
Now i want insert data in iterations into npz file.
How can I do this?
Are HDF5 is better for this? | 0 | 1 | 528 |
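On the numpy side: .npz archives are written in one shot by np.savez, so "inserting in iterations" means accumulating arrays (or writing one .npy per chunk) and rewriting; HDF5 is indeed the better fit for true incremental appends. A small sketch with illustrative names:

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "data.npz")

# np.savez writes the whole archive at once; collect the per-iteration
# arrays first, then save them under distinct keys
chunks = {f"arr_{i}": np.arange(i, i + 3) for i in range(3)}
np.savez(path, **chunks)

npz = np.load(path)                     # arrays are read lazily on access
loaded = {k: npz[k] for k in npz.files}
npz.close()
```

For data that will not fit in memory at once, h5py datasets (which support resizing and partial writes) avoid the rewrite entirely.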
0 | 41,197,119 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-06T14:33:00.000 | 1 | 1 | 0 | Not enough memory to read .mat result file into Python | 40,997,813 | 0.197375 | python,dymola | You can reduce the size of the simulation result file by using variable selections in Dymola. That will restrict the output to states, parameters, and the variables that match your selection criteria.
The new Dymola 2017 FD01 has a user interface for defining variable selections. | I have been having some issues trying to open a simulation result output file (.mat) in Python. Upon loading the file I am faced with the following error:
ValueError: Not enough bytes to read matrix 'description'; is this a
badly-formed file? Consider listing matrices with whosmat and
loading named matrices with v... | 0 | 1 | 374 |
0 | 41,003,996 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-06T20:05:00.000 | 1 | 1 | 0 | Scikit-learn - Cross validates score and predictions at one go? | 41,003,897 | 0.197375 | python,scikit-learn,cross-validation | If you run cross_val_predict then you can check the metric on the result. It is not a waste of compute time because cross_val_predict doesn't compute scores itself.
This won't give you per-fold scores though, only the aggregated score (which is not necessarily bad). I think you can workaround that by creating KFold / ... | I'm not sure whether I'm missing something really easy but I have a hard time trying to google something out.
I can see there are cross_val_score and cross_val_predict functions in scikit-learn. However, I can't find a way to get both score and predictions at one go. Seems quite obvious as calling the functions above o... | 0 | 1 | 80 |
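The KFold workaround the answer mentions can be sketched without scikit-learn; the "model" below is a stand-in placeholder, but the structure — out-of-fold predictions plus per-fold scores collected in one pass — is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(int)

def kfold_predict_and_score(X, y, k=5):
    """One pass that yields both out-of-fold predictions and per-fold scores."""
    idx = np.arange(len(y))
    preds = np.empty_like(y)
    fold_scores = []
    for fold in np.array_split(idx, k):
        # stand-in "model": predict from the sign of the first feature;
        # a real model would be fit on np.setdiff1d(idx, fold) here
        pred = (X[fold, 0] > 0).astype(int)
        preds[fold] = pred
        fold_scores.append((pred == y[fold]).mean())
    return preds, fold_scores

preds, fold_scores = kfold_predict_and_score(X, y)
```

With a real estimator in the loop you get cross_val_predict's output and cross_val_score's per-fold numbers from the same fits.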
0 | 41,021,060 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-12-07T06:09:00.000 | 2 | 2 | 1 | For distributing calculation task, which is better celery or spark | 41,010,560 | 0.197375 | python,apache-spark,celery,distributed,jobs | Adding to the above answer, there are other areas also to identify.
Integration with the existing big data stack if you have.
Data pipeline for ingestion
You mentioned "backend for web application". I assume it's for read operations. The response times for any batch application might not be a good fit for any web applic... | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think Spark is better from a scalability perspective. But is it OK to use Spark as the backend of a web-appl... | 1 | 1 | 3,004 |
0 | 41,012,633 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-12-07T06:09:00.000 | 1 | 2 | 1 | For distributing calculation task, which is better celery or spark | 41,010,560 | 1.2 | python,apache-spark,celery,distributed,jobs | Celery is really a good technology for distributed streaming, and it supports Python, which is itself strong in computation and easy to write. Celery's streaming applications support many features as well, with little CPU overhead.
Spark supports various programming languages: Java, Scala, Py... | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think Spark is better from a scalability perspective. But is it OK to use Spark as the backend of a web-appl... | 1 | 1 | 3,004 |
0 | 41,116,628 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-12-08T17:35:00.000 | 2 | 2 | 1 | Add more Python libraries | 41,045,491 | 0.197375 | python,azure-data-lake,u-sql | Assuming the libs work with the deployed Python runtime, try to upload the libraries into a location in ADLS and then use DEPLOY RESOURCE "path to lib"; in your script. I haven't tried it, but it should work. | Is it or will it be possible to add more Python libraries than pandas, numpy and numexpr to Azure Data Lake Analytics? Specifically, we need to use xarray, matplotlib, Basemap, pyresample and SciPy when processing NetCDF files using U-SQL. | 0 | 1 | 360 |
0 | 41,071,087 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-12-09T11:32:00.000 | 2 | 1 | 0 | To what extent is it precise to evaluate Sympy enormous fractions (Rational)? | 41,059,520 | 0.379949 | python,python-3.x,sympy | The size of SymPy Rationals is limited only by your available memory. If you want an approximate but memory bounded number, use a Float. You can convert a Rational into a Float with evalf. | I am using Sympy Rational in an algorithm and I am getting enormous factions. Numerator and Denominator grow up to 10000 digits.
I would like to stop the algorithm as soon as the fractions become unevaluable. So the question is, what is the maximum magnitude I can allow for sympy.Rational? | 0 | 1 | 59 |
0 | 45,069,376 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-12-10T11:24:00.000 | 0 | 2 | 0 | opencv-python imshow giving errors in mac | 41,074,980 | 0 | python,macos,opencv | The fix that worked best for me was using matplotlib instead.
Otherwise you may have to remove all previous versions of OpenCV and reinstall from source!
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cm... | 0 | 1 | 2,492 |
0 | 41,079,325 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-12-10T18:26:00.000 | 0 | 2 | 0 | Title: SVC-Scikit Learn issue | 41,078,835 | 0 | python-2.7,error-handling,scikit-learn,svm | Since it cannot used sparse input on dense data, either convert your dense data to sparse data (recommended) or your sparse data to dense data. Use SciPy to create a sparse matrix from a dense one. | I am getting this error in Scikit learn. Previously I worked with K validation, and never encountered error. My data is sparse and training and testing set is divided in the ratio 90:10
ValueError: cannot use sparse input in 'SVC' trained on dense data
Is there any straightforward reason and solution for this? | 0 | 1 | 960 |
0 | 41,079,820 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-12-10T18:26:00.000 | 3 | 2 | 0 | Title: SVC-Scikit Learn issue | 41,078,835 | 1.2 | python-2.7,error-handling,scikit-learn,svm | This basically means that your testing set is not in the same format as your training set.
A code snippet would have been great, but make sure you are using the same array format for both sets. | I am getting this error in Scikit learn. Previously I worked with K validation, and never encountered error. My data is sparse and training and testing set is divided in the ratio 90:10
ValueError: cannot use sparse input in 'SVC' trained on dense data
Is there any straightforward reason and solution for this? | 0 | 1 | 960 |
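A small sketch of the suggested conversion with scipy (illustrative data): a CSR matrix round-trips cleanly to and from a dense array, so train and test can be put in the same format before fitting.

```python
import numpy as np
from scipy import sparse

X_dense = np.array([[0.0, 1.0, 0.0],
                    [2.0, 0.0, 0.0]])

X_sparse = sparse.csr_matrix(X_dense)  # dense -> sparse
X_back = X_sparse.toarray()            # sparse -> dense
```

Converting the dense side to sparse (rather than densifying the sparse side) is the memory-friendly direction for large data.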
0 | 41,082,749 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-11T03:25:00.000 | 1 | 1 | 0 | Throw an exception while doing dataframe sum in Pandas | 41,082,675 | 1.2 | python,pandas,dataframe | You can fix the resulting DataFrame using df.replace({'FieldName': {'ErrorError': ''}}) | I have a dataframe in which one of the rows is filled with the string "Error"
I am trying to add rows of 2 different dataframes. However, since I have the string in one of the rows, it is concatenating the 2 strings.
So, I am having the dataframe filled with a row "ErrorError". I would prefer leaving this row empty than conc... | 0 | 1 | 49 |
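The accepted one-liner, made runnable with made-up data (column name and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"FieldName": ["ErrorError", "ab", "cd"]})
cleaned = df.replace({"FieldName": {"ErrorError": ""}})
```

The nested-dict form restricts the replacement to the named column, so other columns that legitimately contain the string are untouched.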
0 | 41,088,981 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-12-11T17:18:00.000 | 1 | 3 | 0 | Converting objects from CSV into datetime | 41,088,840 | 1.2 | python,csv,datetime,pandas,dataframe | I found that the problem was to do with missing values within the column. Using coerce=True so df["Date"] = pd.to_datetime(df["Date"], coerce=True) solves the problem. | I've got an imported csv file which has multiple columns with dates in the format "5 Jan 2001 10:20". (Note not zero-padded day)
If I do df.dtypes then it shows the columns as being objects rather than strings or datetimes. I need to be able to subtract 2 column values to work out the difference so I'm trying to get... | 0 | 1 | 3,761 |
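A sketch of the fix; note that coerce=True was the pre-0.17 pandas spelling, and current pandas uses errors='coerce' instead. Values mirror the question's non-zero-padded format:

```python
import pandas as pd

s = pd.Series(["5 Jan 2001 10:20", "No data"])
# coerce=True is the old spelling; current pandas uses errors="coerce",
# which turns unparseable entries into NaT instead of raising
dates = pd.to_datetime(s, format="%d %b %Y %H:%M", errors="coerce")

delta = dates[0] - pd.Timestamp("2001-01-01")  # subtraction now works
```

Once both columns are datetime64, subtracting them yields a timedelta column directly.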
0 | 49,290,726 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-13T20:37:00.000 | 0 | 1 | 0 | No module named 'tools' while importing scikits.talkbox | 41,130,126 | 0 | python-3.x,machine-learning,signal-processing | I had the same problem,just run pip install tools | I just installed scikits.talkbox, and tried using it in my program. But I get the following error
'ImportError: No module named 'tools'
How do I solve this problem? | 0 | 1 | 1,498 |
0 | 41,137,195 | 0 | 0 | 0 | 0 | 1 | true | 14 | 2016-12-14T07:15:00.000 | 11 | 2 | 0 | What's the difference between dummy variable and one-hot encoding? | 41,136,853 | 1.2 | python,machine-learning | In fact, there is no difference in the effect of the two approaches (rather wordings) on your regression.
In either case, you have to make sure that one of your dummies is left out (i.e. serves as base assumption) to avoid perfect multicollinearity among the set.
For instance, if you want to take the weekday of an obs... | I'm making features for a machine learning model. I'm confused with dummy variable and one-hot encoding.For a instance,a category variable 'week' range 1-7.When using one-hot encoding, encode week = 1 as 1,000,000,week = 2 is 0,100,000... .But I can also make a dummy variable 'week_v',and in this way, I must set a
hid... | 0 | 1 | 14,355 |
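The base-level point can be sketched with pandas get_dummies (illustrative data): drop_first=True leaves one level out to serve as the base and avoid perfect multicollinearity.

```python
import pandas as pd

df = pd.DataFrame({"week": [1, 2, 7, 2]})

onehot = pd.get_dummies(df["week"], prefix="week")                    # one column per level
dummies = pd.get_dummies(df["week"], prefix="week", drop_first=True)  # base level dropped
```

For regression-style models the dropped column is the reference category; tree-based models generally don't need the drop.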
0 | 41,163,228 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-12-15T11:33:00.000 | -2 | 4 | 0 | Package installation of Keras in Anaconda? | 41,163,150 | -0.099668 | python,anaconda,python-3.5,packaging,keras | Navigate to Anaconda installation folder/Scripts and install with pip command | Python 3.5, I am trying to find command to install a Keras Deep Learning package for Anaconda. The command conda install -c keras does not work, can anyone answer Why it doesn't work? | 0 | 1 | 6,149 |
0 | 41,199,608 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2016-12-16T13:35:00.000 | 0 | 1 | 0 | fastest Connect 4 win checking method | 41,185,646 | 1.2 | python,performance,python-3.x,artificial-intelligence | From your question, it's a bit unclear how your approaches would be implemented. But from the alpha-beta pruning, it seems as if you want to look at a lot of different game states, and in the recursion determine a "score" for each one.
One very important observation is that recursion ends once a 4-in-a-row has been fo... | I am trying to make an ai following the alpha-beta pruning method for tic-tac-toe. I need to make checking a win as fast as possible, as the ai will goes through many different possible game states. Right now I have thought of 2 approaches, neither which is very efficient.
Create a large tuple for scoring every possib... | 0 | 1 | 812 |
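For the win check itself, a plain-Python sketch (illustrative: it scans the whole board; checking only the lines through the last move, as the answer hints, is the faster variant for use inside alpha-beta search):

```python
def has_four_in_a_row(board, player):
    """board: list of rows; returns True if `player` has 4 in a line."""
    rows, cols = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < rows and 0 <= cc < cols
                       and board[rr][cc] == player for rr, cc in cells):
                    return True
    return False

board = [[0] * 7 for _ in range(6)]   # standard 6x7 Connect 4 grid
for c in range(4):
    board[5][c] = 1                   # horizontal four on the bottom row
```

Because a position can only be won by the move just played, restricting the scan to the 4 directions through that one cell cuts the work to a constant per node.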
0 | 41,200,368 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-17T12:47:00.000 | 2 | 3 | 0 | Pandas replace method and object datatypes | 41,198,719 | 0.132549 | python,pandas,dataframe | If the rest of the data in your columns is numeric then you should use pd.to_numeric(df, errors='coerce') | I am using df= df.replace('No data', np.nan) on a csv file containing ‘No data’ instead of blank/null entries where there is no numeric data. Using the head() method I see that the replace method does replace the ‘No data’ entries with NaN. When I use df.info() it says that the datatypes for each of the series is an ob... | 0 | 1 | 1,782 |
0 | 42,058,253 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-12-17T14:08:00.000 | 0 | 1 | 0 | Install opencv in anaconda3 | 41,199,408 | 0 | python,opencv,anaconda | I might be missing something, but I believe you are just missing setting up the environment variables.
Set Environment Variables
Right-click on "My Computer" (or "This PC" on Windows 8.1) -> left-click Properties -> left-click "Advanced" tab -> left-click "Environment Variables..." button.
Add a new User Variable to point to... | Hello guys i ve just installed anaconda3 in windows 8.1 and opencv 2.4.13 and 3.1.0/ Ive copied from the file c:/..../opencv/build/python/2.7/x64/cv2.pyd and i pasted it to C:\Users.....\Anaconda3\Lib\site-packages. I ve pasted both for opencv 2.4.13 as cv2.pyd and for opencv 3.1.0 as cv2(3)pyd in order to change it wh... | 0 | 1 | 1,034 |
0 | 41,252,942 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-19T02:55:00.000 | 1 | 1 | 0 | Why does GridSearchCV give different optimums on repeated runs? | 41,215,169 | 0.197375 | python,machine-learning,scikit-learn,cross-validation,grid-search | Try setting the random seed if you want to get the same result each time. | I am performing parameter selection using GridSearchCv (sklearn package in python) where the model is an Elastic Net with a Logistic loss (i.e a logistic regression with L1- and L2- norm regularization penalties). I am using SGDClassifier to implement this model. There are two parameters I am interested in searching th... | 0 | 1 | 907 |
0 | 41,230,349 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-12-19T18:42:00.000 | 0 | 2 | 0 | can import tensorflow in python 3.4 but not in ipython notebook | 41,228,983 | 0 | python,ubuntu,tensorflow,pip,ipython | Each major version of python has its own site-packages directory. It seems that you have both python 3.4 and 3.5 and you have jupyter installed in 3.5 and tensorflow in 3.4. The easy solution is to install tensorflow in 3.5 as well. This should allow you to use it with the 3.5 notebook kernel. You could attempt to add ... | I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ... | 0 | 1 | 255 |
0 | 50,797,957 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-12-19T18:42:00.000 | 0 | 2 | 0 | can import tensorflow in python 3.4 but not in ipython notebook | 41,228,983 | 0 | python,ubuntu,tensorflow,pip,ipython | The best way to set up TensorFlow with Jupyter:
1. Install Anaconda.
2. Create an environment named "tensorflow".
3. Activate that environment with the following command in the command prompt:
activate tensor... | I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ... | 0 | 1 | 255 |
0 | 60,222,616 | 0 | 1 | 0 | 0 | 1 | false | 86 | 2016-12-20T01:33:00.000 | 2 | 3 | 0 | Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads | 41,233,635 | 0.132549 | python,parallel-processing,tensorflow,distributed-computing | Tensorflow 2.0 Compatible Answer: If we want to execute in Graph Mode of Tensorflow Version 2.0, the function in which we can configure inter_op_parallelism_threads and intra_op_parallelism_threads is
tf.compat.v1.ConfigProto. | Can somebody please explain the following TensorFlow terms
inter_op_parallelism_threads
intra_op_parallelism_threads
or, please, provide links to the right source of explanation.
I have conducted a few tests by changing the parameters, but the results have not been consistent to arrive at a conclusion. | 0 | 1 | 54,304 |
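A configuration sketch of the two settings (assumes TensorFlow is installed and uses the tf.compat.v1 spelling mentioned above, so it is illustrative rather than tested):

```python
# illustrative only -- requires TensorFlow (tf.compat.v1 APIs in TF 2)
import tensorflow as tf

config = tf.compat.v1.ConfigProto(
    inter_op_parallelism_threads=2,  # pool that runs *independent* ops concurrently
    intra_op_parallelism_threads=4,  # threads available *within* a single op (e.g. matmul)
)
session = tf.compat.v1.Session(config=config)
```

Roughly: intra controls parallelism inside one kernel, inter controls how many kernels can run side by side; 0 lets TensorFlow pick based on the machine.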
0 | 41,257,567 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-21T07:29:00.000 | 11 | 1 | 0 | OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1) in connectedComponents_sub1 | 41,257,336 | 1 | python,python-2.7,opencv,opencv3.1,connected-components | Let us analyze it:
Assertion failed (L.channels() == 1 && I.channels() == 1)
The images that you are passing to some function should be 1 channel (gray not color).
__extractPlantArea(plant_img)
That happened in your code exactly at the function called __extractPlantArea.
cv2.connectedComponentsWithStats
While you... | I got the following error in OpenCV (python) and have googled a lot but have not been able to resolve.
I would be grateful if anyone could provide me with some clue.
OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1)
in connectedComponents_sub1, file /home/snoopy/opencv-
3.1.0/modules/... | 0 | 1 | 9,922 |
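The assertion means both inputs must be single-channel; in OpenCV you would call cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) before connectedComponentsWithStats. A numpy-only sketch of roughly what that conversion does (standard luminance weights, BGR channel order):

```python
import numpy as np

bgr = np.zeros((4, 4, 3), dtype=np.uint8)  # a 3-channel BGR image
bgr[..., 2] = 200                          # put something in the red channel

# roughly what cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) computes:
weights = np.array([0.114, 0.587, 0.299])  # B, G, R luminance weights
gray = (bgr @ weights).astype(np.uint8)    # now a single-channel image
```

Binarizing the grayscale result (e.g. with a threshold) before the connected-components call avoids the same assertion on the label image side.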
0 | 41,295,384 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-21T23:47:00.000 | 0 | 1 | 0 | TensorFlow while_loop parallelization TensorArray | 41,273,756 | 0 | python,tensorflow | You should probably get the parallel execution of the first 5 iterations and the second 5 iterations. I can say for sure if you provide a code sample. | I don't exactly understand how the while_loop parallelization works. Suppose I have a TensorArray having 10 Tensors all of same shape. Now suppose the computations in the loop body for the first 5 Tensors are independent of the computations in the remaining 5 Tensors. Would TensorFlow run these two in parallel? Also if... | 0 | 1 | 558 |
0 | 41,279,184 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-22T04:59:00.000 | 0 | 1 | 0 | Why NLTK uses regular expressions for word tokenization, but training for sentence tokenization? | 41,276,039 | 1.2 | python,nlp,nltk | I'm not sure if you can say that sentence splitting is harder than (word) tokenisation. But tokenisation depends on sentence splitting, so errors in sentence splitting will propagate to tokenisation. Therefore you'd want to have reliable sentence splitting, so that you don't have to make up for it in tokenisation. And ... | I am using NLTK in python. I understood that it uses regular expressions in its word tokenization functions, such as TreebankWordTokenizer.tokenize(), but it uses trained models (pickle files) for sentence tokenization. I don't understand why they don't use training for word tokenization? Does it imply that sentence to... | 0 | 1 | 198 |
0 | 54,832,398 | 0 | 0 | 1 | 0 | 1 | false | 2 | 2016-12-22T14:30:00.000 | 1 | 2 | 0 | Can't find "gen_training_ops" in the tensorflow GitHub | 41,285,440 | 0.099668 | python,optimization,tensorflow,deep-learning,jupyter-notebook | If you find it, youll realize it just jumps to pyhon/framework, where the actual update is just an assign operation and then gets grouped | I'm working on a new optimizer, and I managed to work out most of the process. Only thing I'm stuck on currently is finding gen_training_ops.
Apparently this file is crucial, because in both implementations of Gradient Descent, and Adagrad optimizers they use functions that are imported out of a wrapper file for gen_tr... | 0 | 1 | 719 |
0 | 41,299,177 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-12-23T09:23:00.000 | 0 | 3 | 0 | OpenCV VideoCapture device index / device number | 41,298,588 | 0 | python,c++,windows,opencv,usb | If you can differentiate the cameras by their serial number or device and vendor id, you can loop through all video devices before opening with opencv and search for the camera device you want to open. | I have a python environment (on Windows 10) that uses OpenCV VideoCapture class to connect to multiple usb cameras.
As far as I know, there is no way to identify a specific camera in OpenCV other than the device parameter in the VideoCapture class constructor / open method.
The problem is that the device parameter chan... | 0 | 1 | 10,992 |
0 | 41,499,342 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-23T09:24:00.000 | 0 | 1 | 0 | Python for Data Analysis: Chp 2 Pg 38 "prop_cumsum" error | 41,298,599 | 0 | python,cumsum,prop | It seems that you invoked sort_index instead of sort_values. The by='prop' doesn't make sense in such a context (you sort the index by the index, not by columns in the data frame).
Also, in my early release copy of the 2nd edition, this appears near the top of page 43. But since this is early release, the page number... | I'm working on this book and keep running error when i'm run "Prop_cumsum"
> prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum()
/Users/anaconda/lib/python3.5/site-packages/ipykernel/main.py:1:
FutureWarning: by argument to sort_index is deprecated, pls use
.sort_values(by=...) if name == 'main... | 0 | 1 | 187 |
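The corrected call from the answer, made runnable on toy data:

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "prop": [0.2, 0.5, 0.3]})

# sort_values sorts by a column; sort_index only sorts by the index
top = df.sort_values(by="prop", ascending=False)
prop_cumsum = top["prop"].cumsum()
```

sort_index(by=...) was a deprecated alias that later versions of pandas removed outright, which is why the book's snippet warns.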
0 | 50,601,700 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-23T09:57:00.000 | 0 | 2 | 0 | TensorFlow from Google - Data Security | 41,299,126 | 0 | python,machine-learning,tensorflow,deep-learning,google-developer-tools | Doesn't TF actually also uses google's models from cloud? I'm pretty sure google uses cloud data to provide better models for TF.
I'd recommend you to stay away from it. Only by writing your models from scratch you will learn to do useful stuff with it long term. I can also recommend weka for java, it's open source an... | Does anyone have any idea whether Google collects data that one supplies to Tensorflow? I mean it is open source, but it falls under their licences. | 1 | 1 | 861 |
0 | 41,315,329 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-12-24T09:54:00.000 | 2 | 2 | 0 | Number of classes for inception network (Tensorflow) | 41,312,197 | 1.2 | python,computer-vision,tensorflow,conv-neural-network | Only two classes. "Not food" is your background class. If you were trying to detect food or dogs, you could have 3 classes: "food", "dog", "neither food nor dog". | I see that a background class is used as a bonus class. So this class is used in case of not classifying an image in the other classes? In my case, I have a binary problem and I want to understand if an image contains food or not. I need to use 2 classes + 1 background class = 3 classes or only 2 classes? | 0 | 1 | 397 |
0 | 41,329,052 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-12-25T03:15:00.000 | 1 | 1 | 0 | Voice activated password implementation in python | 41,318,435 | 1.2 | python-2.7,numpy,scipy,voice-recognition,voice | It is not possible to compare to speech samples on a sample level (or time domain). Each part of the spoken words might vary in length, so they won't match up, and the levels of each part will also vary, and so on. Another problem is that the phase of the individual components that the sound signal consists of can chan... | I want to record a word beforehand and when the same password is spoken into the python script, the program should run if the spoken password matches the previously recorded file. I do not want to use the speech recognition toolkits as the passwords might not be any proper word but could be complete gibberish. I starte... | 0 | 1 | 851 |
0 | 65,705,673 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2016-12-26T10:15:00.000 | 2 | 4 | 0 | LineSegmentDetector in Opencv 3 with Python | 41,329,665 | 0.099668 | python-2.7,opencv3.0 | old implementation is not available.
Now it is available as follows:
fld = cv2.ximgproc.createFastLineDetector()
lines = fld.detect(image) | Can a sample implementation code or a pointer be provided for implementing LSD with opencv 3.0 and python? HoughLines and HoughLinesP are not giving desired results in python and want to test LSD in python but am not getting anywhere.
I have tried to do the following:
LSD=cv2.createLineSegmentDetector(0)
lines_std=LSD... | 0 | 1 | 18,293 |
0 | 41,346,692 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-27T11:07:00.000 | 0 | 2 | 0 | Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in Python prompt? | 41,344,017 | 0 | python,ubuntu,anaconda,environment,spyder | (Posted on behalf of the OP).
It is solved: I reinstalled spyder and it works properly now. Thank you. | I have not used Linux/Unix for more a decade. Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in Python prompt?
SCENARIO:
[terminal] spyder
[spyder][IPython console] Type 'import tensorflow as tf' in the IPython console
CURRENT RESULT:
[spyder][IPython console] Message erro... | 0 | 1 | 1,193 |
0 | 66,783,030 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-12-27T13:25:00.000 | 0 | 1 | 0 | Feeding a seed value to solver in Python Logistic Regression | 41,346,055 | 0 | python,machine-learning,scikit-learn,logistic-regression | You can use the warm_start option (with solver not liblinear), and manually set coef_ and intercept_ prior to fitting.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. | I am using scikit-learn's linear_model.LogisticRegression to perform multinomial logistic regress. I would like to initialize the solver's seed value, i.e. I want to give the solver its initial guess as the coefficients' values.
Does anyone know how to do that? I have looked online and sifted through the code too, but... | 0 | 1 | 493 |
0 | 41,402,234 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-12-28T10:39:00.000 | 5 | 1 | 0 | How to do a FIFO push-operation for rows on Pandas dataframe in Python? | 41,360,265 | 1.2 | python,pandas | @JohnGalt posted an answer to this in the comments. Thanks a lot. I just wanted to put the answer here in case people are looking for similar information in the future.
df = df.shift(1); df.loc[0] = new_row (shift is not in-place, so assign the result back)
df.shift(n) will shift the rows n times, filling the first n rows with na and getting rid of last n rows. The n... | I need to maintain a Pandas dataframe with 500 rows, and as the next row becomes available I want to push that new row in and throw out the oldest row from the dataframe. e.g. Let's say I maintain row 0 as newest, and row 500 as oldest. When I get a new data, I would push data to row 0, and it will shift row 0 to row 1... | 0 | 1 | 2,326 |
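The comment-thread answer, made runnable on a 3-row frame (note shift returns a new frame, so it must be assigned back):

```python
import pandas as pd

df = pd.DataFrame({"price": [1.0, 2.0, 3.0]})  # row 0 newest, last row oldest

new_row = [9.9]
df = df.shift(1)     # every value moves down one slot; the oldest falls off
df.loc[0] = new_row  # push the new data into row 0
```

The frame keeps its fixed length throughout, which is exactly the FIFO window behaviour the question asks for.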
0 | 41,366,334 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-12-28T11:25:00.000 | 1 | 1 | 0 | Understanding Spark MLlib ALS.trainImplicit input format | 41,361,080 | 1.2 | python,pyspark,collaborative-filtering | It is not necessary (for implicit) and shouldn't be done (for explicit), so in this case pass only the data you actually have. | I'm trying to make a recommender system based on purchase history using trainImplicit. My input is in domain [1, +inf) (the sum of views and purchases).
So the element of my input RDD looks like this: [(user_id,item_id),rating] --> [(123,5564),6] - the user(id = 123) interacted with the item(id=5564) 6 times.
Should I... | 0 | 1 | 363 |
0 | 41,361,442 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-12-28T11:29:00.000 | 2 | 3 | 0 | python pandas: create multiple empty dataframes | 41,361,151 | 1.2 | python,pandas | the constructor pd.DataFrame must be called like a function, so followed by parentheses (). As written you are referring to pd.Dataframes, which doesn't exist (also note the final 's').
the for x-construction you're using creates a sequence, and in this form you can't assign it to the variable x. Instead, enclose everything to the right of the equ...
dfnames = ['df0', 'df1', 'df2']
x = pd.Dataframes for x in dfnames
The above-mentioned line returns a syntax error.
What would be the correct way to create the dataframes? | 0 | 1 | 3,447 |
0 | 41,361,512 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-12-28T11:29:00.000 | 0 | 3 | 0 | python pandas: create multiple empty dataframes | 41,361,151 | 0 | python,pandas | You can't have many data frames within a single variable name, here you are trying to save all empty data frames in x. Plus, you are using wrong attribute name, it is pd.DataFrame and not pd.Dataframes.
I did this and it worked-
dfnames = ['df0', 'df1', 'df2']
x = [pd.DataFrame() for x in dfnames] | I am trying to create multiple empty pandas dataframes in the following way:
dfnames = ['df0', 'df1', 'df2']
x = pd.Dataframes for x in dfnames
The above-mentioned line returns a syntax error.
What would be the correct way to create the dataframes? | 0 | 1 | 3,447 |
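Since a plain list loses the names, a dict comprehension keeps each empty frame addressable by its name (a small sketch, not from either answer):

```python
import pandas as pd

dfnames = ['df0', 'df1', 'df2']

# One independent empty DataFrame per name, looked up by that name
frames = {name: pd.DataFrame() for name in dfnames}

print(frames['df1'].empty)
frames['df1']['col'] = [1, 2, 3]     # each frame can be filled independently
print(len(frames['df1']), len(frames['df2']))
```

Filling one frame leaves the others untouched, which is the usual reason for wanting separate frames in the first place.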
0 | 41,371,381 | 0 | 0 | 0 | 0 | 1 | true | 9 | 2016-12-28T23:09:00.000 | 7 | 1 | 0 | Is it possible to merge multiple TensorFlow graphs into one? | 41,370,987 | 1.2 | python,machine-learning,tensorflow | I kicked this around with my local TF expert, and the brief answer is "no"; TF doesn't have a built-in facility for this. However, you could write custom endpoint layers (input and output) with synch operations from Python's process management, so that they'd maintain parallel processing of each input, and concatenate... | I have two models trained with Tensorflow Python, exported to binary files named export1.meta and export2.meta. Both files will generate only one output when feeding with input, say output1 and output2.
My question is if it is possible to merge two graphs into one big graph so that it will generate output1 and output2 ... | 0 | 1 | 2,821 |
0 | 41,383,430 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-29T13:25:00.000 | 0 | 1 | 0 | installing sklearn version 0.18.1 in Apache web server | 41,380,710 | 0 | python,apache,flask,scikit-learn | Use anaconda. It will save you so much time with these annoying dependency issues. | I am trying to install the latest version (0.18.1) of sklearn for use in a web app
I am hosting my webapp with apache web server and flask
I have tried sudo apt-get -y install python3-sklearn and this works but installs an older version of sklearn (0.17)
I have also tried pip3 and easy_install and these complete the in... | 1 | 1 | 362 |
0 | 41,382,785 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-29T15:34:00.000 | 2 | 3 | 0 | Numpy Array Change indices | 41,382,736 | 1.2 | python,numpy | There are two ways to achieve this: either np.reshape(x, ndims) or np.transpose(x, dims).
For pictures I propose np.transpose(x, dims) which can be applied using
X_train = np.transpose(X_train, (3,0,1,2)). | I have an numpy-array with 32 x 32 x 3 pictures with X_train.shape: (32, 32, 3, 73257). However, I would like to have the following array-shape: (73257, 32, 32, 3).
How can I accomplish this? | 0 | 1 | 3,420 |
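For example (using a smaller sample count than the 73257 in the question to keep the sketch light):

```python
import numpy as np

X_train = np.zeros((32, 32, 3, 10))            # height, width, channels, samples
X_train = np.transpose(X_train, (3, 0, 1, 2))  # move the sample axis to the front
print(X_train.shape)
```

The tuple (3, 0, 1, 2) names the old axis positions in their new order: old axis 3 (samples) becomes axis 0, and the image axes follow.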
0 | 41,387,325 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-29T20:04:00.000 | 0 | 1 | 0 | Split Dartboard into Polygons | 41,386,463 | 0 | python-2.7,opencv | Not sure what the issue is. Normally, x and y coordinates (of the dart) will be given relative to the top-left corner of the image so you will need to add the radius of the dartboard to each to get your coordinates relative to the centre of the dartboard.
There are 20 segments on a dartboard, so each segment will subt... | I'm looking for a way to split a dartboard image into polygons so that given an x,y coordinate I can find out which zone the dart fell within. I have found a working python script to detect if the coordinate falls within a polygon (stored as a list of x,y pairs), but I am lost as to how to generate the polygons as a li... | 0 | 1 | 109 |
0 | 41,408,698 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-30T23:39:00.000 | 1 | 1 | 0 | Detect object in an image using openCV python on a raspberry pi | 41,404,053 | 0.197375 | python,opencv,image-processing,raspberry-pi | Well, I can suggest a way of doing this. Basically, you can use some kind of object detection coupled with a machine learning algorithm. The way this might work is that you first train your camera to recognize the closed box. You can take, say, 10 pics of the closed box (just an example) and train your prog...
My original logic was to constantly take images and compare it to see the difference but that process is not good even the same images on ... | 0 | 1 | 619 |
0 | 41,404,825 | 0 | 0 | 0 | 0 | 2 | true | 12 | 2016-12-31T02:08:00.000 | 12 | 2 | 0 | statsmodels add_constant for OLS intercept, what is this actually doing? | 41,404,817 | 1.2 | python,linear-regression,statsmodels | It doesn't add a constant to your values, it adds a constant term to the linear equation it is fitting. In the single-predictor case, it's the difference between fitting a line y = mx to your data vs fitting y = mx + b. | Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purp... | 0 | 1 | 9,060 |
0 | 43,397,319 | 0 | 0 | 0 | 0 | 2 | false | 12 | 2016-12-31T02:08:00.000 | 7 | 2 | 0 | statsmodels add_constant for OLS intercept, what is this actually doing? | 41,404,817 | 1 | python,linear-regression,statsmodels | sm.add_constant in statsmodel is the same as sklearn's fit_intercept parameter in LinearRegression(). If you don't do sm.add_constant or when LinearRegression(fit_intercept=False), then both statsmodels and sklearn algorithms assume that b=0 in y = mx + b, and it'll fit the model using b=0 instead of calculating what b... | Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purp... | 0 | 1 | 9,060 |
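The effect of the constant column can be seen with plain numpy least squares (a concept sketch, not statsmodels itself; the data are synthetic):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(50)
y = 2.0 * x + 5.0 + rng.normal(scale=0.01, size=50)

# Without a ones column the model is y = m*x, i.e. the line is forced through 0
m_no_const, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

# add_constant effectively prepends this ones column, giving y = b + m*x
X = np.column_stack([np.ones_like(x), x])
(b, m), *_ = np.linalg.lstsq(X, y, rcond=None)
print(b, m)
```

With the ones column the fit recovers both the slope and the intercept; without it, the slope is badly distorted to compensate for the missing intercept.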
0 | 62,550,444 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-31T07:17:00.000 | 1 | 2 | 0 | pandas "cumulative" rolling_corr | 41,406,339 | 0.099668 | python,pandas,rolling-computation | Just use rolling correlation, with a very large window, and min_period = 1. | Is there any built-in pandas' method to find the cumulative correlation between two pandas series?
What it should do is effectively fixing the left side of the window in pandas.rolling_corr(data, window) so that the width of the window increases and eventually the window includes all data points. | 0 | 1 | 506 |
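The same left-anchored window described above is available directly as expanding(), which behaves like a rolling window of unbounded width — a small sketch:

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
s2 = pd.Series([2.0, 4.0, 6.0, 8.0, 10.0])

# expanding() fixes the window's left edge at the first observation,
# so each step correlates all data seen so far
cum_corr = s1.expanding(min_periods=2).corr(s2)
print(cum_corr)
```

With perfectly linearly related series, every entry from the second onward is 1.0; the first is NaN because a correlation needs at least two points.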
0 | 45,947,287 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-31T09:53:00.000 | 0 | 2 | 0 | Does TensorFlow execute entire computation graph with sess.run()? | 41,407,241 | 0 | python,machine-learning,tensorflow | Since the Python code of TF only sets up the graph, which is actually executed by the native implementation of all ops, your variables need to be initialized in this underlying environment. This happens by executing two ops - for local and global variables initialization:
session.run(tf.global_variables_initializer(), tf.local_va... | For example, when we compute a variable c as result = sess.run(c), does TF only compute the inputs required for computing c or updates all the variables of the complete computational graph?
Also, I don't seem to be able to do this:
c = c*a*b
as I am stuck with uninitialized variable error even after initializing c as t... | 0 | 1 | 1,412 |
0 | 41,417,067 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-01T15:44:00.000 | 0 | 1 | 0 | Cluster two features in Python | 41,416,652 | 0 | python,machine-learning,scikit-learn,cluster-analysis | You can use the scikit-learn AffinityPropagation or MeanShift estimators for clustering. Those algorithms will output a number of clusters and centers. Using the Y seems to be a different question, because you can't plot multi-dimensional points on a 3D plane unless you import some other libraries. | I have two sparse scipy matrices, title and paragraph, whose dimensions are (284,183) and (284,4195) respectively. Each row of both matrices holds the features from one instance of my dataset. I wish to cluster these without a predefined number of clusters and then plot them.
I also have an array, Y that relates to each row.... | 0 | 1 | 156 |
0 | 41,447,225 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-03T15:38:00.000 | 0 | 1 | 0 | Scipy.optimize.minimize using a design vector x that contains integers only | 41,447,048 | 0 | python,optimization,scipy,integer,minimum | Mathematically speaking, that is actually a much harder problem; the same algorithm will not be capable of solving it. This problem is NP-hard. Maybe check out pyglpk... And look into mixed-integer programming. | I'd like to minimize some objective function f(x1,x2,x3) in Python. It's quite a simple function, but the problem is that the design vector x=[x1,x2,x3] contains integers only.
So for example I'd like to get the result:
"f is minimum for x=[1, 3, 2]" and not:
"f is minimum for x=[1.12, 3.36, 2.24]" since this would n... | 0 | 1 | 267 |
0 | 59,636,728 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2017-01-03T15:56:00.000 | 2 | 6 | 0 | Easy way to add thousand separator to numbers in Python pandas DataFrame | 41,447,383 | 0.066568 | python,pandas,number-formatting,separator | If you want "." as the thousand separator and "," as the decimal separator, this will work:
Data = pd.read_excel(path)
Data[my_numbers] = Data[my_numbers].map('{:,.2f}'.format).str.replace(",", "~", regex=False).str.replace(".", ",", regex=False).str.replace("~", ".", regex=False)
If you want three decimals instead of two, change ".2f" to ".3f":
Data[my_numbers] =... | Assuming that I have a pandas dataframe and I want to add thousand separators to all the numbers (integer and float), what is an easy and quick way to do it? | 0 | 1 | 20,270 |
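For plain US-style "1,234,567.89" formatting (no separator swapping needed), Series.map with a format string is enough — a small sketch:

```python
import pandas as pd

df = pd.DataFrame({'sales': [1234567.891, 9876.5], 'units': [1200000, 35]})

# Format every column as strings with thousand separators and two decimals
formatted = df.apply(lambda col: col.map('{:,.2f}'.format))
print(formatted)
```

The result holds strings, so this is a display step; keep the numeric frame around for any further computation.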
0 | 41,730,024 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-04T09:02:00.000 | 0 | 2 | 0 | Python/Pybrain: How can I fix weights of a neural network during training? | 41,459,860 | 0 | python,python-2.7,neural-network,pybrain | I am struggling with a similar problem.
So far I am using the net._setParameters command to fix the weights after each training step, but there should be a better answer.
It might help in the meantime; I am waiting for a better answer as well :-)
I am building my network manually with full connections between all layers (input, two hidden layers, output) and then set some weights to zero using _SetParameters as I don't want connections between some specific nodes.
My probl... | 0 | 1 | 562 |
0 | 41,472,883 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-01-04T19:34:00.000 | 5 | 2 | 0 | How Can I Write Charts to Python DocX Document | 41,471,887 | 0.462117 | python,excel,matplotlib,python-docx | The general approach that's currently supported is to export the chart from matplotlib or wherever as an image, and then add the image to the Word document.
While Word allows "MS Office-native" charts to be created and embedded, that functionality is not in python-docx yet. | python beginner here with a simple question. Been using Python-Docx to generate some reports in word from Python data (generated from excel sheets). So far so good, but would like to add a couple of charts to the word document based on the data in question. I've looked at pandas and matplotlib and all seem like they... | 0 | 1 | 10,764 |
0 | 41,486,968 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2017-01-05T10:35:00.000 | 0 | 2 | 0 | Does NLTK return different results on each run? | 41,482,733 | 1.2 | python,python-2.7,nltk | Neither modifies its logic or computation in any iterative loop.
In NLTK, tokenization by default is rule based, using regular expressions to split tokens from a sentence
POS tagging by default uses a trained model for English, and will therefore give the same POS tag per token for the given trained model. If that mode... | Does Python's NLTK toolkit return different results for each iteration of:
1) tokenization
2) POS tagging?
I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this? | 0 | 1 | 110 |
0 | 41,487,255 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-01-05T10:35:00.000 | 0 | 2 | 0 | Does NLTK return different results on each run? | 41,482,733 | 0 | python,python-2.7,nltk | Both the tagger and the tokenizer are deterministic. While it's possible that iterating over a Python dictionary would return results in a different order in each execution of the program, this will not affect tokenization -- and hence the number of tokens (tagged or not) should not vary. Something else is wrong with y... | Does Python's NLTK toolkit return different results for each iteration of:
1) tokenization
2) POS tagging?
I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this? | 0 | 1 | 110 |
0 | 41,497,154 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-05T14:38:00.000 | 0 | 2 | 0 | What is the matter when I installing spark-python on CentOS | 41,487,708 | 0 | python,apache-spark,cloudera | You are installing spark Python 1.6 which depends on Python 2.6
I think the current stable version is 2.x and the package for that is pyspark. Try installing that. It might require Python 3.0 but that's easy enough to install.
You'll probably need to reinstall the other spark packages as well to make sure they are the r... | I have a problem installing spark-python on CentOS.
When I installed it using yum install spark-python, I get the following error message.
Error: Package: spark-python-1.6.0+cdh5.9.0+229-1.cdh5.9.0.p0.30.el5.noarch (cloudera-cdh5)
Requires: python26
You could try using --skip-broken to work around the pro... | 0 | 1 | 141 |
0 | 41,511,835 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-01-06T18:03:00.000 | 2 | 2 | 0 | How to change Bokeh favicon to another image | 41,511,597 | 0.197375 | python,bokeh | As of Bokeh 0.12.4 it is only possible to remove it, not change it, directly from the python library. This can be done by setting the property logo=None on a plot.toolbar. | Bokeh plots include a Bokeh favicon in the upper right of most plots. Is it possible to replace this icon with another icon? If so, how? | 0 | 1 | 716 |
0 | 61,167,164 | 0 | 0 | 0 | 0 | 1 | false | 27 | 2017-01-07T05:58:00.000 | 7 | 5 | 0 | What is the difference between resize and reshape when using arrays in NumPy? | 41,518,351 | 1 | python,numpy | reshape() is able to change the shape only (i.e. the meta info), not the number of elements.
If the array has five elements, we may use e.g. reshape(5, ), reshape(1, 5),
reshape(1, 5, 1), but not reshape(2, 3).
reshape() in general doesn't modify the data itself, only the meta info about it;
the .reshape() method (of nda...
0 | 41,527,710 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-07T21:19:00.000 | 3 | 2 | 0 | How to determine if an image is dark? | 41,526,677 | 1.2 | python,python-2.7,opencv,image-processing | To determine if an image is dark, simply calculate the average intensity and judge it.
The problem for recognition, though, is not that the image is dark, but that it has low contrast. A bright image with the same contrast would yield the same bad results.
Histogram equalization is a method that is used to improve... | I have some images i'm using for face recognition.
Some of the images are very dark.
I don't want to use Histogram equalisation on all the images, only on the dark ones.
How can i determine if an image is dark?
I'm using opencv in python.
I would like to understand the theory and the implementation.
Thanks | 0 | 1 | 2,224 |
0 | 41,543,451 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-08T15:45:00.000 | 1 | 4 | 0 | How do I safely preallocate an integer matrix as an index matrix in numpy | 41,534,489 | 0.049958 | python,numpy | If you really want to catch errors that way, initialize your indices with a sentinel value. NaN cannot be stored in an integer array (newer NumPy versions refuse the cast), so use the most negative integer instead:
IXS = np.full((r, c), np.iinfo(np.int64).min, dtype=np.int64)
Indexing with that value will always raise an IndexError. | I want to preallocate an integer matrix to store indices generated in iterations. In MATLAB this can be obtained by IXS = zeros(r,c) before for loops, where r and c are number of rows and columns. Thus all indices in subsequent for loops can be assigned into IXS to avoid dynamic assignment. If I accidentally select a 0... | 0 | 1 | 620 |
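A sketch of the sentinel idea (NaN cannot live in an integer array, so the most negative int64 — invalid as an index for any realistically sized array — plays the same role):

```python
import numpy as np

r, c = 3, 4
SENTINEL = np.iinfo(np.int64).min
IXS = np.full((r, c), SENTINEL, dtype=np.int64)   # preallocated index matrix

data = np.arange(10)
try:
    data[IXS[0, 0]]          # accidentally using an unfilled slot fails loudly
except IndexError:
    print('unfilled index caught')
```

Unlike 0 (a valid index) or -1 (which silently wraps to the last element), this sentinel can never index anything.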
0 | 41,564,872 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-01-09T21:12:00.000 | 0 | 1 | 0 | Is there an easy way to solve a system of linear equations over Z/2Z in Python? | 41,557,022 | 1.2 | python-3.x,math,matrix | I'd use Sage if this were a quick hack, and maybe consider using something optimized for GF(2) if the matrices are really big, to ensure that only one bit is used for each entry and that addition of several elements can be accomplished using a single XOR operation. One benefit of working over a finite field is that you... | I'm practicing programming and I would like to know what is the easiest way to solve linear system of equations over the field Z/2Z? I found a problem where I managed to reduce the problem to solve a system of about 2200 linear equations over Z/2Z but I'm not sure what is the easiest way to write a solver for the equat... | 0 | 1 | 612 |
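The bit-packing and XOR ideas mentioned above can be sketched in pure Python: pack each equation into an integer whose low n bits are the coefficients and whose bit n is the right-hand side, so Gaussian elimination over Z/2Z is just integer XOR (a from-scratch sketch, not Sage):

```python
def solve_gf2(rows, n):
    """Solve a linear system over Z/2Z.

    rows: list of ints; bit i (i < n) is the coefficient of x_i, bit n the RHS.
    Returns one solution as a list of 0/1, or None if inconsistent.
    Free variables are set to 0.
    """
    rows = list(rows)
    pivots = []
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue                     # no pivot in this column: x_col is free
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]       # eliminate a whole equation with one XOR
        pivots.append(col)
        r += 1
    if any(row == 1 << n for row in rows[r:]):
        return None                      # a row reading 0 = 1: inconsistent
    x = [0] * n
    for i, col in enumerate(pivots):
        x[col] = rows[i] >> n & 1
    return x

# x0 ^ x1 = 1 and x1 = 1
print(solve_gf2([0b111, 0b110], 2))
```

Since each equation lives in one arbitrary-precision int, 2200 variables still mean only 2200 XORs per elimination column.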
0 | 41,577,386 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-01-10T03:46:00.000 | 0 | 3 | 0 | Numpy not found after installation | 41,560,796 | 0 | python,numpy,python-3.5 | WinPython comes in two sizes, and the smallest, "Zero", doesn't include numpy | I just installed numpy on my PC (running Windows 10, running Python 3.5.2) using WinPython, but when I try to import it in IDLE with: import numpy I get the ImportError: Traceback (most recent call last):
File "C:\Users\MY_USERNAME\Desktop\DATA\dataScience1.py", line 1, in <module>
import numpy
ImportError: No ... | 0 | 1 | 3,858 |
0 | 44,357,542 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-10T18:15:00.000 | 0 | 1 | 0 | Large graph processing on Hadoop | 41,575,620 | 0 | python,hadoop,graph,random-walk,bigdata | My understanding is, you need to process large graphs which are stored on file systems. There are various distributed graph processing frameworks like Pregel, Pregel+, GraphX, GPS(Stanford), Mizan, PowerGraph etc.
It is worth taking a look at these frameworks. I will suggest coding in C, C++ using openMPI like which c... | I am working on a project that involves a RandomWalk on a large graph(too big to fit in memory). I coded it in Python using networkx but soon, the graph became too big to fit in memory, and so I realised that I needed to switch to a distributed system. So, I understand the following:
I will need to use a graph databas... | 0 | 1 | 480 |
0 | 41,626,482 | 0 | 1 | 0 | 0 | 1 | false | 29 | 2017-01-11T08:55:00.000 | 6 | 7 | 0 | OpenCV - Saving images to a particular folder of choice | 41,586,429 | 1 | python,opencv,image-processing | Thank you everyone. Your ways are perfect. I would like to share another way I used to fix the problem. I used the function os.chdir(path) to change the working directory to path, after which I saved the image normally. | I'm learning OpenCV and Python. I captured some images from my webcam and saved them. But they are being saved by default into the local folder. I want to save them to another folder from direct path. How do I fix it? | 0 | 1 | 137,529 |
0 | 41,591,155 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-11T11:23:00.000 | 2 | 1 | 0 | use Phantom 2 for real time image processing | 41,589,611 | 0.379949 | python-2.7,image-processing,raspberry-pi,phantom-types | I would be surprised if an off-the-shelf multicopter comprised enough processing power to do any reasonable image processing on-board. It wouldn't make sense for the manufacturer.
But I guess it has some video or streaming capabilities, or can be equipped with such. Then you can process the data on a remote computer,...
I was searching a lot but didn't find the answers for the following questions.
can I use phantom 2 for real time image proce... | 0 | 1 | 91 |
0 | 56,167,288 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2017-01-11T12:23:00.000 | 2 | 4 | 0 | Change data type of a specific column of a pandas dataframe | 41,590,884 | 0.099668 | python,pandas | To simply change one column, here is what you can do:
df.column_name.apply(int)
you can replace int with the desired datatype you want e.g (np.int64), str, category.
For multiple datatype changes, I would recommend the following:
df = pd.read_csv(data, dtype={'Col_A': str, 'Col_B': 'int64'}) | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions? | 0 | 1 | 77,099 |
0 | 41,591,077 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2017-01-11T12:23:00.000 | 26 | 4 | 0 | Change data type of a specific column of a pandas dataframe | 41,590,884 | 1 | python,pandas | df['colname'] = df['colname'].astype(int) works when changing from float values to int atleast. | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions? | 0 | 1 | 77,099 |
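Putting the two answers together for the sorting use case in the question (a minimal sketch):

```python
import pandas as pd

df = pd.DataFrame({'rank': ['10', '3', '7'], 'name': ['a', 'b', 'c']})

df['rank'] = df['rank'].astype(int)   # object -> int, column position unchanged
df = df.sort_values('rank')           # now sorts numerically, not lexically
print(df)
```

Without the conversion, the sort would be lexical and put '10' before '3'.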
0 | 41,604,196 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2017-01-11T19:34:00.000 | 0 | 1 | 0 | Translation from Camera Coordinates System to Robotic-Arm Coordinates System | 41,599,283 | 0 | python,opencv,coordinates,robotics,coordinate-transformation | Define your 2D coordinate on the board, create a mapping from the image coordinate (2D) to the 2D board, and also create a mapping from the board to robot coordinate (3D). Usually, robot controller has a function to define your own coordinate (the board). | I am new in robotics and I am working on a project where I need to pass the coordinates from the camera to the robot.
So the robot is just an arm; it stays in a fixed position. I do not even need the 'z' axis because the board or table where everything happens always has the same 'z' coordinate.
The ... | 0 | 1 | 1,200 |
0 | 42,087,865 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-12T04:53:00.000 | 0 | 2 | 0 | Updating the supported tags for pip | 41,605,355 | 0 | python,python-3.x,tensorflow,pip | I have the same error when I run this command. I found error that the installed version of python was x86 and TensorFlow is for x64 versions. I reinstalled the python with x64 version and it works now! I hope this works for you too! | I'm trying to install Tensorflow, and received the following error.
tensorflow-0.12.1-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
By reading through other questions, I think I've traced the issue to the cp35 tag not being supported by the version of pip I have installed. What's odd is that I b... | 0 | 1 | 2,301 |
0 | 41,620,561 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-12T17:28:00.000 | 1 | 1 | 0 | Can I apply Cross Validation in a Linear Regression model? | 41,619,431 | 1.2 | python,scikit-learn,linear-regression | Yes, using cross validation will give you a better estimate of your model performance.
Splitting randomly (cross validation) will, however, not work for time series and/or all distributions of data.
The "final model" will not be better; only your estimate of model performance will be. | I have a dataset with a total of 58 samples. The dataset has two columns "measured signals" and "people_in_area". Due to it, I am trying to train a Linear Regression model using Scikit-learn. For the moment, I split 75% of my dataset for training and 25% for testing. However, depending on the order in which the data ... | 0 | 1 | 1,054 |
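A sketch of scoring a LinearRegression with k-fold cross validation on synthetic data shaped like the question's 58 samples (the data here are illustrative, not the asker's):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(58, 1)                            # 58 samples, one feature
y = 3.0 * X.ravel() + rng.normal(scale=0.1, size=58)

# Shuffled folds remove the dependence on row order the question worries about
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring='r2')
print(scores.mean(), scores.std())
```

The mean of the fold scores is the performance estimate; the spread across folds hints at how sensitive that estimate is to the particular split.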
0 | 46,266,094 | 0 | 0 | 0 | 0 | 1 | false | 24 | 2017-01-13T04:08:00.000 | 7 | 4 | 0 | Download data from a jupyter server | 41,627,247 | 1 | python,download,ipython,jupyter-notebook,jupyter | The download option did not appear for me.
The solution was to open the file (which could not be correctly read as it was a binary file), and to download it from the notebook's notepad. | I'm using ipython notebook by connecting to a server
I don't know how to download a thing (a data frame, a .csv file, ... for example) programmatically to my local computer, because I can't specifically declare a path like C://user//... It will be downloaded to their machine, not mine | 0 | 1 | 45,980 |
0 | 41,672,695 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2017-01-13T17:01:00.000 | 1 | 1 | 0 | Globally load R libraries in Snakemake | 41,639,782 | 1.2 | python,r,snakemake | I'm afraid not. This has performance reasons on (a) local systems (circumventing the Python GIL) and (b) cluster systems (scheduling to separate nodes).
Even if there were a solution on local machines, it would need to take care that no sessions are shared between parallel jobs. If you really need to save that time, I s... | I'm currently building my NGS pipeline using Snakemake and have an issue regarding the loading of R libraries. Several of the scripts that my rules call require the loading of R libraries. As I found no way of globally loading them, they are loaded inside of the R scripts, which of course is redundant computing time ... | 0 | 1 | 239 |
0 | 41,763,164 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T01:09:00.000 | 1 | 1 | 0 | TensorFlow: how to determine if we want to break the training dataset into batches | 41,668,158 | 0.197375 | python,python-3.x,tensorflow,deep-learning,data-science | Generally, deep learning algorithms are run on GPUs, which have limited memory, and thus only a limited number of input data samples (commonly defined in the algorithm as the batch size) can be loaded at a time.
In general larger batch size reduces the overall computation time (as the internal matrix multiplications are done in a... | I am learning TensorFlow (as well as general deep learning). I am wondering when do we need to break the input training data into batches? And how do we determine the batch size? Is there a rule of thumb? Thanks! | 0 | 1 | 211 |
0 | 50,435,429 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T14:57:00.000 | 0 | 2 | 0 | Is there some way can accomplish stochastic gradient descent not from scratch | 41,679,182 | 0 | python,optimization,machine-learning,tensorflow,deep-learning | Both Theano & Tensorflow have built-in differentiation for you. So you only need to form the loss. | For a standard machine learning problem, e.g. image classification on MNIST, the loss function is fixed, and therefore the optimization process can be accomplished simply by calling functions and feeding the input into them. There is no need to derive gradients and code the descent procedure by hand.
But now I'm confused when... | 0 | 1 | 115 |
0 | 41,684,697 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T18:22:00.000 | 0 | 2 | 0 | network analysis: how to create nodes and edges files from csv | 41,682,737 | 0 | python,r,social-networking | Decide how you want the graph to represent the data. From what you've described, one approach would be to have nodes in your graph represent People, and edges represent grants. In that case, create a pairwise list of people who are on the same grant. Edges are bidirectional by default in iGraph, so you just need each ...
What would be my first step? I am guessing creating 2 separate files for Nodes and Edges and ... | 0 | 1 | 1,405 |
0 | 41,753,582 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-01-17T04:52:00.000 | 1 | 2 | 1 | When using qsub to submit jobs, how can I include my locally installed python packages? | 41,689,297 | 0.099668 | python,cluster-computing,pbs,qsub,supercomputers | If you are using PBS Professional, try exporting PYTHONPATH in your environment and then submitting the job using the "-V" option with qsub. This will make qsub take all of your environment variables and export them for the job.
Else, try setting it using option "-v" (notice small v) and then put your environment variable key/va... | I have an account on a supercomputing cluster where I've installed some packages using e.g. "pip install --user keras".
When using qsub to submit jobs to the queue, I try to make sure the system can see my local packages by setting "export PYTHONPATH=$PYTHONPATH:[$HOME]/.local/lib/python2.7/site-packages/keras" in the ... | 0 | 1 | 2,181 |
0 | 41,700,079 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-17T14:38:00.000 | 0 | 2 | 0 | Initialize an empty list of the shape/structure of a given list without numpy | 41,699,897 | 0 | python,list | You can use: B = [[None]*m for _ in range(n)]
It creates a list of n rows of m columns of None. (Avoid B = [[None]*m]*n: that creates n references to the same row, so mutating one row mutates them all.)
Is there a one liner to create an empty list B with same structure (n rows each with m components)?
Numpy lists can be created/reshaped. Does the python in-built list type support such an argument? | 0 | 1 | 1,097 |
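The aliasing difference between the two spellings matters as soon as the list is mutated — a quick demonstration:

```python
n, m = 3, 4

# The pitfall: [[None]*m]*n makes n references to ONE row
bad = [[None] * m] * n
bad[0][0] = 'x'
print(bad[1][0])       # the change shows up in every row

# Independent rows need a comprehension
good = [[None] * m for _ in range(n)]
good[0][0] = 'x'
print(good[1][0])      # other rows are untouched
```

The outer `*n` copies references, not lists; the comprehension builds a fresh inner list on every iteration.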
0 | 49,909,264 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-18T09:36:00.000 | 1 | 3 | 0 | SciKit One-class SVM classifier training time increases exponentially with size of training data | 41,715,835 | 0.066568 | python,scikit-learn,svm | Hope I'm not too late. OCSVM, like SVM, is resource-hungry, and the time/length relationship is quadratic (the numbers you show follow this). If you can, see if Isolation Forest or Local Outlier Factor work for you, but if you're considering applying it to a lengthier dataset I would suggest creating a manual AD model that...
When I train (fit) the classifier running on my computer, the time seems to increase exponentially with the number of items in the training set:
Numb... | 0 | 1 | 2,038 |
0 | 41,732,675 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-01-18T11:54:00.000 | 1 | 1 | 0 | Extract the features from Doc2Vec in Python | 41,718,767 | 0.197375 | python,doc2vec | Yes, if words is a list of word strings, preprocessed/tokenized the same way as training data was fed to the model during training. | For a small project I need to extract the features obtained from Doc2Vec object in gensim.
I have used vector = model.infer_vector(words) is it correct? | 0 | 1 | 439 |
0 | 41,740,148 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-18T17:40:00.000 | 0 | 1 | 0 | How tf-idf is relevant in calculating sentence vectors | 41,725,993 | 0 | python,machine-learning | Any aggregative operation on the word vectors can give you a sentence vector.
You should consider what you want your representation to mean and choose the operation accordingly.
Possible operations are summing the vectors, averaging them, concatenating them, etc. | I am interested in finding sentence vectors using word vectors. I read that by multiplying each word's vector by its tf-idf weight and taking the average we can get the whole sentence vector.
Now I want to know how these tf-idf weights help us get sentence vectors, i.e. how these tf-idf weights and the sentence vector are... | 0 | 1 | 878 |
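The tf-idf weighted average described above can be sketched with NumPy; the word vectors and weights below are made-up toy values:

```python
import numpy as np

# Toy word vectors (e.g. from word2vec) and tf-idf weights for one sentence.
word_vectors = {
    "cat": np.array([1.0, 0.0]),
    "sat": np.array([0.0, 1.0]),
}
tfidf = {"cat": 0.8, "sat": 0.2}

def sentence_vector(words, vectors, weights):
    """tf-idf weighted average of a sentence's word vectors."""
    vecs = np.array([vectors[w] for w in words])
    w = np.array([weights[w] for w in words])
    # Weight each word vector, sum, then normalize by the total weight.
    return (vecs * w[:, None]).sum(axis=0) / w.sum()

sv = sentence_vector(["cat", "sat"], word_vectors, tfidf)  # [0.8, 0.2]
```

Words with higher tf-idf (more distinctive for the document) pull the sentence vector toward their own vectors, which is the intuition behind using the weights at all.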
0 | 41,792,826 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-19T15:02:00.000 | 1 | 1 | 0 | Tensorflow Inception v3 retraining - attach text/labels to individual images | 41,745,022 | 0.197375 | python,machine-learning,tensorflow,neural-network,deep-learning | You have 3 main options - multiplying your classes, multi-label learning, or training several models.
The first option is the most straightforward - instead of having teachers who belong to John and teachers who belong to Jane, you can have teachers whose class is Teachers_John and teachers whose class is Teachers_Jane and l... | I am using the inception v3 model to retrain my own dataset. I have a few folders which represent the classes and contain the images for each class. What I would like to do is to 'attach' some text ids to these images so when they are retrained and used to run classification/similarity-detection those ids are retrieved too.... | 0 | 1 | 596 |
0 | 41,749,141 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2017-01-19T18:07:00.000 | 1 | 2 | 0 | Finding cube root of a number less than 1 using binary search | 41,748,751 | 1.2 | python,algorithm,binary-search | Yes, your instructor's statement has a flaw. For 0 < x < 1, the root will lie between x and 1. This is true for any power in the range (0, 1) (roots > 1).
You can mirror the statement on the negative side, since this is an odd root. The cube root of -1 <= x <= 0 will be in the range [-1, x]. For x < -1, your ra... | I am doing the MIT6.00.1x course on edX and the professor says that "If x<1, search space is 0 to x but cube root is greater than x and less than 1".
There are two cases:
1. The number x is between 0 and 1
2. The number x is less than 0 (negative)
In both the cases, the cube root of x will lie between x and 1. I unde... | 0 | 1 | 1,193 |
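The interval logic above can be turned into a complete bisection routine; this is an illustrative sketch, not the course's reference solution:

```python
def cube_root(x, epsilon=1e-9):
    """Binary-search cube root; handles |x| < 1 and negative x."""
    # For negative x, search on |x| and negate at the end (cube root is odd).
    sign = -1 if x < 0 else 1
    x = abs(x)
    # For 0 <= x < 1 the root is LARGER than x, so the range is [x, 1];
    # for x >= 1 the usual [0, x] range works.
    low, high = (x, 1.0) if x < 1 else (0.0, x)
    guess = (low + high) / 2
    while abs(guess ** 3 - x) >= epsilon:
        if guess ** 3 < x:
            low = guess
        else:
            high = guess
        guess = (low + high) / 2
    return sign * guess
```

For example, cube_root(0.027) converges to roughly 0.3 and cube_root(-8) to roughly -2, covering both cases in the question.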
0 | 44,792,766 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-01-19T18:07:00.000 | 1 | 2 | 0 | Finding cube root of a number less than 1 using binary search | 41,748,751 | 0.099668 | python,algorithm,binary-search | I think I know the problem you're talking about. The only reason she put that is that she deals with the absolute difference:
while abs(guess**3 - cube) >= epsilon
However, the code will need another line to deal with negative cubes altogether, which will be something along the lines of:
if cube<0: guess = -guess
I ... | I am doing the MIT6.00.1x course on edX and the professor says that "If x<1, search space is 0 to x but cube root is greater than x and less than 1".
There are two cases:
1. The number x is between 0 and 1
2. The number x is less than 0 (negative)
In both the cases, the cube root of x will lie between x and 1. I unde... | 0 | 1 | 1,193 |
0 | 41,749,570 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-19T18:47:00.000 | 0 | 1 | 0 | Memory error with np array when making document term matrix in python 2.7 | 41,749,448 | 0 | python | I assume you are using 32-bit Python. 32-bit Python limits your program's RAM to 2 GB (all 32-bit programs have this as a hard limit); some of this is taken up by Python overhead, and more of it by your program. Normal Python objects do not need contiguous memory and will map disparate regions of memory
nu... | I am using matrix = np.array(docTermMatrix) to make DTM. But sometimes it will run into memory error problems at this line. How can I prevent this from happening? | 0 | 1 | 84 |
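If the document-term matrix is mostly zeros (typical for text data), a sparse representation avoids the dense allocation entirely. A sketch using scipy.sparse, assuming docTermMatrix is a list of count rows (toy values here):

```python
import numpy as np
from scipy import sparse

# Toy document-term counts; real DTMs are overwhelmingly zeros.
docTermMatrix = [
    [1, 0, 0, 2],
    [0, 0, 3, 0],
]

# csr_matrix stores only the nonzero entries, so memory grows with
# the number of nonzeros rather than rows * columns.
matrix = sparse.csr_matrix(np.array(docTermMatrix, dtype=np.int32))
print(matrix.nnz)  # 3 stored nonzeros for a 2x4 matrix
```

Most scikit-learn vectorizers and estimators accept such sparse matrices directly, so the dense np.array step can often be skipped.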
0 | 41,767,302 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-01-19T21:46:00.000 | 6 | 1 | 0 | NEAT-Python not finding Visualize.py | 41,752,291 | 1.2 | python,importerror,iterm2,neat,virtual-environment | I think you could simply copy visualize.py into the same directory as the script you are running.
If you want it in your lib/site-packages directory so you can import it with the neat module:
copy visualize.py into lib/site-packages/neat/ and modify __init__.py to add the line import neat.visualize as visual... | So recently I have found about a NEAT algorithm and wanted to give it a try using NEAT-Python(not sure if this is even the correct source :| ). So I created my virtual environment activated it and installed the neat-python using pip in the VE. When I then tried to run one of the examples from their GitHub page it threw... | 0 | 1 | 8,034 |
0 | 47,515,380 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-01-20T01:52:00.000 | 0 | 1 | 0 | Tableau: How to automate publishing dashboard to Tableau server | 41,754,825 | 0 | python,powershell,scripting,server,tableau-api | Getting data from Excel to Tableau Server:
Set up the UNC path so it is accessible from your server. If you do this, you can then set up an extract refresh to read in the UNC path at the frequency desired.
Create an extract with the Tableau SDK.
Use the Tableau SDK to read in the CSV file and generate a file.
In our ... | I used python scripting to do a series of complex queries from 3 different RDS's, and then exported the data into a CSV file. I am now trying to find a way to automate publishing a dashboard that uses this data into Tableau server on a weekly basis, such that when I run my python code, it will generate new data, and su... | 0 | 1 | 1,415 |
0 | 41,767,039 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-20T04:07:00.000 | 0 | 2 | 0 | Seaborn pairplot not showing KDE | 41,755,950 | 0 | python,matplotlib,seaborn | Looks like the problem was with statsmodels (which seaborn uses to do KDE). I reinstalled statsmodels and that cleared up the problem. | After upgrading to matplotlib 2.0 I have a hard time getting seaborn to plot a pairplot. For example...
sns.pairplot(df.dropna(), diag_kind='kde') returns the following error TypeError: slice indices must be integers or None or have an __index__ method. My data doesn't have any NaNs in it. In fact, removing the kde o... | 0 | 1 | 1,051 |
0 | 49,675,309 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-20T21:01:00.000 | 0 | 3 | 0 | Does Fortify support Python, Scala, and Apache Spark? | 41,772,263 | 0 | python,scala,apache-spark,fortify | Fortify supports Python scanning. Since Python is not compiled, you can feed the code directly to Fortify; it will detect the language, scan it, and give you the result. | Does Fortify support Python, Scala, and Apache Spark? If it does, how do I scan such code using Fortify?
We need a compiler to scan C++ code using Fortify. This can be done using Microsoft Visual Studio.
Similarly, do we need some plugin to scan Python, Scala, and Spark code? | 0 | 1 | 7,798 |
0 | 60,477,227 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2017-01-21T07:26:00.000 | 0 | 4 | 0 | In-place sort_values in pandas what does it exactly mean? | 41,776,801 | 0 | python,sorting,pandas,in-place | "inplace=True" is more like a physical sort, while "inplace=False" is more like a logical sort. A physical sort means that the data set saved on the computer is sorted based on some keys; a logical sort means the data set is still saved in its original (as input/imported) order, and the ... | Maybe a very naive question, but I am stuck in this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled for it a while, but I am not very clear about it. It seems that this thing is assumed to be perfectly known to everybody but me. Could anyone give me some illustr... | 0 | 1 | 21,797 |
0 | 71,012,398 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2017-01-21T07:26:00.000 | 0 | 4 | 0 | In-place sort_values in pandas what does it exactly mean? | 41,776,801 | 0 | python,sorting,pandas,in-place | inplace=True sorts the actual Series/DataFrame itself.
inplace=False returns a new sorted object without changing the original.
By default, inplace is set to False if unspecified. | Maybe a very naive question, but I am stuck in this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled for it a while, but I am not very clear about it. It seems that this thing is assumed to be perfectly known to everybody but me. Could anyone give me some illustr... | 0 | 1 | 21,797 |
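The difference described above can be seen with a toy pandas Series (a minimal sketch):

```python
import pandas as pd

s = pd.Series([3, 1, 2])

# inplace=False (the default) returns a NEW sorted Series; s is untouched.
result = s.sort_values()
print(list(s))       # [3, 1, 2] -- original order preserved
print(list(result))  # [1, 2, 3]

# inplace=True sorts s itself and returns None.
ret = s.sort_values(inplace=True)
print(ret)           # None
print(list(s))       # [1, 2, 3] -- s was modified in place
```

The returns-None behavior of inplace=True is a common source of bugs, since chaining another method onto the call fails.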
0 | 56,411,030 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-21T13:25:00.000 | 0 | 2 | 0 | AttributeError: module 'theano' has no attribute 'tests' | 41,779,922 | 0 | python-3.x,deep-learning,theano-cuda | For the latest version of Theano (1.0.4),
import theano generates an error without the nose package installed.
Install it via conda or pip: pip install nose / conda install nose | I am trying to use Theano on the GPU on my Ubuntu machine, but each time after running it successfully once, it gives me this error when I try to run it again. No idea why; could anyone help me?
import theano
Traceback (most recent call last):
File "", line 1, in
File "/home/sirius/anacond... | 0 | 1 | 1,036 |
0 | 50,360,800 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-21T13:25:00.000 | 0 | 2 | 0 | AttributeError: module 'theano' has no attribute 'tests' | 41,779,922 | 0 | python-3.x,deep-learning,theano-cuda | I met the same problem. I fixed it with conda install nose | I am trying to use Theano on the GPU on my Ubuntu machine, but each time after running it successfully once, it gives me this error when I try to run it again. No idea why; could anyone help me?
import theano
Traceback (most recent call last):
File "", line 1, in
File "/home/sirius/anacond... | 0 | 1 | 1,036 |
0 | 41,796,793 | 0 | 1 | 0 | 0 | 1 | true | 33 | 2017-01-21T18:36:00.000 | 41 | 9 | 0 | How do I convert timestamp to datetime.date in pandas dataframe? | 41,783,003 | 1.2 | python,date,datetime,pandas | I got some help from a colleague.
This appears to solve the problem posted above
pd.to_datetime(df['mydates']).apply(lambda x: x.date()) | I need to merge 2 pandas dataframes together on dates, but they currently have different date types. 1 is timestamp (imported from excel) and the other is datetime.date.
Any advice?
I've tried pd.to_datetime().date but this only works on a single item (e.g. df.ix[0,0]); it won't let me apply it to the entire series (e.g. d... | 0 | 1 | 92,271 |
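On a toy frame (the column name mydates is just illustrative), the apply form above works, and the .dt accessor gives the same result with less ceremony:

```python
import pandas as pd

df = pd.DataFrame({"mydates": ["2017-01-21 18:36:00", "2017-01-22 08:00:00"]})

# Convert the whole column to timestamps, then extract the date element-wise.
as_date = pd.to_datetime(df["mydates"]).apply(lambda x: x.date())

# The .dt accessor does the same without an explicit lambda.
as_date_dt = pd.to_datetime(df["mydates"]).dt.date
```

Both produce a Series of datetime.date objects, which can then be merged against the other frame's datetime.date column.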
0 | 47,182,528 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-22T08:39:00.000 | 1 | 1 | 0 | How to calculate ctc probability for given input and expected output? | 41,788,924 | 0.197375 | python,c++,tensorflow | according to Graves paper [1], the loss for a batch is defined as sum(log(p(z|x))) over all samples (x,z) in this batch.
If you use a batch size of 1, you get log(p(z|x)), that is the log-probability of seeing the labelling z given the input x. This can be achieved with the ctc_loss function from TensorFlow.
You can al... | I'm doing my first TensorFlow project.
I need to get the CTC probability (not the CTC loss) for a given input and my expected sequences.
Is there any API or way to do it in Python or C++?
I prefer the Python side, but the C++ side is also okay. | 0 | 1 | 495 |
0 | 41,855,351 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-01-22T11:37:00.000 | 1 | 1 | 0 | Python (Pandas) : When to use replace vs. map vs. transform? | 41,790,392 | 1.2 | python,pandas | As far as I understand, replace is used when working on missing values, transform is used when doing groupby operations, and map is used to change a Series or index. | I'm trying to clearly understand for which type of data transformation the following functions in pandas should be used:
replace
map
transform
Can anybody provide some clear examples so I can better understand them?
Many thanks :) | 0 | 1 | 2,423 |
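One toy example per method (made-up column names; transform is shown in its common groupby use), roughly matching the descriptions above:

```python
import pandas as pd

df = pd.DataFrame({"grade": ["a", "b", "a"], "score": [1, 2, 3]})

# replace: substitute specific values wherever they occur.
replaced = df["grade"].replace({"a": "A"})

# map: apply an element-wise mapping (dict or function) to a Series;
# values absent from the dict become NaN.
mapped = df["grade"].map({"a": 1, "b": 2})

# transform: group-wise computation that keeps the original shape,
# e.g. each row gets its group's mean score.
group_mean = df.groupby("grade")["score"].transform("mean")
```

Here replaced is ["A", "b", "A"], mapped is [1, 2, 1], and group_mean is [2.0, 2.0, 2.0] (rows with grade "a" share the mean of scores 1 and 3).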