GUI and Desktop Applications
int64
0
1
A_Id
int64
5.3k
72.5M
Networking and APIs
int64
0
1
Python Basics and Environment
int64
0
1
Other
int64
0
1
Database and SQL
int64
0
1
Available Count
int64
1
13
is_accepted
bool
2 classes
Q_Score
int64
0
1.72k
CreationDate
stringlengths
23
23
Users Score
int64
-11
327
AnswerCount
int64
1
31
System Administration and DevOps
int64
0
1
Title
stringlengths
15
149
Q_Id
int64
5.14k
60M
Score
float64
-1
1.2
Tags
stringlengths
6
90
Answer
stringlengths
18
5.54k
Question
stringlengths
49
9.42k
Web Development
int64
0
1
Data Science and Machine Learning
int64
1
1
ViewCount
int64
7
3.27M
0
52,786,576
0
0
0
0
1
false
0
2018-10-08T17:02:00.000
0
1
0
Keras LSTM Input Dimension understanding each other
52,706,996
0
python,machine-learning,keras,lstm,rnn
First: Regressors will replicate if you input a feature that gives some direct intuition about the predicted input might be to secure the error is minimized, rather than trying to actually predict it. Try to focus on binary classification or multiclass classification, whether the closing price go up/down or how much. ...
but I have been trying to play around with it for awhile. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get expected outputs. But what I have never seen yet is, for example stock data, how we can make the LSTM model understand patterns between different dimens...
0
1
51
0
52,719,901
0
1
0
0
1
true
5
2018-10-09T11:15:00.000
3
3
1
Deploying python with docker, images too big
52,719,729
1.2
python,amazon-web-services,docker
First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, and ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists, big thi...
We've built a large python repo that uses lots of libraries (numpy, scipy, tensor flow, ...) And have managed these dependencies through a conda environment. Basically we have lots of developers contributing and anytime someone needs a new library for something they are working on they 'conda install' it. Fast forward...
0
1
2,376
0
62,167,763
0
0
0
0
3
false
7
2018-10-09T12:58:00.000
1
3
0
Clustering images using unsupervised Machine Learning
52,721,662
0.066568
python,computer-vision,cluster-analysis,k-means,unsupervised-learning
I have implemented Unsupervised Clustering based on Image Similarity using Agglomerative Hierarchical Clustering. My use case had images of People, so I had extracted the Face Embedding (aka Feature) Vector from each image. I have used dlib for face embedding and so each feature vector was 128d. In general, the featur...
I have a database of images that contains identity cards, bills and passports. I want to classify these images into different groups (i.e identity cards, bills and passports). As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised). The idea for me is like this: the cl...
0
1
4,082
0
52,735,568
0
0
0
0
3
true
7
2018-10-09T12:58:00.000
3
3
0
Clustering images using unsupervised Machine Learning
52,721,662
1.2
python,computer-vision,cluster-analysis,k-means,unsupervised-learning
Label a few examples, and use classification. Clustering is as likely to give you the clusters "images with a blueish tint", "grayscale scans" and "warm color temperature". That is a quote reasonable way to cluster such images. Furthermore, k-means is very sensitive to outliers. And you probably have some in there. Sin...
I have a database of images that contains identity cards, bills and passports. I want to classify these images into different groups (i.e identity cards, bills and passports). As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised). The idea for me is like this: the cl...
0
1
4,082
0
52,721,794
0
0
0
0
3
false
7
2018-10-09T12:58:00.000
5
3
0
Clustering images using unsupervised Machine Learning
52,721,662
0.321513
python,computer-vision,cluster-analysis,k-means,unsupervised-learning
Most simple way to get good results will be to break down the problem into two parts : Getting the features from the images: Using the raw pixels as features will give you poor results. Pass the images through a pre trained CNN(you can get several of those online). Then use the last CNN layer(just before the fully con...
I have a database of images that contains identity cards, bills and passports. I want to classify these images into different groups (i.e identity cards, bills and passports). As I read about that, one of the ways to do this task is clustering (since it is going to be unsupervised). The idea for me is like this: the cl...
0
1
4,082
0
52,728,282
0
0
0
0
2
true
4
2018-10-09T13:35:00.000
1
2
0
What is the appropriate distance metric when clustering paragraph/doc2vec vectors?
52,722,423
1.2
python,cluster-analysis,distance,doc2vec,hdbscan
I believe in practice cosine-distance is used, despite the fact that there are corner-cases where it's not a proper metric. You mention that "elements of the resulting docvecs are all in the range [-1,1]". That isn't usually guaranteed to be the case – though it would be if you've already unit-normalized all the raw d...
My intent is to cluster document vectors from doc2vec using HDBSCAN. I want to find tiny clusters where there are semantical and textual duplicates. To do this I am using gensim to generate document vectors. The elements of the resulting docvecs are all in the range [-1,1]. To compare two documents I want to compare th...
0
1
1,238
0
52,735,502
0
0
0
0
2
false
4
2018-10-09T13:35:00.000
1
2
0
What is the appropriate distance metric when clustering paragraph/doc2vec vectors?
52,722,423
0.099668
python,cluster-analysis,distance,doc2vec,hdbscan
The proper similarity metric is the dot product, not cosine. Word2vec etc. are trained using the dot product, not normalized by the vector length. And you should exactly use what was trained. People use the cosine all the time because it worked well for bag of words. The choice is not based on a proper theoretical anal...
My intent is to cluster document vectors from doc2vec using HDBSCAN. I want to find tiny clusters where there are semantical and textual duplicates. To do this I am using gensim to generate document vectors. The elements of the resulting docvecs are all in the range [-1,1]. To compare two documents I want to compare th...
0
1
1,238
0
53,180,151
0
0
0
0
1
false
0
2018-10-09T14:30:00.000
0
1
0
rasterio - load multi-dimensional data
52,723,483
0
python-3.x,rasterio
rasterio is really not the tool of choice for multi-dimensional netCDF data. It excels at handling 3D (band, y, x) data where band is some relatively short, unlabeled axis. Look into xarray instead, which is built around the netCDF model and supports labeled axes and many dimensions, plus lazy loading, out-of-memory co...
I just discovered rasterio for easy raster handling in Python. I am working with multi-dimensional climate data (4D and 5D). I was successful to open and read my 4D-NetCDF file with rasterio (lat: 180, lon: 361, time: 6, number: 51). However, the rasterio dataset object shows me three dimensions (180, 361, 306), whereb...
0
1
162
0
52,730,991
0
0
0
0
1
false
0
2018-10-09T20:44:00.000
0
1
0
Right way to serialize a Random Forest Regression File
52,729,048
0
python,machine-learning,pickle,random-forest,data-science-experience
Aim of saving complete model is modifying the model in the future. If you are not planning to modify your model, you can just save the weights and use them for the prediction. This will save a huge space for you.
I am working on building a Random Forest Regression model for predicting ETA. I am saving the model in pickle format by using pickle package. I have also used joblib to save the model. But the size of file is really large (more than 100 GB). I would like to ask the data science experts that is it the correct format to ...
0
1
234
0
52,729,697
0
1
0
0
1
false
0
2018-10-09T21:27:00.000
0
3
0
Python tool to find meaningful pairs of words in a document
52,729,565
0
python,nltk,natural-language-processing
Interesting problem to play with, assuming there's not already a lexicon of meaningful compound words that you could leverage. And I'd love to see "computer science" as a trending topic. Let's take the approach that we know nothing about compound words in English, whether "stop sign" is as meaningfully distinct from "s...
I'm writing a program that gathers tweets from Twitter, and evaluates the text to find trending topics. I'm planning on using NLTK to stem the terms and do some other operations on the data. What I need is a tool that can determine if two adjacent words in a tweet should be treated as a single term. For example, if "f...
0
1
681
0
52,904,530
0
0
0
0
1
false
0
2018-10-10T10:39:00.000
-1
1
0
TensorFlow: Correct way of using steps in Stochastic Gradient Descent
52,738,335
-0.197375
python,tensorflow,machine-learning
step is the literal meaning: means you refresh the parameters in your batch size; so for linear_regessor.train, it will train 100 times for this batch_size 1. epoch means to refresh the whole data, which is 17,000 in your set.
I am currently using TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. Below is the snippet of the my_input_function: def my_input_fn(features, targets, batch_size=1,...
0
1
185
0
52,746,594
0
1
0
0
1
false
1
2018-10-10T18:20:00.000
0
2
0
Correct usage of itertools in python 3
52,746,519
0
python,python-3.x
Provided that the zip object is created correctly, you can either do list(zip_object) or [*zip_object] to get the list.
I am trying to expand the following list [(1, [('a', '12'), ('b', '64'), ('c', '36'), ('d', '48')]), (2, [('a', '13'), ('b', '26'), ('c', '39'), ('d', '52')])] to [(1,a,12),(1,b,24),(1,c,36),(1,d,48),(2,a,13),(2,b,26),(2,c,39),(2,d,52)] I used zip(itertools.cycle()) in python 3, but instead get a zip object refere...
0
1
66
0
52,752,228
0
0
0
0
1
true
0
2018-10-11T01:55:00.000
0
1
0
Different exceptions happened when running Keras and scikit-learn
52,751,040
1.2
python,tensorflow,machine-learning,scikit-learn,keras
Found it. It should be: clf = KerasClassifier(build_fn=get_model) Instead of: clf = KerasClassifier(build_fn=get_model())
I try to pass a Keras model (as a function) to KerasClassifier wrapper from scikit_learn, and then use GridSearchCV to create some settings, and finally fit the train and test datasets (both are numpy array) I then, with the same python script, got different exceptions, some of them are: _1. Traceback (most recent cal...
0
1
196
0
52,772,728
0
0
0
0
1
false
0
2018-10-11T13:04:00.000
0
1
0
Finding correlation of two data frames using python
52,760,769
0
python,correlation
Bin them both to vectors of equal length, with bin or window sizes dependent on the shapes of the input frames, then calculate correlation on the vectors.
I am working on a data set and after performing the bucketing operation over two columns, I ended up with two buckets that have maximum number of data points. For those two buckets, I have created two separate data frames, which is of different shapes (number of columns are same and the number of rows are different) so...
0
1
48
0
52,767,061
0
1
0
0
1
false
0
2018-10-11T18:40:00.000
2
1
0
How to get a file from the files on my computer to JupyterLab?
52,766,928
0.379949
python,jupyter-notebook
Every running program, including JupyterLab, has a "working directory" which is where it thinks it is on your computer's file system. What exactly this directory is usually depends on how you launched it (e.g., when you run a program from terminal, its working directory is initially the folder your terminal was in when...
I am very new to this and struggling with the basics. I have a csv file /home/emily/Downloads/Roger-Federer.csv The textbook says that I need to "extract the file and download it to my current directory" on JupyterLab (I am using Python). What does this mean? How do I do this? Thank you
0
1
628
0
58,498,554
0
0
0
0
1
false
1
2018-10-11T18:47:00.000
0
1
0
Importing tensorflow not working when upgraded
52,767,007
0
python,python-3.x,tensorflow
you can uninstall the tensorflow and re-install it.
Tensorflow was working fine when I had 1.4 but when I upgraded it, it stopped working. The version that I installed is 1.11 with CUDA 9 and cuDNN 7. Traceback (most recent call last): File "C:\Users\anime\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, ...
0
1
358
0
52,773,168
0
0
0
0
1
false
0
2018-10-12T05:36:00.000
0
1
0
How to add a none option in scikit learn predict
52,772,906
0
python,python-3.x,machine-learning,scikit-learn,leap-motion
Instead of treating this as a classification problem with 4 levels, treat it as a classification problem with 5 levels. 4 levels would correspond to one of the original four and the 5th level can be used for all others.
I have the next problem, I'm trying to classify one of four hand position, I'm using SVM and this positions will be used to make commands in my program, the predict function works fine, but for example if I made some other gestures (none of the 4 that I use for commands) the predict function try to classify this gestur...
0
1
87
0
52,773,601
0
0
0
0
1
false
1
2018-10-12T06:25:00.000
1
3
0
Reading and Writing into CSV file at the same time
52,773,491
0.066568
python,python-3.x
You can do open("data.csv", "rw"), this allows you to read and write at the same time.
I wanted to read some input from the csv file and then modify the input and replace it with the new value. For this purpose, I first read the value but then I'm stuck at this point as I want to modify all the values present in the file. So is it possible to open the file in r mode in one for loop and then immediately i...
0
1
9,000
0
52,843,229
0
0
0
0
1
false
6
2018-10-13T01:01:00.000
0
2
0
Early Stopping with a Cross-Validated Metric in Keras
52,788,635
0
python,keras,prediction,cross-validation
I imagine that using a callback as suggested by @VincentPakson would be cleaner and more efficient, but the level of programming required is beyond me. I was able to create a for loop to do what I wanted by: Training a model for a single epoch and saving it using model.save(). Loading the saved model and training the ...
Is there a way in Keras to cross-validate the early stopping metric being monitored EarlyStopping(monitor = 'val_acc', patience = 5)? Before allowing training to proceed to the next epoch, could the model be cross-validated to get a more robust estimate of the test error? What I have found is that the early stopping me...
0
1
960
0
52,846,778
0
0
0
0
1
false
0
2018-10-13T03:42:00.000
0
1
0
How to set size of hidden state vector in LSTM, keras?
52,789,325
0
python,keras,lstm
If by vector size you mean the number of nodes in a layer, then yes you are doing it right. The output dimensionality of your layer is the same as the number of nodes. The same thing applies to convolutional layers, number of filters and output dimensionality along the last axis, aka number of color channels is the sa...
I am currently setting the vector size by using model.add(LSTM(50)) i.e setting the value in units attribute but I highly doubt its correctness(In keras documentation, units is explained as dimensionality of the output space). Anyone who can help me here?
0
1
567
0
64,389,564
0
0
0
0
5
false
4
2018-10-13T09:27:00.000
0
5
0
'pandas' has no attribute 'read_csv'"
52,791,477
0
python,pandas,csv
Since I had this problem just now, and on the whole internet no answer covered my issue: You may have pandas installed (Like I did), but in the wrong environment. Especially when you just start out in Python and use an IDE like PyCharm, you don't realise that you may create a new Environment (Called "pythonProject", "p...
I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error AttributeError("module 'pandas' has no attribute 'read_csv'") Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help. I know where both my cvs + py...
0
1
5,427
0
53,273,547
0
0
0
0
5
false
4
2018-10-13T09:27:00.000
2
5
0
'pandas' has no attribute 'read_csv'"
52,791,477
0.07983
python,pandas,csv
I've just been spinning my wheels on the same problem. TL/DR: try renaming your python files I think there must be a number of other naming conflicts besides some of the conceivably obvious ones like csv.py and pandas.py mentioned in other posts on the topic. In my case, I had a single file called inspect.py. Running ...
I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error AttributeError("module 'pandas' has no attribute 'read_csv'") Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help. I know where both my cvs + py...
0
1
5,427
0
68,730,346
0
0
0
0
5
false
4
2018-10-13T09:27:00.000
0
5
0
'pandas' has no attribute 'read_csv'"
52,791,477
0
python,pandas,csv
I have faced the same problem when I update my python packages using conda update --all. The error: AttributeError: module 'pandas' has no attribute 'read_csv' I believe it is a pandas path problem. The solution: print(pd) To see where are your pandas come from. I was getting <module 'pandas' (namespace)> Then I used t...
I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error AttributeError("module 'pandas' has no attribute 'read_csv'") Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help. I know where both my cvs + py...
0
1
5,427
0
69,326,903
0
0
0
0
5
false
4
2018-10-13T09:27:00.000
1
5
0
'pandas' has no attribute 'read_csv'"
52,791,477
0.039979
python,pandas,csv
After spending 2 hours researching a solution to this question, running pip uninstall pandas and then pip install pandas in your terminal will work.
I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error AttributeError("module 'pandas' has no attribute 'read_csv'") Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help. I know where both my cvs + py...
0
1
5,427
0
64,144,515
0
0
0
0
5
false
4
2018-10-13T09:27:00.000
1
5
0
'pandas' has no attribute 'read_csv'"
52,791,477
0.039979
python,pandas,csv
I had the same issue and it is probably cause of writing dataframe = pd.read.csv("dataframe.csv") instead of dataframe = pd.read_csv("dataframe.csv") that little "_" is the problem. Hope this helps somebody else too.
I have been using pandas for a while and it worked fine but out of nowhere, it decided to give me this error AttributeError("module 'pandas' has no attribute 'read_csv'") Now I have spent many many hours trying to solve this issue viewing every StackOverflow forum but they don't help. I know where both my cvs + py...
0
1
5,427
0
52,806,830
0
0
0
0
1
true
0
2018-10-14T19:10:00.000
1
2
0
imshow() with desired framerate with opencv
52,806,175
1.2
python,opencv
I don't believe there is such a function in opencv but maybe you could improve your method by adding a dynamic wait time using timers? timeit.default_timer() calculate the time taken to process and subtract that from the expected framerate and maybe add a few ms buffer. eg cv2.WaitKey((1000/50) - (time processing finis...
Is there any workaround how to use cv2.imshow() with a specific framerate? Im capturing the video via VideoCapture and doing some easy postprocessing on them (both in a separeted thread, so it loads all frames in Queue and the main thread isn't slowed by the computation). I tryed to fix the framerate by calculating the...
0
1
2,282
0
52,816,856
0
0
0
0
1
false
0
2018-10-15T10:41:00.000
0
2
0
Neuron freezing in Tensorflow
52,814,880
0
python,tensorflow,deep-learning
A neuron in a dense neural network layer simply corresponds to a column in a weight matrix. You could therefore redefine your weight matrix as a concatenation of 2 parts/variables, one trainable and one not. Then you could either: selectively pass only the trainable part in the var_list argument of the minimize functi...
I need to implement neurons freezing in CNN for a deep learning research, I tried to find any function in the Tensorflow docs, but I didn't find anything. How can I freeze specific neuron when I implemented the layers with tf.nn.conv2d?
0
1
473
0
52,839,277
0
1
0
0
1
false
0
2018-10-16T15:37:00.000
1
3
0
Cannot import python pack
52,839,205
0.066568
python,package
You have to install it first. Search “Python Pip” on google and download Pip. Then use that to open CMD and type “pip install (Module)”. Then it should import with no errors.
I cannot install package in Python.For example the package numpy or pandas.I download the python today. I press import numpy as np and nothing
0
1
40
0
52,859,940
0
0
0
0
1
false
0
2018-10-16T21:04:00.000
0
2
0
uWSGI process 1 got Segmentation Fault _ Fail to deploy Flask App on Pythonanywhere
52,844,037
0
python,flask,deployment,wsgi,pythonanywhere
uWSGI is a C/C++ compiled app and segmentation fault is its internal error that means that there is some incorrect behavior in uWSGI logic: somewhere in its code it's trying to get access to area of memory it's not allowed to access to, so OS kills this process and returns "segfault" error. So make sure you have the la...
I'm trying to deploy my flask app on Pythonanywhere but am getting an error i have no idea what to do about. I've looked online and people haven't been getting similar errors like mine. My app loads a bunch of pretrained ML models. Would love some help! 2018-10-16 20:52:38 /home/drdesai/.virtualenvs/flask-app-env/li...
1
1
1,381
0
52,850,290
0
0
0
0
1
true
1
2018-10-16T21:43:00.000
1
1
0
Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU)
52,844,431
1.2
python,theano,azure-machine-learning-studio
I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down, you can't really configure it (except for choosing from a small selection of Anaconda/Python versions). You might be better off using the new Azure ML service, which allows you considerably more configuration optio...
I am execute a python script in Azure machine learning studio. I am including other python scripts and python library, Theano. I can see the Theano get loaded and I got the proper result after script executed. But I saw the error message: WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to exe...
0
1
96
0
52,848,240
0
0
0
1
1
false
1
2018-10-17T05:46:00.000
1
1
0
Importing a pandas dataframe into Teradata database
52,847,985
0.197375
python,pandas,sqlalchemy,teradata
I solved it! Although I do not know why, I'm hoping someone can explain: tf.to_sql('rt_test4', con=td_engine, schema='db_sandbox', index = False, dtype= {'A': CHAR, 'B':Integer})
I am attempting to import an Excel file into a new table in a Teradata database, using SQLAlchemy and pandas. I am using the pandas to_sql function. I load the Excel file with pandas and save it as a dataframe named df. I then use df.to_sql and load it into the Teradata database. When using the code: df.to_sql('rt_test...
0
1
780
0
52,855,692
0
0
0
0
2
true
0
2018-10-17T12:33:00.000
1
2
0
Why are different libraries searched for in tensorflow, even though both were installed the same way?
52,854,983
1.2
python,tensorflow
I would say you have a broken CUDA installation somewhere in the library path. It is libcuda.so that has a dependency on libnvidia-fatbinaryloader.so, so maybe the symbolic links point to a library that no longer exists but was installed before. You can find this information by running the ldd command on the libcuda.so...
I built tensorflow from source and got a *.whl file that I could install on my pc with pip install *.whl. Now in the virtualenv where I installed it I can open python and do import tensorflow without a problem and also use tf. Now I tried to install this same wheel on an other pc in a virtualenv and it worked successfu...
0
1
48
0
52,855,249
0
0
0
0
2
false
0
2018-10-17T12:33:00.000
0
2
0
Why are different libraries searched for in tensorflow, even though both were installed the same way?
52,854,983
0
python,tensorflow
The building process is related to the computer environment.Could building tensorflow in the same machine and installing it on the same machine help?Building on one machine and generating the *.whl,but installing on other machines may cause problem.
I built tensorflow from source and got a *.whl file that I could install on my pc with pip install *.whl. Now in the virtualenv where I installed it I can open python and do import tensorflow without a problem and also use tf. Now I tried to install this same wheel on an other pc in a virtualenv and it worked successfu...
0
1
48
0
52,859,799
0
1
0
0
1
false
0
2018-10-17T14:35:00.000
2
1
0
Installing R kernel with conda creates an unwanted addtional python kernel in Jupyter
52,857,461
0.379949
python,r,jupyter-notebook,conda
r-essentials comes with python as well as the jupyter_client and the ipykernel packages which enables your jupyter to propose this R and thus the python installed as kernels in a notebook. ipykernel is mandatory for the jupyter to propose the R as a kernel and python is a dependency to ipykernel so... I don't think you...
I created an R kernel to use in a Jupyter notebook with: conda create -n myrenv r-essentials -c r And when running Jupyter, in the menu to create a new notebook, i can see the choice of my new kernel new --> R [conda env:myrenv] but I also have the choice (among others) of new --> Python [conda env:myrenv]. How can I ...
0
1
86
0
61,304,539
0
1
0
0
1
false
50
2018-10-17T16:54:00.000
1
5
0
Interactive matplotlib figures in Google Colab
52,859,983
0.039979
python,matplotlib,google-colaboratory
In addition to @Nilesh Ingle excellent answer, in order to solve the problem of axes and title not displaying : you should change the link https://cdn.plot.ly/plotly-1.5.1.min.js?noext (which refers to an older version of plotly, thus not displaying axes labels) by https://cdn.plot.ly/plotly-1.5.1.min.js?noext when cal...
Normally in a jupyter notebook I would use %matplotlib notebook magic to display an interactive window, however this doesn't seem to work with google colab. Is there a solution, or is it not possible to display interactive windows in google colab?
0
1
44,387
0
52,873,525
0
0
0
0
1
false
0
2018-10-17T18:52:00.000
0
1
0
Getting the list of features used during training of Random Forest Regressor
52,861,742
0
python,pandas,scikit-learn,random-forest
Is there a function which allows to get the list of names of columns used during the training of the Random Forest Regressor model? RF uses all features from your dataset. Each tree may contain sqrt(num_of_features) or log2(num_of_features) or whatever but these columns are selected at random. So usually RF covers a...
I used one set of data to learn a Random Forest Regressor and right now I have another dataset with smaller number of features (the subset of the previous set). Is there a function which allows to get the list of names of columns used during the training of the Random Forest Regressor model? If not, then is there a fun...
0
1
216
0
72,310,873
0
0
0
1
1
false
10
2018-10-17T20:03:00.000
1
1
0
How do I use python pandas to read an already opened excel sheet
52,862,768
0.197375
python,excel,pandas
There is no way to do this. The table is not saved to disk, so pandas can not read it from disk.
Assuming I have an excel sheet already open, make some changes in the file and use pd.read_excel to create a dataframe based on that sheet, I understand that the dataframe will only reflect the data in the last saved version of the excel file. I would have to save the sheet first in order for pandas dataframe to take i...
0
1
1,261
0
52,864,018
0
0
0
1
2
false
0
2018-10-17T21:28:00.000
1
3
0
Load thousands of CSV files into tableau
52,863,882
0.066568
python,csv,tableau-api
I would suggest doing any data prep outside of Tableau. Since you seem to be familiar with Python, try Pandas to combine all the csv files into one dataframe then output to a database or a single csv. Then connect to that single source.
I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical. I am persuaded that Tableau is a good interface for me t...
0
1
1,928
0
52,864,473
0
0
0
1
2
false
0
2018-10-17T21:28:00.000
0
3
0
Load thousands of CSV files into tableau
52,863,882
0
python,csv,tableau-api
If you are using Windows, you can combine all the csv files into a single csv, then import that into Tableau. This of course assumes that all of your csv files have the same data structure. Open the command prompt Navigate to the directory where the csv files are (using the cd command) Use the command copy *.csv combi...
I have a gazillion CSV files and in column 2 is the x-data and column 3 is the y-data. Each CSV file is a different time stamp. The x-data is slightly different in each file, but the number of rows is constant. I'm happy to assume the x-data is in fact identical. I am persuaded that Tableau is a good interface for me t...
0
1
1,928
0
52,871,555
0
0
0
0
1
true
0
2018-10-18T09:53:00.000
0
1
0
Is it possible to manipulate data from csv without the need for producing a new csv file?
52,871,447
1.2
python,pandas
This is not possible using pandas. This lib creates copy of your .csv / .xls file and stores it in RAM. So all changes are applied to file stored in you memory not on disk.
I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel? PS using pandas Thanks!
0
1
47
0
52,928,244
0
1
0
0
1
false
1
2018-10-18T10:03:00.000
1
2
0
Ordinal 241 could not be located
52,871,597
0.099668
python,anaconda,jupyter
Eventually, I came to a conclusion that there is a problem with adaptation of this Anaconda version with my Win 8.1. So, I downgraded Anaconda version to Anaconda3-5.2.0-Windows-x86_64 and that solved the issue.
I am using Anaconda3-5.3.0-Windows-x86_64 release and experienced the following problem: While running import command (e.g. - import numpy as np), from Jupyter notebook, I receive the following error - The ordinal 241 could not be located in the dynamic link library path:\mkl_intel_thread.dll> Where 'path' is the pa...
0
1
930
0
52,880,527
0
1
0
0
1
false
1
2018-10-18T14:31:00.000
1
1
0
Elements in array change data types from float to string
52,876,380
0.197375
python,arrays,append,reshape
Because of the mix of numbers and strings, np.array will use the common format: string. The solution here is to convert data to type object which supports mixed element types. This is performed by using: data = np.array(data, dtype=object) prior to hstack.
When I append elements to a list that have the following format and type: data.append([float, float, string]) Then stack the list using: data = np.hstack(data) And then finally reshape the array and transpose using: data = np.reshape(data,(-1,3)).T All the array elements in data are changed to strings. I want (and exp...
0
1
397
0
52,879,296
0
0
0
0
1
true
4
2018-10-18T17:03:00.000
6
1
0
How are tensors immutable in TensorFlow?
52,879,126
1.2
python,tensorflow
Tensors, differently from variables, can be compared to a math equation. When you say a tensor equals 2+2, it's value is not actually 4, it's the computing instructions that leads to the value of 2+2 and when you start a session an execute it, TensorFlow runs the computations needed to return the value of 2+2 and gives...
I read the following sentence in the TensorFlow documentation: With the exception of tf.Variable, the value of a tensor is immutable, which means that in the context of a single execution tensors only have a single value. However, evaluating the same tensor twice can return different values; for example that ten...
0
1
2,114
0
52,889,192
0
1
0
0
1
false
2
2018-10-19T09:04:00.000
0
5
0
how to remove zeros after decimal from string remove all zero after dot
52,889,130
0
python,pandas
A quick-and-dirty solution is to use "%g" % value, which will convert floats 1.5 to 1.5 but 1.0 to 1 and so on. The negative side-effect is that large numbers will be represented in scientific notation like 4.44e+07.
I have data frame with a object column lets say col1, which has values likes: 1.00, 1, 0.50, 1.54 I want to have the output like the below: 1, 1, 0.5, 1.54 basically, remove zeros after decimal values if it does not have any digit after zero. Please note that i need answer for dataframe. pd.set_option and round don't w...
0
1
5,418
0
52,943,459
0
0
1
0
1
false
1
2018-10-20T01:58:00.000
0
1
0
How to find redundant paths (subpaths) in the trajectory of a moving object?
52,901,800
0
python,python-2.7,video-tracking
I would first create a detection procedure that outputs a list of points visited along with their video frame number. Then use list exploration functions to know how many redundant suites are found and where. As you see I don't write your code. If you need anymore advise please ask!
I need to track a moving deformable object in a video (but only 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases the object traced either p0-...-p10 or a sub-path lik...
0
1
77
0
52,912,811
0
0
0
0
1
false
1
2018-10-21T06:05:00.000
1
1
0
How To Detect English Language Words Using Machine Learning From Data
52,912,553
0.197375
python,tensorflow,machine-learning
Character frequency scanning is one way to do this. For example for each language obtain a list of character frequencies, A: 3% B: 1% C: 0.5% D: 0.7% E: 4% etc.. Then evaluate your string's character frequency against your static map. You can obtain a probabilistic model of the likelihood of the string being one of you...
I have data that contains English text messages. I want to detect messages that are "written in English letters", but aren't English words. (For example with codes based rules, but I don't want to hard coded the rules). Please note that the computer being used does not have an active internet connection (so I cannot ch...
0
1
440
0
52,934,432
0
0
0
0
1
false
0
2018-10-21T23:20:00.000
0
2
0
How can I sort 128 bit unsigned integers in Python?
52,920,727
0
python,sorting,numpy,int128
I was probably expecting too much from Python, but I'm not disappointed. A few minutes of coding allowed me to create something (using built-in lists) that can process the sorting a hundred million uint128 items on an 8GB laptop in a couple of minutes. Given a large number of items to be sorted (1 trillion), it's clear...
I have a huge number of 128-bit unsigned integers that need to be sorted for analysis (around a trillion of them!). The research I have done on 128-bit integers has led me down a bit of a blind alley, numpy doesn't seem to fully support them and the internal sorting functions are memory intensive (using lists). What I'...
0
1
1,852
0
52,929,724
0
0
0
0
2
false
3
2018-10-22T12:42:00.000
0
2
0
Why do I need sklearn in docker container if I already have the model as a pickle?
52,929,649
0
python,python-3.x,docker,scikit-learn,pickle
The pickle is just the representation of the data inside the model. You still need the code to use it, that's why you need to have sklearn inside the container.
I pickled a model and want to expose only the prediction api written in Flask. However when I write a dockerfile to make a image without sklearn in it, I get an error ModuleNotFoundError: No module named 'sklearn.xxxx' where xxx refers to sklearn's ML algorithm classes, at the point where I am loading the model using p...
0
1
1,171
0
52,929,910
0
0
0
0
2
true
3
2018-10-22T12:42:00.000
1
2
0
Why do I need sklearn in docker container if I already have the model as a pickle?
52,929,649
1.2
python,python-3.x,docker,scikit-learn,pickle
You have a misconception of how pickle works. It does not seralize anything, except of instance state (__dict__ by default, or custom implementation). When unpickling, it just tries to create instance of corresponding class (here goes your import error) and set pickled state. There's a reason for this: you don't know b...
I pickled a model and want to expose only the prediction api written in Flask. However when I write a dockerfile to make a image without sklearn in it, I get an error ModuleNotFoundError: No module named 'sklearn.xxxx' where xxx refers to sklearn's ML algorithm classes, at the point where I am loading the model using p...
0
1
1,171
0
52,942,231
0
0
0
0
1
false
0
2018-10-22T15:08:00.000
0
1
0
python - multivariate regression with discrete and continuous
52,932,510
0
python,regression
Correlation is used only for numeric data, discrete / binary data need to be treated differently. Have a look at Phi coefficient for binary. As for correlation coefficient (for numeric data), it depends on the relationship between the variables. If these are linear then Pearson is preferred, otherwise Spearman (or some...
I have a dataset with 53 independent variables (X) and 1 dependent (Y). The dependent variable is a boolean (either 1 or 0), while the independent set is made of both continuous and discrete variables. I was planning to use pandas.DataFrame.corr() to list the most influencing variables for the output Y. corr can be: ...
0
1
339
0
56,162,459
0
0
0
0
1
false
3
2018-10-22T17:39:00.000
1
1
0
How to choose beta in F-beta score
52,934,864
0.197375
python-3.x,machine-learning,scikit-learn,random-forest,grid-search
To give more weight to the Precision, we pick a Beta value in the interval 0 < Beta < 1 To give more weight to the Recall, we pick a Beta Value in the interval 1 < Beta When you set beta = Cost of False Negative/Cost of False Positive then you'll give more weight to recall, in case of the cost of False negative is hi...
I am using grid search to optimize the hyper-parameters of a Random Forest fit on a balanced data set, and I am struggling with which model evaluation metric to choose. Given the real-world context of this problem, false negatives are more costly than false positives. I initially tried optimizing recall but I was endin...
0
1
1,826
0
52,957,261
0
0
0
0
2
false
3
2018-10-23T13:34:00.000
1
6
0
valueError when using multi_gpu_model in keras
52,950,449
0.033321
python,tensorflow,keras,google-cloud-platform,gpu
TensorFlow is only seeing one GPU (the gpu and xla_gpu devices are two backends over the same physical device). Are you setting CUDA_VISIBLE_DEVICES? Does nvidia-smi show all GPUs?
I am using google cloud VM with 4 Tesla K80 GPU's. I am running a keras model using multi_gpu_model with gpus=4(since i have 4 gpu's). But, i am getting the following error ValueError: To call multi_gpu_model with gpus=4, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2', '/...
0
1
7,993
0
58,273,653
0
0
0
0
2
false
3
2018-10-23T13:34:00.000
0
6
0
valueError when using multi_gpu_model in keras
52,950,449
0
python,tensorflow,keras,google-cloud-platform,gpu
I had the same issue. Tensorflow-gpu 1.14 installed, CUDA 10.0, and 4 XLA_GPUs were displayed with device_lib.list_local_devices(). I have another conda environement and there is just Tensorflow 1.14 installed and no tensorflow-gpu, and i don't know why, but i can run my multi_gpu model on all gpus with that environmen...
I am using google cloud VM with 4 Tesla K80 GPU's. I am running a keras model using multi_gpu_model with gpus=4(since i have 4 gpu's). But, i am getting the following error ValueError: To call multi_gpu_model with gpus=4, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2', '/...
0
1
7,993
0
52,973,320
0
0
1
0
1
true
0
2018-10-23T20:32:00.000
1
1
0
running PVWatts for module system not in Sandia DB (python library)
52,957,433
1.2
python,pvlib
Yes, use an incident angle modifier function such as physicaliam to calculate the AOI loss, apply the AOI loss to the in-plane direct component, then add the in-plane diffuse component.
I want to run the PVWatts model (concretely to get pvwatts_dc) on an Amerisolar 315 module which doesn't seem to appear. What I am trying to do is to replicate the steps in the manual, which only requires system DC size. When I go into the power model, the formula says g_poa_effective must be already angle-of-incidenc...
0
1
77
0
52,965,846
0
0
0
0
1
true
0
2018-10-24T09:41:00.000
0
2
0
Using convolution layer trained weights for different image size
52,965,773
1.2
python,tensorflow,deep-learning
As convolution layer are independent of image size Actually it's more complicated than that. The kernel itself is independent of the image size because we apply it on each pixel. And indeed, the training of these kernels can be reused. But this means that the output size is dependent on the image size, because this is ...
I want to use the first three convolution layers of vgg-16 to generate feature maps. But i want to use it with variable image size,i.e not imagenet size of 224x224 or 256x256. Such as 480x640or any other randome image dimension. As convolution layer are independent of image spatial size, how can I use the weights for ...
0
1
129
0
53,006,871
0
0
0
0
1
true
0
2018-10-24T20:27:00.000
0
1
0
Why is pip not updating tensorflow correctly, and, if it is, why is the 'attrib error' still thrown?
52,977,453
1.2
python,tensorflow,pip,artificial-intelligence
As mentioned in the comments, the most probable solution to the attribute error is the update problem. However, if you're encountering the Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow, the easiest solution is to use following code. pi...
I've installed tensorflow over pip3 and python3, and am working on it. While using the colum function, the commonly experienced error AttributeError: module 'tensorflow' has no attribute 'feature_column'. It might look like a duplicate question, but I've looked at the other occurrences of the same question, but, after...
0
1
194
0
52,981,417
0
0
0
0
1
false
0
2018-10-25T02:44:00.000
0
2
0
How to use Tensorflow Keras API
52,980,583
0
python,python-3.x,tensorflow,tensorboard
You can use these APIs all together. E.g. if you have a regular dense network, but with an special layer you can use higher level API for dense layers (tf.layers and tf.keras) and low level API for your special layer. Furthermore, it is complex graphs are easier to define in low level APIs, e.g. if you want to share v...
Well I start learning Tensorflow but I notice there's so much confusion about how to use this thing.. First, some tutorials present models using low level API tf.varibles, scopes...etc, but other tutorials use Keras instead and for example to use tensor board to invoke callbacks. Second, what's the purpose of having to...
0
1
241
0
53,030,878
0
0
0
0
1
true
0
2018-10-25T11:08:00.000
0
1
0
Keras flow_from_dataframe wrong data ordering
52,987,835
1.2
python,keras
While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property.
I am using keras's data generator with flow_from_dataframe. for training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different than the ordering of the "id" column in my dataframe. shuffle=False does make the ordering of the gene...
0
1
403
0
53,005,078
0
1
0
0
1
false
0
2018-10-25T12:18:00.000
0
1
0
Python DLL load fail after updating all packages
52,989,115
0
python,scikit-learn
Ended up unstalling anaconda and re-installing. Seems to work again now
I just updated all conda packages, as Jupyter had a kernel error. Had been working in Pycharm for a while, but wanted to continue in Jupyter now that the code was working. Updating fixed my jupyter kernel error, but now the script won't work, in jupyter, pycharm, or from console. I get same error in each case: File "m...
0
1
392
0
52,990,768
0
1
0
0
1
false
1
2018-10-25T13:05:00.000
0
1
0
Generate Permutations of a large number( probably 30) with constraints
52,990,024
0
python,constraints,permutation,itertools
Calculating all permutations for a list of size 30 is impossible, irrespective of the implementation approach as there will be a total of 30! permutations. It seems to me that the permutation you require can simply be achieved by sorting the the given list though by using arr.sort() and then calculating the difference ...
I have list of numbers ( 1 to 30 ) most probably. I need to arrange the list in such a way that the absolute difference between two successive elements is not more than 2 or 3 or 4, and the sum of absolute differences of all the successive elements is minimum. I tried generating all possible permutations of list rangin...
0
1
343
0
52,993,813
0
0
0
0
1
true
0
2018-10-25T15:16:00.000
1
2
0
Write python functions to operate over arbitrary axes
52,992,767
1.2
python,numpy,multidimensional-array,indexing
numpy functions use several approaches to do this: transpose axes to move the target axis to a known position, usually first or last; and if needed transpose the result reshape (along with transpose) to reduce the problem simpler dimensions. If your focus is on the n'th dimension, it might not matter where the (:n) d...
I've been struggling with this problem in various guises for a long time, and never managed to find a good solution. Basically if I want to write a function that performs an operation over a given, but arbitrary axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in ...
0
1
68
0
52,994,178
0
0
0
1
1
false
1
2018-10-25T16:21:00.000
2
1
0
How to store np arrays into psql database and django
52,993,954
0.379949
python,numpy,psql
json.dumps(np_array.tolist()) is the way to convert a numpy array to json. np_array.fromlist(json.loads(json.dumps(np_array.tolist()))) is how you get it back.
I develop an application that will be used for running simulation and optimization over graphs (for instance Travelling salesman problem or various other problems). Currently I use 2d numpy array as graph representation and always store list of lists and after every load/dump from/into DB I use function np.fromlist, np...
0
1
975
0
53,008,270
0
0
0
0
1
false
0
2018-10-25T17:48:00.000
0
1
0
Training in Keras with external evaluation function
52,995,288
0
python,unity3d,keras,deep-learning,ml-agent
After some talks at my university: the setup won't work this way since I need to split the process. I need the parameters of working agents to train the network based only on the level description(e.g. matrix like video game description language). To obtain the parametrized agents based on the actual level and the grou...
Let me first describe the setup: We have an autonomous agent in Unity, whose decisions are based on the perceived environment(level) and some pre-defined parameters for value mapping. Our aim is to pre-train the agents' parameters in a DNN. So the idea is basically to define an error metric to evaluate the performance ...
0
1
78
0
52,998,511
0
0
0
0
1
false
0
2018-10-25T20:43:00.000
0
1
0
Kernel size change in convolutional neural networks
52,997,810
0
python,tensorflow,neural-network,conv-neural-network,convolution
you need 64 kernel, each with the size of (32,5,5) . depth(#channels) of kernels, 32 in this case, or 3 for a RGB image, 1 for gray scale etc, should always match the input depth, but values are all the same. e.g. if you have a 3x3 kernel like this : [-1 0 1; -2 0 2; -1 0 1] and now you want to convolve it with an in...
I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers. Convolutional layer with kernel_size = (5,5) with 32 outpu...
0
1
1,046
0
53,003,804
0
0
0
0
1
false
0
2018-10-26T07:04:00.000
0
2
0
How to get the dimension of tensors at runtime?
53,003,231
0
python,tensorflow
You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like that : _showme_tensor = sess.run(showme_tensor) and then you can just print the output as you print a list. If you have different tensors to print, you can just add them like tha...
I can get the dimensions of tensors at graph construction time via manually printing shapes of tensors(tf.shape()) but how to get the shape of these tensors at session runtime? The reason that I want shape of tensors at runtime is because at graph construction time shape of some tensors is coming as (?,8) and I cannot...
0
1
885
0
53,010,613
0
1
0
0
1
true
0
2018-10-26T09:32:00.000
0
1
0
What does "a","c","f" mean in Spyder-methods (Python)
53,005,691
1.2
python,function,class,methods,spyder
So what is the difference between (f), (a), (c) ? My first guess would be "function", "attributes", "class" but I'm not entirely sure (Spyder maintainer here) This is the right interpretation.
So, I know this might not be the place where to ask, but I can simply not figure it out! When im using Spyder and say numpy (np) when I type np. a lot of options pop up - I know most of them are functions related to np, but I kinda struggling to figure out exactly what the different calls are; they all have one letter ...
0
1
419
0
53,015,631
0
0
0
0
1
true
0
2018-10-26T19:58:00.000
1
1
0
Why is a learning curve necessary to determine if a neural network has high bias or variance?
53,015,550
1.2
python,tensorflow,machine-learning,neural-network
Yes, there is, but it's not for spotting overfitting only. But anyway, plotting is just fancy way to see numbers, and sometimes it gives you insights. If you are monitoring loss on train/validation simultaneously – you're looking at same data, obviously. Regarding Andrew's ideas – I suggest looking into Deep Learning c...
In Andrew Ng's machine learning course it is recommended that you plot the learning curve (training set size vs cost) to determine if your model has a high bias or variance. However, I am training my model using Tensorflow and see that my validation loss is increasing while my training loss is decreasing. It's my unde...
0
1
141
0
53,015,879
0
1
0
0
1
false
0
2018-10-26T20:05:00.000
1
1
0
What is the difference between max(my_array) and my_array.max()
53,015,623
0.197375
python,numpy,methods,syntax
As people have stated in the comments of your question, they are referencing two different functions. max(my_array) is a built in Python function available to any sequence data in python. Whereas, my_array.max() is referencing a function within the object. In this case my_array is from the Numpy array class. Numpy does...
For example, if I define my array as my_array = np.array([1,2,3]), What is the difference between max(my_array)and my_array.max() ? Is this one of those cases where one is 'syntactic sugar' for the other? Also, why does the first one work with a Python list, but the second doesn't?
0
1
79
0
53,020,115
0
0
0
0
1
true
1
2018-10-26T21:45:00.000
2
1
0
DenseNet in Tensorflow
53,016,653
1.2
python,tensorflow
No, tf.layers.dense implements what is more commonly known as a fully-connected layer, i.e. the basic building block of multilayer perceptrons. If you want dense blocks, you will need to to write your own implementation or use one of those you found on Github.
I am fairly new to tensorflow and I am interested in developing a DeseNet Architecture. I have found implementations from scratch on Github. I was wondering if the tensorflow API happen to implement the dense blocks. Is tensorflow's tf.layers.dense the same as the dense blocks in DenseNet? Thanks!
0
1
354
0
69,335,685
0
0
0
0
1
false
2
2018-10-27T10:53:00.000
0
1
0
python - pandas dataframe to powerpoint chart backend
53,021,158
0
python,python-3.x,pandas,powerpoint
you will need to read a bit about python-pptx. You need chart's index and slide index of the chart. Once you know them get your chart object like this-> chart = presentation.slides[slide_index].shapes[shape_index].chart replacing data chart.replace_data(new_chart_data) reset_chart_data_labels(chart) then when you save ...
I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this? P.S. The presentation is loaded using the module python-pptx
0
1
1,226
0
53,070,814
0
0
0
0
1
false
0
2018-10-29T12:47:00.000
1
2
0
partially define initial centroid for scikit-learn K-Means clustering
53,045,859
0.099668
python,machine-learning,scikit-learn,cluster-analysis,k-means
That is a very nonstandard variation of k-means. So you cannot expect sklearn to be prepared for every exotic variation. That would make sklearn slower for everybody else. In fact, your approach is more like certain regression approaches (predicting the last value of the cluster centers) rather than clustering. I also ...
Scikit documentation states that: Method for initialization: ‘k-means++’ : selects initial cluster centers for k-mean clustering in a smart way to speed up convergence. See section Notes in k_init for more details. If an ndarray is passed, it should be of shape (n_clusters, n_features) and gives the initial centers. ...
0
1
1,975
0
53,062,724
0
0
0
0
1
false
0
2018-10-30T10:45:00.000
0
2
0
Calculate mean of column for each row excluding the row for which mean is calculated
53,062,585
0
python,pandas,dataframe,machine-learning,data-science
you can dataframe["ColumnName"].mean() for single column, or dataframe.describe() for all columns
I need to calculate the mean of a certain column in DataFrame, so that means for each row is calculated excluding the value of the row for which it's calculated. I know I can iterate each row by index, dropping each row by index in every iteration, and then calculating mean. I wonder if there's a more efficient way of...
0
1
336
0
53,064,651
0
0
0
0
1
true
0
2018-10-30T12:06:00.000
1
1
0
Racecar image tagging
53,063,944
1.2
python,tensorflow,computer-vision,object-detection,image-recognition
The best approach would be to use all 3 methods as an ensamble. You train all 3 of those models, and pass the input image to all 3 of them. Then, there are several ways how you can evaluate output. You can sum up the probabilities for all of the classes for all 3 models and then draw a conclusion based on the highest...
I am working on a system to simplify our image library which grows anywhere from 7k to 20k new pictures per week. The specific application is identifying which race cars are in pictures (all cars are similar shapes with different paint schemes). I plan to use python and tensorflow for this portion of the project. My ...
0
1
43
0
53,066,538
0
0
0
0
1
false
2
2018-10-30T14:18:00.000
1
1
0
When to use tensorflow estimators?
53,066,376
0.197375
python,tensorflow
This is a very opinionated answer but I will still write it: The Estimator-API was developed to simplify building and sharing models. You could compare it with Keras and in fact Estimators is built with tf.keras.layers so one could say it is a simplification of a simplification. This is obviously good for beginners or ...
I have a general tensorflow question about when to use estimators. I feel sometimes estimators are not convenient to build something, since we need to meet some fixed requirements when building the graph. On the other hand, using lower level api can be tedious sometimes. Therefore, I want to ask when it is proper to us...
0
1
52
0
66,114,558
0
0
0
0
2
false
2
2018-10-30T17:56:00.000
0
2
0
Get list of Keras variables
53,070,199
0
python-3.x,tensorflow,keras
To get the variable's name you need to access it from the weight attribute of the model's layer. Something like this: names = [weight.name for layer in model.layers for weight in layer.weights] And to get the shape of the weight: weights = [weight.shape for weight in model.get_weights()]
I'd like to compare variables in a Keras model with those from a TensorFlow checkpoint. I can get the TF variables like this: vars_in_checkpoint = tf.train.list_variables(os.path.join("./model.ckpt")) How can I get the Keras variables to compare from my model?
0
1
2,810
0
53,163,169
0
0
0
0
2
false
2
2018-10-30T17:56:00.000
2
2
0
Get list of Keras variables
53,070,199
0.197375
python-3.x,tensorflow,keras
You can get the variables of a Keras model via model.weights (list of tf.Variable instances).
I'd like to compare variables in a Keras model with those from a TensorFlow checkpoint. I can get the TF variables like this: vars_in_checkpoint = tf.train.list_variables(os.path.join("./model.ckpt")) How can I get the Keras variables to compare from my model?
0
1
2,810
0
55,940,953
0
0
0
1
1
false
0
2018-10-31T05:57:00.000
0
1
0
Google Cloud Platform int64_field_0
53,077,155
0
python,csv
Per the comments, using Pandas Data Frame's pd.to_csv(filename, index=false) resolved the issue.
We are getting an extra column 'int64_field_0' while loading data from CSV to BigTable in GCP. Is there any way to avoid this first column. We are using the method load_table_from_file and setting option AutoDetect Schema as True. Any suggestions please. Thanks.
0
1
182
0
53,079,992
0
0
0
0
2
true
0
2018-10-31T09:02:00.000
1
2
0
adding noise to an array. Is it addition or multiplication?
53,079,698
1.2
python,numpy,noise,conceptual
Well as you have said it yourself, the problem is that you don't know what you want. Both methods will increase the entropy of the original data. What is the purpose of your task? If you want to simulate something like sensor noise, the addition will do just fine. You can try both and observe what happens to the distri...
I have some code that just makes some random noise using the numpy random normal distribution function and then I add this to a numpy array that contains an image of my chosen object. I then have to clip the array to between values of -1 and 1. I am just trying to get my head round whether I should be adding this to th...
0
1
424
0
53,080,222
0
0
0
0
2
false
0
2018-10-31T09:02:00.000
2
2
0
adding noise to an array. Is it addition or multiplication?
53,079,698
0.197375
python,numpy,noise,conceptual
It depends what sort of physical model you are trying to represent; additive and multiplicative noise do not correspond to the same phenomenon. Your image can be considered a variable that changes through time. Noise is an extra term that varies randomly as time passes. If this noise term depends on the state of the im...
I have some code that just makes some random noise using the numpy random normal distribution function and then I add this to a numpy array that contains an image of my chosen object. I then have to clip the array to between values of -1 and 1. I am just trying to get my head round whether I should be adding this to th...
0
1
424
0
53,091,954
0
0
0
0
1
true
1
2018-10-31T14:50:00.000
5
1
0
What's the reason for the weights of my NN model don't change a lot?
53,086,166
1.2
python,machine-learning,neural-network,torch
There are almost always many local optimal points in a problem so one thing you can't say specially in high dimensional feature spaces is which optimal point your model parameters will fit into. one important point here is that for every set of weights that you are computing for your model to find a optimal point, beca...
I am training a neural network model, and my model fits the training data well. The training loss decreases stably. Everything works fine. However, when I output the weights of my model, I found that it didn't change too much since random initialization (I didn't use any pretrained weights. All weights are initialized ...
0
1
157
0
53,088,607
0
0
0
0
1
false
0
2018-10-31T16:44:00.000
1
1
0
Why does opencv on Canopy downgrade numpy, scipy, and other packages when I try to install it?
53,088,354
0.197375
python-2.7,opencv,package,enthought,canopy
You haven't provided any version or platform information. But perhaps you are using an old Canopy version (current is 2.1.9), or perhaps you are using the subscriber-only "full" installer, which is only intended for airgapped or other non-updateable systems. Otherwise, the currently supported version of opencv is 3.2.0...
On my package manager for canopy, every time I try to download opencv it downgrades several other important packages. I am then not able to upgrade those same packages or run my code. How can I download opencv without downgrading my other packages?
0
1
113
0
53,114,849
0
0
0
0
1
true
0
2018-11-01T00:13:00.000
2
1
0
Deeplearning with electroencephalography (EEG) data
53,093,576
1.2
python,deep-learning,neuroscience
It depends on what you want to test. A test set is used to estimate the generalization (i.e. performance on unseen data). So the question is: Do want to estimate the generalization to unseen data from the same participants (whose data was used to train the classifier)? Or do you want to estimate the generalization to ...
I am making a convolutional network model with which I want to classify EEG data. The data is an experiment where participants are evoked with images of 3 different classes with 2 subclasses each. To give a brief explanation about the dataset size, a subclass has ±300 epochs of a given participant (this applies for all...
0
1
174
0
53,094,747
0
0
0
0
1
true
0
2018-11-01T03:08:00.000
0
1
0
cv2 show video stream & add overlay after another function finishes
53,094,695
1.2
python,cv2
A common approach would be to create a flag that allows the detection algorithim to only run once every couple of frames and save the predicted reigons of interest to a list, whilst creating bounding boxes for every frame. So for example you have a face detection algorithim, process every 15th frame to detect faces, bu...
I am current working on a real time face detection project. What I have done is that I capture the frame using cv2, do detection and then show result using cv2.imshow(), which result in a low fps. I want a high fps video showing on the screen without lag and a low fps detection bounding box overlay. Is there a solutio...
0
1
413
0
53,096,595
0
0
0
0
1
false
0
2018-11-01T04:04:00.000
0
1
0
How to choose the right neural network in a binary classification problem for a unbalanced data?
53,095,061
0
python-3.x,keras
First of all, two features is a really small amount. Neural Networks are highly non-linear models with a really really high amount of freedom degrees, thus if you try to train a network with more than just a couple of networks it will overfit even with balanced classes. You can find more suitable models for a small dim...
I am using keras sequential model for binary classification. But My data is unbalanced. I have 2 features column and 1 output column(1/0). I have 10000 of data. Among that only 20 results in output 1, all others are 0. Then i have extended the data size to 40000. Now also only 20 results in output 1, all others are 0. ...
0
1
63
0
53,112,148
0
0
0
0
1
false
0
2018-11-01T22:25:00.000
0
2
0
GROUPBY with showing all the columns
53,110,240
0
python,pandas,dataframe,group-by
Did you try this : d_copy.groupby(['CITYS','MODELS']).mean() to have the average percentage of a model by city. Then if you want to catch the percentages you have to convert it in DF and select the column : pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE']
I want to do a groupby of my MODELS by CITYS with keeping all the columns where i can print the percentage of each MODELS IN THIS CITY. I put my dataframe in PHOTO below. And i have written this code but i don"t know how to do ?? for name,group in d_copy.groupby(['CITYS'])['MODELS']:
0
1
60
0
53,119,543
0
0
0
0
1
true
0
2018-11-02T13:24:00.000
2
1
0
Autoencoder Decoded Output
53,119,402
1.2
python,tensorflow,autoencoder
I guess the question is whether your returned signal is a faithful representation of the input signal (but it's just constrained to the range 0 to 1)? If so, you could simply multiply it by 79, and then subtract 47. We'd need to see code if it's more than just a scaling issue.
I am trying to build an AutoEncoder, where I am trying to de-noise the signal. Now, for example, the amplitude range of my input signal varies in between -47 to +32. But while I am getting the decoded signal (reconstructed one), that signal only ranges in between 0 to +1 amplitude. How can I get my reconstructed signa...
0
1
38
0
53,125,246
0
0
0
0
1
false
1
2018-11-02T19:35:00.000
1
1
0
K-Nearest Neighbors find all ties
53,124,843
0.197375
python,algorithm,pandas,numpy,scikit-learn
In theory, all points in the set may tie, making the problem a different one. Indeed, the K nearest neighbors can be reported in time O(Log N + K) in the absence of ties, whereas ties can imply K = O(N) making any solution O(N). In practice, if the coordinates are integer, the ties will be a rare event, unless the prob...
I'm currently using sklearn to compute all the k-nearest neighbors from a dataset. Say k = 10. The problem I'm having is sklearn will only return the 10 nearest neighbors and none of the other data points that may tie for the 10th nearest neighbor in terms of distance. I was wondering is there any efficient way to find...
0
1
203
0
53,135,661
0
0
0
0
1
true
0
2018-11-03T19:48:00.000
1
1
0
Which classification model do you suggest for predicting a credit score?
53,134,930
1.2
python,machine-learning,classification
Do you have credit scores? Without labeled data I think you might consider reformulating the problem. If you do, then you can implement any number of regression algorithms from OLS all the way up to an ANN. Rather than look for the "one true" algorithm, many projects implement TPOT or grid search as part of model selec...
I have a data set that contains information about whether medium-budget companies can get loans. There are data on the data set that approximately 38,000 different companies will receive loans. And based on this data, I'm trying to estimate each company's credit score. What would be your suggestion?
0
1
155
0
53,145,276
0
0
0
0
1
true
0
2018-11-04T20:33:00.000
0
1
0
f1 or accuracy scoring after downsampling - classification, svm - Python
53,145,190
1.2
python,classification
If you've rebalanced your data, then it's not unbalanced anymore and I see no problem with using accuracy as the success metric. Accuracy can mislead you in very skewed datasets but since it isn't skewed anymore, it should work.
I have a dataset consisting in 15 columns and 3000 rows to train a model for a binary classification. There is a imbalance for y (1:2). Both outcomes (0,1) are equally important. After downsampling (because the parameter class_weight = balanced didn't work well), I used the parameter scoring = "f1", because I read tha...
0
1
122
0
53,565,096
0
0
0
0
1
false
0
2018-11-05T05:57:00.000
1
1
0
Does catboost implements xgboost (extreme gradient boosting) or a simple gradient boosting?
53,149,072
0.197375
python,xgboost,catboost
Gradient boosting is a meta-algorithm. There is no simple gradient boosting. Each boosting library uses their own unique algorithm to search for regression trees, as a result, we obtain different results. Extreme gradient boosting is just an implementation of standard gradient boosting on decision trees from xgboost wi...
On their website they say 'gradient boosting' but it seems people here compare it to other 'xgboost' algorithm. I would like to know whether it is a real extreme gradient boosting algorithm. thanks
0
1
102
0
53,159,976
0
0
0
0
2
true
1
2018-11-05T18:14:00.000
1
2
0
What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?
53,159,930
1.2
python,keras,conv-neural-network
If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel. If you don't want to convolve along the dimension of your channels, you should use a 2D CNN.
I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image. I want to inpu...
0
1
138
0
53,163,769
0
0
0
0
2
false
1
2018-11-05T18:14:00.000
1
2
0
What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?
53,159,930
0.099668
python,keras,conv-neural-network
What you want is a 2D CNN, not a 3D one. A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image.
I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image. I want to inpu...
0
1
138
0
53,161,713
0
0
0
0
1
true
0
2018-11-05T18:52:00.000
0
1
0
Applying Network flows
53,160,428
1.2
java,python,algorithm,computer-science,network-flow
Create vertices for each student and each school. Draw an edge with capacity 1 from each student to each school that they can attend according to your distance constraint. Create a source vertex with edges to each student with a capacity of 1. Create a sink vertex with edges coming in from each school with capacities e...
So I've recently started looking into network flows (Max flow, min cuts, etc) and the general problems for network flow always involve assigning "n" of something to "k" of another thing. For example, how would I set up a network flow for "n" children in a city that has "k" schools such that the children's homes are wit...
0
1
146
0
53,161,823
0
0
0
0
1
false
2
2018-11-05T19:38:00.000
1
3
0
How to return a new dataframe excluding certain columns?
53,161,078
0.066568
python,pandas,numpy,dataframe,indexing
To build on @sven-harris answer. List the columns: remove = [x for x in df.columns if 'job' in x or 'birth' in x] remove += ['name', 'userID', 'IgID'] df = df.drop(remove, axis=1) # axis=1 to drop columns, 0 for rows.
I am trying to take a dataframe df and return a new dataframe excluding any columns with the word 'job' in its name, excluding any columns with the string 'birth' in its name, and excluding these columns: name, userID, lgID. How can I do that?
0
1
149
0
55,496,917
0
0
0
0
1
true
1
2018-11-05T21:51:00.000
1
1
0
Joblib persistence and Pandas
53,162,741
1.2
python,python-3.x,pandas,parallel-processing,joblib
Since Pandas data frames are built on Numpy arrays, yes, they will be persisted. Joblib implements its optimized persistence by hooking in to the pickle protocol. Anything that includes numpy arrays in its pickled representation will benefit from Joblib's optimizations.
There is good documentation on persisting Numpy arrays in Joblib using a memory-mapped file. In recent versions, Joblib will (apparently) automatically persist and share Numpy arrays in this fashion. Will Pandas data frames also be persisted, or would the user need to implement persistence manually?
0
1
488
0
53,169,648
0
0
0
0
1
false
5
2018-11-06T01:51:00.000
3
1
0
Difference between Conv3d vs Conv2d
53,164,733
0.53705
python,tensorflow,neural-network,deep-learning,convolution
If you have a stack of images, you have a video. You can not have two input forms. You have either images or videos. For the video case you can use 3D convolution and 2D convolution is not defined for it. If you stack the channels as you mentioned it (3N) the 2D convolution will interpret the stack as one image with a ...
I am a little confused with the difference between conv2d and conv3d functions. For example, if I have a stack of N images with H height and W width, and 3 RGB channels. The input to the network can be two forms form1: (batch_size, N, H, W, 3) this is a rank 5 tensor form2: (batch_size, H, W, 3N ) this is a rank 4 ten...
0
1
10,046
0
53,166,406
0
1
0
0
1
false
0
2018-11-06T05:41:00.000
5
1
0
Family tree in Python
53,166,322
0.761594
python,algorithm,family-tree
There's plenty of ways to skin a cat, but I'd suggest to create: A Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children). A dictionary mapping names to Person elements. That should allow you to answer all of the necessary questions, and it's flex...
I need to model a four generational family tree starting with a couple. After that if I input a name of a person and a relation like 'brother' or 'sister' or 'parent' my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of python and self taught in DSA. I think I should mode...
0
1
6,151
0
53,184,428
0
0
0
0
1
false
0
2018-11-06T07:03:00.000
0
1
0
Tensorflow MixtureSameFamily and gaussian mixture model
53,167,161
0
python,tensorflow,gmm
I found an answer for above question thanks to my collegue. The 4 components of gaussian mixture have had very similar means that the mixture seems like it has only one mode. If I put four explicitly different values as means to the MixtureSameFamily class, I could get a plot of gaussian mixture with 4 different modes...
I am really new to Tensorflow as well as gaussian mixture model. I have recently used tensorflow.contrib.distribution.MixtureSameFamily class for predicting probability density function which is derived from gaussian mixture of 4 components. When I plotted the predicted density function using "prob()" function as Tenso...
0
1
339
0
53,173,320
0
0
0
0
1
true
1
2018-11-06T10:07:00.000
0
1
0
Is there any good way to read the content of a Spark RDD into a Dask structure
53,169,690
1.2
python,pyspark,dask,dask-distributed,fastparquet
I solved this by doing the following Having a Spark RDD with a list of custom objects as Row values I created a version of the rdd where I serialised the objects to strings using cPickle.dumps. Then converted this RDD to a simple DF with string columns and wrote it to parquet. Dask is able to read parquet files with si...
Currently the integration between Spark structures and Dask seems cubersome when dealing with complicated nested structures. Specifically dumping a Spark Dataframe with nested structure to be read by Dask seems to not be very reliable yet although the parquet loading is part of a large ongoing effort (fastparquet, pyar...
0
1
460
0
53,183,497
0
0
0
0
2
true
0
2018-11-07T02:17:00.000
0
2
0
Pit in LSTM programming by python
53,182,773
1.2
python-3.x,tensorflow,keras,lstm,rnn
No, samples is different from batch_size. samples is the total number of samples you would have. batch_size would be the size of each batch or the number of samples per each batch used in training, like by .fit. For example, if samples=128 and batch_size=16, then your data would be divided into 8 batches with each hav...
As we all Know, if we want to train a LSTM network, we must reshape the train dataset by the function numpy.reshape(), and reshaping result is like [samples,time_steps,features]. However, the new shape is influenced by the original one. I have seen some blogs teaching LSTM programming taking 1 as time_steps, and if tim...
0
1
74