| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 59,083,949 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-28T07:38:00.000 | 0 | 1 | 0 | How to find anomalies in wind-sensor TimeSeries data? | 59,083,825 | 0 | python,machine-learning,deep-learning,time-series,data-science-experience | Very broad question so this will be a generic/broad answer:
To define anomalies you'll need to think about and define what you consider normal.
Usually we consider two things in terms of (time series) data:
data availability:
is the data there that you expect? Usually monitored by looking at a row count over time (are yo... | I have a time series data set which contains a TimeStamp [hour base] and wind sensor values. I need to find anomalies from this data set.
What are the techniques to find anomalies?
How can I find anomalies with only these two features (TimeStamp, sensor-value)? | 0 | 1 | 50 |
0 | 59,084,811 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-28T07:51:00.000 | 0 | 1 | 0 | How to Execute R Machine Learning Model using Python REST API? | 59,084,012 | 0 | python,r,rest,api,rpy2 | You will need a Python-based REST framework like Flask (Django or Pyramid will also do) to do this. You need to understand how to write a REST API by going through their respective docs. You can basically hide the R model behind a REST resource. This resource will be responsible for receiving the inputs for the model. Us... | I need some help executing an R machine learning model via a Python REST API.
I have 2 executable ML models, developed in the R language by my colleague.
Now I need to deploy those models as a REST API, as we need to pass some input parameters and get output from those models as a return value.
I found that we can do this wi... | 0 | 1 | 331 |
0 | 59,116,976 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-28T12:32:00.000 | 1 | 2 | 0 | NN multidim regression with matrix as output | 59,089,002 | 0.099668 | python,tensorflow,keras,neural-network,regression | This is the answer to a question regarding unknown labels.
You have to know the labels before using any supervised algorithm. Otherwise, there is no way you can train a model. You need to think of solving this problem by employing one of the unsupervised techniques, such as the k-means algorithm, Gaussian Mixture Models, or Cl... | I want to build a simple NN for regression purposes; the dimension of my input data reads (100000,3): meaning I have 1mio particles and their corresponding x,y,z coordinates. Out of these particles I want to predict the centers to which the particles correspond, where the data of the centers reads (1000,3).
my question is: s... | 0 | 1 | 68 |
0 | 59,106,465 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-28T12:32:00.000 | 1 | 2 | 0 | NN multidim regression with matrix as output | 59,089,002 | 0.099668 | python,tensorflow,keras,neural-network,regression | All you have to do is to match the sizes. Assuming you know which particle belongs to which center, that shouldn't be too hard.
So in your case you should have a matrix of (1000000,3) (atoms) and a vector of (1000000,) (centers) as their labels. This means that each entry in the vector corresponds to one row in the atom mat... | I want to build a simple NN for regression purposes; the dimension of my input data reads (100000,3): meaning I have 1mio particles and their corresponding x,y,z coordinates. Out of these particles I want to predict the centers to which the particles correspond, where the data of the centers reads (1000,3).
my question is: s... | 0 | 1 | 68 |
0 | 59,095,070 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-28T16:34:00.000 | 0 | 2 | 0 | Is there any supervised clustering algorithm or a way to apply prior knowledge to your clustering? | 59,093,163 | 0 | python,machine-learning,cluster-analysis,unsupervised-learning,supervised-learning | A standard approach would be to use the dendrogram.
Then merge branches only if they agree with your positive examples and don't violate any of your negative examples. | In my case I have a dataset of letters and symbols, detected in an image. The detected items are represented by their coordinates, type (letter, number etc), value, orientation and not the actual bounding box of the image. My goal is, using this dataset, to group them into different "words" or contextual groups in gene... | 0 | 1 | 1,973 |
0 | 59,107,037 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-29T08:58:00.000 | 0 | 1 | 0 | Is there a way to use pre-trained R ML model in python web app? | 59,101,644 | 0 | python,r,machine-learning,web-applications | Not sure what calling R code from Python has to do with ML models.
If you have a trained model, you can try converting it into ONNX format (emerging standard), and try using the result from Python. | More of a theoretical question:
Use case: Create an API that takes JSON input, triggers an ML algorithm inside it and returns a result to the user.
I know that in case of python ML model, I could just pack whole thing into pickle and use it easily inside of my web app. The problem is that all our algorithms are current... | 0 | 1 | 60 |
0 | 59,104,620 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-11-29T12:10:00.000 | -1 | 3 | 0 | Effective way to map 15k cities in Python | 59,104,543 | -0.066568 | python,algorithm,sorting,dataframe | Try mapping by the first letters of a city; that will reduce your workload | I have a data set of around 15k observations. These observations are city names from all over the world. This data set has been populated by people from many different countries, which means that I have several duplicates of the same city in different languages. See the DF extract below:
city_name
bruselas
brussel
brussels
b... | 0 | 1 | 333 |
0 | 59,178,620 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2019-11-30T03:20:00.000 | 1 | 1 | 1 | Encountering "Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED" on a previously working system | 59,112,898 | 1.2 | python,tensorflow | There was indeed a system-wide upgrade.
Updating CUDA to 10.2, the Nvidia driver to 440, and libcudnn7 to 7.6.5 fixed the problem. | Everything was OK around a week ago.
Even though I am running on a server, I really don't think much has changed.
Wonder what could have caused it.
Tensorflow has version 2.1.0-dev20191015
Anyway, here is the GPU status:
NVIDIA-SMI 430.50
Driver Version: 430.50
CUDA Version: 10.1
Epoch 1/5
2019-11-29 22:08:00.334979:... | 0 | 1 | 389 |
0 | 59,117,449 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-11-30T10:43:00.000 | 1 | 3 | 0 | ModuleNotFoundError: No module named 'tensorflow.python' Anaconda | 59,115,365 | 1.2 | python,tensorflow,anaconda | Apparently the reason was the Python version (which is strange as according to documentation Tensorflow supports Python 3.7). I downgraded to 3.6 and I am able to import Tensorflow again | I have been using Tensorflow on Anaconda for a while now, but recently I have been getting the mentioned error when trying to import Tensorflow.
This has been asked here multiple times, so I tried suggested solutions but nothing worked so far (reinstalling tensorflow (both normal and gpu versions), reinstalling Anacon... | 0 | 1 | 1,414 |
0 | 59,123,027 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-11-30T13:06:00.000 | 0 | 4 | 0 | No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package | 59,116,456 | 0 | python,tensorflow | Found a noob problem: I had named my file csv.py, which already exists as a module in the Python standard library, and I think that was messing up the import paths. I don't know exactly how yet. | Everything was working smoothly until I started getting the following error:
Traceback (most recent call last):
File "", line 1, in
File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in
from tensorflow_core import *
File "/home... | 0 | 1 | 12,652 |
0 | 68,268,774 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-11-30T13:06:00.000 | 0 | 4 | 0 | No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package | 59,116,456 | 0 | python,tensorflow | You don't need to uninstall whichever version of tensorflow you currently have, because that will take time to reinstall. You can fix this issue just by installing tensorflow==2.0:
pip install tensorflow==2.0 | Everything was working smoothly until I started getting the following error:
Traceback (most recent call last):
File "", line 1, in
File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in
from tensorflow_core import *
File "/home... | 0 | 1 | 12,652 |
0 | 60,205,686 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-11-30T13:06:00.000 | 1 | 4 | 0 | No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package | 59,116,456 | 0.049958 | python,tensorflow | I just faced this problem right now. I ran the source code on another computer and it showed the same error. I went ahead and compared the TensorFlow versions, and it turns out that the other computer was running tensorflow==2.1.0 and mine was running tensorflow==1.14.0.
In short, downgrade your tensorflow installation ... | Everything was working smoothly until I started getting the following error:
Traceback (most recent call last):
File "", line 1, in
File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in
from tensorflow_core import *
File "/home... | 0 | 1 | 12,652 |
0 | 59,144,477 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2019-12-02T16:47:00.000 | 3 | 1 | 0 | Performance difference in json data into BigQuery loading methods | 59,143,310 | 1.2 | python,google-bigquery | These are two different loading mechanisms, and each has its own limits.
Load from file is great if your data can be placed in files. A file can be up to 5TB in size. This load is free. You can query data immediately after completion.
The streaming insert is great if you have your data in the form of events that you can stre... | What is the performance difference between two JSON loading methods into BigQuery:
load_table_from_file(io.StringIO(json_data)) vs create_rows_json
The first one loads the file as a whole and the second one streams the data. Does it mean that the first method will be faster to complete, but binary, and the second one sl... | 0 | 1 | 88 |
0 | 59,149,250 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-03T01:23:00.000 | 0 | 1 | 0 | Multiple Labels as Training data for ML | 59,148,940 | 0 | python,machine-learning,keras | You can use the method apply_transform of the ImageDataGenerator in which you can specify the parameters of the transformation you want while for example saving these parameters in a list or another structure, using them later as features. | Using Keras in Python to create a CNN that pumps out the angle of rotation and zoom of an image. I am working on create the training data. I have a few questions though.
I plan on using Keras Preprocessing as the tool to manipulate before I train, but is there a way to save what angle and zoom is used so that I can us... | 0 | 1 | 44 |
0 | 59,160,973 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-03T14:42:00.000 | 0 | 1 | 0 | How to deal with different categories in pytorch train, test, and holdout set | 59,159,578 | 0 | python,pytorch | u can try to use one hot encoding instead
PS: this is a suggestion not an answer | I have a tabular pytorch model that takes in cities and zipcodes as a categorical embeddings. However, I can't stratify effectively based on those columns.
How can I get pytorch to run if it's missing a categorical value in the test set that's not in the train set, or has a categorical value in the holdout set that w... | 0 | 1 | 39 |
0 | 59,160,453 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-03T15:24:00.000 | 2 | 2 | 0 | Is there a function to convert a string into a number and back for machine learning | 59,160,301 | 1.2 | python,pandas,keras | You should consider a one-hot encoding, which can be done easily with pandas via the get_dummies function. This will create binary columns for each "category" (i.e. unique string). | I have a lot of strings in a pandas dataframe, I want to assign every string a number for keras.
The strings represent locations, for example:
CwmyNiVcURtyAf+o/6wbAg==
I want to turn it into a number and back again. I'm using keras, tensorflow and pandas. Does one of the modules contain a function which does that? Or do I have to w... | 0 | 1 | 373 |
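A small sketch of the two usual pandas options for the question above: pd.factorize gives a reversible string-to-integer mapping ("a number and back again"), while pd.get_dummies gives one-hot columns. The sample strings are made up.

```python
import pandas as pd

df = pd.DataFrame({"location": ["CwmyNiVcURtyAf+o/6wbAg==", "abc", "abc"]})

# Reversible integer encoding: string -> code, and code -> string again
codes, uniques = pd.factorize(df["location"])
restored = uniques[codes]          # maps the integer codes back to the strings

# Alternative: one-hot encoding, one binary column per unique string
one_hot = pd.get_dummies(df["location"], prefix="loc")
```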
0 | 59,172,700 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-04T09:22:00.000 | 0 | 2 | 0 | Emphasis on a feature while training a vanilla nn | 59,172,607 | 0 | python-3.x,machine-learning,scikit-learn,neural-network,hyperparameters | First of all, I would make sure that this feature alone has a decent prediction probability, but I am assuming that you already made sure of it.
Then, one approach that you could take, is to "embed" your 359 other features in a first layer, and only feed in your special feature once you have compressed the remaining in... | I have some 360 odd features on which I am training my neural network model.
The accuracy I am getting is abysmally bad. There is one feature amongst the 360 that is more important than the others.
Right now, it does not enjoy any special status amongst the other features.
Is there a way to lay emphasis on one of the f... | 0 | 1 | 28 |
0 | 59,182,082 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-04T16:14:00.000 | 1 | 1 | 0 | How to represent the trend (upward/downward/no change) of data? | 59,180,323 | 0.197375 | python,pandas,math,regression,data-science | If you expect the trend to be linear, you could fit a linear regression to each row separately, using time to predict number of occurences of a behavior. Then store the slopes.
This slope represents the effect of increasing time by 1 episode on the behavior. It also naturally accounts for the difference in length of th... | I have a dataset where each row represents the number of occurrences of certain behaviour. The columns represent a window of a set amount of time. It looks like this:
+----------+----------+----------+----------+-----------+------+
| Episode1 | Episode2 | Episode3 | Episode4 | Episode5 | ... |
+----------+----------... | 0 | 1 | 192 |
0 | 59,188,878 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-05T04:34:00.000 | 0 | 1 | 0 | Categorical variables with only two values | 59,188,318 | 0 | python-3.x,encoding,categorical-data,one-hot-encoding,labeling | There are only a few cases where LabelEncoder is useful because of the ordinality issue. If your categorial features are ordinal then use LabelEncoder otherwise use One-hot encoding. But, One-hot encoding increases dimension. In this case, I typically employ One-hot encoding followed by PCA for dimensionality reduction... | I am dealing with different datasets that have only Categorical variables/features with only two values such as (temperature = 'low' and 'high') or (light = 'on' and 'off' or '0' and '1').
I am not really sure whether to use "one-hot encoding" or "Label Encoding" method to train my models.
I am working on a classific... | 0 | 1 | 239 |
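For two-valued features the two encodings coincide up to a column name: one-hot with the redundant column dropped is a single 0/1 column, which is exactly what label encoding would produce. A minimal illustration (the column names are made up):

```python
import pandas as pd

df = pd.DataFrame({"temperature": ["low", "high", "low"],
                   "light": ["on", "off", "on"]})
# drop_first=True keeps one 0/1 column per binary feature, avoiding redundancy
encoded = pd.get_dummies(df, drop_first=True)
print(encoded)  # columns like temperature_low and light_on
```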
0 | 59,196,692 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-12-05T06:08:00.000 | 0 | 3 | 0 | Find the maximum result after collapsing an array with subtractions | 59,189,207 | 0 | python,arrays,algorithm | The other answers are fine, but here's another way to think about it:
If you expand the result into individual terms, you want all the positive numbers to end up as additive terms, and all the negative numbers to end up as subtractive terms.
If you have both signs available, then this is easy:
Subtract all but one of ... | Given an array of integers, I need to reduce it to a single number by repeatedly replacing any two numbers with their difference, to produce the maximum possible result.
Example1 - If I have array of [0,-1,-1,-1] then performing (0-(-1)) then (1-(-1)) and then (2-(-1)) will give 3 as maximum possible output
Example2- [... | 0 | 1 | 113 |
0 | 59,281,571 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-12-05T10:37:00.000 | 4 | 1 | 0 | Is it Valid to Aggregate SHAP values to Sets of of Features? | 59,193,277 | 0.664037 | python,shap | From Lundberg, package author: "The short answer is yes, you can add up SHAP values across the columns to get the importance of a whole group of features (just make sure you don't take the absolute value like we do when going across rows for global feature importance).
The long answer is that when Shapley values "fairl... | SHAP values seem to be additive and e.g. the overall feature importance plot simply adds the absolute SHAP values per feature and compares them. This allows us to use SHAP for global importance aswell as local importance. We could also get feature importance for a particular subset of data records the same way.
By t... | 0 | 1 | 1,514 |
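A sketch of the aggregation the answer describes, assuming you already have an (n_samples, n_features) matrix of SHAP values from an explainer; the group indices and placeholder numbers are arbitrary.

```python
import numpy as np

# shap_values: assumed (n_samples, n_features) output of a SHAP explainer
shap_values = np.random.randn(100, 6)   # placeholder values for illustration
group_cols = [1, 3, 4]                  # hypothetical feature group

# Per row: sum the signed values across the group's columns (no abs here)
group_attribution = shap_values[:, group_cols].sum(axis=1)

# Global importance of the group: mean absolute aggregated attribution
group_importance = np.abs(group_attribution).mean()
```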
0 | 59,200,836 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-12-05T17:41:00.000 | 0 | 3 | 0 | Combining logistic and continuous regression with scikit-learn | 59,200,594 | 0 | python,machine-learning,scikit-learn,regression | If your target data Y has multiple columns you need to use multi-task learning approach. Scikit-learn contains some multi-task learning algorithms for regression like multi-task elastic-net but you cannot combine logistic regression with linear regression because these algorithms use different loss functions to optimiz... | In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns.
I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C.
I would like to train a model on the columns of X to predict the columns of y. However, having tr... | 0 | 1 | 458 |
0 | 59,203,141 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-12-05T17:41:00.000 | 0 | 3 | 0 | Combining logistic and continuous regression with scikit-learn | 59,200,594 | 0 | python,machine-learning,scikit-learn,regression | What I understand you want to do is to train a single model that predicts both a continuous variable and a class. You would need to combine both losses into one single loss to be able to do that, which I don't think is possible in scikit-learn. However, I suggest you use a deep learning framework (tensorflow, pytorc... | In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns.
I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C.
I would like to train a model on the columns of X to predict the columns of y. However, having tr... | 0 | 1 | 458 |
0 | 59,209,501 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-06T08:07:00.000 | 0 | 1 | 0 | The naming and the sorting of the trained RF model's features in Python | 59,209,196 | 1.2 | python,python-3.x,machine-learning,data-science,random-forest | The algorithm works independently of your column names. You can name your columns whatever you want in most algorithms (except fbprophet etc.).
But there is one important point here:
When you want to predict on a dataset, you need to supply its columns in the same order as the columns the model was trained on.
In your case yo... | So I have trained a RandomForest model on a fairly simple customer data. The prediction is either 1 or 0 telling if a customer will churn or not.
Let's say I have 10 features called 'f1', 'f2', 'f3' and so on... As the model has already been trained I took another period of the similar data to see how the model perfor... | 0 | 1 | 110 |
0 | 59,209,729 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-06T08:38:00.000 | 0 | 1 | 0 | How to create blender with opencv-python | 59,209,586 | 0 | python,python-3.x,opencv | Patented stuff is usually just moved to the contrib repository, so you have to clone the original OpenCV repo, then add the contrib over and maybe modify a few compile options to get all your required things back and running. | I'm using opencv3.4 to stitch images with a lot of customization, but cv.detail_MultiBandBlender seems only avaiable in opencv 4.x but with 4.x surf is "patented and is excluded". Is there any hack so that I can use blender with opencv-python3.4? | 0 | 1 | 427 |
0 | 59,212,992 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-12-06T08:56:00.000 | 1 | 2 | 0 | Will removing a column having same values for all observations affect my model? | 59,209,830 | 0.099668 | python,r,pandas,machine-learning,data-science | A Machine Learning Model is nothing but a mathematical equation i.e.
y = f(x)
in which
y = Target/Dependent Variable
f(x) = Independent Variables(In our case a DataFrame containing the Train/Test Data)
So technically, an ML model quantifies and estimates, for a given value of X, what the probable output y will be.
Assumin... | One of the columns in my dataset has the same value for all observations/rows.
Should I remove that column while building a machine learning model?
Will removing this column affect my model/performance metric?
If I replace all the values with a different constant value, will it change the model/performance metric? | 0 | 1 | 1,263 |
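A one-liner that implements the advice in both answers for this question: drop every column with a single distinct value (toy data):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "constant": [7, 7, 7]})
df = df.loc[:, df.nunique() > 1]   # keeps only columns with more than one value
```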
0 | 59,210,087 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-12-06T08:56:00.000 | 2 | 2 | 0 | Will removing a column having same values for all observations affect my model? | 59,209,830 | 0.197375 | python,r,pandas,machine-learning,data-science | If one of the columns in your dataset has the same value everywhere, you can drop it, as it will not help your model differentiate between two different labels; on the other hand, it can even negatively affect your model by creating a bias in the data.
For Example: Consider you have two different f... | One of the columns in my dataset has the same value for all observations/rows.
Should I remove that column while building a machine learning model?
Will removing this column affect my model/performance metric?
If I replace all the values with a different constant value, will it change the model/performance metric? | 0 | 1 | 1,263 |
0 | 64,245,990 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2019-12-06T13:36:00.000 | 5 | 3 | 0 | python tsne.transform does not exist? | 59,214,232 | 0.321513 | python,machine-learning | As the accepted answer says, there is no separate transform method and it probably wouldn't work in a train/test setting.
However, you can still use TSNE without information leakage.
Training Time
Calculate the TSNE per record on the training set and use it as a feature in classification algorithm.
Testing Time
Appen... | I am trying to transform two datasets: x_train and x_test using tsne. I assume the way to do this is to fit tsne to x_train, and then transform x_test and x_train. But, I am not able to transform any of the datasets.
tsne = TSNE(random_state = 420, n_components=2, verbose=1, perplexity=5, n_iter=350).fit(x_train)
I ass... | 0 | 1 | 6,221 |
0 | 59,246,265 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-12-06T14:08:00.000 | 0 | 1 | 0 | How to build whl package for pandas? | 59,214,739 | 1.2 | python,pandas,python-wheel | You cannot pack back an installed wheel. Either you download a ready-made wheel with pip download or build from the sources: python setup.py bdist_wheel (you need to download the sources first). | Hi, I have a Python 2.7 environment set up on Ubuntu 19.10.
I would like to build a .whl package for pandas.
I pip-installed pandas but do not know how to pack it into a .whl package.
May I ask what I should do to pack it?
Thanks
0 | 59,226,070 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-12-07T08:56:00.000 | 6 | 1 | 0 | ROS: Is ZeroMQ better for large data streams, e.g. raw images, than native image topic? | 59,224,453 | 1.2 | python,zeromq,ros | 10E6 [B] over a private, 100% free 100E6 [b/s] channel takes no less ~0.8 [s]
_5E6 [B] over a private, 100% free 100E6 [b/s] channel takes no less ~0.4 [s]
Q : What are the limitations in <something> on large data streams?
Here we always fight a three-fold Devil mix of:
Power( for data processing, a 10[MB]->5[MB] com... | Fairly new to ROS, but haven't been able to find this information searching around.
We're building an instrument where we need to transfer large data streams over the network on a 100Mbit limited cable. Preferably we need to transfer RAW images (~10MB a piece) or we can do some lossless compression resulting in about ... | 0 | 1 | 1,161 |
0 | 60,907,594 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-07T21:25:00.000 | 0 | 1 | 0 | Can't read .avi files using Python OpenCV 4.1.2-dev | 59,230,366 | 0 | python-3.x,opencv,artificial-intelligence,opencv3.1,opencv4 | Originally I used cv2.VideoWriter_fourcc(*'XVID') and got the same error.
Switch (*'XVID') to (*'MJPG').
I am using a Raspberry Pi Gen. 4 (4 GB) with the image Raspbian Buster Lite. | I want to run my OpenCV 3.1 programs, but when I try to read a file using cv2.VideoCapture it shows me the error:
error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): ./../images/walking.avi in function 'icvExtractPattern'
But when I use the camera with cv2.VideoCapture(0) it works perfec... | 0 | 1 | 1,168 |
0 | 59,231,944 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-12-08T02:25:00.000 | 5 | 3 | 0 | Rounding large exponential numbers e.g. (6.624147...e+25 to 6.62e+25) | 59,231,931 | 1.2 | python,python-3.x,rounding | I think, if I understand your problem correctly, you could use float("%.2e" % x)
This just converts the value to text, in exponential format, with two fractional places (so 'pi' would become "3.14e+00"), and then converts that back to float. It will work with your example, and with small numbers like 5.42242344e-30
For... | I have a list of very large numbers I need to round.
For example:
6.624147027484989e+25 I need to round to 6.62e25.
However, np.around, math.ceil, round(), etc. are not working. I think that's because, instead of rounding 6.624147027484989e+25 to 6.62e25, they just make it an integer, while I actually need to make the... | 0 | 1 | 179 |
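The answer's string round-trip trick, plus a purely numeric alternative that rounds to three significant digits; the helper name round_sig is made up for illustration.

```python
from math import floor, log10

x = 6.624147027484989e+25

via_format = float("%.2e" % x)   # string round-trip -> 6.62e+25

def round_sig(v, sig=3):
    """Round v to `sig` significant digits (hypothetical helper)."""
    return round(v, -int(floor(log10(abs(v)))) + (sig - 1))

print(round_sig(x))              # 6.62e+25, works for tiny values too
```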
0 | 59,236,950 | 0 | 0 | 0 | 0 | 2 | true | 6 | 2019-12-08T14:39:00.000 | 3 | 3 | 0 | How to add a new class to an existing classifier in deep learning? | 59,236,502 | 1.2 | python,keras,deep-learning,multiclass-classification,online-machine-learning | You probably used a softmax after a 3-neuron dense layer at the end of the architecture to classify into 3 classes. Adding a class will lead to doing a softmax over a 4-neuron dense layer, so there will be no way to accommodate that extra neuron in your current graph with frozen weights; basically you're modifying the ... | I trained a deep learning model to classify the given images into three classes. Now I want to add one more class to my model. I tried to check out "Online learning", but it seems to train on new data for existing classes. Do I need to train my whole model again on all four classes or is there any way I can just train ...
0 | 60,471,776 | 0 | 0 | 0 | 0 | 2 | false | 6 | 2019-12-08T14:39:00.000 | 3 | 3 | 0 | How to add a new class to an existing classifier in deep learning? | 59,236,502 | 0.197375 | python,keras,deep-learning,multiclass-classification,online-machine-learning | You have to remove the final fully-connected layer, freeze the weights in the feature extraction layers, add a new fully-connected layer with four outputs and retrain the model with images of the original three classes and the new fourth class. | I trained a deep learning model to classify the given images into three classes. Now I want to add one more class to my model. I tried to check out "Online learning", but it seems to train on new data for existing classes. Do I need to train my whole model again on all four classes or is there any way I can just train ... | 0 | 1 | 5,204 |
0 | 59,395,618 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2019-12-08T17:44:00.000 | 1 | 2 | 0 | Find how similar a text is - One Class Classifier (NLP) | 59,238,140 | 0.099668 | python,twitter,nlp,classification,text-classification | Sam H has a great answer about using your dataset as-is, but I would strongly recommend annotating data so you have a few hundred negative examples, which should take less than an hour. Depending on how broad your definition of "activism" is that should be plenty to make a good classifier using standard methods. | I have a big dataset containing almost 0.5 billions of tweets. I'm doing some research about how firms are engaged in activism and so far, I have labelled tweets which can be clustered in an activism category according to the presence of certain hashtags within the tweets.
Now, let's suppose firms are tweeting about an... | 0 | 1 | 540 |
0 | 59,262,244 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-09T10:34:00.000 | 0 | 1 | 0 | Existing Tensorflow model to use GPU | 59,246,985 | 0 | python,tensorflow | Not enough information to give an exact answer.
Have you installed tensorflow-gpu separately? Check using pip list.
Because, initially, you were using tensorflow (the default CPU build).
Once you want to use the Nvidia GPU, make sure to install tensorflow-gpu.
Sometimes I had problems having both installed at the same time. It would always go f...
The tensorflow-gpu package is installed (only 1.14.0 is worki... | 0 | 1 | 38 |
0 | 59,253,034 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-09T12:23:00.000 | 1 | 1 | 0 | how to select the metric to optimize in sklearn's fit function? | 59,248,882 | 1.2 | python,machine-learning,optimization,scikit-learn | This is not possible with Support Vector Machines, as far as I know. With other models you might either change the loss that is optimized, or change the classification threshold on the predicted probability.
SVMs however minimize the hinge loss, and they do not model the probability of classes but rather their separat... | When using tensorflow to train a neural network I can set the loss function arbitrarily. Is there a way to do the same in sklearn when training a SVM? Let's say I want my classifier to only optimize sensitivity (regardless of the sense of it), how would I do that? | 0 | 1 | 396 |
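A sketch of the workaround the answer describes: you cannot swap the SVM's hinge loss, but you can bias it toward sensitivity with class weights and by shifting the decision threshold. The class_weight of 5 and the 0.3 threshold are arbitrary illustrative choices, not tuned values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)
# Upweighting the positive class pushes the SVM toward higher sensitivity
clf = SVC(probability=True, class_weight={1: 5}).fit(X, y)

proba = clf.predict_proba(X)[:, 1]
y_pred = (proba >= 0.3).astype(int)   # lowering the threshold raises recall
```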
0 | 59,252,280 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-09T13:11:00.000 | 0 | 1 | 0 | Applying identical Canny to two different images | 59,249,664 | 0 | python,opencv,canny-operator | To archieve comparable results you should resize the bigger image to the size of the smaller one. Image upscaling is "creating" information which isn't contained in your image, that's why you see the blur. Using interpolation=cv2.INTER_AREA should deliver good results, if you used a camera for images acquisition. | I have two images - the images are identical but of different sizes.
Currently I complete a Canny analysis of the smaller image using track bars in an interactive environment.
I want to have this output created on the second (larger) image - when I apply the same parameters the output is different
I've tried to use cv... | 0 | 1 | 26 |
0 | 59,268,457 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-09T13:36:00.000 | 2 | 2 | 0 | How to add new Category in the CNN for Attendance by AI | 59,250,070 | 0.197375 | python,keras,neural-network,artificial-intelligence,conv-neural-network | You don't need classification. Classification is not the solution for every problem.
You should look into these:
Cosine Similarity
Siamese Network
You can use existing models from FaceNet or OpenCV.
Since they are already trained on a huge dataset of faces, you can extract feature vectors easily.
Store the fea... | I am working on a project with my group and we have decided to build an 'Automatic Attendance System by AI'.
I have learned to use CNNs to categorize objects, i.e., dogs and cats.
With that knowledge, we have decided to make the attendance system based on CNN. ( Please tell me if we shouldn't take this... | 0 | 1 | 99 |
0 | 68,878,419 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2019-12-09T16:29:00.000 | 1 | 2 | 0 | conda environment: does each new conda environment needs a new kernel to work? How can I have specific libraries for all my environments? | 59,252,973 | 0.099668 | python,anaconda,conda,windows-subsystem-for-linux,jupyter-lab | To my best understanding:
You need ipykernel in each of the environments so that jupyter can import the other library.
In my case, I have a new environment called TensorFlow, then I activate it and install the ipykernel, and then add it to jupyter kernelspec. Finally I can access it in jupyter no matter the environment... | I use ubuntu (through Windows Subsystem For Linux) and I created a new conda environment, I activated it and I installed a library in it (opencv). However, I couldn't import opencv in Jupyter lab till I created a new kernel that it uses the path of my new conda environment. So, my questions are:
Do I need to create a ... | 0 | 1 | 1,076 |
0 | 59,259,166 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-09T22:58:00.000 | 0 | 2 | 0 | Outlier detection DBSCAN | 59,257,864 | 0 | python,machine-learning,dataset,outliers,dbscan | You are describing a classification problem, not a clustering problem.
Also, that data does not have a meaningful notion of density, does it?
Last but not least, (A) click fraud is heavily clustered, not outliers, (B) noise (low density) is not the same as outlier (rare) and (C) first get the data, then speculate about possible algori... | I am working on a school project about outlier detection. I think I will create my own small dataset and use DBSCAN to work with it. I think I will try to create a dataset about whether a click on ads on a website is a cheat or not. Below is detailed information about the dataset that I am going to create.
Dataset Name: Cheat Ads C... | 0 | 1 | 1,315 |
0 | 59,265,331 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-12-10T10:30:00.000 | 0 | 2 | 0 | Numpy array comprehension | 59,265,151 | 0 | python,numpy,list-comprehension | A fundamental problem here is that numpy arrays are of static size whereas python lists are dynamic. Since the list comprehension doesn't know a priori how long the returned list is going to be, one necessarily needs to maintain a dynamic list throughout the generation process. | Is there a way to do a numpy array comprehension in Python? The only way I have seen it does is by using list comprehension and then casting the results as a numpy array, e.g. np.array(list comprehension). I would have expected there to be a way to do it directly using numpy arrays, without using lists as an intermedia... | 0 | 1 | 3,420 |
0 | 59,268,849 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-10T11:39:00.000 | 0 | 1 | 0 | How can I get all the prediction probability values? | 59,266,375 | 0 | python,keras | What I've done is include a randomized feature; this way the network won't be purely deterministic. | I'm doing stock prediction using Keras. While predicting I get only one possible result. I need to view all the probability values. For example,
input 100 120 100 120, target while training 100
while predicting, if I give the same input it returns 120 as the output
So, is there any possibility of viewing predicti... | 0 | 1 | 36 |
0 | 59,276,925 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-10T23:36:00.000 | -1 | 2 | 0 | Pandas set index or reindex without changing the order of the data frame | 59,276,899 | -0.099668 | python,pandas | df = df.reset_index(drop=True)? | Hello, I have a dataframe that I sorted, so the index is no longer in order. I want to renumber the index so that the sorted values get a sequential index. I have not been able to figure this out: should I remove the index, or is there a way to set it? When I reindex, it sorts by the index, which undoes my sort. | 0 | 1 | 762 |
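The one-liner from the answer above, in context on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"v": [3, 1, 2]}).sort_values("v")
df = df.reset_index(drop=True)   # index is now 0, 1, 2 in the sorted order
```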
0 | 59,279,941 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-12-11T06:13:00.000 | 8 | 4 | 0 | How to check if an object is an np.array()? | 59,279,803 | 1 | python,arrays | isinstance(obj, numpy.ndarray) may work | I'm trying to build a code that checks whether a given object is an np.array() in python.
if isinstance(obj,np.array()) doesn't seem to work.
I would truly appreciate any help. | 0 | 1 | 3,934 |
0 | 59,279,975 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-12-11T06:13:00.000 | 0 | 4 | 0 | How to check if an object is an np.array()? | 59,279,803 | 0 | python,arrays | The type of what numpy.array returns is numpy.ndarray. You can determine that in the repl by calling type(numpy.array([])). Note that this trick works even for things where the raw class is not publicly accessible. It's generally better to use the direct reference, but storing the return from type(someobj) for later... | I'm trying to build a code that checks whether a given object is an np.array() in python.
if isinstance(obj,np.array()) doesn't seem to work.
I would truly appreciate any help. | 0 | 1 | 3,934 |
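Both answers for this question, as one runnable snippet: check against the class numpy.ndarray, not against a call to np.array():

```python
import numpy as np

def is_ndarray(obj):
    return isinstance(obj, np.ndarray)   # the class, not np.array()

print(is_ndarray(np.array([1, 2])))      # True
print(is_ndarray([1, 2]))                # False
```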
0 | 59,893,011 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-12-11T09:06:00.000 | 3 | 2 | 0 | How do we approximately calculate how much memory is required to run a program? | 59,282,135 | 1.2 | python,tensorflow,memory,memory-management | In Object Detection, most of the Layers used will be CNNs and the Calculation of Memory Consumption for CNN is explained below. You can follow the same approach for other layers of the Model.
For example,
consider a convolutional layer with 5 × 5 filters, outputting 200
feature maps of size 150 × 100, with stride ... | Today I was trying to implement an object detection API in Tensorflow. After carrying out the training process, I was trying to run the program to detect objects in webcam. As I was running it, the following message was printed in the terminal:
Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB
with ... | 0 | 1 | 2,441 |
0 | 59,301,763 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-11T13:58:00.000 | 0 | 1 | 1 | Azure Installing Pandas Module | 59,287,420 | 1.2 | python,azure,azure-web-app-service | I solved this by using the SSH terminal instead of the Kudu terminal. I find no reason why it was not working in the Kudu Remote Execution terminal, but using "pip install pandas" in Azure's SSH terminal solved it. | I have been trying to install Pandas on my Azure App Service (running Flask) for a long time now but nothing seems to work.
I tried to use wheel, created a wheelhouse directory manually and tried to install the relevant Pandas .whl file (along with its dependent packages) but it still doesn't work. This approach gives ... | 1 | 1 | 537 |
0 | 59,291,330 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-11T16:34:00.000 | 3 | 1 | 0 | Is it normal to get different graphs for same data after umap | 59,290,251 | 1.2 | python,r,ggplot2,scikit-learn | Yes, it is. Dimensionality reduction algorithms like t-SNE and UMAP are stochastic, so every time you run them the values will be different. If you want to keep the same graph you need to set a common seed. You can achieve that in R by setting the seed (e.g. set.seed(123)) before calling UMAP (or set a flag if the fu... | I am not sure how I can describe all the steps that I am taking, but basically my question is simple:
I use the same code and the same data from a text file, gather some statistics about that data, and then use UMAP for 2D reduction.
Is it normal to have different graphs when I plot the result?
I use scikit-learn, umap-learn, ggplot... | 0 | 1 | 1,264 |
0 | 59,290,915 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-11T17:07:00.000 | 0 | 2 | 0 | Sum of neighbors in tensorflow | 59,290,785 | 0 | python,tensorflow | A new convolutional layer with the filter size of 3x3 and filters initialized to 1 will do the job. Just be careful to declare this special filter as an untrainable variable otherwise your optimizer would change its contents. Additionally, set padding to "SAME" to get the same size output from that convolutional layer.... | I have a tensorflow model with my truth data in the shape (N, 32, 32, 5) ie. 32x32 images with 5 channels.
Inside the loss function I would like to calculate, for each pixel, the sum of the values of the neighboring pixels for each channel, generating a new (N, 32, 32, 5) tensor.
The tf.nn.pool function does something ... | 0 | 1 | 188 |
0 | 59,318,510 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-13T01:06:00.000 | 1 | 1 | 0 | Concatenating 'N' 2D arrays in NumPy with varying dimensions into one 3D array | 59,314,807 | 0.197375 | python,numpy,keras,numpy-ndarray | Keras does allow for variable length input to an LSTM but within a single batch all inputs must have the same length. A way to reduce the padding needed would be to batch your input sequences together based on their length and only pad up to the maximum length within each batch. For example you could have one batch wit... | I have N samples of 2D features with variable dimensions along one axis. For example:
Sample 1 : (100,20)
Sample 2 : (150,20)
Sample 3 : (90,20)
Is there a way to combine all N samples into a 3D array so that the first dimension (N,?,?) denotes the sample number?
PS: I wish to avoid padding and reshaping, and want... | 0 | 1 | 124 |
0 | 59,319,017 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2019-12-13T08:39:00.000 | 0 | 3 | 0 | Changes to model performance by changing random_state of XGBClassifier | 59,318,853 | 0 | python,xgboost,feature-selection,xgbclassifier | random_state parameter just helps in replicating results every time you run your model.
Since you are using cross_validation, assuming it is k-fold, then all your data will go into train and test and the CV score will be anyways average of the number of folds you decide. I believe you can set on any random_state and qu... | I trained a XGBClassifier for my classification problem and did Hyper-parameter tuning over huge grid(probably tuned every possible parameter) using optuna. While testing, change of random_state changes model performance metrics (roc_auc/recall/precision), feature_importance and even model predictions (predict_prob). ... | 0 | 1 | 729 |
0 | 59,320,839 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2019-12-13T08:39:00.000 | 1 | 3 | 0 | Changes to model performance by changing random_state of XGBClassifier | 59,318,853 | 0.066568 | python,xgboost,feature-selection,xgbclassifier | These are my two cents. Take the answer with a grain of salt.
The XGB classifier is a boosting algorithm, which naturally depends on randomness (so is a Random Forest for example).
Hence, changing seed will inherently change the training of the model and its output.
Different seeds will also change the CV splits and al... | I trained a XGBClassifier for my classification problem and did Hyper-parameter tuning over huge grid(probably tuned every possible parameter) using optuna. While testing, change of random_state changes model performance metrics (roc_auc/recall/precision), feature_importance and even model predictions (predict_prob). ... | 0 | 1 | 729 |
0 | 59,321,064 | 0 | 0 | 0 | 0 | 3 | false | 1 | 2019-12-13T08:39:00.000 | 1 | 3 | 0 | Changes to model performance by changing random_state of XGBClassifier | 59,318,853 | 0.066568 | python,xgboost,feature-selection,xgbclassifier | I tend to think that if the model is sensitive to the random seed, it isn't a very good model. With XGB you can try adding more estimators; that can help make it more stable.
For any model with a random seed, for each candidate set of parameter options (usually already filtered to a shortlist of candidates), I tend to run a bu...
0 | 59,324,955 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-13T14:48:00.000 | 0 | 1 | 0 | Implementing trained-model on camera | 59,324,845 | 0 | python,tensorflow | Congratulations :)
First of all, a minor detail: you use the model to recognize the objects; the model learned from the data.
It really depends on what you are aiming for; as the comment suggests, you should probably provide a bit more information.
The simplest setup would probably be to take an image with your webcam, ... | I just trained my model successfully and I have some checkpoints from the training process. Can you explain to me how to use this data to recognize the objects live with the help of a webcam? | 0 | 1 | 37 |
0 | 59,326,907 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-13T16:59:00.000 | 2 | 1 | 0 | Pandas agg how to count rows where a condition is true | 59,326,882 | 0.379949 | python,pandas | (x == 0).sum() counts the number of rows where the condition x == 0 is true. x.sum() just computes the "sum" of x (the actual result depends on the type). | I am using lambda function and agg() in python to perform some function on each element of the dataframe.
I have following cases
lambda x: (x==0).sum() - Question: Does this logically compute (x==0) as 1, if true, and 0, if false and then adds all ones and zeros? or is it doing something else?
lambda x: x.sum() - Ques... | 0 | 1 | 592 |
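Both lambdas from the question, side by side on a toy column; the first indeed sums booleans (True counts as 1), the second is a plain sum:

```python
import pandas as pd

s = pd.Series([0, 0, 3, 5])
print(s.agg(lambda x: (x == 0).sum()))  # 2 -> number of rows equal to zero
print(s.agg(lambda x: x.sum()))         # 8 -> ordinary sum of the values
```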
0 | 59,327,644 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-12-13T17:48:00.000 | 0 | 4 | 0 | pandas read_csv. How to ignore delimiter before line break | 59,327,525 | 0 | python,pandas,file | Specifying which columns to read using usecols will be the cleaner approach; alternatively, you can drop the column once you have read the data, but this comes with the overhead of reading data that you don't need. The generic approach will require you to create a regex parser, which will be more time-consuming and messier. | I'm reading a file with numerical values.
data = pd.read_csv('data.dat', sep=' ', header=None)
In the text file, each row ends with a space, so pandas waits for a value that is not there and adds a "nan" at the end of each row.
For example:
2.343 4.234
is read as:
[2.343, 4.234, nan]
I can avoid it using usecols = [0... | 0 | 1 | 3,252 |
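The usecols fix from the question, reproduced on an in-memory file with the same trailing-space problem:

```python
import pandas as pd
from io import StringIO

text = "2.343 4.234 \n1.100 2.200 \n"   # note the trailing space on each row
df = pd.read_csv(StringIO(text), sep=" ", header=None, usecols=[0, 1])
print(df)  # two float columns, no trailing NaN column
```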
0 | 59,327,905 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-13T18:10:00.000 | 1 | 1 | 0 | Conv2D to Conv3D | 59,327,819 | 1.2 | python,conv-neural-network,dicom,medical | Yes, you can, but there are a few things to change.
Your kernel will now need to be in 3D, so the argument kernel_size must be a 3 integer tuple. Same thing for strides. Note that the CNN you will modify will probably be in 3D already (e.g., 60, 60, 3) if it's designed to train on colored images. The only difference i... | I have 3D medical images and wanted to know if I have a CNN that uses Conv2D can I just change the Conv2D to a Conv3D? If not what would I need to change? | 0 | 1 | 721 |
0 | 60,968,553 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-13T18:48:00.000 | -1 | 1 | 0 | Best Python GloVe word embedding package | 59,328,248 | -0.197375 | python-3.x,word-embedding,glove | If you are using python3, gensim would be the best choice.
for example:
from gensim.scripts.glove2word2vec import glove2word2vec
will fetch the gloVe module.
Saul | What is the best Python GloVe word embedding package that I can use? I want a package that can help modify the co-occurrence matrix weights. If someone can provide an example, I would really appreciate that.
Thanks,
Mohammed | 0 | 1 | 213 |
0 | 59,332,455 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-14T04:54:00.000 | 3 | 1 | 0 | How to make predictions with a decision tree on a dataset without a target value? | 59,332,410 | 1.2 | python | Decision tree is a supervised algorithm. That means you must use some target value(or lable) to build the tree(dividing node's value based on information gain rule). | Every tutorial I have found about machine learning includes testing an algorithm on a dataset that has target values and then it finds how accurate the algorithm is by testing its predictions on the test set.
What if you then receive all of the data except for the target value and you want to make target value predicti... | 0 | 1 | 27 |
0 | 59,333,699 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-12-14T07:57:00.000 | 1 | 1 | 0 | OpenCV from Python shows different results for JPG and PNG images? | 59,333,332 | 0.197375 | python-3.x,image,opencv,image-processing,computer-vision | The problem is inherent to the image format you are using. There are broadly two types of compression techniques (all image formats, such as JPEG, PNG and WebP, are compression techniques):
Lossless Compression
Lossy compression
As the name suggests, lossless compression techniques do not change the underlying matrix data, while com... | I have been working to create an OMR bubble-sheet scanner using OpenCV from Python.
I have created the bubble-sheets (using OpenCV) in PNG format.
Then I am using OpenCV to read those files. OpenCV does a perfect job on PNG images, it works perfectly.... but when I use this on JPG files, it simply doesn't! Lists runni... | 0 | 1 | 1,643 |
0 | 59,433,454 | 0 | 0 | 0 | 0 | 2 | false | 17 | 2019-12-14T16:15:00.000 | 24 | 5 | 0 | Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives? | 59,336,899 | 1 | python,machine-learning,keras,multilabel-classification,vgg-net | What hassan has suggested is not correct -
Categorical Cross-Entropy loss or Softmax Loss is a Softmax activation plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for multi-class classification.
What you want is multi-label classi... | I am training a multi-label classification model for detecting attributes of clothes. I am using transfer learning in Keras, retraining the last few layers of the vgg-19 model.
The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc., all fail, as the model can... | 0 | 1 | 23,020 |
0 | 63,974,451 | 0 | 0 | 0 | 0 | 2 | false | 17 | 2019-12-14T16:15:00.000 | 7 | 5 | 0 | Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives? | 59,336,899 | 1 | python,machine-learning,keras,multilabel-classification,vgg-net | Actually you should use tf.nn.weighted_cross_entropy_with_logits.
Not only is it suited to multi-label classification, it also has a pos_weight argument that lets you pay extra attention to the positive classes, as you would expect.
The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc., all fail, as the model can... | 0 | 1 | 23,020 |
0 | 59,349,161 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-12-14T22:53:00.000 | 2 | 1 | 0 | Compute Engine n1-standard only use 50% of CPU | 59,339,838 | 1.2 | python,multithreading | In this case, the task utilizes only one of the two processors that you have available, so that's why you see only 50% of the CPU getting used.
If you allow pytorch to use all the CPUs of your VM by setting the number of threads, then you will see the usage go to 100%. | I'm running a heavy pytorch task on this VM (n1-standard, 2 vCPU, 7.5 GB) and the statistics show that the CPU % is at 50%. On my PC (i7-8700) the CPU utilization is about 90-100% when I run this script (a deep learning model).
I don't understand if there is some limit for the n1-standard machine (I have read in the documen... | 0 | 1 | 75 |
0 | 59,342,082 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2019-12-15T02:32:00.000 | 0 | 1 | 0 | 8Puzzle game with A* : What structure for the open set? | 59,340,795 | 1.2 | python,algorithm,complexity-theory,a-star,sliding-tile-puzzle | The open set should be a priority queue. Typically these are implemented using a binary heap, though other implementations exist.
Neither an array-list nor a dictionary would be efficient.
The closed set should be an efficient set, so usually a hash table or binary search tree, depending on what your language's standa... | I'm developing an 8-Puzzle game solver in Python lately and I need a bit of help.
So far I finished coding the A* algorithm using Manhattan distance as a heuristic function.
The solver runs and finds ~60% of the solutions in less than 2 seconds
However, for the other ~40%, my solver can take up to 20-30 minutes, like it w... | 0 | 1 | 164 |
0 | 59,346,258 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-12-15T15:05:00.000 | 0 | 1 | 0 | Matplotlib throws errors and doesn't work when I try to import it | 59,345,149 | 0 | python,matplotlib | It seems like your package uninstall did not finish properly and something of that google package has been left behind.
You need to either move some of your source files to the correct destination or uninstall Anaconda and reinstall it. | I'm a newbie in programming and Python (only 30 days). I've installed Anaconda and am working in the Spyder IDE. Everything was going fine and I have been adding packages as necessary while I was learning different things, until now. Now, I'm getting an error when I'm trying to import Matplotlib. Can anyone advise me what to... | 0 | 1 | 270 |
0 | 59,348,501 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-15T19:29:00.000 | 0 | 2 | 0 | Add new data to model sklearn: SGD | 59,347,375 | 0 | python,scikit-learn | Do you think there is another way to do an initial round of training and then add new, more important data to the model? Keras?
Thanks, guys | I made models with sklearn, something like this:
clf = SGDClassifier(loss="log")
clf.fit(X, Y)
And now I would like to add data for this model to learn from, but with a higher weight. I tried to use partial_fit with a bigger sample_weight but it is not working. Maybe I don't use fit and partial_fit well; sorry, I'm a beg... | 0 | 1 | 141 |
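A sketch of weighting new data more heavily with partial_fit, following the question's setup; the repeat count and the weight of 10 are arbitrary illustrative choices, and loss="log" matches the question (newer scikit-learn releases renamed it "log_loss").

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)
clf = SGDClassifier(loss="log").fit(X, y)          # initial training

X_new, y_new = np.random.rand(10, 4), np.random.randint(0, 2, 10)
weights = np.full(10, 10.0)                        # emphasise the new points
for _ in range(5):                                 # extra passes add influence too
    clf.partial_fit(X_new, y_new, sample_weight=weights)
```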
0 | 59,377,401 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-16T09:06:00.000 | 0 | 1 | 0 | cope with high variance or keep training | 59,353,398 | 1.2 | python,tensorflow,neural-network,statistics,precision-recall | I put more training data. Now I use 70000 records instead of 45000.
My results:
precision: 0.81765974, recall: 0.65085715 on test-data
precision: 0.83833283, recall: 0.708 on training-data
I am pretty confident that this result is as good as possible. Thanks for reading | I built a neural network of the dimensions Layers = [203,100,100,100,2]. So I have 203 features and get two classes as a Result. I think, in my case, it would not be necessary to have two classes. My result is the prediction of a customer quitting his contract. So I guess one class would be sufficient (And 1 being quit... | 0 | 1 | 45 |
0 | 60,356,275 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-12-16T16:28:00.000 | -1 | 2 | 0 | RPA : How to do back-end automation using RPA tools? | 59,360,594 | 1.2 | python-3.x,rpa,automationanywhere | There are several ways to do it. It is especially useful when your back-ends are 3rd-party applications where you do not have a lot of control. Many RPA products like Softomotive WinAutomation, Automation Anywhere, UiPath etc. provide file utilities, Excel utilities, DB utilities, the ability to call APIs, OCR capabilities etc.... | I would like to know how back-end automation is possible through RPA.
I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required. The app provides:
An option to download/export the report to a CSV file
Sort the csv as per the requirement
Send an... | 0 | 1 | 442 |
0 | 59,442,056 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-12-16T16:28:00.000 | 2 | 2 | 0 | RPA : How to do back-end automation using RPA tools? | 59,360,594 | 0.197375 | python-3.x,rpa,automationanywhere | RPA tools are designed to automate mainly front-end activities by mimicking human actions. This can be done easily using any RPA tool.
However, if you are interested in back-end automation, the first question is whether the specific application has an option to interact in the way you want through the back-end/API.
If yes, i... | I would like to know how back-end automation is possible through RPA.
I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required. The app provides:
An option to download/export the report to a CSV file
Sort the csv as per the requirement
Send an... | 0 | 1 | 442 |
0 | 59,362,487 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-16T18:26:00.000 | 0 | 2 | 0 | How can I transform a string variable to a categorical variable in two different datasets, keeping the same conversion? | 59,362,334 | 0 | python,pandas,scikit-learn | In general, it is recommended to use the OrdinalEncoder when you are sure or know that there exists an 'ordered' relationship between the categories. For example, the grades F, B-, B, A- and A : for each of these it makes sense to have the encoding as 1,2,3,4,5 where higher the grade, higher is the weight ( in the form... | I'm building a model and I have two dataframes in Pandas. One is the training data and the other the testing data. One of the variables is the country. I was thinking about using OrdinalEncoder() to convert the country column to a categorical column. E.g.: "USA" will be 1 in the new column, "Brazil" will be 2 and so on... | 0 | 1 | 38 |
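A sketch of the key discipline for either encoder in the answer above: fit once on the training frame, then reuse the fitted mapping on the test frame (toy country values):

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

train = pd.DataFrame({"country": ["USA", "Brazil", "USA"]})
test = pd.DataFrame({"country": ["Brazil", "USA"]})

enc = OrdinalEncoder()
train["country_enc"] = enc.fit_transform(train[["country"]]).ravel()  # learn mapping
test["country_enc"] = enc.transform(test[["country"]]).ravel()        # reuse it
# Note: transform() raises on categories never seen during fit.
```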
0 | 59,363,215 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-16T19:16:00.000 | 2 | 2 | 0 | Difference between "counts" and "number of observations" in matplotlib histogram | 59,362,968 | 1.2 | python,matplotlib,histogram | I think the wording in the documentation is a bit confusing. The count is the number of entries in a given bin (height of the bin) and the number of observation is the total number of events that go into the histogram.
The documentation makes the distinction about how they normalized because there are generally two way... | The matplotlib.pyplot.hist() documentation describes the parameter "density" (its deprecated name was "normed") as:
density : bool, optional
If True, the first element of the return tuple will be the counts normalized to form a probability density, i.e., the area (or integral) under the histogram will sum to 1. This... | 0 | 1 | 1,174 |
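To make the counts-versus-density distinction in the answer above concrete, here is a small sketch with made-up data: the left histogram reports raw counts per bin, while density=True rescales bar heights so the total area integrates to 1.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(data, bins=30)                # raw counts per bin
ax2.hist(data, bins=30, density=True)  # area under the bars sums to 1
plt.show()
```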
0 | 59,369,038 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-12-17T00:12:00.000 | 0 | 1 | 0 | Accessing SAS(9.04) from Anaconda | 59,365,941 | 0 | python,sas,anaconda,saspy | SAS datasets are ODBC compliant. SasPy is for running SAS code. If the goal is only to read SAS datasets, use ODBC or OleDb. I do not have Python code, but SAS has a lot of documentation on doing this using C#. Install the free SAS ODBC drivers and read the sas7bdat. The drivers are on the SAS website.
Writing it is di... | We are doing a POC to see how to access SAS data sets from Anaconda
All documentation I find says only SASpy works with SAS 9.4 or higher
Our SAS version is 9.04.01M3P062415
Can this be done? If yes, any documentation in this regard will be highly appreciated.
Many thanks in Advance! | 0 | 1 | 181 |
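Beyond the ODBC route suggested in the answer, pandas can also read sas7bdat files directly; a minimal sketch with a hypothetical file name:

```python
import pandas as pd

# Hypothetical path; no SAS installation is needed to read the file.
df = pd.read_sas("mydata.sas7bdat", format="sas7bdat", encoding="latin-1")
print(df.head())
```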
0 | 59,368,454 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-17T05:48:00.000 | 0 | 1 | 0 | Gensim Word2Vec or FastText build vocab from frequency | 59,368,232 | 1.2 | python,gensim,word2vec,fasttext | It "builds a vocabulary from a dictionary of word frequencies". You need a vocabulary for your gensim models. Usually you build it from your corpus. This is basically an alternative option to build your vocabulary from a word frequencies dictionary. Word frequencies for example are usually used to filter low or high fr... | I wonder what does .build_vocab_from_freq() function from gensim actually do? What is the difference when I'm not using it? Thank you! | 0 | 1 | 397 |
0 | 59,381,955 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-12-17T10:48:00.000 | 0 | 1 | 0 | Can you change the precision globally of a piece of code in Python, as a way of debugging it? | 59,372,579 | 1.2 | python,numpy,scipy | You can try using mpmath, but YMMV; generally scipy uses double precision. For the vast majority of cases, analyzing the sources of numerical errors is more productive than just trying to reimplement everything with higher-width floats. | I am solving a system of non-linear equations using the Newton Raphson Method in Python. This involves using the solve(Ax,b) function (spsolve in my case, which is for sparse matrices) iteratively until the error or update reduces below a certain threshold. My specific problem involves calculating functions such as x/(...
0 | 59,410,610 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-17T15:38:00.000 | 0 | 2 | 0 | ImportError: dll load failed while importing _openmp_helpers: The specified module could not be found while importing sklearn package | 59,377,573 | 1.2 | python-3.x,scikit-learn,openmp,dllimport,sklearn-pandas | I tried hard to solve it in IDLE but it didn't get rectified. I finally overcame it by installing the Anaconda IDE and using Jupyter notebook. | import sklearn
version--3.8.0 64-bit
Traceback (most recent call last):
File "", line 1, in
import sklearn
File "C:\Users\SAI-PC\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn__init__.py", line 75, in
from .utils._show_versions import show_versions
File "C:\Users\SAI-PC\AppData\Loca... | 0 | 1 | 1,319 |
0 | 59,381,734 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-17T20:11:00.000 | 0 | 1 | 0 | sklearn.metrics Prevent unlabeled predictions from being classified as false positives | 59,381,435 | 1.2 | python,machine-learning,scikit-learn | I believe I figured it out. The labels parameter of precision_recall_fscore_support allows you to specify which labels you desire to use. Therefore, by using labels=list(set(y_true).union(set(y_pred)).difference(set(["-1"])))
I am able to obtain the desired behavior. | I have a multiclass, single label classifier which predicts some samples as "-1", which means that it is not confident enough to assign the sample a label. I would like to use sklearn.metrics.precision_recall_fscore_support to calculate the metrics for the model, however I am unable to prevent the "-1" classifications ... | 0 | 1 | 20 |
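A runnable sketch of the labels trick from the answer above, with made-up labels; restricting labels to everything except "-1" keeps abstentions from being scored as false positives of a real class:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = ["a", "b", "a", "b", "b"]
y_pred = ["a", "b", "-1", "b", "a"]  # "-1" marks an abstained prediction

labels = sorted((set(y_true) | set(y_pred)) - {"-1"})
p, r, f, support = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0
)
```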
0 | 59,387,758 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-18T03:24:00.000 | 0 | 2 | 0 | Best way to classify a series of data in Python | 59,385,064 | 0 | python,numpy,opencv,statistics,regression | If my assumptions are true, I don't see a reason for any complex classifier. I'd simply check if the angle always gets larger or always gets smaller. Every time this rule is followed you add 1 to a quality counter. If the rule is broken you reduce the quality counter by 1. In the end you divide the quality counter by the... | I have been working on an image processing problem and I have preprocessed a bunch of images to find the most prominent horizontal lines in those images. Based on this data, I want to classify whether the image has a good perspective angle or a bad angle.
The data points are angles of lines I was able to detect in a sequence ... | 0 | 1 | 64 |
0 | 59,394,538 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-18T09:34:00.000 | 2 | 1 | 0 | Dividing MST for kruskal clustering | 59,389,090 | 1.2 | python,algorithm,cluster-analysis | In Kruskal's algorithm, MST edges are added in order of increasing weight.
If you're starting with an MST and you want to get the same effect as stopping Kruskal's algorithm when there are N connected components, then just delete the N-1 highest-weight edges in the MST. | I made a C# application that draws random points on the panel. I need to cluster these points according to Euclidean distance. I have already implemented Kruskal's algorithm. Normally, there must be a number of minimum spanning trees up to the written number. For instance, when the user wants to cluster the drawn points into 3 clusters, end o... | 0 | 1 | 233 |
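A sketch of the edge-deletion idea in the answer above (the asker works in C#; this Python version and its (weight, u, v) edge format are illustrative assumptions): drop the N-1 heaviest MST edges, then label the remaining connected components with union-find.

```python
def clusters_from_mst(n_points, mst_edges, n_clusters):
    """mst_edges: list of (weight, u, v) tuples assumed to form an MST."""
    # Keep all but the (n_clusters - 1) heaviest edges.
    kept = sorted(mst_edges)[: len(mst_edges) - (n_clusters - 1)]

    parent = list(range(n_points))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, u, v in kept:
        parent[find(u)] = find(v)
    return [find(i) for i in range(n_points)]

# 6 points, 3 clusters: the two heaviest edges (9.0 and 8.0) get cut.
labels = clusters_from_mst(
    6, [(1.0, 0, 1), (1.5, 1, 2), (9.0, 2, 3), (1.2, 3, 4), (8.0, 4, 5)], 3
)
```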
0 | 59,394,913 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-12-18T13:50:00.000 | 3 | 2 | 0 | Cannot import from sklearn import c | 59,393,468 | 0.291313 | python,scikit-learn | I have never seen KNearestNeighbor in sklearn. There are two things you can do instead of KNearestNeighbor:
from sklearn.neighbors import KNeighborsClassifier
or
from sklearn.neighbors import NearestNeighbors
I think the 1st option is the one you want now. | I am working in a jupyter notebook on a python assignment and
I am trying to import KNearestNeighbor from sklearn but I am getting the error:
ImportError: cannot import name 'KNearestNeighbor' from 'sklearn'
(C:\Users\michaelconway\Anaconda3\lib\site-packages\sklearn__init__.py)
I have checked and I do have sklearn... | 0 | 1 | 1,512 |
0 | 59,445,732 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-18T17:43:00.000 | 0 | 2 | 0 | I must compress many similar files, can I exploit the fact they are similar? | 59,397,512 | 0 | python,zip,compression | A 'zip-basis' is interesting but problematic.
You could preprocess the files instead. Take one file as a template and calculate the diff of each file compared to the template. Then compress the diffs. | I have a dataset with many different samples (numpy arrays). It is rather impractical to store everything in only one file, so I store many different 'npz' files (numpy arrays compressed in zip).
Now I feel that if I could somehow exploit the fact that all the files are similar to one another I could achieve a much hig... | 0 | 1 | 71 |
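A minimal sketch of the template-diff preprocessing the answer proposes (the file and key names are hypothetical, and the arrays are assumed to share a shape):

```python
import numpy as np

template = np.load("sample_000.npz")["data"]  # hypothetical file/key names

for name in ["sample_001.npz", "sample_002.npz"]:
    arr = np.load(name)["data"]
    # Similar samples give near-zero diffs, which the zip compression
    # inside savez_compressed handles far better than the raw values.
    np.savez_compressed(name.replace(".npz", "_diff.npz"), diff=arr - template)
```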
0 | 59,398,539 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-18T18:35:00.000 | 0 | 1 | 0 | Line Graph in django template | 59,398,219 | 0 | python,django | As pointed out by @roganjosh, you'll render the graph using a JS library.
So in views.py you'll have to add your data to the context, and then render it in the template using a JS library. I personally like plotly.js; they have a neat and easy-to-use interface. D3.js is also a very popular data visualisation librar... | To be very honest, I've played with matplotlib a little but I am new to django.
I have searched all over Google for how to plot a line graph in django using CSV.
Unfortunately I couldn't find anything apart from 'bokeh' and 'chartit', although they aren't very useful in helping me make a start.
My Goal: I need to plot a li... | 1 | 1 | 139 |
0 | 59,403,299 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-12-19T04:20:00.000 | 3 | 2 | 0 | Pandas not working: DataFrameGroupBy ; PanelGroupBy | 59,403,256 | 0.291313 | python-3.x,pandas | I guess you are using an older version of tqdm. Try upgrading to tqdm>=4.23.4.
The command using pip would be,
pip install tqdm --upgrade | I have just upgraded python and I cannot get pandas to run properly, please see below. Nothing appears to work.
Traceback (most recent call last): File
"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tqdm/_tqdm.py",
line 613, in pandas
from pandas.core.groupby.groupby import ... | 0 | 1 | 2,138 |
0 | 63,578,193 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-19T11:42:00.000 | 1 | 1 | 0 | ImageAI Object detection with prediction training | 59,409,085 | 1.2 | python,tensorflow,keras,imageai | No; locating objects is only possible with detection, because it works on coordinates (bounding boxes), and the label error means you have to annotate your dataset. | I have successfully trained a predictor model - so with no labels using ModelTraining class.
Currently, I can use CustomImagePrediction.predictImage() to return a value of what it thinks is in the picture.
I want to be able to detect the location of the object in the image, not just what it thinks it is. This functiona... | 0 | 1 | 135 |
0 | 59,428,792 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-20T14:07:00.000 | 0 | 1 | 0 | Examining explained and unexplained variance of the DV | 59,426,564 | 1.2 | python,machine-learning,statistics,regression | I would begin analysis by finding the R-squared (R2) value of a model with all predictor variables, and then determine the change in R-squared when iteratively leaving out each predictor variable one at a time. Such an analysis should weed out the predictors with minimal impact on the regression, and give you a good id... | What statistical techniques should I adopt when trying to determine how much my independent variables explain the variance of my dependent variable?
For further context - I have been asked to develop a model in Python with the aim of examining the extent to which the predictor variables impact upon the response variabl... | 0 | 1 | 105 |
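A sketch of the leave-one-predictor-out analysis the answer describes, on placeholder data (the estimator choice, LinearRegression, is an assumption):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X, y = np.random.rand(200, 4), np.random.rand(200)  # placeholder data

full_r2 = r2_score(y, LinearRegression().fit(X, y).predict(X))
for j in range(X.shape[1]):
    X_drop = np.delete(X, j, axis=1)
    r2 = r2_score(y, LinearRegression().fit(X_drop, y).predict(X_drop))
    # A large drop in R-squared means predictor j explains a lot of variance.
    print(f"without predictor {j}: R2 change = {full_r2 - r2:.4f}")
```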
0 | 59,429,840 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-12-20T18:14:00.000 | 1 | 2 | 0 | Is there a way to select only column labels using Python Pandas library without any rows? | 59,429,705 | 0.099668 | python,pandas | Just do dataframe.columns to get all column names | This might be a silly question to ask, however, it is for a specific task in a multi-step process to clean up some data.
Basically, each column label is a location represented as a series of long numbers. Each column contains measurement values in each subsequent row for those locations. I do not need the measurements... | 0 | 1 | 702 |
0 | 59,431,088 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-20T19:18:00.000 | 0 | 1 | 0 | replacement has length zero in R from a python code | 59,430,331 | 1.2 | python,r,numerical-methods | Keep in mind that R indexes are 1-based, while in Python they are 0-based. In your code, the first time through the for loop, u[i, j - 1] evaluates to u[2, 0], or numeric(0). This is what produces the error. | I was doing a numerical method in R and Python.
I have applied the leapfrog method in Python and it worked perfectly, but I want to do a similar thing in R. Here you can see my code.
I have tried doing u[2,2]=beta*(u[1,1]-2*u[2,1]+u[3,1]) and this works; here I can see that the error is due to the bold statement, means d... | 0 | 1 | 81 |
0 | 59,441,386 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-12-21T11:10:00.000 | 2 | 1 | 0 | Calculate mean across one specific dimension of a 4D tensor in Pytorch | 59,435,653 | 1.2 | python,numpy,computer-vision,pytorch,tensor | The first part of the question has been answered in the comments section. So we can use tensor.transpose([3,0,1,2]) to convert the tensor to the shape [1024,66,7,7].
Now mean over the temporal dimension can be taken by
torch.mean(my_tensor, dim=1)
This will give a 3D tensor of shape [1024,7,7].
To obtain a tensor of s... | I have a PyTorch video feature tensor of shape [66,7,7,1024] and I need to convert it to [1024,66,7,7]. How to rearrange a tensor shape? Also, how to perform mean across dimension=1? i.e., after performing mean of the dimension with size 66, I need the tensor to be [1024,1,7,7].
I have tried to calculate the mean of di... | 0 | 1 | 4,489 |
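One detail worth noting about the answer above: tensor.transpose([3,0,1,2]) is numpy-style; in PyTorch the equivalent reordering is permute, and keepdim=True yields the requested [1024,1,7,7] shape. A sketch:

```python
import torch

x = torch.randn(66, 7, 7, 1024)

y = x.permute(3, 0, 1, 2)             # -> [1024, 66, 7, 7]
m = y.mean(dim=1)                     # -> [1024, 7, 7]
m_keep = y.mean(dim=1, keepdim=True)  # -> [1024, 1, 7, 7]
```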
0 | 59,439,160 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-21T17:16:00.000 | 0 | 2 | 0 | False Positive Rate in Confusion Matrix | 59,438,262 | 0 | python,pandas | It is possible to have FPR = 1 with TPR = 1 if your prediction is always positive no matter what your inputs are.
TPR = 1 means we correctly predict all the positives. FPR = 1 is equivalent to always predicting positive when the condition is negative.
As a reminder:
FPR = 1 - TNR = [False Positives] / [Negatives]
TP... | I was trying to manually calculate TPR and FPR for the given data. But unfortunately I don't have any false positive cases in my dataset, and no true positive cases either.
So I am getting a division-by-zero error in pandas. I have an intuition that fpr = 1 - tpr. Please let me know if my intuition is correct; if not, let me know how... | 0 | 1 | 650 |
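On the intuition in the question: fpr = 1 - tpr does not hold in general; the correct identity is FPR = 1 - TNR, as the answer states. A sketch that computes both rates with guards against the division-by-zero the asker hit (labels are made up):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
# Guard the denominators so a dataset with no positives (or no
# negatives) yields 0.0 instead of a ZeroDivisionError.
tpr = tp / (tp + fn) if (tp + fn) else 0.0
fpr = fp / (fp + tn) if (fp + tn) else 0.0
```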
0 | 70,914,645 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-12-21T22:42:00.000 | 0 | 2 | 0 | Import yfinance as yf | 59,440,380 | 0 | python,conda,yahoo,yfinance | The best way to install a python library is to:
Open a terminal or cmd or powershell on your system.
If using a virtual environment, activate the environment in your terminal or cmd or powershell.
E.g., a virtual environment named virtual_environment created using virtualenv in C://Users/Admin/VirtualEnvironments... | Import yfinance as yf
Should run normally on conda but get this message
ModuleNotFoundError Traceback (most recent call
last) in
1 import pandas as pd
----> 2 import yfinance as yf
3 import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'yfinance'
Strange? As should be simple to i... | 0 | 1 | 6,906 |
0 | 68,372,729 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-12-21T22:42:00.000 | 0 | 2 | 0 | Import yfinance as yf | 59,440,380 | 0 | python,conda,yahoo,yfinance | If you are using Anaconda, try downloading yfinance using the Powershell Prompt. | Import yfinance as yf
Should run normally on conda but get this message
ModuleNotFoundError Traceback (most recent call
last) in
1 import pandas as pd
----> 2 import yfinance as yf
3 import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'yfinance'
Strange? As should be simple to i... | 0 | 1 | 6,906 |
0 | 59,991,016 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-12-22T02:02:00.000 | 0 | 2 | 0 | python: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler | 59,441,173 | 0 | python-3.x | It means that you have columns in your dataset of type integer. It's a warning, so you are good to go if you need to scale your features for a regression or a neural network. | I don't understand this message /opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/opt/conda/envs/Python36/lib/python3.6/site-packages/ipykerne... | 0 | 1 | 1,496 |
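The warning in the question is harmless, as the answer says; if you want to silence it, cast the integer features to float before scaling. A sketch with placeholder data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.randint(0, 10, size=(100, 3))  # int-typed features
# Casting up front avoids the DataConversionWarning entirely.
X_scaled = StandardScaler().fit_transform(X.astype(np.float64))
```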
0 | 59,447,620 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-12-22T19:49:00.000 | 0 | 1 | 0 | Re train a saved model with same Dataset | 59,447,567 | 0 | python,tensorflow,machine-learning,keras,deep-learning | If you are using keras, calling fit will basically start from the pre-existing weights, and therefore the first epoch will effectively be the 5th epoch. Therefore, the error will already be lower. However, beware that the processing time for each epoch will be equivalent. You will only start from a model which does not have rand... | I want to know: if I retrain my saved model, which I ran for 4 epochs, will it be faster with the same image set at 10 epochs?
My data set consists of 2 folders of training and validation with 5 classes and 3000 training and 1000 validation images | 0 | 1 | 150 |
0 | 61,602,101 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-12-23T09:26:00.000 | 0 | 3 | 0 | Tensorboard: AttributeError: 'Model' object has no attribute '_get_distribution_strategy' | 59,452,858 | 0 | python-3.x,tensorflow,deep-learning,tensorboard,tensorflow2.0 | This error mostly happens because of mixed imports from keras and tf.keras. Make sure that exact referencing of libraries is maintained throughout the code. For example, instead of model.add(Conv2d()) try model.add(tf.keras.layers.Conv2D()); applying this for all layers solved the problem for me. | I'm getting this error when I use the tensorboard callback while training.
I tried looking for answers from posts related to tensorboard errors but this exact error was not found in any stackoverflow posts or github issues.
Please let me know.
The following versions are installed on my PC:
Tensorflow and Tensorflow GPU :... | 0 | 1 | 5,342 |
0 | 59,467,879 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-24T08:58:00.000 | 0 | 2 | 0 | numpy array to the just number in the array | 59,466,258 | 0 | python,numpy | A numpy array of rank 0 is a scalar (it's got shape ()) and will behave like a scalar everywhere. You can treat it like that.
You're perhaps mixing it up with an array of rank 1, e.g., np.array([99.79928571]).
You can also wrap your list into np.array to get an array of float64. Perhaps that looks nicer to your eye. | I got an array list that looks like
[array(99.75142857), array(99.79928571), array(99.82238095),
array(99.83857143), array(99.85), array(99.85738095),
array(99.86285714), array(99.86767857)]
I'm not sure what this array is, but I just want to get the numbers
[99.75142857, 99.79928571, ...]
here array() means a numpy array | 0 | 1 | 226 |
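Two quick ways to turn the rank-0 arrays above into plain numbers, per the answer's suggestion:

```python
import numpy as np

arrs = [np.array(99.75142857), np.array(99.79928571)]  # rank-0 arrays

values = [float(a) for a in arrs]    # a list of plain Python floats
vec = np.asarray(arrs, dtype=float)  # or one ordinary 1-D float64 array
```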
0 | 59,480,515 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-12-24T22:06:00.000 | 0 | 1 | 0 | Numerically stable way to compute conditional covariance matrix using linalg.solve | 59,473,735 | 1.2 | python,linear-algebra,matrix-inverse | Please don't inv—it's not as bad as most people think, but there are easier ways: you mentioned how np.linalg.solve(A, b) equals A^{-1} . b, but there's no requirement on what b is. You can use solve to solve your question, A - np.dot(B, np.linalg.solve(D, C)). | I know that the recommendation is not to use linalg.inv and to use linalg.solve when inverting matrices. This makes sense when I have a situation like Ax = b and I want to get x, but is there a way to compute something like: A - B * D^{-1} * C without using linalg.inv? Or what is the most numerically stable way to deal with...
(Note, if you're doing blockwise matrix inversion, C is like... | I know that the recommendation is not to use linalg.inv and use linalg.solve when inverting matrices. This makes sense when I have situation like Ax = b and I want to get x, but is there a way to compute something like: A - B * D^{-1} * C without using linalg.inv? Or what is the most numerically stable way to deal with... | 0 | 1 | 416 |
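A runnable sketch of the answer's suggestion with made-up, well-conditioned matrices: solve D X = C once, then subtract B X, never forming D^{-1}:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 4))
C = rng.random((4, 3))
D = rng.random((4, 4)) + 4 * np.eye(4)  # keep D comfortably invertible

# A - B D^{-1} C without ever computing D^{-1} explicitly.
schur = A - B @ np.linalg.solve(D, C)
```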
0 | 59,474,328 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-12-24T22:42:00.000 | 1 | 1 | 0 | Processing a Corpus For a word2vec Implementation | 59,473,926 | 1.2 | python,machine-learning,nlp,word2vec | Hashtable lookups can be very fast, and repeated lookups may not contribute much to the overall runtime.
But the only way to really know the potential speedup of your proposed optimization is to implement it, and profile it in comparison to the prior behavior.
Also, as you note, to be able to re-use a single-pass toke... | As part of a class project, I'm trying to write a word2vec implementation in Python and train it on a corpus of ~6GB. I'm trying to code a reasonably optimized solution so I don't have to let my PC sit for days.
Going through the C word2vec source code, I notice that there, each thread reads words from a file, and take... | 0 | 1 | 47 |
0 | 71,549,947 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-25T02:06:00.000 | 0 | 1 | 0 | tensorflow gather then reduce_sum | 59,474,657 | 0 | python,tensorflow,neural-network | I think you can just multiply by a sparse matrix -- I was searching to see if the two are internally equivalent and then I landed on your post | Let's say I have a matrix M of size 10x5, and a set of indices ix of size 4x3. I want to do tf.reduce_sum(tf.gather(M,ix),axis=1) which would give me a result of size 4x5. However, to do this, it creates an intermediate gather matrix of size 4x3x5. While at these small sizes this isn't a problem, if these sizes grow la... | 0 | 1 | 145 |
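A sketch of the sparse-multiplication idea from the answer, using the shapes in the question (M is 10x5, ix is 4x3; the index values are made up, and each row of ix is assumed to hold distinct columns):

```python
import numpy as np
import tensorflow as tf

M = tf.random.normal((10, 5))
ix = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [1, 2, 9]])

# 4x10 indicator matrix: row r has ones at the columns listed in ix[r].
rows = np.repeat(np.arange(ix.shape[0]), ix.shape[1])
sel = tf.sparse.SparseTensor(
    indices=np.stack([rows, ix.ravel()], axis=1).astype(np.int64),
    values=tf.ones(ix.size),
    dense_shape=(ix.shape[0], 10),
)
# Same result as reduce_sum(gather(M, ix), axis=1), but no 4x3x5 intermediate.
out = tf.sparse.sparse_dense_matmul(tf.sparse.reorder(sel), M)  # 4x5
```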
0 | 60,563,891 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-12-25T06:36:00.000 | 0 | 1 | 0 | how to use SVM to classify if the shape of features for each sample is matrix? Is it simply to reshape the matrix to long vector? | 59,475,835 | 0 | python,svm | Yes, that would be the approach I would recommend. It is essentially the same procedure that is used when utilizing images in image classification tasks, since each image can be seen as a matrix.
So what people do is to write the matrix as a long vector, consisting of every column concatenated to one another.
So you c... | I have 120 samples and the shape of the features for each sample is a 15*17 matrix. How do I use SVM to classify? Is it simply a matter of reshaping the matrix into a long vector? | 0 | 1 | 28 |
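A sketch of the flattening step with the dimensions from the question (the labels are placeholders):

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(120, 15, 17)   # 120 samples, each a 15x17 feature matrix
y = np.random.randint(0, 2, 120)  # placeholder labels

# Flatten each 15x17 matrix into one 255-dimensional vector.
clf = SVC().fit(X.reshape(len(X), -1), y)
```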
0 | 59,479,125 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2019-12-25T13:43:00.000 | 0 | 2 | 0 | How to convert a grayscale image to heatmap image with Python OpenCV | 59,478,962 | 0 | python,image,opencv,image-processing,computer-vision | You need to convert the image to a proper grayscale representation. This can be done a few ways, particularly with imread(filename, cv2.IMREAD_GRAYSCALE). This reduces the shape of the image to (540, 960) (hint, no third dimension). | I have a (540, 960, 1) shaped image with values ranging from [0..255] which is black and white. I need to convert it to a "heatmap" representation. As an example, pixels with 255 should be of most heat and pixels with 0 should be with least heat. Others in-between. I also need to return the heat maps as Numpy arrays so... | 0 | 1 | 20,273 |
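To go from the proper grayscale representation the answer describes to an actual heatmap rendering (a step the answer does not cover), OpenCV's applyColorMap is one common option; a sketch with a hypothetical file path:

```python
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # shape (540, 960)
# Map intensity to color: 255 -> hottest, 0 -> coldest; returns (540, 960, 3).
heat = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
```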