GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 57,139,784 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-19T10:48:00.000 | 0 | 1 | 0 | ValueError: could not broadcast input array from shape (848,837,8) into shape (800,800,8) | 57,110,876 | 0 | python,deep-learning,valueerror | In my case I was running the code in Python 2.7; Python 2.7 returns a different ceil value than Python 3.4 does, which changes the computed array shape. | When I run the code in a Kaggle kernel it works fine, but when executing the code on my machine it throws this error. Please help. | 0 | 1 | 53
0 | 58,452,764 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2019-07-19T13:13:00.000 | -1 | 3 | 0 | import pandas results in ModuleNotFoundError: _lzma | 57,113,269 | -0.066568 | pandas,python-3.7,ubuntu-18.04,lzma | I just upgraded pandas to version 0.25.1 and it works well. | On Ubuntu 18.04 with Python 3.7.3, I'm attempting to import pandas but this fails because it can't find _lzma.
I've verified that _lzma is installed with dpkg:
/usr/lib/python3.7/lib-dynload/_lzma.cpython-37m-x86_64-linux-gnu.so. Oddly, _lzma is not a dependency of pandas (as specified by pip3). | 0 | 1 | 19,345 |
0 | 57,150,251 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-19T15:48:00.000 | 0 | 2 | 0 | Automating both uploading the training data label CSV and training the model in AutoML Vision image classification | 57,115,839 | 0 | python,google-cloud-platform,automl,google-cloud-automl | I used the AutoML REST API for creating datasets and training the model. However, if I want to retrain a model on more data, I have to delete the previously trained model and create and train a new one. | I have to manually upload the training data label CSV and click on 'train' to train the model. I want to automate all of this, preferably with Python. | 0 | 1 | 109
0 | 57,118,303 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-19T18:00:00.000 | 1 | 1 | 0 | Spark read from second row like Pandas header=1 | 57,117,577 | 0.197375 | python,csv,apache-spark,pyspark,apache-spark-sql | Looks like there's no option in spark csv to specify how many lines to skip. Here are some alternatives you can try:
Read with option("header", "true"), and rename the column names using withColumnRenamed.
Read with option("header", "false"), and select rows from 2nd line using select.
If the first character of the fi... | In Pandas with Python I could use:
for item in read_csv(csv_file, header=1)
And in Spark I only have the option of true/false?
df = spark.read.format("csv").option("header", "true").load('myfile.csv')
How can I read starting from the second row in Spark?
The suggested duplicate post is an outdated version of Spark. I a... | 0 | 1 | 1,631 |
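The answer above names the alternatives without code; here is a minimal, hedged sketch of one of them: skipping the first physical line through the RDD API. The file name is an assumption, and it relies on spark.read.csv accepting an RDD of strings (supported since Spark 2.2).

```python
# Hedged sketch: drop line 0 with zipWithIndex, then let Spark parse the
# remainder, whose first line is the real header row.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

lines = spark.sparkContext.textFile("myfile.csv")   # assumed path
rest = (lines.zipWithIndex()                 # (line, index) pairs
             .filter(lambda kv: kv[1] > 0)   # skip the first physical line
             .map(lambda kv: kv[0]))

df = spark.read.csv(rest, header=True)       # csv() also accepts an RDD of strings
```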
0 | 57,124,598 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-07-19T18:13:00.000 | 0 | 1 | 0 | How does log in spark stage/tasks help in understanding actual spark transformation it corresponds to | 57,117,739 | 0 | python,scala,apache-spark,pyspark,apache-spark-sql | Break your execution down. It's the easiest way to understand where the error might be coming from. Running a 500+ line of code for the first time is never a good idea. You want to have the intermediate results while you are working with it. Another way is to use an IDE and walk through the code. This can help you unde... | Often during debugging Spark Jobs on failure we can find the appropriate Stage and task responsible for the failure such as String Index Out of Bounds exception but it becomes difficult to understand which transformation is responsible for this failure.The UI shows information such as Exchange/HashAggregate/Aggregate b... | 0 | 1 | 117 |
0 | 57,139,328 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-07-20T16:32:00.000 | 3 | 1 | 0 | Can CNN autoencoders have different input and output dimensions? | 57,126,626 | 1.2 | python,tensorflow,keras,deep-learning | An auto-encoder (AE) is an architecture that tries to encode your image into a lower-dimensional representation by simultaneously learning to reconstruct the data from that representation. Therefore AEs rely on unsupervised data (no labels needed) that is used both as the input and as the target (used in the loss).
You c... | I am working on a problem which requires me to build a deep learning model that based on certain input image it has to output another image. It is worth noting that these two images are conceptually related but they don't have the same dimensions.
At first I thought that a classical CNN with a final dense layer whose ... | 0 | 1 | 1,346 |
0 | 57,135,408 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-21T15:59:00.000 | 0 | 1 | 0 | Out of sample predictions with LSTM | 57,134,812 | 0 | python,tensorflow,keras,deep-learning,time-series | There are two main ways to create train/validation set for a time series situation:
splitting your samples ( taking for example 80 % of the time series for training and 20 % for validation)
splitting your time series ( training your model on the first n-k values of the time series and validation on the k other value... | This is a general question about making real future predictions with an LSTM model using keras & tensorflow in Python (optional R).
For example stock prices. I know there is a train/test split to measure the accuracy/performance of the model comparing my results with the test prices. But I want to make real future pre... | 0 | 1 | 279 |
0 | 57,139,854 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-22T05:27:00.000 | 0 | 1 | 0 | Can I use GCP for training only but predict with my own AI machine? | 57,139,636 | 0 | python,machine-learning,google-cloud-platform | Decide if you want to use Tensorflow or Keras etc. Prepare scripts to train and save model, and another script to use it for prediction.
It should be simple enough to use GCP for training and download the model to use on your machine. You can choose to use a high end machine (lot of memory, cores, GPU) on GCP. Trainin... | My laptop had a problem with training big a dataset but not for predicting. Can I use Google Cloud Platform for training, only then export and download some sort of weights or model of that machine learning, so I can use it on my own laptop, and if so how to do it? | 0 | 1 | 35 |
0 | 57,216,482 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-22T12:48:00.000 | 0 | 1 | 0 | Weighted clustering using NearestNeighbors | 57,146,403 | 0 | python,scikit-learn,cluster-analysis | If you have an inverted index, enforcing a certain value to be required, while having other values optional and only used for similarity should be straightforward. Just think of the full-text search example with required and optional terms.
Depending on how many queries you do, linear search as well as a "group by" app... | I have a use case where in I need to cluster N transactions but with a constraint that a particular column value in the resultant clusters should be same for individual clusters. I have been using NearestNeighbors - NN from sklearn for this purpose and it seems to workout to an extend. The distance metric chosen is Cos... | 0 | 1 | 36 |
0 | 57,148,940 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-22T14:15:00.000 | 1 | 1 | 0 | Custom data prediction using decision trees | 57,147,986 | 0.197375 | python,machine-learning,scikit-learn | You can use custom data for prediction as long as it has the same number, order and type of features as your training data, passed as an array rather than a list. If you meet these conditions, you can send that array to the model for prediction with the normal predict() method. | I am using train_test_split to train the model and check the results using predict. How do I proceed to predict the labels of additional data, for example, from a test set or from user inputs? | 0 | 1 | 35
0 | 57,183,592 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-22T23:56:00.000 | 0 | 3 | 0 | obj file loaded as Scene object instead of Trimesh object | 57,155,089 | 0 | python,trimesh | I have the same problem. You can access the "geometry" member of the scene, but I have no idea how the original mesh is split into multiple ones. Following. | I'm trying to load a mesh from an .obj file (from ShapeNet dataset) using Trimesh, and then use the repair.fix_winding(mesh) function.
But when I load the mesh, via
trimesh.load('/path/to/file.obj') or trimesh.load_mesh('/path/to/file.obj'),
the object class returned is Scene, which is incompatible with repair.fix_wi... | 0 | 1 | 2,337 |
0 | 57,162,164 | 0 | 0 | 0 | 0 | 5 | false | 0 | 2019-07-23T01:45:00.000 | 2 | 6 | 0 | Generating random division problems in python | 57,155,636 | 0.066568 | python,math | 1) Take any non-zero randomized Divisor (x). // say 5
2) Take any randomized temporary Dividend (D). // say 24
3) Calculate R = D % x; // => 4
4) Return the dividend as (D - R) // return 20
Now, your dividend will always be perfectly divisible by the divisor. | I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders.
With addition, subtraction and multiplication, I could just do [random number] t... | 0 | 1 | 766 |
0 | 57,155,682 | 0 | 0 | 0 | 0 | 5 | true | 0 | 2019-07-23T01:45:00.000 | 2 | 6 | 0 | Generating random division problems in python | 57,155,636 | 1.2 | python,math | x/y = z
y*z = x
Generate y and z as integers, then calculate x. | I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders.
With addition, subtraction and multiplication, I could just do [random number] t... | 0 | 1 | 766 |
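A minimal sketch of this accepted approach; the value range 1-12 is an assumption for an educational game:

```python
# Pick the divisor and quotient first, then derive the dividend, so the
# division can never leave a remainder.
import random

def make_division_problem(lo=1, hi=12):
    y = random.randint(lo, hi)   # divisor, nonzero because lo >= 1
    z = random.randint(lo, hi)   # quotient
    x = y * z                    # dividend
    return x, y, z

x, y, z = make_division_problem()
print("%d / %d = ?" % (x, y))    # the answer is z, always an integer
```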
0 | 57,165,189 | 0 | 0 | 0 | 0 | 5 | false | 0 | 2019-07-23T01:45:00.000 | 0 | 6 | 0 | Generating random division problems in python | 57,155,636 | 0 | python,math | I think that @Vira has the right idea.
If you want to generate a and b such that a = b * q + r with r = 0, a good way to do it is:
Generate b randomly
Generate q randomly
Compute a = b * q
Ask to compute the division : a divided by b. The answer is q. | I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders.
With addition, subtraction and multiplication, I could just do [random number] t... | 0 | 1 | 766 |
0 | 57,155,658 | 0 | 0 | 0 | 0 | 5 | false | 0 | 2019-07-23T01:45:00.000 | 1 | 6 | 0 | Generating random division problems in python | 57,155,636 | 0.033321 | python,math | You can simply generate the divisor and quotient randomly and then compute the dividend. Note that the divisor must be nonzero (thanks to @o11c's remind). | I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders.
With addition, subtraction and multiplication, I could just do [random number] t... | 0 | 1 | 766 |
0 | 57,155,672 | 0 | 0 | 0 | 0 | 5 | false | 0 | 2019-07-23T01:45:00.000 | -1 | 6 | 0 | Generating random division problems in python | 57,155,636 | -0.033321 | python,math | you can generate a number to be divided as [random number1]x[random number2]
. The problem will then be [random number1]x[random number2] divided by [random number1]. | I need to generate random division problems for an educational game that I am building. Generating random addition, subtraction, and multiplication problems is not too difficult. But I want my division problems to not have any remainders.
With addition, subtraction and multiplication, I could just do [random number] t... | 0 | 1 | 766 |
0 | 57,165,277 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-07-23T05:02:00.000 | 1 | 1 | 0 | Bamboo error - ImportError: no module named pandas | 57,156,974 | 1.2 | python,bamboo | It sounds like you are running this Bamboo plan using the Agent Host and not a Docker Container. As such you will need to:
Remote/Log into the Bamboo server
Use pip or some other package tool to install requests and any other missing third-party imports (note that itertools is part of the standard library and does not need installing)
Alternatively, you could set-up an isolated Docker image that h... | I have created a Bamboo task which runs the python code from a BitBucket Repo.
Bamboo config:
I am running the script as a file.
I have selected interpreter as Shell and given this in the Script Body to execute the script python create_issue.py -c conf.yml
After I click on 'Run Plan', the build fails with ImportError... | 0 | 1 | 485 |
0 | 57,157,989 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-23T06:31:00.000 | 0 | 1 | 0 | how to get best fit line when we have data on vertical line? | 57,157,943 | 0 | python,linear-regression | Instead of fitting y as a function of x, in this case you should fit x as a function of y. | I started learning linear regression and I was solving this problem. When I draw a scatter plot between the independent variable and the dependent variable, I get vertical lines. I have 0.5M sample data points. The x-axis data is given within a range of, let's say, 0-20. In this case I am getting multiple target values for the same x-axis point, hence... | 0 | 1 | 850
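A hedged sketch of the swap suggested in the answer, on assumed toy data: regress x on y instead of y on x when the points stack vertically.

```python
import numpy as np

# toy data: a near-vertical line at x = 3 with a little noise
y = np.linspace(0.0, 10.0, 50)
x = 3.0 + np.random.normal(0.0, 0.1, size=y.shape)

slope, intercept = np.polyfit(y, x, deg=1)   # fit x = slope*y + intercept
print("x = %.3f * y + %.3f" % (slope, intercept))
```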
0 | 57,170,037 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-07-23T12:23:00.000 | 0 | 2 | 0 | detect objects in only specific region of the frame-yolo-opencv | 57,164,149 | 1.2 | python,opencv,image-processing,yolo | You can perform YOLO on the entire image as usual, but add an if condition to only draw boxes whose center falls in a specific region. Or you can add this condition (position) next to the IoU conditions (where detected boxes are filtered). You can also separate counting based on the direction of the moving vehicl... | I am counting the total number of vehicles in a video, but I want to detect only the vehicles which are travelling up (the roads have a divider), so my point is: can I use YOLO only on a rectangle where vehicles are moving up? I don't want to detect vehicles that are on the other side of the road.
Is there a way I can draw ... | 0 | 1 | 955
0 | 71,621,543 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-07-23T12:23:00.000 | 0 | 2 | 0 | detect objects in only specific region of the frame-yolo-opencv | 57,164,149 | 0 | python,opencv,image-processing,yolo | I'm doing a similar thing...
If your product is going to be fixed on something like a light pole, then clearly you can either detect the road and zebra crossing by training a model.
or
manually enter these values...
later run your object detection and object tracking on only these parts of the frames i.e, use
frame[ymax:ymin, xma... | I am counting the total no. of vehicles in a video, but I want to detect only the vehicles which are travelling up(roads have a divider) so my point is, Can i use yolo only on a rectangle where vehicles are moving up? I dont want to detect vehicles that are on the other side of the road.
is there a way like i can draw ... | 0 | 1 | 955 |
0 | 57,170,266 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-23T18:09:00.000 | 0 | 1 | 0 | Is ColumnDataSource() the only way to get plots updated in a bokeh web app? | 57,169,954 | 0 | python-3.x,dataframe,callback,bokeh | The short answer to the question in the title is "Yes". The ColumnDataSource is the special, central data structure of Bokeh. It provides the data for all the glyphs in a plot, or content in data tables, and automatically keeps that data synchronized on the Python and JavaScript sides, so that you don't have to, e.g wr... | My data is in a large multi-indexed pandas DataFrame. I re-index to flatten the DataFrame and then feed it through ColumnDataSource, but I need to group my data row wise in order to plot it correctly (think bunch of torque curves corresponding to a bunch of gears for a car). If I just plot the dictionary output of Colu... | 0 | 1 | 109 |
0 | 61,889,824 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-07-23T23:56:00.000 | -1 | 2 | 0 | How to best(most efficiently) read the first sheet in Excel file into Pandas Dataframe? | 57,173,573 | -0.099668 | python,pandas,python-2.7 | The method read_excel() reads the data into a Pandas Data Frame, where the first parameter is the filename and the second parameter is the sheet.
df = pd.read_excel('File.xlsx', sheet_name='Sheet1') | Loading the Excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in, but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially th... | 0 | 1 | 768
0 | 57,176,551 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-24T03:29:00.000 | 0 | 2 | 0 | When training neural networks, does Tensorflow automatically revert back to the best epoch after finishing? | 57,174,825 | 0 | python,tensorflow,machine-learning,keras,epoch | keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='auto', period=1) | If not, why not? Sometimes I will have an epoch that gets 95ish % and then finish with an epoch that has 10% or so less accuracy. I just never can tell whether it reverts back to that best epoch. | 0 | 1 | 98 |
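The answer above only quotes the callback signature; here is a minimal usage sketch, assuming an already-compiled Keras model plus x_train/y_train arrays, so the best epoch (not the last one) ends up in the weights you use:

```python
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss',
                             save_best_only=True, mode='auto')
model.fit(x_train, y_train, validation_split=0.2,
          epochs=30, callbacks=[checkpoint])

# fit() leaves the last epoch's weights in memory; reload the best ones:
model.load_weights('best_model.h5')
```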
0 | 57,181,431 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-07-24T09:11:00.000 | 0 | 2 | 0 | Does pandas read the full data file and stores it in a data frame? Is it efficient to load a 100mb file in pandas? | 57,179,262 | 0 | python,pandas,csv,data-science | Yes, the performance is affected and the system sometimes gets slow.
You can try to read the data in a table format, or you can use the chunksize parameter. This will improve the efficiency. | I want to load a file of around 100 MB using pandas. I know we can load it, but I want to know whether the file size affects the efficiency of the program, and whether there is any way to load the file efficiently. | 0 | 1 | 96
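A minimal sketch of the chunksize suggestion; the file name, chunk size, and the 'value' filter column are assumptions:

```python
import pandas as pd

# Stream the CSV in pieces instead of loading all of it at once, keeping
# only the rows of interest from each chunk.
chunks = pd.read_csv('big_file.csv', chunksize=100000)
df = pd.concat(chunk[chunk['value'] > 0] for chunk in chunks)
```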
0 | 57,180,683 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-24T10:10:00.000 | 0 | 1 | 0 | Minimization of known cost function without knowing original function | 57,180,431 | 0 | python,scipy,minimization,scipy-optimize | A function approximation algorithm needs you to make a few assumptions about how your mathematical model behaves.
If you see things from a black box point of view, three scenarios can occur -
X -> MODEL -> Y
You have the X and MODEL, but you don't have the Y; this is simulation.
You have the MODEL and Y, but you don't h... | I am trying to fit one function to another function by adjusting two parameters, but I don't know the form of this function. I only have a cost function, because LAMMPS (molecular dynamics) is used to compute the function. I need some tool to which I can give only the cost function and my guess, and it would then minimize... | 0 | 1 | 54
0 | 58,828,414 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2019-07-25T02:01:00.000 | 1 | 8 | 0 | Pandas-profiling error AttributeError: 'DataFrame' object has no attribute 'profile_report' | 57,193,292 | 0.024995 | python-3.x,pandas,pandas-profiling | This should work for those who want to use the latest version:
Run pip uninstall pandas_profiling from anaconda prompt (given you're using Spyder, I'd guess this would be your case) / or command prompt
Run pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip
If you're using something li... | I wanted to use pandas-profiling to do some eda on a dataset but I'm getting an error : AttributeError: 'DataFrame' object has no attribute 'profile_report'
I have created a python script on spyder with the following code :
import pandas as pd
import pandas_profiling
data_abc = pd.read_csv('abc.csv')
profile = data_ab... | 0 | 1 | 14,039 |
0 | 57,193,306 | 0 | 0 | 0 | 0 | 2 | false | 4 | 2019-07-25T02:01:00.000 | 0 | 8 | 0 | Pandas-profiling error AttributeError: 'DataFrame' object has no attribute 'profile_report' | 57,193,292 | 0 | python-3.x,pandas,pandas-profiling | The only workaround I found: the Python script I made executes from the command prompt and gives the correct output, but the code still raises the error in Spyder. | I wanted to use pandas-profiling to do some EDA on a dataset, but I'm getting an error: AttributeError: 'DataFrame' object has no attribute 'profile_report'
I have created a python script on spyder with the following code :
import pandas as pd
import pandas_profiling
data_abc = pd.read_csv('abc.csv')
profile = data_ab... | 0 | 1 | 14,039 |
0 | 57,195,148 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-07-25T05:01:00.000 | 0 | 1 | 0 | While training the model, always getting "Saving checkpoint to path training/model.ckpt" | 57,194,634 | 1.2 | python,opencv,tensorflow,deep-learning,artificial-intelligence | This message is informative. It's just telling you that the checkpoint of the model you're training was saved. In case you find the model "getting worse" after that checkpoint, you can cancel the training with the latest best version of the model saved.
Look for some tutorials for how to find out when the model is not ... | I am working on one project in Python using TensorFlow, but I am a complete beginner in TensorFlow and OpenCV. Recently I tried to train custom objects, but while training I always get one status:
"I0725 10:26:31.453798 5176 supervisor.py:1117] Saving checkpoint to path training/model.ckpt".
I don't know what exactly... | 0 | 1 | 352
0 | 57,204,226 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-25T14:04:00.000 | 0 | 1 | 0 | Grouping low frequency levels of categorical variables to improve machine learning performance | 57,203,915 | 0 | python,machine-learning | This isn't a programming question... By having fewer classes, you inherently increase the chance of randomly predicting the correct class.
Consider a stacked model (two models) where you have a primary model to classify between the overrepresented classes and the 'other' class, and then have a secondary model to class... | I'm trying to find ways to improve performance of machine learning models either binary classification, regression or multinomial classification.
I'm now looking at the topic categorical variables and trying to combine low occuring levels together. Let's say a categorical variable has 10 levels where 5 levels account f... | 0 | 1 | 392 |
0 | 57,207,926 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-07-25T15:37:00.000 | 1 | 3 | 0 | How can we be sure of the efficiency of a neural network | 57,205,718 | 0.066568 | python,machine-learning,neural-network,dataset,normalization | OP: Your questions are very good for someone that's just getting started in machine learning.
Have you ensured that the distribution of your training and test dataset are similar? I would try to keep the number of samples per class (label) about equal if possible. For instance, if your training set is severely imbalan... | I trained a Forward Neural Net for binary classification and I got an accuracy of 83%, which (I hope)I'm going to improve later by changing parameters in inputs. But some tests make me feel confused :
My dataset length is 671 so I divide it as 513 train set, 58 Validation set and 100 test set
When I change the siz... | 0 | 1 | 380 |
0 | 57,211,888 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-07-25T15:37:00.000 | 1 | 3 | 0 | How can we be sure of the efficiency of a neural network | 57,205,718 | 1.2 | python,machine-learning,neural-network,dataset,normalization | As suggested by many people, a 3:1:1 (60:20:20 = train-validate-test) ratio is a rule of thumb for splitting data. If you are working with a small dataset it is better to stick with an 80:20 or 70:30 train-test split; I usually go for a 90:10 ratio for better results.
Before you start with classification, first check whether your data set i... | I trained a Forward Neural Net for binary classification and I got an accuracy of 83%, which (I hope)I'm going to improve later by changing parameters in inputs. But some tests make me feel confused :
My dataset length is 671 so I divide it as 513 train set, 58 Validation set and 100 test set
When I change the siz... | 0 | 1 | 380 |
0 | 57,208,228 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-07-25T17:54:00.000 | 2 | 1 | 0 | How to prevent printing loss at each step while using the tensorflow object detection api? | 57,207,729 | 1.2 | python,tensorflow,deep-learning,google-colaboratory,object-detection-api | Add a ; at the end of the print statement if that is the last statement in the cell.
Add %%capture as the first line of the cell to suppress all output for that cell,
and for a specific function:
from IPython.utils import io
with io.capture_output() as captured:
    function() | I am training using the TensorFlow Object Detection API in Google Colab. I want to suppress printing the loss at each step, as the web page crashes after 30 minutes due to the large amount of text printed as the output of the cell. I have to manually clear the output of the cell every 30 minutes or so to avoid this ... | 0 | 1 | 290
0 | 57,230,509 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-27T06:34:00.000 | 0 | 3 | 0 | How can I properly split imbalanced dataset to train and test set? | 57,229,775 | 0 | python,machine-learning,train-test-split,imbalanced-data | Start from 50/50 and go on changing the splits to 60/40, 70/30, 80/20, 90/10. Record all the results and come to a conclusion. In one of my works, a flight delay prediction project, I used a 60/40 split and got 86.8% accuracy using an MLP NN. | I have a flight delay dataset and am trying to split it into train and test sets before sampling. On-time cases are about 80% of the total data and delayed cases are about 20%.
Normally in machine learning ratio of train and test set size is 8:2. But the data is too imbalanced. So considering extreme case, most of tra... | 0 | 1 | 1,879 |
0 | 57,231,506 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-27T08:53:00.000 | 1 | 1 | 0 | When does the weight of the model get updated if I am using Adam optimization? | 57,230,643 | 0.197375 | python-3.x,optimization,keras,deep-learning | Adam just changes how the gradient update is performed in gradient descent; it does not change when that happens, so it's literally the same as in normal gradient descent.
When using mini-batch gradient descent (the current standard), weight updates happen after every batch. | I know when the weight of model updated while using gradient descent(in all three types of GD) but in my case I am using adam optimization with custom loss(triplet loss), when the weight get updated in the model in this case? Is it after every sample,every batch or every epochs?
Thanks in advance. | 0 | 1 | 140 |
0 | 57,388,681 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-27T18:39:00.000 | 0 | 2 | 0 | Detect forehead points using Dlib/python | 57,235,066 | 0 | python,opencv,image-processing,deep-learning,dlib | You can use the tool provided with dlib called "imglab" to train your own shape predictor by performing landmark annotations. | Do we have any way to get points on the forehead of a face image?
I am using 68 points landmarks shape_predictor to get other points on the face but for this particular problem I need
points that are from the hairline to the center of the forehead.
Any suggestions would be helpful. | 0 | 1 | 922 |
0 | 57,244,533 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-28T19:32:00.000 | 0 | 3 | 0 | How come multiple regression has so many assumptions, while advanced machine learning algorithms have next to none? | 57,244,326 | 0 | python,r | There are a number of reasons for this, in my opinion.
Linear regression assumes your y to be linearly related to the variables, whereas tree-based models are considered non-linear models. (So the linearity assumption goes out the window.)
Breaking some of the linear regression assumptions may not inherently decrease the p... | I am analyzing a real-estate dataset. While all regression assumptions fail, my XGBoosting model thrives. Am I missing something? Is XGBoost just the superior model in this case? The dataset is around 67.000 observations and 30 variables. | 0 | 1 | 947 |
0 | 57,256,900 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-07-29T14:42:00.000 | 1 | 2 | 0 | How extract contrast level of a photo - opencv | 57,256,159 | 0.099668 | python,opencv,image-processing,colors,contrast | Contrast is usually understood as intensity contrast and can be computed on the Luminance component (Y). It is a measure of spread of the histogram, such as the standard deviation. | I need to return the (average) contrast value of an image.
Is this possible and if so, please show how?
More than just the code, (obviously the code is welcome), in what color space should I work? Is HSV appropriate or best? Can the contrast value of a single pixel be computed? | 0 | 1 | 3,574 |
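A minimal sketch of the measure described in the answer (standard deviation of the luminance channel); the file name is an assumption:

```python
import cv2

img = cv2.imread('photo.jpg')                      # assumed image path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)     # convert to luma/chroma
luma = ycrcb[:, :, 0]                              # Y channel only
print("contrast (std of Y): %.2f" % luma.std())    # RMS-style contrast
```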
0 | 57,549,978 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-07-29T14:50:00.000 | 0 | 2 | 0 | Tensorflow Serving number of requests in queue | 57,256,298 | 0 | python-3.x,tensorflow,prometheus,tensorflow-serving | What's more, you can assign the number of threads with the --rest_api_num_threads flag, or leave it empty and it will be configured automatically by TF Serving. | I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option. | 0 | 1 | 1,082
0 | 57,549,954 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2019-07-29T14:50:00.000 | 1 | 2 | 0 | Tensorflow Serving number of requests in queue | 57,256,298 | 1.2 | python-3.x,tensorflow,prometheus,tensorflow-serving | Actually, TF Serving doesn't have a request queue, which means that it won't rank the requests if there are too many of them.
The only thing TF Serving does is allocate a thread pool when the server is initialized.
When a request comes in, TF Serving will use an unused thread to ... | I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option. | 0 | 1 | 1,082
0 | 57,268,592 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-07-30T08:55:00.000 | 0 | 1 | 0 | Can I tell a machine learning model that the dependent variable is normally distributed? | 57,267,855 | 0 | python,machine-learning,scikit-learn,regression,supervised-learning | One way you could do this is by creating a custom objective function that penalizes predictions that are not normally distributed. | I am trying to set up a machine learning model predicting a continuous variable y on the basis of a feature vector (x1, x2, ..., xn). I know from elsewhere that y follows a normal distribution. Can I somehow specify this to the model and enhance its predictions this way? Is there a specific model that allows me to do t... | 0 | 1 | 65 |
0 | 62,347,635 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-07-30T10:01:00.000 | 1 | 1 | 0 | Is there a limit about image size when we train custom object with already trained models? | 57,269,090 | 1.2 | python,tensorflow,detection,yolo | Providing the solution here (Answer Section), even though it is present in the comment section (Thanks dasmehdix for the update) for the benefit of the community.
No, there is no limit on the image size that we use to train our model. | I already trained ssd_mobilenet_v2_coco with my custom dataset on TensorFlow. I also trained YOLO with my dataset. I solved all problems and they work.
I encountered a problem with both models. When my dataset includes images larger than 400 KB, the trained models do not work. Sometimes "allocation of memo... | 0 | 1 | 84
0 | 57,325,381 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-07-30T11:27:00.000 | 0 | 1 | 0 | Kernel Restarts upon failing(?) to import tensorflow | 57,270,586 | 0 | python,tensorflow | Ok so I did some black mathmagic and fixed it. What I did was reducing the tensorflow version via pip (to 1.14, but I don't think that matters) and then upgrading my entire conda set up again with conda update --all. I have no idea why, and the anaconda console screamed like it was tortured during the entire update, bu... | So I've recently updated my Anaconda environment, and now I can't import tensorflow anymore. Everytime I run a script containing it, the Spyder Console runs for a while, then just stops and resets to ln[1].
I tried to see how far the script compiles, and it does everything fine, until the import statement. Weirdly eno... | 0 | 1 | 316 |
0 | 60,646,118 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-07-31T03:16:00.000 | 1 | 1 | 0 | Word/Sentence similarity. What is the best approach? | 57,282,671 | 1.2 | python,nlp | I found two great solutions, using cosine similarity and Levenshtein distance. In my case, cosine similarity worked better, because I easily found part of the brand name in the text, getting a score of 100% accuracy. Matrix replacement (Levenshtein) was also good, but I got some errors due to very simila... | I need to build an algorithm for product master data purposes and I'm not sure about the best NLP approach for this. The scenario is:
- I have Product golden records;
- I have many others Product catalogs that need to be harmonized;
Example:
- Product Golden Record: Coke and Coke Zero;
- Products description that need ... | 0 | 1 | 104 |
0 | 57,290,376 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-07-31T10:20:00.000 | 1 | 1 | 0 | speed up pandas search for a certain value not in the whole df | 57,288,507 | 1.2 | python-3.x,pandas | Just to make a full answer out of my comment:
With -1 not in test1.values you can check if -1 is in your DataFrame.
Regarding the performance, this still needs to check every single value, which in your case is 10^5 * 10^2 = 10^7 checks.
You only save with this the performance cost for summation and an additional comparison of... | I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content.
I need to assert that it does not contain a certain value, let's say -1.
Using assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds.
Any idea how to speed it up? | 0 | 1 | 43 |
0 | 57,290,466 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-07-31T12:01:00.000 | 0 | 1 | 0 | Is there any way to change columns datatype that should be int became a float while using read_sql from table | 57,290,281 | 0 | python,pandas | Since your column contains NaN values, which are floating point numbers, I don't think you can avoid this 'issue' loading from the Database without changing the query.
If you wish to change the query, you can insert a WHERE clause that would exclude None values, or check if the row contains such a column value.
What I... | I am using the read_sql function to pull data from a PostgreSQL table. As I store that data in a dataframe, I find that some integer-dtype columns automatically get converted to float. Is there any way to prevent that while using the read_sql function? | 0 | 1 | 45
0 | 57,293,794 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-31T14:56:00.000 | 0 | 1 | 0 | what's the difference between tf.gfile.GFile().read() and cv2.imread() when it is about reading images | 57,293,678 | 0 | python,opencv,tensorflow | tf.gfile is basically for using a less conventional filesystem such as HDFS, etc.
cv2.imread(image) is for local filesystem usage. | When working with TensorFlow (image classification for example), sometimes images are loaded using cv2.imread(image) and other times they are loaded using tf.gfile.GFile(image, 'rb').read().
Are there any differences between cv2.imread(image) and tf.gfile.GFile(image, 'rb').read() when using them with TensorFlow?
E... | 0 | 1 | 452 |
0 | 57,294,007 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2019-07-31T14:58:00.000 | 2 | 3 | 0 | Sklearn PCA explained variance and explained variance ratio difference | 57,293,716 | 0.132549 | python,scikit-learn,pca,covariance | It's just a normalization that shows how important each principal component is. You can say:
explained_variance_ratio_ = explained_variance_/np.sum(explained_variance_) | I'm trying to get the variances from the eigen vectors.
What is the difference between explained_variance_ratio_ and explained_variance_ in PCA? | 0 | 1 | 10,300 |
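A quick numeric check of the relationship stated in the answer, on toy data; it holds when all components are kept, since the explained variances then sum to the total variance:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 4)        # toy data
pca = PCA().fit(X)                # keep all components

ratio = pca.explained_variance_ / pca.explained_variance_.sum()
assert np.allclose(ratio, pca.explained_variance_ratio_)
```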
0 | 57,297,560 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-07-31T19:03:00.000 | 1 | 3 | 0 | I want to extract a particular pattern from a content string: "Twitter for iPhone" | 57,297,369 | 0.066568 | python,regex,pandas,dataframe | Try df.col.str.extract(pat = '(Twitter for (iPhone|Samsung|others))') | I want to extract "Twitter for iPhone" part from this string.
But I have different values in the place of "Twitter for iPhone" in 1000s of columns in a dataframe. I only need the values after ">" and before "<" from the following set of strings.
I tried df.col.str.extract('(Twitter for iPhone|Twitter for Samsung|Twitte... | 1 | 1 | 259 |
0 | 57,310,189 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-07-31T23:21:00.000 | 1 | 2 | 0 | Finding the area under the curve of a probability distribution function in SciPy | 57,299,971 | 0.099668 | python,scipy,statsmodels,calculus,probability-density | Eureka! It's the .interval() method for a rv_continuous object in scipy.stats - just pass in your parameters and it will give you end points that contain that percentage of the distribution. | I'm working on a univariate problem which involves aggregating payment data on a customer level - so that I have one row per customer, and the total amount they've spent with us.
Using this distribution of payment data, I fit an appropriate probability distribution and calculated the maximum likelihood estimates for th... | 0 | 1 | 666 |
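A minimal sketch of the .interval() idea; the distribution family and its parameters are assumptions standing in for the fitted maximum likelihood estimates:

```python
from scipy import stats

loc, scale = 0.0, 50.0                       # pretend these came from the fit
low, high = stats.expon.interval(0.95, loc=loc, scale=scale)
print("95%% of the mass lies in [%.2f, %.2f]" % (low, high))
```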
0 | 57,308,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-01T10:01:00.000 | 2 | 1 | 0 | Should CountVectorizer be fit on both the train and test sets? | 57,306,519 | 1.2 | python,python-3.x,scikit-learn,countvectorizer | Generally the test_set should be kept unobserved, so the CountVectorizer should be only fitted on train_set | I have come across various articles online, some of which suggest that CountVectorizer should be fit on both the train and test sets, and some suggest that it should be fit only on the train set.
Which approach is generally better for text classification? | 0 | 1 | 96 |
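A minimal sketch of the recommended pattern, on an assumed toy corpus: fit the vocabulary on the training texts only, then merely transform the test texts.

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["spark reads csv", "pandas loads excel"]
test_docs = ["pandas reads csv"]

vec = CountVectorizer()
X_train = vec.fit_transform(train_docs)   # learns the vocabulary
X_test = vec.transform(test_docs)         # reuses it; unseen words are ignored
```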
0 | 57,445,839 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-08-01T13:37:00.000 | 6 | 1 | 1 | Can we disable h5py file locking for python file-like object? | 57,310,333 | 1 | python,python-3.6,hdf5,h5py | You just need to set the value to FALSE for the environment variable HDF5_USE_FILE_LOCKING.
Examples are as follows:
In Linux or MacOS via Terminal: export HDF5_USE_FILE_LOCKING=FALSE
In Windows via Command Prompts (CMD): set HDF5_USE_FILE_LOCKING=FALSE | When opening an HDF5 file with h5py you can pass in a python file-like object. I have done so, where the file-like object is a custom implementation of my own network-based transport layer.
This works great, I can slice large HDF5 files over a high latency transport layer. However HDF5 appears to provide its own file ... | 0 | 1 | 4,420 |
0 | 57,314,963 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-01T17:53:00.000 | 0 | 1 | 0 | Can I remove whiskers and outliers from Boxplot? | 57,314,544 | 0 | python,seaborn,boxplot | You can remove the outliers by setting showfliers=False and remove whiskers by setting whis=0. | Some people find it confusing with whiskers and outliers in a Boxplot.
Is it possible to remove those from the Boxplot in Seaborn? | 0 | 1 | 1,271 |
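A minimal sketch of both options named in the answer, using seaborn's bundled tips dataset (loading it assumes network access):

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
sns.boxplot(x="day", y="total_bill", data=tips,
            showfliers=False,   # hide the outlier points
            whis=0)             # collapse the whiskers onto the quartiles
plt.show()
```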
0 | 57,368,538 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-01T19:24:00.000 | 0 | 1 | 0 | feeding annotations as ground truth along with the images to the model | 57,315,783 | 0 | python,tensorflow,keras,computer-vision,object-detection | We need to feed the bounding boxes to the loss function. We need to design a custom loss function, preprocess the bounding boxes and feed it back during back propagation. | I am working on an object detection model. I have annotated images whose values are stored in a data frame with columns (filename,x,y,w,h, class). I have my images inside /drive/mydrive/images/ directory. I have saved the data frame into a CSV file in the same directory. So, now I have annotations in a CSV file and ima... | 0 | 1 | 75 |
0 | 57,335,390 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-08-02T14:20:00.000 | 0 | 3 | 0 | Detecting which words are the same between two pieces of text | 57,328,345 | 0 | python,algorithm | You can use dictionary to first store words from first text and than just simply look up while iterating the second text. But this will take space.
So best way is to use regular expressions. | I need some python advice to implement an algorithm.
What I need is to detect which words from text 1 are in text 2:
Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down
the meadow, enjoying the flower's scent."
Text 2: "Mary had a cat. The cat's name was Coco. He used to run down
the street, enj... | 0 | 1 | 77 |
0 | 57,328,491 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-08-02T14:20:00.000 | 0 | 3 | 0 | Detecting which words are the same between two pieces of text | 57,328,345 | 0 | python,algorithm | Since you do not show any work of your own, I'll just give an overall algorithm.
First, split each text into its words. This can be done in several ways. You could remove any punctuation then split on spaces. You need to decide if an apostrophe as in dog's is part of the word--you probably want to leave apostrophes in.... | I need some python advice to implement an algorithm.
What I need is to detect which words from text 1 are in text 2:
Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down
the meadow, enjoying the flower's scent."
Text 2: "Mary had a cat. The cat's name was Coco. He used to run down
the street, enj... | 0 | 1 | 77 |
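A small sketch of the split-then-intersect algorithm outlined above, keeping apostrophes inside words as the answer suggests:

```python
import re

def shared_words(text1, text2):
    words1 = set(re.findall(r"[a-z']+", text1.lower()))
    words2 = set(re.findall(r"[a-z']+", text2.lower()))
    return words1 & words2

t1 = "Mary had a dog. The dog's name was Ethan."
t2 = "Mary had a cat. The cat's name was Coco."
print(shared_words(t1, t2))   # {'mary', 'had', 'a', 'the', 'name', 'was'}
```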
0 | 57,339,152 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-08-03T14:13:00.000 | 2 | 1 | 0 | Is normalization necessary for RandomForest? | 57,339,104 | 1.2 | python,data-science,normalization,preprocessor,feature-engineering | 1) No! Feature normalization isn't necessary for any tree-based classifier.
2) Generally speaking, normalization should be done on all features not just numerical ones.
3) In practice it doesn't make much difference. However, the correct practice is to identify the min and max values of each feature from the training s... | 1) Is normalization necessary for Random Forests?
2) Should all the features be normalized or only numerical ones?
3) Does it matter whether I normalize before or after splitting into train and test data?
4) Do I need to pre-process features of the future object that will be classified as well? (after accepting the m... | 0 | 1 | 1,234 |
0 | 57,507,773 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-08-03T16:40:00.000 | 0 | 2 | 0 | Keras: Is there a need to reload the model if I train for 10 epochs multiple times? | 57,340,285 | 1.2 | python,tensorflow,keras,metrics | For anyone who might have the same issue: it seems that in TensorFlow 1.14, the Keras implementation keeps the model weights but restarts the optimizer, which leads to bad results over many repetitions of the .fit() function. My loss is about 800 when using .fit() once and about 2800 when fitting for 5 epochs each ... | I am training a model and want to use the mAP metric. For some reason the tensorflow mean_average_precision_at_k does not work for me, but the sklearn average_precision_score works.
How can I have access to the keras's model outputs to perform the sklearn metrics?
Can I compile the model one time and fit for 10 epochs,... | 0 | 1 | 258 |
0 | 57,344,741 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-08-03T16:40:00.000 | 1 | 2 | 0 | Keras: Is there a need to reload the model if I train for 10 epochs multiple times? | 57,340,285 | 0.099668 | python,tensorflow,keras,metrics | Can I compile the model one time and fit for 10 epochs, perform the metric and fit again for 10 epochs
Yes, absolutely.
The model will keep the training weights between calls to fit(). You can call this as many times as you please. | I am training a model and want to use the mAP metric. For some reason the tensorflow mean_average_precision_at_k does not work for me, but the sklearn average_precision_score works.
How can I have access to the keras's model outputs to perform the sklearn metrics?
Can I compile the model one time and fit for 10 epochs,... | 0 | 1 | 258 |
0 | 57,345,858 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-08-04T10:18:00.000 | 0 | 1 | 0 | How much should batch size and number of epochs be when fitting a model in Keras? | 57,345,714 | 0 | python,machine-learning,keras,neural-network | No! There is no rule of thumb for selecting the batch size. It's a trade-off between better accuracy and time, so we have to pick a batch size that processes our data fast and still gives good accuracy. Now, what happens when you take too large a batch size? Actually, after every batch your model is goin... | I am training a model with 107850 samples and validating on 26963 samples.
How much should batch size and number of epochs be when fitting a model in Keras to optimize the validation accuracy? Is there any sort of rule of thumb to use based on data input size? Does an increased number of epochs overfit the model?
... | 0 | 1 | 456 |
0 | 57,347,119 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-04T13:31:00.000 | 1 | 2 | 0 | How do I bulk download images (70k) from urls with a restriction on the simultaneous downloads? | 57,346,966 | 0.099668 | python,image,url,download,bulk | Try downloading in batches of, say, 500 images, then sleep for about a second and loop. It is quite time consuming, but a sure-fire method. For coding reference you can explore packages like urllib (for downloading), and as soon as you download a file, use os.rename() to change the name. As you already know, for that csv ... | I'm a bit clueless. I have a csv file with these columns: name - picture url
I would like to bulk download the 70k images into a folder, rename the images with the name in the first column and number them if there is more than one per name.
Some are JPEGs, some are PNGs.
I'm guessing I need to use pandas to get the dat... | 0 | 1 | 642 |
0 | 57,351,808 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-05T03:07:00.000 | 1 | 1 | 0 | Standard Deviation of every pixel in an image in Python | 57,351,759 | 1.2 | python,arrays,numpy,standard-deviation | Use slicing: given images[num, width, height], you may calculate the std. deviation of a single image using images[n].std(), or for a single pixel across images: images[:, x, y].std() | I have an image stored in a 2D array called data. I know how to calculate the standard deviation of the entire array using numpy that outputs one number quantifying how much the data is spread. However, how can I make a standard deviation map (of the same size as my image array) where each element in this array is the st... | 0 | 1 | 5,088
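For the per-pixel map over a single 2D image, a hedged sketch of a sliding-window standard deviation (the window size is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def std_map(data, size=5):
    data = data.astype(float)
    mean = uniform_filter(data, size)          # local mean
    mean_sq = uniform_filter(data ** 2, size)  # local mean of squares
    # var = E[x^2] - E[x]^2; clip tiny negatives caused by rounding
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
```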
0 | 57,353,337 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-05T06:28:00.000 | 0 | 1 | 0 | Convert each row in a PySpark DataFrame to a file in s3 | 57,353,211 | 0 | python,apache-spark,amazon-s3,pyspark,pyspark-sql | I think directly we can't store for each row as a JSON based file. Instead of that we can do like iterate for each partition of dataframe and connect to S3 using AWS S3 based library's (to connect to S3 on the partition level). Then, On each partition with the help of iterator, we can convert the row into JSON based fi... | I'm using PySpark and I need to convert each row in a DataFrame to a JSON file (in s3), preferably naming the file using the value of a selected column.
Couldn't find how to do that. Any help will be very appreciated. | 0 | 1 | 166 |
0 | 57,369,728 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-08-06T01:50:00.000 | 2 | 1 | 0 | Import XLS file from GCS to BigQuery | 57,367,921 | 0.379949 | excel,google-cloud-storage,airflow,xls,python-bigquery | BigQuery doesn't support the XLS format. The easiest way is to transform the file into CSV and load it into BigQuery.
However, I don't know your XLS layout. If it's multi-sheet you have to work on the file. | I have some .xls data in my Google Cloud Storage and want to use Airflow to store it to GCP. Can I export it directly to BigQuery, or can I use an additional library (such as pandas and xlrd) to convert the files and store them into BigQuery?
Thanks | 0 | 1 | 542 |
0 | 57,369,487 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-08-06T04:52:00.000 | 0 | 3 | 0 | what do hidden layers mean in a neural network? | 57,369,148 | 0 | python,tensorflow,neural-network | AFAIK, for this digit recognition case, one way to think about it is each level of the hidden layers represents the level of abstraction.
For now, imagine the neural network for digit recognition has only 3 layers which is 1 input layer, 1 hidden layer and 1 output layer.
Let's take a look at a number. To recognise tha... | in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.
I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer)
So for example, given the standard MNIST datset tha... | 0 | 1 | 395 |
0 | 57,369,294 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-08-06T04:52:00.000 | 0 | 3 | 0 | what do hidden layers mean in a neural network? | 57,369,148 | 0 | python,tensorflow,neural-network | Consider a very basic example of AND, OR, NOT and XOR functions.
You may already know that a single neuron is only suitable when the problem is linearly separable.
Here in this case, AND, OR and NOT functions are linearly separable and so they can be easily handled using a single neuron.
But consider the XOR function. It... | in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.
I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer)
So for example, given the standard MNIST datset tha... | 0 | 1 | 395 |
0 | 57,369,365 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-08-06T04:52:00.000 | 0 | 3 | 0 | what do hidden layers mean in a neural network? | 57,369,148 | 0 | python,tensorflow,neural-network | A hidden layer in a neural network may be understood as a layer that is neither an input nor an output, but instead is an intermediate step in the network's computation.
In your MNIST case, the network's state in the hidden layer is a processed version of the inputs, a reduction from full digits to abstract information... | in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.
I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer)
So for example, given the standard MNIST datset tha... | 0 | 1 | 395 |
0 | 57,383,549 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-08-06T13:48:00.000 | 2 | 3 | 0 | How to evaluate HDBSCAN text clusters? | 57,377,594 | 1.2 | python,cluster-analysis,evaluation,hdbscan | It's the same problem everywhere in unsupervised learning.
It is unsupervised; you are trying to discover something new and interesting. There is no way for the computer to decide whether something is actually interesting or new. It can only decide trivial cases, where the prior knowledge is coded in machine-processable fo... | I'm currently trying to use HDBSCAN to cluster movie data. The goal is to cluster similar movies together (based on movie info like keywords, genres, actor names, etc) and then apply LDA to each cluster and get the representative topics. However, I'm having a hard time evaluating the results (apart from visual analysis...
0 | 57,378,682 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-06T14:09:00.000 | 1 | 1 | 0 | Apache Nifi : I want to ingest my Data CSV to Elasticsearch without streaming it to some other processor using apache nifi | 57,378,012 | 0.197375 | python,csv,elasticsearch,apache-nifi | The output of the command executed by ExecuteStreamCommand will be written to a flow file that is transferred to the "output stream" relationship. You should be able to connect ExecuteStreamCommand "output stream" directly to PutElasticSearch. | I am trying to setup a simple process to modify my CSV file and ingest it to the elasticsearch DB using Apache Nifi. I don't want to stream my CSV file on Stdout, while passing my file from one processor to another.
I've already made two flows.
My first flow gets my CSV file using the GetFile processor, customizes it using E... | 0 | 1 | 312
0 | 57,393,865 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2019-08-07T11:48:00.000 | 2 | 4 | 0 | why does numpy where keep raising divide by zero encountered? | 57,393,768 | 1.2 | python,numpy,where-clause | Since it takes advantage of vectorization, it will execute 100/np.sqrt(w) for every element of w, so the division by 0 will happen, but then you are not taking the results associated with these entries. So basically, with your trick you are still dividing by 0, just not using the entries where you divided by 0. | I have an array like w=[0.854,0,0.66,0.245,0,0,0,0] and want to apply 100/sqrt(x) on each value. As I can't divide by 0, I'm using this trick :
w=np.where(w==0,0,100/np.sqrt(w))
As I'm not dividing by 0, I shouldn't have any warning.
So why does numpy keep raising RuntimeWarning: divide by zero encountered in true_divi... | 0 | 1 | 615 |
0 | 57,393,853 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-08-07T11:48:00.000 | 2 | 4 | 0 | why is numpy where keep raising divide by zero encountered? | 57,393,768 | 0.099668 | python,numpy,where-clause | 100/np.sqrt(w) still uses the w with zeros because function arguments are evaluated before executing a function. The square root of zero is zero, so you end up dividing 100 by an array that contains zeros, which in turn attempts to divide 100 by each element of this array and at some point tries to divide it by an elem... | I have an array like w=[0.854,0,0.66,0.245,0,0,0,0] and want to apply 100/sqrt(x) on each value. As I can't divide by 0, I'm using this trick :
w=np.where(w==0,0,100/np.sqrt(w))
As I'm not dividing by 0, I shouldn't have any warning.
So why does numpy keep raising RuntimeWarning: divide by zero encountered in true_divi... | 0 | 1 | 615 |
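Two hedged ways to make the warning go away entirely, shown on the array from the question: compute only on the nonzero entries, or silence the spurious warning locally.

```python
import numpy as np

w = np.array([0.854, 0, 0.66, 0.245, 0, 0, 0, 0])

# Option 1: the division never sees a zero.
out = np.zeros_like(w)
mask = w != 0
out[mask] = 100 / np.sqrt(w[mask])

# Option 2: keep np.where but suppress the warning for this expression.
with np.errstate(divide="ignore"):
    out2 = np.where(w == 0, 0, 100 / np.sqrt(w))
```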
0 | 57,400,930 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-08-07T19:10:00.000 | 4 | 1 | 0 | Classification List Comprehension Not Behaving as Expected | 57,400,902 | 1.2 | python | -1 if x < .1 else 0 should be -1 if x < -.1 else 0 | I have built a list comprehension that takes in a list of lists [actual, predicted] and then classifies the contained lists. I want to produce output lists that have 1 if the original element is > .1, -1 if the original output is < -.1 and 0 if the original element is < .1 and > -.1. For example [[2, 0, -2],[0, 0, 0]] ... | 0 | 1 | 42 |
0 | 57,402,233 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2019-08-07T20:42:00.000 | 2 | 1 | 0 | Plotly Express hover options | 57,402,050 | 1.2 | python,hover,plotly,plotly-dash,plotly-express | 1 and 2 are not possible yet.
For 3 there is a hovermode attribute in layout that you can set to show one hover label per trace per y-value. | In plotly_express.line the only options I see to modify hover settings are hover_name and hover_data. A few issues I'm facing with modifying hover are:
It seems that even if I set hover_data=None it still shows the values for name,x, and y. How can I set it to only show the hover info I select without adding defaults?... | 0 | 1 | 2,985 |
0 | 57,410,794 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-08T10:39:00.000 | 0 | 1 | 0 | Create an ID segmented images | 57,410,686 | 1.2 | python,machine-learning,computer-vision,dataset,image-segmentation | From what I understand, your segmented images come with 3 channels, where the color of each pixel corresponds to its GT label.
When you train your image segmentation model, there is no need for an output of 3 channels (its redundant), so they recommend that you create a new annotated image, where you replace each color... | I have a dataset containing both RGB and Segmented images as ground truth , the readme.txt included in the annotated dataset stated this :
GT_color: folder containing the ground-truth masks for semantic segmentation
Annotations are given using a color representation, where each color corresponds to a specific class. This is ... | 0 | 1 | 83 |
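A minimal sketch of the conversion the answer describes: mapping each annotation color to an integer class ID. The palette below is hypothetical; the real color-to-class mapping should come from the dataset's readme.txt.

```python
import numpy as np

# Hypothetical color -> class-ID palette; replace with the dataset's own mapping
PALETTE = {(128, 64, 128): 0, (244, 35, 232): 1, (70, 70, 70): 2}

def color_to_id(gt_color):
    """Convert an (H, W, 3) color-coded mask into an (H, W) integer label map."""
    label_map = np.zeros(gt_color.shape[:2], dtype=np.uint8)
    for color, class_id in PALETTE.items():
        mask = np.all(gt_color == np.array(color), axis=-1)
        label_map[mask] = class_id
    return label_map
```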
0 | 57,426,426 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-09T06:13:00.000 | 0 | 2 | 0 | How can I identify the records inside each cluster in a KNN model in SciKit-Learn Python? | 57,424,338 | 1.2 | python,scikit-learn,label,knn | KNN is not clustering, but classification.
The parameter k is not the k of k-means; it is the number of neighbors, not the number of clusters...
Hence, setting k to 5 does not suddenly produce 5 labels. Your training data has 2 labels, hence you get 2 labels.
KNN = k-nearest neighbors classification. For k=5 this means ... | I am making a KNN model. The target variable is divided in 2 categories, and the features are 3 categorical variables (country, language and company). The model says the optimal is 5 clusters, so I did it with 5.
I need to know how I can see the records in each of the 5 clusters (I mean, the countries, languages and co... | 0 | 1 | 112 |
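Since KNN has no clusters to list, what you can inspect instead are the neighbors of a query point. A minimal sketch with toy, hypothetically encoded features (country, language, company as integers):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy encoded features: (country, language, company); values are hypothetical
X = np.array([[0, 1, 2], [1, 0, 2], [0, 1, 1], [2, 2, 0], [1, 1, 1], [0, 0, 2]])
y = np.array([0, 1, 0, 1, 0, 1])  # the two target categories

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# For any query point, list which training records are its 5 nearest neighbors
distances, indices = knn.kneighbors([[0, 1, 2]])
print(indices)  # row indices into X of the 5 nearest training records
```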
0 | 57,427,988 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-08-09T09:16:00.000 | 0 | 1 | 0 | How to run python script using S3 data in AWS | 57,426,946 | 0 | python-3.x,amazon-s3,aws-lambda,job-scheduling | You have several options:
Use AWS Lambda, but Lambda has limited local storage (500 MB) and memory (3 GB), with a 15-minute run time limit.
Since you mentioned Pandas, I recommend using AWS Glue, which has the ability to:
Detect new file
Large Mem, CPU supported
Visual data flow
Support Spark DF
Ability to query data from your CSV files
Conne... | I have a CSV file in S3. I want to run a python script using data present in S3. The S3 file will change once in a week. I need to pass an input argument to my python script which loads my S3 file into Pandas and do some calculation to return the result.
Currently I am loading this S3 file using Boto3 in my server for ... | 0 | 1 | 389 |
0 | 57,456,535 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-09T11:49:00.000 | 3 | 1 | 0 | Difference between DISTRIBUTE BY and Shuffle in Spark-SQL | 57,429,479 | 0.53705 | python,apache-spark,apache-spark-sql,pyspark-sql | Let me try to answer each part of your question:
As per my understanding, the Spark Sql optimizer will distribute the datasets of both the participating tables (of the join) based on the join keys (shuffle phase) to co-locate the same keys in the same partition. If that is the case, then if we use the distribute by in... | I am trying to understand Distribute by clause and how it could be used in Spark-SQL to optimize Sort-Merge Joins.
As per my understanding, the Spark Sql optimizer will distribute the datasets of both the participating tables (of the join) based on the join keys (shuffle phase) to co-locate the same keys in the same p... | 0 | 1 | 2,866 |
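As a sketch of the DataFrame-side equivalent: repartition(col) is the counterpart of SQL's DISTRIBUTE BY, hash-partitioning each side on the join key ahead of the sort-merge join (the toy frames below are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df_a = spark.range(1000).withColumnRenamed("id", "join_key")
df_b = spark.range(1000).withColumnRenamed("id", "join_key")

# repartition("join_key") plays the role of DISTRIBUTE BY join_key:
# both sides are hash-partitioned on the key before the join
joined = df_a.repartition("join_key").join(df_b.repartition("join_key"), "join_key")
joined.explain()  # inspect the plan for Exchange (shuffle) operators
```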
0 | 57,441,200 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-10T08:49:00.000 | 0 | 1 | 0 | How to use directional data as feature using scikit-learn knn? | 57,440,714 | 0 | python,scikit-learn,knn,gridsearchcv | You can break the direction column, theta, into 2 columns, sin(theta) and cos(theta), both of which would have continuity. | I'm new to using KNN, and in my train set I have a velocity vector. Since the directions 359° and 0° are treated as completely different, I was thinking of transforming the direction so that the vector in the test data points to 180°.
I could make this transformation before using KNeighborsClassifier if I predict from one data point, ... | 0 | 1 | 68 |
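A minimal sketch of that encoding (the column name is hypothetical): 359° and 0° become nearly identical (sin, cos) pairs, which is exactly the continuity a distance-based model like KNN needs.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"direction_deg": [0, 90, 180, 359]})  # hypothetical column

theta = np.deg2rad(df["direction_deg"])
df["dir_sin"] = np.sin(theta)
df["dir_cos"] = np.cos(theta)

print(df)  # 0° and 359° now have almost the same (dir_sin, dir_cos) features
```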
0 | 57,444,559 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-08-10T17:59:00.000 | 1 | 2 | 0 | How to save a figure in matplotlib by its variable name? | 57,444,490 | 0.099668 | python,matplotlib | You can save individual figures by:
figX.savefig('figX_file.png')
save... | 0 | 1 | 209 |
0 | 57,450,598 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-11T06:13:00.000 | 0 | 1 | 0 | Why does not using retain_graph=True result in error? | 57,447,736 | 0 | python,neural-network,deep-learning,pytorch | By default, PyTorch doesn't store intermediate gradients, because PyTorch's main feature is dynamic computational graphs: after backpropagation the graph is freed and all the intermediate buffers are destroyed. | If I need to backpropagate through a neural network twice and I don't use retain_graph=True, I get an error.
Why? I realize it is nice to keep the intermediate variables used for the first backpropagation to be reused for the second backpropagation. However, why aren't they simply recalculated, like they were original... | 0 | 1 | 55 |
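A minimal sketch of the behavior, assuming a recent PyTorch: the first backward pass frees the graph's buffers unless retain_graph=True, so a second pass over the same graph would raise a RuntimeError.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * 3).sum()

y.backward(retain_graph=True)  # keep the intermediate buffers alive
y.backward()                   # works; without retain_graph above, this raises
print(x.grad)                  # gradients from both passes accumulate
```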
0 | 57,454,549 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-11T22:56:00.000 | 0 | 1 | 0 | How to get the rand() function in excel to rerun when accessing an excel file through python | 57,454,186 | 1.2 | excel,python-3.x,pandas,xlrd,xlwt | Excel formulas like RAND(), or any other formula, will only refresh when Excel is actually running and recalculating the worksheet.
So, even though you may be accessing the data in an Excel workbook with Python, you won't be able to run Excel calculations that way. You will need to find a different approach. | I am trying to access an Excel file using Python for my physics class. I have to generate data that follows a function but creates variance so it doesn't line up perfectly with the function (simulating the error experienced in experiments). I did this by using the rand() function. We need to generate a lot of data sets s... | 0 | 1 | 193 |
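One such approach, sketched minimally: regenerate the noisy data in Python itself instead of relying on Excel's RAND(). The function and noise scale below are hypothetical stand-ins for the assignment's actual formula.

```python
import numpy as np

x = np.linspace(0, 10, 100)
noise = np.random.normal(scale=0.5, size=x.shape)  # plays the role of RAND()
y = 2 * x + 1 + noise  # hypothetical target function plus experimental scatter

# Each run produces a fresh data set; no Excel recalculation needed
```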
0 | 57,469,658 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-12T10:25:00.000 | 0 | 1 | 0 | SSd_mobilenet loss cannot go down | 57,459,492 | 0 | python,tensorflow,object-detection-api | I would first try with images of size 300 x 300, or use a library which will downscale your images and the bounding boxes. | I am trying to train ssd-mobilenet on my own dataset:
training images: 3400, size 1600x1200
test set: 800 images, size 1600x1200
tensorflow-gpu: 1.13.1, GPU: 4 GB
CUDA 10.0, cuDNN 7
object: road damage, such as alligator cracks
but after 197,000 steps my training loss will not go below 2.
I need help. Thanks in advance | 0 | 1 | 274 |
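A minimal sketch of the downscaling the answer suggests, assuming axis-aligned [x1, y1, x2, y2] boxes in pixel coordinates; a real pipeline should apply this consistently to both training and test images.

```python
import cv2

def resize_with_boxes(image, boxes, size=(300, 300)):
    """Resize an image to `size` (width, height) and rescale its boxes.

    boxes: list of [x1, y1, x2, y2] in pixel coordinates of the original image.
    """
    h, w = image.shape[:2]
    sx, sy = size[0] / w, size[1] / h
    resized = cv2.resize(image, size)
    scaled = [[x1 * sx, y1 * sy, x2 * sx, y2 * sy] for x1, y1, x2, y2 in boxes]
    return resized, scaled
```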
0 | 57,460,928 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-12T12:04:00.000 | 2 | 1 | 0 | Detect text only inside detected objects | 57,460,861 | 1.2 | python,opencv,yolo | Well, the first one that pops into my mind would be to crop the objects detected with YOLO and then run the OCR on that image. | I'm very new to Computer Vision. I'm trying to build a CV model which will detect and recognize price tags and extract info from them. I've already trained a model that can detect price tags using YOLO. But I also want to teach my system to detect and recognize text that is written only inside these price tags. Then parse th... | 0 | 1 | 71 |
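A minimal sketch of that crop-then-OCR step; pytesseract is one possible OCR backend (an assumption here, not something the question specifies, and it requires a local Tesseract install), and `detections` is a hypothetical list of YOLO boxes.

```python
import pytesseract  # assumed OCR backend; hypothetical choice

def read_price_tags(image, detections):
    """Run OCR on each detected price-tag crop.

    detections: hypothetical list of (x1, y1, x2, y2) integer boxes from YOLO.
    """
    texts = []
    for x1, y1, x2, y2 in detections:
        crop = image[y1:y2, x1:x2]  # numpy slicing: rows first, then columns
        texts.append(pytesseract.image_to_string(crop))
    return texts
```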
0 | 57,510,905 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-12T16:37:00.000 | 0 | 1 | 0 | Rasa NLU model to old | 57,465,038 | 1.2 | python,anaconda,rasa-nlu,rasa-core | I believe you trained this model on a previous version of Rasa NLU and updated Rasa NLU to a new version (Rasa NLU is a dependency for Rasa Core, so changes were made in the requirements.txt file).
If this is the case, there are 2 ways to fix it:
Recommended solution. If you have data and parameters, train your NLU model... | I have a problem. I am trying to use my model with Rasa core, but it gives me this error:
rasa_nlu.model.UnsupportedModelError: The model version is to old to
be loaded by this Rasa NLU instance. Either retrain the model, or run
with an older version. Model version: 0.14.6 Instance version: 0.15.1
Does someone kno... | 1 | 1 | 879 |
0 | 57,467,836 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-08-12T20:13:00.000 | 6 | 2 | 0 | Reshaping Image for PyTorch | 57,467,707 | 1.2 | python,image,opencv,keras,pytorch | First, Keras format is (samples, height, width, channels).
All you need to do is moved = numpy.moveaxis(data, -1, 1)
If by luck you were using the non-default config "channels_first", then the config is identical to that of PyTorch, which is (samples, channels, height, width).
And when transforming to torch: data = ... | I used to use keras and the image format it followed is [Height x Width x Channels x Samples]. i decided to switch to PyTorch. But i didn’t switch out my data loading schemes. So now i have numpy arrays of shape HxWxCxS, instead of SxCxHxW which is required for PyTorch. Does anyone have any idea to convert this ? | 0 | 1 | 2,309 |
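Note the question's arrays are H x W x C x S rather than Keras's default S x H x W x C, so the exact moveaxis call differs; a minimal sketch for the H x W x C x S case (the shapes are hypothetical):

```python
import numpy as np
import torch

data = np.zeros((64, 64, 3, 10))  # H x W x C x S, as described in the question

# Move axis 3 (samples) to the front and axis 2 (channels) to second place
moved = np.moveaxis(data, [3, 2], [0, 1])  # -> S x C x H x W
tensor = torch.from_numpy(np.ascontiguousarray(moved))
print(tensor.shape)  # torch.Size([10, 3, 64, 64])
```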
0 | 62,476,273 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-12T20:45:00.000 | 2 | 1 | 0 | Corrected P-value returned as NaN | 57,468,030 | 0.379949 | python,statsmodels | Check for NaNs in your p-value array. using multipletests on an array with any NaNs will return all NaNs, as you describe. | I have attempted to run a FDR correction on an array of p-values using both statsmodels.stats.multitest's multipletests(method='fdr_bh') and fdrcorrection. In both instances I receive an array of NaN's as the corrected p-values. I do not understand why the corrected p-value is returning as NaN. Could someone please hel... | 0 | 1 | 694 |
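A minimal sketch of the usual workaround: run multipletests only on the finite p-values and scatter the results back (multipletests returns four values; only the first two are used here).

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.01, np.nan, 0.04, 0.20])
mask = ~np.isnan(pvals)

reject = np.full(pvals.shape, False)
corrected = np.full(pvals.shape, np.nan)
reject[mask], corrected[mask], _, _ = multipletests(pvals[mask], method="fdr_bh")

print(corrected)  # corrected p-values; NaN remains only where the input was NaN
```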
0 | 57,482,532 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-13T17:03:00.000 | 1 | 1 | 0 | How to compare frequency of unigrams with frequencies of bigrams, trigrams, etc? | 57,482,331 | 0.197375 | python,text,nlp,nltk | As you've asked this, there is no simple linear multiplier. You can make a general estimate by the size of your set of units. Consider the English alphabet of 26 letters: you have 26 possible unigrams, 26^2 digrams, 26^3 trigrams, ... Simple treatment suggests that you would multiply a digram's frequency by 26 to com... | I want to build a word cloud containing multiple word structures (not just one word). In any given text we will have bigger frequencies for unigrams than bigrams. Actually, the n-gram frequency decreases when n increases for the same text.
I want to find a magic number or a method to obtain comparative results between ... | 0 | 1 | 442 |
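A minimal sketch of that scaling idea applied to words rather than letters: the vocabulary size V plays the role of the 26-letter alphabet, so each extra token in an n-gram earns one factor of V. The toy text is hypothetical, and this is a crude estimate, not an exact correction.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the quick fox"
tokens = text.split()
V = len(set(tokens))  # vocabulary size, analogous to the 26-letter alphabet

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

# Crude comparability: one factor of V per extra token in the n-gram
scaled_bigrams = {bg: count * V for bg, count in bigrams.items()}
print(unigrams.most_common(3), list(scaled_bigrams.items())[:3])
```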
0 | 57,485,755 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-13T21:41:00.000 | 1 | 1 | 0 | How can i generate Data from .gdf files using Jupyter notebook? | 57,485,659 | 1.2 | python | I recommend using the gdflib library. It'll allow you to process your .gdf files by organizing your data into nodes for further processing.
It would also help if you could please provide a minimal reproducible example of what you have tried. | I'm preparing my dataset to be preprocessed before training with a CNN model, but I couldn't generate data from this type of file, which contains several signals. | 0 | 1 | 198 |
0 | 57,488,153 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-14T01:33:00.000 | 1 | 1 | 0 | Does it make sense to use a part of the dataset to train my model? | 57,487,124 | 0.197375 | python,machine-learning,xgbclassifier | To answer your first question, deleting the part of the dataset that doesn't work is not a good idea because then your model will overfit on the data that gives better numbers. This means that the accuracy will be higher, but when presented with something that is slightly different from the dataset, the probability of ... | The dataset I have is a set of quotations that were presented to various customers in order to sell a commodity. Prices of commodities are sensitive and standardized on a daily basis and therefore negotiations are pretty tricky around their prices. I'm trying to build a classification model that had to understand if a ... | 0 | 1 | 53 |
0 | 57,491,074 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-14T08:34:00.000 | 1 | 1 | 0 | Mask-RCNN project | 57,491,013 | 1.2 | python,python-3.x,deep-learning | A pre-trained model is a model that has usually been trained with a big dataset using big computers, and can be fine-tuned for a given problem using small amounts of computation. This is possible to do with Deep Learning, where training a model consists of adjusting some matrices of weights. When we refer to pre-trained... | I am currently doing a group project about Semantic Segmentation and need to train a model with our own data set. The problem is the data set is not available in any pre-trained model, since the objective is to detect each part of sneakers (e.g. lace, outsole, front patch, logos, etc.). None of our team members has ever studied dee... | 0 | 1 | 103 |
0 | 57,493,372 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-08-14T10:55:00.000 | 1 | 1 | 0 | df.set_index=("Neighbourhood",inplace=True) giving me SyntaxError: invalid syntax | 57,493,320 | 1.2 | python,python-3.x,indexing,methods | set_index is a function, you need to call it.
try df.set_index("Neighbourhood",inplace=True) (without the =) | All of my previous code runs well.
It is only when I try to set the index to a particular column as the code below shows, that I run into an error.
Honestly - this same method has worked before and I have not been able to find any other method to do the same thing.
df.set_index=("Neighbourhood",inplace=True)
Error... | 0 | 1 | 499 |
0 | 57,539,079 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2019-08-15T00:11:00.000 | 6 | 1 | 0 | ModuleNotFoundError: no module named efficientnet.tfkeras | 57,503,473 | 1.2 | python,keras | To install segmentation-models use the following command: pip install git+https://github.com/qubvel/segmentation_models | I attempted to do import segmentation_models as sm, but I got an error saying efficientnet was not found. So I then did pip install efficientnet and tried it again. I now get ModuleNotFoundError: no module named efficientnet.tfkeras, even though Keras is installed as I'm able to do from keras.models import * or anythin... | 0 | 1 | 10,143 |
0 | 57,529,064 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-08-15T04:06:00.000 | 0 | 1 | 0 | Setting up keras and tensoflow to operate with AMD GPU | 57,504,746 | 1.2 | python-3.x,tensorflow,keras,gpu,amd | To the best of my knowledge, PlaidML was not working because I did not have the required prerequisites, such as OpenCL. Once I downloaded the Visual Studio C++ build tools in order to install PyOpenCL from a .whl file, the issue seemed to be resolved. | I am trying to set up Keras in order to run models using my GPU. I have a Radeon RX580 and am running Windows 10.
I realized that CUDA only supports NVIDIA GPUs and was having difficulty finding a way to get my code to run on the GPU. I tried downloading and setting up plaidml, but afterwards from tensorflow.python.... | 0 | 1 | 435 |
0 | 57,507,841 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-15T09:40:00.000 | 0 | 1 | 0 | What is the suitable technique for discrete classification and variable optimization problem? | 57,507,740 | 0 | python,optimization,classification | What I would do in this case is to use a finite-differences gradient approach to solve the problem. For that, you can follow these steps:
1) Select a customer with a "Bad" prediction, increase and decrease their variables one by one a little bit, and check the prediction.
2) That way you'll see which sign in each variable... | I get a simple task request to perform a classification to predict classes, i.e. between 'Good' and 'Bad' customers. The problem is, a recommendation is needed on suitable variables' values for those customers with a 'Bad' prediction, so that they can take action to improve their profile. Examples of variables are 'Purchase S... | 0 | 1 | 28 |
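A minimal sketch of that finite-difference probing, assuming a fitted classifier exposing predict_proba and purely numeric features; the helper and its names are hypothetical.

```python
import numpy as np

def feature_sensitivity(model, x, good_class=1, eps=1e-2):
    """Finite-difference sensitivity of P('Good') to each feature of one customer.

    model: any fitted classifier with predict_proba (hypothetical here).
    x: 1-D numeric feature vector for a customer predicted 'Bad'.
    """
    base = model.predict_proba(x.reshape(1, -1))[0, good_class]
    grads = np.zeros(len(x))
    for i in range(len(x)):
        bumped = x.astype(float).copy()
        bumped[i] += eps
        grads[i] = (model.predict_proba(bumped.reshape(1, -1))[0, good_class]
                    - base) / eps
    return grads  # positive entry: increasing that variable moves toward 'Good'
```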
0 | 57,524,523 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-08-16T12:02:00.000 | 3 | 1 | 0 | Using CP-SAT Solver for non-linear objective function | 57,524,316 | 1.2 | python,or-tools,cp-sat-solver | You have to create an intermediate variable using AddMultiplicationEquality(x2, [x, x]) | I'm trying to use CP-SAT solver with some variables: x,y. I want to maximise an objective function of the form x**2-y*x with some constraints. I'm getting
TypeError: unsupported operand type(s) for ** or pow(): 'IntVar' and
'int'
error messages. Am I correct in assuming I cannot use a nonlinear objective function for... | 0 | 1 | 671 |
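A minimal sketch of the full pattern for the objective x**2 - y*x: each nonlinear term gets its own intermediate variable via AddMultiplicationEquality, after which the objective is linear in those variables (the bounds below are hypothetical).

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
x_sq = model.NewIntVar(0, 100, "x_sq")
xy = model.NewIntVar(0, 100, "xy")

model.AddMultiplicationEquality(x_sq, [x, x])  # x_sq == x * x
model.AddMultiplicationEquality(xy, [x, y])    # xy == x * y
model.Maximize(x_sq - xy)  # x**2 - y*x, now linear in x_sq and xy

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.ObjectiveValue())
```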
0 | 57,528,483 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-16T13:58:00.000 | 0 | 1 | 0 | Plotly Express hovermode with arbitrary column | 57,525,950 | 1.2 | python,hover,plotly,plotly-python,plotly-express | No, there’s no feature for that at the moment. You could consider a second set of “iso-threshold” curves to see this kind of thing perhaps? | I see that the hovermode attribute in layout has options for x or y, but is it possible to use an arbitrary dataframe column? instead?
For example, I'm plotting precision-recall curves. The x-axis is recall, and the y-axis is precision. The independent variable is a detection threshold value (with range of np.linspace(... | 0 | 1 | 544 |
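While hovermode can't key on an arbitrary column, the threshold can at least be surfaced in the hover label via custom_data and a hovertemplate; a minimal sketch with a toy precision-recall curve:

```python
import numpy as np
import plotly.express as px

recall = np.linspace(0, 1, 20)
precision = 1 - recall**2                 # toy precision-recall curve
thresholds = np.linspace(0.99, 0.01, 20)  # the independent variable

fig = px.line(x=recall, y=precision, custom_data=[thresholds])
fig.update_traces(
    hovertemplate="recall=%{x:.2f}<br>precision=%{y:.2f}"
                  "<br>threshold=%{customdata[0]:.2f}"
)
fig.show()
```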
0 | 57,539,279 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-17T18:15:00.000 | 1 | 3 | 0 | What input shape should I take in first layer of Sequential model when the dimensions of the images are (2048*1536) | 57,538,810 | 0.066568 | python,keras,deep-learning,dataset | I would first resize the images with cv2.resize(). Do you really need all the information from such a big image?
For a Sequential model, it would follow, for example:
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(height, width, ndim)))
...,
where height and width denote your i... | I have an image dataset where each image has dimensions (2048, 1536). In ImageDataGenerator, to fetch data from the directory, I have used the same target size, i.e. (2048, 1536), but when making the Sequential model's first layer, what input shape should I use? Will it be the same as (2048, 1536), or can I take any random sha... | 0 | 1 | 347 |
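A self-contained sketch of the answer's suggestion, with hypothetical downscaled dimensions; note that cv2.resize expects (width, height) order, the reverse of the input_shape tuple.

```python
import cv2
from tensorflow.keras import layers, models

height, width, ndim = 256, 192, 3  # hypothetical downscaled size and channels

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation="relu",
                        input_shape=(height, width, ndim)))

img = cv2.imread("example.jpg")          # hypothetical file
img = cv2.resize(img, (width, height))   # note the (width, height) order
```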
0 | 57,784,170 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-08-18T04:05:00.000 | -1 | 2 | 0 | Tensorflow shows only "successfully opened CUDA library libcublas.so.10.0 locally" and nothing about cudnn | 57,541,567 | -0.099668 | python,tensorflow | Actually, I always rely on a stable setup, and I have tried most of the TF-CUDA-cuDNN version combinations. The most stable for me was TF 1.9.0, CUDA 9.0, cuDNN 7; I used it for a long time without a problem. You should give it a try if it suits you. | My tensorflow only prints out the line:
I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally when running.
TensorFlow logs on the net show lots of other libraries being loaded, like libcudnn.
As I think my installation performance is not optimal, I am trying to find ... | 0 | 1 | 3,774 |
0 | 57,588,916 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-21T09:05:00.000 | 3 | 1 | 0 | What is the difference between math.isnan ,numpy.isnan and pandas.isnull in python 3? | 57,588,107 | 0.53705 | python-3.x,pandas,numpy | The only difference between math.isnan and numpy.isnan is that
numpy.isnan can handle lists, arrays, tuples whereas
math.isnan can ONLY handle single integers or floats.
However, I suggest using math.isnan when you just want to check if a number is nan because
numpy takes approximately 15MB of memory when importi... | A NaN of type decimal.Decimal causes:
math.isnan to return True
numpy.isnan to throw a TypeError exception.
pandas.isnull to return False
What is the difference between math.isnan, numpy.isnan and pandas.isnull? | 0 | 1 | 1,748 |
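A minimal sketch reproducing the three behaviors listed above with decimal.Decimal("nan"):

```python
import math
from decimal import Decimal

import numpy as np
import pandas as pd

d = Decimal("nan")

print(math.isnan(d))   # True  -- a Decimal NaN converts cleanly to a float NaN
print(pd.isnull(d))    # False -- a Decimal NaN is not one of pandas' null values
try:
    np.isnan(d)        # TypeError -- the ufunc cannot coerce a Decimal object
except TypeError as exc:
    print(exc)
```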
0 | 57,593,244 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2019-08-21T12:09:00.000 | 7 | 1 | 0 | Combination of GridSearchCV's refit and scorer unclear | 57,591,311 | 1.2 | python,scikit-learn,grid-search | When refit=True, sklearn uses the entire training set to refit the model. So there is no test data left to estimate the performance using any scorer function.
If you use multiple scorers in GridSearchCV, say f1_score or precision along with your balanced_accuracy, sklearn needs to know which one of those scorers to use to... | I use GridSearchCV to find the best parameters in the inner loop of my nested cross-validation. The 'inner winner' is found using GridSearchCV(scorer='balanced_accuracy'), so as I understand the documentation, the model with the highest balanced accuracy on average in the inner folds is the 'best_estimator'. I don't und... | 0 | 1 | 3,919 |
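A minimal sketch of the multi-scorer case the answer alludes to: with a dict of scorers, refit must name the one that picks the winner (the dataset and grid below are toy placeholders).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(random_state=0)

grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10]},
    scoring={"bal_acc": "balanced_accuracy", "f1": "f1"},
    refit="bal_acc",  # which scorer selects best_estimator_ before refitting
)
grid.fit(X, y)
print(grid.best_params_)
```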
0 | 57,599,130 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-21T12:38:00.000 | 0 | 1 | 0 | Logic for creating mutually exclusive groups of individuals (clusters) | 57,591,831 | 0 | python-3.x,logic,cluster-analysis | Solve it the opposite way.
Rather than trying all combinations and then checking which conflict, first find all conflicts.
So if a record is in groups A, B, and O, then mark AB, AO, and BO as incompatible. When going through combinations, you can easily check that adding B is impossible if you chose to use A etc. | I have survey data with about 100 columns for every individual. Based on certain criteria, for eg. a column contains info of whether a person reads comics and another column contains info of whether a person reads comics.
I want to validate if the user has created clusters/groups that are mutually exclusive.
eg. Group ... | 0 | 1 | 44 |
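A minimal sketch of that conflict-first approach with hypothetical membership data: collect every pair of groups that shares a record; any pair not collected is mutually exclusive.

```python
from itertools import combinations

# Hypothetical: record id -> set of groups the user assigned it to
memberships = {1: {"A", "B", "O"}, 2: {"A"}, 3: {"C", "O"}}

conflicts = set()
for groups in memberships.values():
    conflicts.update(frozenset(p) for p in combinations(sorted(groups), 2))

# Groups appearing together in `conflicts` share a record, so they are NOT exclusive
print(sorted(tuple(sorted(p)) for p in conflicts))
```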