GUI and Desktop Applications int64 0 1 | A_Id int64 5.3k 72.5M | Networking and APIs int64 0 1 | Python Basics and Environment int64 0 1 | Other int64 0 1 | Database and SQL int64 0 1 | Available Count int64 1 13 | is_accepted bool 2 classes | Q_Score int64 0 1.72k | CreationDate stringlengths 23 23 | Users Score int64 -11 327 | AnswerCount int64 1 31 | System Administration and DevOps int64 0 1 | Title stringlengths 15 149 | Q_Id int64 5.14k 60M | Score float64 -1 1.2 | Tags stringlengths 6 90 | Answer stringlengths 18 5.54k | Question stringlengths 49 9.42k | Web Development int64 0 1 | Data Science and Machine Learning int64 1 1 | ViewCount int64 7 3.27M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 55,716,394 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-04-16T14:48:00.000 | 2 | 2 | 0 | Dealing with new words in gensim not found in model | 55,710,967 | 1.2 | python,nlp,gensim | Depending on the context, Gensim will usually either ignore unknown words, or throw an error like KeyError when an exact-word lookup fails. (Also, some word-vector models, like FastText, can synthesize better-than-nothing guesswork vectors for unknown words based on word-fragments observed during training.)
You should ... | Let's say I am trying to compute the average distance between a word and a document using distances() or compute the cosine similarity between two documents using n_similarity(). However, let's say these new documents contain words that the original model did not. How does gensim deal with that?
I have been reading through t... | 0 | 1 | 179 |
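A minimal sketch of the ignore-unknown-words behavior described above, assuming a trained Word2Vec model loaded as model and two hypothetical token lists doc1_tokens and doc2_tokens: filtering out-of-vocabulary words before calling n_similarity() avoids the KeyError from exact-word lookups.
```python
# Keep only words the model learned during training (token lists are assumed names).
known1 = [w for w in doc1_tokens if w in model.wv]
known2 = [w for w in doc2_tokens if w in model.wv]
if known1 and known2:
    sim = model.wv.n_similarity(known1, known2)  # cosine similarity of the mean vectors
```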
0 | 55,712,434 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-16T15:57:00.000 | 0 | 1 | 0 | Linear algebra with large, sparse matrices | 55,712,254 | 0 | python,scipy,regression,sparse-matrix,least-squares | You could use numpy.linalg.pinv to find the "x" values. | I want to solve the linear equation Ax = b, for the unknown matrix x. A and b are both large and sparse, and have shapes (when converted to dense) of 30,000 x 25 and 30,000 x 100,000, respectively.
I have tried using both scipy.sparse.linalg.lsqr and scipy.sparse.linalg.lsmr, but they both require that b be dense, whic... | 0 | 1 | 222 |
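A sketch of the pinv idea under the shapes given in the question (A sparse 30,000 x 25, b sparse 30,000 x 100,000): since A is tall and skinny, its pseudoinverse is cheap to compute densely, and multiplying against b in column chunks avoids ever densifying the full right-hand side. The chunk size is an arbitrary assumption.
```python
import numpy as np

A_pinv = np.linalg.pinv(A.toarray())                 # (25, 30000); small enough to densify
chunks = []
for j in range(0, b.shape[1], 1000):                 # densify 1000 columns of b at a time
    chunks.append(A_pinv @ b[:, j:j + 1000].toarray())
x = np.hstack(chunks)                                # least-squares solution, shape (25, 100000)
```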
0 | 55,725,242 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-16T16:04:00.000 | 0 | 1 | 0 | Operating on large number of dataframes | 55,712,366 | 0 | python,pandas,bigdata,data-science | If all of your data has the same shape, then I don't see the point of using lists of pandas DataFrames for this.
The best performance you could get out of Python with the least work is just stacking the dataframes into a 3D NumPy array of dimensions (3000, 3000, 5000) and then doing a sum over the last axis.
As this r... | I have a large number of pandas dataframes (> 5000) of shape 3000x3000 with float values at a density of 60% (i.e. 40% of the values are NaNs). These frames have identical indexes and columns.
I'd like to operate on these frames, e.g. the addition of all of them. If I do this sequentially, it takes more than 20 mins. Is there an efficient way... | 0 | 1 | 88 |
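A sketch of the summing idea, assuming the frames are held in a list called frames (an assumed name); a running NaN-aware sum keeps memory flat, since a full (5000, 3000, 3000) float64 stack would not fit in RAM.
```python
import numpy as np
import pandas as pd

total = np.zeros(frames[0].shape)
for df in frames:
    total += np.nan_to_num(df.to_numpy())   # treat NaNs as 0 in the sum
result = pd.DataFrame(total, index=frames[0].index, columns=frames[0].columns)
```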
0 | 70,164,527 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-17T15:01:00.000 | 0 | 1 | 0 | pandas df to_csv freezes when using pyinstaller to save the df in the exe directory | 55,730,679 | 0 | python,pyinstaller | Look, if you use the Innodb compiler then you will face a permission-denied error in the setup. I tried to solve that by using a temporary file, but it gets deleted after generation. If you really want to solve this problem, use xlsxwriter and save to a specific file location. | I am trying to save the output dataframe to a csv file while using pyinstaller to create an exe, but my code freezes and generates an "[Errno 13] Permission denied: '.\Output.csv'" error. My question is: what is wrong with using df.to_csv to save the output file in the same directory as the exe?
Thanks in advance | 0 | 1 | 281 |
0 | 55,731,210 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-17T15:10:00.000 | 1 | 1 | 0 | 2D RGB image construction from 3D array in SimpleITK | 55,730,852 | 1.2 | python,image,simpleitk | The documentation reads:
Signature: sitk.GetImageFromArray(arr, isVector=None)
Docstring: Get a SimpleITK Image from a numpy array. If isVector is True, then the Image will have a Vector pixel type, and the last dimension of the array will be considered the component index. By default when isVector is None, 4D images ... | I have an RGB image in the format of a 3D array with the shape of (m, n, 3). I would like to create a SimpleITK image. Using the GetImageFromArray() function results in creation of an image in 3D which is not what I am looking for. How can I create a 2D RGB image instead? | 0 | 1 | 443 |
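A short illustration of the isVector flag from the docstring above (the array shape is a made-up example):
```python
import numpy as np
import SimpleITK as sitk

arr = np.zeros((256, 256, 3), dtype=np.uint8)      # (m, n, 3) RGB array
img = sitk.GetImageFromArray(arr, isVector=True)   # 2D image with 3-component pixels
print(img.GetDimension(), img.GetNumberOfComponentsPerPixel())  # -> 2 3
```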
0 | 56,049,754 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-04-18T06:31:00.000 | 0 | 2 | 0 | How to triangulate a point in 3D space, given coordinate points in 2 image and extrinsic values of the camera | 55,740,284 | 0 | python,numpy,triangulation,vision | Assume you have two cameras -- camera 1 and camera 2.
For each camera j = 1, 2 you are given:
The distance hj between its center Oj (is "focal point" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj, the... | I'm trying to write a function that when given two cameras, their rotation and translation matrices, focal point, and the coordinates of a point for each camera, will be able to triangulate the point into 3D space. Basically, given all the extrinsic/intrinsic values needed
I'm familiar with the general idea: to somehow cr... | 0 | 1 | 1,933 |
0 | 55,767,754 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-18T18:25:00.000 | 0 | 3 | 0 | How to train a trained model with new examples in scikit-learn? | 55,751,844 | 0 | python,python-3.x,machine-learning,scikit-learn | Append the new data to your existing dataset, and train over the whole thing. You might want to reserve some of the new data for your test set. | I'm working on a machine learning classification task in which I have trained many models with different algorithms in scikit-learn, and Random Forest Classifier performed the best. Now I want to train the model further with new examples, but if I train the same model by calling the fit method on new examples then it wil... | 0 | 1 | 845 |
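A sketch of the append-and-retrain approach, with X_old/y_old and X_new/y_new as hypothetical placeholders for the existing and new labeled examples:
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_all = np.concatenate([X_old, X_new])
y_all = np.concatenate([y_old, y_new])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_all, y_all)   # retrains from scratch on the combined dataset
```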
0 | 55,795,347 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-18T20:43:00.000 | 1 | 2 | 0 | How to specify gradient in Pyomo with IPOPT | 55,753,497 | 1.2 | python-3.x,pyomo,ipopt | Pyomo provides first and second derivative information using the automatic differentiation features in the Ampl Solver Library (ASL). When calling IPOPT, Pyomo outputs your model using the '.nl' file format which is read by the ASL and linked to IPOPT. So you don't have to do anything to provide gradient information, t... | Primary Question
When solving an NLP in Pyomo, using IPOPT as the solver, how can I tell IPOPT what the gradients of the objective function and/or constraints are? I have to pass a callable function that returns objective values; can I likewise pass a callable function that evaluates the gradient as well?
Secondary Quest... | 0 | 1 | 561 |
0 | 55,779,081 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-19T05:02:00.000 | 1 | 1 | 0 | Where to find a pretrained doc2vec model on Wikipedia or large article dataset like Google news? | 55,756,841 | 1.2 | python,nlp,gensim,word2vec,doc2vec | I'm not aware of any publicly-available standard gensim Doc2Vec models trained on Wikipedia. | I am struggling with training a Wikipedia dump on a doc2vec model; I'm not experienced in setting up a server, and a local machine is out of the question due to the RAM the training requires. I couldn't find a pre-trained model except outdated copies for Python 2. | 0 | 1 | 200 |
0 | 55,763,023 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-19T13:46:00.000 | 0 | 2 | 0 | input shape of convolutional neural network in keras | 55,762,873 | 0 | python,keras,classification,conv-neural-network | Make sure your image size is same as the size your Input layer is expecting. Classification architectures, in general, are not flexible to the spatial dimensions of your input. So, that is important. Otherwise you will get a shape mismatch error.
In case you want to change the input shape of your model, that is possibl... | I am trying to build an image classifier using a CNN. My images are of (256,256) pixel size.
What will happen if I train the CNN by setting the input shape to (64,64) or (128,128), since (256,256) will take a lot of time to process? | 0 | 1 | 443 |
0 | 55,762,953 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-19T13:46:00.000 | 0 | 2 | 0 | input shape of convolutional neural network in keras | 55,762,873 | 0 | python,keras,classification,conv-neural-network | It will throw an error. You can resize your images with cv2.resize(), or you can put the right input shape in your CNN layer and then add a max-pooling layer to reduce the number of parameters. | I am trying to build an image classifier using a CNN. My images are of (256,256) pixel size.
What will happen if I train the CNN by setting the input shape to (64,64) or (128,128), since (256,256) will take a lot of time to process? | 0 | 1 | 443 |
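A sketch of the resizing step mentioned in the second answer (the file name is hypothetical):
```python
import cv2

img = cv2.imread("photo.jpg")  # original 256x256 image
# INTER_AREA is a reasonable choice when shrinking images.
small = cv2.resize(img, (128, 128), interpolation=cv2.INTER_AREA)  # match the Input layer
```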
0 | 55,763,943 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-19T14:59:00.000 | 0 | 2 | 0 | How to round number to constant decimal | 55,763,795 | 0 | python | Try this if you really need the trailing zeros:
def format_floats(reference, values):
formatted_values = []
for i in range(len(values)):
length = len(str(reference)[str(reference).find("."):])-1
new_float = str(round(values[i], length))
new_float += "0"*(len(str(reference))-len(new_float)... | I want to use "math.sqrt" and my output should have 4 decimals after the point, even for numbers like "4". Is there any function or way?
I used "round(num_sqrt, 4)" but it didn't work.
my input is like:
1
2
3
19
output must be:
1.0000
1.4142
1.7320
4.3588
and my output is:
1.0
1.4142
1.7320
4.3588 | 0 | 1 | 65 |
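The fixed number of decimals is a formatting question rather than a rounding one; a minimal sketch with an f-string keeps the trailing zeros (note that formatting rounds, so sqrt(3) prints as 1.7321 rather than the truncated 1.7320 the question shows):
```python
import math

for n in [1, 2, 3, 19]:
    print(f"{math.sqrt(n):.4f}")   # 1.0000, 1.4142, 1.7321, 4.3589
```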
0 | 55,764,936 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-04-19T16:25:00.000 | 0 | 1 | 1 | How can i install opencv in python3.7 on ubuntu? | 55,764,829 | 0 | python,opencv,ubuntu | Does python-3.7 -m pip install opencv-python work? You may have to change python-3.7 to whatever path/alias you use to open your own Python 3.7. | I have an Nvidia Jetson TX2 with the Orbitty shield on it.
I got it from a friend who worked on it last year. It came with Ubuntu 16.04. I updated everything on it and I installed the latest Python 3.7 and pip.
I tried checking the version of OpenCV to see what I have, but when I do import cv2 it gives me:
Traceback (mo... | 0 | 1 | 1,498 |
0 | 56,173,568 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-20T21:40:00.000 | 0 | 1 | 0 | Negative Feature Importance Value in CatBoost LossFunctionChange | 55,777,986 | 1.2 | python,machine-learning,catboost | A negative feature importance value means that the feature makes the loss go up. This means that your model is not getting good use out of this feature. This might mean that your model is underfit (not enough iterations, so it has not used the feature enough) or that the feature is not good and you can try removing it to improve ... | I am using CatBoost for a ranking task. I am using QueryRMSE as my loss function. I notice that for some features the feature importance values are negative and I don't know how to interpret them.
It says in the documentation, the i-th feature importance is calculated as the difference between loss(model with i-th feature ex... | 0 | 1 | 2,295 |
0 | 60,627,318 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-04-21T19:14:00.000 | 0 | 1 | 0 | Storing and fetching multiple stocks in Arctic Library | 55,785,898 | 0 | python,pandas,finance | Arctic supports a few different storage engines. The only one that will do what you're looking for is VersionStore. It keeps versions of data, so any update you make to the data will be versioned, and you can retrieve data by timestamp ranges and by version.
However it does not let you do a subsetting of stock like yo... | Looking for suggestions on how to store Price data using MAN AHL's Arctic Library for 5000 stocks EOD data as well as 1 minute data. Separate solutions for EOD and 1-minute data are also welcome. Once the data is stored, I want to perform the following operations:
Fetch data for a subset of stocks (lets say around 500... | 1 | 1 | 105 |
0 | 57,091,975 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2019-04-21T19:40:00.000 | 6 | 2 | 0 | What is the negative mean absolute error in scikit-learn? | 55,786,121 | 1 | python,machine-learning,scikit-learn,regression | I would like to add here that this negative error is also helpful in finding the best algorithm when you are comparing multiple algorithms through GridSearchCV().
This is because after training, GridSearchCV() ranks all the algorithms (estimators) and tells you which one is the best. Now when you use an error function, est... | I am trying to train a model using SciKit Learn's SVM module. For the scoring, I could not find the mean_absolute_error (MAE); however, negative_mean_absolute_error (NMAE) does exist. What is the difference between these 2 metrics? Let's say I get the following results for 2 models:
model 1 (NMAE = -2.6), model 2 (NMAE = -... | 0 | 1 | 9,738 |
0 | 55,799,292 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-22T18:28:00.000 | 0 | 1 | 0 | Interpolating missing data in cumulative timeseries data | 55,799,207 | 0 | python,pandas,interpolation,nan | Answer taken from @Quang Hoang above, with ffill(). | I have a time-series dataframe with a cumulative data column. Data drops out at night-time, leaving me with NaN values, and picks up with the first data read in the morning.
I would like to interpolate the data so that all NaN values take on the value of the last known float/valid number. Is this readily possible with .interpo... | 0 | 1 | 40 |
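A minimal illustration of the ffill() fix credited above:
```python
import pandas as pd

s = pd.Series([1.0, 2.5, None, None, 4.0])
print(s.ffill().tolist())   # [1.0, 2.5, 2.5, 2.5, 4.0]; NaNs carry the last valid value
```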
0 | 55,799,321 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-22T18:29:00.000 | 3 | 2 | 0 | Do we have to remove target variable from data in Scikit-learn's linearmodel.fit()? | 55,799,226 | 1.2 | python,scikit-learn,linear-regression | The X should not contain the target as one of the columns. If you include it, your linear model will produce no coding errors, but to predict the target y it will just use the feature y. | Scikit-learn's documentation says there are two arguments to the function: X (data) and y (target values). Do we remove the target variable from our data and provide it separately as y? Or do we keep the target variable in X and also provide it separately as y? I have come across both approaches and was wondering which was c... | 0 | 1 | 443 |
0 | 55,801,233 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-22T18:29:00.000 | 1 | 2 | 0 | Do we have to remove target variable from data in Scikit-learn's linearmodel.fit()? | 55,799,226 | 0.099668 | python,scikit-learn,linear-regression | To my understanding, you shouldn't predict tomorrow's weather by tomorrow's weather. If you already know the correct value, it is pointless to predict it.
However, you don't need to remove the target variable from your dataset either; just don't include it in your X.
What we are trying to do with a predictive mod... | Scikit-learn's documentation says there are two arguments to the function: X(data) and y(Target Values). Do we remove the target variable from our data and provide it separately as y? Or do we keep target variable in X and also provide it separately as y? I have come across both approaches and was wondering which was c... | 0 | 1 | 443 |
0 | 55,821,709 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-23T04:00:00.000 | 0 | 1 | 0 | cv2 - multi-user image display | 55,804,120 | 0 | python,cv2,multi-user | I was able to display the images on another user/host by setting the DISPLAY environment variable of the X server to match the desired user's DISPLAY. | Using Python and OpenCV, is it possible to display the same image to multiple users?
I am using cv2.imshow, but it only displays the image for the user that runs the code.
Thanks | 0 | 1 | 46 |
0 | 55,825,408 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-24T04:04:00.000 | 0 | 1 | 1 | How to run any Transformation Logic on HDFS data which is at Remote PC | 55,822,254 | 0 | python,apache-spark,hdfs,remote-access | I would suggest setting up a Spark Cluster in the same local network where you have the data and running spark transformations in the cluster remotely (SSH or Remote Desktop). The advantages of the setup are:
Network latency will be minimised as the data is transferred locally within the same network.
Running the transfor... | I have huge size data (in TBs or PBs) in my HDFS which is located at remote PC. Now instead of taking Data to the Transformation Logic (which is not correct and efficient), I want to run my python Transformation Logic itself on the location where my Data is stored.
Seeking some useful ideas about the technologies which... | 0 | 1 | 42 |
0 | 55,839,963 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 0 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0 | python-3.x,deep-learning,conv-neural-network | There is no such code because the quantity of FLOPs is dependent on the hardware and software implementations. You can certainly derive a typical quantity by expanding the layer-by-layer operations for each parameter and weight, making reasonable implementation assumptions for each activation function.
Input dim... | I have a neural network (AlexNet or VGG16) written with Keras for image classification and I would like to calculate the number of floating point operations for the network. The size of the images in the dataset could vary.
Can generalized code be written in Python to calculate FLOPs automatically? Or is there... | 0 | 1 | 3,688 |
0 | 64,795,470 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 2 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0.132549 | python-3.x,deep-learning,conv-neural-network | FLOPs are the floating-point operations performed by a model. It is usually calculated using the number of multiply-add operations that a model performs. Multiply-add operations, as the name suggests, are operations involving multiplication and addition of 2 or more variables. For example, the expression, a * b + c * d... | I have a neural network (AlexNet or VGG16) written with Keras for image classification and I would like to calculate the number of floating point operations for the network. The size of the images in the dataset could vary.
Can generalized code be written in Python to calculate FLOPs automatically? Or is there... | 0 | 1 | 3,688 |
0 | 66,452,856 | 0 | 0 | 0 | 0 | 3 | false | 2 | 2019-04-24T13:28:00.000 | 0 | 3 | 0 | Calculating the Number of flops for a given Neural Network? | 55,831,235 | 0 | python-3.x,deep-learning,conv-neural-network | Many papers use their own FLOPs-counting code.
It works by entering the input size of each operation's tensor,
so they manually calculate its FLOPs.
You can find such code with keywords like 'flops constraint' or 'flops counter' on GitHub.
There is also the 'torchstat' tool, which counts FLOPs, memory usage, etc. | I have a neural network (AlexNet or VGG16) written with Keras for image classification and I would like to calculate the number of floating point operations for the network. The size of the images in the dataset could vary.
Can generalized code be written in Python to calculate FLOPs automatically? Or is there... | 0 | 1 | 3,688 |
0 | 55,832,347 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-24T14:00:00.000 | 1 | 2 | 0 | Multiply each element of a column by each element of a different dataframe | 55,831,908 | 1.2 | python,data-manipulation | This will work. Here we are manipulating the NumPy arrays inside the DataFrames.
pd.DataFrame(df1.values*df2.values, columns=df1.columns, index=df1.index) | I have two data frames, both having the same number of columns, but the first data frame has multiple rows while the second has only one row with the same columns as the first. I need to multiply the entries of the first data frame by those of the second, matched by column name.
DF:1
A B C
0 34 54 56
1 12 87 78
2 78 ... | 0 | 1 | 45 |
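A runnable version of the accepted answer using the question's first rows (df2's values are made up for illustration); NumPy broadcasting aligns df2's single row against every row of df1:
```python
import pandas as pd

df1 = pd.DataFrame([[34, 54, 56], [12, 87, 78]], columns=list("ABC"))
df2 = pd.DataFrame([[2, 3, 4]], columns=list("ABC"))   # single-row frame (made-up values)

out = pd.DataFrame(df1.values * df2.values, columns=df1.columns, index=df1.index)
# Equivalent, staying in pandas: df1.mul(df2.iloc[0], axis=1)
```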
0 | 55,847,433 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-25T10:13:00.000 | 2 | 2 | 0 | neural network find best hyperparameters or architecture first | 55,846,870 | 0.197375 | python,tensorflow,neural-network | First you should decide on an architecture and then play around with the hyperparameters. To compare different hyperparameters it is important to have the same base (architecture).
Of course you can also play around with the architecture (layers, nodes, ...). But I think here it is easier to search for an architecture ... | I'm implementing my first neural network for image classification.
I would like to know if I should find the best hyperparameters first and then try to modify my neural network architecture (e.g. number of layers, dropout, ...), or the architecture first and then the hyperparameters? | 0 | 1 | 80 |
0 | 55,852,346 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-25T10:13:00.000 | 1 | 2 | 0 | neural network find best hyperparameters or architecture first | 55,846,870 | 1.2 | python,tensorflow,neural-network | The answer is, as always: it depends.
What are you trying to achieve?
If you're hoping to make the world's best image classifier by trial and error, then you might want to ask yourself whether you think you have more compute available than the people who have already done this. For a really good classifier there are several on... | I'm implementing my first neural network for image classification.
I would like to know if I should find the best hyperparameters first and then try to modify my neural network architecture (e.g. number of layers, dropout, ...), or the architecture first and then the hyperparameters? | 0 | 1 | 80 |
0 | 55,848,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T11:27:00.000 | 0 | 1 | 0 | How to reshape spatiotemporal data as lstm input? | 55,848,195 | 1.2 | python,pandas,numpy,lstm | You cannot easily use reshape when you have a different number of temporal steps for each example. What you typically do with LSTMs is that you have batches of examples and each batch is padded to the same length, usually with zeros. Use np.zeros(shape) and then iteratively assign to respective rows. | I have a dataset with columns like ['station_id', 'feature1', 'feature2',...]
Each row is a time step, and the data is sorted by station_id.
The main problem is that the station_ids have different numbers of timesteps ...
I want to shape it for an LSTM layer, like (NumberOfExamples, TimeSteps, FeaturesPerStep).
Can someone help ... | 0 | 1 | 35 |
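A sketch of the zero-padding recipe from the answer, assuming the data lives in a DataFrame df with a station_id column (the names are assumptions):
```python
import numpy as np

groups = [g.drop(columns="station_id").to_numpy()
          for _, g in df.groupby("station_id")]      # one 2-D array per station
max_len = max(len(a) for a in groups)
n_feat = groups[0].shape[1]

batch = np.zeros((len(groups), max_len, n_feat))     # (examples, timesteps, features)
for i, arr in enumerate(groups):
    batch[i, :len(arr), :] = arr                     # left-align, zero-pad the rest
```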
0 | 70,753,506 | 0 | 0 | 0 | 0 | 2 | false | 8 | 2019-04-25T12:39:00.000 | 1 | 2 | 0 | Gridsearchcv vs Bayesian optimization | 55,849,512 | 0.099668 | python-3.x,machine-learning,gridsearchcv | Grid search is known to be worse than random search for optimizing hyperparameters [1], both in theory and in practice. Never use grid search unless you are optimizing one parameter only.
On the other hand, Bayesian optimization is stated to outperform random search on various problems, also for optimizing hyperparamet... | Which one among Gridsearchcv and Bayesian optimization works better for optimizing hyper parameters? | 0 | 1 | 2,664 |
0 | 55,850,059 | 0 | 0 | 0 | 0 | 2 | true | 8 | 2019-04-25T12:39:00.000 | 15 | 2 | 0 | Gridsearchcv vs Bayesian optimization | 55,849,512 | 1.2 | python-3.x,machine-learning,gridsearchcv | There is no "better" here; they are different approaches.
In Grid Search you try all the possible hyperparameters combinations within some ranges.
In Bayesian optimization you don't try all the combinations; you search along the space of hyperparameters, learning as you try them. This makes it possible to avoid trying ALL the combinations.
So... | Which one among Gridsearchcv and Bayesian optimization works better for optimizing hyper parameters? | 0 | 1 | 2,664 |
0 | 55,851,082 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T13:56:00.000 | 2 | 2 | 0 | Should I scale a percentage variable? | 55,851,021 | 1.2 | python,machine-learning,neural-network | I think it is not necessary. If the variables that are percentages are between 0 and 1, you don't need to scale them because they are already scaled. | I have a dataframe containing variables of different scales (age, income, days as a customer, percentage spent on each kind of product sold (values from 0 to 1), etc). I believe it's necessary to scale these variables for use in a neural network algorithm, for example.
My question is: The variables that are in percent... | 0 | 1 | 1,082 |
0 | 56,388,008 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-25T14:54:00.000 | 0 | 1 | 0 | How to ensure static shapes in Tensorflow model for easy OpenVINO conversion? | 55,852,212 | 0 | python,tensorflow,speech,openvino | You can have dynamic shapes in the TF model and provide a static shape while converting the model with the Model Optimizer. Example for input data of size 256x256 with 3 channels:
python mo_tf.py --input_shape [1,256,256,3] --input_model model.pb | I'm trying to optimize and convert a tensorflow model to OpenVINO IR. It hasn't been very successful because of the problems I'm facing with input shapes. So I'm planning to remodel the whole model with static shapes. The model I'm trying to work on is Tacotron by keithito.
How do I ensure all the nodes in my model wil... | 0 | 1 | 342 |
0 | 55,852,914 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-04-25T15:12:00.000 | 0 | 1 | 0 | How to improve the write speed to sql database using python | 55,852,550 | 0 | python,python-3.x,pandas,sqlalchemy,pyodbc | If you are trying to insert the csv as is into the database (i.e. without doing any processing in pandas), you could use sqlalchemy in python to execute a "BULK INSERT [params, file, etc.]". Alternatively, I've found that reading the csvs, processing, writing to csv, and then bulk inserting can be an option.
Otherwise,... | I'm trying to find a better way to push data to a SQL db using Python. I have tried the
dataframe.to_sql() method and cursor.fast_executemany(),
but they don't seem to increase the speed with the data (in csv files) I'm working with right now. Someone suggested that I could use named tuples and generators to load... | 0 | 1 | 678 |
0 | 56,177,448 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-25T22:00:00.000 | 0 | 2 | 0 | Efficient way of flashing images in python | 55,858,199 | 1.2 | python,image,matplotlib,timer,frequency | Using a library called PsychoPy. It can guarantee that everything is drawn, and it allows you to control when a window (frame) is drawn with the window.flip() function. | What would be the most efficient way of flashing images in Python?
Currently I have an infinite while loop that calls sleep at the end, then uses matplotlib to display an image. However, I can't get matplotlib to replace the current image; I instead have to close and then show again, which is slow. I'd like to flash a sequence...
0 | 58,901,069 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-26T07:18:00.000 | 0 | 1 | 0 | numerical entity extraction from unstructured texts using python | 55,862,614 | 0 | python-3.x,nlp,named-entity-recognition | So far my research shows that you can treat numbers as words.
This raises an issue: learning 5 will be OK, but 19684 will be too rare to be learned.
One proposal is to convert numbers into words ("nineteen thousand six hundred eighty four") and embed each word. The inconvenience is that you are now learning a (minimum) 6 d... | I want to extract numerical entities like temperature and duration mentioned in unstructured text using neural models like CRF, in Python. I would like to know how to proceed with numerical extraction, as most of the examples available on the internet are for extracting specific words or strings.
Input: 'F... | 0 | 1 | 110 |
0 | 55,874,689 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-26T20:21:00.000 | 1 | 1 | 0 | Does Google-Colab continue running the script when "Runtime disconnected"? | 55,874,473 | 1.2 | python,neural-network,pytorch,google-colaboratory | Yes, for ~1.5 hours after you close the browser window.
To keep things running longer, you'll need an active tab. | I am training a neural network for Neural Machine Translation on Google Colaboratory. I know that the limit before disconnection is 12 hrs, but I am frequently disconnected before that (4 or 6 hrs). The amount of time required for the training is more than 12 hrs, so I save every 5000 epochs.
I don't understand if... | 0 | 1 | 6,010 |
0 | 55,881,586 | 0 | 0 | 0 | 0 | 1 | false | -1 | 2019-04-27T14:24:00.000 | 0 | 2 | 0 | select custom columns by position pandas | 55,881,531 | -0.099668 | python,pandas | Use this syntax: data.iloc[:, [0,1,20,22]]
where 0, 1, 20 and 22 are the column indices. | I have a dataset with a number of columns. I need to select some columns by their position. For example, I want to select columns 0, 3, 6, 7, 15 (by position) from the dataset. I tried using iloc, but it seems it only applies to a range of positions (I may be wrong?). Are there any better ideas? | 0 | 1 | 54 |
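Applied to the question's own positions, a one-line sketch (df is the asker's dataset):
```python
subset = df.iloc[:, [0, 3, 6, 7, 15]]   # all rows, exactly those column positions
```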
0 | 65,195,428 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-04-27T17:00:00.000 | 1 | 2 | 0 | How does predict_proba in sklearn produce two columns? What is their significance? | 55,882,873 | 0.099668 | python,machine-learning,scikit-learn,precision-recall | We can tell the columns apart using the classifier's classes: if the classifier is named model, then model.classes_ will give the distinct classes. | I was using simple logistic regression to predict a problem and trying to plot the precision_recall_curve and the roc_curve with predict_proba(X_test). I checked the docstring of predict_proba, but it didn't have much detail on how it works. I was getting bad input every time and checked that y_test, predict_proba(X_test) ...
0 | 55,891,000 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2019-04-28T13:41:00.000 | 7 | 1 | 0 | Setting a random seed on TF 2.0 | 55,890,834 | 1.2 | python,tensorflow | Found it: tf.random.set_seed is what I was looking for | I have just upgraded from TF 1.13 to TF 2.0, and my interpreter is complaining because tf.set_random_seed no longer exists.
What is the equivalent functionality in TF 2.0? | 0 | 1 | 1,162 |
0 | 55,892,936 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-28T17:24:00.000 | 1 | 1 | 0 | How exactly does matrix multiplication of 3d kernel and 3d image ( Say RGB) takes place to give 2d output? | 55,892,792 | 0.197375 | python,arrays,matrix,conv-neural-network,convolution | You have multiple questions in one. I will answer the one about how the convolution takes place. Short answer: it is not a matrix multiplication.
Step 1) You slide a window of size (5,5,3) over your RGB image, carving out subimages of that size. Incidentally, these subimages have exactly the same dimension as that of t... | I have been studying convolutional neural network architecture. I am horrendously confused about the part where a 3D kernel acts upon the 3D input image (well, it's 4D given we have a stack of those images, but let's keep the explanation simple). I know the internet is full of stuff like this, but I can't find an exact answer...
0 | 55,895,064 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-28T21:46:00.000 | 0 | 3 | 0 | Trouble importing numpy | 55,894,852 | 0 | python,numpy,import | The easiest way to download and manage modules is using Python's built-in pip command.
pip install numpy will install numpy in your site-packages directory, where it needs to be to use it with Python 2
pip3 install numpy will do the same thing for Python 3 | I want to import numpy. I do not have it as a module so I attempted to download it as a .whl file. I successfully downloaded it to my computer but am having trouble with installing into python 3.7.
I know I have to install numpy onto my computer and then Python. I downloaded the .whl file but am having trouble trans... | 0 | 1 | 1,651 |
0 | 55,904,338 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-04-29T12:55:00.000 | 6 | 1 | 0 | Keras / NN - Handling NaN, missing input | 55,903,882 | 1.2 | python,machine-learning,keras,neural-network | You need to have the same input size during training and inference. If you have a few missing values (a few %), you can always choose to replace the missing values by a 0 or by the average of the column. If you have more missing values (more than 50%) you are probably better off ignoring the column completely. Note tha... | These days i'm trying to teach myself machine learning and i'm going though some issues with my dataset.
Some of my rows (i work with csv files that i create with some js script, i feel more confident doing that in js) are empty wich is normal as i'm trying to build some guessing model but the issue is that it results ... | 0 | 1 | 3,144 |
0 | 55,906,967 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-04-29T15:58:00.000 | 2 | 2 | 0 | Trouble installing 'matplotlib' | 55,906,943 | 1.2 | python,python-3.x | Don't use pip install matplotlib.pyplot, use pip install matplotlib
matplotlib.pyplot is calling pyplot from the module matplotlib. What you want is the module, matplotlib. Then from IDLE or wherever you are running this, you can import matplotlib.pyplot | Can't install 'matplotlib.pyplot' on Windows 10, Python 3.7
I tried 'pip install matplotlib.pyplot' and received an error
Here's the exact error code:
Could not find a version that satisfies the requirement matplotlib.pyplot (from versions: )
No matching distribution found for matplotlib.pyplot | 0 | 1 | 699 |
0 | 55,906,978 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-04-29T15:58:00.000 | 0 | 2 | 0 | Trouble installing 'matplotlib' | 55,906,943 | 0 | python,python-3.x | Try just using pip install on your command line (or terminal):
'pip install matplotlib'
I hope it helps.
BR | Can't install 'matplotlib.pyplot' on Windows 10, Python 3.7
I tried 'pip install matplotlib.pyplot' and received an error
Here's the exact error code:
Could not find a version that satisfies the requirement matplotlib.pyplot (from versions: )
No matching distribution found for matplotlib.pyplot | 0 | 1 | 699 |
0 | 55,912,036 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-29T23:23:00.000 | 0 | 2 | 0 | How can I find the sum of multiple groups within a series? | 55,912,004 | 0 | python,python-3.x,pandas,dataframe,series | You can use the rolling method on a Series:
serie.rolling(24).sum()
To get the max directly:
max_idx = serie.rolling(24).sum().idxmax()
Your range of interest is [max_idx-24+1 : max_idx] (from index max_idx - 24 + 1 to index max_idx, both included), so be careful if you want to retrieve these elements; with .loc should b... | I have a short series of ~60 values. What I need to do is find the largest sum of 24 consecutive values in the series.
e.g. I would need to be able to find the sums of the groups [0:23],[1:24],[2:25],[3:26], ... , [37:60] and determine which group has the largest sum. | 0 | 1 | 38 |
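Putting the answer's pieces together into one sketch (this assumes the default integer RangeIndex, as the answer cautions; serie is the answer's own variable name):
```python
window_sums = serie.rolling(24).sum()        # each sum is labeled by its window's last index
end = window_sums.idxmax()
best_window = serie.loc[end - 24 + 1 : end]  # .loc includes both endpoints
```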
0 | 55,927,699 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-04-30T15:36:00.000 | 1 | 2 | 0 | Doc2Vec - Finding document similarity in test data | 55,924,378 | 1.2 | python,machine-learning,gensim,doc2vec | The act of training-up a Doc2Vec model leaves it with a record of the doc-vectors learned from the training data, and yes, most_similar() just looks among those vectors.
Generally, doing any operations on new documents that weren't part of training will require the use of infer_vector(). Note that such inference:
ign... | I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I am currently using model.docvecs.most_similar(...). However, this function o... | 0 | 1 | 1,786 |
0 | 55,924,682 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-04-30T15:36:00.000 | 0 | 2 | 0 | Doc2Vec - Finding document similarity in test data | 55,924,378 | 0 | python,machine-learning,gensim,doc2vec | It turns out there is a function called similarity_unseen_docs(...) which can be used to find the similarity of 2 documents in the test data.
However, I will leave the question unsolved for now, as it is not very optimal since I would need to manually compare the specific document with every other document in the test dat... | I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data to a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I am currently using model.docvecs.most_similar(...). However, this function o... | 0 | 1 | 1,786 |
0 | 55,932,064 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-01T04:38:00.000 | 0 | 1 | 0 | Text sequence to integer with many integer classes in Keras | 55,931,671 | 1.2 | python,tensorflow,machine-learning,keras,neural-network | Maybe you can formulate it as a sequence prediction problem using an RNN, or as a regression problem with N-digit output nodes. | I am getting strings, e.g. "one hundred twenty three" or "nine hundred ninety nine", and encoding them into sequences of word tokens of length 4 using the Keras text preprocessing tokenizer, using that as my input with 4 nodes, and having many integer classes as my output, e.g. 0 1 2 ... 1000, with 1001 output nodes with a... | 0 | 1 | 91 |
0 | 55,941,809 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-01T19:02:00.000 | -2 | 2 | 0 | Scipy ImportError: No module named transform | 55,941,290 | 1.2 | python,scipy,scipy-spatial | This works on Python 3.7.3 with a recent SciPy; scipy.spatial.transform was only added in SciPy 1.2, so it does not exist in 0.19.1. Make sure a new enough SciPy is available in your Python 2.7 environment. | I'm using a Python script within ROS. ROS uses Python 2.7, and the version of SciPy that I'm using is 0.19.1.
The following error is reported:
from scipy.spatial.transform import Rotation as R
ImportError: No module named transform | 0 | 1 | 11,819 |
0 | 55,953,034 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-02T10:56:00.000 | 0 | 1 | 0 | Convert Keras model output into sparse matrix without forloop | 55,950,909 | 0 | python,tensorflow,machine-learning,keras | Isn't there a batch_size parameter in predict()?
If I understand correctly, the n means the number of samples, right?
I assume your system RAM is enough to hold the entire data but the VRAM is not. | I have a pretrained Keras model that has output with a dimension of [n, 4000] (it classifies over 4000 classes).
I need to make a prediction on the test data (300K observations).
But when I call model.predict(X_train), I get an out-of-memory error, because I don't have enough RAM to store a matrix with s... | 0 | 1 | 261 |
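One workaround, sketched under the question's shapes: predict in chunks and sparsify each chunk so the full (300000, 4000) dense matrix never lives in RAM at once. The thresholding of tiny probabilities is an assumption (without it the output is not actually sparse); Keras predict() also accepts a batch_size argument for the per-call batching.
```python
import scipy.sparse as sp

chunks = []
for start in range(0, X_test.shape[0], 1000):
    probs = model.predict(X_test[start:start + 1000])
    probs[probs < 1e-4] = 0.0             # zero out tiny probabilities (assumed threshold)
    chunks.append(sp.csr_matrix(probs))
result = sp.vstack(chunks)                # sparse (300000, 4000) matrix
```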
0 | 55,957,245 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-02T16:32:00.000 | 2 | 2 | 0 | Can you use Python with MS Machine Learning in SQL SERVER 2016 | 55,956,728 | 0.197375 | python,machine-learning,sql-server-2016 | To add to @DMellon's answer: Java is supported in SQL 2019 and up. So:
SQL 2016: R
SQL 2017: R, Python
SQL 2019: R, Python, Java, more languages may come. | I want to use Microsoft's Machine Learning Services in SQL SERVER 2016, specifically to leverage Python, NOT R.
Is it possible? | 0 | 1 | 73 |
0 | 55,963,728 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-03T03:22:00.000 | 0 | 2 | 0 | $ in R... What is the equivalent in Python? | 55,962,891 | 0 | r,python-3.x | You can generally use pandas to mimic R. You can use [] as below.
my_column = df['columnName'] | Is there a way to refer to a specific column relative to a specific data frame in Python like there is in R (data.frame$data)? | 0 | 1 | 74 |
0 | 55,962,898 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-05-03T03:22:00.000 | 1 | 2 | 0 | $ in R... What is the equivalent in Python? | 55,962,891 | 1.2 | r,python-3.x | Usually with [] => data.frame["data"]
Or, for attribute-style access, with . => data.frame.data | Is there a way to refer to a specific column relative to a specific data frame in Python like there is in R (data.frame$data)? | 0 | 1 | 74 |
0 | 55,970,210 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-03T12:05:00.000 | 0 | 1 | 0 | Best way to scale across different datasets | 55,969,460 | 0 | python,scikit-learn,neural-network,preprocessor,feature-scaling | One possible solution could be like this.
Normalize (pre-process) the dataset A such that the range of each feature is within a fixed interval, e.g., between [-1, 1].
Train your model on the normalized set A.
Whenever you are given a new dataset like B:
(3.1.) Normalize the new dataset such that the features have t... | I have come across a peculiar situation when preprocessing data.
Let's say I have a dataset A. I split the dataset into A_train and A_test. I fit a scaler on A_train (using any of the given scalers in scikit-learn) and transform A_test with that scaler. Now, training the neural network with A_train and validating on A_test works ... | 0 | 1 | 1,123 |
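A sketch of steps 1-3 with scikit-learn (the dataset variable names are assumptions): the scaler fitted on A_train is reused, unchanged, for both A_test and the new dataset B, and out-of-range values in B can be clipped back into the trained interval.
```python
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(-1, 1))
A_train_s = scaler.fit_transform(A_train)   # fit on the training split only
A_test_s = scaler.transform(A_test)
B_s = scaler.transform(B).clip(-1, 1)       # same scaler; clip values outside [-1, 1]
```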
0 | 61,309,357 | 0 | 0 | 0 | 0 | 2 | false | 35 | 2019-05-03T13:20:00.000 | 0 | 5 | 0 | Tensorboard not found as magic function in jupyter | 55,970,686 | 0 | python,tensorflow,tensorflow2.0,tensorboard,tensorflow2.x | Extension loading is required first. You can try %load_ext tensorboard. It worked for me. I am using TensorFlow 1.x. | I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0.
.It worked for me. I am using TensorFlow 1.> | I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0.
With tensorboard version 1.13.1 and Python 3.6.
using
...
%tensorboard --logdir {logs_base_dir}
I get the error:
UsageError: Line magic function %tensorboard not found
Do you have an idea what the problem could be? It seems that all vers... | 0 | 1 | 25,713 |
0 | 72,496,748 | 0 | 0 | 0 | 0 | 2 | false | 35 | 2019-05-03T13:20:00.000 | 0 | 5 | 0 | Tensorboard not found as magic function in jupyter | 55,970,686 | 0 | python,tensorflow,tensorflow2.0,tensorboard,tensorflow2.x | That's how I solved it:
%load_ext tensorboard
%tensorboard --logdir /content/drive/MyDrive/Dog\ Vision/logs
After --logdir, this is my path directory /content/drive/MyDrive/Dog\ Vision/logs. It should be different for you. | I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0.
With tensorboard version 1.13.1 and Python 3.6.
using
...
%tensorboard --logdir {logs_base_dir}
I get the error:
UsageError: Line magic function %tensorboard not found
Do you have an idea what the problem could be? It seems that all vers... | 0 | 1 | 25,713 |
0 | 55,977,698 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-03T22:13:00.000 | 0 | 1 | 0 | Visualizing a frozen graph_def.pb | 55,977,680 | 0 | python,tensorflow,tensorboard | You can try to use TensorBoard. It is on the TensorFlow website... | I am wondering how to go about visualizing my frozen graph def. I need it to figure out my TensorFlow network's input and output nodes. I have already tried several methods to no avail, like the summarize_graph tool. Does anyone have an answer for some things that I can try? I am open to clarifying questions; thank... | 0 | 1 | 221 |
0 | 55,978,184 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-03T22:33:00.000 | 0 | 1 | 0 | Need setup recommendation for parallel processing many contracts through many scenarios | 55,977,829 | 0 | python,pandas,parallel-processing,dask | From a simple conceptual perspective:
Write yourself a function that takes a contract and a scenario as parameters and performs the desired calculation
Use Python's multiprocessing to set up a worker pool
Create a Queue (from multiprocessing package) that is to be shared across workers
Fill the queue with all combinat... | I need a recommendation from gurus out there on how to go about setting up a modeling application. I have thousands of scenarios to run on thousands for contracts for cash flow projections. Assuming I have 1000 scenarios and 1000 contracts I would need to run 1,000,000 projections (1000x1000). I'd like to do this in pa... | 0 | 1 | 37 |
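A compact sketch of the same worker-pool idea using multiprocessing.Pool instead of a hand-rolled Queue (run_cashflow, contracts and scenarios are hypothetical names for the asker's projection function and inputs):
```python
from itertools import product
from multiprocessing import Pool

def project(args):
    contract, scenario = args
    return run_cashflow(contract, scenario)   # hypothetical projection function

if __name__ == "__main__":
    jobs = product(contracts, scenarios)      # all 1000 x 1000 pairs, generated lazily
    with Pool() as pool:                      # one worker per core by default
        results = pool.map(project, jobs, chunksize=100)
```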
0 | 58,553,910 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 0 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | I faced this issue for a long time when I was running W2V models, which require 'gensim'.
First of all, I installed Anaconda Navigator and then installed the required packages using pip.
I installed gensim manually using pip in cmd. When I ran the W2V model, it took 40 min to train and give the result, which made m... | I'm using the library node2vec, which is based on the gensim word2vec model, to encode nodes in an embedding space, but when I want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Inst... | 0 | 1 | 8,850 |
0 | 56,800,565 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 1 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0.049958 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | anaconda prompt
conda update conda-build
==
windows 7 (32bit)
python 3.7.3
conda-build 3.18.5
gensim 3.4.0 | I'm using the library node2vec, which is based on the gensim word2vec model, to encode nodes in an embedding space, but when I want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Inst... | 0 | 1 | 8,850 |
0 | 56,666,561 | 0 | 0 | 0 | 0 | 3 | false | 5 | 2019-05-03T23:01:00.000 | 1 | 4 | 0 | How to fix 'C extension not loaded, training will be slow. Install a C compiler and reinstall gensim for fast training.' | 55,978,013 | 0.049958 | python-3.x,jupyter-notebook,anaconda,gensim,word2vec | For me, downgrading from Gensim version 3.7.3 back to 3.7.1 worked. | I'm using the library node2vec, which is based on the gensim word2vec model, to encode nodes in an embedding space, but when I want to fit the word2vec object I get this warning:
C:\Users\lenovo\Anaconda3\lib\site-packages\gensim\models\base_any2vec.py:743:
UserWarning: C extension not loaded, training will be slow. Inst... | 0 | 1 | 8,850 |
0 | 61,169,238 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-05-04T07:45:00.000 | 7 | 1 | 0 | Does Google Colab use my internet traffic while downloading a dataset or importing a new package into colab notebook? | 55,980,568 | 1.2 | python,python-3.x,google-colaboratory,python-module | It happens on Google cloud servers and your internet connection is used only to run the code.
I tried downloading some huge dataset using wget and my internet data wasn't affected by it. | For example, when importing CIFAR-10 from Keras (using from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data())
or temporarily installing a package like HAZM (Persian form of NLTK) using !pip install hazm which is not pre-installed on Google Colab, the cell containing the import st... | 0 | 1 | 2,643 |
0 | 55,984,404 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-05-04T12:35:00.000 | 0 | 1 | 0 | previous steps before calculating disparity? Is rectification needed? | 55,982,564 | 1.2 | python-3.x,opencv,stereo-3d,disparity-mapping | Yes, disparity needs rectified images. Since the stereo matching is done along epipolar lines, rectified images ensure that all the distortions are rectified and hence the algorithm can search for blocks along a straight line. At a basic level you can try out StereoBM, provided by OpenCV, using the rectified stereo image pair.... | I want to do stereo vision and finally find the real distance to the objects from the cameras. I have done image rectification. Now I want to calculate disparity. My question is: to calculate disparity, do I need to rectify the images first? Thank you! | 0 | 1 | 165 |
0 | 55,993,149 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-05T00:30:00.000 | 0 | 2 | 0 | Pandas read_csv method can't get 'œ' character properly while using encoding ISO 8859-15 | 55,987,923 | 0 | python-3.x,pandas,encoding | Anyone have a clue? I've managed the problem by manually rewriting this special character before reading my csv with pandas, but that doesn't answer my question :( | I have some trouble reading a csv file with pandas which includes the special character 'œ'.
I've done some research, and it appears that this character has been added to the ISO 8859-15 encoding standard.
I've tried specifying this encoding standard to the pandas read_csv method, but it doesn't properly get this special ... | 0 | 1 | 167 |
0 | 56,062,610 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-05T08:46:00.000 | 1 | 2 | 0 | additional of features decrease the accuracy- random forest | 55,990,255 | 0.099668 | python,machine-learning,random-forest | Basically, you may be "confusing" your model with useless features. MORE FEATURES or MORE DATA WILL NOT ALWAYS MAKE YOUR MODEL BETTER. The new features will also not get weight zero because the model will try hard to use them! Because there are so many (175!), RF is just not able to come back to the previous "pristine"... | I am using sklearn's random forests module to predict a binary target variable based on 166 features.
When I increase the number of dimensions to 175, the accuracy of the model decreases (accuracy from 0.86 to 0.81 and recall from 0.37 to 0.32).
I would expect more data to only make the model more accurate, especia... | 0 | 1 | 2,477 |
0 | 55,993,920 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-05-05T08:46:00.000 | 0 | 2 | 0 | additional of features decrease the accuracy- random forest | 55,990,255 | 0 | python,machine-learning,random-forest | More data does not always make the model more accurate. Random forest is a traditional machine learning method where the programmer has to do the feature selection. If the model is given a lot of data but it is bad, then the model will try to make sense out of that bad data too and will end up messing things up. More d... | I am using sklearn's random forests module to predict a binary target variable based on 166 features.
When I increase the number of dimensions to 175, the accuracy of the model decreases (accuracy from 0.86 to 0.81 and recall from 0.37 to 0.32).
I would expect more data to only make the model more accurate, especia... | 0 | 1 | 2,477 |
0 | 56,006,150 | 1 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-05T22:56:00.000 | 0 | 1 | 0 | Is there a way to download "Responses In Progress" survey from Qualtrics? | 55,997,128 | 0 | python,qualtrics | Not through the API. You can do it manually through the Qualtrics interface.
If you need to use the API and the survey is invite only, an alternative would be to download the distribution history for all the distributions. That will tell you the status of each invitee. | I'm looking for a way to download surveys that are still open on Qualtrics so that I can create a report on how many surveys are completed and how many are still in progress. I was able to follow their API documentation to download the completed surveys to a csv file but I couldn't find a way to do the same for the In ... | 0 | 1 | 156 |
0 | 67,363,684 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2019-05-06T23:44:00.000 | 1 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 0.049958 | python,keras,lstm,cross-entropy,mean-square-error | The answer is right there in your question: the value of the binary cross-entropy loss is higher than the RMSE loss.
Case 1 (Large Error):
Let's say your model predicted 1e-7 and the actual label is 1.
The binary cross-entropy loss will be -log(1e-7) ≈ 16.12.
The squared error will be (1-1e-7)^2 ≈ 1.0.
Case 2 (Small Error):
Let's s... | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use binary cross-entropy as the loss function, the loss value for training and testing is relatively high compared to using the mean squared error (MSE) fu... | 0 | 1 | 8,441 |
0 | 58,903,890 | 0 | 0 | 0 | 0 | 3 | false | 9 | 2019-05-06T23:44:00.000 | 6 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 1 | python,keras,lstm,cross-entropy,mean-square-error | I would like to show it using an example.
Assume a 6-class classification problem.
Assume:
True probabilities = [1, 0, 0, 0, 0, 0]
Case 1:
Predicted probabilities = [0.2, 0.16, 0.16, 0.16, 0.16, 0.16]
Case 2:
Predicted probabilities = [0.4, 0.5, 0.1, 0, 0, 0]
The MSE in Case 1 and Case 2 is 0.128 and 0.1033, re... | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use binary cross-entropy as the loss function, the loss value for training and testing is relatively high compared to using the mean squared error (MSE) fu... | 0 | 1 | 8,441 |
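The answer's two MSE figures check out; a quick NumPy verification:
```python
import numpy as np

y  = np.array([1, 0, 0, 0, 0, 0])
p1 = np.array([0.2, 0.16, 0.16, 0.16, 0.16, 0.16])
p2 = np.array([0.4, 0.5, 0.1, 0.0, 0.0, 0.0])

print(np.mean((y - p1) ** 2))   # 0.128
print(np.mean((y - p2) ** 2))   # 0.10333...
```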
0 | 56,045,324 | 0 | 0 | 0 | 0 | 3 | true | 9 | 2019-05-06T23:44:00.000 | -1 | 4 | 0 | Why not use mean squared error for classification problems? | 56,013,688 | 1.2 | python,keras,lstm,cross-entropy,mean-square-error | I'd like to share my understanding of the MSE and binary cross-entropy functions.
In the case of classification, we take the argmax of the probability of each training instance.
Now, consider an example of a binary classifier where the model predicts the probabilities as [0.49, 0.51]. In this case, the model will return 1 as... | I am trying to solve a simple binary classification problem using LSTM. I am trying to figure out the correct loss function for the network. The issue is, when I use binary cross-entropy as the loss function, the loss value for training and testing is relatively high compared to using the mean squared error (MSE) fu... | 0 | 1 | 8,441 |
0 | 56,014,229 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-07T00:57:00.000 | 0 | 1 | 0 | Determine the language for UDF creation in Hive | 56,014,157 | 0 | java,python,hive,user-defined-functions | This question probably isn't within guidelines because you are asking for an opinion.
Having said that, I would propose that:
A) you pick a language that you know.
B) if you know both, then pick based upon the features you need.
C) Consider performance: I believe (but cannot confirm) that a compiled Java JAR will run wi... | Summary: the concern relates to UDF creation in Hive.
Dear friends, as I am new to creating UDFs in Hive (I have read about this via Google but have not gotten a very clear idea), my first task here is to determine the best possible way, Java/Python or any other, to write Hive UDFs.
Another thing is on wha... | 0 | 1 | 38 |
0 | 56,063,874 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-07T15:18:00.000 | 1 | 1 | 0 | DQN behaves differently on different computers | 56,025,783 | 0.197375 | python,python-3.x,tensorflow,keras,reinforcement-learning | I assume you run a certain version of your code with given hyper-parameter values. Then, you need to fix the random seed at the beginning of your code for tensorflow (e.g. tf.set_random_seed(1)), for numpy (e.g. np.random.seed(1)) and for random, if you use it.
Additionally, you have to have the same version of tensorflow o... | I have a more or less standard implementation of DQN solving the Atari "Breakout" (from the Coursera Reinforcement Learning course) that behaves totally differently on different computers:
on my Laptop it converges each time I run it
on Coursera and Google Colab servers it never converges!
I use
Python3
Tensorflow
Kerass... | 0 | 1 | 98 |
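A sketch of the seed-fixing preamble the answer describes, using the TF 1.x API the answer itself names:
```python
import random
import numpy as np
import tensorflow as tf

random.seed(1)          # Python's random module
np.random.seed(1)       # NumPy
tf.set_random_seed(1)   # TF 1.x; in TF 2.x this became tf.random.set_seed(1)
```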
0 | 56,027,872 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-07T16:50:00.000 | 0 | 1 | 0 | Getting OpenCV to work with python after compiling from source | 56,027,199 | 1.2 | python,python-3.x,opencv | The solution ended up being both simpler and sloppier than I would have liked. I just installed the regular distribution using pip install opencv-contrib-python, then went into the cv2 folder in Lib/site-packages, replaced the python extension (cv2.cp36-win32.pyd in my case. may be different for others) with the .pyd... | I am having an issue getting OpenCV to work with python. I compiled from source using CMake in order to gain access to the SIFT module. Whenever I try to use openCV however, python returns the "No module named 'cv2'" error. It works fine when I install using pip but then I have no SIFT. My build directory is set as... | 0 | 1 | 407 |
0 | 56,039,845 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-08T11:29:00.000 | 0 | 1 | 0 | how to extract line from a word2vec file? | 56,039,771 | 1.2 | python,pycharm | glove_model["Activity"] should get you its vector representation from the loaded model. This is because glove_model is an object of type KeyedVectors and you can use a key value to index into it. | I have created a word2vec file and I want to extract only the line at position [0].
This is the word2vec file:
`36 16
Activity 0.013954502 0.009596351 -0.0002082094 -0.029975398 -0.0244055 -0.001624907 0.01995442 0.0050479663 -0.011549354 -0.020344704 -0.0113901375 -0.010574887 0.02007604 -0.008582828 0.030914625 -0.0091... | 0 | 1 | 12 |
0 | 56,047,915 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-08T19:28:00.000 | 0 | 1 | 0 | How would I go about image labeling/Classification? | 56,047,785 | 0 | python,machine-learning,deep-learning,classification | There are two routes you can take: one where you have labeled data (or you want to label the data yourself), and one where you don't have that.
Let's start with the latter. Say you have an image of a passport. You want to detect where the text in the image is, and what that text says. You can achieve this using a library ca... | Let's say I have a set of images of passports. I am working on a project where I have to identify the name on each passport and eventually transform that object into text.
For the very first part - labeling (or classification, I think; I'm a beginner here) where the name is on each passport - how would I go about that?
What... | 0 | 1 | 91 |
0 | 56,048,584 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-08T20:10:00.000 | 2 | 1 | 0 | Is there a way to use the "read_csv" method to read the csv files in order they are listed in a directory? | 56,048,345 | 1.2 | python,pandas,csv,matplotlib,python-3.7 | You could use os.listdir() to get all the files in the folder and then sort them in a certain way, for example by name (the Python built-in sorted() would be enough). If you want fancier ordering, you could retrieve both the name and the last-modified date, store them in a dictionary, and order the ke... | I am plotting plots on one figure using matplotlib from csv files; however, I want the plots in order. I want to somehow use the read_csv method to read the csv files from a directory in the order they are listed in, so that they are outputted in the same fashion.
I want the plots listed under each other the same way the... | 0 | 1 | 47 |
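A minimal sketch of the approach the answer above suggests; the directory name is hypothetical, and files are ordered here by last-modified time (plain sorted(files) would give name order instead):

```python
import os

import pandas as pd

folder = "csv_dir"  # hypothetical directory holding the csv files
files = [f for f in os.listdir(folder) if f.endswith(".csv")]

# Sort by last-modified time so files are read in directory-listing order.
files.sort(key=lambda f: os.path.getmtime(os.path.join(folder, f)))

# Reading in that order makes the plots come out in the same sequence.
dataframes = [pd.read_csv(os.path.join(folder, f)) for f in files]
```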
0 | 56,049,286 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-08T21:06:00.000 | 0 | 2 | 0 | How do I group similar categories? | 56,049,055 | 0 | python,python-3.x,nlp,classification,text-classification | Use a pre-trained model to generate embeddings, and from there you can cluster the embeddings (optionally reducing their dimensionality first with t-SNE or UMAP). I recommend fastText or spaCy, with spaCy being the easiest to use. | I have about 1200 tv show categories .. like Drama, News, Sports, Sports-non event, Drama Medical, Drama Crime.. etc
How do I use NLP so that I get groups such that Drama, Drama medical and Drama Crime group together and Sports, Sports-non event etc group together and so on... basically the end goal is to reduce the 12... | 0 | 1 | 204 |
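A hedged sketch of that embed-then-cluster idea, assuming spaCy with a model that ships word vectors (e.g. en_core_web_md) and scikit-learn's KMeans; the category names are taken from the question and the cluster count is illustrative:

```python
import spacy
from sklearn.cluster import KMeans

nlp = spacy.load("en_core_web_md")  # model with pretrained word vectors

categories = ["Drama", "Drama Medical", "Drama Crime",
              "Sports", "Sports-non event", "News"]

# doc.vector averages the token vectors, giving one embedding per name.
vectors = [nlp(c).vector for c in categories]

# Cluster the embeddings; similar names should share a cluster label.
labels = KMeans(n_clusters=3, random_state=0).fit_predict(vectors)
for cat, lab in sorted(zip(categories, labels), key=lambda t: t[1]):
    print(lab, cat)
```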
0 | 56,053,083 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-09T04:27:00.000 | 1 | 2 | 0 | Keras tf backend predict speed slow for batch size of 1 | 56,052,206 | 0.099668 | python,performance,keras | The batch size controls parallelism when predicting, so it is expected that increasing the batch size will give better performance, as you can use more cores and use the GPU more efficiently.
You cannot really work around it - there is nothing really to work around; using a batch size of one is the worst case for performance. M... | I am combining a Monte-Carlo Tree Search with a convolutional neural network as the rollout policy. I've identified the Keras model.predict function as being very slow. After experimentation, I found that, surprisingly, model parameter size and prediction sample size don't affect the speed significantly. For reference:
... | 0 | 1 | 2,440 |
0 | 56,055,689 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-09T08:46:00.000 | 0 | 1 | 0 | cannot reshape array of size (x,) into shape (x,y,z,1) | 56,055,571 | 1.2 | python,numpy,reshape,shapes,numpy-ndarray | I found the very simple solution:
np.stack(x_train_left)
and then when I try:
x_train_left.shape prints (2200, 250, 250, 1) | I'm trying to convert a numpy ndarray with a shape of (2200,) to numpy ndarray with a shape of (2200,250,250,1). every single row contains an image (shape: 250,250,1)
This is my object:
type(x_train_left) prints numpy.ndarray
x_train_left.shape prints (2200,)
type(x_train_left[0]) prints numpy.ndarray
x_train_left[0].s... | 0 | 1 | 1,841 |
0 | 56,058,663 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-09T08:57:00.000 | 0 | 1 | 0 | Load multiple Keras models in different processes | 56,055,769 | 0 | python,tensorflow,keras,multiprocessing,python-multiprocessing | I still don't know the exact cause of the problem. However, I found out that my main process was loading a keras model and removing that solved my problem. I can now have multiple models running in parallel. | I have several trained Keras models, weights stored in h5 files using keras.models.save_model. They do not have the same architecture.
My goal is to load all of them in separate processes and be able to predict. I currently try doing this using a class which stores a TensorFlow session and graph object. I then use with... | 0 | 1 | 713 |
0 | 56,082,079 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-10T01:43:00.000 | 2 | 1 | 0 | ML with imbalanced binary dataset | 56,069,657 | 0.379949 | python,scikit-learn,dataset,resampling,oversampling | Your assumption is correct. Your machine learning model is basically overfitting on your training data, which has the same pattern repeated for one class; thus, the model learns that pattern and misses the rest of the patterns that are in the test data. This means that the model will not perform well in the wild wo... | I have a problem I am trying to solve:
- imbalanced dataset with 2 classes
- one class dwarfs the other one (923 vs 38)
- f1_macro score when the dataset is used as-is to train RandomForestClassifier stays for TRAIN and TEST in 0.6 - 0.65 range
While doing research on the topic yesterday, I educated myself in resamplin... | 0 | 1 | 162 |
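One common mitigation, named here as an assumption since the truncated answer does not spell it out, is to reweight classes instead of (or alongside) resampling; scikit-learn's RandomForestClassifier supports this directly:

```python
from sklearn.ensemble import RandomForestClassifier

# class_weight='balanced' reweights samples inversely to class frequency,
# so the 38-sample minority class is not drowned out by the 923 majority.
clf = RandomForestClassifier(n_estimators=200,
                             class_weight="balanced",
                             random_state=0)
# clf.fit(X_train, y_train), then score with f1_macro as in the question.
```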
0 | 56,090,218 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-11T12:02:00.000 | 2 | 1 | 0 | Minimize a piecewise linear, convex function with scipy | 56,090,155 | 1.2 | python,scipy | If the function is piecewise linear and convex, the minimum must be at one of the points where the linear pieces are connected. There is no need for a derivative, you should be able to use a binary search. | I want to find the minimum of a function which is piecewise linear, convex and differentiable at all but a finite number of points. What scipy.optimize.minimize method is appropriate to find a fast solution to my problem? | 0 | 1 | 146 |
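A minimal sketch of the breakpoint binary search the answer above proposes; f and the sorted breakpoints are hypothetical inputs. Because the function is convex, its values at sorted breakpoints form a unimodal sequence, so comparing neighbours tells which half holds the minimum:

```python
def minimize_piecewise_linear(f, breakpoints):
    """Return the breakpoint minimizing a convex piecewise-linear f.

    breakpoints must be sorted and include every point where pieces meet.
    """
    lo, hi = 0, len(breakpoints) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # If f still decreases past mid, the minimum lies to the right.
        if f(breakpoints[mid]) > f(breakpoints[mid + 1]):
            lo = mid + 1
        else:
            hi = mid
    return breakpoints[lo]

# Example: f(x) = max(-x, 2*x - 3) is convex with its minimum at x = 1.
print(minimize_piecewise_linear(lambda x: max(-x, 2 * x - 3),
                                [-2, 0, 1, 2, 5]))  # prints 1
```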
0 | 56,091,221 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-11T13:53:00.000 | 0 | 1 | 0 | Pandas Merge Dataframes Sequentially on Conditions | 56,090,979 | 0 | python,pandas,dataframe | I don't think there is a one-liner to do this, so follow these steps.
1) First, create a list:
dfs = []
2) Merge for each condition on dataframe:
dfs.append(pd.merge(df1,df2,left_on='col1',right_on='col1',how='outer').dropna())
dfs.append(pd.merge(df1,df2,left_on='col1',right_on='col2',how='outer').dropna())
dfs.a... | Suppose I have 2 dataframe:
DF1:
Col1 | Col2 | Col3
XCN000370/17-18C | XCN0003711718C | 0003971718
DF2
Col1 | Col2 | Col3
XCN0003711718C | XCN0003711718C | 0003971718
I want them to merge like this:
First Match Col1 (DF1) and Col1 (DF2)
In Remaining Unmatched, Match Col1 (DF1) with Col2 (DF2)
In remaining Unmatched, M... | 0 | 1 | 147 |
0 | 56,092,947 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-05-11T17:53:00.000 | 4 | 2 | 0 | How to get list of rows of pandas dataframe in python? | 56,092,914 | 1.2 | python-3.x,pandas | This should work: df.index.values
This returns the index in the form of a numpy array (numpy.ndarray); run type(df.index.values) to check. | How do I get a list of row labels in pandas?
I have a table with column labels and row labels. To return the column labels I use the dataframe's columns attribute.
It is possible to return the list of column labels with the attribute columns, but I couldn't find a similar attribute for rows. | 0 | 1 | 10,189 |
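A tiny sketch contrasting the numpy-array form from the answer with a plain Python list, on a hypothetical labeled DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=["row1", "row2"])

print(df.index.values)    # numpy array: ['row1' 'row2']
print(df.index.tolist())  # plain Python list: ['row1', 'row2']
```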
0 | 56,096,067 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-11T22:33:00.000 | 1 | 1 | 0 | How to evaluate/improve the accuracy from the prediction from a neural network with an unbalanced dataset? | 56,094,779 | 1.2 | python,machine-learning,scikit-learn,neural-network,classification | It all depends on your dataset. Neural networks are not magical tools that can learn everything, and they also require a lot of data compared to traditional machine learning models. In the case of an MLP, making a model extremely complex by adding a lot of layers is never a good idea, as it makes the model more complex, slow and... | I used GridSearchCV to determine which hyperparameters in the MLPClassifier can make the accuracy of my neural network higher. I figured out that the number of layers and nodes makes a difference, but I'm trying to figure out which other configurations can make a difference in accuracy (F1 score, actually). But from my ... | 0 | 1 | 490 |
0 | 63,011,554 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-12T00:28:00.000 | 0 | 2 | 0 | MNIST training time in CPU | 56,095,288 | 0 | python,tensorflow,neural-network,mnist | I am not really clear about the benchmark you're looking for - is it performance from a training perspective, or accuracy? For accuracy, there are some tools that can compare the predictions against the actuals so you can measure the performance | I have created a simple feed-forward Neural Network library in Java - and I need a benchmark to compare and troubleshoot my library.
Computer specs:
AMD Ryzen 7 2700X Eight-Core Processor
RAM 16.0 GB
WINDOWS 10 OS
JVM args: -Xms1024m -Xmx8192m
Note that I am not using a GPU.
Please list the following specs:
Co... | 0 | 1 | 1,145 |
0 | 56,117,031 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-13T06:29:00.000 | 0 | 1 | 0 | gensim doc2vec Model doesn't learn some words | 56,106,821 | 0 | python,gensim,doc2vec | If a word you expected to be learned in the model isn't in the model, the most likely causes are:
it wasn't really there, in the version the model saw, perhaps because your tokenization/preprocessing is broken. Enable logging at INFO level, and examine your corpus as presented to the model, to ensure it's tokenized as... | I'm currently learning gensim's Doc2Vec model in Python 3.6 to see similarity between sentences.
I created a model but it returns KeyError: "word 'WORD' not in vocabulary" when I input a word which obviously exists in the training dataset, to find a similar word/sentence.
Does it automatically skip some words not very importa... | 0 | 1 | 358 |
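A minimal sketch of enabling the INFO-level logging the answer recommends, placed before training so gensim reports vocabulary size, discarded words, and training progress:

```python
import logging

# Standard gensim logging setup; run this before building the model.
logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s",
                    level=logging.INFO)
```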
0 | 56,430,302 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-13T11:47:00.000 | 0 | 1 | 0 | Identify what group KNN classified a sample in | 56,111,669 | 0 | python,machine-learning,knn,nearest-neighbor | So, if I'm understanding the question right, you have the true group classification for your data.
In that case you can predict on your whole dataset with your trained model and identify the outliers. | I want to be able to find which of my samples were wrongly classified by KNN, or which weren't classified at all.
I have used scikit-learn to run KNN. I have a df that has ~280000 samples split into four groups, and I have 13 features by which to classify. My precision per group ranges from 0.30-0.90.
I expect the outpu... | 0 | 1 | 43 |
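A hedged sketch of pulling out the misclassified rows, on a toy stand-in for the ~280000 x 13 dataset (note that KNN always assigns some class, so there are no "unclassified" samples, only mismatches):

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in: 4 groups, 13 features, as in the question.
X, y = make_classification(n_samples=500, n_features=13, n_classes=4,
                           n_informative=6, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Compare each prediction against the known true group.
mismatch = knn.predict(X) != y
print(f"{mismatch.sum()} of {len(y)} samples were wrongly classified")
print(X[mismatch][:3])  # first few misclassified feature rows
```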
0 | 56,113,906 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-05-13T13:48:00.000 | 1 | 1 | 0 | Is there a way to solve yB = c without computing the right inverse? | 56,113,772 | 1.2 | python,numpy | You can transpose the equation and then use linalg.solve. | I would like to solve an equation of the form yB = c, where y is my unknown (possibly a matrix). However the B matrix is not well conditioned, and I would like to have a method similar to numpy.linalg.solve in order to maintain the numerical accuracy of the solution.
I have tried to simply use the inverse of B, with ... | 0 | 1 | 104 |
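A small sketch of the transposition trick, with a hypothetical square, well-posed B (for a non-square or rank-deficient B, np.linalg.lstsq on the transposed system plays the same role):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))  # hypothetical square system matrix
c = rng.standard_normal((2, 4))  # right-hand side, one row per row of y

# y @ B = c  is equivalent to  B.T @ y.T = c.T, which linalg.solve handles.
y = np.linalg.solve(B.T, c.T).T

print(np.allclose(y @ B, c))  # True: residual at machine precision
```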
0 | 56,161,443 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-14T09:28:00.000 | 0 | 1 | 0 | Algorithms for constrained clustering on attributed graphs with some cluster-level constraints on their attributes | 56,127,111 | 0 | python,constraints,cluster-analysis,graph-theory | First of all, the problem is most likely NP-hard, so the best you can do is some greedy optimization. It will definitely help to first break the graph into subsets that can never be connected (remove links between nodes that are not similar enough, then compute the connected components). Then for each component (which hop... | I have a graph with 240k nodes and 550k edges with five attributes per node coming out of an autoencoder from a sparse dataset. I'm looking to partition the graph into n clusters, such that intra-partition attribute similarity is maximized, the partitions are connected, and the sum of one of the attributes doesn't exce... | 0 | 1 | 350 |
0 | 56,189,877 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-14T10:00:00.000 | 0 | 2 | 0 | Extracting a particular type of data from unstructured text namely Institutes | 56,127,781 | 0 | python,nlp,information-extraction | The problem you face is solved by specialized text search and text analysis tools that use phonetic analysis and indexes.
One of the popular text analysis tools is Elasticsearch.
You index your documents and search them, using REST api.
Google also provides such tools for text analysis and indexing.
Also, modern RDBMS tool... | I need to extract the names of Institutes from the given data. Institute names will look similar (Anna University, Mashsa Institute of Techology, Banglore School of Engineering, Model Engineering College). There will be a lot of similar data. I want to extract these from text. How can I create a model to extract these n... | 0 | 1 | 472 |
0 | 56,129,689 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-14T11:08:00.000 | 0 | 1 | 0 | Python comparing millions of rows and hundreds of columns between two tables from relational DB | 56,129,032 | 0 | python,python-3.x,pandas,pandasql | For handling this kind of data I would recommend using something like Hadoop rather than pandas/python. This isn't much of an answer but I can't comment yet. | Currently our system is in live proving phase. So, we need to check whether the set of tables populated in production are matching with the tables populated in sandbox (test). At the moment we have written a query for each table comparison and then run it in sql client to check it. There will be few more tables to chec... | 0 | 1 | 218 |
0 | 56,263,416 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-14T21:14:00.000 | 0 | 2 | 0 | Simple example on using BuildingsPy with Dymola | 56,138,688 | 0 | python,dymola | Thank you for your explanation; it's really clear and it helped me a lot. I tested one of my models, but on launching the code, Dymola opens yet it does not load the library, nor does my model exist. This is the message I got:
Error: Simulation failed in 'C:\Temp\tmp-simulator-wwuvls\BEE'
Exception: File C:\Temp\tmp-simula... | I would like to use Python to call my Modelica models using Dymola and BuildingsPy. I read the BuildingsPy tutorial; I understand in general how it goes, but I admit that the examples are not too intuitive for me. Could someone help me with a simple example, using, for instance, an existing model in the Modelica library.
... | 0 | 1 | 404 |
0 | 56,141,111 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-05-15T02:42:00.000 | 0 | 2 | 0 | How to persist a python dictionary? | 56,141,069 | 0 | python,pandas | I would recommend persisting as JSON using pandas and reloading as needed, also with pandas. Pandas makes the reading and writing really easy for you. This allows you to have the superset of columns in the dataframe, with nulls in the spots that are missing data.
This saves you from needing to do a key value pair s... | I have a python program that takes in a list of objects of different types, and for each type, the program will output a dictionary of key/value attributes where the key is some property of the given object's type, and value is its computed result.
To make it more concrete, my program takes in a list of 2000 objects, o... | 0 | 1 | 94 |
0 | 56,141,206 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-05-15T02:42:00.000 | 0 | 2 | 0 | How to persist a python dictionary? | 56,141,069 | 0 | python,pandas | json.dumps and json.loads are your friends. To persist your structure, dumps serializes it to a string that can be written to any file-like object, and loads can reload it from a string-like object. Hope that helps! | I have a python program that takes in a list of objects of different types, and for each type, the program will output a dictionary of key/value attributes where the key is some property of the given object's type, and value is its computed result.
To make it more concrete, my program takes in a list of 2000 objects, o... | 0 | 1 | 94 |
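A minimal sketch of the round-trip the answer describes; the filename and the per-object attribute dicts are hypothetical:

```python
import json

results = {"object_001": {"mass": 4.2, "label": "A"},
           "object_002": {"mass": 1.7, "label": "B"}}

# Persist: json.dump serializes the dict straight to a file-like object.
with open("results.json", "w") as fh:
    json.dump(results, fh)

# Reload: json.load turns the JSON text back into the same dict.
with open("results.json") as fh:
    restored = json.load(fh)

print(restored == results)  # True
```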
0 | 56,158,265 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-15T11:17:00.000 | 1 | 1 | 0 | classifier using sample of a population: scaling the population and then sampling / scaling the sample / scaling the X_TRAIN split of the sample? | 56,148,094 | 0.197375 | python,data-science,sampling | Wonderful question. I had similar questions in my mind when I started out a few years ago. Let me try and give my two cents on this.
I suggest going with creating a scaler on X_train, storing the scaler, and using it to transform X_test. According to the central limit theorem, if you have done random sampling, yo... | I am building a logistic regression classifier.
I start from a set of 500,000 records and I want to use only a sample of them.
What do you recommend:
1) scaling the population and then sampling
2) scaling the sample
3) scaling just the X_TRAIN split of the sample?
and why?
My considerations are:
1) this may have se... | 0 | 1 | 48 |
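A minimal sketch of option 3 from the question, which the answer endorses: fit the scaler on X_train only and reuse it elsewhere (names follow the usual scikit-learn conventions; the data is a toy stand-in):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)    # statistics come from X_train only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # nothing leaks from the test split
```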
0 | 56,169,770 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-15T12:00:00.000 | 2 | 1 | 0 | How to read the label(annotation) file from Synthia Dataset? | 56,148,891 | 1.2 | python,deep-learning,dataset,semantic-segmentation | I found the right way to read it as below:
label = np.asarray(imageio.imread(label_path, format='PNG-FI'))[:,:,0] | I am new to the Synthia dataset. I would like to read the label file from this dataset. I expect to get a one-channel matrix with the size of my RGB image, but when I load the data I get 3x760x1280 and it is full of zeros.
I tried to read it as below:
label = np.asarray(imread(label_path))
Can anyone help to read these labels fil... | 0 | 1 | 335 |
0 | 56,224,394 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-16T05:21:00.000 | 0 | 1 | 0 | Jupyter Notebook is showing No pyspark kernel upon startup | 56,161,339 | 0 | pyspark,jupyter-notebook,kernel,ipython | The issue was resolved only by reconfiguring the Jupyter notebook. | I am running pyspark scripts in Jupyter Notebook but the kernel is not starting. Upon selecting pyspark from the dropdown, the kernel loads and remains busy for some time and then shows "no kernel".
Can someone help me?
Note: upon running "$ jupyter kernelspec list" I can see the pyspark kernel in the list. | 0 | 1 | 113 |
0 | 71,233,630 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2019-05-16T10:12:00.000 | 0 | 2 | 0 | Setting seed on train_test_split sklearn python | 56,166,130 | 0 | python-3.x,scikit-learn,jupyter-notebook,train-test-split | Simply specify the parameter random_state=some_number_you_want_to_use in train_test_split, e.g. random_state=42 | Is there any way to set the seed on train_test_split in Python sklearn? I have set the parameter random_state to an integer, but I still cannot reproduce the result.
Thanks in advance. | 0 | 1 | 12,708 |
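A minimal sketch showing that the same random_state yields identical splits across calls:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Two calls with the same random_state produce exactly the same split.
a = train_test_split(X, y, test_size=0.3, random_state=42)
b = train_test_split(X, y, test_size=0.3, random_state=42)

print(all(np.array_equal(p, q) for p, q in zip(a, b)))  # True
```

If the overall result still varies, the remaining randomness usually comes from the estimator itself (e.g. its own random_state) rather than from the split.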
0 | 56,202,424 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-16T10:29:00.000 | 0 | 1 | 0 | the clustering of mixed data using python | 56,166,439 | 0 | python,cluster-analysis | There is not one optimal number of clusters, but dozens. Every heuristic will suggest a different "optimal" number for another poorly defined notion of what is "optimal", which likely has no relevance for the problem that you are trying to solve in the first place.
Rather than being overly concerned with "optimality", ra... | I am trying to cluster a data set containing mixed data(nominal and ordinal) using k_prototype clustering based on Huang, Z.: Clustering large data sets with mixed numeric and categorical values.
My question is: how do I find the optimal number of clusters? | 0 | 1 | 121 |