Schema (22 columns; ranges are observed min–max values for numeric columns and min–max string lengths for string columns):

| Column | Dtype | Min | Max |
|---|---|---|---|
| GUI and Desktop Applications | int64 | 0 | 1 |
| A_Id | int64 | 5.3k | 72.5M |
| Networking and APIs | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| Database and SQL | int64 | 0 | 1 |
| Available Count | int64 | 1 | 13 |
| is_accepted | bool | 2 classes | n/a |
| Q_Score | int64 | 0 | 1.72k |
| CreationDate | stringlengths | 23 | 23 |
| Users Score | int64 | -11 | 327 |
| AnswerCount | int64 | 1 | 31 |
| System Administration and DevOps | int64 | 0 | 1 |
| Title | stringlengths | 15 | 149 |
| Q_Id | int64 | 5.14k | 60M |
| Score | float64 | -1 | 1.2 |
| Tags | stringlengths | 6 | 90 |
| Answer | stringlengths | 18 | 5.54k |
| Question | stringlengths | 49 | 9.42k |
| Web Development | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 1 | 1 |
| ViewCount | int64 | 7 | 3.27M |
---
Title: 'KeyError: (Timestamp('1993-01-29 00:00:00'), 'colName')
Q_Id: 58,617,655 | A_Id: 71,804,014 | CreationDate: 2019-10-30T01:43:00.000
Tags: python-3.x,pandas,datetime,yahoo-finance | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 671
Question: I am trying to create a new column on my stockmarket data frame that was imported form yahoo. I am dealing with just one symbol at the moment. symbol['profit']= [[symbol.loc[ei, 'close1']-symbol.loc[ei, 'close']] if symbol[ei, 'shares']==1 else 0 for ei in symbol.index] I am expecting to have a new column in the datafr...
Answer: I think it may be related to the fact that Jan 29th, 1993 was a Saturday Try shifting the date to the next trading day

---
Title: How to deal with label that not included in training set when doing prediction
Q_Id: 58,627,102 | A_Id: 58,631,153 | CreationDate: 2019-10-30T14:01:00.000
Tags: python,machine-learning | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 539
Question: For example, using supervise learning to classify 5 different people face. But when test on 6th people face that not in training set, the model will still predict it within the 5 people. How to let the model predict the 6th and onwards people face as unknown when the model doesn't train them before?
Answer: You could set a certain threshold for prediction the known classes. Your model should predict from the known classes only if it predicts it with a certain threshold value, otherwise, it will be classified as unknown. The other (and less preferable) way to deal with this problem is to have another class called unknown ...

---
Title: Python - Pandas read sql modifies float values columns
Q_Id: 58,627,984 | A_Id: 58,631,271 | CreationDate: 2019-10-30T14:47:00.000
Tags: python,sql,pandas | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 2 | Users Score: 3 | Score: 0.291313 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 309
Question: I'm trying to use Pandas read_sql to validate some fields in my app. When i read my db using SQL Developer, i get these values: 603.29 1512.00 488.61 488.61 But reading the same sql query using Pandas, the decimal places are ignored and added to the whole-number part. So i end up getting these values: 60329.0 1512.0...
Answer: I've found a workaround for now. Convert the column you want to string, then after you use Pandas you can convert the string to whatever type you want. Even though this works, it doesn't feel right to do so.
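The string round-trip workaround from the answer above can be sketched as follows; `raw` is a stand-in for a `read_sql` result fetched with the column cast to string (the real query and connection are not shown in the truncated question, so the column name is an assumption):

```python
import pandas as pd

# Stand-in for a read_sql result where the column was fetched as text,
# which preserves the decimal places exactly as stored in the database.
raw = pd.DataFrame({"amount": ["603.29", "1512.00", "488.61", "488.61"]})

# Convert the string column back to the numeric type you actually want.
raw["amount"] = raw["amount"].astype(float)
```

The same `astype` call works for `Decimal` via a custom converter if exact decimals matter.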
---
Title: pandas saves my data only into one column in a csv-file
Q_Id: 58,649,100 | A_Id: 58,649,241 | CreationDate: 2019-10-31T18:10:00.000
Tags: python,pandas,csv | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 141
Question: I have 2 lists and I want to save them in a csv file but they always end up in one column. dates=['13:14 - 28. Okt. 2019', '14:30 - 27. Okt. 2019', '11:33 - 26. Okt. 2019', '15:54 - 25. Okt. 2019'] codes=['W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T', 'CZWJJ-X6XHJ-9CJC5-JTT3J-WZ6WC', 'KZK3T-K6RSJ-ZWTCK-JTJ3T-T3HJJ', 'CHCBT-TF6HB-Z...
Answer: It seems they are two columns already, as you gave example in your question codes[W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T] date_posted[13:14 - 28. Okt. 2019] read this file in pandas again, with explicit "," as a delimiter, you'll be able to read this file in CSV format. Let me know if you are still not clear. Thanks.
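One way to get the two lists into two CSV columns, sketched with the stdlib `csv` module: zip the lists so each row holds one (code, date) pair. Variable names follow the question; the in-memory buffer is just for illustration:

```python
import csv
import io

dates = ['13:14 - 28. Okt. 2019', '14:30 - 27. Okt. 2019']
codes = ['W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T', 'CZWJJ-X6XHJ-9CJC5-JTT3J-WZ6WC']

buf = io.StringIO()  # swap for open('out.csv', 'w', newline='') to write a real file
writer = csv.writer(buf)
writer.writerow(['codes', 'date_posted'])  # header row
writer.writerows(zip(codes, dates))        # one pair per row -> two columns
csv_text = buf.getvalue()
```

Writing row-wise pairs (rather than each list as one row) is what keeps the values in separate columns.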
---
Title: How to plot a very large audio file with low latency and time to save file?
Q_Id: 58,667,844 | A_Id: 58,717,599 | CreationDate: 2019-11-02T02:40:00.000
Tags: python,matplotlib,audio,plot,julia | Topics: Other; Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 144
Question: I have an audio file sampled at 44 kbps and it has a few hours of recording. I would like to view the raw waveform in a plot (figure) with something like matplotlib (or GR in Julia) and then to save the figure to disk. Currently this takes a considerable amount of time and would like to reduce that time. What are some...
Answer: Assuming that your audio file has a sample rate of 44 kHz (which is the most common sampling rate), then there are 60*60*44_000 = 158400000 samples per hour. This number should be compared to a high-resolution screen which is ~4000 pixels wide (4k resolution). If you would print time series with a 600 dpi printer, 1 ho...

---
Title: Is there a way to append data to an excel file without reading its contents, in python?
Q_Id: 58,669,599 | A_Id: 58,670,829 | CreationDate: 2019-11-02T08:54:00.000
Tags: python,excel,pandas | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 2 | Users Score: 2 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 152
Question: I have a huge master data dump excel file. I have to append data to it on a regular basis. The data to be appended is stored as a pandas dataframe. Is there a way to append this data to the master dump file without having to read its contents. The dump file is huge and takes a considerable amount of time for the progr...
Answer: It isn't possible to just append to an xlsx file like a text file. An xlsx file is a collection of XML files in a Zip container so to append data you would need to unzip the file, read the XML data, add the new data, rewrite the XML file(s) and then rezip them. This is effectively what OpenPyXL does.

---
Title: Facing this error : AttributeError: Can't get attribute 'DeprecationDict' on <module 'sklearn.utils.deprecation'
Q_Id: 58,676,350 | A_Id: 63,735,920 | CreationDate: 2019-11-03T00:04:00.000
Tags: python,scikit-learn | Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 404
Question: Facing this issue while running the code to load ML model pickle file.,, AttributeError: Can't get attribute 'DeprecationDict' on
Answer: You used a new version of scikit-learn to load a model that was trained by an older version of scikit-learn. Therefore, the options are: Retrain the model with the current version of scikit-learn if you have a training text and data. Or go back to the lower version of the scikit-learn reported in the warning message

---
Title: pandas vs numpy packages in python
Q_Id: 58,681,323 | A_Id: 58,684,334 | CreationDate: 2019-11-03T14:51:00.000
Tags: python-3.x | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 31
Question: Are pandas iterrows fast compare to np.where on a smaller dataset? I heard numpy is always efficient compared to pandas? I was surprised to see that when I used iterrow in my code vs numpy's np.where on a small dataset, iterrows execution was fast.
Answer: Yes, there is a speed difference. Feel free to post your timeit benchmark figures.
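The `timeit` comparison the answer asks for could look like this hypothetical micro-benchmark; absolute timings vary by machine and frame size, so only the equality of the two results is checked, not which is faster:

```python
import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame({"x": range(100)})

def with_iterrows():
    # row-by-row Python loop over the frame
    return [1 if row["x"] > 50 else 0 for _, row in df.iterrows()]

def with_np_where():
    # vectorized elementwise selection
    return np.where(df["x"] > 50, 1, 0)

t_iter = timeit.timeit(with_iterrows, number=10)
t_np = timeit.timeit(with_np_where, number=10)
```

On tiny frames the fixed overhead of either approach can dominate, which is one plausible reason the asker saw `iterrows` keep up.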
---
Title: Project organization with Tensorflow.keras. Should one subclass tf.keras.Model?
Q_Id: 58,685,407 | A_Id: 58,685,507 | CreationDate: 2019-11-03T23:03:00.000
Tags: python,tensorflow,tensorflow-estimator,tf.keras | Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 3 | Score: 1.2 | is_accepted: true | AnswerCount: 2 | Available Count: 1 | ViewCount: 349
Question: I'm using Tensorflow 1.14 and the tf.keras API to build a number (>10) of differnet neural networks. (I'm also interested in the answers to this question using Tensorflow 2). I'm wondering how I should organize my project. I convert the keras models into estimators using tf.keras.estimator.model_to_estimator and Tensor...
Answer: Subclass only if you absolutely need to. I personally prefer following the following order of implementation. If the complexity of the model you are designing, can not be achieved using the first two options, then of course subclassing is the only option left. tf.keras Sequential API tf.keras Functional API Subclass...

---
Title: Force Anaconda to install tensorflow 1.14
Q_Id: 58,688,481 | A_Id: 69,961,017 | CreationDate: 2019-11-04T06:52:00.000
Tags: python,python-3.x,tensorflow,anaconda,version | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 28 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 51,822
Question: Now, the official TensorFlow on Anaconda is 2.0. My question is how to force Anaconda to install an earlier version of TensorFlow instead. So, for example, I would like Anaconda to install TensorFlow 1.14 as plenty of my projects are depending on this version.
Answer: first find the python version of tensorflow==1.14.0, then find the Anaconda version by python version. e.g. tensorflow 1.14.0 can work well on python36, and Anaconda 3.5.1 has python36. So install the Anaconda 3.5.1, then install tensorflow==1.14.0 by pip

---
Title: Difference between context-sensitive tensors and word vectors
Q_Id: 58,688,938 | A_Id: 58,690,132 | CreationDate: 2019-11-04T07:33:00.000
Tags: python,nlp,spacy | Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 2 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 492
Question: I am currently working in python with spacy and there are different pre-trained models like the en_core_web_sm or the en_core_web_md. One of them is using words vectors to find word similarity and the other one is using context-sensitive tensors. What is the difference between using context-sensitive tensors and using...
Answer: Word vectors are stored in a big table in the model and when you look up cat, you always get the same vector from this table. The context-sensitive tensors are dense feature vectors computed by the models in the pipeline while analyzing the text. You will get different vectors for cat in different texts. If you use en_...

---
Title: How can deep learning models found via cross validation be combined?
Q_Id: 58,691,535 | A_Id: 58,692,931 | CreationDate: 2019-11-04T10:30:00.000
Tags: python,keras,scikit-learn,deep-learning,cross-validation | Topics: Web Development; Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 415
Question: I'm training a keras deep learning model with 3 fold cross validation. For every fold I'm receiving a best performing model and in the end my algorithm is giving out the combined score of the three best models. My question now is, if there is a possibility to combine the 3 models in the end or if it would be a legit so...
Answer: A (more) correct reflection of your performance on your dataset would be to average the N fold-results on your validation set. As per the three resulting models, you can have an average prediction (voting ensemble) for a new data point. In other words, whenever a new data point arrives, predict with all your three mode...
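The voting-ensemble step the answer describes reduces to an element-wise mean over the fold models' outputs; the probability arrays below are made up for illustration:

```python
import numpy as np

# Hypothetical class-probability outputs of the three fold models
# for the same batch of two samples (rows: samples, cols: classes).
preds = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # model from fold 1
    [[0.8, 0.2], [0.4, 0.6]],   # model from fold 2
    [[0.7, 0.3], [0.3, 0.7]],   # model from fold 3
])

ensemble = preds.mean(axis=0)     # average over the three models
labels = ensemble.argmax(axis=1)  # final voted class per sample
```

With real keras models, `preds` would be `np.stack([m.predict(x) for m in models])`.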
---
Title: Pandas, .agg('sum') vs .sum()
Q_Id: 58,701,025 | A_Id: 58,701,942 | CreationDate: 2019-11-04T21:03:00.000
Tags: python,pandas | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.197375 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 658
Question: At the end of my code I sum by dataframe below, then export to csv: sumbyname = d5.groupby(['Name'])['Value'].agg('sum') I sum the value of each person by name, Now if I sum this column in excel using SUM then I get +12 Now if i do d5['Value'].sum()) in my code to find the total sum, I get -11. Is there a difference in...
Answer: The value in column d5['Name'] might contains null values. Groupby will ignore those rows with None in d5['Name'].
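The behaviour the answer points to can be shown on a toy frame with hypothetical values chosen to reproduce the +12 vs -11 discrepancy: `groupby` drops rows whose key is null, so the grouped sum can differ from the plain column sum:

```python
import pandas as pd

# One row has a missing Name, so groupby will silently exclude it.
d5 = pd.DataFrame({"Name": ["a", "b", None], "Value": [7, 5, -23]})

grouped_total = d5.groupby(["Name"])["Value"].agg("sum").sum()  # null key excluded
column_total = d5["Value"].sum()                                 # every row included
```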
---
Title: Python Pandas dataFrame - Columns selection
Q_Id: 58,704,076 | A_Id: 58,704,122 | CreationDate: 2019-11-05T03:30:00.000
Tags: python,pandas,dataframe | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 65
Question: I have a Pandas dataFrame object train_df with say a column called "ColA" and a column "ColB". It has been loaded from a csv file with columns header using read_csv I obtain the same results when I code: pd.crosstab(train_df['ColA'], train_df['ColB']) or pd.crosstab(train_df.ColA, train_df.ColB) Is there any differenc...
Answer: No difference pd.crosstab(train_df['ColA'], train_df['ColB']) is recommended to prevent possible errors. For example, if you have a column named count and if you type train_df.count it will give an error. train_df['count'] won't give an error.
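The pitfall the answer names can be sketched with a toy frame: `count` is a DataFrame method, so attribute access returns the bound method rather than the column, while bracket access always returns the column:

```python
import pandas as pd

df = pd.DataFrame({"count": [1, 2], "ColA": ["x", "y"]})

bracket = df["count"]  # the actual column (a Series)
dot = df.count         # DataFrame.count, the method -- not the column
```

The same shadowing happens for any column whose name collides with a DataFrame attribute, or that contains spaces.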
---
Title: Python Pandas dataFrame - Columns selection
Q_Id: 58,704,076 | A_Id: 58,704,174 | CreationDate: 2019-11-05T03:30:00.000
Tags: python,pandas,dataframe | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 65
Question: I have a Pandas dataFrame object train_df with say a column called "ColA" and a column "ColB". It has been loaded from a csv file with columns header using read_csv I obtain the same results when I code: pd.crosstab(train_df['ColA'], train_df['ColB']) or pd.crosstab(train_df.ColA, train_df.ColB) Is there any differenc...
Answer: If you only want to select a single column, there is no difference between the two ways. However, the dot notation doesn't allow you to select multiple columns, whereas you can use dataframe[['col1', 'col2']] to select multiple columns (which returns a pandas.core.frame.DataFrame instead of a pandas.core.series.Series)...

---
Title: Does Tensorflow Use the Best Weights or Most Recent Weights When Testing in the Same Session?
Q_Id: 58,704,575 | A_Id: 58,704,657 | CreationDate: 2019-11-05T04:35:00.000
Tags: python-3.x,tensorflow | Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 30
Question: This might be a dumb question, but would like someone to tell me yes or no. Say I have an LSTM network in Tensorflow, and am training it using the Adam Optimizer to minimize a cost function by feeding X and Y variables a set of X and Y dict's during training, and then IN THE SAME SESSION, feeding the variables new X a...
Answer: It doesn't. The model relies on a single set of weights, that are variables. You can store the best model with a saver and save the training progress as a separate checkpoint. Other option would be to have a duplicate set of variables and copy weights once a better model is found. Yet, the it is normally uncommon to ju...

---
Title: Cannot Import name 'spaces' from gym
Q_Id: 58,705,609 | A_Id: 63,646,557 | CreationDate: 2019-11-05T06:27:00.000
Tags: python-3.x,python-import,importerror,openai-gym | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 796
Question: Everything was working fine, but suddenly running a python task which imports gym and from gym imports spaces leads to an error(though it was working fine before): ImportError: cannot import name 'spaces' I have tried reinstalling gym but then my tensorflow needs bleach version to be 1.5 while gym requires a upgraded ...
Answer: There are probably multiple reasons for this error message. On Windows 10, this can be due to access permissions to gym-related folder. Make sure your Windows user account is granted access to gym and/or python libraries more broadly.

---
Title: How to set prunable layers for tfmot.sparsity.keras.prune_low_magnitude?
Q_Id: 58,711,222 | A_Id: 59,372,216 | CreationDate: 2019-11-05T12:18:00.000
Tags: python,machine-learning,keras,tensorflow2.0,pruning | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 321
Question: I am applying the pruning function from tensorflow_model_optimization, tfmot.sparsity.keras.prune_low_magnitude() to MobileNetV2. Is there any way to set only some layers of the model to be prunable? For training, there is a method "set_trainable", but I haven't found any equivalent for pruning. Any ideas or comment...
Answer: In the end I found that you can also apply prune_low_magnitude() per layer. So the workaround would be to define a list containing the names or types of the layers that shall be pruned, and iterate the layer-wise pruning over all layers in this list.

---
Title: Cluster identification with NN
Q_Id: 58,711,675 | A_Id: 58,724,694 | CreationDate: 2019-11-05T12:45:00.000
Tags: python,tensorflow,neural-network,cluster-analysis | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 97
Question: I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimatio...
Answer: If you want to treat clustering as a classification problem, then you can try to train the network to predict whether two points belong to the same clusters or to different clusters. This does not ultimately solve your problems, though - to cluster the data, this labeling needs to be transitive (which it likely will no...

---
Title: Cluster identification with NN
Q_Id: 58,711,675 | A_Id: 58,712,729 | CreationDate: 2019-11-05T12:45:00.000
Tags: python,tensorflow,neural-network,cluster-analysis | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 97
Question: I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimatio...
Answer: These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant). the challenge is now to build a network which does this clustering ...

---
Title: Matrix multiplication using numpy array
Q_Id: 58,715,034 | A_Id: 58,715,086 | CreationDate: 2019-11-05T15:56:00.000
Tags: python,numpy,regression | Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 48
Question: I am trying to do a linear regression using Matrix multiplication. X is the feature matrix, and I have 100 data points. As per the normal equation, the dot product of X and of the transpose of X is required. Having added a column of ones as required, the shape of X is 100×2 while for the transpose of X it is 2×100. H...
Answer: You are feeding them in the wrong order Instead of feeding (100,2) * (2,100), you are feeding (2,100) * (100,2)
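The shape fix the answer describes, sketched for the normal equation on synthetic data: with X of shape (100, 2), `X.T @ X` yields the (2, 2) Gram matrix the normal equation needs, while reversing the order yields an unintended (100, 100) product:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.random(100)])  # (100, 2): intercept + feature
y = rng.random(100)

gram = X.T @ X       # (2, 100) @ (100, 2) -> (2, 2): what the normal equation uses
too_big = X @ X.T    # (100, 2) @ (2, 100) -> (100, 100): the reversed product

# Normal-equation solution: theta = (X^T X)^{-1} X^T y
theta = np.linalg.solve(gram, X.T @ y)
```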
---
Title: How to resize image by mainitaining aspect ratio in python3?
Q_Id: 58,717,150 | A_Id: 58,717,243 | CreationDate: 2019-11-05T18:11:00.000
Tags: python-3.x,numpy | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true | AnswerCount: 3 | Available Count: 1 | ViewCount: 52
Question: I have an image with image.shape=(20,10)and I want to resize this image so that new image size would be image.size = 90. I want to use np.resize(image,(new_width, new_height)), but how can I calculate new_width and new_height, so that it maintains aspect_ratio as same as in original image.
Answer: Well, you choose which dimension you want to enforce and then you adjust the other one by calculating either new_width = new_height*aspect_ratio or new_height = new_width/aspect_ratio. You might want to round those numbers and convert them to int too.
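The arithmetic in the answer, wrapped in a small helper; the function name and signature are illustrative, not from the original post. Fixing one dimension and deriving the other preserves the original ratio:

```python
def resize_keep_aspect(width, height, new_width=None, new_height=None):
    """Derive the missing dimension from the original aspect ratio,
    rounding to an integer pixel count."""
    aspect_ratio = width / height
    if new_width is not None:
        return new_width, int(round(new_width / aspect_ratio))
    return int(round(new_height * aspect_ratio)), new_height
```

For a (20, 10) image, asking for a new width of 90 gives (90, 45), keeping the 2:1 ratio.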
---
Title: How to set a threshold value from signal to be processed in wavelet thresholding in python
Q_Id: 58,725,295 | A_Id: 58,730,741 | CreationDate: 2019-11-06T07:45:00.000
Tags: python,wavelet | Topics: Data Science and Machine Learning
Q_Score: 3 | Users Score: 2 | Score: 0.379949 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 956
Question: I'm trying to denoise my signal using discrete wavelet transform in python using pywt package. But i cannot define what is threshold value that i should set in pywt.threshold() function I have no idea what the best threshold value that should be set in order to reconstruct a signal with minimal noise I used ordinary co...
Answer: There are some helpful graphics on pywt webpage that help visualize what these thresholds are and what they do. The threshold applies to the coefficients as opposed to your raw signal. So for denoising, this will typically be the last couple of entries returned by pywt.wavedec that will need to be zeroed/thresholded. ...

---
Title: Should we stop training discriminator while training generator in CycleGAN tutorial?
Q_Id: 58,726,483 | A_Id: 65,612,208 | CreationDate: 2019-11-06T09:06:00.000
Tags: python,tensorflow,deep-learning,generative-adversarial-network | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 413
Question: In the code provided by tensorlfow tutorial for CycleGAN, they have trained discriminator and generator simultaneously. def train_step(real_x, real_y): # persistent is set to True because the tape is used more than # once to calculate the gradients. with tf.GradientTape(persistent=True) as tape:...
Answer: For the training to happen in an adversarial way the gradients of the discriminator and generator networks should be updated separately. The discriminator becomes stronger because generator produces more realistic samples and vise versa. If you update these networks together the "adversarial" training is not happening ...

---
Title: Should we stop training discriminator while training generator in CycleGAN tutorial?
Q_Id: 58,726,483 | A_Id: 58,727,960 | CreationDate: 2019-11-06T09:06:00.000
Tags: python,tensorflow,deep-learning,generative-adversarial-network | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 2 | Available Count: 2 | ViewCount: 413
Question: In the code provided by tensorlfow tutorial for CycleGAN, they have trained discriminator and generator simultaneously. def train_step(real_x, real_y): # persistent is set to True because the tape is used more than # once to calculate the gradients. with tf.GradientTape(persistent=True) as tape:...
Answer: In the GANs, you don't stop training D or G. They are trained simultaneously. Here they first calculate the gradient values for each network (not to change D or G before calculating the current loss), then update the weights using those. It's not clear in your question, what's the benefit of what?

---
Title: What is the difference between interpolation and imputation?
Q_Id: 58,731,044 | A_Id: 62,236,017 | CreationDate: 2019-11-06T13:14:00.000
Tags: python-3.x,pandas | Topics: Data Science and Machine Learning
Q_Score: 6 | Users Score: 2 | Score: 0.197375 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 4,779
Question: I just learned that you can handle missing data/ NaN with imputation and interpolation, what i just found is interpolation is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points while imputation is replacing the missing data of the mean of the column. B...
Answer: I will answer the second part of your question i.e. when to use what. We use both techniques depending upon the use case. Imputation: If you are given a dataset of patients with a disease (say Pneumonia) and there is a feature called body temperature. So, if there are null values for this feature then you can replace ...
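The distinction the answer draws can be shown on a tiny series with a hypothetical gap: mean imputation fills with one constant derived from the whole column, while interpolation constructs a value between the gap's neighbours:

```python
import numpy as np
import pandas as pd

s = pd.Series([10.0, 20.0, np.nan, 40.0])

imputed = s.fillna(s.mean())    # fills with the column mean (70/3 ~ 23.33)
interpolated = s.interpolate()  # fills with the midpoint of neighbours (30.0)
```

The two fills differ here (23.33 vs 30.0), which is exactly the practical difference between the techniques.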
---
Title: How to use GCP vision trained model in Object Detection API
Q_Id: 58,731,323 | A_Id: 58,952,487 | CreationDate: 2019-11-06T13:30:00.000
Tags: python-3.x,tensorflow,google-cloud-platform,object-detection-api | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 51
Question: I trained a object detection model through vision in GCP, How can i use that model in normal tensorflow object detection api provided by google in GitHub? It gives 3 options for exporting the model which one to use & how?
Answer: You need to save the model in the local drive from the cloud and load this model into the object detection api, the function load_model usually downloads pre-trained model from URL, you need to give the path of the local saved model here. The API that you are looking for is tf.saved_model.load, also update the path for...

---
Title: Is there a way to mutate a neural network in tensorflow/keras?
Q_Id: 58,737,140 | A_Id: 58,738,193 | CreationDate: 2019-11-06T19:18:00.000
Tags: python,tensorflow,keras,evolutionary-algorithm | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 181
Question: I would like to create a neuroevolution project using python and tensorflow/keras but I couldn't find any good way of mutating the neural network. I am aware that there are librarys like NEAT, but I wanted to try and code it myself. Would appreciate it if anyone can tell me something.
Answer: Your question is a little bit too vague, but I would assume that coding your own evolutionary algorithm shouldn’t be too difficult for you given what you have done with neural networks so far. A good starting point for you would be to research the following EA concepts… Encoding. Fitness. Crossover and/or Mutation.

---
Title: Select a "mature" curve that best matches the slope of a new "immature" curve
Q_Id: 58,767,395 | A_Id: 58,767,741 | CreationDate: 2019-11-08T13:21:00.000
Tags: python | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 53
Question: I have a multitude of mature curves (days are plotted on X axis and data is >= 90 days old so the curve is well developed). Once a week I get a new set of data that is anywhere between 0 and 14 days old. All of the data (old and new), when plotted, follows a log curve (in shape) but with different slopes. So some weeks...
Answer: This seems more like a mathematical problem than a coding problem, but I do have a solution. If you want to find how similar two curves are, you can use box-differences or just differences. You calculate or take the y-values of the two curves for each x value shared by both the curves (or, if they share no x-values bec...

---
Title: How to generate frozen_inference_graphe.pb and .pbtxt files with tensorflow 2
Q_Id: 58,780,057 | A_Id: 59,092,212 | CreationDate: 2019-11-09T14:07:00.000
Tags: python,opencv,tensorflow,keras,tensorflow2.0 | Topics: Data Science and Machine Learning
Q_Score: 2 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 243
Question: I'd like to use my own tensorflow 2 / keras model with opencv (cv.dnn.readNetFromTensorflow( bufferModel[, bufferConfig] ). But, I didn't manage to generate the required files : bufferModel : buffer containing the content of the pb file (frozen_inference_graphe) bufferConfig : buffer containing the content of the...
Answer: The .pb file gets generated when using keras.callbacks.ModelCheckpoint(). However, I don't know how to create the .pbtxt file.

---
Title: Get N random non-overlapping substrings of length K
Q_Id: 58,784,258 | A_Id: 58,784,283 | CreationDate: 2019-11-09T22:53:00.000
Tags: python,string,random | Topics: Python Basics and Environment; Data Science and Machine Learning
Q_Score: 2 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 3 | Available Count: 1 | ViewCount: 284
Question: Let's say we have a string R of length 20000 (or another arbitrary length). I want to get 8 random non-overlapping sub strings of length k from string R. I tried to partition string R into 8 equal length partitions and get the [:k] of each partition but that's not random enough to be used in my application, and the con...
Answer: You could simply run a loop, and inside the loop use the random package to pick a starting index, and extract the substring starting at that index. Keep track of the starting indices that you have used so that you can check that each substring is non-overlapping. As long as k isn't too large, this should work quickly a...
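The rejection loop the answer sketches, as a small function (names are mine). It assumes n*k is much smaller than len(R), as in the asker's case, so rejections stay rare and the loop terminates quickly:

```python
import random

def random_substrings(R, n, k, seed=None):
    """Pick n non-overlapping length-k substrings of R at random start positions."""
    rng = random.Random(seed)
    taken = []   # start indices of windows already accepted
    result = []
    while len(result) < n:
        start = rng.randrange(len(R) - k + 1)
        # accept only if this window overlaps none of the accepted windows
        if all(abs(start - s) >= k for s in taken):
            taken.append(start)
            result.append(R[start:start + k])
    return result
```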
---
Title: Retrain Tensorflow Model on the go
Q_Id: 58,784,730 | A_Id: 58,798,707 | CreationDate: 2019-11-10T00:17:00.000
Tags: python,tensorflow,deep-learning,image-recognition | Topics: Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 1 | Available Count: 1 | ViewCount: 40
Question: I m trying to create an application that captures the feed of one camera, detects the faces in the feed, then takes pictures of them and adds them to the image database. Simultaneously another camera feed will be captured and another neural network will compare the faces in the second camera feed with the face images i...
Answer: As for step two(getting the name of the person), I don't think you would need any retraining to achieve this. You could use Convolutional LSTM or a similar nn. input shape could be (None,image_dimension_x,y,3) (3 is the color channel, for RGB) where None would be the current total number of images in the database. It...

---
Title: The smallest valid alpha value in matplotlib?
Q_Id: 58,788,958 | A_Id: 58,789,056 | CreationDate: 2019-11-10T13:06:00.000
Tags: python,matplotlib | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: -1 | Score: -0.099668 | is_accepted: false | AnswerCount: 2 | Available Count: 1 | ViewCount: 418
Question: Some of my plots have several million lines. I dynamically adjust the alpha value, by the number of lines, so that the outliers more or less disappear, while the most prominent features appear clear. But for some alpha's, the lines just disappear. What is the smallest valid alpha value for line plots in in matplotlib? ...
Answer: There's no lower limit; the lines just appear to be invisible for very small alpha values. If you draw one line with alpha=0.01 the difference in color is too small for your screen / eyes to discern. If you draw 100 lines with a=0.01 on top of each other, you will see them. As for your problem, you can just add a small...

---
Title: How to choose python pandas arrangement columns vs rows
Q_Id: 58,789,312 | A_Id: 58,789,387 | CreationDate: 2019-11-10T13:51:00.000
Tags: python,pandas,indexing,row,multiple-columns | Topics: Data Science and Machine Learning
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true | AnswerCount: 1 | Available Count: 1 | ViewCount: 28
Question: I am quite new with pandas (couple of months) and I am starting building up a project that will be based on a pandas data array. Such pandas data array will consist on a table including different kind of words present in a collection of texts (around 100k docs, and around 200 key-words). imagine for instance the words...
Answer: Generally in pandas, we follow a practice that instances are columns (here doc number) and features are columns (here words). So, prefer to use the approach 'b'.

---
Title: How to remove rows from a datascience table in python
Q_Id: 58,789,936 | A_Id: 58,818,221 | CreationDate: 2019-11-10T15:08:00.000
Tags: python-3.x,jupyter-notebook | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 4 | ViewCount: 1,193
Question: I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it? Thanks
Answer: use the 'drop.isnull()' function.

---
Title: How to remove rows from a datascience table in python
Q_Id: 58,789,936 | A_Id: 63,076,734 | CreationDate: 2019-11-10T15:08:00.000
Tags: python-3.x,jupyter-notebook | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 4 | ViewCount: 1,193
Question: I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it? Thanks
Answer: To remove a row in a datascience package: name_of_your_table.remove() # number of the row in the bracket

---
Title: How to remove rows from a datascience table in python
Q_Id: 58,789,936 | A_Id: 66,193,032 | CreationDate: 2019-11-10T15:08:00.000
Tags: python-3.x,jupyter-notebook | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 4 | ViewCount: 1,193
Question: I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it? Thanks
Answer: #df is the original dataframe# #The '-' operator removes the null values and re-assigns the remaining ones to df# df=idf[-(df['Column'].isnull())]

---
Title: How to remove rows from a datascience table in python
Q_Id: 58,789,936 | A_Id: 68,397,031 | CreationDate: 2019-11-10T15:08:00.000
Tags: python-3.x,jupyter-notebook | Topics: Database and SQL; Data Science and Machine Learning
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false | AnswerCount: 5 | Available Count: 4 | ViewCount: 1,193
Question: I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it? Thanks
Answer: use dataframe_name.isnull() #To check the is there any missing values in your table. use dataframe_name.isnull.sum() #To get the total number of missing values. use dataframe_name.dropna() # To drop or delete the missing values.
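The three calls listed in the last answer, demonstrated on a toy frame with made-up values (note it is `isnull().sum()`, with the call parentheses): `dropna` returns a copy with every row containing a null removed, which handles all 1000+ rows at once:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan, 3], "b": [4, 5, np.nan]})

mask = df.isnull()                    # boolean frame marking missing cells
n_missing = df.isnull().sum().sum()   # total count of missing values
cleaned = df.dropna()                 # keeps only rows with no missing values
```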
1
58,790,592
0
0
0
0
1
false
2
2019-11-10T16:14:00.000
0
2
0
How to get pixel values inside of a rectangle within an image
58,790,535
0
python,image,opencv
It is not very easy to iterate through a slanted rectangle. Therefore, what you can do is to rotate the whole image such that the rectangle is parallel to the sides again. For this, you can compute the slope of one side as difference in the y coordinate over the difference in the x coordinate of the corners. The value...
I have an image, where four corner points are defined. Now I want to get the pixel values of the region, that is defined by the 4 corners. Problem is, although it is a rectangle, it has a "slope", which means neither the two upper corner points nor the lower one are at the same height. How can I still solve this issue?...
0
1
2,611
0
58,801,419
0
0
0
0
1
true
1
2019-11-11T12:09:00.000
1
1
0
Error when using loaded Keras classifier with custom metrics function
58,801,078
1.2
python,keras
Do you mean keras.models.load_model(path)? It sounds very strange to have model.load_model(). You are probably missing the argument custom_objects = {'roc_auc': roc_auc} in load_model. Keras cannot create a model if it doesn't know what roc_auc means.
I have a keras model, which uses custom function for metrics: model.compile(optimizer = tf.keras.optimizers.Adam(), loss = 'binary_crossentropy', metrics = ['accuracy', roc_auc]) The function works fine and model behaves as expected. However, when saving the model via model.save() and then loading it via model.load_mod...
0
1
24
0
58,802,847
0
1
0
0
1
false
0
2019-11-11T13:42:00.000
0
1
0
How to create custom report in PDF using matplotlib and python
58,802,543
0
python,matplotlib
Even though the question is not very clear. If I have to do what I understand from your question, I will use Jupyter Notebook and save it as PDF. In this notebook, I will have: Exploratory analysis of the data (What data scientists call EDA) Discussion and other mathematical formulas at they may apply to your case The...
i am working on a project where i have to present the Chart /graph created using matplotlib with python3 into a PDF format. The PDF must carry the data, custom titles along with the chart/graph. PDF can be multiple page report as well. I know that we can store the matplotlib charts in PDF. But i am looking for any solu...
0
1
284
0
58,808,296
0
0
0
0
1
false
2
2019-11-11T16:51:00.000
1
3
0
Adding a column to a pandas dataframe based on other columns
58,805,531
0.066568
python,pandas,list-comprehension
Thanks guys! With your help I was able to solve my problem. Like Prince Francis suggested I first did df['temp'] = df.apply(lambda x : [i for i, e in enumerate(x['WD']) if e == x['Max_WD']], axis=1) to get the indicees of the 'WD'-values in 'LF'. In a second stept I then could add the actual column 'Max_LF' by doing df...
Problem description Introductory remark: For the code have a look below Let's say we have a pandas dataframe consisting of 3 columns and 2 rows. I'd like to add a 4th column called 'Max_LF' that will consist of an array. The value of the cell is retrieved by having a look at the column 'Max_WD'. For the first row that ...
0
1
1,805
0
58,812,577
0
0
0
0
1
false
1
2019-11-11T22:02:00.000
2
1
1
Apache beam DirectRunner vs "normal" parallel processes
58,809,283
0.379949
python,google-cloud-platform,cloud,apache-beam,dataflow
Your question is broad. However, I will try to provide you with some inputs. It's hard to compare a DirectRunner and a DataflowRunner. DirectRunner launches your pipeline on your current VM and uses the capability of that single VM. It's your VM: you have to set it up, patch it, take care to free disk/partition/log files, (.....
I currently have a pipeline running on GCP. The entire thing is written using pandas to manipulate CSVs and do some transformations, as well as side inputs from external sources. (It makes use of bigquery and storage APIs). The thing is, it runs on a 32vCPUs/120GB RAM Compute Engine instance (VM) and it does simple par...
0
1
564
0
58,844,839
0
1
0
0
1
true
0
2019-11-12T04:53:00.000
0
1
0
TensorFlow CondaVerificationError - Mixing Pip with Conda
58,812,244
1.2
python,windows,tensorflow,pip,conda
Update: I reinstalled TensorFlow 2.0 with pip and even though it successfully installed, I was still facing issues when I tried to run my code. A file called cudart64_100.dll could not be located. I eventually managed to get my TF 2.0 code to run successfully by installing an older version of CUDA. I'm still not sure w...
I'm running Anaconda 64 bit on Windows 10 and I've encountered a CondaVerificationError when I try installing TensorFlow 2.0 on one of my computers. I believe the error stems from mixing pip installations with conda installations for the same package. I originally installed then uninstalled TensorFlow with pip and then...
0
1
74
0
69,829,396
0
1
0
0
2
false
4
2019-11-12T08:24:00.000
0
3
0
Matplotlib: Command errored out with exit status 1
58,814,671
0
python,python-3.x,matplotlib,pip,python-packaging
pip install --pre -U scikit-learn: this command worked for me. I have found that this error comes from a duplication of the same libraries.
I want to install the matplotlib package for Python with pip install matplotlib in the command prompt but suddenly the lines get red and the next error appears: ERROR: Command errored out with exit status 1: 'c:\users\pol\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.ar...
0
1
11,426
0
58,814,829
0
1
0
0
2
false
4
2019-11-12T08:24:00.000
-1
3
0
Matplotlib: Command errored out with exit status 1
58,814,671
-0.066568
python,python-3.x,matplotlib,pip,python-packaging
Try running your command prompt with administrator privileges If the problem further persists try reinstalling pip
I want to install the matplotlib package for Python with pip install matplotlib in the command prompt but suddenly the lines get red and the next error appears: ERROR: Command errored out with exit status 1: 'c:\users\pol\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.ar...
0
1
11,426
0
58,822,290
0
0
0
0
1
true
2
2019-11-12T15:49:00.000
2
3
0
Read a pickle with a different version of pandas
58,822,129
1.2
python,pandas
You will need the same version (or a later one) of pandas as the one used for to_pickle. When pandas converts a dataframe to pickle, the serialization process is specific to that version. I advise contacting your administrator and having them convert the pickle to csv; that way you can open it with any version of pandas. Unless the...
I can't read a pickle file saved with a different version of Python pandas. I know this has been asked here before, but the solutions offered, using pd.read_pickle("my_file.pkl") is not working either. I think (but I am not sure) that these pickle files were created with an newer version of pandas than that of the mach...
0
1
4,370
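The CSV round-trip suggested in the answer above can be sketched like this (file names and frame contents are made up; the pickle write would happen on the machine whose pandas version created it):

```python
import os
import tempfile

import pandas as pd

# Toy frame standing in for the real data
df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

tmpdir = tempfile.mkdtemp()
pkl_path = os.path.join(tmpdir, "my_file.pkl")
csv_path = os.path.join(tmpdir, "my_file.csv")

# The machine with the newer pandas writes the pickle...
df.to_pickle(pkl_path)

# ...and round-trips it through CSV so any pandas version can read it
pd.read_pickle(pkl_path).to_csv(csv_path, index=False)
restored = pd.read_csv(csv_path)
```

Note that CSV loses dtype nuances (e.g. categoricals, timezones), so check the restored dtypes after loading.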
0
58,825,987
0
0
0
0
1
false
1
2019-11-12T20:02:00.000
0
1
0
How to Sort a Pandas Column by specific values
58,825,887
0
python,python-3.x,pandas,sorting
What are you sorting by? Alphabetical would be ['four', 'nine', 'one', 'six', 'two']
Let's say, for the sake of this question, I have a column titled Blah filled with the following data points (I will give it in a list for clarity): Values = ['one', 'two', 'four', 'six', 'nine'] How could I choose to sort by specific values in this column? For example, I would like to sort this column, Blah, filled wit...
0
1
55
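If "sort by specific values" means a custom, non-alphabetical order (my assumption about the intent), one way is an ordered Categorical; the word order below is hypothetical:

```python
import pandas as pd

# Column in an arbitrary order
df = pd.DataFrame({"Blah": ["six", "one", "nine", "two", "four"]})

# Define the desired custom order explicitly
order = ["one", "two", "four", "six", "nine"]
df["Blah"] = pd.Categorical(df["Blah"], categories=order, ordered=True)

# sort_values now respects the category order, not the alphabet
df_sorted = df.sort_values("Blah")
```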
0
58,841,007
0
0
0
0
1
true
0
2019-11-13T12:06:00.000
1
1
0
Is there support for functional layers api support in tensorflow 2.0?
58,836,772
1.2
python-3.x,tensorflow,tensorflow2.0
Tensorflow 2.0 is more or less made around the keras apis. You can use the tf.keras.Model for creating both sequential as well as functional apis.
I'm working on converting our model from tensorflow 1.8.0 to 2.0, but using sequential APIs is quite difficult for our current model. So is there any support for functional APIs in 2.0, as it is not easy to use sequential APIs?
0
1
35
0
58,842,847
0
0
0
0
1
true
0
2019-11-13T15:15:00.000
0
1
0
How to split parallel corpora while keeping alignment?
58,840,145
1.2
python,pandas,unix,scikit-learn,dataset
I found that I can use the shuf command on the file with the random-source parameter, like this shuf tgt-full.txt -o tgt-fullshuf.txt --random-source=tgt-full.txt.
I have two text files containing parallel text in two languages (potentially millions of lines). I am trying to generate random train/validate/test files from that single file, as train_test_split does in sklearn. However when I try to import it into pandas using read_csv I get errors from many of the lines because o...
0
1
85
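The shuf --random-source trick from the answer above keeps the two files aligned because both are shuffled with the same permutation. The same idea can be sketched in pure Python with two identically seeded generators (the sentence pairs are made up):

```python
import random

src = ["hello", "world", "good", "morning"]
tgt = ["bonjour", "monde", "bon", "matin"]  # aligned translations (toy data)

# Shuffle both lists with the same seed: same permutation, alignment survives
seed = 42
rng_src = random.Random(seed)
rng_tgt = random.Random(seed)
rng_src.shuffle(src)
rng_tgt.shuffle(tgt)

pairs = list(zip(src, tgt))
```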
0
58,847,337
0
0
0
0
1
false
1
2019-11-13T20:18:00.000
1
1
0
CNN model have better accuracy than combined CNN-SVM model
58,844,965
0.197375
python,classification,svm
It depends on a large number of factors, but yes: if the underlying data is images, CNNs have proven to deliver better results.
I was trying to compare the accuracy results of CNN model and combined CNN-SVM model for classification. However I found that CNN model have better accuracy than combined CNN-SVM model. Is That correct or it can happen?
0
1
144
0
62,375,398
0
0
0
0
1
false
2
2019-11-13T20:39:00.000
-1
2
0
Matplotlib and Google Colab: Using ipympl
58,845,278
-0.099668
python,matplotlib,google-colaboratory
Available matplotlib backends: ['tk', 'gtk', 'gtk3', 'wx', 'qt4', 'qt5', 'qt', 'osx', 'nbagg', 'notebook', 'agg', 'inline', 'ipympl', 'widget']
Whenever I try to plot a figure in a Google Colab notebook using matplotlib, a plot is displayed whenever I use %matplotlib inline but is not displayed when I do %matplotlib ipympl or %matplotlib widget. How can I resolve this issue. My goal is to get the plot to be interactive. Clarification: when I run %matplotlib -...
0
1
1,694
0
58,858,660
0
0
0
0
1
false
0
2019-11-13T22:30:00.000
0
2
0
Filter out range of frequencies using band stop filter in Python and confirm it using Fourier Transform FFT
58,846,626
0
python,numpy,scipy,fft
Why magnitude for frequency 50 Hz decreased from 1 to 0.7 after Fast Fourier Transform?
Supposing that I have following signal: y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(100.0 * 2.0*np.pi*x) + 0.2*np.sin(200 * 2.0*np.pi*x) how can I filter out in example 100Hz using Band-stop filter in Python? In this signal there are peaks at 50Hz, 100Hz and 200Hz. It would be helpful it it could be visualized using FF...
0
1
978
0
58,852,959
0
0
0
0
2
false
0
2019-11-14T06:33:00.000
1
3
0
CNN on python with Keras
58,850,711
0.066568
python,tensorflow,keras
What you want to do is called "transfer learning" using the learned weights of a net to solve a new problem. Please note that this is very hard and acts under many constraints i.e. using a CNN that can detect cars to detect trucks is simpler than using a CNN trained to detect people to also detect cats. In any case you...
I made a simple CNN that classifies dogs and cats and I want this CNN to detect images that aren't cats or dogs and this must be a third different class. How to implement this? should I use R-CNN or something else? P.S I use Keras for CNN
0
1
125
0
58,852,922
0
0
0
0
2
false
0
2019-11-14T06:33:00.000
0
3
0
CNN on python with Keras
58,850,711
0
python,tensorflow,keras
You can train that with almost the same architecture (of course it depends on the architecture; if it is already bad then it will not be useful on more classes either. I would suggest using a state-of-the-art model architecture for dog and cat classification), but you will also need the dogs and cats dataset in addit...
I made a simple CNN that classifies dogs and cats and I want this CNN to detect images that aren't cats or dogs and this must be a third different class. How to implement this? should I use R-CNN or something else? P.S I use Keras for CNN
0
1
125
0
58,861,981
0
0
0
1
1
false
0
2019-11-14T11:38:00.000
0
2
0
Pandas not assuming dtypes when using read_sql?
58,855,925
0
python,sql,pandas
So it turns out all the data types in the database are defined as varchar. It seems read_sql reads the schema and assumes data types based off this. What's strange is then I couldn't convert those data types using infer_objects(). The only way to do it was to write to a csv then read than csv using pd.read_csv().
I have a table in sql I'm looking to read into a pandas dataframe. I can read the table in but all column dtypes are being read in as objects. When I write the table to a csv then re-read it back in using read_csv, the correct data types are assumed. Obviously this intermediate step is inefficient and I just want to be...
0
1
651
0
58,863,619
0
0
0
0
1
false
0
2019-11-14T12:42:00.000
0
1
0
Automatically create a new object from GridSearchCV best_params_
58,857,069
0
python,scikit-learn
I figured it out: if you unpack the parameters it is doable, i.e. if best_par = RFC_grid_search.best_params_ then you can create the optimal RFC with the parameters in best_params_ by rfc_opt = RFC(**best_par)
Assume I want to fit a random forest, RFC, and grid search using sklearns GridSearchCV. We can get the best parameters using RFC.best_params_ but if I then want to create a random forest I need manually to write those parameters in e.g RFC(n_estimators=12,max_depth=7) afterwards. Is there a way,something like RFC_opt=R...
0
1
75
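The **best_params_ unpacking from the answer above, sketched end-to-end on synthetic data (the grid values and random_state are arbitrary choices for the example):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic classification data
X, y = make_classification(n_samples=100, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [5, 10], "max_depth": [3, 7]},
    cv=3,
)
grid.fit(X, y)

# Unpack the winning parameters straight into a fresh estimator
best_par = grid.best_params_
rfc_opt = RandomForestClassifier(**best_par, random_state=0)
```

Note that with the default refit=True, grid.best_estimator_ already holds a refitted copy, so the unpacking step is only needed when you want a fresh, unfitted estimator.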
0
58,858,073
0
0
0
0
1
false
0
2019-11-14T13:27:00.000
0
2
0
Softmax or sigmoid for multiclass problem
58,857,899
0
python,image-processing,deep-learning,data-science
In general cases, if you are dealing with multi-class classification problems, you should use a Softmax because you are guaranteed that the sum of probabilities of all classes will sum to 1, by weighting them individually and computing the joint distribution, whereas with a Sigmoid, you'd be predicting the probability of each...
I am using the VGG16 model and fine-tuned it on my data. I am predicting the ethnicity of images (faces). I have 5 output classes: white, black, Asian, sub-continent and others. Should I use softmax or sigmoid? And why?
0
1
403
0
58,877,449
0
0
0
0
1
false
1
2019-11-14T16:37:00.000
0
1
0
cut out part of a point cloud
58,861,669
0
python-3.x,list-comprehension,point-clouds
Instead of a list comprehension, I used numpy and came up with this statement: inPathPoints = pc[(pc[:, 0] > -0.5) & (pc[:, 0] < 0.5) & (pc[:, 2] > 0.2) & (pc[:, 2] < 2)] This is so fast it does not even show up in the profile output.
I have an intel D415 depth camera and want to identify obstacles in the path of my robot. I want to reduce the points from the cam pc=(102720,3) to only a rectangular area where the robot has to pass through I came up with this list comprehension, p[0] is the x-axis, p[2] the distance and the values are in meters, the ...
0
1
34
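The numpy statement from the answer above, on a hypothetical four-point cloud so the mask is easy to check by eye (columns are x, y and distance, in metres, following the question):

```python
import numpy as np

# Toy point cloud: columns are (x, y, distance)
pc = np.array([
    [0.0, 0.1, 1.0],   # inside the corridor
    [0.9, 0.0, 1.5],   # x too far right
    [-0.2, 0.3, 3.0],  # too far away
    [0.4, -0.1, 0.5],  # inside
])

# Keep points with -0.5 < x < 0.5 and 0.2 < distance < 2
in_path = pc[(pc[:, 0] > -0.5) & (pc[:, 0] < 0.5)
             & (pc[:, 2] > 0.2) & (pc[:, 2] < 2)]
```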
0
58,885,009
0
0
0
0
1
true
1
2019-11-14T21:51:00.000
2
2
0
Keras - RTX 2080 ti training slower than both CPU-only and GTX 1070?
58,867,071
1.2
python,tensorflow,keras
I figured it out! Thanks to the suggestion of a friend who got a 2060 recently, he noted that the default power mode is maximum power savings in the Nvidia Control Panel, or P8 power mode according to nvidia-smi (which is half clock speeds). After setting to prefer maximum performance in 3D settings, training times hav...
I just got my 2080 ti today and hooked it right up to experiment with Keras on my models. But for some reason, when I train on a dense model the 2080 ti is 2 times slower than my CPU (an i7 4790k) and definitely slower than my old GTX 1070 (don't have exact numbers to compare it to). To train one epoch on my CPU it tak...
0
1
884
0
59,451,889
0
1
0
0
1
false
0
2019-11-14T22:38:00.000
0
1
0
smop has issues when translating a >= statment in MATLAB
58,867,582
0
python,matlab
You can simply edit lexer.py for greater-or-equal by putting "\" in front of "<" or ">". Example: from "<=" to "\<=" or "<\=", both of them work the same in your converted python code.
I'm using SMOP version 0.41 to translate my MATLAB code to Anaconda Python; however, whenever there is a greater-than-or-equal statement, for example if numFFT >= 2, I get the following error: SyntaxError: Unexpected "=" (parser). Has anyone experienced this?
0
1
246
0
58,879,617
0
0
0
1
1
true
0
2019-11-15T01:16:00.000
0
1
0
Is there a way to set columns to null within dask read_sql_table?
58,868,931
1.2
python,pandas,dask
If possible, I recommend setting this up on the oracle side, making a view with the correct data types, and using read_sql_table with that. You might be able to do it directly, since read_sql_table accepts sqlalchemy expressions. If you can phrase it as such, it ought to work.
I'm connecting to an oracle database and trying to bring across a table with roughly 77 million rows. At first I tried using chunksize in pandas but I always got a memory error no matter what chunksize I set. I then tried using Dask since I know its better for large amounts of data. However, there're some columns that ...
0
1
83
0
58,869,681
0
0
0
0
1
true
1
2019-11-15T02:38:00.000
3
2
0
Can we read the data in a pickle file with python generators
58,869,572
1.2
python,python-3.x,generator,pickle
Nope. The pickle file format isn't like JSON or something else where you can just read part of it and decode it incrementally. A pickle file is a list of instructions for building a Python object, and just like following half the instructions to bake a cake won't bake half a cake, reading half a pickle won't give you h...
I have a large pickle file and I want to load the data from pickle file to train a deep learning model. Is there any way if I can use a generator to load the data for each key? The data is in the form of a dictionary in the pickle file. I am using pickle.load(filename), but I am afraid that it will occupy too much RAM ...
0
1
513
0
58,874,622
0
0
0
0
1
true
0
2019-11-15T03:34:00.000
1
1
0
Is it possible to train the sentiment classification model with the labeled data and then use it to predict sentiment on data that is not labeled?
58,869,955
1.2
nltk,python-3.7,sentiment-analysis,text-classification,training-data
Is it possible to train the sentiment classification model with the labeled data and then use it to predict sentiment on data that is not labeled? Yes. This is basically the definition of what supervised learning is. I.e. you train on data that has labels, so that you can then put it into production on categorizing yo...
I want to do sentiment analysis using machine learning (text classification) approach. For example nltk Naive Bayes Classifier. But the issue is that a small amount of my data is labeled. (For example, 100 articles are labeled positive or negative) and 500 articles are not labeled. I was thinking that I train the class...
0
1
299
0
58,950,331
0
0
0
0
1
true
0
2019-11-15T03:50:00.000
1
1
0
Which algorithm in Deep learning can verfity the relationship of column into a matrix
58,870,076
1.2
python,algorithm,machine-learning,deep-learning
You can simply use a DNN; however, the results are not that good compared with classical ML, and there is no way around that.
I'm reading these days about deep learning, its utilization and the methods we can use. I had a general question regarding image verification, or let's say a simple matrix. Suppose I have a matrix of size X = (4,4) and a vector of size Y = (1,4). I multiplied the vector Y by only one column from X, let's ...
0
1
30
0
58,872,562
0
1
0
0
1
false
0
2019-11-15T07:47:00.000
2
3
0
How do you read files on desktop with jupyter notebook?
58,872,437
0.132549
python,python-3.x,pandas
This is an obvious path problem: your notebook was not started in the desktop directory, so you must give the absolute path to the desktop file, or the path relative to the Jupyter startup directory.
I launched Jupyter Notebook, created a new notebook in python, imported the necessary libraries and tried to access a .xlsx file on the desktop with this code: haber = pd.read_csv('filename.xlsx') but error keeps popping up. Want a reliable way of accessing this file on my desktop without incurring any error response
0
1
3,030
0
58,882,586
0
1
0
0
1
false
4
2019-11-15T12:03:00.000
2
2
0
What is the time complexity of .at and .loc in pandas?
58,876,676
0.197375
python,pandas,performance,data-structures,time-complexity
Alright so it would appear that: 1) You can build your own index on a dataframe with .set_index in O(n) time where n is the number of rows in the dataframe 2) The index is lazily initialized and built (in O(n) time) the first time you try to access a row using that index. So accessing a row for the first time using tha...
I'm looking for the time complexity of these methods as a function of the number of rows in a dataframe, n. Another way of asking this question is: Are indexes for dataframes in pandas btrees (with log(n) time look ups) or hash tables (with constant time lookups)? Asking this question because I'd like a way to do cons...
0
1
2,034
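A minimal sketch of the pattern the answer above describes: pay O(n) once with set_index, then look rows up through the (lazily built, hash-backed) index with .loc. The frame is made up:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "c"], "val": [1, 2, 3]})

# Build the index once (O(n)); the first .loc access triggers the hash-table
# construction, and subsequent lookups on 'key' are then roughly constant time
indexed = df.set_index("key")
row_val = indexed.loc["b", "val"]
```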
0
59,198,937
0
0
0
0
2
false
0
2019-11-17T01:08:00.000
0
2
0
MNIST dataset with Sklearn
58,896,645
0
python,mnist,sklearn-pandas
If you only need to recognize 4s it's a binary classification problem, so you just need to create a new target variable: Y=1 if class is 4, Y=0 if class is not 4. Train_X will be unchanged Train_Y will be your new target variable related to Train_X Test_X will be unchanged Test_Y will be your new target variable rel...
I’m training linear model on MNIST dataset, but I wanted to train only on one digit that is 4. How do I choose my X_test,X_train, y_test, y_train?
0
1
157
0
58,896,802
0
0
0
0
2
false
0
2019-11-17T01:08:00.000
0
2
0
MNIST dataset with Sklearn
58,896,645
0
python,mnist,sklearn-pandas
Your classifier needs to learn to discriminate between sets of different classes. If you only care about digit 4, you should split your training and testing set into: Class 4 instances Not class 4 instances: union of all other digits Otherwise the train/test split is still the typical one, where you want to have no o...
I’m training linear model on MNIST dataset, but I wanted to train only on one digit that is 4. How do I choose my X_test,X_train, y_test, y_train?
0
1
157
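Both answers above boil down to building a binary target before splitting. A sketch with synthetic stand-in data (real MNIST image arrays and labels would plug in the same way):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for MNIST: flattened "images" and digit labels 0-9
rng = np.random.default_rng(0)
X = rng.random((100, 64))
y = rng.integers(0, 10, size=100)

# Binary target: 1 for digit 4, 0 for everything else
y_bin = (y == 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y_bin, test_size=0.25, random_state=0
)
```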
0
58,917,762
0
0
0
0
1
false
0
2019-11-17T03:41:00.000
0
1
0
No bouding boxes create when running my trained model
58,897,297
0
python,python-3.x,tensorflow,object-detection,object-detection-api
Did you update the path of your model in the object_detection_tutorial.ipynb file? "tf.saved_model.load" is the API where you have to give the path of your trained model.
I have tried to train my model using ssd_mobilenet_v1_coco_11_06_2017, ssd_mobilenet_v1_coco_2018_01_28 and faster_rcnn_inception_v2_coco_2018_01_28, and did so successfully. However, when I tried to run object_detection_tutorial.ipynb and test my test_images, all I get is images without bounding boxes. I trained my model ...
0
1
46
0
72,333,300
0
0
0
0
1
false
0
2019-11-17T19:35:00.000
0
2
0
How do I correctly download a map from osmnx as .svg?
58,904,416
0
python,svg,jupyter-notebook,openstreetmap,osmnx
Instead of filename='image', file_format='svg' Use: filepath='image.svg'
I am new to Python. Just working with OSMnx and wanted to open a map as an svg in Illustrator. This was posted in the GitHub documentation: # you can also plot/save figures as SVGs to work with in Illustrator later fig, ax = ox.plot_graph(G_projected, save=True, file_format='svg') I tried it in JupyterLab and it downl...
0
1
548
0
58,910,192
0
0
0
0
1
false
5
2019-11-18T07:50:00.000
7
2
0
Sorting performance comparison between numpy array, python list, and Fortran
58,910,042
1
python,performance,numpy
You seem to be misunderstanding what NumPy does to speed up computations. The speedup you get in NumPy does not come from NumPy using some smart way of saving data. Or compiling your Python code to C automatically. Instead, NumPy implements many useful algorithms in C or Fortran, numpy.sort() being one of them. These f...
I have been using Fortran for my computational physics related work for a long time, and recently started learning and playing around with Python. I am aware of the fact that being an interpreted language Python is generally slower than Fortran for primarily CPU-intensive computational work. But then I thought using nu...
0
1
3,448
0
58,939,946
0
0
1
0
1
false
0
2019-11-19T12:37:00.000
1
2
0
fast light and accurate person-detection algorithm to run on raspberry pi
58,934,308
0.099668
python,computer-vision,raspberry-pi3,object-detection,robotics
The Raspberry Pi does not have the computational capacity to perform object detection alongside RealSense driver support; check the processor load once you start the RealSense application. One of the simplest models for person detection is OpenCV's HOGDescriptor, which you have used.
Hope you are doing well. I am trying to build a following robot which follows a person. I have a raspberry pi and and a calibrated stereo camera setup.Using the camera setup,i can find depth value of any pixel with respect to the reference frame of the camera. My plan is to use feed from the camera to detect person an...
0
1
527
0
58,935,738
0
0
0
0
1
true
0
2019-11-19T12:48:00.000
1
1
0
Share python modules across computers in a dask.distributed cluster
58,934,528
1.2
python,dask
Such things are possible with networking solutions such as NFS or SSH remote mounts, but that's a pretty big field and beyond the scope of Dask itself. If you are lucky, other answers will appear here, others have solved similar problems, but more likely copying is the simpler solution.
I have a ssh dask.distributed cluster with a main computer containing all modules for my script and another one with only a few, including dask itself of course. Is it possible to change the syspath of the other computer so that it also looks for modules in the main one? Of course, I could simply upload them via sftp ...
0
1
43
0
58,937,944
0
0
0
0
1
false
0
2019-11-19T15:26:00.000
1
2
0
How to find the intersecting area of two sub images using OpenCv?
58,937,483
0.099668
python,opencv
MatchTemplate returns the most probable position of a template inside a picture. You could do the following steps: Find the (x,y) origin, width and height of each picture inside the larger one Save them as rectangles with that data(cv::Rect r1, cv::Rect r2) Using the & operator, find the overlap area between both rect...
Let's say there are two sub images of a large image. I am trying to detect the overlapping area of two sub images. I know that template matching can help to find the templates. But i'm not sure how to find the intersected area and remove them in either one of the sub images. Please help me out.
0
1
420
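The rectangle-overlap step from the answer above (what OpenCV's cv::Rect & operator does) can be sketched in plain Python; rectangles are (x, y, w, h) tuples and the coordinates are made up:

```python
def intersect(r1, r2):
    """Intersection of two (x, y, w, h) rectangles, or None if they don't overlap."""
    x = max(r1[0], r2[0])
    y = max(r1[1], r2[1])
    w = min(r1[0] + r1[2], r2[0] + r2[2]) - x
    h = min(r1[1] + r1[3], r2[1] + r2[3]) - y
    if w <= 0 or h <= 0:
        return None  # no overlap
    return (x, y, w, h)

# The origins would come from cv2.matchTemplate / cv2.minMaxLoc in practice
overlap = intersect((0, 0, 100, 80), (60, 40, 100, 80))
```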
0
58,942,213
0
0
0
0
1
false
1
2019-11-19T19:57:00.000
2
2
0
How to scale numpy matrix in Python?
58,941,935
0.197375
python,numpy,machine-learning,scaling,numpy-ndarray
Subtract each column's minimum from itself; for each column of the result, divide by its maximum; for column 0 of that result, multiply by (11 - 1.5); for column 1 of that result, multiply by (5 - (-0.5)); add 1.5 to column zero of that result; add -0.5 to column one of that result. You could probably combine some of those steps.
I have this numpy matrix: x = np.random.randn(700,2) What I wanna do is scale the values of the first column between that range: 1.5 and 11 and the values of the second column between `-0.5 and 5.0. Does anyone have an idea how I could achieve this? Thanks in advance
0
1
4,738
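The column-wise min-max steps from the answer above, combined into two vectorized lines (the target ranges are the ones from the question):

```python
import numpy as np

x = np.random.randn(700, 2)

lows = np.array([1.5, -0.5])   # target minimum per column
highs = np.array([11.0, 5.0])  # target maximum per column

# Min-max scale each column to [0, 1], then stretch to [low, high]
unit = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
scaled = unit * (highs - lows) + lows
```

This assumes each column has at least two distinct values; a constant column would divide by zero.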
0
58,959,742
0
0
0
0
1
false
0
2019-11-20T16:53:00.000
0
2
0
Is there any Python code for Convolutional Neural Network, but without Tensorflow/Theano/Scikit etc?
58,959,547
0
python,tensorflow,deep-learning,conv-neural-network
A lot of Deep Learning courses will ask the student to implement a CNN in Python with just numpy, then teach them to achieve the same result with Tensorflow etc. You can just search on Github for "Deep-Learning-Coursera" and you will probably find something like this https://github.com/enggen/Deep-Learning-Coursera/blo...
I hope there will be some code where the Convolutional Neural Network will be implemented without Tensorflow OR theano OR Scikit etc. I searched over the google, but google is so crazy some time :), if i write "CNN without Tensorflow" it just grab the tesorflow part and show me all the results with tesorflow :( and if ...
0
1
1,274
0
58,965,640
0
0
0
0
1
true
13
2019-11-20T19:15:00.000
16
1
0
set `torch.backends.cudnn.benchmark = True` or not?
58,961,768
1.2
python,pytorch
If your model does not change and your input sizes remain the same - then you may benefit from setting torch.backends.cudnn.benchmark = True. However, if your model changes: for instance, if you have layers that are only "activated" when certain conditions are met, or you have layers inside a loop that can be iterated ...
I am using pytorch and I wonder if I should use torch.backends.cudnn.benchmark = True. I find on google that I should use it when computation graph does not change. What is computation graph in pytorch?
0
1
9,045
0
58,965,054
0
0
0
0
2
true
2
2019-11-20T23:26:00.000
2
5
0
How to check machine learning accuracy without cross validation
58,964,954
1.2
python,machine-learning,scikit-learn,neural-network,random-forest
Splitting your data is critical for evaluation. There is no way that you could train your model on 100% of the data and be able to get a correct evaluation accuracy unless you expand your dataset. I mean, you could change your train/test split, or try to optimize your model in other ways, but I guess the simple answer...
I have training sample X_train, and Y_train to train and X_estimated. I got task to make my classificator learn as accurate as it can, and then predict vector of results over X_estimated to get close results to Y_estimated (which i have now, and I have to be as much precise as it can). If I split my training data to li...
0
1
2,055
0
58,975,977
0
0
0
0
2
false
2
2019-11-20T23:26:00.000
0
5
0
How to check machine learning accuracy without cross validation
58,964,954
0
python,machine-learning,scikit-learn,neural-network,random-forest
It is not necessary to do 75|25 split of your data all the time. 75 |25 is kind of old school now. It greatly depends on the amount of data that you have. For example, if you have 1 billion sentences for training a language model, it is not necessary to reserve 25% for testing. Also, I second the previous answer of tr...
I have training sample X_train, and Y_train to train and X_estimated. I got task to make my classificator learn as accurate as it can, and then predict vector of results over X_estimated to get close results to Y_estimated (which i have now, and I have to be as much precise as it can). If I split my training data to li...
0
1
2,055
0
58,966,156
0
0
0
0
1
false
2
2019-11-21T01:53:00.000
1
2
0
Can we create an ensemble of deep learning models without increasing the classification time?
58,966,086
0.099668
python,deep-learning,ensemble-learning
There is no magic pill for doing what you want. Extra computation cannot come free. So one way this can be achieved is by using multiple worker machines to run inference in parallel. Each model could run on a different machine using tensorflow serving. For every new inference do the following: Have a primary machi...
I want to improve my ResNet model by creating an ensemble of X copies of this model, taking the X best ones I have trained. From what I've seen, a technique like bagging will take X times longer to classify an image, which is really not an option in my case. Is there a way to create an ensemble without increasing the req...
0
1
293
0
58,969,593
0
0
0
0
1
false
1
2019-11-21T02:08:00.000
1
1
0
How to add an additional binary variable with CPLEX and Python?
58,966,188
0.197375
python,mathematical-optimization,linear-programming,cplex
You did not say whether you use the CPLEX Python API or docplex. But in either case, you can call the functions that create variables multiple times. So in the CPLEX Python API call Cplex.variables.add() again to add another set of variables. In docplex just call Model.binary_var_dict() (or whatever method you used to ...
I have an integer programming problem with a decision variable X_i_j_k_t that is 1 if job i was assigned to worker j for day k and shift t. I am maximizing the benefit of assigning orders to my workers. I have an additional binary variable Y_i_k_t that is 1 if the job was executed and a given day and shift (jobs might ...
0
1
1,334
0
58,976,553
0
0
0
0
1
false
1
2019-11-21T13:21:00.000
1
1
0
Neural Networks - Checking for node activation
58,976,062
0.197375
python,neural-network,artificial-intelligence
Somehow techytushar's comment nudged my brain into a new line of reasoning, which I think has been very helpful: So the problem I'm addressing is: 'There can be no dormant code.' Be that lines of C or array elements that are never, and can never, be accessed. So when the trained NN runs as a compiled C application, the...
I'm involved in a research project that is looking at using Neural Networks in a safety critical environment. Part of the regulatory framework this research is targeted towards states that there must be no dormant code within the system. There must be a pathway through every part of the system and that pathway must be ...
0
1
108
0
66,943,729
0
0
0
0
1
false
2
2019-11-21T23:17:00.000
1
3
0
opencv imwrite, image get rotated
58,985,183
0.066568
python,image,opencv,png,jpeg
One possible solution is to change cv2.IMREAD_UNCHANGED to cv2.IMREAD_COLOR while loading the image with imdecode. For some reason "unchanged" is not able to read EXIF metadata correctly.
I am using opencv in python (cv2) to do some processing on images with format jpg, png and jpeg, JPG. I am doing a test that write image to disk using "cv2.imwrite" right after reading from "cv2.imread". I found part of the image get rotated, some of them rotate 90d, some rotate 180d. But most image keep the right orie...
0
1
2,287
0
58,993,348
0
1
0
0
1
true
1
2019-11-22T11:22:00.000
2
1
0
Convert a str(numpy array) representaion to a numpy array - Python
58,993,231
1.2
python,numpy
Try numpy.array([int(v) for v in your_str[1:-1].split()])
Let's say I have a numpy array a = numpy.array([1,2,3,4]). Now str(a) will give me "[1 2 3 4]". How do I convert the string "[1 2 3 4]" back to a numpy.array([1,2,3,4])?
0
1
77
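The suggestion from the answer above, written out with a round-trip check (this covers the 1-D integer case in the question; floats or nested arrays would need more care):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
s = str(a)  # "[1 2 3 4]"

# Strip the brackets, split on whitespace, convert each token back to int
restored = np.array([int(v) for v in s[1:-1].split()])
```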
0
58,994,326
0
0
0
1
1
false
1
2019-11-22T12:25:00.000
1
2
0
How to integrate spreadsheet/excel kind of view to my application using python?
58,994,269
0.099668
python
I think PyQt is the best option for large applications, but tkinter is the secondary option for fast, small apps.
I am trying to create an application using python, In which I would like to able to read a .csv or .xlsx file and display its contents on my application, I believe there should be some packages which helps to do this in python, can I have some suggestions? Regards, Ram
0
1
36
0
59,466,483
0
0
0
0
1
false
0
2019-11-22T17:36:00.000
0
1
0
False positive number Bloom filter
58,999,182
0
python,dataframe,hash,bloom-filter
Yes, and it is very simple. Count the number of bits that are 'on' and divide that by the total number of bits. This will give you your fill-rate. When querying, all elements that were inserted earlier will hit 'on' bits and return positive. For elements which were not inserted into the filter, the probability of hitt...
I implemented a bloom filter with 3 hash functions, and now I should calculate the exact number of false positives (not possibility) in that filter. Is there an efficient way to calculate that? The number of items in the filter is 200 million and the size of bit array is 400 million
0
1
308
0
59,011,075
0
0
0
0
1
true
1
2019-11-23T09:38:00.000
1
2
0
Will pandas.read_excel preserve column order?
59,006,318
1.2
python,python-3.x,pandas
pandas will return to you the column order exactly as in the original file. If the order changes in the file, the order of columns in the dataframe will change too. You can define the column order yourself when reading in the data. Sometimes you'd also load the data, check what columns are present (with dataframe.colu...
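Since the columns can move around from day to day, one defensive pattern is to read the file and then reindex to a canonical column order. A sketch using read_csv for brevity (read_excel preserves file order the same way; the column names here are made up):

```python
import io
import pandas as pd

# Two days' files with the same columns in different positions
day1 = io.StringIO("a,b,c\n1,2,3\n")
day2 = io.StringIO("c,a,b\n3,1,2\n")

df1 = pd.read_csv(day1)
df2 = pd.read_csv(day2)

# pandas keeps each file's own column order...
print(list(df1.columns))  # ['a', 'b', 'c']
print(list(df2.columns))  # ['c', 'a', 'b']

# ...so normalize to a fixed order before further processing
wanted = ["a", "b", "c"]
df2 = df2.reindex(columns=wanted)
print(list(df2.columns))  # ['a', 'b', 'c']
```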
I need to read a sheet in excel file. But the number of columns(approx 100 to 150), column names and column position may change everyday in the sheet. Will pandas.read_excel return a dataframe with columns in the same order as they are in my daily excel sheet ? I'm using pandas 0.25.3
0
1
1,802
0
59,016,457
0
0
0
0
1
false
1
2019-11-24T09:49:00.000
0
1
0
Issues installing sklearn_pandas package
59,016,428
0
python
Your pip isn't recognized, which is why it constantly shows this message while executing: 'pip' is not recognized as an internal or external command, operable program or batch file. If your Python version is in the 3.x.x format, use pip3, not pip. The usage of pip3 is exactly the same as pip.
I have been trying to install the sklearn_pandas package. I tried two methods which I found online: 1) By running 'pip install sklearn-pandas' in the Windows command line in the same location as my Python working directory: This resulted in the error ''pip' is not recognized as an internal or external command, operable...
0
1
225
0
59,034,403
0
1
0
0
1
false
0
2019-11-25T07:23:00.000
0
1
0
Auto encoder and decoder on numerical data-set
59,026,939
0
python,neural-network,deep-learning,cryptography,autoencoder
Sure, you can apply classical autoencoders to numerical data. In its simplest form it is just a matrix multiplication to a lower-dimensional space, then a matrix multiplication back to the original space, and an L2 loss based on the reconstruction and the input. You could start from there and add layers if your performance is...
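A minimal sketch of that "matrix down, matrix up, L2 loss" idea in plain NumPy (untrained random weights, purely illustrative; all shapes and names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))      # 100 numerical samples, 20 features

d_in, d_hidden = 20, 5              # compress 20 -> 5 dimensions
W_enc = rng.normal(size=(d_in, d_hidden)) * 0.1
W_dec = rng.normal(size=(d_hidden, d_in)) * 0.1

Z = X @ W_enc                        # encoder: project to lower dim
X_hat = Z @ W_dec                    # decoder: project back up
loss = np.mean((X - X_hat) ** 2)     # L2 reconstruction loss

print(Z.shape, X_hat.shape)          # (100, 5) (100, 20)
```

In practice you would learn W_enc and W_dec by minimizing the loss with gradient descent (e.g. in PyTorch or Keras), and add nonlinearities/layers as needed.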
I am working on cryptography. The data set I am using is numerical. To resolve the dimensionality reduction issue, I want to apply an autoencoder and decoder neural network. Is it possible to apply an autoencoder on a numerical data set, and if yes, then how?
0
1
226
0
59,027,703
0
1
0
0
1
false
0
2019-11-25T08:18:00.000
0
2
0
Which Python version should I download to run tensorflow GPU
59,027,575
0
python,tensorflow,gpu
You can use either, but it's better to use Python 3 with pip 19.0. As for the CUDA version, you need to check which CUDA version can run on your GPU; the cuDNN version is then decided by your CUDA version.
I am trying to set up CUDA TensorFlow on my Windows 10. I would like to know the newest version of Python that works without bugs with CUDA TensorFlow, and with which TensorFlow version. Also, what version of cuDNN should I use? Thanks a lot.
0
1
357
0
59,032,616
0
1
0
0
1
false
0
2019-11-25T12:48:00.000
0
1
0
Finding an element of pandas series at a certain time (date)
59,032,213
0
python-3.x,pandas,numpy,time-series
If your dataframe is indexed by date, you can: df[date] to access all the rows indexed by such date (e.g. df['2019-01-01']); df[date1:date2] to access all the rows with date index between date1 and date2 (e.g. df['2019-01-01': '2019-11-25']); df[:date] to access all the rows with index before date value (e.g. df[:'20...
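A small sketch of both access patterns on a date-indexed Series (the data is made up):

```python
import pandas as pd

idx = pd.date_range("2019-01-01", periods=5, freq="D")
s = pd.Series([10, 20, 30, 40, 50], index=idx)

# Access a single element by date string or by an explicit Timestamp
print(s.loc["2019-01-03"])                # 30
print(s.loc[pd.Timestamp("2019-01-03")])  # 30

# Slice a date range (inclusive on both ends for a sorted DatetimeIndex)
print(s.loc["2019-01-02":"2019-01-04"].tolist())  # [20, 30, 40]
```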
I have some pandas series with the type "pandas.core.series.Series". I know that I can see its datetimeindex when I add ".index" to the end of it. But what if I want to get the element of the series at this time? And what if I have a "pandas._libs.tslibs.timestamps.Timestamp" and want to get the element of the series...
0
1
135
0
66,849,256
0
0
0
0
1
false
2
2019-11-25T21:18:00.000
0
2
0
MultiLabel Soft Margin Loss in PyTorch
59,040,237
0
python,pytorch,loss-function,softmax
In pytorch 1.8.1, I think the right way is to fill the front part of the target with labels and pad the rest of the target with -1. It is the same as MultiLabelMarginLoss, and I got that from the example for MultiLabelMarginLoss.
I want to implement a classifier which can have 1 of 10 possible classes. I am trying to use the MultiClass Softmax Loss Function to do this. Going through the documentation I'm not clear with what input is required for the function. The documentation says it needs two matrices of [N, C] of which one is input and the ...
0
1
3,994
0
59,041,168
0
1
0
0
1
false
0
2019-11-25T22:22:00.000
1
2
0
Deploy function in AWS Lamda (package size exceeds)
59,040,958
0.099668
python,amazon-web-services,numpy,tensorflow,aws-lambda
When the zip file size is bigger than 49 MB, you can upload the zip file to Amazon S3 and use it to update the function code: aws lambda update-function-code --function-name calculateMath --region us-east-1 --s3-bucket calculate-math-bucket --s3-key 100MBFile.zip
I am trying to deploy my function on AWS Lambda. I need the following packages for my code to function: keras-tensorflow Pillow scipy numpy pandas I tried installing using docker and uploading the zip file, but it exceeds the file size. Is there a get around for this? How to use these packages for my Lambda function?
0
1
273
0
59,877,757
0
0
0
0
1
false
1
2019-11-26T01:56:00.000
0
1
0
Tensorflow-GPU getting stuck saving checkpoint during training - also not using entire GPU, not sure why
59,042,621
0
python,tensorflow
Had the same issue until I upgraded the Nvidia driver from version 441.28 to the newest version. After this, the training runs without stops or freezes.
GPU: Nvidia GTX 2070 Python Version: 3.5 Tensorflow: 1.13.1 CUDA: 10 cuDNN: 7.4 Model: Faster-RCNN-Inception-V2 I am using the legacy method of training my model (trian.py) and when I run it as such python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config...
0
1
778
0
59,050,279
0
0
0
1
1
false
1
2019-11-26T11:34:00.000
0
2
0
Fill an existing Excel file with data from a Pandas DataFrame
59,050,052
0
python,python-3.x,pandas,dataframe
If the number and order of your columns are the same, you may try xlsxwriter, also specifying the sheet name you want to refresh: df.to_excel('filename.xlsx', engine='xlsxwriter', sheet_name='sheetname', index=False)
I have a Pandas DataFrame with a bunch of rows and labeled columns. I also have an excel file which I prepared with one sheet which contains no data but only labeled columns in row 1 and each column is formatted as it should be: for example if I expect percentages in one column then that column will automatically conve...
0
1
1,888
0
59,053,007
0
0
0
0
2
false
0
2019-11-26T14:11:00.000
0
2
0
how to find cosine similarity in a pre-computed matrix with a new vector?
59,052,818
0
python,pandas,machine-learning,scikit-learn,computer-vision
The initial (5000,5000) matrix encodes the pairwise similarity values of all your 5000 items (i.e. a symmetric matrix). To get the similarities for a new item, concatenate to make a (5001, 2048) matrix and then estimate similarity again to get (5001,5001). In other words, you cannot directly use the (5000,5000) ...
I have a dataframe with 5000 items (rows) and 2048 features (columns). The shape of my dataframe is (5000, 2048). When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000,5000) matrix, where I can compare each item with every other. But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of t...
0
1
241
0
59,053,865
0
0
0
0
2
false
0
2019-11-26T14:11:00.000
0
2
0
how to find cosine similarity in a pre-computed matrix with a new vector?
59,052,818
0
python,pandas,machine-learning,scikit-learn,computer-vision
Since cosine similarity is symmetric, you can compute the similarity measure between the new sample (1,2048) and the old matrix (5000,2048); this will give you a vector of (5000,1). You can append this vector into the column dimension of the pre-computed cosine matrix, making it (...
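That (5000,1) column can be computed directly with NumPy, without recomputing the full similarity matrix. A sketch with smaller, made-up shapes standing in for (5000,2048) and (1,2048):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(5, 8))      # stand-in for the (5000, 2048) feature matrix
v = rng.normal(size=(1, 8))      # the new (1, 2048) item

# Cosine similarity of v against every row of X -> shape (5, 1)
num = X @ v.T
den = np.linalg.norm(X, axis=1, keepdims=True) * np.linalg.norm(v)
sims = num / den
print(sims.shape)  # (5, 1)
```

With scikit-learn, sklearn.metrics.pairwise.cosine_similarity(X, v) gives the same column.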
I have a dataframe with 5000 items (rows) and 2048 features (columns). The shape of my dataframe is (5000, 2048). When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000,5000) matrix, where I can compare each item with every other. But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of t...
0
1
241
0
59,063,566
0
0
0
0
1
false
1
2019-11-27T04:57:00.000
0
1
0
How should I group these elements such that overall variance is minimized?
59,063,240
0
python,r,algorithm,optimization,minimization
I would sort the numbers into increasing order and then use dynamic programming to work out where to place the boundaries between groups of contiguous elements. For example, if the only constraint is that every number must be in exactly one group, work from left to right. At each stage, for i=1..n work out the set of b...
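A compact sketch of that dynamic program (sort first, then choose boundaries between contiguous runs; here the cost of a group is taken to be its variance, and the O(k·n²) loop with an O(n) cost call is kept simple rather than fast):

```python
import numpy as np

def best_grouping(values, n_groups):
    """Split sorted values into n_groups contiguous groups, minimizing
    the sum of group variances. Returns (min_cost, list_of_groups)."""
    xs = sorted(values)
    n = len(xs)

    def cost(i, j):
        # Variance of the contiguous slice xs[i:j]
        return float(np.var(xs[i:j]))

    # dp[k][i] = best cost of splitting the first i elements into k groups
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(n_groups + 1)]
    cut = [[0] * (n + 1) for _ in range(n_groups + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_groups + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):
                c = dp[k - 1][j] + cost(j, i)
                if c < dp[k][i]:
                    dp[k][i] = c
                    cut[k][i] = j

    # Walk the stored cut points backwards to recover the groups
    groups, i = [], n
    for k in range(n_groups, 0, -1):
        j = cut[k][i]
        groups.append(xs[j:i])
        i = j
    return dp[n_groups][n], groups[::-1]

cost_, groups_ = best_grouping([1, 2, 9, 10, 11, 50], 3)
print(groups_)  # [[1, 2], [9, 10, 11], [50]]
```

The variance cost can be made O(1) per segment with prefix sums of x and x², bringing the whole DP to O(k·n²).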
I have a set of elements, which is for example x = [250,255,273,180,400,309,257,368,349,248,401,178,149,189,46,277,293,149,298,223]. I want to group these into n groups A,B,C... such that the sum of all group variances is minimized. Each group need not have the same number of elements. I would like an optimization app...
0
1
343
0
59,095,464
0
0
0
0
1
false
4
2019-11-27T20:57:00.000
1
2
0
Compare Number of Equal Elements in Tensors
59,078,318
0.099668
python,pytorch,equality,tensor
Something like equal_count = len((tensor_1.flatten() == tensor_2.flatten()).nonzero().flatten()) should work.
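The same count can be obtained more directly by summing the boolean comparison. Shown here with NumPy (the PyTorch version is analogous, e.g. (t1 == t2).sum().item()):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([1, 0, 3, 0, 5])

# Elementwise True/False, then count the True entries
equal_count = int((a == b).sum())
print(equal_count)  # 3
```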
I have two tensors of dimension 1000 * 1. I want to check how many of the 1000 elements are equal in the two tensors. I think I should be able to do this in one line like Numpy but couldn't find a similar function.
0
1
4,942