| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 58,178,725 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-10-01T05:24:00.000 | 0 | 3 | 0 | can't install pandas (ERROR: Cannot uninstall 'numpy') | 58,178,508 | 0 | python,pandas,macos | I found an alternative method to install pandas: installing Miniconda and running conda install pandas. | I'm on macOS 10.15 Beta, running a .py script that requires pandas, which is not installed.
When I run sudo python -m pip install --upgrade pandas I receive:
ERROR: Cannot uninstall 'numpy'. It is a distutils installed project
and thus we cannot accurately determine which files belong to it which
would lead to onl... | 0 | 1 | 1,097 |
0 | 59,296,658 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-10-01T05:24:00.000 | 0 | 3 | 0 | can't install pandas (ERROR: Cannot uninstall 'numpy') | 58,178,508 | 1.2 | python,pandas,macos | I tried the solutions presented in the other answers, but what worked for me was installing pandas using conda. | I'm on macOS 10.15 Beta, running a .py script that requires pandas, which is not installed.
When I run sudo python -m pip install --upgrade pandas I receive:
ERROR: Cannot uninstall 'numpy'. It is a distutils installed project
and thus we cannot accurately determine which files belong to it which
would lead to onl... | 0 | 1 | 1,097 |
0 | 58,179,988 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-01T07:26:00.000 | 2 | 4 | 0 | String problem / Select all values > 8000 in pandas dataframe | 58,179,925 | 0.099668 | python,pandas | You can see value of df.dtypes to see what is the type of each column. Then, if the column type is not as you want to, you can change it by df['GM'].astype(float), and then new_df = df.loc[df['GM'].astype(float) > 8000] should work as you want to. | I want to select all values bigger than 8000 within a pandas dataframe.
new_df = df.loc[df['GM'] > 8000]
However, it is not working. I think the problem is, that the value comes from an Excel file and the number is interpreted as string e.g. "1.111,52". Do you know how I can convert such a string to float / int in ord... | 0 | 1 | 93 |
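A sketch of the conversion suggested above (the column name GM and the "1.111,52" German-style formatting come from the question; the data and exact cleanup steps are assumptions):

```python
import pandas as pd

# Hypothetical data mimicking the Excel export: numbers stored as strings
# with "." as thousands separator and "," as decimal separator.
df = pd.DataFrame({"GM": ["1.111,52", "9.500,00", "750,25"]})

# Strip the thousands separator, swap the decimal comma, then cast to float.
df["GM"] = (
    df["GM"]
    .str.replace(".", "", regex=False)
    .str.replace(",", ".", regex=False)
    .astype(float)
)

new_df = df.loc[df["GM"] > 8000]
print(new_df)
```

With numeric dtype in place, the original `df.loc[df['GM'] > 8000]` filter works as expected.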
0 | 58,198,069 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-10-02T08:15:00.000 | 0 | 2 | 0 | "Process finished with exit code -2147483645" Pycharm | 58,197,666 | 0 | python,pycharm,exit-code | Mmmm... I don't know about the error. But given the fact that it starts and works well for 368 episodes, I would guess it is some memory-related problem.
I would run it several times; if it crashes after a similar number of episodes, I'd try with more memory.
Hope this helps even just a little bit. | I ran Python 3.6.6 Deep Learning with Pycharm 2019.1.3. The process was set at maximum 651 episode and it stopped at episode 368 with this message "Process finished with exit code -2147483645".
I searched through Google but there's not even a result. Anyone knows about the code? Please help! | 0 | 1 | 2,679 |
0 | 58,202,111 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-10-02T08:20:00.000 | 1 | 1 | 0 | Which version of TensorFlow.js is compatible with models trained in TensorFlow 1.12.0 (Python)? | 58,197,749 | 1.2 | python,tensorflow,tensorflow.js | You have not mentioned in which format you are saving your model in TensorFlow 1.12. I would recommend to make use of saved model format to save your model. If you use saved models, you can use the latest versions of tf.js and tf.js converters. Same is the case for keras h5 model as well.
However, if you save it in for... | I need to convert models trained in TensorFlow 1.12.0 Python into that of TensorFlow.js. What version of tf.js and tf.js converter is compatible with it? | 0 | 1 | 259 |
0 | 58,201,684 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-02T12:02:00.000 | 0 | 2 | 0 | Single-label multiclass classification random forest python | 58,201,116 | 1.2 | python,machine-learning,scikit-learn,random-forest,multiclass-classification | Method of (One Hot Encoder) applies to category variables, and category variables have no size relationship.For the price variable,I suggest you use OrinalEncoder.Sklearn is a good package for machine.like, sklearn learning.preprocessing.OneHotEncoder or sklearn.preprocessing.OrdinalEncoder | I am pretty new to machine learning and I am currently dealing with a dataset in the format of a csv file comprised of categorical data. As a means of preprocessing, I One Hot Encoded all the variables in my dataset.
At the moment I am trying to apply a random forest algorithm to classify the entries into one of the 4... | 0 | 1 | 170 |
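A sketch of the two encoders mentioned in the answer above (the color/price columns and their values are made up for illustration):

```python
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Hypothetical columns: "color" has no natural order, "price" does.
colors = [["red"], ["green"], ["blue"]]
prices = [["low"], ["medium"], ["high"]]

# One-hot for unordered categories: one binary column per category.
onehot = OneHotEncoder().fit_transform(colors).toarray()

# Ordinal for ordered categories: a single integer column, with the
# order given explicitly so that low < medium < high.
ordinal = OrdinalEncoder(
    categories=[["low", "medium", "high"]]
).fit_transform(prices)

print(onehot.shape)
print(ordinal.ravel())
```

The explicit `categories` list is what preserves the size relationship the answer is talking about.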
0 | 59,021,516 | 0 | 1 | 0 | 0 | 1 | false | 10 | 2019-10-02T13:02:00.000 | 4 | 5 | 0 | Can't import tensorflow.keras in VS Code | 58,202,095 | 0.158649 | python,tensorflow,keras,visual-studio-code | I too faced the same issue. I solved it by installing keras as a new package and then changing all package names, removing the tensorflow. prefix. So in your case, after installing keras, you should replace tensorflow.keras.layers with keras.layers | I'm running into problems using tensorflow 2 in VS Code. The code executes without a problem, the errors are just related to pylint in VS Code.
For example this import from tensorflow.keras.layers import Dense gives a warning "Unable to import 'tensorflow.keras.layers'pylint(import-error)". Importing tensorflow and us... | 0 | 1 | 11,921 |
0 | 58,225,796 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-03T16:36:00.000 | 0 | 1 | 0 | How to fix 'Cannot handle this data type' while trying to convert a numpy array into an image using PIL | 58,223,453 | 0 | python-3.x,numpy,python-imaging-library | This depends a little bit on what you want to do.
You have two channels with n-samples ((nsamples, 2) ndarray); do you want each channel to be a column of the image where the color varies depending on what the value is? That is why you were getting a very narrow image when you just plot myrecording.
You do not really h... | I am trying to visualize music into an image by using sounddevice to input the sound and then converting it to a numpy array.
The array is 2D and so I convert it to 3D (otherwise I only get a single thin vertical line in the image).
However when I use PIL to show the image it says 'Cannot handle this datatype'
The co... | 0 | 1 | 62 |
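A sketch of the dtype fix for PIL's "Cannot handle this data type" error (the normalization scheme is an assumption; `Image.fromarray` itself is not called here so the snippet stays NumPy-only):

```python
import numpy as np

# Hypothetical recording: float samples in [-1, 1], two channels,
# i.e. an (nsamples, 2) ndarray like the one described above.
recording = np.random.uniform(-1.0, 1.0, size=(1000, 2))

# PIL's Image.fromarray cannot handle float64 for the usual image modes;
# scale the samples to 0..255 and cast to uint8 first.
scaled = ((recording + 1.0) / 2.0 * 255).astype(np.uint8)

# Widen each channel column so the image is not a thin vertical line.
img_data = np.repeat(scaled, 50, axis=1)

print(img_data.dtype, img_data.shape)
# Image.fromarray(img_data, mode="L") should now work (Pillow assumed).
```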
0 | 58,337,978 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-03T19:17:00.000 | 0 | 1 | 0 | Can we detect multiple objects in image using caltech101 dataset containing label wise images? | 58,225,543 | 1.2 | python,keras,deep-learning,object-detection,tensorflow-datasets | The dataset can be used for detecting multiple objects but with below steps to be followed:
The dataset has to be annotated with bounding boxes on the object present in the image
After the annotations are done, you can use any of the Object detectors to do transfer learning and train on the annotated caltech 101 datas... | I have a caltech101 dataset for object detection. Can we detect multiple objects in single image using model trained on caltech101 dataset?
This dataset contains only folders (label-wise) and in each folder, some images label wise.
I have trained model on caltech101 dataset using keras and it predicts single object in... | 0 | 1 | 233 |
0 | 60,456,032 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-03T19:53:00.000 | 0 | 2 | 0 | Edit python script used as Data entry in Power BI | 58,226,050 | 0 | python,powerbi | You can edit the python scripts doing the following steps:
Open Query Editor
At 'Applied steps', the first one, source, contains a small gear symbol just on the right side, click on it.
You can change the script directly in Power Query. | I have a python script and used it to create a dataframe in Power BI.
Now I want to edit that dataframe in Power BI without re-entering it from scratch as new data, because I want to keep all the charts inside my Power BI model.
For example, in my old dataframe I specified some dates inside my script, so the information was lim... | 0 | 1 | 3,747 |
0 | 58,232,793 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-04T04:37:00.000 | 0 | 1 | 0 | Using spam classification in a different application? | 58,229,976 | 0 | python,nlp,classification,text-classification | I'm skeptical. The reason simple Bayesian filtering works for spam is that spam messages typically use a quite different vocabulary than legitimate messages.
Anecdotally, people who sell pharmaceuticals use the same words and phrases in their legitimate business correspondence as in some types of spam; so they get bad ... | I want to use the concept of spam classification and apply it to a business problem where we identify if a vision statement for a company is good or not. Here's a rough outline of what I've come up with for the project. Does this seem feasible?
Prepare dataset by collecting vision statements from top leading companies... | 0 | 1 | 29 |
0 | 58,237,280 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-04T13:04:00.000 | 0 | 1 | 0 | OSError: Failed to interpret file 'name.data' as a pickle | 58,237,039 | 0 | python-3.x,compiler-errors | I had the same problem. In my case it was the newer version of numpy that caused the problem.
Try installing numpy version 1.12.0
pip install numpy==1.12.0 | I am trying to loadthis file.
But, Python3 said "OSError: Failed to interpret file 'D:/USER/Downloads/wine.data' as a pickle".
How to load this file?
The code I used following this
data = np.load("D:/USER/Downloads/wine.data") | 0 | 1 | 150 |
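As the answer above hints, the deeper issue is the loader: np.load() only reads NumPy's binary .npy/.npz formats, while the UCI wine.data file appears to be plain comma-separated text, so np.loadtxt is the right tool. A small in-memory sample stands in for the real file here:

```python
import io
import numpy as np

# Three made-up rows in the same comma-separated shape as wine.data.
sample = io.StringIO("1,14.23,1.71\n1,13.20,1.78\n2,12.37,0.94\n")
data = np.loadtxt(sample, delimiter=",")
print(data.shape)  # (3, 3)

# With the real file the call would be:
# data = np.loadtxt("D:/USER/Downloads/wine.data", delimiter=",")
```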
0 | 58,237,688 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-10-04T13:40:00.000 | 0 | 1 | 0 | Data type to save expanding data for data logging in Python | 58,237,574 | 0 | python,types,data-acquisition | Python doesn't have arrays as you think of them in most languages. It has "lists", which use the standard array syntax myList[0] but unlike arrays, lists can change size as needed. using myList.append(newItem) you can add more data to the list without any trouble on your part.
Since you asked for proper vocabulary in a... | I am writing a serial data logger in Python and am wondering which data type would be best suited for this. Every few milliseconds a new value is read from the serial interface and is saved into my variable along with the current time. I don't know how long the logger is going to run, so I can't preallocate for a known... | 0 | 1 | 139 |
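A minimal sketch of such a logger buffer (the readings are made up; a real logger would append inside its serial-read loop):

```python
import time

# A plain list grows as needed, so no preallocation is required.
log = []
for value in [0.12, 0.15, 0.11]:  # stand-ins for serial readings
    # Store (timestamp, value) pairs as they arrive.
    log.append((time.time(), value))

# Unzip into parallel sequences when it is time to analyze or save.
timestamps, values = zip(*log)
print(len(log), values)
```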
0 | 58,316,577 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-05T05:32:00.000 | 1 | 2 | 0 | How to know if dynamic tensor returned by "tf.boolean_mask" is empty or not? | 58,245,630 | 0.099668 | python,tensorflow | There are multiple ways to solve this; basically you are trying to identify a null tensor.
Possible solutions can be:
is_empty = tf.equal(tf.size(boolean_tensor), 0). If the tensor is not empty, this will give false.
Count the non-zero elements using tf.count_nonzero(boolean_tensor)
By simply printing the tensor and checking the values | tf.boolean_mask(tensor, mask) => returns (?, 4)
How do I check if the returned tensor by boolean_mask is empty or not? | 0 | 1 | 422 |
0 | 58,246,743 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-05T08:41:00.000 | 1 | 1 | 0 | What is the difference between single bracket df["column"] and double bracket df[["column"]] | 58,246,700 | 1.2 | python,pandas | to be more specific, df['column'] returns only one column, but when you use df[['column']] you can call more than one column.
for example df[['column1','column2']] returns column1 and column2 from df | One is a pandas.core.series.Series and another is a dataframe pandas.core.frame.DataFrame.
I have seen codes using them both. Is there a guideline on when to use which? | 0 | 1 | 70 |
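The difference can be verified directly (toy DataFrame, made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"column1": [1, 2], "column2": [3, 4]})

single = df["column1"]    # one column name -> Series
framed = df[["column1"]]  # list of column names -> DataFrame
both = df[["column1", "column2"]]

print(type(single).__name__, type(framed).__name__, both.shape)
```

A common rule of thumb: use the single-bracket Series when feeding one column to element-wise operations, and the double-bracket DataFrame when an API expects a 2-D table (e.g. sklearn feature matrices).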
0 | 59,757,240 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-05T12:38:00.000 | 0 | 1 | 0 | How to inverse transform models output? | 58,248,367 | 0 | python-3.x,scikit-learn,neural-network,deep-learning,sklearn-pandas | Use sc.inverse_transform(predicted) | So I have a trained model, that was trained on a standardized dataset. When I try to use the model for testing on new data, that isn't in a dataset and that isn't standardized, it returns ridiculous values, because I can standardize the inputs, but I can't inverse transform the output as I did during training. What sho... | 0 | 1 | 277 |
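A sketch of the suggestion above, assuming sc is the same StandardScaler fitted on the training targets (the numbers are made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical training targets; sc must be the scaler used in training.
y_train = np.array([[100.0], [200.0], [300.0]])
sc = StandardScaler().fit(y_train)

# Stand-in for a model's standardized prediction.
predicted_scaled = sc.transform(np.array([[200.0]]))

# Map the prediction back to the original scale.
predicted = sc.inverse_transform(predicted_scaled)
print(predicted)
```

The key point is to keep the fitted scaler from training and reuse it at inference time, rather than fitting a new one on test data.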
0 | 58,253,014 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-05T23:06:00.000 | 0 | 1 | 0 | File progress for long duration | 58,253,000 | 0 | python,dataframe,for-loop,data-analysis | Do you have sample code of what you are doing? Are you reading your file every time you fetch a latitude and longitude? If so, that is why it takes so long. First load the file into memory, as a pandas object for example, and then you should be able to look up your data much faster. | I am trying to fetch location based on lat and long. I have 600K rows in my csv and I am trying to run my for loop on it. My notebook is taking a very long time to process the data (40 min to complete 2 percent).
I have decent laptop Core i7-8550U quad-core 1.8GHz and 16GB DDR4 RAM .
I am not sure how to optain th... | 0 | 1 | 10 |
0 | 58,255,664 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-06T08:35:00.000 | 0 | 1 | 0 | How can i recognize two picture are same? | 58,255,567 | 0 | python-3.x,keras,conv-neural-network | You can try correlation on both images; if they're exactly the same you should get 1. | For example, I have an image dataset. I used these images to train my model, and I am using another image for testing. How can I recognize whether the test image is in the dataset? How do I determine what the similarity percentage is? | 0 | 1 | 32 |
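The correlation check suggested above can be sketched with NumPy (toy arrays stand in for real images):

```python
import numpy as np

# Two hypothetical grayscale "images" as 2-D arrays.
img_a = np.arange(100, dtype=float).reshape(10, 10)
img_b = img_a.copy()

# Pearson correlation of the flattened pixels:
# identical images give a coefficient of 1.0.
corr = np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
print(corr)
```

The coefficient itself can serve as a rough similarity percentage, though it is sensitive to shifts and scaling of the image content.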
0 | 58,305,335 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-10-06T09:13:00.000 | 0 | 3 | 0 | Databricks: merge dataframe into sql datawarehouse table | 58,255,818 | 0 | python,databricks | You can save the output in a file and then use the Stored Procedure activity from Azure Data Factory for the upsert: just a small procedure that upserts the values from the file. I am assuming that you are using Azure Data Factory here. | Is there any method by which I can upsert into a SQL data warehouse table?
Suppose I have a Azure SQL datawarehouse table :
col1 col2 col3
2019 09 10
2019 10 15
I have a dataframe
col1 col2 col3
2019 10 20
2019 11 30
Then merge into the original table of Azure data warehouse table
c... | 0 | 1 | 782 |
0 | 58,258,675 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-06T14:38:00.000 | 1 | 3 | 0 | What is the meaning of "trainable_weights" in Keras? | 58,258,312 | 0.066568 | python,keras,deep-learning,conv-neural-network,transfer-learning | Trainable weights are the weights that will be learnt during the training process. If you do trainable=False then those weights are kept as it is and are not changed because they are not learnt. You might see some "strange numbers" because either you are using a pre-trained network that has its weights already learnt o... | If I freeze my base_model with trainable=false, I get strange numbers with trainable_weights.
Before freezing my model has 162 trainable_weights. After freezing, the model only has 2. I tied 2 layers to the pre-trained network. Does trainable_weights show me the layers to train? I find the number weird, when I see 2,25... | 0 | 1 | 3,554 |
0 | 58,277,773 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-07T21:54:00.000 | 1 | 1 | 0 | What is the threshold for sparse matrices? Is it a matrix that contain less than 50% 0's? | 58,277,617 | 1.2 | python,matrix,sparse-matrix | You can't locate a definition because there isn't one. "Sparse" is whatever relation makes a different algorithm more efficient. It may be a particular proportion of elements; it may be a function of the matrix side (e.g. n element in a nxn matrix); it may require zero rows or diagonals.
It depends critically on how ... | Does a "sparse matrix" mean that it contains *more than 50% 0's?
I can't seem to locate that information.
edit - more | 0 | 1 | 480 |
0 | 58,278,374 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-07T22:38:00.000 | 1 | 1 | 0 | Tensorflow 2.0 fit() is not recognizing batch_size | 58,277,991 | 1.2 | python,tensorflow,keras | model.compile() only configures the model for training; it doesn't do any memory allocation.
Your error is self-explanatory: you directly feed a large numpy array into the model. I would suggest coding a new data generator or a keras.utils.Sequence to feed your input data. If so, you do not need to specify the batch_s...
model = tf.keras.utils.multi_gpu_model(model, gpus=NUM_GPUS) and when I do model.compile() it runs perfectly fine.
But when I do history = model.fit(tf.cast(X_train, tf.float32), tf.cast(Y_train, tf.float32), validation_split=0.25, batch_size = 16, verbose=1, epochs=100), it gives me err... | 0 | 1 | 451 |
0 | 58,915,896 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-07T22:56:00.000 | 1 | 1 | 0 | How to execute python script (face detection on very large dataset) on Nvidia GPU | 58,278,145 | 0.197375 | python,gpu,face-detection,numba | Your first problem is actually getting your Python code to run on all CPU cores!
Python is not fast, and this is pretty much by design. More accurately, the design of Python emphasizes other qualities. Multi-threading is fairly hard in general, and Python can't make it easy due to those design constraints. A pity, beca... | I have a python script that loops through a dataset of videos and applies a face and lip detector function to each video. The function returns a 3D numpy array of pixel data centered on the human lips in each frame.
The dataset is quite large (70GB total, ~500,000 videos each about 1 second in duration) and executing ... | 0 | 1 | 81 |
0 | 58,283,603 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-08T00:10:00.000 | 1 | 1 | 0 | What is the difference between spline filtering and spline interpolation? | 58,278,604 | 0.197375 | python,interpolation,spline | I'm guessing a bit here. In order to calculate a 2nd order spline, you need the 1st derivative of the data. To calculate a 3rd order spline, you need the second derivative. I've not implemented an interpolation motor beyond 3rd order, but I suppose the 4th and 5th order splines will require at least the 3rd and 4th der... | I'm having trouble connecting the mathematical concept of spline interpolation with the application of a spline filter in python. My very basic understanding of spline interpolation is that it's fitting the data in a piece-wise fashion, and the piece-wise polynomials fitted are called splines. But its applications in i... | 0 | 1 | 389 |
0 | 58,279,203 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-08T01:19:00.000 | 0 | 1 | 0 | unicode vs character: What is '\x10' | 58,279,018 | 0 | python,pandas,unicode,luigi | Because it's the likely answer, even if the details aren't provide in your question:
It's highly likely something in your pipeline is intentionally producing fields with length prefixed text, rather than the raw unstructured text. \x103189069486778499 is a binary byte with the value 16 (0x10), followed by precisely 16 ... | I'm trying to understand why when we were using pandas to_csv(), a number 3189069486778499 has been output as "0.\x103189069486778499". And this is the only case happened within a huge amount of data.
When using to_csv(), we have already used encoding='utf8', normally that would solve some unicode problems...
So, I'm t... | 0 | 1 | 2,069 |
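If the length-prefix guess in the answer above is right, the field can be decoded like this (the byte string is taken from the question; the parsing scheme is an assumption, not a confirmed format):

```python
# Hypothesis: the first byte is a length prefix.
# 0x10 == 16, followed by exactly 16 characters of text.
raw = b"\x103189069486778499"

length = raw[0]                      # 16
text = raw[1:1 + length].decode("ascii")
print(length, text)
```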
0 | 58,283,102 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-08T07:27:00.000 | 2 | 1 | 0 | Word embeddings with multiple categorial features for a single word | 58,281,876 | 1.2 | python,python-3.x,pytorch,word-embedding | I am not sure what you mean by the word2vec algorithm with LSTM, because the original word2vec algorithm does not use LSTMs and uses the embeddings directly to predict surrounding words.
Anyway, it seems you have multiple categorical variables to embed. In the example, it is word ID, color ID, and font size (if you round it ... | I'm looking for a method to implement word embedding network with LSTM layers in Pytorch such that the input to the nn.Embedding layer has a different form than vectors of words IDs.
Each word in my case has a corresponding vector and the sentence in my corpus is consequently a vector of vectors. So, for example, I may... | 0 | 1 | 1,159 |
0 | 58,302,623 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-09T10:32:00.000 | 0 | 3 | 0 | why numpy array has size of 112 byte and when I do flatten it, it has 96 byte of memory? | 58,302,190 | 0 | python,numpy,reshape | As Derte mentioned, sys.getsizeof doesn't report the size of the array's data. The 96 you got is the header information about the array (if it's 1-dimensional), and 112 if it's multi-dimensional. Any additional element will increase the size by 8 bytes, assuming you are using dtype=int64. | I have a numpy array and I flatten it with np.ravel(), and I am confused when I tried to learn the sizes of both arrays:
>>> array = np.arange(15).reshape(3, 5)
>>> sys.getsizeof(array)
112
>>> sys.getsizeof(array.ravel())
96
>>> array.size
15
>>> array.ravel().size
15
>>> array = np.arange(30).reshape(5, 6)
sys.... | 0 | 1 | 344 |
0 | 58,315,161 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-10-10T03:36:00.000 | 0 | 2 | 0 | DataFrame issue (parenthesis) | 58,315,096 | 1.2 | python,pandas,dataframe | You understand it well: in general, parentheses call a method, and without them you access an attribute.
In your example you don't get an error, because df.head is bound to NDFrame.head, which is a method; accessing it without parentheses simply returns the bound method object instead of invoking it. | May I ask what's the difference between df.head() and df.head in python's syntax nature? Could I interpret it as: the former is calling a method and the latter is just obtaining the DataFrame's attribute, which is the head? I am so confused why sometimes there is a parenthesis at the end but sometimes not...
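The distinction can be checked directly (toy DataFrame, made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"a": range(10)})

# Without parentheses you get the bound method object itself...
print(callable(df.head))         # a callable, not a DataFrame
# ...with parentheses you invoke it and get a DataFrame back.
print(type(df.head()).__name__)  # DataFrame

# df.shape, by contrast, is a plain attribute, not a method.
print(df.shape)
```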
0 | 58,334,583 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-10T23:11:00.000 | 1 | 1 | 0 | Python script closes after a while | 58,332,232 | 0.197375 | python,python-3.x,machine-learning,artificial-intelligence | The way I was looping my function needed to be a for loop instead of directly calling the function as a loop method. And my error was a stack overflow | I'm using Keras for the layers, optimizer, and model and my model is Sequential
I've got two DQN networks and I'm making them duel each other in a simulated environment however after about 35 episodes (different each time) the script just stops without any errors. I've isolated my issue to be somewhere around when the ... | 0 | 1 | 46 |
0 | 58,345,624 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-11T00:23:00.000 | 0 | 2 | 0 | How does TF know what object you are finetuning for | 58,332,687 | 0 | python,tensorflow,deep-learning,conv-neural-network,object-detection | The model works with the category labels (numbers) you give it. The string "boat" is only a translation for human convenience in reading the output.
If you have a model that has learned to identify a set of 40 images as class 9, then giving it a very similar image that you insist is class 1 will confuse it. Doing so ... | I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of ... | 0 | 1 | 51 |
0 | 58,513,059 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-11T00:23:00.000 | 0 | 2 | 0 | How does TF know what object you are finetuning for | 58,332,687 | 0 | python,tensorflow,deep-learning,conv-neural-network,object-detection | so I managed to figure out the issue.
We created the annotation tool from scratch, and the issue that was causing underfitting whenever we trained, regardless of the number of steps or the various fixes I tried to implement, was that when creating bounding boxes there was no check to identify whether the xmin and ymin coordin... | I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of ... | 0 | 1 | 51 |
0 | 59,558,292 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2019-10-11T08:46:00.000 | 1 | 2 | 0 | How to save fasttext model in vec format? | 58,337,469 | 0.099668 | python,word-embedding,fasttext | You should add the word count and vector dimension as the first line of your vec file, then use the -preTrainedVectors parameter | I trained my unsupervised model using the fasttext.train_unsupervised() function in python. I want to save it as a vec file since I will use this file for the pretrainedVectors parameter of the fasttext.train_supervised() function. pretrainedVectors only accepts a vec file, but I am having trouble creating this vec file. Can someon... | 0 | 1 | 6,401 |
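The .vec layout described above (first line: vocabulary size and dimension, then one word per line with its components) can be produced by hand. This toy sketch uses made-up words and 3-dimensional vectors:

```python
import os
import tempfile

# Hypothetical toy vocabulary with 3-dimensional vectors.
vectors = {"cat": [0.1, 0.2, 0.3], "dog": [0.4, 0.5, 0.6]}

path = os.path.join(tempfile.mkdtemp(), "toy.vec")
with open(path, "w") as f:
    # First line: vocabulary size and vector dimension.
    f.write(f"{len(vectors)} 3\n")
    # Then one line per word: the word followed by its components.
    for word, vec in vectors.items():
        f.write(word + " " + " ".join(map(str, vec)) + "\n")

lines = open(path).read().splitlines()
print(lines[0])  # "2 3"
```

In practice the vectors would come from the trained model's vocabulary rather than a hand-written dict.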
0 | 61,201,527 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-11T10:11:00.000 | 1 | 1 | 0 | Numpy can't be imported in Spyder | 58,339,023 | 0.197375 | python,numpy,spyder | Problem did not occur again after re-installing Anaconda. Thanks @CarlosCordoba. | When trying to import numpy in spyder i get the following error message:
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 ... | 0 | 1 | 1,431 |
0 | 58,340,103 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-11T10:47:00.000 | 0 | 1 | 0 | Combining relative camera rotations & translations with known pose | 58,339,628 | 1.2 | python,opencv,camera-calibration | If you know relative poses between all the cameras via a chain (relative poses between cameras a, b and b, c), you can combine the translations and rotations from camera a to c via b by
R_ac = R_ab · R_bc
t_ac = t_ab + R_ab · t_bc
In other words, the new rotation from ac is rotating first from a to b and then from b to c. T... | I have used OpenCVs stereocalibrate function to get the relative rotation and translation from one camera to another. What I'd like to do is change the origin of the world space and update the extrinsics of both cameras accordingly. I can easily do this with cameras that have a shared view with SolvePnP but I'd like to... | 0 | 1 | 554 |
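The composition above, sketched with NumPy (the example rotation and translations are made up; R_xy and t_xy denote the relative pose from camera x to camera y):

```python
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Chain relative poses: a->b followed by b->c gives a->c."""
    R_ac = R_ab @ R_bc
    t_ac = t_ab + R_ab @ t_bc
    return R_ac, t_ac

# Hypothetical poses: a 90-degree rotation about z, plus translations.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
R_ab, t_ab = Rz, np.array([1.0, 0.0, 0.0])
R_bc, t_bc = np.eye(3), np.array([0.0, 2.0, 0.0])

R_ac, t_ac = compose(R_ab, t_ab, R_bc, t_bc)
print(t_ac)  # the b->c translation is rotated into a's frame before adding
```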
0 | 58,342,047 | 0 | 0 | 0 | 0 | 2 | true | 3 | 2019-10-11T12:36:00.000 | 3 | 3 | 0 | Error when running tensorflow in virtualenv: module 'tensorflow' has no attribute 'truncated_normal' | 58,341,433 | 1.2 | python,tensorflow,keras | Keras 2.2.4 does not support TensorFlow 2.0 (it was released much before TF 2.0), so you can either downgrade TensorFlow to version 1.x, or upgrade Keras to version 2.3, which does support TensorFlow 2.0. | I have the following error when running a CNN made in keras
File
"venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py",
line 4185, in truncated_normal
return tf.truncated_normal(shape, mean, stddev, dtype=dtype, seed=seed) AttributeError: module 'tensorflow' has no attribute
'truncated_nor... | 0 | 1 | 8,138 |
0 | 67,832,940 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2019-10-11T12:36:00.000 | 5 | 3 | 0 | Error when running tensorflow in virtualenv: module 'tensorflow' has no attribute 'truncated_normal' | 58,341,433 | 0.321513 | python,tensorflow,keras | In Tensorflow v2.0 and above, "tf.truncated_normal" replaced with "tf.random.truncated_normal" | I have the following error when running a CNN made in keras
File
"venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py",
line 4185, in truncated_normal
return tf.truncated_normal(shape, mean, stddev, dtype=dtype, seed=seed) AttributeError: module 'tensorflow' has no attribute
'truncated_nor... | 0 | 1 | 8,138 |
0 | 60,562,288 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-11T13:46:00.000 | 1 | 1 | 0 | I have a network with 3 features and 4 vector outputs. How is MSE and accuracy metric calculated? | 58,342,612 | 0.197375 | python-3.x,tensorflow,neural-network,conv-neural-network,recurrent-neural-network | It’s not advised to calculate accuracy for continuous values. For such values you would want to calculate a measure of how close the predicted values are to the true values. This task of prediction of continuous values is known as regression. And generally R-squared value is used to measure the performance of the model... | I understand how it works when you have one column output but could not understand how it is done for 4 column outputs. | 0 | 1 | 49 |
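A sketch of the R-squared measure mentioned above, using sklearn's r2_score on made-up 4-component continuous outputs:

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical 4-component targets and slightly-off predictions.
y_true = np.array([[1.0, 2.0, 3.0, 4.0],
                   [2.0, 3.0, 4.0, 5.0],
                   [3.0, 4.0, 5.0, 6.0]])
y_pred = y_true + 0.1  # close, but not exact

# R^2 averaged uniformly over the four output columns.
score = r2_score(y_true, y_pred, multioutput="uniform_average")
print(score)
```

For per-column scores, `multioutput="raw_values"` returns one R-squared value per output instead of the average.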
0 | 68,995,590 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-12T02:55:00.000 | 2 | 1 | 0 | CUDA goes out of memory during inference and gives InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory | 58,350,456 | 0.379949 | python,tensorflow | I am using Tensorflow 2.3.0 on a remote server. My code was working fine, but suddenly the server gets disconnected from the network, and my training stopped. When I re-run the code I got the same issue you got. So I guess this problem is related to GPU being busy in something not existing anymore. Clearing the session... | During inference, when the models are being loaded, Cuda throws InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory.
I am performing inference on a machine with 6GB of VRAM. A few days back, the machine was able to perform the tasks, but now I am frequently getting these messages... | 0 | 1 | 2,542 |
0 | 58,358,854 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-12T21:35:00.000 | 0 | 1 | 0 | Dependent vs Independent Variables | 58,358,757 | 0 | python,statistics,data-science,cross-validation | Dependence and correlation are different: if two variables are correlated, then they are dependent. However, if they are uncorrelated, they may still be dependent; we need domain knowledge to consider more. To check the correlation, we can use the correlation coefficient. For the dependence test, we can use the ... | If I am given a large data set with many variables, is it possible to determine whether any two of them are independent or dependent? Let's assume I know nothing else about the data other than a statistical study.
Would looking at the correlation/covariance be able to determine this?
The purpose of this is to determine ... | 0 | 1 | 75 |
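A classic toy counter-example of why correlation alone cannot settle dependence (y is a deterministic function of x, yet their Pearson correlation is zero because the relationship is symmetric):

```python
import numpy as np

# y is completely determined by x, so they are clearly dependent...
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2

# ...yet the linear correlation is zero: it misses non-linear dependence.
corr = np.corrcoef(x, y)[0, 1]
print(corr)
```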
0 | 58,361,716 | 0 | 0 | 0 | 0 | 3 | true | 4 | 2019-10-13T01:25:00.000 | 4 | 4 | 0 | tf.contrib.layers.fully_connected() in Tensorflow 2? | 58,359,881 | 1.2 | python,python-3.x,tensorflow,tensorflow2.0 | In TensorFlow 2.0 the package tf.contrib has been removed (and this was a good choice since the whole package was a huge mix of different projects all placed inside the same box), so you can't use it.
In TensorFlow 2.0 we need to use tf.keras.layers.Dense to create a fully connected layer, but more importantly, you hav... | I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this projcet? | 0 | 1 | 9,478 |
0 | 61,319,962 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-10-13T01:25:00.000 | 5 | 4 | 0 | tf.contrib.layers.fully_connected() in Tensorflow 2? | 58,359,881 | 0.244919 | python,python-3.x,tensorflow,tensorflow2.0 | tf-slim, as a standalone package, already included tf.contrib.layers.you can install by pip install tf-slim,call it by from tf_slim.layers import layers as _layers; _layers.fully_conntected(..).The same as the original, easy to replace | I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this projcet? | 0 | 1 | 9,478 |
0 | 64,493,808 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-10-13T01:25:00.000 | 0 | 4 | 0 | tf.contrib.layers.fully_connected() in Tensorflow 2? | 58,359,881 | 0 | python,python-3.x,tensorflow,tensorflow2.0 | tf.contrib.layers.fully_connected() is a perfect mess. It is a very old historical mark (or a prehistoric DNN legacy). Google has completely deprecated the function since Google hated it. There is no direct function in TensorFlow 2.x to replace tf.contrib.layers.fully_connected(). Therefore, it is not worth inquiring ... | I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this project? | 0 | 1 | 9,478 |
0 | 58,363,469 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-13T10:37:00.000 | 1 | 1 | 0 | How to extract/cut out parts of images classified by the model? | 58,362,763 | 1.2 | python,tensorflow,machine-learning,keras,deep-learning | Your thinking is correct, you can have multiple pipelines based on the number of classes.
Training:
Main model will be an object detection and localization model like Faster RCNN, YOLO, SSD etc trained to classify at a high level like cat and dog. This pipeline provides you bounding box details (left, bottom, right, t... | I am new to deep learning, I was wondering if there is a way to extract parts of images containing the different label and then feed those parts to different model for further processing?
For example, consider the dog vs cat classification.
Suppose the image contains both cat and dog.
We successfully classify that the i... | 0 | 1 | 170 |
0 | 58,399,685 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-13T14:56:00.000 | 0 | 2 | 0 | Creating dask_jobqueue schedulers to launch on a custom HPC | 58,364,733 | 1.2 | python,python-3.x,dask,dask-distributed | Got it working after going through the source code.
Tips for anyone trying:
Create a customCluster & customJob class similar to LSFCluster & LSFJob.
Override the following
submit_command
cancel_command
config_name (you'll have to define it in the jobqueue.yaml)
Depending on the cluster, you may need to override the ... | I'm new to dask and trying to use it in our cluster which uses NC job scheduler (from Runtime Design Automation, similar to LSF). I'm trying to create an NCCluster class similar to LSFCluster to keep things simple.
What are the steps involved in creating a job scheduler for custom clusters?
Is there any other way to i... | 0 | 1 | 60 |
0 | 58,468,050 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-13T16:35:00.000 | 0 | 2 | 0 | How to define a new optimization function for Keras | 58,465,610 | 0 | python,tensorflow,math,keras | In fact, after having looked at the Keras code of the Optimizer 2-3 times, not only did I quickly give up trying to understand everything, but it seemed to me that the get_updates function simply returns the gradients already calculated, whereas I seek to directly access the partial derivative functions of the parameters... | I would like to implement for Keras a new optimization function that would be based not only on the partial derivatives of the parameters, but also on the derivatives of these partial derivatives. How can I proceed? | 0 | 1 | 87 |
0 | 58,385,939 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-14T03:42:00.000 | 0 | 2 | 0 | DataBricks: using variable in arrays_zip function | 58,369,843 | 0 | python,databricks | using the following:
array=["col1","col2"]
df.select(arrays_zip(*[c for c in array])).show()
Thanks | May I know if we can use variable/array in the arrays_zip function ??
For example I declare and array
array1=["col1","col2"]
then in the dataframe. I write the following :
df.withColumn("zipped",arrays_zip(array1))
then it tells me it's not a valid argument, as it is not a string or column
Does anyone have an idea? | 0 | 1 | 376 |
0 | 58,375,333 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-14T08:24:00.000 | 0 | 2 | 0 | How do we add a new face into a trained face recognition model (inception/resnet/vgg) without retraining the complete model? | 58,372,751 | 0 | python-3.x,machine-learning,computer-vision,face-recognition,object-recognition | Basically, by the mathematical theory behind machine learning models, you need to do another training iteration with only this new data...
but, in practice, those models, especially the sophisticated ones, rely on multiple training iterations and various techniques of shuffling and noise reduction
a good ... | Is it possible to add a new face's features into a trained face recognition model, without retraining it with previous faces?
Currently I am using the FaceNet architecture. | 0 | 1 | 718 |
0 | 59,680,250 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-14T08:24:00.000 | 1 | 2 | 0 | How do we add a new face into a trained face recognition model (inception/resnet/vgg) without retraining the complete model? | 58,372,751 | 0.099668 | python-3.x,machine-learning,computer-vision,face-recognition,object-recognition | Take a look at Siamese Neural Networks.
Actually, if you use such an approach, you don't need to retrain the model.
Basically, you train a model to generate an embedding (a vector) that maps similar images close together and different ones far apart.
After you have this model trained, when you add a new face it will be far from the others b... | Is it possible to add a new face's features into a trained face recognition model, without retraining it with previous faces?
Currently I am using the FaceNet architecture. | 0 | 1 | 718 |
0 | 59,246,432 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2019-10-14T10:15:00.000 | 1 | 1 | 0 | modulenotfounderror no module named '_pywrap_tensorflow_internal' | 58,374,635 | 0.197375 | python,tensorflow | Mentioning the Answer here for the benefit of the Community.
Issue is resolved by using Python==3.6, Tensorflow==1.5, protobuf==3.6.0. | I am using Windows 10, CPU: Intel(R) Core(TM) 2 Duo CPU T6600 @ 2.2GHz (2 CPUs) ~2.2GHz. RAM: 4GB. Video card: ATI Mobility Radeon HD 3400 Series.
I uninstalled everything and then
I installed Python 3.6 and Tensorflow==1.10.0. When I do import tensorflow, I get this error.
modulenotfounderror no module named '_pywrap... | 0 | 1 | 242 |
0 | 58,376,285 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-14T11:48:00.000 | 1 | 2 | 0 | How to calculate similarity between categorical variables in collaborative filtering | 58,376,140 | 0.099668 | python,recommendation-engine,collaborative-filtering | One good example of calculating distance between categorical features is the Hamming distance, where we count the number of positions at which the values differ.
On the other hand, you can still calculate Cosine Similarity for user-item data set.
As an example;
user 1 buys item 1, item 2
user 2 buys item 2, item 3
Then, user vectors are;... | I am trying to build a recommender system using collaborative filtering.
I have a user-item dataset. I am unable to find similarity between similar users, since Euclidean/cosine distance will not work here.
If I convert the categorical variables into 0/1, I will still not be able to calculate a meaningful distance.
Can you... | 0 | 1 | 2,649 |
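The Hamming distance suggested in the answer above can be sketched in plain Python; the function and the user feature vectors below are made up for illustration, not taken from any library:

```python
def hamming_distance(a, b):
    """Count the positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(x != y for x, y in zip(a, b))

# Two users described by categorical features (e.g. country, device, plan):
user1 = ["US", "mobile", "premium"]
user2 = ["US", "desktop", "premium"]
print(hamming_distance(user1, user2))  # -> 1 (they differ only in device)
```

A smaller Hamming distance then means more similar users, with no need to encode the categories numerically first.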
0 | 62,181,010 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-14T11:48:00.000 | 0 | 2 | 0 | How to calculate similarity between categorical variables in collaborative filtering | 58,376,140 | 0 | python,recommendation-engine,collaborative-filtering | Cosine similarity treats the data as a whole vector which includes all values of the variable, and may not answer the question of correlation. So even when you receive a good score from cosine similarity, it does not ensure that the variables are also correlated. | I am trying to build a recommender system using collaborative filtering.
I have a user-item dataset. I am unable to find similarity between similar users, since Euclidean/cosine distance will not work here.
If I convert the categorical variables into 0/1, I will still not be able to calculate a meaningful distance.
Can you... | 0 | 1 | 2,649 |
0 | 58,378,980 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-14T13:35:00.000 | 1 | 1 | 0 | Which activation function to use for linear-chain CRF classifier? | 58,377,983 | 0.197375 | python,tensorflow,keras,neural-network,crf | When using Embeddings → BiLSTM → Dense + softmax, you implicitly assume that the tags are conditionally independent given the RNN states. This can lead to the label bias problem. The distribution over the tags always needs to sum up to one. There is no way to express that the model is not certain about... | I have a sequence tagging model that predicts a tag for every word in an input sequence (essentially named entity recognition). Model structure: Embeddings layer → BiLSTM → CRF
So essentially the BiLSTM learns non-linear combinations of features based on the token embeddings and uses these to output the unnormalized s... | 0 | 1 | 458 |
0 | 59,798,103 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-10-14T14:28:00.000 | 0 | 1 | 0 | How to bring variable values from csv to rivescript? | 58,378,873 | 0 | python,csv,rivescript | you can use macro for doing this.
> object read_from_csv python
#code to read from CSV
return ""
< object
From the RiveScript response:
- <call>read_from_csv <star></call>
0 | 58,399,690 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-15T11:24:00.000 | 0 | 3 | 0 | binary classification for imbalanced data | 58,393,565 | 0 | python | You can use the Synthetic Minority Oversampling Technique (SMOTE) or ADASYN to tackle this. Try both methods and finalize based on your desired results. | In data mining, I use a machine learning algorithm to solve the binary classification.
However, the distribution of data samples is extremely imbalanced.
The ratio between good samples and bad samples is as high as 500:1.
Which methods can be used to solve the binary classification for imbalanced data? | 0 | 1 | 104 |
0 | 66,716,747 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-10-15T11:24:00.000 | 0 | 3 | 0 | binary classification for imbalanced data | 58,393,565 | 0 | python | Also, you can use asymmetric loss functions, which penalize the model differently based on the label of the data. In your case the loss function should penalize errors on "bad" samples much more than errors on "good" samples. In this way the model pays more attention to the rare data points. | In data mining, I use a machine learning algorithm to solve the binary classification.
However, the distribution of data samples is extremely imbalanced.
The ratio between good samples and bad samples is as high as 500:1.
Which methods can be used to solve the binary classification for imbalanced data? | 0 | 1 | 104 |
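One simple way to realize the asymmetric penalty described in the answer above is a class-weighted loss. The sketch below is a weighted binary cross-entropy in NumPy; the 500x weight mirrors the 500:1 imbalance purely for illustration and is not a recommended value:

```python
import numpy as np

def weighted_bce(y_true, y_pred, pos_weight=500.0, eps=1e-12):
    """Binary cross-entropy that penalizes errors on the rare positive
    ("bad") class pos_weight times more than errors on the majority class."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    loss = -(pos_weight * y_true * np.log(y_pred)
             + (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

y_true = np.array([1.0, 0.0, 0.0, 0.0])
# Missing the rare positive costs far more than missing a negative:
print(weighted_bce(y_true, np.array([0.1, 0.1, 0.1, 0.1])))
print(weighted_bce(y_true, np.array([0.9, 0.1, 0.1, 0.1])))
```

Most frameworks expose the same idea directly, e.g. via class weights passed to the training routine.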
0 | 58,406,856 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-15T14:18:00.000 | 0 | 2 | 0 | How to cluster large amounts of data with minimal memory usage | 58,396,826 | 0 | python,python-3.x,scipy,cluster-analysis,data-analysis | You'll need to choose a different algorithm.
Hierarchical clustering needs O(n²) memory and the textbook algorithm O(n³) time. This cannot scale well to large data. | I am using scipy.cluster.hierarchy.fclusterdata function to cluster a list of vectors (vectors with 384 components).
It works nice, but when I try to cluster large amounts of data I run out of memory and the program crashes.
How can I perform the same task without running out of memory?
My machine has 32GB RAM, Windows... | 0 | 1 | 593 |
0 | 58,408,874 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-16T07:16:00.000 | 0 | 5 | 0 | How to generate random values in range (-1, 1) such that the total sum is 0? | 58,407,760 | 0 | python,random | Since you are fine with the approach of generating lots of numbers and dividing by the sum, why not generate n/2 positive numbers and divide by their sum, then generate n/2 negative numbers and divide by their sum?
Want a random positive-to-negative mix? Generate that mix randomly first, then continue.
Maybe I could compute the opposite of each value I sample, so I would always have a pair of numbers, such that their sum is 0. However this approach reduces the "randomness" I would like to have in... | 0 | 1 | 437 |
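The half-positive / half-negative suggestion above can be sketched in plain Python, assuming an even n of at least 4 (so no single value reaches ±1); the function name is illustrative:

```python
import random

def zero_sum_sample(n):
    """Return n values in (-1, 1) whose total is (numerically) zero."""
    half = n // 2
    pos = [random.random() for _ in range(half)]
    s = sum(pos)
    pos = [x / s for x in pos]        # positives now sum to +1
    neg = [random.random() for _ in range(half)]
    s = sum(neg)
    neg = [-x / s for x in neg]       # negatives now sum to -1
    sample = pos + neg
    random.shuffle(sample)            # mix the signs randomly
    return sample

values = zero_sum_sample(10)
print(sum(values))                    # ~0, up to float rounding
```

Normalizing each half separately keeps every value strictly inside (-1, 1) while the two halves cancel exactly.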
0 | 58,479,863 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-10-16T08:45:00.000 | 1 | 1 | 0 | How to design realtime deeplearnig application for robotics using python? | 58,409,257 | 1.2 | python,tensorflow,deep-learning,robotics | Let me summarize everything first.
What you want to do
The "object" is on the conveyer belt
The camera will take pictures of the object
MaskRCNN will run to do the analyzing
Here are some problems you're facing
"The first problem is the time model takes to create segmentation masks, it varies from one object to an... | I have created a machine learning software that detects objects(duh!), processes the objects based on some computer vision parameters and then triggers some hardware that puts the object in the respective bin. The objects are placed on a conveyer belt and a camera is mounted at a point to snap pictures of objects(one o... | 0 | 1 | 73 |
0 | 58,428,014 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-10-16T12:21:00.000 | 1 | 1 | 0 | How to prevent keras from renaming layers | 58,413,230 | 1.2 | python,tensorflow,keras,neural-network,jupyter-notebook | When using Tensorflow (1.X) as a backend, whenever you add a new layer to any model, the name of the layer -unless manually set- will be set to the default name for that layer, plus an incremental index at the end.
Defining a new model is not enough to reset the incrementing index, because all models end up on the same... | When I re-create a model, keras always makes a new name for a layer (conv2d_2 and so on) even if I override the model. How to make keras using the same name every time I run it without restarting the kernel. | 0 | 1 | 305 |
0 | 68,501,575 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-16T13:41:00.000 | -1 | 1 | 0 | Is there another way to plot a graph in python without matplotlib? | 58,414,797 | -0.197375 | python,matplotlib,graph | In cmd (Command Prompt), type pip install matplotlib | As the title says, that's basically it. I have tried to install matplotlib already but:
I am on Windows and "sudo" doesn't work
Every solution and answers on Stack Overflow regarding matplotlib (or some other package) not being able to be installed doesn't work for me...
I get "Error Code 1"
So! Is there any other wa... | 0 | 1 | 4,146 |
0 | 58,421,378 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-16T15:15:00.000 | 0 | 1 | 0 | Regression vs Classification for a problem that could be solved by both | 58,416,636 | 0 | python,machine-learning,regression,classification | Without having the data and running classification or regression, a comparison would be hard because the metric you use for each family is different.
For example, comparing the RMSE of a regression with the F1 score (or accuracy) of a classification problem would be an apples-to-oranges comparison.
It would be ideal if you can ... | I have a problem that I have been treating as a classification problem. I am trying to predict whether a machine will pass or fail a particular test based on a number of input features.
What I am really interested in is actually whether a new machine is predicted to pass or fail the test. It can pass or fail the test ... | 0 | 1 | 42 |
0 | 58,418,351 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-16T16:30:00.000 | 4 | 3 | 0 | Data Cleaning with Pandas in Python | 58,417,900 | 0.26052 | python-3.x,pandas,data-cleaning | you can use:
bool_cols = df.select_dtypes(include='bool').columns
df[bool_cols] = df[bool_cols].astype(int)
When I search Google, they suggested df.somecolumn=df.somecolumn.astype(int). However this csv file has 100 columns and not every column is true false(some are categorical, some are numerical). How do I do a sweeping code that ... | 0 | 1 | 87 |
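A hedged sketch of that bool-to-int sweep on a toy frame mixing bool, numeric, and categorical columns (the column names are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "passed": [True, False, True],   # bool   -> should become 1/0
    "score": [0.5, 0.7, 0.9],        # numeric -> untouched
    "grade": ["A", "B", "A"],        # categorical -> untouched
})

bool_cols = df.select_dtypes(include="bool").columns
df[bool_cols] = df[bool_cols].astype(int)  # sweep every bool column at once

print(df["passed"].tolist())  # -> [1, 0, 1]
```

select_dtypes picks out only the boolean columns, so the same two lines work no matter how many of the 100 columns happen to be true/false.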
0 | 58,423,130 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-16T23:34:00.000 | 0 | 4 | 0 | How can I create a 2d array of integers? | 58,422,957 | 0 | python,numpy,integer,2d,numpy-ndarray | A line like this should also work: [[0 for i in range(10)] for i in range(10)] | How do I create a 2D array of zeros that will be stored as integers and not floats in python? np.zeros((10,10)) creates floats. | 0 | 1 | 934 |
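NumPy can also do this directly by passing an integer dtype to np.zeros; a quick sketch:

```python
import numpy as np

a = np.zeros((10, 10), dtype=int)  # integer zeros instead of 0.0
print(a.dtype.kind)                # -> 'i' (an integer dtype)
```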
0 | 58,433,457 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-17T13:05:00.000 | 1 | 3 | 0 | Not able to outer join two dataframe | 58,433,397 | 0.066568 | python,pandas | You should try the merge method.
pd.merge(df1, df2, how='outer', on='a') | I am using this code to merge two dataframe :
pd.concat(df1, df2, on='a', how='outer')
I am getting the following error:-
TypeError: concat() got an unexpected keyword argument 'on' | 0 | 1 | 64 |
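A minimal demonstration of the suggested outer merge; the frames and column names below are made up for illustration:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2], "x": ["p", "q"]})
df2 = pd.DataFrame({"a": [2, 3], "y": ["r", "s"]})

# Outer join keeps keys from both sides; missing cells become NaN.
merged = pd.merge(df1, df2, how="outer", on="a")
print(sorted(merged["a"].tolist()))  # -> [1, 2, 3]
```

pd.concat has no on keyword because it stacks frames along an axis; joining on a key column is what pd.merge (or DataFrame.merge) is for.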
0 | 58,448,415 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-17T21:09:00.000 | 0 | 1 | 0 | How to use a pre-trained object detection in tensorflow? | 58,440,762 | 1.2 | python-3.x,tensorflow,deep-learning,object-detection | As pointed out by @Matias Valdenegro in the comments, your first question does not make sense. For your second question, however, there are multiple ways to do so. The term that you're searching for is Transfer Learning (TL). TL means transferring the "knowledge" (basically it's just the weights) from a pre-trained... | How can I use the weights of a pre-trained network in my tensorflow project?
I know some theory information about this but no information about coding in tensorflow. | 0 | 1 | 144 |
0 | 58,485,627 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-18T06:40:00.000 | 1 | 1 | 0 | A Variation on Neural Machine Translation | 58,445,247 | 0.197375 | python-3.x,deep-learning,lstm,recurrent-neural-network,seq2seq | In that case, you would be learning a model that copies the input symbol to the output. It is trivial for the attention mechanism to learn the identity correspondence between the encoder and decoder states. Moreover, RNNs can easily implement a counter. It thus won't provide any realistic estimate of the probability, i... | I have been processing this thought in my head for a long time now. So in NMT, We pass in the text in the source language in the encoder seq2seq stage and the language in the target language in the decoder seq2seq stage and the system learns the conditional probabilities for each word occurring with its target language... | 0 | 1 | 31 |
0 | 58,627,454 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-10-18T08:49:00.000 | -2 | 5 | 0 | How to play video on google colab with opencv? | 58,447,228 | -0.07983 | python,opencv,computer-vision,jupyter,google-colaboratory | Here is the command on google colab :
ret, frame = input_video.read()
Hope this helps you. | I am working on a project related to object detection using Mask RCNN on google colab. I have a video uploaded to my colab. I want to display it as a video while processing it at the runtime using openCV. I want to do what cv2.VideoCapture('FILE_NAME') does on the local machine. Is there any way to do it? | 0 | 1 | 14,645 |
0 | 58,451,829 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-18T13:05:00.000 | 0 | 1 | 0 | Why the max_depth of every decision tree in my random forest classifier model are the same? | 58,451,535 | 0.197375 | python,classification,random-forest,decision-tree | If I am not mistaken, a decision tree is likely to reach its max depth; there is nothing wrong with that. I would even say it surely will: your tree will occupy whatever space you allow it to grow in.
Scaled to a random forest, again there is nothing wrong with it. You should focus on choosing the righ... | Why the max_depth of every decision tree in my random forest classifier model are the same?
I set the max_depth=30 of my RandomForestClassifier, and when I print each tree (trees = RandomForestClassifier.estimators_), I find every tree's max_depth is the same.
I really don't know where is the problem and how it happne... | 0 | 1 | 334 |
0 | 58,456,411 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-18T18:11:00.000 | 1 | 1 | 0 | Is there a way to increase the number of units in a dense layer and still be able to load previously saved weights that used a lower number of units? | 58,456,169 | 1.2 | python,keras | In theory you could add more units and initialize them randomly, but that would make the original training worthless. A more common method for increasing the complexity of a model while leveraging earlier training is to add more layers and resume training. | I am starting to learn how to build neural networks. Here is what I did:
I ran a number of epochs with units in my dense layer at 512.
Then I saved the weights with the best accuracy.
Then I increased the number of units in my dense layer to 1024 and attempted to reload my weights with the best accuracy but with the o... | 0 | 1 | 53 |
0 | 58,468,462 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-10-18T20:30:00.000 | 1 | 1 | 0 | Read only specific rows of .parquet files matching criteria? | 58,457,788 | 1.2 | python,pyspark,pyarrow | As of 0.15.0, pyarrow doesn't have this feature, but we (in the Apache Arrow project) are actively working on this and hope to include it in the next major release. | I'm working against a filesystem filled with .parquet files. One of the columns, 'id', uniquely identifies a machine. I was able to use pyspark to open all .parquet files in a certain directory path, then create a set([]) of the values from the 'id' column. I'd like to open all other rows in all other files, where the ... | 0 | 1 | 1,045 |
0 | 58,459,640 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-10-18T23:44:00.000 | 0 | 1 | 0 | Choosing time_step for LSTM | 58,459,327 | 1.2 | python,keras,recurrent-neural-network | Arrays input into the LSTM have shape: (N_SAMPLES, SEQUENCE_LENGTH, N_FEATURES). | I am trying to reshape my input for my LSTM Network.
I have a training data of train_x (20214000 columns x 9 rows) and train_y (20214000 columns x 1 row).
How do I reshape my train_x such that I can feed it into my RNN?
I have 9 features so it would be something like:
train_x.reshape(?,?,9) and
train_y.reshape(?,?,1... | 0 | 1 | 30 |
0 | 58,463,773 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-10-19T12:03:00.000 | 0 | 1 | 0 | Keras - Using large numbers of features | 58,463,482 | 1.2 | python,machine-learning,keras,keras-layer,tf.keras | I suppose each input entry has size (20000, 1) and you have 500 entries which make up your database?
In that case you can start by reducing the batch_size, but I also suppose that you mean that even the network weights don't fit in your GPU memory. In that case the only thing (that I know of) that you can do is dimensio...
-I'm using 5,000 nodes in... | 0 | 1 | 415 |
0 | 58,484,891 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-19T16:13:00.000 | 1 | 1 | 0 | PyTorch - a functional equivalent of nn.Module | 58,465,570 | 0.197375 | python,pytorch | I already found the solution: if you have an operation inside of a module which creates a new tensor, then you have to use self.register_buffer in order to fully utilize automatic moving between devices. | As we know, we can wrap an arbitrary number of stateful building blocks into a class which inherits from nn.Module. But how is it supposed to be done when you want to wrap a bunch of stateless functions (from nn.Functional), in order to fully utilize things which nn.Module allows you to, like automatic moving of tensors be...
0 | 68,735,569 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-19T17:46:00.000 | 0 | 1 | 0 | Training a machine learning model on multiple CSV files? | 58,466,396 | 0 | python,pandas,machine-learning,scikit-learn,pytorch | If all of the files contain the same features, you can concatenate them. If some features are preprocessed differently (for example, they have different ranges in different files), you should make them consistent before concatenating. Then use the obtained big data frame/array for model training.
Also, consider shuffli... | I want to train a machine learning model on multiple csv files that are all unique. Each file is a collection of time series data from basketball games. I want to train a model to look at each game and be able to predict outcomes. Should I simply tell sci kit learn or another package to iterate through the files in the... | 0 | 1 | 393 |
0 | 58,471,416 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-19T23:41:00.000 | 1 | 1 | 0 | Saving a high number of images as an array | 58,468,914 | 1.2 | python,numpy,image-processing | Do these steps for each of the videos:
Load the data into one NumPy array.
Write to disk using np.save() with the extension .npy.
Add the .npy file to a .zip compressed archive using the zipfile module.
The end result will be as if you loaded all 224 arrays and saved them at once using np.savez_compressed, but it wil... | I have a high number of videos and I want to extract the frames, pre-process them and then create an array for each video . So far I have created the arrays but the final size of each array is too big for all of the videos. I have 224 videos, each resulting in a 6GB array totaling more than 1.2TB. I have tried using nu... | 0 | 1 | 114 |
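The three steps above can be sketched for a single video; the array shape and file names are made up for illustration:

```python
import io
import os
import tempfile
import zipfile

import numpy as np

tmpdir = tempfile.mkdtemp()
npy_path = os.path.join(tmpdir, "video_000.npy")
zip_path = os.path.join(tmpdir, "videos.zip")

# Step 1: pretend these are the preprocessed frames of one video.
frames = np.random.randint(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)

# Step 2: write the array to disk as .npy.
np.save(npy_path, frames)

# Step 3: add the .npy file to a compressed archive.
with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(npy_path, arcname="video_000.npy")

# Reading it back later:
with zipfile.ZipFile(zip_path) as zf:
    restored = np.load(io.BytesIO(zf.read("video_000.npy")))
print(restored.shape)  # -> (30, 64, 64, 3)
```

Repeating steps 1-3 per video keeps only one array in memory at a time, unlike building one giant array for all 224 videos.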
0 | 58,469,792 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2019-10-20T02:53:00.000 | 1 | 8 | 0 | Numpy: get the index of the elements of a 1d array as a 2d array | 58,469,671 | 0.024995 | python,numpy,numpy-ndarray | Pseudocode:
get the "number of 1d arrays in the 2d array" by subtracting the minimum value of your numpy array from the maximum value and then adding one. In your case, it will be 5-0+1 = 6
initialize a 2d array with that number of 1d arrays within it. In your case, initialize a 2d array with 6 1d arrays in it. Each 1d a...
Is it possible to get the index of the elements as a 2d array? For instance the answer for the above input would be [[3 4], [0 5], [1 2], [6], [], [7]]
Currently I have to loop the different values and call numpy.where(input == i) for each value, which has terrible perf... | 0 | 1 | 3,200 |
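The per-value numpy.where loop can be replaced by a single pass over the array; a plain-Python sketch (the helper name is illustrative):

```python
from collections import defaultdict

def index_groups(values):
    """For each value from min(values) to max(values), return the list of
    indices at which it occurs (empty list for absent values)."""
    groups = defaultdict(list)
    for i, v in enumerate(values):
        groups[v].append(i)
    return [groups[v] for v in range(min(values), max(values) + 1)]

print(index_groups([1, 2, 2, 0, 0, 1, 3, 5]))
# -> [[3, 4], [0, 5], [1, 2], [6], [], [7]]
```

This is O(n + k) for n elements and k distinct values, instead of one full scan per value.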
0 | 58,476,288 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-20T17:13:00.000 | 1 | 1 | 0 | Tensorflow optimizer with negative feedback? | 58,475,363 | 0.197375 | python,tensorflow,optimization | You may do a forward pass, check the loss, and then do backward if you think the loss is acceptable. In TF 1.x it requires some tf.cond and manual calculation and application of gradients. The same in TF 2.0 only the control flow is easier, but you have to use gradient_tape and still apply gradients manually. | I am optimizing a tensorflow model. It is not a neural net, I am just using tensorflow for easy derivative computations. In any case, it seems that loss surface has a steep edge somewhere, and my loss will sometimes "pop out" of the local minimum it is currently targeting, the loss will go up a great deal, and the op... | 0 | 1 | 36 |
0 | 58,486,136 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-10-21T06:31:00.000 | 2 | 1 | 0 | Repeating images in training dataset for tensorflow object detection models | 58,480,861 | 1.2 | python,tensorflow,object-detection,training-data | Should I use the same image for multiple records?
No, because anything in the image that is not annotated as an object is classified as background, which is an implicit object type/class. So when you train your model with an image that has an object, but that object is not annotated correctly, the performance of the m... | I'm training a tensorflow object detection model which has been pre-trained using COCO to recognize a single type/class of objects. Some images in my dataset have multiple instances of such objects in them.
Given that every record used in training has a single bounding box, I wonder what is the best approach to deal w... | 0 | 1 | 473 |
0 | 58,482,713 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-21T08:36:00.000 | 1 | 1 | 0 | how to use 1D-convolutional neural network for non-image data | 58,482,580 | 0.197375 | python,tensorflow,conv-neural-network | You first have to know if it is sensible to use a CNN for your dataset. You could use a sliding 1D-CNN if the features are sequential (e.g. ECG, DNA, audio). However, I suspect this is not the case for you. Using a fully connected neural net would be a better choice. | I have a dataset that I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 79 columns (the features). The last column i.e. column 79 has string type labels. I would like to use a CNN to classify the data in this case and predict the target labels using the ...
0 | 58,491,153 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-10-21T09:25:00.000 | 0 | 3 | 0 | Training in Python and Deploying in Spark | 58,483,371 | 0 | python-3.x,scala,apache-spark-mllib,xgboost,apache-spark-ml | you can
load/munge data using PySpark SQL,
then bring the data to the local driver using collect/toPandas (performance bottleneck),
then train xgboost on local driver
then prepare test data as RDD,
broadcast the xgboost model to each RDD partition, then predict data in parallel
This all can be in one script, you spar... | Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ?
edit:
Thanks all for the answers, but my question is r... | 0 | 1 | 1,017 |
0 | 58,483,658 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-10-21T09:25:00.000 | 0 | 3 | 0 | Training in Python and Deploying in Spark | 58,483,371 | 0 | python-3.x,scala,apache-spark-mllib,xgboost,apache-spark-ml | You can run your Python script on Spark using the spark-submit command, so that Spark can run your Python code and you can then predict the values in Spark. | Is it possible to train an XGboost model in python and use the saved model to predict in spark environment? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible?
edit:
Thanks all for the answers, but my question is r... | 0 | 1 | 1,017 |
0 | 58,502,790 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-21T21:30:00.000 | 0 | 1 | 0 | Any way to prevent modifications to content of a ndarray subclass? | 58,494,393 | 0 | python,numpy,subclass,numpy-ndarray | Looks like the answer is the ndarray.setflags method, called on the array instance:
my_array.setflags(write=False) | I am creating various classes for computational geometry that all subclass numpy.ndarray. The DataCloud class, which is typical of these classes, has Python properties (for example, convex_hull, delaunay_trangulation) that would be time consuming and wasteful to calculate more than once. I want to do calculations once... | 0 | 1 | 54 |
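A quick sketch of freezing an array so later in-place modifications raise, which fits the cached-properties use case above:

```python
import numpy as np

a = np.arange(5)
a.setflags(write=False)   # lock the underlying buffer

try:
    a[0] = 99             # any in-place write now raises ValueError
except ValueError:
    print("blocked")
```

Note this only locks this array's buffer; views created before the flag was set may still be writable.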
0 | 60,212,560 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-22T08:47:00.000 | -1 | 1 | 0 | Why does my GridSearchCV().fit() run slower now that I'm using a better processor? | 58,500,382 | -0.197375 | python,scikit-learn | Perhaps the size of your parameter grid is smaller than 48? | I'm running a range of GridSearchCV().fits for a RandomForestClassifier over a range of parameter sets.
From the start I have been setting n_jobs=-1 on the RandomForestClassifier.
For the past week I've been doing this with an i5 4-core processor and it was okay but not very fast. I've just upgraded to a computer with ... | 0 | 1 | 40 |
0 | 58,508,299 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-22T15:55:00.000 | 0 | 2 | 0 | Why get different results when comparing two dataframes? | 58,508,089 | 0 | python,pandas,dataframe,comparison | Maybe the rows in both dataframes are not ordered the same way? DataFrames will be equal when the rows corresponding to the same index are the same. | I am comparing two df, it gives me False when using .equals(), but if I append two df together and use drop_duplicate() it gives me nothing. Can someone explain this? | 0 | 1 | 488 |
0 | 58,702,595 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-23T07:03:00.000 | 1 | 1 | 0 | How to make XGBoost model to learn its mistakes | 58,517,184 | 1.2 | python,model,xgboost | Did you verify whether those samples are outliers? If they are, try to make your model more robust to them by changing the hyperparameters or scaling your dataset. | My XGBoost model regularly makes prediction mistakes on the same samples. I want to let the model know its mistakes and correct its prediction behavior. How can I do this?
I tried to solve the problem by decreasing the logistic regression threshold (by increasing model sensitivity), but it leads to radical increasing o... | 0 | 1 | 127 |
0 | 58,523,442 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-23T11:33:00.000 | 0 | 1 | 0 | Big Amount of Data on a PC? | 58,521,937 | 0 | python-3.x,database,apache-spark | Basically, for handling large amounts of data you have to use a big data tool like Hadoop or Apache Spark. You can use PySpark, which combines Python and Spark and is highly efficient for data processing.
If your data is in a flat file format, I suggest using the ORC file format when processing it with PySpark, which improve... | Hello, I want to deal with a big amount of data: 1 billion rows and 23 columns. But in pandas I cannot even read the data. So how can I handle this data on my computer, which is a Dell XPS 9570? Can I use Spark for that? Any advice for dealing with it on my PC?
Thank you | 0 | 1 | 45 |
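Before (or alongside) Spark, plain pandas can also stream a large CSV in chunks rather than reading it all at once; a minimal sketch, where the in-memory buffer stands in for the hypothetical billion-row file:

```python
import io
import pandas as pd

# Stand-in for a file far too large to load in one read.
csv_data = io.StringIO("value\n" + "\n".join(str(i) for i in range(10)))

# chunksize makes read_csv yield DataFrames of at most 4 rows each,
# so only one chunk is ever held in memory.
total = 0
for chunk in pd.read_csv(csv_data, chunksize=4):
    total += chunk["value"].sum()

print(total)  # 45 (the sum of 0..9)
```

This works for aggregations and filtering; operations that need the whole dataset at once (joins, global sorts) are where Spark genuinely helps.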
0 | 58,537,277 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-24T01:56:00.000 | 0 | 1 | 1 | Azure Databricks with Python scripts | 58,533,089 | 0 | python,azure,databricks | @Sathya Can you provide more information on what the different python scripts as well as the config files do?
As for the python scripts, depending on what their function is, you could create one or more python notebooks in Databricks and copy the contents into them. You can then run these notebooks as part of a job or ... | I am new to Python. Need help with Azure databricks.
Scenario:
Currently I am working on a project which uses HDInsight cluster to submit spark jobs and they use Python script with classes and functions [ .py] which resides in the /bin/ folder in the edge node.
We propose to use Databricks instead of HDInsight cluster ... | 0 | 1 | 1,230 |
0 | 58,967,916 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-10-24T14:28:00.000 | 1 | 1 | 0 | ML Model Overfits if input data is normalized | 58,543,537 | 0.197375 | python,tensorflow,machine-learning,keras,resnet | Mentioning the solution below for the benefit of the community.
The problem is resolved by making the changes mentioned below:
Batch Normalization layers within ResNet didn't work properly when frozen, so the Batch Normalization layers within ResNet should be unfrozen before training the model.
Image Preprocessing (Normali... | Please help me understand why my model overfits if my input data is normalized to [-0.5, 0.5] whereas it does not overfit otherwise.
I am solving a regression ML problem, trying to detect the location of 4 key points on images. To do that I import a pretrained ResNet 50 and replace its top layer with the following architectur... | 0 | 1 | 208 |
0 | 58,551,852 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-24T14:33:00.000 | 0 | 1 | 0 | Possible to rename independent variable name in Built-in lmfit fitting models? | 58,543,638 | 1.2 | python,lmfit | Sorry, I don't think that is possible.
I think you will have to rewrite the functions to use q instead of x. That is, lmfit.Model uses function inspection to determine the names of the function arguments, and most of the built-in models really do require the first positional argument to be named x. | I am using lmfit to do small angle X-ray scattering pattern fitting. To this end, I use the Model class to wrap my functions and to make Composite Models which works well. However, it happened that I wrote all my function with 'q' as the independent variable (convention in the discipline). Now I wanted to combine some... | 0 | 1 | 177 |
0 | 58,549,479 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-10-24T18:19:00.000 | 0 | 3 | 0 | How to connect an ML model made in Python to a React Native app | 58,547,095 | 0 | python,react-native,deployment | You can look into the Core ML library for a React Native application if you are developing for the iOS platform; otherwise, creating a REST API is a good option. (Some developers say that latency is an issue, but it also depends on what kind of model and dataset you are using.) | I made an ML model in Python; now I want to use this model in a React Native app, meaning the frontend will be based on React Native while the model is made in Python. How can I connect the two? | 0 | 1 | 3,021 |
0 | 58,548,043 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-10-24T18:19:00.000 | 1 | 3 | 0 | How to connect an ML model made in Python to a React Native app | 58,547,095 | 0.066568 | python,react-native,deployment | Create a REST API in Flask/Django to deploy your model on a server. Create endpoints for separate functions, then call those endpoints in your React Native app. That's how it works. | I made an ML model in Python; now I want to use this model in a React Native app, meaning the frontend will be based on React Native while the model is made in Python. How can I connect the two? | 0 | 1 | 3,021 |
0 | 58,556,122 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-10-25T09:30:00.000 | 3 | 1 | 0 | Support tensorflow v1.x and v2.0 on same PC | 58,555,825 | 1.2 | python,tensorflow,anaconda | Use different environments. If you have the Anaconda distribution you can use conda (the conda variants are shown in brackets):
Install virtualenv first: pip install virtualenv [not required for Anaconda]
Create an env for v1.x: virtualenv v1x OR [conda create --name v1x]
Activate the env: source v1x/bin/activate OR [conda activate v1x]
Install tens... | Code with tensorflow v1.x is not compatible with tensorflow v2.0. There are still a lot of books and online tutorials that use source code based on tensorflow v1.x. If I upgrade to v2.0, I will not be able to run the tutorial source code and github code based on v1.x.
Is it possible to have both v1.x and v2.0 supported... | 0 | 1 | 624 |
0 | 58,557,195 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-10-25T10:48:00.000 | 4 | 1 | 0 | What is "faster" spyder or jupyter notebook? | 58,557,092 | 1.2 | python,jupyter-notebook,spyder | Jupyter is basically a browser application, whereas Spyder is a dedicated IDE. When I work with large datasets, I never use Jupyter, as Spyder seems to run much faster. The only way to truly compare this would be to run/time the same script on both Spyder and Jupyter a couple of times, but in my experience Spyder always... | Maybe it is too broad for this place, but I have to work on a huge database/dataframe with some text processing. The dataframes are stored on my computer as csv.
Is it faster in terms of runtime to use spyder or jupyter notebook?
I am mainly using: pandas, nltk
The outcome is only a csv file, which I have to store on m... | 0 | 1 | 4,377 |
0 | 58,558,104 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-25T11:39:00.000 | 1 | 1 | 0 | What does it mean if I can not get 0 error on very small training dataset? | 58,557,857 | 1.2 | python,keras,deep-learning | Short answer: No
Reason:
It may be that a small number of examples are mislabeled. In the case of classification, try to identify which examples the network is unable to classify correctly. This will tell you whether it has learnt all it can.
It can also happen if your data has no pattern that can be learnt - if ... | In order to validate if the network can potentially learn often people try to overfit on the small dataset.
I cannot reach 0 error with my dataset, but the output looks like the network memorizes the training set. (MPAE ~1%)
Is it absolutely necessary to get 0 error in order to prove that my network potentially wor... | 0 | 1 | 50 |
0 | 58,566,460 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-25T15:15:00.000 | 0 | 2 | 0 | can pandas autocorr handle irregularly sample timeseries data? | 58,561,265 | 0 | python,pandas,autocorrelation | This is not quite a programming question.
Ideally, your measure of autocorrelation would use data measured at the same frequency/same time interval between observations. Any autocorr function in any programming package will simply measure the correlation between the series and whatever lag you want. It will not corre... | I have a dataframe with datetime index, where the data was sampled irregularly (the datetime index has gaps, and even where there aren't gaps the spacing between samples varies).
If I do:
df['my column'].autocorr(my_lag)
will this work? Does autocorr know how to handle irregularly sampled datetime data? | 0 | 1 | 158 |
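A small experiment illustrating the point made above (the data values are made up): pandas' `Series.autocorr` shifts by rows, not by elapsed time, so irregular spacing between timestamps is silently ignored:

```python
import pandas as pd

# Irregular sampling: gaps of 1 day, 3 days, then 10 days.
idx = pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-05", "2021-01-15"])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)

# autocorr(lag) is literally self.corr(self.shift(lag)): a positional
# shift that pays no attention to the timestamps.
print(s.autocorr(1))                        # 1.0 for this linear series
print(s.autocorr(1) == s.corr(s.shift(1)))  # True

# To give the lag a fixed time meaning, resample onto a regular grid first
# (interpolation is an illustrative choice, not the only option).
regular = s.resample("1D").interpolate()
print(len(regular))  # 15 daily points from Jan 1 to Jan 15
```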
0 | 58,566,065 | 0 | 0 | 0 | 0 | 1 | true | 76 | 2019-10-25T20:33:00.000 | 73 | 3 | 0 | What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? | 58,565,394 | 1.2 | python,tensorflow,machine-learning,keras,deep-learning | Simply:
categorical_crossentropy (cce) expects the target to be a one-hot array containing the probable match for each category,
sparse_categorical_crossentropy (scce) expects the target to be the category index of the most likely matching category.
Consider a classification problem with 5 categories (or classes).
In the case of cce, the one-hot tar... | What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? When should one loss be used as opposed to the other? For example, are these losses suitable for linear regression? | 0 | 1 | 48,257 |
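The two losses compute the same number; only the target encoding differs. A NumPy sketch using the 5-category example above (the probability values are made up):

```python
import numpy as np

# Predicted probabilities over 5 classes for one sample.
probs = np.array([0.1, 0.6, 0.1, 0.1, 0.1])

one_hot_target = np.array([0, 1, 0, 0, 0])  # cce-style target
sparse_target = 1                           # scce-style target

cce = -np.sum(one_hot_target * np.log(probs))  # dot with the one-hot vector
scce = -np.log(probs[sparse_target])           # index directly

print(np.isclose(cce, scce))  # True: both are -log(0.6)
```

The sparse form simply avoids materializing the one-hot array, which matters when there are many classes.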
0 | 58,588,578 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-10-28T07:25:00.000 | 0 | 1 | 0 | Is there any way to identify heading and paragraph from scanned images using tensorflow object detection? | 58,587,021 | 0 | python,tensorflow,object-detection-api | I think the best idea would be to train a network itself in order to solve the problem; you won't need a huge model for it. The labelling part of the input dataset, though, might be annoying. Otherwise you could work exclusively with computer vision, leaving aside neural networks, but you should have a good idea to solve... | I need to identify headings and paragraphs from scanned images. Is there any better way to identify this?
I already tried the ssd_inception_v2 model, but it is not accurate. | 0 | 1 | 147 |
0 | 58,602,260 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-29T03:59:00.000 | 0 | 2 | 0 | What's an easy way to test out Numpy operators? | 58,601,271 | 0 | python,numpy | What do you mean by "numpy operators"? Does it mean the library functions or just numerical operations that apply on every entry of the array?
My suggestion is to start by researching ndarray, the most important data structure in numpy. See what it is and what operations it offers. | I have Numpy installed. I'm trying to import it on Sublime so that I can test it out and see how it works. I'm trying to learn how to build an image classifier. I'm very new to Numpy and it's pretty confusing to me. Are there basic Numpy operators I can run on Python so I can start getting an idea of how it works? | 0 | 1 | 34 |
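A few basic ndarray operations to experiment with (the values are arbitrary):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

print(a.shape)       # (2, 2) -- the array's dimensions
print(a + 10)        # elementwise addition
print(a * a)         # elementwise multiplication
print(a @ a)         # matrix multiplication
print(a.mean())      # 2.5 -- aggregation over all entries
print(a.reshape(4))  # [1 2 3 4] -- same data, new shape
```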
0 | 58,603,631 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-10-29T07:49:00.000 | 2 | 1 | 0 | Program to divide the array into N continuous subarray so that the sum of each subarray is odd | 58,603,191 | 1.2 | python,arrays,algorithm | Start from the first element of the array. Use a variable cur_sum to keep track of the current sum. Iterate through the array until cur_sum becomes odd; that becomes the first subarray. Then set cur_sum = 0 and start iterating over the remaining array. Once you get (n-1) such subarrays, you have to check if the sum of the remaining ... | The problem gives two inputs: the array (arr) and the number of subarrays to be made out of it (n). The sum of each subarray should be odd.
It is already clear that if all the numbers are even, an odd-sum subarray is not possible. For an odd sum, two consecutive numbers should be either odd+even or even+odd... | 0 | 1 | 215 |
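The greedy procedure described in that answer can be sketched as follows (the function name is mine):

```python
def split_odd_sums(arr, n):
    """Cut the first n-1 parts at the earliest odd prefix sum,
    then check that the remainder also sums to an odd number."""
    parts, cur, start = [], 0, 0
    for i, x in enumerate(arr):
        cur += x
        if cur % 2 == 1 and len(parts) < n - 1:
            parts.append(arr[start:i + 1])  # close this subarray
            cur, start = 0, i + 1           # reset for the next one
    rest = arr[start:]
    if len(parts) == n - 1 and rest and sum(rest) % 2 == 1:
        return parts + [rest]
    return None  # no valid split exists

print(split_odd_sums([1, 2, 3, 4, 5], 3))  # [[1], [2, 3], [4, 5]]
print(split_odd_sums([2, 4], 1))           # None: all even, sum can't be odd
```

Cutting as early as possible is safe here: delaying a cut never creates an odd prefix that an earlier cut would have prevented.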
0 | 58,615,181 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-10-29T20:06:00.000 | 0 | 1 | 0 | conda lists latest version of cutadapt as 2.6 but only runs cutadapt 2.4 | 58,614,631 | 0 | python,conda | I just solved my own problem. It seems I had installed cutadapt using both conda and pip at some point.
When I did 'pip list' I saw cutadapt v2.4. So I removed this version of cutadapt 'pip uninstall cutadapt'.
Now when I do 'cutadapt --version' the last version installed using conda is shown as '2.6'. | Conda lists the most current version of cutadapt as 2.6 but when I check the version and run the program it only uses the older cutadapt v2.4
I've installed cutadapt using conda 4.7.12:
conda install -c bioconda cutadapt
When I do conda list it says I have the latest version of cutadapt:
cutadapt 2.6 ... | 0 | 1 | 93 |