GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 56,175,288 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-16T18:21:00.000 | 0 | 1 | 1 | How to scale Kafka stream processing dynamically? | 56,174,516 | 0 | java,python,apache-kafka,kafka-consumer-api | If you have N partitions, then you can have up to N consumers within the same consumer group each of which reading from a single partition. When you have less consumers than partitions, then some of the consumers will read from more than one partition. Also, if you have more consumers than partitions then some of the c... | I have a fixed number of partitions of a topic. Producers produce data at varying rate in different hours of the day.
I want to add consumers dynamically based on hours of the day for the processing so that I can process records as fast as I can.
For example I have 10 partitions of a topic. I want to deploy 5 consumer... | 0 | 1 | 210 |
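A minimal sketch of the consumer-group mechanics described in the answer above, using the kafka-python package (topic name, group id, and broker address are hypothetical). Every consumer started with the same group_id joins one consumer group, and Kafka spreads the topic's partitions among the group's members, so scaling up for busy hours just means starting more such processes (up to the partition count):

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Consumers sharing a group_id form one consumer group; Kafka assigns each
# partition of the topic to exactly one consumer in the group.
consumer = KafkaConsumer(
    'my-topic',                            # hypothetical topic with N partitions
    group_id='my-consumer-group',
    bootstrap_servers=['localhost:9092'],  # hypothetical broker
)
for message in consumer:
    print(message.partition, message.offset, message.value)
```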
0 | 57,013,231 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-16T19:49:00.000 | 0 | 2 | 0 | Rearrange powerpoint slides automatically using python-pptx | 56,175,678 | 0 | python,powerpoint,python-pptx | Would it be feasible - if all we're doing is reordering - to read the XML and rewrite it with the slide elements permuted?
Further - for the "delete" case - is it feasible to simply delete a slide element in the XML? (I realise this could leave dangling objects such as images in the file.)
The process of extracting the... | We typically use powerpoint to facilitate our experiments. We use "sections" in powerpoint to keep groups of slides together for each experimental task. Moving the sections to counterbalance the task order of the experiment has been a lot of work!
I thought we might be able to predefine a counterbalance order (using a... | 0 | 1 | 2,247 |
0 | 56,178,539 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-16T20:10:00.000 | 1 | 1 | 0 | Is there a way to assign a maximum number of clusters using DBSCAN? | 56,175,928 | 1.2 | python,cluster-analysis,dbscan | Not with DBSCAN itself. Connected components are connected components, there is no ambiguity at this point.
You could write your own rules to extract the X most significant clusters from an OPTICS reachability plot, though. OPTICS is the variable-density formulation of DBSCAN. | If I am trying to cluster my data using DBSCAN, is there a way to assign a maximum number of clusters? I know I can set the minimum distance between points to be considered a cluster, but my data changes case by case and I would prefer to not allow more than 4 clusters. Any suggestions? | 0 | 1 | 281 |
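A hedged sketch of the OPTICS route suggested in the accepted answer (scikit-learn API, random placeholder data). The reachability plot is what you would write your own rules against, e.g. keeping only the four deepest valleys to enforce at most 4 clusters:

```python
import numpy as np
from sklearn.cluster import OPTICS

X = np.random.rand(200, 2)                     # placeholder data
clust = OPTICS(min_samples=5).fit(X)

# Reachability distances in cluster order: valleys in this curve correspond
# to clusters, so a custom rule can cap how many valleys you accept.
reachability = clust.reachability_[clust.ordering_]
print(reachability[:10])
print(np.unique(clust.labels_))
```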
0 | 56,179,295 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-16T20:52:00.000 | 1 | 2 | 0 | Pytorch argsort ordered, with duplicate elements in the tensor | 56,176,439 | 0.099668 | python,sorting,machine-learning,pytorch,tensor | Here is one way:
sort the numpy array using numpy.argsort()
convert the result into tensor using torch.from_numpy()
import torch
import numpy as np
A = [0,1,2,3,0,0,1,1,2,2,3,3]
x = np.array(A)
y = torch.from_numpy(np.argsort(x, kind='mergesort'))
print(y) | I have a vector A = [0,1,2,3,0,0,1,1,2,2,3,3]. I need to sort it in an increasing manner such that it is listed in an ordered fashion and from that extract the argsort. To better explain this, I need to sort A such that it returns B = [0,4,5,1,6,7,2,8,9,3,10,11]. However, when I use PyTorch's torch.argsort(A) it retu... | 0 | 1 | 2,541 |
0 | 56,195,251 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-16T22:21:00.000 | 1 | 1 | 0 | DataParallel multi-gpu RuntimeError: chunk expects at least a 1-dimensional tensor | 56,177,305 | 0.197375 | python,pytorch,multi-gpu | To identify the problem, you should check the shape of your input data for each mini-batch. The documentation says, nn.DataParallel splits the input tensor in dim0 and sends each chunk to the specified GPUs. From the error message, it seems you are trying to pass a 0-dimensional tensor.
One possible reason can be if yo... | I am trying to run my model on multiple gpus using DataParallel by setting model = nn.DataParallel(model).cuda(), but everytime getting this error -
RuntimeError: chunk expects at least a 1-dimensional tensor (chunk at
/pytorch/aten/src/ATen/native/TensorShape.cpp:184).
My code is correct. Does anyone know what's ... | 0 | 1 | 1,864 |
0 | 56,182,355 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2019-05-17T07:14:00.000 | 2 | 1 | 0 | How to predict different data via neural network, which is trained on the data with 36x60 size? | 56,181,395 | 1.2 | python-3.x,opencv,keras,neural-network,data-science | Neural networks (insofar as I've encountered) have a fixed input shape, freedom permitted only to batch size. This (probably) goes for every amazing neural network you've ever seen. Don't be too afraid of reshaping your image with off-the-shelf sampling to the network's expected input size. Robust computer-vision netwo... | I was training a neural network with images of an eye that are shaped 36x60. So I can only predict the result using a 36x60 image? But in my application I have a video stream, this stream is divided into frames, for each frame 68 points of landmarks are predicted. In the eye range, I can select the eye point, and using... | 0 | 1 | 81 |
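A minimal OpenCV sketch of the resampling the answer above recommends (the file name is hypothetical): resample an arbitrary eye crop to the fixed 36x60 input the network was trained on. Note that cv2.resize takes (width, height):

```python
import cv2

eye_crop = cv2.imread('eye_region.png', cv2.IMREAD_GRAYSCALE)  # hypothetical crop
# Resample to 36 rows x 60 columns, matching the training images.
resized = cv2.resize(eye_crop, (60, 36), interpolation=cv2.INTER_AREA)
print(resized.shape)  # (36, 60)
```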
0 | 56,188,894 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T14:46:00.000 | 0 | 1 | 0 | Order data set using pandas dataframe based on lowest value inside | 56,188,801 | 0 | python,pandas,dataframe | Order first by pass then do it by date. This way you will be sure to have your df the way you want it | I have a dataset that I would like to order by date but second order with 'pass' value lowest inside of highest. The reason I don't have any code is because, I just have no idea where to begin.
dataframe input:
index date pass
0 11/14/2014 1
1 3/13/2015 1
2 3/20/2015 1
3 5/1/2015 2
4 5/1/2015 ... | 0 | 1 | 52 |
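A minimal pandas sketch of the answer's suggestion, with sample values taken from the question: parse the dates, then sort by date first and by pass second:

```python
import pandas as pd

df = pd.DataFrame({'date': ['11/14/2014', '3/13/2015', '3/20/2015', '5/1/2015'],
                   'pass': [1, 1, 1, 2]})
df['date'] = pd.to_datetime(df['date'])
# Rows are ordered by date; within equal dates, lower 'pass' values come first.
df = df.sort_values(['date', 'pass']).reset_index(drop=True)
print(df)
```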
0 | 56,202,144 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T14:56:00.000 | 0 | 2 | 0 | User word2vec model output in larger kmeans project | 56,188,976 | 0 | python,cluster-analysis,k-means,word2vec,unsupervised-learning | There are two common approaches.
Taking the average of all words. That is easy, but the resulting vectors tend to be, well, average. They are not similar to the keywords of the document, but rather similar to the most average and least informative words... My experiences with this approach are pretty disappointing, de... | I am attempting a rather large unsupervised learning project and am not sure how to properly utilize word2vec. We're trying to cluster groups of customers based on some stats about them and what actions they take on our website. Someone recommended I use word2vec and treat each action a user takes as a word in a "sente... | 0 | 1 | 71 |
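A sketch of the averaging approach described above, assuming the gensim 4 API and hypothetical action tokens: treat each customer's actions as a sentence, train Word2Vec, then average the word vectors to get one fixed-length vector per customer to feed into k-means:

```python
import numpy as np
from gensim.models import Word2Vec

sessions = [['view_item', 'add_to_cart', 'checkout'],   # hypothetical action "sentences"
            ['view_item', 'search', 'view_item']]
model = Word2Vec(sessions, vector_size=16, min_count=1)

def session_vector(tokens, model):
    # Simple mean of the vectors of all known tokens -- exactly the kind of
    # "average" representation the answer warns can be uninformative.
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

customer_vectors = np.vstack([session_vector(s, model) for s in sessions])
print(customer_vectors.shape)
```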
0 | 56,192,030 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T18:10:00.000 | 0 | 1 | 0 | Conversion from pixel to general Metric(mm, in) | 56,191,574 | 0 | python,opencv,image-processing | The image formation process implies taking a 2D projection of the real, 3D world, through a lens. In this process, a lot of information is lost (e.g. the third dimension), and the transformation is dependent on lens properties (e.g. focal distance).
The transformation between the distance in pixels and the physical dis... | I am using openCV to process an image and use houghcircles to detect the circles in the image under test, and also calculating the distance between their centers using euclidean distance.
Since this would be in pixels, I need the absolute distances in mm or inches, can anyone let me know how this can be done
Thanks in ... | 0 | 1 | 1,149 |
0 | 56,211,743 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-19T20:18:00.000 | 2 | 2 | 0 | How to reduce the number of features in text classification? | 56,211,670 | 1.2 | python,nlp,text-classification,naivebayes,countvectorizer | You can set the parameter max_features to 5000, for instance; it might help with overfitting. You could also tinker with max_df (for instance, set it to 0.95). | I'm doing dialect text classification and I'm using CountVectorizer with naive Bayes. The number of features is too large: I have collected 20k tweets with 4 dialects, and every dialect has 5000 tweets. The total number of features is 43K. I was thinking maybe that's why I could be having overfitting. Because the acc... | 0 | 1 | 471 |
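The two parameters from the accepted answer in context, as a minimal sketch with placeholder tweets:

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=5000,  # keep only the 5000 most frequent terms
                             max_df=0.95)        # drop terms appearing in >95% of tweets
X = vectorizer.fit_transform(['placeholder tweet one', 'placeholder tweet two'])
print(X.shape)
```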
0 | 56,238,216 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-20T07:23:00.000 | 0 | 2 | 0 | Comparing 2 huge (5-6 GB) csv files and counting the number of matching and unmatched rows | 56,216,081 | 0 | python,python-3.x,python-2.7 | I hope this algorithm works:
create a hash of every line in both files
now create a set of those hashes
take the difference and intersection of those sets. | There are 2 huge (5-6 GB each) csv files. Now the objective is to compare both these files: how many rows are matching and how many rows are not matching?
Let's say file1.csv contains 5 similar lines; we need to count them as 1, not 5.
Similarly, for file2.csv, if there is redundant data, we need to count it as 1.
I e... | 0 | 1 | 196 |
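A runnable sketch of the hashing idea in the answer above (file names are hypothetical): hash every line, build one set per file, then intersect and difference them. Because sets deduplicate, 5 identical lines collapse to one hash, matching the "count redundant rows as 1" requirement:

```python
import hashlib

def line_hashes(path):
    # One MD5 hash per line; a set keeps only distinct lines.
    with open(path, 'rb') as f:
        return {hashlib.md5(line).hexdigest() for line in f}

h1 = line_hashes('file1.csv')
h2 = line_hashes('file2.csv')
print('matching rows:', len(h1 & h2))
print('non-matching rows:', len(h1 ^ h2))
```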
0 | 56,221,192 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-20T12:05:00.000 | 1 | 1 | 0 | TensorFlow: Is it possible to map a function to a dataset using a for-loop? | 56,220,696 | 1.2 | python,tensorflow,tensor,map-function | No, not exactly.
A Dataset is inherently lazily evaluated and cannot be assigned to in that way - conceptually try to think of it as a pipeline rather than a variable: each value is read, passed through any map() operations, batch() ops, etc and surfaced to the model as needed. To "assign" a value would be to write it... | I have a tf.data.TFRecordDataset and a (computationally expensive) function, which I want to map to it. I use TensorFlow 1.12 and eager execution, and the function uses NumPy ndarray interpretations of the tensors in my dataset using EagerTensor.numpy(). However, code inside functions that are given to tf.Dataset.map()... | 0 | 1 | 582 |
0 | 56,222,077 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-20T13:10:00.000 | 0 | 1 | 0 | Differences between sklearn.model_selection.KFold and sklearn.model_selection.cross_validate with 'cv' parameter? | 56,221,694 | 0 | python,scikit-learn | I believe that KFold will simply carve your training data into 10 splits.
cross_validate, however, will also carve the data into 10 splits (with the cv=10 parameter) but it will also actually perform the cross-validation. In other words, it will run your model 10x and you will be able to report on the performance of yo... | Can I use cross_validate in sklearn with cv=10 to instead of using Kfold with n_splits=10? Does they work as same? | 0 | 1 | 171 |
0 | 56,226,834 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-20T17:25:00.000 | 0 | 1 | 0 | How to make multiple y axes zoomable individually | 56,225,582 | 1.2 | python,bokeh | Bokeh does not support this, twin axes are always linked to maintain their original relative scale. | I have a bokeh plot with multiple y axes. I want to be able to zoom in one y axis while having the other one's displayed range stay the same. Is this possible in bokeh, and if it is, how can I accomplish that? | 0 | 1 | 27 |
0 | 56,228,359 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-05-20T20:37:00.000 | 1 | 3 | 0 | Create a csv file that Excel will not mutate the data of when opening | 56,227,867 | 0.066568 | python,excel,python-3.x,string,csv | Have you tried expressly formatting the relevant column(s) to 'str' before exporting?
df['column_ex'] = df['column_ex'].astype('str')
df.to_csv('df_ex.csv')
Another workaround may be to open the Excel program (not the file), go to the Data menu, then Import from Text. Excel's import utility will give you options to define each col... | I am programmatically creating csv files using Python. Many end users open and interact with those files using excel. The problem is that Excel by default mutates many of the string values within the file. For example, Excel converts 0123 > 123.
The values being written to the csv are correct and display correctly if I... | 0 | 1 | 190 |
0 | 56,241,130 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-21T09:29:00.000 | 0 | 1 | 0 | How to build a resnet with Keras that trains and predicts the subclass from the main class? | 56,235,267 | 0 | python,keras,conv-neural-network,hierarchical,deep-residual-networks | The easiest way to do so would be to train multiple classifiers and build a hierarchical system by yourself.
One classifier detecting class A, B etc. After that make a new prediction for subclasses.
If you want only one single classifier:
What about just killing the first hierarchy of parent classes? Should be also qui... | I would like go implement a hierarchical resnet architecture. However, I could not find any solution for this. For example, my data structure is like:
class A
Subclass 1
Subclass 2
....
class B
subclass 6
........
So i would like to train and predict the main class and then the subclass of the chosen/predicted... | 0 | 1 | 286 |
0 | 70,215,508 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 5 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0.083141 | python,torch | I finally managed to solve this problem thanks to John Red's comment and serg06's answer. Here's what I've done:
Install Python 3.7.9 and not newer.
BUT make sure to install 64-bit Python.
Every other combination failed for me. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 63,728,120 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 0 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0 | python,torch | I tried every possible command for Windows, but nothing worked. I also tried using Pycharm package installation, everything throws the same error.
Finally installed Pytorch using Anaconda. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 71,016,393 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 0 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0 | python,torch | I want to pip install " torch>=1.4.0, torchvision>=0.5.0 ", but in a conda env with python=3.0, this is not right.
I tried creating a new conda env with python=3.7 and ran pip install " torch>=1.4.0, torchvision>=0.5.0 " again, and it was ok. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 56,327,491 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-21T14:26:00.000 | 0 | 2 | 0 | Can I import data from On-Premises SQL Server Database to Azure Machine Learning virtual machine? | 56,240,481 | 0 | python,sql,azure,jupyter-notebook,azure-machine-learning-service | You can always push the data to a supported source using a data movement/orchestration service. Remember that all Azure services are not going to have every source option like Power BI, Logic Apps or Data Factory...this is why data orchestration/movement services exist. | On the limited Azure Machine Learning Studio, one can import data from an On-Premises SQL Server Database.
What about the ability to do the exact same thing on a python jupyter notebook on a virtual machine from the Azure Machine Learning Services workspace ?
It does not seem possible from what I've found in the docume... | 0 | 1 | 1,048 |
0 | 60,404,903 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-05-22T08:17:00.000 | 5 | 1 | 0 | Python3 numpy import error PyCapsule_Import could not import module "datetime" | 56,252,250 | 1.2 | python-3.x,numpy,pip | In my case I had this problem, because my script was called math.py, which caused module import problems. Make sure your own python files do not share name with some of common module names. After I renamed my script to something else, I could run script normally. | I am trying to import numpy with python3 on MacOS mojave. I am getting this error. I don't know if it has something to do with a virtual environment or something like that.
Error:
PyCapsule_Import could not import module "datetime"
I have tried reinstalling python3 and reinstalling numpy | 0 | 1 | 2,527 |
0 | 56,254,221 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-22T08:37:00.000 | 0 | 1 | 0 | Zipping the files in S3 | 56,252,619 | 0 | python,amazon-web-services,amazon-s3,databricks | Amazon S3 does not have a zip/compress function.
You will need to download the files, zip them on an Amazon EC2 instance or your own computer, then upload the result. | I am having some text files in S3 location. I am trying to compress and zip each text files in it. I was able to zip and compress it in Jupyter notebook by selecting the file from my local. While trying the same code in S3, its throwing error as file is missing. Could someone please help | 0 | 1 | 40 |
0 | 56,291,641 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-22T12:38:00.000 | 1 | 1 | 0 | writing a pyspark dataframe to AWS - s3 from EC2 instance using pyspark code the time taken to complete write operation is longer than usual time | 56,256,999 | 0.197375 | python,amazon-web-services,amazon-s3,amazon-ec2,pyspark | It seems to be an issue with the cloud environment. Four things come to my mind, which you may check:
Spark version: For some older version of spark, one gets S3 issues.
Data size being written in S3, and also the format of data while storing
Memory/Computation issue: The memory or CPU might be getting utilized to maximum... | When we are writing a pyspark dataframe to s3 from EC2 instance using pyspark code the time taken to complete write operation is longer than usual time. Earlier it used to take 30 min to complete the write operation for 1000 records, but now it is taking more than an hour. Also after completion of the write operation ... | 1 | 1 | 75 |
0 | 56,260,405 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-22T15:23:00.000 | 1 | 3 | 0 | Load tensorflow model without importing tensorflow | 56,260,192 | 0.066568 | python,tensorflow | Pretty much, unless you brought tensorflow and all of its files with your application. Other than that, no, you cannot import tensorflow or have any tensorflow-dependent modules or code. | Is it possible to train a tensorflow model, then export it as something accessible without tensorflow? I want to apply some machine learning to a school project in which the code is submitted on an online portal - it doesn’t have tensorflow installed though, only standard libraries. I am able to upload additional files...
0 | 56,278,851 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-23T15:41:00.000 | 5 | 1 | 0 | What is Difference Between Flatten() and Dense() Layers in Convolutional Neural Network? | 56,278,769 | 0.761594 | python,machine-learning,neural-network,deep-learning,conv-neural-network | Flatten as the name implies, converts your multidimensional matrices (Batch.Size x Img.W x Img.H x Kernel.Size) to a nice single 2-dimensional matrix: (Batch.Size x (Img.W x Img.H x Kernel.Size)). During backpropagation it also converts back your delta of size (Batch.Size x (Img.W x Img.H x Kernel.Size)) to the origina... | I Have Serious Doubt Between Them. Can Anyone Please Elaborate With Examples and Some Ideas. | 0 | 1 | 2,581 |
0 | 56,299,140 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2019-05-24T20:15:00.000 | 4 | 2 | 0 | How to implement neural network pruning? | 56,299,034 | 0.379949 | python,tensorflow,optimization,deep-learning,inference | If you add a mask, then only a subset of your weights will contribute to the computation, hence your model will be pruned. For instance, autoregressive models use a mask to mask out the weights that refer to future data so that the output at time step t only depends on time steps 0, 1, ..., t-1.
In your case, since you... | I trained a model in keras and I'm thinking of pruning my fully connected network. I'm little bit lost on how to prune the layers.
The authors of 'Learning both Weights and Connections for Efficient Neural Networks' say that they add a mask to threshold the weights of a layer. I can try to do the same and fine tune the traine... | 0 | 1 | 2,526 |
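One possible reading of the masking idea, as a hedged Keras sketch (the layer and threshold here are placeholders, and this is not the paper's exact procedure): zero out low-magnitude weights of a Dense layer, then fine-tune. Note that to keep the weights pruned during fine-tuning, the mask must be reapplied after each update:

```python
import numpy as np
from tensorflow import keras

layer = keras.layers.Dense(4)
layer.build((None, 8))                    # stand-in for a layer from a trained model

w, b = layer.get_weights()
threshold = 0.1                           # hypothetical magnitude cutoff
mask = (np.abs(w) > threshold).astype(w.dtype)
layer.set_weights([w * mask, b])          # small weights are zeroed, i.e. "pruned"
```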
0 | 56,303,121 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-25T07:28:00.000 | 0 | 1 | 0 | Why is the head() function showing semicolon-separated data in my jupyter notebook? | 56,302,744 | 0 | python-2.x | When I opened the csv file, each row was shown as a single cell. I noticed that the delimiter was ';' (semicolon). I changed the delimiter to ',' (comma), and then each value in the csv file was displayed in its own cell.
Now, the head() method is displaying the results in a table structure as expected :)
Is there any limita... | I read the csv file using the pd.read_csv() method. On displaying, it is still semicolon-separated data. I expected a table structure
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import random
import os
df = pd.read_csv(r"E:\Python\data_full.csv")
df.hea... | 0 | 1 | 83 |
0 | 57,820,210 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-25T07:31:00.000 | 0 | 1 | 0 | Colab connection always broken | 56,302,763 | 0 | python | Colab restarts after 2-3 hours of inactivity, and you need to reconnect it. To avoid this, simply show some activity on the runtime. | I run CNN code on a Colab notebook, and it takes a long time. However, the connection always breaks after I have run it for 2 or 3 hours and cannot be reconnected. I was told the Colab virtual machine breaks the connection after 12 hours without any operations, so how can I avoid restarting my code after the connection breaks, or is there any easier wa... | 0 | 1 | 86 |
0 | 56,305,461 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-25T10:52:00.000 | 0 | 1 | 0 | I have many tiff files of neurons. I was wondering if there is a way to read the strength of light where neurons are and import that data into a file | 56,304,143 | 0 | python-3.x,image,graph,tiff | You can read your file as an image and convert it to a black and white image. Then, if you can specify which pixels are located at each neuron in the image, you can check the pixel values to determine whether the neuron is on or off.
Anyway, I suggest searching Python's image processing packages; the solution for your problem is ea... | I am currently doing research at a university and wanted to create custom code that would be able to analyze hours worth of images of neurons and determine if the neurons are on or off. I want to write the code myself and was just wondering where I can get started. For example, what kinds of things can I import into py... | 0 | 1 | 34 |
0 | 56,315,167 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-26T02:09:00.000 | 2 | 1 | 0 | Comparing results of neural net on two subsets of features | 56,310,122 | 1.2 | python,neural-network,lstm,data-science,feature-extraction | Since you have mentioned that, using the different feature extraction methods, you are only getting slightly different feature sets, the results are also similar. Also, since your LSTM model is then also getting almost similar RMSE values, the models are able to generalize well and learn similarly and extract importan... | I am running an LSTM model on a multivariate time series data set with 24 features. I have run feature extraction using a few different methods (variance testing, random forest extraction, and Extra Tree Classifier). Different methods have resulted in a slightly different subset of features. I now want to test my LSTM m... | 0 | 1 | 20 |
0 | 56,332,236 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-27T20:37:00.000 | 0 | 1 | 0 | I have repeated date values in my x axis. How do i create a different row with a single average of those values? | 56,332,208 | 0 | python,datetime,plot | Assuming you're using pandas:
df.groupby('date')['price'].mean() | I have been working with a dataset which contains information about houses that have been sold on a particular market. There are two columns, 'price', and 'date'.
I would like to make a line plot to show how the prices of this market have changed over time.
The problem is, I see that some houses have been sold at the s... | 0 | 1 | 14 |
0 | 56,371,107 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2019-05-27T23:00:00.000 | 0 | 1 | 0 | Setting legend entries manually | 56,333,244 | 1.2 | python,excel,openpyxl | You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts | I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone know how to do this.
More specifically
class openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(),
layout=None, overlay=None, s... | 0 | 1 | 621 |
0 | 56,335,423 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-28T00:51:00.000 | 0 | 1 | 0 | Time series prediction: need help using series with different periods of days | 56,333,832 | 0 | python,statistics,time-series,prediction | Based on what I understand after reading your question, I would approach this problem in the following way.
For each day, find how far out the event is from that day. The max value for this number is 46 in 2016, 77 in 2017, etc. Scale this value by the max day.
Use the above variable, along with day of the month, day o... | There's this event that my organization runs, and we have the historic ticket sales data from 2016, 2017, and 2018. This data contains the quantity of tickets sold per day, considering the whole sales period.
For the 2019 edition of this event, I was asked to make a prediction of the quantity of tickets sold per day, consid... | 0 | 1 | 101 |
0 | 56,335,363 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-05-28T04:57:00.000 | 1 | 1 | 0 | Why does PyTorch gather function require index argument to be of type LongTensor? | 56,335,215 | 1.2 | python,pytorch | By default all indices in pytorch are represented as long tensors - allowing for indexing very large tensors beyond just 4GB elements (maximal value of "regular" int). | I'm writing some code in PyTorch and I came across the gather function. Checking the documentation I saw that the index argument takes in a LongTensor, why is that? Why does it need to take in a LongTensor instead of another type such as IntTensor? What are the benefits? | 0 | 1 | 39 |
0 | 70,738,720 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-05-28T13:45:00.000 | 0 | 3 | 0 | How to pass different set of data to train and test without splitting a dataframe. (python)? | 56,343,657 | 0 | python,scikit-learn,linear-regression,data-science,training-data | Please, skillsmuggler: what about X_train and X_test? How can I define them? When I try to do that, it says NameError: name 'X_train' is not defined | I have gone through multiple questions that help divide your dataframe into train and test, with scikit, without etc.
But my question is I have 2 different csvs ( 2 different dataframes from different years). I want to use one as train and other as test?
How to do so for LinearRegression / any model? | 0 | 1 | 1,436 |
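A sketch for the two-dataframe setup asked about here (file and column names are hypothetical): no splitting needed, just fit on one CSV and evaluate on the other:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.read_csv('year_2017.csv')      # hypothetical training-year file
test = pd.read_csv('year_2018.csv')       # hypothetical test-year file

X_train, y_train = train.drop(columns=['target']), train['target']
X_test, y_test = test.drop(columns=['target']), test['target']

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))        # R^2 on the held-out year
```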
0 | 56,390,413 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-28T14:35:00.000 | -1 | 2 | 0 | Gaussian Mixture Models for pixel clustering | 56,344,581 | 1.2 | python-3.x,scikit-learn,cluster-analysis,gmm | It's not clustering if you use labeled training data!
You can, however, use the labeling function of GMM clustering easily.
For this, compute the prior probabilities and the mean and covariance matrices, and invert them. Then classify each pixel of the new image by the maximum probability density (weighted by prior probabilities) ... | I have a small set of aerial images where different terrains visible in the image have been labelled by human experts. For example, an image may contain vegetation, river, rocky mountains, farmland etc. Each image may have one or more of these labelled regions. Using this small labeled dataset, I would like t...
0 | 58,932,030 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-29T07:31:00.000 | 1 | 2 | 0 | Neural Network Prediction Interval | 56,355,244 | 0.099668 | python,machine-learning,neural-network,prediction,confidence-interval | One approach is to calculate the residuals for the validation set; they will have a distribution. Calculate the mean and variance of the residual distribution, and if you are looking for 95%, add +/- 2 sigma to your prediction, and that should be your prediction interval. | I created a neural network in Python for a regression problem. I would like to have prediction intervals for each value. How would I go about approaching this since neural networks are nonlinear? | 0 | 1 | 416 |
0 | 56,373,034 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-29T07:51:00.000 | 0 | 2 | 0 | Best Clustering Algorithm for High Dimensional Vectors | 56,355,551 | 0 | python,machine-learning,cluster-analysis | 45 dimensions is not particularly high. It's at best "medium" dimensionality, so most algorithms could work.
Usually it's not so much a matter of the number of dimensions, but rather how well they are preprocessed. With bad preprocessing, 2 dimensions can be a problem if the signal in one attribute is drowned by the no... | I am attempting to use some sort of clustering method on a set of datapoint vectors which have 45 dimensions. I'm fairly new to clustering data points and was wondering if anyone could point out appropriate methods to utilize? I was attempted using K-Means Clustering but was wondering if the dimensionality of my data m... | 0 | 1 | 323 |
0 | 56,358,796 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-05-29T10:18:00.000 | 2 | 1 | 0 | How to generate all posible binary nxm matrices, where the sum of each row is 1 | 56,358,243 | 1.2 | python,r,matlab,matrix | TLDR: No.
Let's look at the simplest example: 2 DCs. Your possible rows will be:
(1,0)
(0,1)
Now you want to construct all possible 2x50 matrices. Their number is 2^50 (2 possible rows in 50 rows). It is equal to:
1125899906842624
Suppose that each matrix takes 100 bytes of storage. Then all 2x50 matrices together will take:
(2**5... | I'm working on an assignment where i have to assign 1 up to 10 distribution centers to all US states. I have made a model in excel to calculate all the costs, and clearly the goal of the assignment is to find the cheapest way. I have 50 rows (for each state) and 10 columns (for all possible DC locations). My model is ... | 0 | 1 | 149 |
0 | 56,368,801 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-29T13:17:00.000 | 0 | 1 | 0 | Which are the possible ways to retrain a model made in IBM Watson Knowledge Studio? | 56,361,575 | 1.2 | python-3.x,ibm-watson,watson-nlu,watson-knowledge-studio | IBM Watson Knowledge Studio does not support online training to retrain the existing model with new data. To adapt to new data, you need to train a brand new model with both the new data and the existing data. | I am working on a knowledge-based chatbot creation on IBM Watson, and I have trained my custom model on IBM Watson Knowledge Studio for an agricultural database. Now, if someone asks about information that is not available in our dataset, how can we retrain/improve the model with that new data?
I am ... | 0 | 1 | 161 |
0 | 68,811,985 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-30T05:38:00.000 | 0 | 6 | 0 | How do you produce a random 0 or 1 with random.rand | 56,372,240 | 0 | python,numpy | Is there a reason to specifically use np.random.rand? This function outputs a float as noted in the question and previous answers, and you would need thresholding to obtain an int.
scipy.stats.bernoulli(p).rvs() directly outputs a 1 with probability p and 0 with probability 1-p. | I'm trying to produce a 0 or 1 with numpy's random.rand.
np.random.rand() produces a random float between 0 and 1 but not just a 0 or a 1.
Thank you. | 0 | 1 | 6,029 |
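Two common NumPy one-liners consistent with the answers above (a minimal sketch):

```python
import numpy as np

print(np.random.randint(0, 2))        # fair 0 or 1
print(int(np.random.rand() < 0.3))    # 1 with probability 0.3, else 0
```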
0 | 56,383,864 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-30T15:14:00.000 | 1 | 1 | 0 | Cluster analysis of large dataset containing only categorical variables | 56,380,999 | 1.2 | python,cluster-analysis,large-data | Instead of clustering, what you should likely be using is frequent pattern mining.
One-hot encoding variables often does more harm than good. Either use a well-chosen distance for such data (could be as simple as Hamming or Jaccard on some data sets) with a suitable clustering algorithm (e.g., hierarchical, DBSCAN, but... | I have been given the task of clustering our customers base on products they bought together. My data contains 500,000 rows related to each customer and 8,000 variables (product ids). Each variable is a one hot encode vector that shows if a customer bought that product or not.
I have tried to reduce the dimensions of ... | 0 | 1 | 452 |
0 | 56,382,933 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-30T17:06:00.000 | 0 | 1 | 0 | Is it possible to run ipython with some packages already imported? | 56,382,641 | 0 | python,numpy,ipython | In the case of Mac add a script like load_numpy.py under ~/.ipython/profile_default/startup/ directory and add the import statements you need to that script. Every time you run ipython, all the scripts in the startup directory will be executed first and so the imports will be there. in case of Ubuntu add the file to ~/... | Is it possible to run ipython with some packages already imported?
almost every time when I run ipython I do import numpy as np, is it possible to automate this process? i.e. just after I run ipython I want to be able to write something like np.array([0,1]). Is it possible? | 0 | 1 | 50 |
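Concretely, the startup-script approach from the answer looks like this (the file name is arbitrary; every .py file in this directory runs, in alphabetical order, when IPython starts):

```python
# Contents of ~/.ipython/profile_default/startup/00-imports.py
import numpy as np
```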
0 | 62,975,319 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2019-05-30T18:01:00.000 | 1 | 4 | 0 | How can I generate a requirements.txt file for a package not available on my development platform? | 56,383,379 | 0.049958 | python,pip | You could run pip-compile-multi in a Docker container. That way you'd be running it under Linux, and you could do that on your Mac or other dev machines. As a one-liner, it might look something like this:
docker run --rm --mount type=bind,src=$(pwd),dst=/code -w /code python:3.8 bash -c "pip install pip-compile-multi &... | I'm trying to generate requirements/dev.txt and prod.txt files for my python project. I'm using pip-compile-multi to generate them from base.in dev.in and prod.in files. Everything works great until I add tensorflow-gpu==2.0.0a0 into the prod.in file. I get this error when I do: RuntimeError: Failed to pip-compile requ... | 0 | 1 | 663 |
0 | 56,388,545 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-05-30T20:56:00.000 | 1 | 2 | 0 | what follows after clustering | 56,385,518 | 0.099668 | python,deep-learning,cluster-analysis,k-means,sklearn-pandas | Since clustering is unsupervised, there isn't an objective way to evaluate it. Typically, you just observe and see whether there are some common features for a certain cluster. | I am trying to cluster images based on their similarities with SIFT and Affinity Propagation, I did the clustering but I just don't want to visualize the results. How can I test with a random image from the obtained labels? Or maybe there's more to it?
Other than data visualization, I just don't know what follows after... | 0 | 1 | 38 |
0 | 56,388,587 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-05-30T20:56:00.000 | 0 | 2 | 0 | what follows after clustering | 56,385,518 | 0 | python,deep-learning,cluster-analysis,k-means,sklearn-pandas | If you have ground-truth cluster labels, you can measure the Jaccard index or something in that vein to get an error score. Then, you can tweak your distance measure, parameters, etc. to minimize the error score.
You can also do some clustering in order to group your data as the divide step in divide-and-conquer algorith... | I am trying to cluster images based on their similarities with SIFT and Affinity Propagation, I did the clustering but I just don't want to visualize the results. How can I test with a random image from the obtained labels? Or maybe there's more to it?
Other than data visualization, I just don't know what follows after... | 0 | 1 | 38 |
0 | 60,986,471 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-31T03:19:00.000 | 4 | 1 | 0 | ValueError: ('Could not interpret initializer identifier:', 0.2) | 56,388,245 | 0.664037 | tensorflow,keras,python-3.5 | you should change it to X = layers.Dense(neurons, activation=activation, kernel_initializer=keras.initializers.Constant(weight_init))(X) | Traceback (most recent call last): File
"AutoFC_AlexNet_randomsearch_CalTech101_v2.py", line 112, in
X = layers.Dense(neurons, activation=activation, kernel_initializer=weight_init)(X) File
"/home/shabbeer/NAS/lib/python3.5/site-packages/keras/legacy/interfaces.py",
line 91, in wrapper
return fun... | 0 | 1 | 3,852 |
0 | 56,556,679 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-31T08:28:00.000 | 1 | 1 | 0 | How to fix '_pickle.UnpicklingError: invalid load key, '<' ' error in Pytorch | 56,391,392 | 0.197375 | python-3.x,pytorch | The cause of the problem was that the previous download had not finished. When I deleted the original file and re-downloaded it, the problem was solved. | I encountered this problem when I ran the official maskrcnn-benchmark code from facebookresearch, which failed when loading the pre-trained model.
The code runs on a remote server at the school and the graphics card is an NVIDIA P100.
checkpointer = DetectronCheckpointer(
cfg, model, optimizer, scheduler, o... | 0 | 1 | 7,056 |
0 | 56,402,618 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-31T22:41:00.000 | 2 | 1 | 0 | Is it feasible to run a Support Vector Machine Kernel on a device with <= 1 MB RAM and <= 10 MB ROM? | 56,402,429 | 1.2 | python,c,performance,memory-management,svm | If you're that strapped for space, you'll probably want to skip scikit and simply implement the math yourself. That way, you can cycle through the data in structures of your own choosing. Memory requirements depend on the class of SVM you're using; a two-class linear SVM can be done with a single pass through the dat... | Some preliminary testing shows that a project I'm working on could potentially benefit from the use of a Support-Vector-Machine to solve a tricky problem. The concern that I have is that there will be major memory constraints. Prototyping and testing is being done in python with scikit-learn. The final version will be ... | 0 | 1 | 158 |
0 | 56,404,614 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2019-06-01T06:29:00.000 | 3 | 6 | 0 | How to generate a random sample of points from a 3-D ellipsoid using Python? | 56,404,399 | 0.099668 | python,math,random,geometry,ellipse | Consider using Monte-Carlo simulation: generate a random 3D point; check if the point is inside the ellipsoid; if it is, keep it. Repeat until you get 1,000 points.
P.S. Since the OP changed their question, this answer is no longer valid. | I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid. | 0 | 1 | 2,545 |
0 | 72,498,276 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2019-06-01T06:29:00.000 | 0 | 6 | 0 | How to generate a random sample of points from a 3-D ellipsoid using Python? | 56,404,399 | 0 | python,math,random,geometry,ellipse | One way of doing this which generalises for any shape or surface is to convert the surface to a voxel representation at arbitrarily high resolution (the higher the resolution the better but also the slower). Then you can easily select the voxels randomly however you want, and then you can select a point on the surface w... | I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid. | 0 | 1 | 2,545 |
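A hedged NumPy sketch for this question: draw uniform points on the unit sphere (normalized Gaussians) and scale by the semi-axes a, b, c. The result lies exactly on the ellipsoid surface, but is only approximately uniform on it; exact uniformity would need an extra rejection step weighted by the local area distortion:

```python
import numpy as np

a, b, c = 3.0, 2.0, 1.0                          # hypothetical semi-axes
v = np.random.normal(size=(1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)    # uniform on the unit sphere
points = v * np.array([a, b, c])                 # on x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
```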
0 | 56,410,719 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-01T11:32:00.000 | 0 | 1 | 0 | How to choose split variables for continuous features for a decision tree | 56,406,338 | 0 | python,machine-learning,artificial-intelligence,decision-tree,machine-learning-model | A decision tree works by calculating entropy and information gain to determine the most important feature. Indeed, 8000 rows is not too much for a decision tree. Generally, a random forest is similar to a decision tree, working as an ensemble; you can review and try it. Moreover, maybe the slowness is related to another thi... | I am currently implementing the decision tree algorithm. If I have continuous-featured data, how do I decide a splitting point? I came across a few resources which say to choose mid points between every two points, but considering I have 8000 rows of data this would be very time consuming. The output/feature label is having c... | 0 | 1 | 89 |
0 | 56,409,739 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-01T12:48:00.000 | 0 | 1 | 0 | Removing white space at the beginning of values in multiple columns | 56,406,907 | 0 | python,pandas,strip | You could try using
df['Name'] = df['Name'].str.replace(" ", "")
this would delete all whitespaces though. | I found a solution to this:
df['Name'] = df['Name'].str.lstrip()
df['Parent'] = df['Parent'].str.lstrip()
I have this DataFrame df (there is a white space at the left of "A" and "C" in the second row, which doesn't show well here). I would like to remove that space.
Mark Name Parent age
10 A C 1
12 A C 2
13 B ... | 0 | 1 | 70 |
0 | 56,407,510 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-01T13:01:00.000 | 0 | 1 | 0 | Interpretation of Training, Testing (Dev) and Validation Score in Machine Learning | 56,406,988 | 1.2 | python,validation,machine-learning,scikit-learn,data-science | As you explained in the comments, your test set is the set you used to tune your parameters and the validation set is the set that your model didn't use for training.
Considering that, it's natural that your Validation scores are lower than other scores.
When you're training a machine learning model, you show the tra... | I have trained a Machine Learnig Model using Sklearn and looked at different scores for the traing, testing (dev) and validation set.
Here are the scores:
Accuracy on Train: 94.5468%
Accuracy on Test: 74.4646%
Accuracy on Validation: 65.6548%
Precision on Train: 96.7002%
Precision on Test: 85.2289%
Precision on ... | 0 | 1 | 330 |
0 | 56,411,523 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-06-01T17:31:00.000 | 1 | 1 | 0 | Create a new vector model in gensim | 56,408,959 | 1.2 | python,vector,gensim,word2vec | Word-vectors are generally only comparable to each other if they were trained together.
So, if you want to have vectors for all of 'new', 'york', and 'new_york', you should prepare a corpus which includes them all, in a variety of uses, and train a Word2Vec model from that. | I already trained a word2vec model with gensim library. For example, my model contains vectors for 2 words: "new" and "york". However, I also want to train a vector for the word "new york", so I transform "new york" into "new_york" and train a new vector model. Finally, I want to combine 3 vectors: vector of the word "... | 0 | 1 | 103 |
0 | 56,416,556 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-06-02T11:32:00.000 | 0 | 3 | 0 | Keras How To Resume Training With Adam Optimizer | 56,414,605 | 0 | python,tensorflow,machine-learning,keras | What about model.load('saved.h5'). It should also load the optimizer if you save it with model.save() though. | My model requires to run many epochs in order to get decent result, and it takes few hours using v100 on Google Cloud.
Since I'm on a preemptible instance, it kicks me off in the middle of training. I would like to be able to resume from where it left off.
In my custom CallBack, I run self.model.save(...) in on_epoch_e... | 0 | 1 | 6,918 |
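A sketch of one way to preserve Adam's state across preemptions (the file name and tiny model are assumptions): save the whole model each epoch, architecture, weights, and optimizer state, then reload and resume with initial_epoch:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x, y = np.random.rand(32, 4), np.random.rand(32, 1)    # placeholder data

# Checkpoint the full model each epoch (weights + Adam state + architecture).
ckpt = keras.callbacks.ModelCheckpoint('checkpoint.h5')
model.fit(x, y, epochs=3, callbacks=[ckpt], verbose=0)

# After a preemption: load_model restores the optimizer state too, and
# initial_epoch lets the epoch counter continue where it stopped.
model = keras.models.load_model('checkpoint.h5')
model.fit(x, y, epochs=6, initial_epoch=3, verbose=0)
```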
0 | 56,464,549 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-02T16:35:00.000 | 0 | 1 | 0 | Is it possible to model a min-max-problem using pyomo | 56,416,916 | 0 | python,pyomo | I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve each possibility of max(g_m(x)) separately, then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not ... | Is it possible to formulate a min-max optimization problem of the following form in pyomo:
min(max(g_m(x))) s.t. L
where g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that ... | 0 | 1 | 966 |
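A hedged sketch of the standard epigraph reformulation (a swapped-in technique, not taken from the answer above), which turns min max_m g_m(x) into a plain minimization: introduce an auxiliary variable t, minimize t, and add g_m(x) <= t for every m alongside the linear constraints L. The g_m used here are hypothetical:

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var()
m.t = pyo.Var()

# One epigraph constraint per g_m: g_m(x) <= t.
m.g1 = pyo.Constraint(expr=m.x**2 <= m.t)           # hypothetical g_1
m.g2 = pyo.Constraint(expr=(m.x - 1)**2 <= m.t)     # hypothetical g_2

# Minimizing t is then equivalent to minimizing max(g_1(x), g_2(x)).
m.obj = pyo.Objective(expr=m.t, sense=pyo.minimize)
```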
0 | 56,421,074 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-02T20:10:00.000 | 0 | 1 | 0 | Multiple inputs for Dijkstra's algorithm | 56,418,597 | 0 | python,algorithm,routing,navigation,dijkstra | Does your requirement have a function through which the two matrices are related?
If yes, then on the basis of that function, find a new weight matrix. Use this matrix on the flow path.
If no, then try running matrix one first and then matrix two, and vice versa, and choose the one whose cost output corresponds to your requireme... | The inputs to Dijkstra's algorithm are a directed and weighted graph, generally represented by an adjacency (distance) matrix and a start node.
I have two different distance matrices to be used as inputs, representing two different infrastructure (e.g., roads and cycle ways). Any ideas how modify Dijkstra's algorithm t... | 0 | 1 | 130 |
0 | 62,840,501 | 0 | 0 | 0 | 0 | 2 | false | 38 | 2019-06-04T08:28:00.000 | 1 | 5 | 0 | Pipenv stuck "⠋ Locking..." | 56,440,090 | 0.039979 | python,pip,pipenv | try doing pipenv --rm - removes virtual environment
then pipenv shell - this will again initiate virtual env
then pipenv install installs all the packages again
worked for me | Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update it hangs for a really long time with a message and loading screen that says it's still locking. Why? What do I need to do? | 0 | 1 | 24,826 |
0 | 71,402,278 | 0 | 0 | 0 | 0 | 2 | false | 38 | 2019-06-04T08:28:00.000 | 1 | 5 | 0 | Pipenv stuck "⠋ Locking..." | 56,440,090 | 0.039979 | python,pip,pipenv | I had this happen to me just now. Pipenv got stuck locking forever, 20+ minutes with no end in sight, and pipenv --rm didn't help.
In the end, the problem was that I had run pipenv install "boto3~=1.21.14" to upgrade boto3 from boto3 = "==1.17.105". But I had other conflicting requirements (in my case, botocore = "==1.... | Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update it hangs for a really long time with a message and loading screen that says it's still locking. Why? What do I need to do? | 0 | 1 | 24,826 |
0 | 56,446,962 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-04T12:14:00.000 | 0 | 2 | 0 | google colab GPU processing becomes very slow after keras and tensorflow upgrade | 56,443,694 | 0 | python,tensorflow,keras,google-colaboratory | You can reset your backend using the Runtime -> Reset all runtimes... menu item. (This is much faster than kill -9 -1, which will take some time to reconnect.) | I've upgraded my tensorflow and keras with this code:
!pip install tf-nightly-gpu-2.0-preview
Now every epoch of model training takes 22 min, whereas it took 17 sec before this upgrade!!!
I did downgrade tensorflow and keras, but it did not help! | 0 | 1 | 1,597 |
0 | 56,461,379 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-05T07:50:00.000 | 0 | 1 | 0 | How to specify integration limits in scipy.integrate.trapz or simps? | 56,456,248 | 0 | python,python-3.x,scipy,numerical-integration | but at what value does it start x
It doesn't matter for integration. You just specify the values of y and tell scipy how far the xs are apart. Whether they start at -5 or +26 doesn't influence the value of the integral. | I understand that in scipy.integrate.trapz(y, x=None, dx=1.0, axis=-1) or simps, the min and max values of x (if specified) are taken to be the limits of the integral, but what happens when x=None? It has dx to figure out the spacing in the x values but at what value does it start x?
I tried it with and without x, fro... | 0 | 1 | 608 |
0 | 61,623,908 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-06-05T15:11:00.000 | 2 | 1 | 0 | DataLoader num_workers vs torch.set_num_threads | 56,463,317 | 1.2 | python,machine-learning,pytorch | The num_workers for the DataLoader specifies how many parallel workers to use to load the data and run all the transformations. If you are loading large images or have expensive transformations then you can be in situation where GPU is fast to process your data and your DataLoader is too slow to continuously feed the G... | Is there a difference between the parallelization that takes place between these two options? I’m assuming num_workers is solely concerned with the parallelizing the data loading. But is setting torch.set_num_threads for training in general? Trying to understand the difference between these options. Thanks! | 0 | 1 | 1,236 |
0 | 56,469,920 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-06-05T20:58:00.000 | 0 | 1 | 0 | Keras prints out result of every batch in a single epoch, why is that? | 56,467,912 | 0 | python,keras | That looks like an interaction with a notebook/kernel environment.
You may prefer the results if you change verbose=1 to verbose=2. | As described in Keras documentation, the verbose=1 asks the keras to print out results in a progress bar. But sometimes keras prints out the results of every batch, which makes a very messy printout report (see below). I wonder why is that? I mean, the only setup is the parameter of verbose, isn't it?
My code is simple... | 0 | 1 | 738 |
1 | 56,469,292 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-05T23:27:00.000 | 0 | 1 | 0 | How to use python with qr scanner devices? | 56,469,264 | 0 | python,input,qr-code,barcode-scanner | Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except really quickly), and there is an end-of-line character at the end (like an Enter).
Using a python script all you need to do is start the script, connect a scanner, scan something, and get the input (STDIN) of the script. If ... | I want to create a program that can read and store the data from a qr scanning device but i don't know how to get the input from the barcode scanner as an image or save it in a variable to read it after with openCV | 0 | 1 | 601 |
0 | 56,472,750 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T06:24:00.000 | 1 | 2 | 0 | What are the available estimators which we can use as estimator in onevsrest classifier? | 56,471,908 | 0.099668 | python-3.x,scikit-learn | The following can be used for classification problems:
Logistic Regression
SVM
RandomForest Classifier
Neural Networks | I want to know briefly about all the available estimators like logisticregression or multinomial regression or SVMs which can be used for classification problems.
These are the three I know. Are there any others like these? And, relatively, how long do they take to run, and how accurate can they get compared to these? | 0 | 1 | 26 |
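A minimal sketch of plugging one of these estimators into OneVsRestClassifier (placeholder data):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

X = np.random.rand(60, 5)                       # placeholder features
y = np.random.randint(0, 3, 60)                 # placeholder labels, 3 classes
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:5]))
```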
0 | 56,472,341 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T06:51:00.000 | 0 | 1 | 0 | Getting error while trying to fit model - The kernel appears to have died. It will restart automatically | 56,472,233 | 1.2 | python,tensorflow,jupyter | There could be many reasons for the kernel dying; the most common one I encounter is that I have run out of memory.
If you are training a particularly large model try temporarily reducing it and bringing the batch_size down to 1
(I don't think the warning message is related - this is just giving advance warning of ... | I am trying to fit a model using keras but I get the following error -
WARNING:tensorflow:From /anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast inst... | 0 | 1 | 359 |
0 | 56,473,975 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-06T08:34:00.000 | 2 | 2 | 0 | conv net save weight and new test set | 56,473,760 | 0.197375 | python,machine-learning,keras,conv-neural-network | Yes, for fair evaluation no sample in the test set should be seen during training | I'm using a conv net for image classification.
There is something I don't understand theoretically.
For training, I split my data 60% train / 20% validation / 20% test.
I save weights when the metric on the validation set is the best (I have the same performance on the training and validation set).
Now, I do a new split. Some data from training se... | 0 | 1 | 26 |
0 | 56,474,029 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-06T08:34:00.000 | 1 | 2 | 0 | conv net save weight and new test set | 56,473,760 | 0.099668 | python,machine-learning,keras,conv-neural-network | The whole purpose of having a test set is that the model must never see it until the very last moment.
So if your model trained on some of the data in your test set, it becomes useless and the results it gives you will have no meaning.
So basically:
1. Train on your train set
2. Validate on your validation set
3. Repeat... | I'm using a conv net for image classification.
There is something I don't understand theoretically.
For training, I split my data 60% train / 20% validation / 20% test.
I save weights when the metric on the validation set is the best (I have the same performance on the training and validation set).
Now, I do a new split. Some data from training se... | 0 | 1 | 26 |
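A sketch of the 60/20/20 split described in the question, using two calls to scikit-learn's train_test_split (placeholder data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 8)                      # placeholder features
y = np.random.randint(0, 2, 100)                # placeholder labels

X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2)
# 0.25 of the remaining 80% gives the 20% validation share.
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25)
```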
0 | 56,577,125 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T09:28:00.000 | 0 | 1 | 1 | Cassandra write throttling with multiple clients | 56,474,650 | 1.2 | python,cassandra,datastax-python-driver | The solution I came up with was to make both data producers write to the same queue.
To meet the requirement that the low-priority bulk data doesn't interfere with the high-priority live data, I made the producer of the low-priority data check the queue length and then add a record to the queue only if the queue length... | I have two clients (separate docker containers) both writing to a Cassandra cluster.
The first is writing real-time data, which is ingested at a rate that the cluster can handle, albeit with little spare capacity. This is regarded as high-priority data and we don't want to drop any. The ingestion rate varies quite a lo... | 0 | 1 | 246 |
0 | 56,476,394 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T11:10:00.000 | 1 | 1 | 0 | Is Keras Sequential fit the same as several train_on_batch calls? | 56,476,357 | 1.2 | python,tensorflow,keras | If I do model.fit(x, y, epochs=5) is this the same as
for i in range(5) model.train_on_batch(x, y)?
Yes.
Your understanding is correct.
There are a few more bells and whistles to .fit() (we can, for example, artificially control the number of batches to consider an epoch rather than exhausting the whole dataset) bu... | Just confused as to the differences between keras.sequential train_on_batch and fit. Is the only difference that, with train_on_batch, you automatically pass over the data only once whereas with fit you specify this with the no. of epochs?
If I do model.fit(x, y, epochs=5) is this the same as
for i in range(5)
mo... | 0 | 1 | 49 |
0 | 56,484,661 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T12:14:00.000 | 0 | 1 | 0 | Scipy spline interpolation: Determine array length of vector of knots / B-spline coefficients in tck before actual computation | 56,477,397 | 1.2 | python,arrays,scipy,spline | Short answer: no, not easily. Dierckx Fortran library, which splrep wraps, uses some fairly non-trivial logic for determining the knot vector, and it's all baked into the Fortran code. So, the only way is to carefully trace the latter. It's available from netlib, also scipy/interpolate/fitpack | Is it somehow possible to determine the array length of the arrays in the tck tuple returned by scipy.interpolate.splprep before computing the values?
I have to fit a spline interpolation to noisy data with 5 million data points (or less, can be varying).
My observation is that the interpolation at an array length of ... | 0 | 1 | 161 |
0 | 56,499,818 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T12:39:00.000 | 0 | 1 | 0 | Why a convolution neural network model which works well on new test images fails on video stream? | 56,477,793 | 0 | python,image-processing,video-streaming,conv-neural-network,video-processing | Assuming the neural network works well on images, it should work the same on frames of a video stream. In the end, a video stream is a sequence of images.
The problem is not that it doesn't work on video streams; it simply does not work on images of the kind that appear in the video stream.
It is hard t... | I have implemented a convolutional neural network by transfer learning using VGG19 to classify 5 different traffic signs. It works well with new test images, but when I apply the model to a video stream it doesn't classify them correctly. | 0 | 1 | 30 |
0 | 56,480,077 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-06-06T14:39:00.000 | 1 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 0.066568 | python,tensorflow | If you declare something with tf.constant() you won't be able to change the value in the future, but tf.Variable() lets you change the variable later; you can assign some other value to it. If it is not trainable, then the gradient won't flow through it. | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available when tf.constant is available. | 0 | 1 | 739 |
0 | 56,480,023 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-06-06T14:39:00.000 | 1 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 0.066568 | python,tensorflow | There may be other differences, but one that comes to mind is that, for some TF graphs, you want a variable to be trainable sometimes and frozen other times. For example, for transfer learning with convnets you want to freeze layers closer to the inputs and only train layers closer to the output. It would be inconven... | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available when tf.constant is available. | 0 | 1 | 739 |
0 | 56,480,281 | 0 | 1 | 0 | 0 | 3 | true | 2 | 2019-06-06T14:39:00.000 | 2 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 1.2 | python,tensorflow | A few reasons I can tell you off the top of my head:
If you declare a tf.Variable, you can change its value later on if you want to. On the other hand, tf.constant is immutable, meaning that once you define it you can't change its value.
Let's assume that you have a neural network with multiple weight matrices, for the... | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available when tf.constant is available. | 0 | 1 | 739 |
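A minimal sketch (TensorFlow 2.x eager mode, so illustrative rather than era-accurate for this question) of the mutability difference the answers describe:

```python
# Variables are mutable; constants are not.
import tensorflow as tf

v = tf.Variable(3.0)
v.assign(5.0)        # fine: a Variable can be reassigned later
v.assign_add(1.0)    # in-place updates are also available

c = tf.constant(3.0)
# c has no assign() method; once defined, its value is fixed.
print(v.numpy(), c.numpy())  # 6.0 3.0
```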
0 | 56,482,246 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T17:09:00.000 | 0 | 1 | 0 | Is there an efficient python implementation of spectral clustering for large, dense matrices? | 56,482,181 | 0 | python,bigdata | I'd recommend performing PCA to project the data to a lower dimensionality, and then using mini-batch k-means. | Currently I'm using the spectral clustering method from sklearn for my dense 7000x7000 matrix which performs very slowly and exceeds an execution time of 6 hours. Is there a faster implementation of spectral clustering in Python? | 0 | 1 | 244 |
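A hedged sketch of the suggested PCA + mini-batch k-means pipeline; `X` is assumed to be a plain feature matrix, and the component/cluster counts are placeholders (if the 7000x7000 matrix is an affinity matrix rather than features, a spectral embedding step would be needed first):

```python
# Reduce dimensionality with PCA, then cluster with mini-batch k-means.
from sklearn.decomposition import PCA
from sklearn.cluster import MiniBatchKMeans

X_reduced = PCA(n_components=50).fit_transform(X)
labels = MiniBatchKMeans(n_clusters=8, batch_size=1000).fit_predict(X_reduced)
```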
0 | 56,495,157 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-07T13:15:00.000 | 0 | 1 | 0 | How to train a model with data with only one label | 56,495,100 | 0 | python,machine-learning,supervised-learning | Sounds to me like you need to shuffle that. The dataset you have has inherent information coded in the structure of the data (Player 1 wins). You have no way to recreate this information at runtime.
What you want is a dataset where the order of the player information is not important, and a label 0/1 determining if p... | I am trying to build a model to predict the outcome (win or lose) of a tennis match, as an exercise. I am using Python, Pandas and scikit-learn.
The dataset I have has the two players' IDs and the result of the match, among other quantities.
In my case, the way the database is organized, it always has Player1 as the wi... | 0 | 1 | 460 |
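A hedged sketch of the shuffling idea: randomly swap the player columns on half the rows so "Player1" no longer implies the winner. The column names `p1` and `p2` are hypothetical:

```python
# Break the "Player1 always wins" structure by swapping columns.
import numpy as np
import pandas as pd

def shuffle_players(df):
    df = df.copy()
    swap = np.random.rand(len(df)) < 0.5
    df.loc[swap, ['p1', 'p2']] = df.loc[swap, ['p2', 'p1']].values
    # Before the swap p1 always won, so the new label is simply the
    # inverse of the swap mask: 1 when the player now in p1 won.
    df['p1_wins'] = (~swap).astype(int)
    return df
```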
0 | 57,731,033 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-06-07T14:37:00.000 | 1 | 2 | 0 | partitionBy taking too long while saving a dataset on S3 using Pyspark | 56,496,387 | 0.099668 | python,apache-spark,amazon-s3,pyspark,amazon-emr | Use version 2 of the FileOutputCommitter
.set("mapreduce.fileoutputcommitter.algorithm.version", "2") | I am trying to save a dataset using partitionBy on S3 using pyspark. I am partitioning by on a date column. Spark job is taking more than hour to execute it. If i run the code without partitionBy it just takes 3-4 mints.
Could somebody help me in fining tune the parititonby? | 0 | 1 | 1,717 |
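A minimal sketch of where that setting goes in PySpark; the spark.hadoop. prefix (an assumption on my part, used to route the key into the Hadoop configuration) and the paths/column names are placeholders:

```python
# Apply the v2 committer and write partitioned output to S3.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.hadoop.mapreduce.fileoutputcommitter"
                 ".algorithm.version", "2")
         .getOrCreate())

df = spark.read.parquet("s3://bucket/input/")  # placeholder path
(df.repartition("date")                        # group rows per partition value
   .write.partitionBy("date")
   .parquet("s3://bucket/output/"))
```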
0 | 59,047,316 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-07T21:11:00.000 | 0 | 1 | 0 | Tensorflow no_grad concept | 56,501,260 | 0 | python,tensorflow,pytorch | If you don't want to train certain Variables in TensorFlow you can achieve this behaviour by adding trainable=False to Variables. | I know with pytorch you can turn off training by calling eval() on your model.
Also you can set requires_grad=False.
How can you ensure that a TensorFlow element is not modified during training? | 0 | 1 | 670 |
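A minimal sketch (TF 2.x eager mode) showing that a Variable created with trainable=False is not watched by a GradientTape by default, so training never updates it:

```python
import tensorflow as tf

x = tf.Variable(2.0, trainable=False)  # frozen
w = tf.Variable(3.0, trainable=True)   # trainable

with tf.GradientTape() as tape:
    y = w * x
# The tape auto-watches only trainable Variables, so the gradient
# w.r.t. x is None while the gradient w.r.t. w is x's value (2.0).
print(tape.gradient(y, [w, x]))  # [<tf.Tensor 2.0>, None]
```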
0 | 56,501,994 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-07T22:45:00.000 | 0 | 2 | 0 | Pandas is replacing rows with FALSE and TRUE with False and True | 56,501,959 | 0 | python,pandas | I think the output dataframe of read_csv already converts the columns to boolean values. You can verify this by calling df.info(). If you want to keep the columns as string values you need to pass a dict to the dtype parameter to specify it explicitly. | Using pd.read_csv("my.csv"), I have certain rows that appear as either TRUE or FALSE. read_csv is changing these values in the dataframe to "True" and "False". Is there any way to keep case sensitivity when reading a CSV for true and false values? | 0 | 1 | 162 |
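A minimal sketch of the dtype suggestion; the file name follows the question, while the column name "flag" is a placeholder:

```python
# Force a column to stay a string so TRUE/FALSE keep their case.
import pandas as pd

df = pd.read_csv("my.csv", dtype={"flag": str})
print(df["flag"].unique())  # e.g. ['TRUE', 'FALSE'], case preserved
```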
0 | 56,651,186 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-08T04:00:00.000 | 0 | 1 | 0 | One gpu uses more memory than others during training | 56,503,303 | 0 | python,memory,gpu,pytorch,multi-gpu | DataParallel splits the batch and sends each split to a different GPU; each GPU has a copy of the model. The forward pass is computed independently, and then the outputs of each GPU are collected back onto one GPU instead of the loss being computed independently on each GPU.
If you want to mitigate this issue you can include ... | I use multiple GPUs to train a model with PyTorch. One GPU uses more memory than the others, causing "out-of-memory". Why would one GPU use more memory? Is it possible to make the usage more balanced? Are there other ways to reduce memory usage? (Deleting variables that will not be used anymore...?) The batch size is already 1. ... | 0 | 1 | 186 |
0 | 56,506,873 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T13:22:00.000 | -5 | 2 | 0 | Does Opencv allow you to compute the 3x4 perspective transformation using 6 points? | 56,506,815 | -1 | python,opencv,computational-geometry | Yes of course there is. Just look for the computeThreeByFourMatrix() function in the OpenCV library documentation. It is all there | I want to compute a 3x4 matrix transformation, in homogeneous coordinates, that transforms 3d world points to 2d image points. My problem is that in the documentation and tutorials of the function getPerspectiveTransformation the default matrices are either 3x3 for perspective or 2x3 in affine transformations.
Is ther... | 0 | 1 | 304 |
0 | 56,517,433 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T17:25:00.000 | 0 | 1 | 0 | Creating train,test data for Word2Vec model | 56,508,631 | 0 | python,gensim,word2vec | Word2Vec is considered an 'unsupervised' algorithm, so at least during its training, it is not typical to hold back any 'test' data for later evaluation.
A Word2Vec model is usually then evaluated on how well it helps some other process - such as the analogy-solving highlighted by the original paper. In gensim, the me... | I am trying to create a W2V model and then generate train and test data to be used for my model. My question is how can I generate test data after I am done with creating a W2V model with my training data. | 0 | 1 | 1,283 |
0 | 56,509,709 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T19:04:00.000 | 0 | 1 | 0 | How can I stop networkx to change the source and the target node? | 56,509,345 | 0 | python,pandas,networkx | If you mean the order has changed, check out nx.OrderedGraph | I make a Graph (not Digraph) from a data frame (Huge network) with networkx.
I used this code to create my graph:
nx.from_pandas_edgelist(R,source='A',target='B',create_using=nx.Graph())
However, in the output when I check the edge list, my source and target nodes have been changed based on the sort and I don't k... | 0 | 1 | 309 |
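A hedged sketch using nx.OrderedGraph, which existed in networkx 2.x (it was removed in 3.0, where insertion order is preserved by default); `R`, 'A' and 'B' follow the question's own names:

```python
# Build the graph from the edge list while preserving insertion order.
import networkx as nx

G = nx.from_pandas_edgelist(R, source='A', target='B',
                            create_using=nx.OrderedGraph())
print(list(G.edges())[:5])  # edges in insertion order
```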
0 | 56,521,630 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-06-08T19:22:00.000 | 2 | 1 | 0 | After installing Tensorflow 2.0 in a python 3.7.1 env, do I need to install Keras, or does Keras come bundled with TF2.0? | 56,509,459 | 0.379949 | python-3.x,tensorflow2.0,tf.keras | In Tensorflow 2.0 there is strong integration between TensorFlow and the Keras API specification (TF ships its own Keras implementation, which respects the Keras standard), therefore you don't have to install Keras separately since Keras already comes with TF in the tf.keras package. | I need to use Tensorflow 2.0 (TF2.0) and Keras but I don't know if it's necessary to install both separately or just TF2.0 (assuming TF2.0 has Keras bundled inside it). If I need to install TF2.0 only, will installing it in a Python 3.7.1 environment be acceptable?
This is for Ubuntu 16.04 64 bit. | 0 | 1 | 176 |
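A quick way to confirm that Keras ships inside TF 2.0 with no separate install:

```python
# Verify the bundled Keras implementation is importable.
import tensorflow as tf

print(tf.__version__)        # e.g. '2.0.0'
print(tf.keras.__version__)  # the Keras implementation bundled in TF
```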
0 | 56,524,739 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2019-06-09T23:02:00.000 | 0 | 1 | 0 | What's the purpose of TensorFlow specific data types? | 56,518,982 | 1.2 | python,tensorflow | In short, because TensorFlow is not executed by the Python interpreter (at least not in general).
Python provides but one possible API to interact with TensorFlow. The core of TensorFlow itself is compiled (written mostly in C++), where Python datatypes are not available. Also, (despite recent advances allowing eager exe... | For example, why use tf.int32? Why not just use Python builtin integers? | 0 | 1 | 41 |
0 | 56,520,260 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-09T23:29:00.000 | 0 | 1 | 0 | Can we limit the value of a response variable in any machine learning algorithm? | 56,519,098 | 0 | python,machine-learning | It depends on the meaning of your response variable, considering you are using linear regression. But in a general function y=f(x), you can add a Softmax function y=Softmax(f(x)) to make sure y is in (0, 1). If you replace Softmax with a sigmoid and use it for regression, then you get a logistic regression; then you can lim... | I am working on a problem in which my response variable is a relative power whose value cannot go beyond 100%. When I use linear regression or any other machine-learning algorithms, the predicted value goes beyond 100% and I want to limit that to 100%. Is there any way we can achieve that? | 0 | 1 | 36 |
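A hedged sketch of the sigmoid-clamping idea for a percentage response; `raw_prediction` stands for any underlying model's unbounded output:

```python
# Squash an unbounded prediction into (0, 100) with a scaled sigmoid.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_power(raw_prediction):
    # The sigmoid maps any real value into (0, 1); scaling by 100
    # yields a percentage that can never exceed 100%.
    return 100.0 * sigmoid(raw_prediction)
```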
0 | 56,543,927 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T03:37:00.000 | 1 | 2 | 0 | How to Identify Each Components from Audio Signal? | 56,520,227 | 0.099668 | python,machine-learning,signal-processing | Well...
If your shaft is rotating at, say, 1200 RPM (20 Hz), then all the significant sound produced by that rotation should be at harmonics of 20 Hz.
If the turbine has 3 perfect blades, however, then it will be in exactly the same configuration 3 times for every rotation, so all of the sound produced by the rotation s... | I have some audio files recorded from wind turbines, and I'm trying to do anomaly detection. The general idea is that if a blade has a fault (e.g. cracking), the sound of this blade will differ from the other two blades, so we can basically find a way to extract each blade's sound signal and compare the similarity / distance be... | 0 | 1 | 48 |
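A hedged sketch of inspecting energy at harmonics of the rotation frequency; `signal` is an assumed 1-D audio array and `sr` its sample rate:

```python
# Measure spectral magnitude at multiples of the rotation frequency.
import numpy as np

def harmonic_magnitudes(signal, sr, rotation_hz=20.0, n_harmonics=10):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    mags = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * rotation_hz))
        mags.append(spectrum[idx])
    return mags  # energy at 20 Hz, 40 Hz, 60 Hz, ...
```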
0 | 56,520,450 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-06-10T04:05:00.000 | 3 | 2 | 0 | Different values between pandas df.size and len(df.to_dict("records")) | 56,520,381 | 0.291313 | python,python-3.x,pandas | size displays the total number of values, while len displays the number of rows of the DataFrame.
Ex: if you have a 3x2 DataFrame (3 rows and 2 columns),
size will be "6" and len will be "3". | Why might the values between df.size and len(df.to_dict("records")) be different? I find the value of df.size=58151429 while my len(df.to_dict("records"))=2528323 which is quite a big difference. Why can that be? | 0 | 1 | 696 |
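A quick demonstration of the difference:

```python
# df.size counts every value; to_dict("records") yields one dict per row.
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})  # 3 rows x 2 cols
print(df.size)                      # 6  (rows * columns)
print(len(df.to_dict("records")))   # 3  (one dict per row)
```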
0 | 56,526,165 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-10T11:57:00.000 | 0 | 2 | 0 | How can I calculate the coordinates of vertices of an zebra crossing area from the coordinates of vertices of zebra stripe? | 56,525,947 | 0 | python,computer-vision,computational-geometry | As you said, you know the coordinates of each individual stripe of the zebra crossing. So you can determine the first and last stripes by looking at the max and min coordinates of all vertices (by considering a reference axis from which you can measure distance). Then you know the coordinates of the terminal stripes and hence y... | I am doing a zebra crossing detection problem, and I already know the vertices of each zebra stripe, as a list of points. How can I efficiently calculate the coordinates of the vertices of the outline rectangle containing those zebra stripes?
I am doing it in 3D
I've been thinking about this question for days, and ca... | 0 | 1 | 85 |
0 | 56,526,776 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-06-10T11:57:00.000 | 0 | 2 | 0 | How can I calculate the coordinates of vertices of an zebra crossing area from the coordinates of vertices of zebra stripe? | 56,525,947 | 1.2 | python,computer-vision,computational-geometry | From what you say, it seems that you have the 3D coordinates of the outline of a rectangle. I will assume Cartesian coordinates and undistorted geometry.
The points belong to a plane, which you can determine by 3D plane fitting. Then by an orthogonal change of variables, you can project the points onto that plane.
For ... | I am doing a zebra crossing detection problem, and I already know the vertices of each zebra stripe, as a list of points. How can I efficiently calculate the coordinates of the vertices of the outline rectangle containing those zebra stripes?
I am doing it in 3D
I've been thinking about this question for days, and ca... | 0 | 1 | 85 |
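A hedged sketch of the plane-fit-and-project step described in the second answer; `points` is an assumed (N, 3) array of stripe vertices:

```python
# Fit a plane to 3D points via SVD, then express them in 2D plane
# coordinates for further (planar) geometry.
import numpy as np

def project_to_plane(points):
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the smallest
    # singular value; the other two vectors span the plane.
    _, _, vt = np.linalg.svd(centered)
    basis = vt[:2]                  # two orthonormal in-plane axes
    coords_2d = centered @ basis.T  # (N, 2) in-plane coordinates
    return coords_2d, centroid, vt[2]  # plus the plane normal
```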
0 | 56,526,556 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T12:31:00.000 | 0 | 2 | 0 | Packaging tensorflow models as wheel files | 56,526,497 | 0 | python,tensorflow,keras,setup.py,python-wheel | You can send only the frozen inference graph in .pb format. | I have created my tensorflow model which will act as a server. The code will be hosted on the client's local server. I don't want to give them my code but give them a wheel file. But after following Python's package distribution steps, my tensorflow files become corrupted. | 1 | 1 | 120 |
0 | 56,531,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T18:18:00.000 | 0 | 1 | 0 | tensorflow dataset cache cross validation | 56,531,580 | 0 | python-3.x,tensorflow,tensorflow-datasets | Answering my own question: to do this, I can create a pipeline for each file, cache each pipeline on disk, put them into a deque, then use tf.data.experimental.sample_from_datasets. | I have a very expensive data pipeline. I want to use tf.data.Dataset.cache to cache the first epoch's dataset to disk and then speed up the process. The reason I'm doing this instead of saving the dataset into tfrecords is
1) I change many parameters doing the processing every time, it is more convenient for me to cache it ... | 0 | 1 | 218 |
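A hedged sketch of the self-answer's approach; `make_pipeline` and the file list are hypothetical stand-ins for the poster's own pipeline:

```python
# One cached pipeline per file, sampled together into one stream.
import tensorflow as tf

files = ["a.tfrecord", "b.tfrecord"]        # placeholder file names
datasets = [
    make_pipeline(f).cache("cache_%d" % i)  # cache each pipeline on disk
    for i, f in enumerate(files)
]
mixed = tf.data.experimental.sample_from_datasets(datasets)
```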
0 | 56,533,007 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-06-10T19:39:00.000 | 0 | 2 | 0 | Python point cloud data to surface fit/function | 56,532,588 | 0 | python,python-3.x,mesh,point-clouds,surface | I don't know if creating a single function for the entire surface is the correct approach?
I guess this depends on your data. Let's assume the base form of your surface is spherical. Then you can model it as such.
If your surface is more complex than a sphere, you might still be able to model the neighborhood of (x,y) as suc... | I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the ... | 0 | 1 | 847 |
0 | 56,534,865 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-06-10T19:39:00.000 | -1 | 2 | 0 | Python point cloud data to surface fit/function | 56,532,588 | -0.099668 | python,python-3.x,mesh,point-clouds,surface | What you are trying to do can be called surface fitting, or two-dimensional curve fitting. You would be able to find lots of available algorithms by searching for those terms. Now, the choice of the particular algorithm/method should be dictated:
by the origin of your data (there are specialized algorithms or variati... | I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the ... | 0 | 1 | 847 |
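One concrete option among the many algorithms alluded to above is a smoothing bivariate spline from SciPy; `x`, `y`, `z` are assumed to be 1-D arrays of the scattered point coordinates:

```python
# Fit a smooth z = f(x, y) surface and query it at arbitrary points.
from scipy.interpolate import SmoothBivariateSpline

surface = SmoothBivariateSpline(x, y, z, s=len(x))  # s: smoothing factor
z_query = surface.ev(1.5, 2.5)  # interpolated height at (x, y) = (1.5, 2.5)
```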
0 | 56,695,079 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-11T02:31:00.000 | 0 | 1 | 0 | How to load NTU rgbd dataset? | 56,535,700 | 0 | python,machine-learning | The overall size of the dataset is 1.3 TB and this size will decrease after processing the data and converting it into numpy arrays or something else.
But I do not think you will work on the entire dataset; which part of the dataset do you want to work on? | We are working on early action prediction but we are unable to understand the dataset itself. The NTU RGB+D dataset is 1.3 TB; my laptop hard disk is 931 GB.
First problem: how to deal with such a big dataset?
Second problem: how to understand the dataset?
Third problem: how to load the dataset?
Thanks for the help | 0 | 1 | 128 |
0 | 56,546,713 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-06-11T14:52:00.000 | 1 | 1 | 0 | Statsmodels.api doesn't import | 56,546,431 | 1.2 | python-3.x,statsmodels | From the error it looks as though there is not a function called factorial within the misc directory of the scipy package.
Have you tried opening up the __init__.py file specified in the error and looking through the misc directory to find the factorial function? | That's it. It installs, I can import statsmodels, but statsmodels.api doesn't import.
I've tried installing with pip and conda, both give me version 0.9.0 and everything is fine.
I've installed all the dependencies, statsmodels works, but statsmodels.api doesn't.
import statsmodels.api Traceback (most recent call l... | 0 | 1 | 178 |
0 | 56,548,032 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-11T16:14:00.000 | 0 | 4 | 0 | Graphing multiple csv lists into one graph in python | 56,547,899 | 0 | python,pandas,csv,matplotlib,graph | Read the first file and create a list of lists in which each list is filled with two columns of that file. Then read the other files one by one and append their y column at the corresponding index of this list. | I have 5 csv files that I am trying to put into one graph in python. In the first column of each csv file, all of the numbers are the same, and I want to treat these as the x values for each csv file in the graph. However, there are two more columns in each csv file (to make 3 columns total), but I just want to graph t... | 0 | 1 | 932 |
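A hedged sketch of the approach in the answer above; the file names and the choice of column index 2 as the y values are placeholders:

```python
# Plot the same x column against one y column from each CSV file.
import pandas as pd
import matplotlib.pyplot as plt

files = ["a.csv", "b.csv", "c.csv", "d.csv", "e.csv"]
x = pd.read_csv(files[0]).iloc[:, 0]   # shared first column as x
for f in files:
    y = pd.read_csv(f).iloc[:, 2]      # third column as y
    plt.plot(x, y, label=f)
plt.legend()
plt.show()
```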
0 | 56,561,928 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-12T11:39:00.000 | 1 | 2 | 0 | Is there any way in OCR/tesseract/OpenCV for extracting text from a particular region of an image? | 56,561,357 | 0.099668 | python,artificial-intelligence,ocr,tesseract,text-extraction | Looks like you are a newcomer, so let me give you a quick walkthrough of the terms used in your question.
OCR is optical character recognition, a concept.
Tesseract is a specialized library for OCR.
OpenCV is an image processing library that helps with object detection and recognition.
Yes, you can extract the tex... | I'm setting up a new invoice extraction method using AI. I am able to recognize "Total"/"Company Details" from invoice images but need help with extracting data from that particular region recognized in the invoice image by specifying an area in the image (Xmin, Xmax, Ymin, Ymax)? | 0 | 1 | 950 |
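A hedged sketch of cropping the recognized region and running OCR on it; the coordinates and image path are placeholders, and pytesseract is assumed to be installed alongside the Tesseract binary:

```python
# Crop a region of interest with a numpy slice, then OCR it.
import cv2
import pytesseract

img = cv2.imread("invoice.png")
ymin, ymax, xmin, xmax = 100, 200, 50, 400   # region of interest
roi = img[ymin:ymax, xmin:xmax]              # numpy slice = crop
text = pytesseract.image_to_string(roi)
print(text)
```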