Dataset schema (column, dtype, observed range or string lengths):

| Column | dtype | Range / lengths |
| --- | --- | --- |
| GUI and Desktop Applications | int64 | 0–1 |
| A_Id | int64 | 5.3k–72.5M |
| Networking and APIs | int64 | 0–1 |
| Python Basics and Environment | int64 | 0–1 |
| Other | int64 | 0–1 |
| Database and SQL | int64 | 0–1 |
| Available Count | int64 | 1–13 |
| is_accepted | bool | 2 classes |
| Q_Score | int64 | 0–1.72k |
| CreationDate | string | lengths 23–23 |
| Users Score | int64 | -11–327 |
| AnswerCount | int64 | 1–31 |
| System Administration and DevOps | int64 | 0–1 |
| Title | string | lengths 15–149 |
| Q_Id | int64 | 5.14k–60M |
| Score | float64 | -1–1.2 |
| Tags | string | lengths 6–90 |
| Answer | string | lengths 18–5.54k |
| Question | string | lengths 49–9.42k |
| Web Development | int64 | 0–1 |
| Data Science and Machine Learning | int64 | 1–1 |
| ViewCount | int64 | 7–3.27M |

Records (topic columns not listed under "Topics" are 0 for that record):
Title: Install opencv python package in Anaconda
Q_Id: 50,147,385 · A_Id: 59,453,381 · CreationDate: 2018-05-03T05:14:00.000 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning · Tags: python-2.7,opencv,image-processing,ide,anaconda
Q_Score: 1 · Users Score: -1 · Score: -0.066568 · AnswerCount: 3 · Available Count: 3 · ViewCount: 9,045
Question: Can someone provide the steps and the necessary links of the dependencies to install external python package "opencv-python" for image processing? I tried installing it in pycharm, but it was not able to import cv2(opencv) and was throwing version mismatch with numpy! Please help!
Answer: Remove all previous/current (if any) python installation Install Anaconda and add anaconda to PATH(Envirnoment variables:: Adavanced system setting->Environment variables->under system variables go to variable PATHand click edit to add new envirnomental variables) (During installation check box involve PATH) Open anaco...
Title: Jupyter Notebook; Cannot use columns which are not shown on Jupyter notebook
Q_Id: 50,148,502 · A_Id: 50,148,675 · CreationDate: 2018-05-03T06:43:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,pandas,jupyter-notebook
Q_Score: 0 · Users Score: 0 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 64
Question: I'm trying to do create plot using columns on pandas. But the number of columns are many and some of the columns are not shown on Jupyter notebook. And when I use the columns which are not shown on Jupyter Notebook, the plot cannot be created corrctly. Like when I do this, sns.pairplot(data[['col1', 'col13(cannot see o...
Answer: You can try: pandas.set_option('display.max_columns', None) in order to display all columns. To reset it, type: pandas.reset_option('display.max_columns'). You may change None to whatever number of columns you wish to have.
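The accepted answer above amounts to toggling a pandas display option. A minimal sketch (the frame and its column names are illustrative, not from the dataset):

```python
import pandas as pd

# Build a frame wider than the default display limit (hypothetical data).
df = pd.DataFrame({f"col{i}": range(3) for i in range(30)})

# Show every column instead of eliding the middle ones with "..."; None = no limit.
pd.set_option("display.max_columns", None)
print(df.head())

# Restore the default truncation behaviour afterwards.
pd.reset_option("display.max_columns")
```

Passing a number instead of `None` caps the display at that many columns.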
Title: embedding word positions in keras
Q_Id: 50,154,465 · A_Id: 50,159,650 · CreationDate: 2018-05-03T11:52:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,nlp,keras,word2vec,word-embedding
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 354
Question: I am trying to build a relation extraction system for drug-drug interactions using a CNN and need to make embeddings for the words in my sentences. The plan is to represent each word in the sentences as a combination of 3 embeddings: (w2v,dist1,dist2) where w2v is a pretrained word2vec embedding and dist1 and dist2 are...
Answer: You can compute the maximal separation between entity mentions linked by a relation and choose an input width greater than this distance. This will ensure that every input (relation mention) has same length by trimming longer sentences and padding shorter sentences with a special token.
Title: How to identify when a certain image disappear
Q_Id: 50,165,274 · A_Id: 50,185,377 · CreationDate: 2018-05-03T23:53:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,python-3.x,opencv
Q_Score: 1 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 118
Question: I'm doing an opencv project which needs to detect a image on the screen, which will disappear after some time it shows up. It needs to save the most amount of frames possible while the image is showing, and stop when it disappears. I plan to use the data collected to do a ConvNet, so the more frames I can capture, the ...
Answer: I ended using numpy to save the captured frames and reached 99% efficiency with the reduced area, no resizing of the images or multiprocessing.
Title: Save keras model to database
Q_Id: 50,174,189 · A_Id: 50,363,845 · CreationDate: 2018-05-04T11:46:00.000 · is_accepted: true
Topics: Database and SQL; Data Science and Machine Learning · Tags: sql-server,database,python-3.x,tensorflow,keras
Q_Score: 2 · Users Score: 3 · Score: 1.2 · AnswerCount: 3 · Available Count: 2 · ViewCount: 3,587
Question: I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first. Thanks for any help
Answer: It seems that there is no clean solution to directly store a model incl. weights into the database. I decided to store the model as h5 file in the filesystem and upload it from there into the database as a backup. For predictions I load anyway the model from the filesystem as it is much faster than getting it from the ...
Title: Save keras model to database
Q_Id: 50,174,189 · A_Id: 57,933,840 · CreationDate: 2018-05-04T11:46:00.000 · is_accepted: false
Topics: Database and SQL; Data Science and Machine Learning · Tags: sql-server,database,python-3.x,tensorflow,keras
Q_Score: 2 · Users Score: 1 · Score: 0.066568 · AnswerCount: 3 · Available Count: 2 · ViewCount: 3,587
Question: I created a keras model (tensorflow) and want to store it in my MS SQL Server database. What is the best way to do that? pyodbc.Binary(model) throws an error. I would prefer a way without storing the model in the file system first. Thanks for any help
Answer: The best approach would be to save it as a file in the system and just save the path in the database. This technique is usually used to store large files like images since databases usually struggle with them.
Title: How to find an optimum number of processes in GridSearchCV( ..., n_jobs = ... )?
Q_Id: 50,183,080 · A_Id: 50,494,869 · CreationDate: 2018-05-04T21:09:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,machine-learning,parallel-processing,scikit-learn,parallelism-amdahl
Q_Score: 6 · Users Score: 4 · Score: 0.379949 · AnswerCount: 2 · Available Count: 1 · ViewCount: 4,600
Question: I'm wondering, which is better to use with GridSearchCV( ..., n_jobs = ... ) to pick the best parameter set for a model, n_jobs = -1 or n_jobs with a big number, like n_jobs = 30 ? Based on Sklearn documentation: n_jobs = -1 means that the computation will be dispatched on all the CPUs of the computer. On my PC I h...
Answer: An additional simpler answer by Prof. Kevyn Collins-Thompson, from course Applied Machine Learning in Python: If I have 4 cores in my system, n_jobs = 30 (30 as an example) will be the same as n_jobs = 4. So no additional effect So the maximum performance that can be obtained always is using n_jobs = -1
Title: Predicting binary classification
Q_Id: 50,198,094 · A_Id: 50,198,173 · CreationDate: 2018-05-06T09:13:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,machine-learning
Q_Score: 0 · Users Score: 0 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 381
Question: I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false). Nonetheless, I was researching and read about how categorical variables can ...
Answer: it's basically the same, when talking about binary classification, you can think of a final layer for each model that adapt the output to other model e.g if the model output 0 or 1 than the final layer will translate it to vector like [1,0] or [0,1] and vise-versa by a threshold criteria, usually is >= 0.5 a nice bypro...
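The answer above argues the two binary-label encodings are interchangeable via a threshold. A dependency-free sketch of that conversion (function names and the 0.5 threshold are illustrative):

```python
# Convert between the two binary-label encodings: a single probability p
# versus a two-element one-hot vector [1-p side, p side].
def to_vector(p, threshold=0.5):
    """Map a scalar probability to a hard one-hot pair via a threshold."""
    label = 1 if p >= threshold else 0
    return [1 - label, label]

def to_scalar(vec):
    """Map a one-hot pair back to a 0/1 label (argmax)."""
    return vec.index(max(vec))

print(to_vector(0.83))   # -> [0, 1]
print(to_scalar([1, 0])) # -> 0
```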
Title: Tensorflow How can I make a classifier from a CSV file using TensorFlow?
Q_Id: 50,206,222 · A_Id: 50,206,803 · CreationDate: 2018-05-07T02:06:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,csv,tensorflow
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 102
Question: I need to create a classifier to identify some aphids. My project has two parts, one with a computer vision (OpenCV), which I already conclude. The second part is with Machine Learning using TensorFlow. But I have no idea how to do it. I have these data below that have been removed starting from the use of OpenCV, are ...
Answer: You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow so that you gain some familiarity with it. Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label...
Title: PySCIPOpt/SCIP - isLPSolBasic() not in pyscipopt.scip.Model
Q_Id: 50,209,928 · A_Id: 50,210,208 · CreationDate: 2018-05-07T08:23:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,discrete-mathematics,scip
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 215
Question: I developed a gomory cut for a LP problem (based on 'test-gomory.py' test file) which I could not manage to run. Finally, I copied the test file to check whether I'd the same trouble. Indeed I got the same message: if not scip.isLPSolBasic(): AttributeError: 'pyscipopt.scip.Model' object has no attribute 'isLPSolBasic'...
Answer: Your assumption is correct: The pip version of PySCIPOpt was outdated and did not yet include the latest updates with respect to cutting plane separators. I just triggered a new release build (v.1.4.6) that should be available soon. When in doubt, you can always build PySCIPOpt from source by running python setup.py in...
Title: How to load index shards by gensim.similarities.Similarity?
Q_Id: 50,213,754 · A_Id: 58,696,815 · CreationDate: 2018-05-07T12:01:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,gensim
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 473
Question: I'm working on something using gensim. In gensim, var index usually means an object of gensim.similarities.<cls>. At first, I use gensim.similarities.Similarity(filepath, ...) to save index as a file, and then loads it by gensim.similarities.Similarity.load(filepath + '.0'). Because gensim.similarities.Similarity defau...
Answer: shoresh's answer is correct. The key part that OP was missing was index.save(output_fname) While just creating the object appears to save it, it's really only saving the shards, which require saving a sort of directory file (via index.save(output_fname) to be made accessible as a whole object.
Title: Challenges with high cardinality data
Q_Id: 50,219,738 · A_Id: 50,225,476 · CreationDate: 2018-05-07T17:42:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,machine-learning,data-science,dimensionality-reduction,cardinality
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 2 · ViewCount: 254
Question: Background: I am working on classifying data from a ticketing system data into a failed or successful requests. A request goes into various stages before getting completed. Each request is assigned to different teams and individuals before being marked as complete. Making use of historical data I want to create predic...
Answer: You can use replace all id numbers and names in the data with a standard token like <ID> or <NAME>. This should be done during preprocessing. Next you should pick a fixed vocabulary. Like all words that occur at least 5 times in the training data.
Title: Challenges with high cardinality data
Q_Id: 50,219,738 · A_Id: 50,226,408 · CreationDate: 2018-05-07T17:42:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,machine-learning,data-science,dimensionality-reduction,cardinality
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 2 · ViewCount: 254
Question: Background: I am working on classifying data from a ticketing system data into a failed or successful requests. A request goes into various stages before getting completed. Each request is assigned to different teams and individuals before being marked as complete. Making use of historical data I want to create predic...
Answer: Since you have a dynamic data like you said, you can use neural net to identify and merge updating variables and data. Also you should use classifiers like CVParameterSelection : For cross validation parameter selection. PART : For making a decision tree, great utility as it works on divide and conquer rule. REP Tree ...
Title: action when target variable is character/string
Q_Id: 50,232,939 · A_Id: 50,233,334 · CreationDate: 2018-05-08T11:51:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python-3.x
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 2 · Available Count: 1 · ViewCount: 486
Question: I want to train a classifier, and my target variable has 300 unique values, and its type is character/string Is there an automated process with pandas that can automatically trnsform each string into a number? Thanks a lot
Answer: You can use the pandas factorize method for converting strings into numbers. numpy.unique can also be used but will be comparatively slower.
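The `pandas.factorize` call recommended above can be sketched on a toy target column (the label values are illustrative):

```python
import pandas as pd

# Encode a string target as integer codes; `uniques` maps codes back to strings.
labels = pd.Series(["cat", "dog", "cat", "bird"])
codes, uniques = pd.factorize(labels)

print(codes.tolist())   # -> [0, 1, 0, 2]
print(list(uniques))    # -> ['cat', 'dog', 'bird']

# Decoding is just indexing the uniques with the codes.
print(list(uniques[codes]))  # -> ['cat', 'dog', 'cat', 'bird']
```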
Title: Data with too many categories
Q_Id: 50,248,547 · A_Id: 50,252,474 · CreationDate: 2018-05-09T08:13:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,r
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 339
Question: I hope to know a general approach when do data engineering. I have a data set with some variables with too many categories, and including these variables into a predictive model definitely would increase the complexity of the model, thus leads to overfit. While normally I would group those categories into fewer groups ...
Answer: I am not sure if I understand by "automatically". However, instead of plotting (which can be a hard task if you have many attributes for each sample), you can try to automatically group your samples using clustering techniques such as K-Means, Hierarchical clustering, SOM (or any clustering technique that fits to your ...
Title: Tensorflow Eager and Tensorboard Graphs?
Q_Id: 50,257,614 · A_Id: 50,260,668 · CreationDate: 2018-05-09T16:01:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,tensorflow,machine-learning
Q_Score: 9 · Users Score: 10 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 4,878
Question: I'm currently looking over the Eager mode in Tensorflow and wanted to know if I can extract the graph to use in Tensorboard. I understand that Tensorflow Eager mode does not really have a graph or session system that the user has to create. However, from my understanding there is one under the hood. Could this hidden G...
Answer: No, by default there is no graph nor sessions in eager executing, which is one of the reasons why it is so appealing. You will need to write code that is compatible with both graph and eager execution to write your net's graph in graph mode if you need to. Note that even though you can use Tensorboard in eager mode to ...
Title: Document similarity in production environment
Q_Id: 50,264,369 · A_Id: 50,265,639 · CreationDate: 2018-05-10T01:53:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,machine-learning,nlp,gensim,doc2vec
Q_Score: 3 · Users Score: 2 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 418
Question: We are having n number of documents. Upon submission of new document by user, our goal is to inform him about possible duplication of existing document (just like stackoverflow suggests questions may already have answer). In our system, new document is uploaded every minute and mostly about the same topic (where there ...
Answer: You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can. Depending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built...
Title: python: Find camera shift angle using a reference image and current image from the camera
Q_Id: 50,273,650 · A_Id: 50,737,007 · CreationDate: 2018-05-10T13:06:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,opencv,image-processing,scipy,computer-vision
Q_Score: 1 · Users Score: 1 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 352
Question: I have a reference image captured from a cc-camera and need to capture current image from the cc-camera to check if the camera is shifted or not at regular intervals. If shifted, the shift angle must also be needed. How can I achieve this using python? Is the cv2.phaseCorrelate() function be helpful for this. Please ...
Answer: Use of hanning window will be more accurate for getting the phase values. Use phase correlate function with the window will give you a right phase shift.
Title: Tensorflow: switch between CPU and GPU [Windows 10]
Q_Id: 50,282,567 · A_Id: 50,282,794 · CreationDate: 2018-05-10T22:49:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python-3.x,tensorflow
Q_Score: 3 · Users Score: 4 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 2,716
Question: How can I quickly switch between running tensorflow code with my CPU and my GPU? My setup: OS = Windows 10 Python = 3.5 Tensorflow-gpu = 1.0.0 CUDA = 8.0.61 cuDNN = 5.1 I saw a post suggesting something about setting CUDA_VISIBLE_DEVICES=0 but I don't have this variable in my environment (not sure if it's because I'm r...
Answer: If you set the environment variable CUDA_VISIBLE_DEVICES=-1 you will use the CPU only. If you don't set that environment variable you will allocate memory to all GPUs but by default only use GPU 0. You can also set it to the specific GPU you want to use. CUDA_VISIBLE_DEVICES=0 will only use GPU 0. This environment var...
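The environment-variable switch described in the answer above can be set from Python itself; the key caveat is that it must happen before TensorFlow is imported. A minimal sketch (the commented import is a placeholder):

```python
import os

# Must be set BEFORE TensorFlow is imported, or it has no effect on device visibility.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"   # hide all GPUs: CPU only
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only GPU 0
# os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose GPUs 0 and 1

# import tensorflow as tf  # imported only after the variable is set

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

On Windows the same variable can be set per-session with `set CUDA_VISIBLE_DEVICES=-1` before launching Python.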
Title: Visualize a SVM model having100 attributes in 2D plot python
Q_Id: 50,289,947 · A_Id: 50,290,731 · CreationDate: 2018-05-11T10:08:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,machine-learning,plot,svm,text-classification
Q_Score: 1 · Users Score: 1 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 150
Question: I am using SVM for text classification (tf-idf score based classification). Is it possible to plot SVM having more than 100 attributes and 10 labels. Is there any way to reduce the features and then plot the same multiclass SVM.
Answer: For input data of higher dimensionality, I think that there is not a direct way to render a SVM. You should apply a dimensionality reduction, in order to have something to plot in 2-d or 3-d.
Title: Generate non-uniform random numbers
Q_Id: 50,296,427 · A_Id: 50,296,509 · CreationDate: 2018-05-11T16:20:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,arrays,algorithm
Q_Score: 1 · Users Score: 2 · Score: 1.2 · AnswerCount: 2 · Available Count: 2 · ViewCount: 556
Question: Algo (Source: Elements of Programming Interviews, 5.16) You are given n numbers as well as probabilities p0, p1,.., pn-1 which sum up to 1. Given a rand num generator that produces values in [0,1] uniformly, how would you generate one of the n numbers according to their specific probabilities. Example If numb...
Answer: Your number 100 is not independent of the input; it depends on the given p values. Any parameter that depends on the magnitude of the input values is really exponential in the input size, meaning you are actually using exponential space. Just constructing that array would thus take exponential time, even if it was stru...
Title: Generate non-uniform random numbers
Q_Id: 50,296,427 · A_Id: 50,296,535 · CreationDate: 2018-05-11T16:20:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,arrays,algorithm
Q_Score: 1 · Users Score: 2 · Score: 0.197375 · AnswerCount: 2 · Available Count: 2 · ViewCount: 556
Question: Algo (Source: Elements of Programming Interviews, 5.16) You are given n numbers as well as probabilities p0, p1,.., pn-1 which sum up to 1. Given a rand num generator that produces values in [0,1] uniformly, how would you generate one of the n numbers according to their specific probabilities. Example If numb...
Answer: If you have rational probabilities, you can make that work. Rather than 100, you must use a common denominator of the rational proportions. Insisting on 100 items will not fulfill the specs of your assigned example, let alone more diabolical ones.
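The standard O(log n)-per-draw answer to the interview question in the two records above is a cumulative distribution plus binary search, rather than the exponential-space bucket array both answers criticize. A stdlib-only sketch (values and probabilities are illustrative):

```python
import bisect
import itertools
import random

def weighted_choice(values, probs, rng=random):
    """Pick one of `values` according to `probs`, which must sum to 1."""
    # Cumulative distribution, e.g. [0.2, 0.5, 1.0] for p = [0.2, 0.3, 0.5].
    cumulative = list(itertools.accumulate(probs))
    # A uniform draw in [0, 1) lands in exactly one half-open bucket.
    return values[bisect.bisect_right(cumulative, rng.random())]

rng = random.Random(0)  # seeded for reproducibility
draws = [weighted_choice(["a", "b", "c"], [0.2, 0.3, 0.5], rng) for _ in range(10_000)]
print(draws.count("c") / len(draws))  # close to 0.5
```

Precomputing `cumulative` once makes each subsequent draw a single binary search.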
Title: Get cluster points after KMeans in a list format
Q_Id: 50,297,142 · A_Id: 50,298,062 · CreationDate: 2018-05-11T17:11:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python-3.x,scikit-learn,k-means,data-science
Q_Score: 2 · Users Score: 2 · Score: 0.132549 · AnswerCount: 3 · Available Count: 1 · ViewCount: 5,157
Question: Suppose I clustered a data set using sklearn's K-means. I can see the centroids easily using KMeans.cluster_centers_ but I need to get the clusters as I get centroids. How can I do that?
Answer: You probably look for the attribute labels_.
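In scikit-learn the per-point assignments the answer above refers to live in `KMeans.labels_`. A dependency-light sketch of what that attribute contains, grouping toy points by their nearest fixed centroid (the data and centroids are made up, and this stands in for the fitted estimator, not scikit-learn itself):

```python
import numpy as np

# Toy 1-D points and two fixed "centroids" (stand-ins for cluster_centers_).
points = np.array([0.1, 0.2, 0.9, 1.1, 0.15])
centroids = np.array([0.15, 1.0])

# Assign each point to its nearest centroid; this is what labels_ stores.
labels = np.abs(points[:, None] - centroids[None, :]).argmin(axis=1)

# Collect the members of each cluster, i.e. "the clusters as lists".
clusters = {k: points[labels == k].tolist() for k in range(len(centroids))}
print(labels.tolist())  # -> [0, 0, 1, 1, 0]
print(clusters)
```

With a fitted estimator the grouping is the same idea: `X[km.labels_ == k]` for each cluster index `k`.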
Title: How to continue to train a model with new classes and data?
Q_Id: 50,305,294 · A_Id: 50,305,713 · CreationDate: 2018-05-12T10:36:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python-3.x,tensorflow,machine-learning,rnn
Q_Score: 1 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 168
Question: I have trained a model successfully and now I want to continue training it with new data. If a given data with the same amount of classes it works fine. But having more data then initially it will give me the error: ValueError: Shapes (?, 14) and (?, 21) are not compatible How can I dynamically increase the number of...
Answer: Best thing to do is to train your network from scratch with the output layers adjusted to the new output class size. If retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights and then fine-tune (train) the e...
Title: protoc executable unable to find object detection .protos file in tensorflow models
Q_Id: 50,307,347 · A_Id: 50,632,788 · CreationDate: 2018-05-12T14:33:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,tensorflow,object-detection,protoc
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 353
Question: I am trying to work with the object detection api by tensorflow but was unable to install that properly. I looked up every solution in the internet and everything was in vain. Below is the error message I am getting: “C:\Program Files\protoc-3.5.0-win32\bin\protoc.exe” object_detection/protos/*.proto --python_out=. o...
Answer: Currently you are in which directory? You can set the Path variable to point to protoc.exe and run the command from object detection directory
Title: How to revert shuffled array in python with default PRNG and indexes?
Q_Id: 50,308,623 · A_Id: 50,309,756 · CreationDate: 2018-05-12T16:54:00.000 · is_accepted: true
Topics: Data Science and Machine Learning · Tags: python,numpy,random,pillow
Q_Score: 0 · Users Score: 2 · Score: 1.2 · AnswerCount: 1 · Available Count: 1 · ViewCount: 287
Question: Moving an image to an array then flattening it and shuffling with given x seed it should be easy to unshuffle it with the given seed and indexes from the shuffling process. read image IMG.jpg random.seed(x) and shuffle -> indexes, shuffle_img.jpg unshuffle However, this RESULT shows that the resulting IMG is simmi...
Answer: This has nothing to do with your random numbers. Notice that you use the random number generator only once when you create the shuffled indices. When you load the indices from the file, the random number generator is not used, since only a file is read. Your issue occurs at a different place: You save the scrambled Len...
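As the answer above notes, the unshuffle itself is deterministic once the permutation indices are known; `argsort` of the permutation is its inverse. A minimal sketch on a stand-in array (the real pitfall in the record was lossy JPEG re-encoding of the scrambled pixels, which this lossless round-trip avoids):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

data = np.arange(10)               # stand-in for flattened pixel values
perm = rng.permutation(len(data))  # shuffled indices, reproducible via the seed
scrambled = data[perm]

# argsort(perm) is the inverse permutation: element i was sent to position
# where perm equals i, and argsort finds exactly those positions.
inverse = np.argsort(perm)
restored = scrambled[inverse]

print(np.array_equal(restored, data))  # -> True
```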
Title: How to efficiently use a Session from within a python object to keep tensorflow as an implementation detail?
Q_Id: 50,310,515 · A_Id: 50,333,204 · CreationDate: 2018-05-12T20:43:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,tensorflow,scikit-learn
Q_Score: 1 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 143
Question: I'm implementing a custom sklearn Transformer, which requires an optimization step which has been coded in Tensorflow. TF requires a Session, which should be used as a context manager or explicitly closed. The question is: adding a close() method to the Transformer would be odd (and unexpected for a user), what is the ...
Answer: Don't open a session at each function call, that could be very inefficient if the function is called many times. If for some reason, you don't want to expose a context manager, then you need to open the session yourself, and leave it open. It is perhaps a bit simpler for the user, but sharing the tf.Session with other ...
Title: Need help using Keras' model.predict
Q_Id: 50,319,873 · A_Id: 50,326,237 · CreationDate: 2018-05-13T19:28:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,tensorflow,machine-learning,keras,predict
Q_Score: 2 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 343
Question: My goal is to make an easy neural network fit by providing 2 verticies of a certain Graph and 1 if there's a link or 0 if there's none. I fit my model, it gets loss of about 0.40, accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative...
Answer: The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for a N-node graph, you need to predict N choose 2 links. And it may be reasonable to assume that the true values of the most of these links are 0. When looking into your model accuracy on the 0-class and 1...
Title: How to used a tensor in different graphs?
Q_Id: 50,323,744 · A_Id: 50,329,741 · CreationDate: 2018-05-14T05:48:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,tensorflow,graph
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 39
Question: I build two graphs in my code, graph1 and graph2. There is a tensor, named embedding, in graph1. I tied to use it in graph2 by using get_variable, while the error is tensor must be from the same graph as Tensor. I found that this error occurs because they are in different graphs. So how can I use a tensor in graph1 to...
Answer: expanding on @jdehesa's comment, embedding could be trained initially, saved from graph1 and restored to graph2 using tensorflows saver/restore tools. for this to work you should assign embedding to a name/variable scope in graph1 and reuse the scope in graph2
Title: identifying feature type in a dataset : categorical or bag of words
Q_Id: 50,326,774 · A_Id: 50,327,148 · CreationDate: 2018-05-14T09:11:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,pandas,machine-learning
Q_Score: 1 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 183
Question: I am trying to identify the type of feature in a dataset which can be either categorical/bag of words/ floats. However I am unable to reach to a accurate solution to distinguish between categorical and bag of words due to following reasons. Categorical data can either be object or float. Counting the unique values in...
Answer: Well you're confused between those two terms: Categorical Data is the kind of data which can be categorized between different categories especially more than two classes or multi-class. Search for 20 Newsgroup Dataset. Whereas, Bag of Words is a technique of storing features. Identification of features is done on the b...
Title: No module named '_bz2' in python3
Q_Id: 50,335,503 · A_Id: 53,698,659 · CreationDate: 2018-05-14T17:00:00.000 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning · Tags: python,python-3.x,matplotlib,importerror,bzip2
Q_Score: 15 · Users Score: 16 · Score: 1 · AnswerCount: 5 · Available Count: 2 · ViewCount: 17,593
Question: When trying to execute the following command: import matplotlib.pyplot as plt The following error occurs: from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module named '_bz2' So, I was trying to install bzip2 module in Ubuntu using : sudo pip3 install bzip2 But, the following statement pops up in...
Answer: If you compiling python yourself, you need to install libbz2 headers and .so files first, so that python will be compiled with bz2 support. On ubuntu, apt-get install libbz2-dev then compile python.
Title: No module named '_bz2' in python3
Q_Id: 50,335,503 · A_Id: 71,457,141 · CreationDate: 2018-05-14T17:00:00.000 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning · Tags: python,python-3.x,matplotlib,importerror,bzip2
Q_Score: 15 · Users Score: 1 · Score: 0.039979 · AnswerCount: 5 · Available Count: 2 · ViewCount: 17,593
Question: When trying to execute the following command: import matplotlib.pyplot as plt The following error occurs: from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module named '_bz2' So, I was trying to install bzip2 module in Ubuntu using : sudo pip3 install bzip2 But, the following statement pops up in...
Answer: I found a pattern in those issue. It happens mostly if you are missing dev tools and other important libraries important to compile code and install python. For me most of those step did not work. But I had to do the following : Remove my python installation pyenv uninstall python_version Then install all the build ...
Title: Tensorflow android feed with specific array
Q_Id: 50,355,139 · A_Id: 54,786,075 · CreationDate: 2018-05-15T16:29:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: android,python,tensorflow
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 86
Question: I created a CNN model on tensorflow with input placeholder - [None, 32, 32, 3]tf.placeholder(tf.float32, [None, 24, 24, 3]). Then I want to use that model in Android application and for that reason I froze the model. When I include the library I noticed that I can feed only with two dimensional array or ByteBuffer. How...
Answer: tensorInferenceInterface.feed("the_input", pixels, batch_size, width, height, dims); hope this will work for you
Title: How should I get the shape of a dask dataframe?
Q_Id: 50,355,598 · A_Id: 53,350,392 · CreationDate: 2018-05-15T16:57:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,dask
Q_Score: 25 · Users Score: 1 · Score: 0.033321 · AnswerCount: 6 · Available Count: 1 · ViewCount: 16,817
Question: Performing .shape is giving me the following error. AttributeError: 'DataFrame' object has no attribute 'shape' How should I get the shape instead?
Answer: To get the shape we can try this way: dask_dataframe.describe().compute() "count" column of the index will give the number of rows len(dask_dataframe.columns) this will give the number of columns in the dataframe
Title: Dynamic range of unknown parameters and form of cost function in scipy.optimize.least_squares
Q_Id: 50,358,434 · A_Id: 50,400,053 · CreationDate: 2018-05-15T20:09:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,scipy,mathematical-optimization,least-squares,nonlinear-optimization
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 176
Question: I am using scipy.optimize.least_squares to solve an interval constrained nonlinear least squares optimization problem. The form of my particular problem is that of finding a0, a1, b0, and b1 such that the cost function: \sum^N_{n=1} ( g_n - (y_n - b0 e^-(tn/b1)) / a0 e^-(tn/a1) )^2 is minimized where g_n, y_n and t_n ...
Answer: Most solvers are designed for variables in the 1-10 range. A large range can cause numerical problems, but it is not guaranteed to be problematic. Numerical problems sometimes stem from the matrix factorization step of the linear algebra for solving the Newton step, which is more dependent of the magnitude of the deriv...
Title: How to use two models in Tensorflow object Detection API
Q_Id: 50,364,281 · A_Id: 50,632,652 · CreationDate: 2018-05-16T06:58:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,tensorflow,object-detection
Q_Score: 7 · Users Score: 2 · Score: 0.197375 · AnswerCount: 2 · Available Count: 1 · ViewCount: 2,781
Question: In tensorflow Object Detection API we are using ssd_mobilenet_v1_coco_2017_11_17 model to detect 90 general objects. I want to use this model for detection. Next, I have trained faster_rcnn_inception_v2_coco_2018_01_28 model to detect a custom object. I wish to use this in the same code where I will be able to detect t...
Answer: You cant combine both the models. Have two sections of code which will load one model at a time and identify whatever it can see in the image. Other option is to re-train a single model that can identify all objects you are interested in
Title: should model.compile() be run prior to using model.load_weights(), if model has been only slightly changed say dropout?
Q_Id: 50,367,540 · A_Id: 50,370,068 · CreationDate: 2018-05-16T09:47:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,keras,deep-learning
Q_Score: 1 · Users Score: 2 · Score: 0.379949 · AnswerCount: 1 · Available Count: 1 · ViewCount: 1,304
Question: With training & validation through a dataset for nearly 24 epochs, intermittently 8 epochs at once and saving weights cumulatively after each interval. I observed a constant declining train & test-loss for first 16 epochs, post which the training loss continues to fall whereas test loss rises so i think it's the case ...
Answer: The model.compile() method does not touch the weights in any way. Its purpose is to create a symbolic function adding the loss and the optimizer to the model's existing function. You can compile the model as many times as you want, whenever you want, and your weights will be kept intact. Possible consequences of com...
Title: get vscode python error when run pyspark program using vs code
Q_Id: 50,367,947 · A_Id: 50,518,560 · CreationDate: 2018-05-16T10:05:00.000 · is_accepted: false
Topics: Web Development; Data Science and Machine Learning · Tags: python,pyspark,visual-studio-code
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 185
Question: I run my pyspark program in vscode, and get error: PicklingError: Could not serialize object: ImportError: No module named visualstudio_py_debugger . I suppose it has something to do with my vscode setting?
Answer: Somehow your code is trying to pickle the debugger itself. Do make sure you are running the debugger using the PySpark configuration.
Title: Real width of detected face
Q_Id: 50,375,489 · A_Id: 50,375,996 · CreationDate: 2018-05-16T16:07:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,opencv,distance,face-detection,measure
Q_Score: 1 · Users Score: 2 · Score: 0.379949 · AnswerCount: 1 · Available Count: 1 · ViewCount: 138
Question: I've been researching like forever, but couldn't find an answer. I'm using OpenCV to detect faces, now I want to calculate the distance to the face. When I detect a face, I get a matofrect (which I can visualize with a rectangle). Pretty clear so far. But now: how do I get the width of the rectangle in the real world? ...
Answer: OpenCV's facial recognition is slightly larger than a face, therefore an average face may not be helpful. Instead, just take a picture of a face at different distances from the camera and record the distance from the camera along with the pixel width of the face for several distances. After plotting the two variables o...
Title: How can I print pandas pivot table to word document
Q_Id: 50,376,081 · A_Id: 50,376,159 · CreationDate: 2018-05-16T16:42:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,python-2.7,pandas,ms-word,pivot-table
Q_Score: 1 · Users Score: -1 · Score: -0.099668 · AnswerCount: 2 · Available Count: 1 · ViewCount: 2,260
Question: Could some one please explain with an example about how we can send the pivot table output from pandas to word document without losing format.
Answer: I suggest exporting it to csv using pandas.to_csv(), and then read the csv to create a table in a word document.
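The first half of the answer above, the CSV round-trip of a pivot table, can be sketched as follows (the frame, file name, and column names are illustrative; building the Word table from the CSV would then be done with a library such as python-docx, which is outside this sketch):

```python
import pandas as pd

# Hypothetical sales data and a simple pivot table over it.
df = pd.DataFrame({"region": ["east", "east", "west"], "sales": [1, 2, 3]})
pivot = df.pivot_table(index="region", values="sales", aggfunc="sum")

# Round-trip through CSV; the CSV text can then be imported into Word as a table.
pivot.to_csv("pivot.csv")
back = pd.read_csv("pivot.csv", index_col="region")
print(back)
```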
Title: Neural Network Predict Soccer Results
Q_Id: 50,382,706 · A_Id: 50,382,763 · CreationDate: 2018-05-17T02:58:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,c,tensorflow,neural-network
Q_Score: 0 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 968
Question: I've going on a little competition with friends: We're writing models for the upcoming worldcup to see whoms model gets the most points out of a tip game. So my approach would be to write a neural network and train it with previous worldcup results regarding to the anticipated wining rates (back then), to maximize the ...
Answer: I did this for the 2017 UK Premier league season. However I accumulated the first 19 (out of 38 Games), in order to try and help with my predictions. I would attack this in the following manner Get Data on the Teams (What you consider data I will leave up to you) Get the History of previous matches (Personally I think...
Title: What if a categorical column has multiple values in the train set but only one in test data? Would such a feature be useful in model training at all?
Q_Id: 50,385,511 · A_Id: 50,385,903 · CreationDate: 2018-05-17T07:15:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,machine-learning,regression,data-science,feature-selection
Q_Score: 0 · Users Score: 0 · Score: 0 · AnswerCount: 1 · Available Count: 1 · ViewCount: 52
Question: I am trying to solve a regression problem, where in one of my features can take up two values ('1','0') in the train set but can be valued only '1' in the test data. Intuitively, including this feature seems wrong to me but I am unable to find a concrete logic to support my assumption.
Answer: well, It depends on how many features you have in total. If very few (say less than five), that single feature will most likely play an important role in your classification. In this case, I would say you have "Data Mismatch" problem; meaning that your training and test data are coming from different distributions. One...
Title: Weight file use for different size image
Q_Id: 50,392,211 · A_Id: 50,392,683 · CreationDate: 2018-05-17T13:04:00.000 · is_accepted: false
Topics: Data Science and Machine Learning · Tags: python,keras,deep-learning,convolutional-neural-network
Q_Score: 2 · Users Score: 1 · Score: 0.197375 · AnswerCount: 1 · Available Count: 1 · ViewCount: 97
Question: I want to train CNN where image-dimension is 128*512, then I want to use this weight file to train other data which has 128*1024 dimension. That means I want to use pre-trained weight file during the training time of different data(128*1024). Is it possible or How can I do it? I want to do this because I have only 300 ...
Answer: If your model is fully CNN, there is absolutely no need to have different models. A CNN model can take images of any size. Just make sure the input_shape=(None,None,channels) You will need separate numpy arrays though, one for the big images, another for the small images, and you will have to call a different fit meth...
0
50,410,930
1
0
0
0
1
false
0
2018-05-18T07:49:00.000
0
1
0
How do I capture a live IP Camera feed (eg: http://61.60.112.230/view/viewer_index.shtml?id=938427) to python application?
50,406,354
0
python,opencv,camera,video-streaming
First, you need to find the actual path to the camera stream; it is an MJPEG or RTSP stream. Use the developer tools on the page to find the stream, like http://ip/video.mjpg or rtsp://ip/live.sdp. When you find the stream URL, create a Python script which will use the stream like capture = cv2.VideoCapture(stream_url). But, as noticed,...
I need to perform image recognition thing on real time camera feed. The video is embedded in shtml hosted on a IP. How do I access the video and process things using openCV.
0
1
378
0
50,412,247
0
0
0
0
1
false
0
2018-05-18T13:15:00.000
0
1
0
Reconstructing a classified image from a simple Convolution Neural Network
50,412,204
0
python,tensorflow,convolutional-neural-network,deconvolution
You should have a look at cGAN implementations, and why not DeepDream from Google ;) The answer is yes, it is possible; however, it is not straightforward.
I have a CNN trained on a classification task (Network is simple, 2 convolution + pooling layers and 2x fully connected layers). I would like to use this to reconstruct an image if I input a classification label. Is this possible to achieve? Is it possible to share weights between corresponding layers in the 2 networ...
0
1
25
0
50,414,131
0
1
0
0
1
false
0
2018-05-18T14:53:00.000
4
1
0
Effective passing of large data to python 3 functions
50,414,041
0.664037
python,python-3.x
Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references). All objects are allocated on the heap. Variables are always a reference/pointer to the object. The value, which is the pointer, is copied. The object remains on the heap and is n...
I am coming from a C++ programming background and am wondering if there is a pass by reference equivalent in python. The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having copy the array to a new temporar...
0
1
507
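A minimal sketch of the argument-passing behavior described in the answer above: the function receives a reference to the caller's array, so in-place operations avoid any copy (the names here are illustrative):

```python
import numpy as np

def scale_in_place(arr, factor):
    # 'arr' is a reference to the caller's array; passing it makes no copy.
    arr *= factor  # in-place multiply, visible to the caller

big = np.ones(5)
scale_in_place(big, 3.0)
print(big)  # [3. 3. 3. 3. 3.]
```

Note that writing arr = arr * factor inside the function would instead rebind the local name to a brand-new array and leave the caller's data unchanged.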
0
52,691,471
0
0
0
0
1
false
1
2018-05-18T18:57:00.000
0
1
0
tensorflow.python.framework.errors_impl.NotFoundError: data/kitti_label_map.pbtxt; No such file or directory
50,417,709
0
python,python-3.x,tensorflow
I don't have a definitive solution to this, but here is what resolved it. First, I copied kitti_label_map.pbtxt into the data_dir. Then I also copied create_kitti_tf_record.py into the data_dir. And then I copied (this is what made it run in the end) the name and absolute path of the kitti_label_map.pbtxt and pasted i...
I'm trying to convert the kitti dataset into the tensorflow .record. After I typed the command: python object_detection/dataset_tools/create_kitti_tf_record.py --lable_map_path=object_detection/data/kitti_label_map.pbtxt --data_dir=/Users/zhenglyu/Graduate/research/DataSet/kitti/data_object_image_2/testing/image_2...
0
1
4,898
0
51,161,622
0
0
0
1
1
false
1
2018-05-19T21:46:00.000
0
2
0
how to get the distance of sequence of nodes in pgr_dijkstra pgrouting?
50,429,760
0
mysql,sql,postgresql,mysql-python,pgrouting
If you want all-pairs distances, then use select * from pgr_apspJohnson('SELECT gid AS id, source, target, rcost_len AS cost FROM finalroads')
I have an array of integers(nodes or destinations) i.e array[2,3,4,5,6,8] that need to be visited in the given sequence. What I want is, to get the shortest distance using pgr_dijkstra. But the pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and...
0
1
708
0
50,448,522
0
0
0
0
1
false
0
2018-05-21T11:22:00.000
0
1
0
TensorFlow: Print Internal State of RNN at at Every Time Step
50,447,786
0
python,debugging,tensorflow,lstm
Okay, so the issue was that I was modifying the output but wasn't updating the output_size of the LSTM itself; hence the error. It works perfectly fine now. However, I still find this method to be extremely annoying. I am not accepting my own answer, in the hope that somebody will have a cleaner solution.
I am using the tf.nn.dynamic_rnn class to create an LSTM. I have trained this model on some data, and now I want to inspect what are the values of the hidden states of this trained LSTM at each time step when I provide it some input. After some digging around on SO and on TensorFlow's GitHub page, I saw that some peopl...
0
1
217
0
53,102,357
0
1
0
0
1
false
0
2018-05-22T09:01:00.000
0
1
0
Python has stopped working(APPCRASH) Anaconda
50,463,731
0
python,anaconda
Update all the libraries to the latest version in Anaconda and try again. I was facing a similar situation when I was running the code for a convolutional neural network in Spyder under an Anaconda environment (Windows 7). I was getting the following error Problem Event Name: APPCRASH Application Name: pythonw.exe Fault Module ...
When I try to build Linear Regression model with training data in Jupyter notebook, python has stopped working, with error as shown below. I am using Anaconda 3.5 on windows7, Python 3.6 version. Problem signature: Problem Event Name: APPCRASH Application Name: python.exe Application Version: 3.6.4150.1013 A...
0
1
2,067
0
50,487,464
0
0
0
0
1
false
0
2018-05-22T13:41:00.000
0
2
0
How to use classifier random forest in Python for 2 different data sets?
50,469,243
0
python,random-forest
You could try to put NUM as a single column, and the first and second datasets would use completely independent columns, with the non-matching cells containing empty data. Whether the results will be any good, will depend much on your data.
I have 2 data sets with different variables. But both includes a variable, say NUM, that helps to identify the occurrence of an event. With the NUM, I was able to identify the event, by labelling it. How can one run RF to effectively include considerations of the 2 datasets? I am not able to append them (column wise) ...
0
1
120
0
56,719,666
0
0
0
0
1
false
0
2018-05-22T16:06:00.000
0
2
0
How can I solve a system of linear equations in python faster than with numpy.linalg.lstsq?
50,472,095
0
python,performance,numpy,math,computer-science
If your coefficient matrix is sparse, use "spsolve" from "scipy.sparse.linalg".
I am trying to solve a linear system spanning somewhat between hundred thousand and two hundred thousand equations with numpy.linalg.lstsq but is taking waaaaay too long. What can I do to speed this up? The matrix is sparse with hundreds of columns (the dimensions are approximately 150 000 x 140) and the system is over...
0
1
2,001
0
59,359,358
0
0
0
1
2
false
1
2018-05-22T16:16:00.000
0
2
1
superset dashboards - dynamic updates
50,472,282
0
python,oracle,superset
You could set the auto-refresh interval for a dashboard if you click on the arrow next to the Edit dashboard-button.
I'm testing Apache superset Dashboards, It s a great tool. I added an external Database source (Oracle), and I created nice Dashboards very easily. I would like to see my Dashboards updated regularly and automatically (3 times a day) in superset. But my Dashboards are not updated. I mean when a row is inserted into the...
1
1
1,509
0
50,473,011
0
0
0
1
2
false
1
2018-05-22T16:16:00.000
1
2
1
superset dashboards - dynamic updates
50,472,282
0.099668
python,oracle,superset
I just found the origin of my error... :-) In fact, I had added records dated in the future (tomorrow, the day after, ...), and my dashboard was only showing records up to today's date. I inserted a record dated earlier, refreshed, and it appeared. Thanks for reading...
I'm testing Apache superset Dashboards, It s a great tool. I added an external Database source (Oracle), and I created nice Dashboards very easily. I would like to see my Dashboards updated regularly and automatically (3 times a day) in superset. But my Dashboards are not updated. I mean when a row is inserted into the...
1
1
1,509
0
50,506,830
0
0
0
0
1
false
0
2018-05-24T09:42:00.000
0
1
0
Which is more efficient ? Fetching data directly from databse or from an HTML page in pandas dataframe?
50,506,046
0
python,database,pandas,api,dataframe
This depends on a lot of things, namely: is your database in the same local network as your application server? is the website used in the same local network as your application server? how large is the table you have in the website? how large is your database? can a user do some trusted changes on the webpage that is...
I am making graphs using bokeh in python and for that I was calling an HTML page to fetch data inside my dataframe but now I want to fetch data directly from database inside my dataframe. So which method is more efficient?
1
1
44
0
50,517,141
0
0
0
0
2
false
0
2018-05-24T19:47:00.000
1
4
0
Generate an array of N random integers, between 1 and K, but containing at least one of each number
50,517,089
0.049958
python,numpy,random
Fill the array cyclically with the numbers 1 to K, so if K were 2 and N were 4: [1, 2, 1, 2]. Then generate two random positions between 1 and N that are not equal, and swap the numbers at those positions.
I need to generate a matrix of N random integers between 1 and K, where each number appears at least once, having K ≤ N. I have no problem using a call to numpy.random.random_integers() and checking the number of distinct elements, when K is much less than N, but it's harder to get a valid array when K approximates to ...
0
1
152
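The idea in the answers above (seed one copy of every value, fill the rest randomly, then mix) can be sketched with numpy; the helper name is illustrative:

```python
import numpy as np

def random_with_all_values(n, k, seed=None):
    """Return n random integers in 1..k, guaranteed to contain each value at least once."""
    rng = np.random.default_rng(seed)
    # One copy of every value, plus n-k unconstrained draws, then shuffle.
    arr = np.concatenate([np.arange(1, k + 1), rng.integers(1, k + 1, size=n - k)])
    rng.shuffle(arr)
    return arr

sample = random_with_all_values(10, 4)
```

This avoids the rejection loop (draw, check distinct count, retry), which gets slow precisely when K approaches N.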
0
50,517,145
0
0
0
0
2
false
0
2018-05-24T19:47:00.000
1
4
0
Generate an array of N random integers, between 1 and K, but containing at least one of each number
50,517,089
0.049958
python,numpy,random
Fill K slots with the numbers from range(1, K+1) and then fill the remaining (N-K) slots using a random number generator.
I need to generate a matrix of N random integers between 1 and K, where each number appears at least once, having K ≤ N. I have no problem using a call to numpy.random.random_integers() and checking the number of distinct elements, when K is much less than N, but it's harder to get a valid array when K approximates to ...
0
1
152
0
50,520,768
0
0
0
0
1
false
0
2018-05-24T21:53:00.000
0
1
0
LSTM using generator function
50,518,710
0
python,neural-network,keras,lstm,recurrent-neural-network
Personally, I recommend using the PReLU activation function on the fully connected dense layer. For example: model.add(LSTM(128, input_shape=(train_X.shape[1], train_X.shape[2]))) model.add(BatchNormalization()) model.add(Dropout(.2)) model.add(Dense(64)) model.add(PReLU())
I am trying to create a model that has an LSTM layer of 100 units with input dimensions (16,48,12) (16 is the batch size as it takes input through a generator function). The generator function produces an expected output of (16, 1, 2) (16 is the batch size) and I want to use as output a dense layer with a softmax activ...
0
1
240
0
50,525,619
0
0
0
0
1
false
0
2018-05-25T08:36:00.000
0
1
0
How do Encoder/Decoder models learn in Deep Learning?
50,524,927
0
python,tensorflow,machine-learning,keras
The encoder learns a compressed representation of the input data and the decoder tries to learn how to use just this compressed representation to reconstruct the original input data as best as possible. Let's say that the initial weights (usually randomly set) produce a reconstruction error of e. During training, both ...
After learning a bit about encoder/decoder models in deep learning (mostly in Keras), i still cannot understand where the learning takes place. Does the encoder just create the feature map and then the decoder tries to get as close as possible as the result with BackProp, or does the encoder learn as well when the mode...
0
1
135
0
52,078,671
0
0
0
0
1
false
1
2018-05-25T14:13:00.000
0
1
0
Fit method of gensim.sklearn_api.w2vmodel.W2VTransformer throws error when inputed 2-dimensional array of strings
50,531,181
0
python,arrays,python-3.6,word2vec,gensim
It seems that gensim's word2vec has some problems when working with numpy arrays. Converting data to python lists helped me.
i'm trying to cluster some documents with word2vec and numpy. w2v = W2VTransformer() X_train = w2v.fit_transform(X_train) When I run the fit or fit_transform I get this error: Exception in thread Thread-8: Traceback (most recent call last): File "C:\Users\lperona\AppData\Local\Continuum\anaconda3\lib\threading.p...
0
1
380
0
61,811,370
0
0
0
0
1
false
8
2018-05-25T19:09:00.000
0
3
0
Cython + OpenCV and NumPy
50,535,498
0
python,numpy,opencv,cython
It all depends on what your program is doing. If your program is just stringing together large operations that are implemented in C++, then Cython isn't going to help you much, if at all. If you are writing code that works directly on individual pixels, then Cython could be a big help.
I have a program with mainly OpenCV and NumPy, with some SciPy as well. The system needs to be a real-time system with a frame rate close to 30 fps but right now only about 10 fps. Will using Cython help speed this up? I ask because OpenCV is already written in C++ and should already be quite optimized, and NumPy, as f...
0
1
7,781
0
50,557,193
0
0
0
0
1
true
1
2018-05-26T17:23:00.000
0
1
0
What affects tensorlayer.prepro.threading_data's return type?
50,545,324
1.2
python,list,types,return,numpy-ndarray
It seems that what was causing this problem was having items with different shapes in the list. In this instance, PNG images with 3 and 4 channel. Removing the alpha channel (the fourth channel) from all PNG images solved this for me.
I've been trying to use tensorlayer.prepro.threading_data, but I'm getting a different return type for different inputs. Sometimes it returns an ndarray and sometimes it returns a list. The documentation doesn't specify what's the reason for the different return types. Can anyone shed some light on this?
0
1
51
0
68,709,535
0
0
0
0
1
false
1
2018-05-27T02:35:00.000
-1
1
0
Extract only points inside polygon
50,548,575
-0.197375
python,pyspark,arcgis,pyspark-sql,point-in-polygon
Add the CSV to the map along with the polygon layer for selection. Open the Select by Location tool and add it to the processing. The target layer is the point layer (the CSV), and the source layer is the polygon layer. Set the spatial selection for target layer feature(s) to "completely contain the source layer feature selected", and click Apply. Notes: you must configure the projec...
I have two CSV file one contain points for polygon around 2000 point (lat, long). another file has more than 1 billion row ( id, lat, long). how to extract only the points intersect(inside) the polygon by pyspark
0
1
222
0
50,564,217
0
0
1
0
1
false
0
2018-05-28T04:25:00.000
1
1
0
How to write video to memory in OpenCV
50,559,105
0.197375
python,python-3.x,opencv,video,video-capture
Video files will often, or even generally, be too big to fit in your main memory, so you are unlikely to be able to just keep the entire video there. It is also worth noting that your OS itself may decide to move data between fast memory, slower memory, disk, etc. as it manages multiple processes, but that is likely not i...
Can you write video to memory? I use Raspberry Pi and I don't want to keep writing and deleting videowriter objects created on sd card (or is it ok to do so?). If conditions are not met I would like to discard written video every second. I use type of motion detector recording and I would like to capture moment (one se...
0
1
1,432
0
50,597,931
0
0
0
0
1
false
0
2018-05-28T07:27:00.000
0
1
0
Unsupervised Outlier detection
50,561,180
0
python-3.x,cluster-analysis,curve-fitting,outliers,lmfit
What happens if you just treat the 6 points as a 12 dimensional vector and run any of the usual outlier detection methods such as LOF and LoOP? It's trivial to see the relationship between Euclidean distance on the 12 dimensional vector, and the 6 Euclidean distances of the 6 points each. So this will compare the simil...
I have 6 points in each row and have around 20k such rows. Each of these row points are actually points on a curve, the nature of curve of each of the rows is same (say a sigmoidal curve or straight line, etc). These 6 points may have different x-values in each row.I also know a point (a,b) for each row which that curv...
0
1
185
0
50,574,746
0
0
0
0
1
true
1
2018-05-28T11:26:00.000
0
1
0
Why did the numpy core size shrink from 1.14.0 to 1.14.1?
50,565,322
1.2
python,numpy,shared-libraries
It appears that the difference is in the debug symbols. Perhaps one was built with a higher level of debug symbols than the other, or perhaps the smaller one was built with compressed debug info (a relatively new feature). One way to find out more would be to inspect the compiler and linker flags used during each bui...
When creating an AWS lambda package I noticed that the ZIP became a lot smaller when I updated from numpy 1.14.0 to 1.14.3. From 24.6MB to 8.4MB. The directory numpy/random went from 4.3MB to 1.2MB, according to Ubuntus Disc Usage analyzer. When I, however, compare the directories with meld they seem to be identical. S...
0
1
60
0
50,583,288
0
0
0
0
1
false
0
2018-05-28T15:43:00.000
0
2
0
splitting data in to test train data when there is unbalance of data
50,569,782
0
python,machine-learning
Try oversampling, as you have a small amount of data. Alternatively, you can use a neural network, preferably an MLP; that works fine with unbalanced data.
i have an unbalanced data set which has two categorical values. one has around 500 values of a particular class and other is only one single datapoint with another class.Now i would like to split this data into test train with 80-20 ratio. but since this is unbalanced , i would like to have the second class to be prese...
0
1
61
0
50,573,045
0
0
0
0
1
false
1
2018-05-28T20:07:00.000
2
2
0
How to increasing Number of epoch in keras Conv Net
50,572,884
0.197375
python-3.x,tensorflow,keras,deep-learning
Simply call model.fit(data, target, epochs=100, batch_size=batch_size) again to carry on training the same model. model needs to be the same model object as in the initial training, not re-compiled.
Suppose, I have a model that is already trained with 100 epoch. I tested the model with test data and the performance is not satisfactory. So I decide to train for another 100 epoch. How can I do that? I trained with model.fit(data, target, epochs=100, batch_size=batch_size) Now I want to train the same model without ...
0
1
1,201
0
50,594,107
0
0
0
0
1
false
1
2018-05-28T20:14:00.000
0
2
0
Managing classes in tensorflow object detection API
50,572,962
0
python,python-3.x,tensorflow,object-detection,object-detection-api
Shrinking the last layer to output 1 or two classes is not likely to yield large speed ups. This is because most of the computation is in the intermediate layers. You could shrink the intermediate layers, but this would result in poorer accuracy.
I'm working on a project that requires the recognition of just people in a video or a live stream from a camera. I'm currently using the tensorflow object recognition API with python, and i've tried different pre-trained models and frozen inference graphs. I want to recognize only people and maybe cars so i don't need ...
0
1
980
0
50,579,077
0
1
0
0
1
false
2
2018-05-29T07:48:00.000
1
3
0
Creating an empty multidimensional array
50,579,027
0.066568
python,numpy
I am guessing that by empty, you mean an array filled with zeros; use np.zeros() to create an array of zeros. np.empty() just allocates the array, so the numbers in there are garbage. It is provided as a way to avoid even the cost of setting the values to zero. But it is generally safer to use np.zeros().
In Python when using np.empty(), for example np.empty((3,1)) we get an array that is of size (3,1) but, in reality, it is not empty and it contains very small values (e.g., 1.7*(10^315)). Is possible to create an array that is really empty/have no values but have given dimensions/shape?
0
1
4,649
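A small demonstration of the distinction drawn in the answer above: np.empty() only allocates, while np.zeros() allocates and initializes:

```python
import numpy as np

# np.empty allocates without initializing: contents are whatever bytes were in memory.
garbage = np.empty((3, 1))

# np.zeros allocates and initializes every element to 0.0.
clean = np.zeros((3, 1))
print(clean.shape, clean.sum())  # (3, 1) 0.0
```

Both arrays have the requested shape; there is no such thing as an array with dimensions but no values, since the memory always holds something.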
0
50,711,224
0
0
0
0
2
false
1
2018-05-29T09:08:00.000
0
3
0
For a given sparse matrix, how can I multiply it with a given vector of binary values
50,580,459
0
python,numpy,scipy,sparse-matrix,linear-algebra
If you don't like the speed of matrix multiplication, then you have to consider modification of the matrix attributes directly. But depending on the format that may be slower. To zero-out columns of a csr, you can find the relevant nonzero elements, and set the data values to zero. Then run the eliminate_zeros method...
I have a sparse matrix and another vector and I want to multiply the matrix and vector so that each column of the vector where it's equal to zero it'll zero the entire column of the sparse matrix. How can I achieve that?
0
1
357
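Both routes described in the answer above can be sketched with scipy.sparse: multiplying by a diagonal matrix built from the binary vector, or zeroing the stored values directly and calling eliminate_zeros() (the small example matrix is illustrative):

```python
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.arange(1, 13).reshape(3, 4))
mask = np.array([1, 0, 1, 0])  # binary vector: zero out columns where mask == 0

# Route 1: multiply by a diagonal matrix built from the mask.
zeroed = A @ sparse.diags(mask)

# Route 2: zero the stored values whose column is masked out, then drop them.
B = A.copy()
B.data[~mask.astype(bool)[B.indices]] = 0  # B.indices holds each entry's column
B.eliminate_zeros()
```

Route 2 touches only the CSR internals, which is what the answer means by modifying the matrix attributes directly.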
0
50,589,936
0
0
0
0
2
false
1
2018-05-29T09:08:00.000
1
3
0
For a given sparse matrix, how can I multiply it with a given vector of binary values
50,580,459
0.066568
python,numpy,scipy,sparse-matrix,linear-algebra
The main problem is the size of your problem and the fact you're using Python which is on the order of 10-100x slower for matrix multiplication than some other languages. Unless you use something like Cython I don't see you getting an improvement.
I have a sparse matrix and another vector and I want to multiply the matrix and vector so that each column of the vector where it's equal to zero it'll zero the entire column of the sparse matrix. How can I achieve that?
0
1
357
0
50,580,610
0
0
0
0
1
true
1
2018-05-29T09:11:00.000
0
1
0
Is it possible to execute tensorflow-on-spark program without gpu suppport?
50,580,534
1.2
python,apache-spark,tensorflow,pyspark
Yes, it is possible to use CPU. Tensorflow will automatically use CPU if it doesn't find any GPU on your system.
I want tensor-flow-on-spark programs(for learning purpose),& I don't have a gpu support . Is it possible to execute tensor-flow on spark program without GPU support? Thank you
0
1
149
0
52,261,793
0
0
1
0
2
false
1
2018-05-29T15:14:00.000
0
2
0
How to get phase and frequency of complex CSI for channel impulse responses?
50,587,784
0
python,frequency,phase,amplitude,csi
In MATLAB, you do abs(csi) to get the amplitude and angle(csi) to get the phase. Search for the equivalent functions in Python.
I have measurements of channel impulse responses as complex CSI's. There are two transmitters Alice and Bob and the measurements look like [real0], [img0], [real1], [img1], ..., [real99], [img99] (100 complex values). Amplitude for the Nth value is ampN = sqrt(realN^2 + imgN^2) How do I get the frequency and phase valu...
0
1
364
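In Python the numpy equivalents are np.abs and np.angle; a sketch assuming the interleaved real/imaginary layout described in the question:

```python
import numpy as np

# Hypothetical interleaved measurement: real0, img0, real1, img1, ...
raw = np.array([3.0, 4.0, 1.0, 0.0, 0.0, 2.0])
csi = raw[0::2] + 1j * raw[1::2]  # rebuild the complex values

amplitude = np.abs(csi)   # sqrt(real^2 + imag^2), matching MATLAB's abs()
phase = np.angle(csi)     # radians, matching MATLAB's angle()
print(amplitude)  # [5. 1. 2.]
```

Frequency is a different question: a single snapshot of complex CSI has no frequency by itself; you would look at how the phase evolves across subcarriers or across time.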
0
50,593,481
0
0
1
0
2
false
1
2018-05-29T15:14:00.000
0
2
0
How to get phase and frequency of complex CSI for channel impulse responses?
50,587,784
0
python,frequency,phase,amplitude,csi
Complex-valued Channel State Information? Python has cmath, a standard library for complex-number math, but numpy and scipy.signal will probably ultimately be more useful to you.
I have measurements of channel impulse responses as complex CSI's. There are two transmitters Alice and Bob and the measurements look like [real0], [img0], [real1], [img1], ..., [real99], [img99] (100 complex values). Amplitude for the Nth value is ampN = sqrt(realN^2 + imgN^2) How do I get the frequency and phase valu...
0
1
364
0
50,591,996
0
1
0
0
1
true
0
2018-05-29T19:29:00.000
0
1
0
Runtime error when importing basemap into spyder
50,591,637
1.2
python,runtime-error,spyder,matplotlib-basemap
(Spyder maintainer here) This error was fixed in our 3.2.8 version, released in March/2018. Since you're using Anaconda, please open the Anaconda prompt and run there conda update spyder to get the fix.
I installed spyder onto my computer a few months ago and it has worked fine until I needed to produce a map with station plots and topography. I simply tried to import matplotlib-basemap and get the following error: File "<ipython-input-12-6634632f8d36>", line 1, in runfile('C:/Users/Isa/Documents/Freedman/2018/ENV...
0
1
453
0
50,611,284
0
0
0
0
1
false
0
2018-05-30T10:14:00.000
-2
2
0
Tensorflow contrib learn deprecation warnings
50,602,085
-0.197375
python,tensorflow,machine-learning,deprecation-warning
All of these warning have instructions for updating. Follow the instructions: switch to tf.data for preprocessing.
When I am using the below line in my code vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length, vocabulary=bow) I get theses warnings. How do I eliminate them ? WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from...
0
1
1,335
0
51,165,112
0
0
0
0
2
false
0
2018-05-30T15:56:00.000
0
2
0
TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed
50,609,002
0
python,debugging,tensorflow,visual-studio-code
You can simply stop at the break point, and switch to DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code.
I'm new (obviously) to python, but not so new to TensorFlow I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console: WARNING:tensorflow:Tensor._shape is...
0
1
1,348
0
50,609,100
0
0
0
0
2
false
0
2018-05-30T15:56:00.000
0
2
0
TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed
50,609,002
0
python,debugging,tensorflow,visual-studio-code
Probably yes, you may have to wait: in debug mode a deprecated function is being called. As a workaround, you can print out the shape explicitly by accessing var.shape in your code. I know it's not very convenient.
I'm new (obviously) to python, but not so new to TensorFlow I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console: WARNING:tensorflow:Tensor._shape is...
0
1
1,348
0
50,734,768
0
0
1
0
1
true
0
2018-05-30T17:55:00.000
0
1
0
Mutation algorithm efficiency
50,610,831
1.2
python,numpy,statistics,genetic-algorithm
Yes. Suppose your gene length is 100 and your mutation rate is 0.1, then picking 100*0.1=10 random indices and mutating them is faster than generating & checking 100 numbers.
Instead of iterating through each element in a matrix and checking if random() returns lower than the rate of mutation, does it work if you generate a certain amount of random indices that match the rate of mutation or is there some other method?
0
1
54
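A sketch of the index-sampling approach above. Note one subtle difference: drawing exactly rate*N distinct indices mutates a fixed number of genes per genome, whereas the per-element random() < rate test gives a binomially distributed count:

```python
import numpy as np

rng = np.random.default_rng(0)
genes = np.zeros(100, dtype=int)  # illustrative binary genome
mutation_rate = 0.1

# Instead of testing random() < rate for each element, draw rate*N indices directly.
n_mutations = int(mutation_rate * genes.size)
idx = rng.choice(genes.size, size=n_mutations, replace=False)
genes[idx] ^= 1  # flip the chosen bits

print(genes.sum())  # exactly 10 positions mutated
```

For a whole population matrix the same trick applies row by row, or via one flat index draw over the full matrix.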
0
50,645,446
0
0
0
0
1
false
1
2018-05-30T21:00:00.000
0
1
0
Finding original features' effect to the principal components used as inputs in Kernel PCA
50,613,294
0
python,machine-learning,cluster-analysis,pca,kernel-density
Since you are applying PCA in the kernel space, there is a strictly nonlinear relation with your original features and the features of the reduced data; the eigenvectors you calculate are in the kernel space to begin with. This obstructs a straightforward approach, but maybe you can do some kind of sensitivity analysis...
I am trying to implement Kernel PCA to my dataset which has both categorical (encoded with one hot encoder) and numeric features and decreases the number of dimensions from 22 to 3 dimensions in total. After that, I will continue with clustering implementation. I use Spyder as IDE. In order to understand the structure ...
0
1
91
0
50,633,400
0
0
0
0
1
false
0
2018-05-31T17:56:00.000
0
2
0
Will OpenCV 3.2+ FileStorage save the SimpleBlobDetector_create object in XML or YML?
50,630,168
0
python-3.x,opencv,file-storage
I have lately had some trouble when using FileStorage with XML or YAML (it appears to be some kind of bug in the OpenCV source code). I would recommend trying it with JSON. In order to do so, just change the name of the file to XXXX.json. If you are saving self-constructed structures as well, just construct the stru...
I am fairly new to OpenCV 3+ in Python. It looks to me that FileStorage under Python does not support, for example, a writeObj() method. Is it possible to save the SimpleBlobDetector_create to an XML or YAML file using OpenCV 3+ in Python? Another way to put it is this: using Python OpenCV, can I save XML/YAML data th...
0
1
362
0
50,634,001
0
1
0
0
1
true
2
2018-05-31T22:09:00.000
0
1
0
import sklearn in python
50,633,488
1.2
python,import,scikit-learn
Why not download the full Anaconda? It will install everything you need to start, including the Spyder IDE, RStudio, Jupyter, and all the needed modules. I have been using Anaconda without any errors and I recommend you try it out.
I installed miniconda for Windows10 successfully and then I could install numpy, scipy, sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in anaconda prompt. It recognized my python version, which was 3.6.5, correctly. I don't know what's wrong, can anyone tell me ho...
0
1
1,095
0
69,603,564
0
1
0
0
1
false
3
2018-06-01T01:04:00.000
0
2
0
Pycharm Can't install TensorFlow
50,634,751
0
python,tensorflow,pip,pycharm,conda
What worked for me is this: I installed TensorFlow in the command prompt as an administrator using the command pip install tensorflow, then I jumped back to PyCharm and clicked the red light-bulb pop-up icon. It will have a few options when you click it; just select the one that says install tensorflow. This would no...
I cannot install tensorflow in pycharm on windows 10, though I have tried many different things: went to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but ...
0
1
16,417
0
50,648,721
0
0
0
0
1
false
9
2018-06-01T16:44:00.000
0
4
0
Can you use loc to select a range of columns plus a column outside of the range?
50,647,832
0
python,python-3.x,pandas,dataframe
You can use pandas.concat(): pd.concat([df.loc[:, 'column_1':'column_60'], df.loc[:, 'column_81']], axis=1)
Suppose I want to select a range of columns from a dataframe: Call them 'column_1' through 'column_60'. I know I could use loc like this: df.loc[:, 'column_1':'column_60'] That will give me all rows in columns 1-60. But what if I wanted that range of columns plus 'column_81'. This doesn't work: df.loc[:, 'column_1':'c...
0
1
4,458
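The pd.concat approach above, on a small illustrative frame (columns a..e stand in for column_1..column_81):

```python
import numpy as np
import pandas as pd

# Hypothetical frame: columns a..e stand in for column_1..column_81.
df = pd.DataFrame(np.arange(15).reshape(3, 5), columns=list("abcde"))

# Stitch the label-sliced range together with the extra column.
out = pd.concat([df.loc[:, "a":"c"], df.loc[:, ["e"]]], axis=1)
print(list(out.columns))  # ['a', 'b', 'c', 'e']
```

Note that label slicing with .loc is inclusive of both endpoints, unlike positional slicing.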
0
51,454,563
0
1
0
0
1
false
0
2018-06-02T10:28:00.000
0
1
0
Wordcloud-Pillow issue
50,655,939
0
python,python-imaging-library,python-import,word-cloud
This is a Pillow problem, rather than a WordCloud problem. As it says, your Pillow installation has somehow become part 4.2.1, part 5.1.0. The simplest solution would be to reinstall Pillow.
from wordcloud import WordCloud While importing WordCloud I get below issue.Can you help with this? RuntimeWarning: The _imaging extension was built for another version of Pillow or PIL: Core version: 5.1.0 Pillow version: 4.2.1 "The _imaging extension was built for Python with UCS2 support; " ImportError ...
0
1
270
0
55,304,529
0
0
0
0
1
false
0
2018-06-03T08:14:00.000
-1
3
0
Train CNN model with multiple folders and sub-folders
50,664,485
-0.066568
python,tensorflow,keras
Use os.walk to access all the files in the sub-directories recursively and append them to the dataset.
I am developing a convolution neural network (CNN) model to predict whether a patient in category 1,2,3 or 4. I use Keras on top of TensorFlow. I have 64 breast cancer patient data, classified into four category (1=no disease, 2= …., 3=….., 4=progressive disease). In each patient's data, I have 3 set of MRI scan images...
0
1
1,045
0
50,683,342
0
0
0
0
1
true
3
2018-06-04T13:46:00.000
3
2
0
How to normalize data when using Keras fit_generator
50,682,119
1.2
python,tensorflow,machine-learning,keras,keras-2
The generator does allow you to do on-the-fly processing of data, but pre-processing the data prior to training is the preferred approach: pre-processing and saving avoids processing the data for every epoch; in the generator you should really just do small operations that can be applied to batches. One-hot encoding for example is a commo...
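A minimal numpy sketch of the suggested split (array sizes and names are illustrative): compute dataset-wide statistics once up front, then let the per-batch generator only apply them.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # stand-in for the full dataset

# One pass over the whole dataset for global statistics...
mean, std = data.mean(axis=0), data.std(axis=0)

def batches(x, batch_size):
    """...so each batch only needs the cheap (x - mean) / std step."""
    for start in range(0, len(x), batch_size):
        yield (x[start:start + batch_size] - mean) / std

normalized = np.concatenate(list(batches(data, 32)))
print(normalized.shape)  # (100, 3)
```

The same pattern carries over to a Keras generator: the statistics are computed (or loaded from disk) once, and `fit_generator` batches are normalized against the global values rather than their own.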
I have a very large data set and am using Keras' fit_generator to train a Keras model (TensorFlow backend). My data needs to be normalized across the entire data set; however, when using fit_generator, I have access to relatively small batches of data, and normalization of the data in this small batch is not representati...
0
1
2,686
0
50,692,350
0
0
0
0
1
false
0
2018-06-05T04:53:00.000
-1
2
0
Pandas - Read/Write to the same csv quickly.. getting permissions error
50,692,295
-0.099668
python,pandas,csv,io
Close the file that you are trying to read and write and then try running your script. Hope it helps
I have a script that I am trying to execute every 2 seconds. To begin, it reads a .csv with pd.read_csv. Then it executes modifications on the df and finally overwrites the original .csv with to_csv. I'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open...
0
1
1,596
0
50,709,075
0
0
0
0
1
true
0
2018-06-05T05:38:00.000
1
1
0
Can doc2vec be useful if training on Documents and inferring on sentences only
50,692,739
1.2
python,gensim,training-data,doc2vec
Every corpus and every project's goals are different. Your approach of training on larger docs but then inferring on shorter sentences could plausibly work, but you have to try it to see how well, and then iteratively test whether perhaps shorter training docs (as single sentences or groups-of-sentences) work better, for your ...
I am training with some documents with gensim's Doc2vec. I have two types of inputs: Whole English Wikipedia: Each article of Wikipedia text is considered as one document for doc2vec training. (Total around 5.5 million articles or documents) Some documents related to my project that are manually prepared and coll...
0
1
332
0
50,713,079
0
1
0
0
1
false
1
2018-06-06T04:26:00.000
1
1
0
Pytrends anaconda install conflict with TensorFlow
50,712,246
0.197375
python,tensorflow,anaconda
Try upgrading your version of TensorFlow. I tried it with TensorFlow 1.6.0 and TensorBoard 1.5.1 and it worked fine. I was able to import pytrends.
It seems I have a conflict when trying to install pytrends via anaconda. After submitting "pip install pytrends" the following error arises: tensorflow-tensorboard 1.5.1 has requirement bleach==1.5.0, but you'll have bleach 2.0.0 which is incompatible. tensorflow-tensorboard 1.5.1 has requirement html5lib==0.9999999, b...
0
1
333
0
50,723,414
0
0
0
0
1
false
0
2018-06-06T11:33:00.000
0
2
0
Optimizing RAM usage when training a learning model
50,719,405
0
python,deep-learning,ram,rdp
Slightly orthogonal to your actual question: if your high RAM usage is caused by having the entire dataset in memory for the training, you could eliminate that memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read the next batch, and so on.
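One way to sketch this with pandas (file and column names hypothetical) is to stream the CSV in chunks, so only one batch is resident in memory at a time:

```python
import os
import tempfile

import pandas as pd

# Small stand-in CSV; the real file would be too large to load whole.
path = os.path.join(tempfile.mkdtemp(), "big.csv")
pd.DataFrame({"x": range(10), "y": range(10)}).to_csv(path, index=False)

# chunksize turns read_csv into an iterator of DataFrames, so only one
# chunk lives in memory at a time.
total_rows = 0
for chunk in pd.read_csv(path, chunksize=4):
    # a train-on-batch step would go here
    total_rows += len(chunk)

print(total_rows)  # 10
```

Each `chunk` is an ordinary DataFrame, so any per-batch preprocessing or `train_on_batch`-style call can happen inside the loop.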
I have been working on creating and training a deep learning model for the first time. I did not have any knowledge about the subject prior to the project and therefore my knowledge is limited even now. I used to run the model on my own laptop, but after implementing a well-working OHE and SMOTE I simply couldn't run it o...
0
1
1,499
0
50,730,108
0
0
0
0
1
true
3
2018-06-06T14:46:00.000
8
1
0
How is Word2Vec min_count applied
50,723,303
1.2
python,word2vec,gensim
Words below the min_count frequency are dropped before training occurs. So, the relevant context window is the word-distance among surviving words. This de facto shrinking of contexts is usually a good thing: the infrequent words don't have enough varied examples to obtain good vectors for themselves. Further, while i...
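A pure-Python toy illustration of this (window size and sentence are made up): once the rare word is dropped, the surviving words on either side of it become direct neighbours.

```python
from collections import Counter

sentence = ["the", "cat", "zyx", "sat", "the", "cat", "sat"]  # "zyx" occurs once
min_count = 2

counts = Counter(sentence)
# Words below min_count are dropped BEFORE any windows are formed...
survivors = [w for w in sentence if counts[w] >= min_count]

# ...so with window=1, "cat" and "sat" are now adjacent even though
# "zyx" sat between them in the raw text.
pairs = [(survivors[i], survivors[i + 1]) for i in range(len(survivors) - 1)]
print(("cat", "sat") in pairs)  # True
```

This is only a schematic of the vocabulary-trimming step, not Gensim's actual training loop, but it shows why contexts effectively shrink across dropped words.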
Say that I'm training a (Gensim) Word2Vec model with min_count=5. The documentation tells us what min_count does: Ignores all words with total frequency lower than this. What is the effect of min_count on the context? Let's say that I have a sentence of frequent words (min_count > 5) and infrequent words (min_count <...
0
1
6,773
0
50,787,466
0
1
0
0
1
true
1
2018-06-06T19:21:00.000
1
1
0
Not able to install Spyder in the virtual environment on Anaconda
50,728,057
1.2
python,python-3.x,anaconda,spyder
You should activate your virtual environment and then type: conda install spyder. That should install spyder for that particular virtual environment. If you used pip or pip3 you may have problems.
I was trying to install spyder in the virtual environment on anaconda, but ended up with this debugging error. Executing transaction: \ DEBUG menuinst_win32:init(199): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\Users\Public\Anaconda\envs\tensorflow', env_name: 'tensorflow', mode: 'None', used_mode: 'u...
0
1
777
0
50,748,029
0
0
0
0
1
true
0
2018-06-07T09:27:00.000
0
1
0
Reinforcement Learning with MDP for revenues optimization
50,737,705
1.2
python,optimization,reinforcement-learning,markov-decision-process
I think the biggest thing missing in your formulation is the sequential part. Reinforcement learning is useful when used sequentially, where the next state has to be dependent on the current state (thus the "Markovian"). In this formulation, you have not specified any Markovian behavior at all. Also, the reward is a sc...
I want to model the service of selling seats on an airplane as an MDP (Markov decision process) in order to use reinforcement learning for airline revenue optimization. For that I needed to define what the states, actions, policy, value and reward would be. I thought a little bit about it, but I think there is still somethin...
0
1
232
0
50,739,411
0
0
0
0
1
true
1
2018-06-07T09:44:00.000
3
1
0
How can I read a csv file using panda dataframe from GPU?
50,738,058
1.2
python-3.x,gpu,h2o,h2o4gpu
No. The biggest bottleneck is IO and that’s handled by the CPU.
I am reading a file using file=pd.read_csv('file_1.csv'), which is taking a long time on the CPU. Is there any method to read this using the GPU?
0
1
810
0
50,776,543
0
0
0
0
1
false
1
2018-06-07T14:13:00.000
0
1
0
Processing Images with Depth Information
50,743,476
0
python,opencv
If you want to work with depth-based cameras you can go for Time-of-Flight (ToF) cameras like the picoflexx and picomonstar. They will give you X, Y and Z values, where the X and Y values are the distances of that point from the camera centre (as in 2D space) and Z will give you the direct distance of that point (not perp...
I've been doing a lot of Image Processing recently on Python using OpenCV and I've worked all this while with 2-D Images in the generic BGR style. Now, I'm trying to figure out how to incorporate depth and work with depth information as well. I've seen the documentation on creating simple point clouds using the Left an...
0
1
280
0
50,774,913
0
0
0
0
1
true
0
2018-06-07T17:31:00.000
0
1
0
ARIMA Forecasting
50,747,097
1.2
python,time-series,missing-data,arima
I don't know exactly about your specific domain problem, but these things apply usually in general: If the NA values represent 0 values for your domain specific problem, then replace them with 0 and then fit the ARIMA model (this would for example be the case if you are looking at daily sales and on some days you have...
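In pandas, the two options above might look like this (series values are illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([10.0, np.nan, 30.0, np.nan, 50.0])

# Option 1: the gaps mean "nothing happened" -> treat them as zero.
as_zero = s.fillna(0)

# Option 2: the gaps are missing observations -> estimate them, e.g. linearly.
interpolated = s.interpolate()

print(as_zero.tolist())       # [10.0, 0.0, 30.0, 0.0, 50.0]
print(interpolated.tolist())  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

Which option is right depends entirely on what a missing loan amount means in the business context; the ARIMA fit itself is the same afterwards.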
I have time series data which looks something like this:

Loan_id  Loan_amount  Loan_drawn_date
id_001   2000000      2015-7-15
id_003   100          2014-7-8
id_009   78650        2012-12-23
id_990   100          2018-11-12

I am trying to build an ARIMA forecasting model on this data, which has roughly 550 observ...
0
1
148
0
50,774,153
0
0
0
0
1
false
3
2018-06-07T17:57:00.000
0
2
0
what does the error "Length of label is not same with #data" when I call lightgbm.train
50,747,460
0
python,lightgbm
It simply means that the dimensions of your training examples and the corresponding list of labels do not match. In other words, if you have 10 training instances you need exactly 10 labels. (For multi-label scenarios a better formulation would be to replace "label" by "labelling", or to refer to the size of the array.)
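A quick pre-flight check for this (shapes are illustrative) is to compare the lengths yourself before calling lightgbm.train:

```python
import numpy as np

def check_shapes(X, y):
    """Mirror of the invariant LightGBM enforces: one label per data row."""
    if len(X) != len(y):
        raise ValueError(f"Length of label ({len(y)}) is not same with #data ({len(X)})")

X, y = np.zeros((10, 3)), np.zeros(9)  # oops: only 9 labels for 10 rows

error = None
try:
    check_shapes(X, y)
except ValueError as exc:
    error = str(exc)
print(error)  # Length of label (9) is not same with #data (10)
```

This is just a sketch of the invariant, not LightGBM's own code; in practice the mismatch usually comes from filtering rows of the features without filtering the labels the same way.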
I'm pretty new to LightGBM, and when I try to apply lightgbm.train on my dataset, I get this error: LightGBMError: Length of label is not same with #data I'm not sure where I made a mistake. I tried model = lightgbm.train(params, train_data, valid_sets=test_data, early_stopping_rounds=150, verbose_eval=200) Thanks in ...
0
1
4,332
0
50,753,840
0
0
0
0
1
false
0
2018-06-07T19:07:00.000
1
1
0
readlines and numpy loadtxt gives UnicodeDecodeError after upgrade 18.04
50,748,564
0.197375
python,numpy,ubuntu
Adding encoding='ISO-8859-1' to open() (for readlines) and to numpy.loadtxt did the trick.
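A minimal reproduction and fix (file contents are illustrative; 0xb5 is the micro sign 'µ' in Latin-1 but an invalid start byte in UTF-8):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "data.csv")
# 0xb5 is 'µ' in Latin-1 but an invalid start byte in UTF-8.
with open(path, "wb") as f:
    f.write(b"# values in \xb5m\n1.0 2.0\n3.0 4.0\n")

# Pass the encoding to open() (for readlines) and to loadtxt; the default
# utf-8 codec would raise UnicodeDecodeError on the 0xb5 byte.
with open(path, encoding="ISO-8859-1") as f:
    header = f.readlines()[0]

arr = np.loadtxt(path, encoding="ISO-8859-1")
print(arr.shape)  # (2, 2)
```

The likely reason it worked on 16.04 but not 18.04 is a different default locale/encoding between the two systems; passing the encoding explicitly makes the script independent of that.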
I have a python script which uses readlines and numpy.loadtxt to load a csv file. It works perfectly fine on my desktop running ubuntu 16.04. On my laptop running 18.04 I get (loading the same file) the following error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xb5 in position 446: invalid start byte What can...
0
1
322