Dataset schema. Each record is one Stack Overflow question-answer pair; numeric columns show the observed min/max value, string columns show the observed min/max length. The eight topic columns are 0/1 flags and are summarised as a "Topics" line in the records below.

    Column                              Type      Min / Max
    A_Id                                int64     5.3k / 72.5M
    Q_Id                                int64     5.14k / 60M
    Title                               string    length 15 / 149
    Question                            string    length 49 / 9.42k
    Answer                              string    length 18 / 5.54k
    Tags                                string    length 6 / 90
    CreationDate                        string    length 23 / 23
    is_accepted                         bool      2 classes
    Q_Score                             int64     0 / 1.72k
    Users Score                         int64     -11 / 327
    Score                               float64   -1 / 1.2
    AnswerCount                         int64     1 / 31
    Available Count                     int64     1 / 13
    ViewCount                           int64     7 / 3.27M
    GUI and Desktop Applications        int64     0 / 1
    Networking and APIs                 int64     0 / 1
    Python Basics and Environment       int64     0 / 1
    Database and SQL                    int64     0 / 1
    System Administration and DevOps    int64     0 / 1
    Web Development                     int64     0 / 1
    Data Science and Machine Learning   int64     1 / 1
    Other                               int64     0 / 1
Title: Pandas data precision
Q_Id 43,217,916 · A_Id 43,217,958 · Created 2017-04-04T21:22:00.000 · Tags: python,pandas
Q_Score 24 · Users Score 8 · Score 1 · AnswerCount 3 · Available Count 1 · ViewCount 80,695 · is_accepted: false
Topics: Data Science and Machine Learning
Question: By default the numerical values in a data frame are stored up to 6 decimals only. How do I get the full precision? For example, 34.98774564765 is stored as 34.987746, but I want the full value, and 0.00000565 is stored as 0. Apart from applying formats to each data frame, is there any global setting that helps preservin...
Answer: Your data is stored with the precision corresponding to your dtype (np.float16, np.float32, np.float64). pd.options.display.precision allows you to change the precision used when printing the data.
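
A minimal sketch of the display-precision setting the answer names (the column name and values are illustrative); it changes only how values are printed, not what is stored:

    import pandas as pd

    df = pd.DataFrame({"x": [34.98774564765, 0.00000565]})  # stored as float64

    pd.options.display.precision = 11    # affects printing only
    print(df)                            # shows 34.98774564765 instead of 34.987746
    print(df.loc[1, "x"] == 0.00000565)  # True: full precision was stored all along
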
Title: How to omit NaN values when applying groupby in Pandas
Q_Id 43,219,641 · A_Id 43,219,694 · Created 2017-04-04T23:54:00.000 · Tags: python,pandas
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 266 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a dataset consisting of multiple columns and I want to calculate the average by using the groupby function in Python. However, since some of the values are NaN, I get that the mean of that particular group is NaN when a NaN value is present. I would like to omit this value, not set it to zero or fill it with any ...
Answer: "I get that the mean of that particular group is NaN when a NaN value is present" is false! :) The mean only considers non-null values. You are safe.
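
A small sketch (toy frame, my own column names) confirming the answer: pandas' groupby mean skips NaN rather than propagating it:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, np.nan, 3.0]})

    # Group "a" averages over [1.0] only -> 1.0, not NaN and not 0.
    print(df.groupby("g")["v"].mean())
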
Title: Segmentation fault when I try to run Anaconda Navigator
Q_Id 43,219,679 · A_Id 53,868,885 · Created 2017-04-04T23:58:00.000 · Tags: python,ubuntu,anaconda,navigator
Q_Score 6 · Users Score 1 · Score 0.039979 · AnswerCount 5 · Available Count 2 · ViewCount 9,783 · is_accepted: false
Topics: Python Basics and Environment; System Administration and DevOps; Data Science and Machine Learning
Question: I have recently installed Anaconda for Python 3.6, but it shows the error "Segmentation fault" whenever I try to run Anaconda-Navigator. I've tried just writing Anaconda-Navigator in the terminal, and also going to my Anaconda3 folder and trying to execute it inside bin. The only solution that works so far is accessing th...
Answer: I had the same issue when I installed the OpenCV library using conda. Most probably downgrading something made this happen. Just type: conda update --all

Title: Segmentation fault when I try to run Anaconda Navigator
Q_Id 43,219,679 · A_Id 47,718,983 · Created 2017-04-04T23:58:00.000 · Tags: python,ubuntu,anaconda,navigator
Q_Score 6 · Users Score 0 · Score 0 · AnswerCount 5 · Available Count 2 · ViewCount 9,783 · is_accepted: false
Topics: Python Basics and Environment; System Administration and DevOps; Data Science and Machine Learning
Question: same as above.
Answer: I had the same problem. I solved it by adding /lib to my LD_LIBRARY_PATH. Note: on my system the Anaconda installation path is /home/pushyamik/anaconda3.
Title: Generating an SIS epidemiological model using Python networkx
Q_Id 43,260,916 · A_Id 43,304,907 · Created 2017-04-06T16:32:00.000 · Tags: python,graph,graph-theory,networkx
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 1,096 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have been told the networkx library in Python is the standard library for graph-theoretical applications, but I have found using it quite frustrating so far. What I want to do is this: generate an SIS epidemiological network, assign initial contact rates and recovery rates, and then follow the progress of ...
Answer: Do you use networkx for calculation or visualization? There is no need to use it for calculation, since your model is simple and it is easier to compute with matrix (vector) operations, which is what numpy is suited for. The main part of a step is calculating the probability of switching from 0 to 1. Let N be a vector that for e...
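
The vectorised approach the answer gestures at might look like the following sketch; the contact graph, rates, and update rule are illustrative assumptions of mine, not taken from the truncated answer:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    A = (rng.random((n, n)) < 0.05).astype(float)   # illustrative contact graph
    state = (rng.random(n) < 0.1).astype(int)       # 1 = infected, 0 = susceptible
    beta, gamma = 0.3, 0.1                          # infection / recovery rates

    def sis_step(state):
        pressure = A @ state                        # infected-neighbour count per node
        p_infect = 1.0 - (1.0 - beta) ** pressure   # P(0 -> 1) for susceptible nodes
        u = rng.random(n)
        infect = (state == 0) & (u < p_infect)
        recover = (state == 1) & (u < gamma)
        return state + infect.astype(int) - recover.astype(int)

    for _ in range(50):
        state = sis_step(state)
    print("infected fraction:", state.mean())
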
Title: Is there a way to do inverse z-transforms in Python?
Q_Id 43,264,959 · A_Id 65,609,395 · Created 2017-04-06T20:14:00.000 · Tags: python,inverse-transform
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 1 · ViewCount 4,204 · is_accepted: false
Topics: Data Science and Machine Learning
Question: Is there a way to do inverse z-transforms in Python? (I don't see anything like this in NumPy or SciPy.)
Answer: You can use scipy.signal.residuez for the z^-n form of the z-transform, or scipy.signal.residue for the z^n form.
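
A minimal check of scipy.signal.residuez on a transfer function whose inverse z-transform is known in closed form (the example system is mine):

    import numpy as np
    from scipy import signal

    # H(z) = 1 / (1 - 0.5 z^-1)  =>  x[n] = 0.5**n for n >= 0
    b, a = [1.0], [1.0, -0.5]
    r, p, k = signal.residuez(b, a)   # partial-fraction expansion in powers of z^-1

    n = np.arange(8)
    x = sum(ri * pi**n for ri, pi in zip(r, p)).real
    print(np.allclose(x, 0.5**n))     # True
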
Title: Parallel training with CNTK and numpy interop
Q_Id 43,275,431 · A_Id 43,362,812 · Created 2017-04-07T10:10:00.000 · Tags: python,cntk
Q_Score 0 · Users Score 1 · Score 1.2 · AnswerCount 2 · Available Count 1 · ViewCount 194 · is_accepted: true
Topics: Data Science and Machine Learning
Question: I'm training an autoencoder network which needs to read in three images per training sample (one input RGB image, two output RGB images). It was easy to make this work with Python and numpy interop, reading the image files in myself. How can I enable parallel/distributed training with this? Do I have to use the trai...
Answer: There are the following options: 1) use a distributed learner + training session: then you need to either use ImageDeserializer or implement your own MinibatchSource (this extensibility is only available starting with RC2); 2) use a distributed learner + write the training loop yourself. In that case you have to take care of spli...
Title: Pythonic array indexing with boolean masking array
Q_Id 43,287,990 · A_Id 43,288,022 · Created 2017-04-07T22:31:00.000 · Tags: python,numpy
Q_Score 0 · Users Score 1 · Score 0.197375 · AnswerCount 1 · Available Count 1 · ViewCount 905 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I get a PEP8 complaint about numpy.where(mask == False), where mask is a boolean array. The PEP8 recommendation is that the comparison should be either 'if condition is false' or 'if not condition'. What is the pythonic syntax for the suggested comparison inside numpy.where()?
Answer: Negating a boolean mask array in NumPy is ~mask. Also, consider whether you actually need where at all. Seemingly the most common use is some_array[np.where(some_mask)], but that's just an unnecessarily wordy and inefficient way to write some_array[some_mask].
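
A short sketch of both points in the answer: ~ negates a boolean mask, and direct boolean indexing makes np.where redundant (array contents are illustrative):

    import numpy as np

    arr = np.array([10, 20, 30, 40])
    mask = np.array([True, False, True, False])

    print(arr[~mask])   # [20 40] -- elementwise negation instead of mask == False
    print(arr[mask])    # [10 30] -- no np.where needed
    print(np.array_equal(arr[np.where(mask)], arr[mask]))  # True
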
Title: Spark and Cassandra through Python
Q_Id 43,310,597 · A_Id 43,311,283 · Created 2017-04-09T18:51:00.000 · Tags: python,apache-spark,cassandra,pyspark
Q_Score 3 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 1,860 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have huge data stored in Cassandra and I want to process it using Spark through Python. I just want to know how to interconnect Spark and Cassandra through Python. I have seen people using sc.cassandraTable, but it isn't working, and fetching all the data at once from Cassandra and then feeding it to Spark doesn't make ...
Answer: I'll just give my "short" 2 cents. The official docs are totally fine for you to get started. You might want to specify why this isn't working, i.e. did you run out of memory (perhaps you just need to increase the "driver" memory), or is there some specific error causing your example not to work? Also it would b...

Title: Python ImportError - NUMPY MKL
Q_Id 43,317,119 · A_Id 45,893,216 · Created 2017-04-10T07:25:00.000 · Tags: python,python-import,importerror
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 2 · ViewCount 5,133 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I keep getting this error whenever I try running this code. If I can get some insight into what is going on, that would be a great help, since I'm pretty new to this coding environment. The code is this: File "C:\Users\user\Desktop\Python\pythonsimulation.py", line 6, in from sc...
Answer: Actually, there is no problem. Just restart Jupyter and you will see it working well.

Title: Python ImportError - NUMPY MKL
Q_Id 43,317,119 · A_Id 45,571,159 · Created 2017-04-10T07:25:00.000 · Tags: python,python-import,importerror
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 2 · ViewCount 5,133 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: If you are using Jupyter, try restarting the kernel: click Restart in the Kernel menu.
Title: How to simply kill a python-tensorflow process and release memory?
Q_Id 43,324,788 · A_Id 50,902,599 · Created 2017-04-10T13:39:00.000 · Tags: python,tensorflow
Q_Score 3 · Users Score 2 · Score 0.099668 · AnswerCount 4 · Available Count 1 · ViewCount 6,349 · is_accepted: false
Topics: Data Science and Machine Learning
Question: Whenever I run a Python script that uses TensorFlow and for some reason decide to kill it before it finishes, there is the problem that Ctrl-C doesn't work. I would use Ctrl-Z, but it doesn't release the GPU memory, so when I try to re-run the script there is no memory left. Is there a solution for this in Linux?
Answer: Don't run this on your desktop, but for HPC/remote machines with no display, this kills all leftover GPU-using processes: nvidia-smi -q -d PIDS | grep -P "Process ID +: [0-9]+" | grep -Po "[0-9]+" | xargs kill -9

Title: How to train an SVM classifier on a satellite image using Python
Q_Id 43,331,510 · A_Id 43,386,604 · Created 2017-04-10T19:29:00.000 · Tags: python,machine-learning,scikit-learn,svm,k-means
Q_Score 3 · Users Score 1 · Score 0.099668 · AnswerCount 2 · Available Count 1 · ViewCount 6,775 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I am using the scikit-learn library to perform a supervised classification (Support Vector Machine classifier) on a satellite image. My main issue is how to train my SVM classifier. I have watched many videos on YouTube and have read a few tutorials on how to train an SVM model in scikit-learn. All the tutorials I have wat...
Answer: My solution: Manual processing: if your dataset is small, you can manually create vector data (also reliable when created by yourself); if not, it is much more difficult to apply SVM to classify the images. Automatic processing: Step 1: you can use an "unsupervised image clustering" technique to gro...
Title: python: create a variable with different dimension sizes
Q_Id 43,337,694 · A_Id 43,337,961 · Created 2017-04-11T05:50:00.000 · Tags: python,variables,dimensions
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 4 · Available Count 2 · ViewCount 792 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I want to create a variable D by combining two other variables, x and y. x has the shape [731] and y has the shape [146]. At the end D should be 2D, so that D[0] contains all x-values and D[1] all y-values. I hope I explained it in a way someone can understand what I want to do. Can someone help me with this?
Answer: It is not possible to make arrays with different sizes as (I understood) you want to, because a 2D array is basically a table with rows and columns, and each row has the same number of columns no matter what. But you can join the values in each variable and save the resulting strings in the array, and to use t...

Title: python: create a variable with different dimension sizes
Q_Id 43,337,694 · A_Id 43,337,785 · Created 2017-04-11T05:50:00.000 · Tags: python,variables,dimensions
Q_Score 0 · Users Score 2 · Score 1.2 · AnswerCount 4 · Available Count 2 · ViewCount 792 · is_accepted: true
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: It is as simple as: D = [x, y]. Hope it helped :)
Title: How YOLO calculates P(Object) in YOLO 9000
Q_Id 43,342,433 · A_Id 45,476,167 · Created 2017-04-11T09:48:00.000 · Tags: python,tensorflow,object-detection,yolo
Q_Score 8 · Users Score 0 · Score 0 · AnswerCount 4 · Available Count 3 · ViewCount 1,673 · is_accepted: false
Topics: Data Science and Machine Learning
Question: Currently I am testing the YOLO 9000 model for object detection, and from the paper I understand that the image is split into 13x13 boxes, and in each box we calculate P(Object). But how can we calculate that? How can the model know whether there is an object in a box or not? Please, I need help to understand that. I am us...
Answer: There are 13x13 grid cells, true, but P(object) is calculated for each of the 5x13x13 anchor boxes. From the YOLO9000 paper: "When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box." I can't comment yet because I'm...

Title: How YOLO calculates P(Object) in YOLO 9000
Q_Id 43,342,433 · A_Id 59,667,414 · Created 2017-04-11T09:48:00.000 · Tags: python,tensorflow,object-detection,yolo
Q_Score 8 · Users Score 0 · Score 0 · AnswerCount 4 · Available Count 3 · ViewCount 1,673 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: At test time, the YOLO network gets the IoU from the default value, which is 0.5.

Title: How YOLO calculates P(Object) in YOLO 9000
Q_Id 43,342,433 · A_Id 44,433,597 · Created 2017-04-11T09:48:00.000 · Tags: python,tensorflow,object-detection,yolo
Q_Score 8 · Users Score 4 · Score 0.197375 · AnswerCount 4 · Available Count 3 · ViewCount 1,673 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: They train for the confidence score = P(object) * IoU. For the ground-truth box they take P(object) = 1, and for the rest of the grid pixels the ground-truth P(object) is zero. You are training your network to tell you whether there is some object at that grid location, i.e. output 0 if there is no object, output the IoU if there is a partial object, and output 1 ...

Title: Feature extraction in Keras
Q_Id 43,343,868 · A_Id 43,345,587 · Created 2017-04-11T10:52:00.000 · Tags: python,image-processing,computer-vision,deep-learning,keras
Q_Score 4 · Users Score 2 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 1,958 · is_accepted: true
Topics: Data Science and Machine Learning
Question: I am using pretrained resnet50 and inception v3 networks to extract features from my images, which I then use with my ML algorithm. Which layers are recommended for feature extraction? I am currently using "mixed10" in Inception v3 and "avg_pool" in resnet50. The features are modelling well in XGBoost, though. Thank you.
Answer: There are no general rules on how to choose the layer for feature extraction, but you might use an easy rule of thumb: the deeper you go into the network, the fewer ImageNet-specific semantic features you will have; but at the same time, you are also getting fewer semantic features. What I would do is use the pool layer...
Title: Best Way to create a bounding box for object detection
Q_Id 43,345,925 · A_Id 60,558,628 · Created 2017-04-11T12:27:00.000 · Tags: python,tensorflow,deep-learning,object-detection
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 3 · ViewCount 2,423 · is_accepted: false
Topics: Data Science and Machine Learning
Question: Currently I am working to create a deep neural network for object detection, and I am also creating my own dataset, using bounding boxes to annotate my images. My question is: what are the rules for making the best bounding box for training my images? I mean, if I wrap my object, is it good to limit the background o...
Answer: There is no rule as such, but the best practice for proper annotation is to keep a certain pixel width while creating bounding boxes. The background changes are the variations in the object that will make it robust, but keep in mind to have enough samples to properly recognize the patterns in the object (edges, ...

Title: Best Way to create a bounding box for object detection
Q_Id 43,345,925 · A_Id 53,543,602 · Created 2017-04-11T12:27:00.000 · Tags: python,tensorflow,deep-learning,object-detection
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 3 · ViewCount 2,423 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: You can reference the YOLO algorithm; this is the best algorithm for object detection. First, the input image is divided into an SxS grid of cells; YOLO predicts 5 bounding boxes for each cell, and for each bounding box YOLO also predicts the box's center coordinates, width, height, and a confidence score for containing any ob...

Title: Best Way to create a bounding box for object detection
Q_Id 43,345,925 · A_Id 43,346,462 · Created 2017-04-11T12:27:00.000 · Tags: python,tensorflow,deep-learning,object-detection
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 3 · ViewCount 2,423 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: I am not specialized in bounding boxes, but in general in deep learning we try to obtain a network that is robust against irrelevant variables, in your case the background. The bounding box should not be dependent on the background, so set your bounding box the way you want; the network should learn how to rep...
Title: R, Python and pyRserve - multi-threaded examples?
Q_Id 43,351,701 · A_Id 43,352,367 · Created 2017-04-11T16:39:00.000 · Tags: python,r,multithreading,rserve,pyrserve
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 290 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a Python script set up where it instantiates Rserve, sets up a few R scripts and functions, and then runs some data against the functions. However, I have been unable to create a multi-threaded instance of this same process. My core issue is that one thread always seems to dominate the processing, and all of the o...
Answer: I kept on working with the code and it turns out that each thread needs its own port in order to work. I didn't find that documented anywhere; I was just trying out different ideas. So: I set up as many instances of Rserve as I wanted threads, each instance on its own port. In my Python code, when I ins...

Title: how to import cv2 and numpy in Choregraphe for NAO robot?
Q_Id 43,351,742 · A_Id 43,369,916 · Created 2017-04-11T16:42:00.000 · Tags: python,opencv,numpy,nao-robot,choregraphe
Q_Score 1 · Users Score 2 · Score 0.197375 · AnswerCount 2 · Available Count 1 · ViewCount 1,416 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I'm doing a project that requires cv2 and numpy in one of the scripts using Choregraphe, but I get an error: No module named cv2/numpy. I think it is because Choregraphe has its own Python interpreter, but I do not know how to install cv2 and numpy into Choregraphe's Python. How can I do it?
Answer: It depends on whether you're using a real NAO or a simulated one. Simulated: Choregraphe uses its own embedded Python interpreter, so even if you add a library to your system it won't change anything. Real NAO: the system Python interpreter is used, so you need to install those libraries on your robot (and not on the computer running...
Title: export geometric stiffness matrix in abaqus software
Q_Id 43,360,854 · A_Id 43,388,797 · Created 2017-04-12T05:18:00.000 · Tags: python,abaqus
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 2 · ViewCount 797 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I need to export the geometric (stress or differential) stiffness matrix of a linear buckling problem from Abaqus, for use in a MATLAB program. I found a way to export the usual stiffness matrix, but I can't find anything about exporting the geometric stiffness matrix from an Abaqus script. Can someone help me with this? For example, when I ...
Answer: Abaqus doesn't output stiffness matrices. If you already know Abaqus and only want to do buckling, you might want to try CalculiX. I think it can output a stiffness matrix, and it uses Abaqus-style input files.

Title: export geometric stiffness matrix in abaqus software
Q_Id 43,360,854 · A_Id 43,375,546 · Created 2017-04-12T05:18:00.000 · Tags: python,abaqus
Q_Score 0 · Users Score 1 · Score 0.066568 · AnswerCount 3 · Available Count 2 · ViewCount 797 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: Let's discuss a little background on geometrically nonlinear FEA. There are many different methods used for nonlinear geometric analysis of structures. The most well-known and established ones are [1]: Total Lagrangian (TL) and Updated Lagrangian (UL). TL uses the full nonlinear definition of strains; UL uses the linear...
Title: How to calculate the distance in meaning of two words in Python
Q_Id 43,382,857 · A_Id 43,383,011 · Created 2017-04-13T03:04:00.000 · Tags: python,nlp,nltk
Q_Score 5 · Users Score 2 · Score 0.132549 · AnswerCount 3 · Available Count 1 · ViewCount 2,056 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I am wondering if it's possible to calculate the distance/similarity between two related words in Python (like "fraud" and "steal"). These two words are not synonymous per se, but they are clearly related. Are there any concepts/algorithms in NLP that can show this relationship numerically? Maybe via NLTK? I'm not looki...
Answer: My suggestion is as follows: put each word through the same thesaurus to get a list of synonyms; get the size of the set of shared synonyms for the two words. That is a measure of similarity between the words. If you would like to do a more thorough analysis, also get the antonyms for each of the two words. Get th...
Title: Reshape array in keras
Q_Id 43,403,560 · A_Id 43,403,593 · Created 2017-04-14T00:17:00.000 · Tags: python,keras,reshape
Q_Score 1 · Users Score 1 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 599 · is_accepted: true
Topics: Data Science and Machine Learning
Question: Is it possible in keras to Reshape an array by specifying only one of the two dimensions, such that the last dimension fits accordingly? In my case I have (30,1,2080) and I want to reshape it to (15,)
Answer: Just set the last dimension to -1: Reshape((15, -1))

Title: tf.image.pad_to_bounding_box VS tf.pad and tf.image.crop_to_bounding_box VS tf.slice
Q_Id 43,411,738 · A_Id 43,416,019 · Created 2017-04-14T12:42:00.000 · Tags: python,tensorflow
Q_Score 1 · Users Score 3 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 825 · is_accepted: true
Topics: Data Science and Machine Learning
Question: I'd like to understand why the two functions tf.image.crop_to_bounding_box and tf.image.pad_to_bounding_box exist, since their behaviour can be achieved quite simply with tf.slice and tf.pad respectively. They are not that much easier to understand, and their scope is narrow since they accept only...
Answer: Mostly you use the tf.image.* versions for ease of use. Both crop_to_bounding_box and pad_to_bounding_box use slice and pad underneath, but also add checks and constraints to make sure you don't spend hours trying to debug your slice/pad indices and offsets.
Title: Performance of pyomo to generate a model with a huge number of constraints
Q_Id 43,413,067 · A_Id 43,420,032 · Created 2017-04-14T13:59:00.000 · Tags: python,mathematical-optimization,pyomo
Q_Score 10 · Users Score 6 · Score 1 · AnswerCount 2 · Available Count 1 · ViewCount 4,313 · is_accepted: false
Topics: Other; Data Science and Machine Learning
Question: I am interested in the performance of Pyomo for generating an OR model with a huge number of constraints and variables (about 10e6). I currently use GAMS to launch the optimizations, but I would like to use the various Python features and therefore use Pyomo to generate the model. I made some tests, and apparently wh...
Answer: While you can use NumPy data when creating Pyomo constraints, you cannot currently create blocks of constraints in a single NumPy-style command with Pyomo. For what it's worth, I don't believe you can in languages like AMPL or GAMS, either. While Pyomo may eventually support users defining constraints using matr...

Title: Undersampling vs class_weight in ScikitLearn Random Forests
Q_Id 43,414,689 · A_Id 43,507,382 · Created 2017-04-14T15:33:00.000 · Tags: python,scikit-learn,random-forest
Q_Score 3 · Users Score 0 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 900 · is_accepted: true
Topics: Data Science and Machine Learning
Question: I am applying scikit-learn's random forests on an extremely unbalanced dataset (ratio of 1:10,000). I can use the class_weight='balanced' parameter. I have read it is equivalent to undersampling. However, this method seems to apply weights to samples and does not change the actual number of samples. Because each tree of t...
Answer: I think you can split the majority class into chunks of about 10,000 samples and train the same model using each chunk plus the same points of the minority class.
Title: how to install tensorflow on anaconda python 3.6
Q_Id 43,419,795 · A_Id 53,412,662 · Created 2017-04-14T22:01:00.000 · Tags: python,tensorflow,installation,python-wheel
Q_Score 26 · Users Score -1 · Score -0.015383 · AnswerCount 13 · Available Count 4 · ViewCount 191,454 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I installed the new Python 3.6 with the Anaconda package. However, I am not able to install TensorFlow; I always receive the error that tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. How can I install TensorFlow on Anaconda (Python 3.6)?
Answer: Run conda create -n tensorflow_gpuenv tensorflow-gpu, or type the command pip install c:.*.whl in the command prompt (cmd).

Title: how to install tensorflow on anaconda python 3.6
Q_Id 43,419,795 · A_Id 53,520,347 · Created 2017-04-14T22:01:00.000 · Tags: python,tensorflow,installation,python-wheel
Q_Score 26 · Users Score 0 · Score 0 · AnswerCount 13 · Available Count 4 · ViewCount 191,454 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: Uninstall Python 3.7 for Windows and install only Python 3.6.0; then you will have no problem, nor receive the error message "import tensorflow as tf: ModuleNotFoundError: No module named 'tensorflow'".

Title: how to install tensorflow on anaconda python 3.6
Q_Id 43,419,795 · A_Id 59,495,885 · Created 2017-04-14T22:01:00.000 · Tags: python,tensorflow,installation,python-wheel
Q_Score 26 · Users Score 1 · Score 0.015383 · AnswerCount 13 · Available Count 4 · ViewCount 191,454 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: Well, conda install tensorflow worked perfectly for me!

Title: how to install tensorflow on anaconda python 3.6
Q_Id 43,419,795 · A_Id 45,116,048 · Created 2017-04-14T22:01:00.000 · Tags: python,tensorflow,installation,python-wheel
Q_Score 26 · Users Score -1 · Score -0.015383 · AnswerCount 13 · Available Count 4 · ViewCount 191,454 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: For Windows 10 with Anaconda 4.4 and Python 3.6: step 1) conda create -n tensorflow python=3.6; step 2) activate tensorflow; step 3) pip3 install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.2.1-cp36-cp36m-win_amd64.whl
Title: Hdf5 and spatial indexes
Q_Id 43,421,508 · A_Id 43,423,664 · Created 2017-04-15T02:20:00.000 · Tags: python,pandas,quadtree,r-tree
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 245 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a large dataset, 11 million rows, and I loaded the data into pandas. I then want to build a spatial index, like an r-tree or quadtree, but when I push it into memory it consumes a ton of RAM on top of reading the large file in the first place. To help reduce the memory footprint, I was thinking of trying to push the ...
Answer: Yes, r-trees can be stored on disk easily (it's much harder with k-d trees and quadtrees). That is why the index is block-oriented: the block size is meant to be chosen to match your drive. I don't use pandas and will not give a library recommendation.

Title: Is it recommended to use TensorFlow under Ubuntu or under Windows?
Q_Id 43,425,621 · A_Id 43,425,655 · Created 2017-04-15T11:43:00.000 · Tags: python,ubuntu,tensorflow,deep-learning
Q_Score 2 · Users Score 2 · Score 1.2 · AnswerCount 2 · Available Count 1 · ViewCount 8,589 · is_accepted: true
Topics: Python Basics and Environment; System Administration and DevOps; Data Science and Machine Learning
Question: I am a newbie to TensorFlow (and to deep learning as a whole). I have a machine with dual boot, Windows 10 and Ubuntu 16. Under which OS should I install and run TensorFlow, Windows or Ubuntu? Also, what is the recommended Python environment, Anaconda or native pip?
Answer: I think it's easier for you to use Ubuntu if you have the possibility. Getting the lapack and blas libraries from source is easier on Linux (you can get precompiled packages for Windows, though). I prefer native pip, but on Windows, and for getting started, Anaconda should be the choice.
Title: Curve fitting a sum of Gaussians to 6 peaks
Q_Id 43,427,707 · A_Id 43,436,325 · Created 2017-04-15T15:15:00.000 · Tags: python,curve-fitting,gaussian
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 523 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I was wondering if anyone could show me a way of fitting multiple Gaussian curves to a data set containing 6 peaks (the data comes from a diffraction pattern of a copper-gold alloy crystal). The way I have at the moment involves adding multiple Gaussian equations together, meaning I have to give multiple guesses of va...
Answer: I have successfully used a genetic algorithm to search error space and find initial parameter estimates for scipy's Levenberg-Marquardt nonlinear solver. The most recent versions of scipy now include the Differential Evolution genetic algorithm, which is what I had been using. It will take a bit of experimenting on a ...

Title: Chaotic permutations
Q_Id 43,432,439 · A_Id 43,432,668 · Created 2017-04-16T00:09:00.000 · Tags: python,algorithm
Q_Score 2 · Users Score 1 · Score 1.2 · AnswerCount 2 · Available Count 1 · ViewCount 522 · is_accepted: true
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: Is there any algorithm which provides a chaotic permutation of a given list in reasonable time? I'm using Python, but I am concerned that the built-in shuffle function will not provide a good solution, due to the length of 1.1 million elements in the given list. I did some googling and did not find any useful results; woul...
Answer: Use shuffle. It is fast enough and adequately chaotic for any practical purpose.
Title: Convert image file to float array in Python
Q_Id 43,438,653 · A_Id 43,438,949 · Created 2017-04-16T15:10:00.000 · Tags: python
Q_Score 1 · Users Score 2 · Score 0.197375 · AnswerCount 2 · Available Count 1 · ViewCount 27,136 · is_accepted: false
Topics: Data Science and Machine Learning
Question: How can I convert an image to an array of float numbers? img = cv2.imread('img.png'), and now convert img to float, so that print(img[0,0]) gives something like [4.0 2.0 0.0] instead of [4 2 0]. Do you have an idea? Thank you very much!
Answer: You can convert a list of integers to a list of floats with a list comprehension: [float(i) for i in values]. Another option is to convert the img variable, a numpy.ndarray, to another numpy.ndarray that contains float values: img = img.astype(float). After this assignment the result will contain float values.
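
A tiny sketch of the astype conversion from the answer; the array stands in for a cv2.imread result so the snippet runs without OpenCV:

    import numpy as np

    img = np.array([[[4, 2, 0]]], dtype=np.uint8)  # stand-in for cv2.imread output

    img_f = img.astype(float)   # same shape, dtype float64
    print(img_f[0, 0])          # [4. 2. 0.]
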
Title: The real difference between float32 and float64
Q_Id 43,440,821 · A_Id 72,350,771 · Created 2017-04-16T18:48:00.000 · Tags: python,numpy,types
Q_Score 34 · Users Score 1 · Score 0.066568 · AnswerCount 3 · Available Count 2 · ViewCount 53,683 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I want to understand the actual difference between float16 and float32 in terms of result precision. For instance, NumPy allows you to choose the range of the datatype you want (np.float16, np.float32, np.float64). My concern is that if I decide to go with float16 to conserve memory and avoid possible overflow, wou...
Answer: float32 is less accurate but faster than float64, and float64 is more accurate than float32 but consumes more memory. If accuracy is more important than speed, use float64; if speed is more important than accuracy, use float32.

Title: The real difference between float32 and float64
Q_Id 43,440,821 · A_Id 52,804,163 · Created 2017-04-16T18:48:00.000 · Tags: python,numpy,types
Q_Score 34 · Users Score 37 · Score 1 · AnswerCount 3 · Available Count 2 · ViewCount 53,683 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: float32 is a 32-bit number; float64 uses 64 bits. That means float64s take up twice as much memory, and doing operations on them may be a lot slower on some machine architectures. However, float64s can represent numbers much more accurately than 32-bit floats. They also allow much larger numbers to be stored. ...
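
The precision gap is easy to see numerically; a small sketch comparing the two dtypes on the kind of value discussed earlier in this listing:

    import numpy as np

    x = 34.98774564765
    print(np.float32(x))              # ~34.987747 (about 7 significant digits)
    print(np.float64(x))              # 34.98774564765 (about 15-16 significant digits)

    print(np.finfo(np.float32).eps)   # ~1.19e-07
    print(np.finfo(np.float64).eps)   # ~2.22e-16
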
Title: How can I generate output predictions in Tensorflow just like model.predict in Keras?
Q_Id 43,441,827 · A_Id 43,813,452 · Created 2017-04-16T20:30:00.000 · Tags: python,numpy,tensorflow,ipython,keras
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 288 · is_accepted: false
Topics: Data Science and Machine Learning
Question: Keras has model.predict, which generates output predictions for the input samples. I am looking for this in TensorFlow but cannot seem to find it or code it up.
Answer: You can use tf.contrib.keras to get the same behavior.

Title: imread octave to imread cv2
Q_Id 43,459,032 · A_Id 43,459,094 · Created 2017-04-17T20:24:00.000 · Tags: python,numpy,image-processing,octave
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 201 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have to translate code from Octave to Python. Among many things, the program does something like this: load_image = imread('image.bmp'), which as you can see is a bitmap. If I do size(load_image) it prints (1200,1600,3), which is OK, but when I do load_image it prints a one-dimensional array that does not ...
Answer: What you have is a 3D array in Octave. Here the x-dimension seems to hold the RGB values for each pixel, and the y and z dimensions are the rows and columns respectively. However, when you print it you see all the values in the array, and hence it looks like a 1D array.
Title: how to combine 2D arrays into a 3D array in python?
Q_Id 43,466,644 · A_Id 43,467,467 · Created 2017-04-18T07:54:00.000 · Tags: python,arrays
Q_Score 6 · Users Score -2 · Score -0.099668 · AnswerCount 4 · Available Count 1 · ViewCount 8,835 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have a project in which a for loop runs about 14 times. In every iteration, a 2D array is created with the shape (4,3). I would like to concatenate those 2D arrays into one 3D array (with the shape 4,3,14) so that every 2D array is in a different "layer". How should that be implemented in Python?
Answer: If your arrays are plain lists, simply define an empty list at the beginning and append each item into it: foo = []; for i in range(14): ...; foo.append(tab)
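
Besides the plain-list approach in the answer, a numpy-based alternative (my own sketch, with placeholder arrays) stacks the fourteen (4, 3) arrays along a third axis:

    import numpy as np

    layers = [np.full((4, 3), i) for i in range(14)]  # stand-ins for the loop's arrays

    cube = np.stack(layers, axis=2)   # shape (4, 3, 14); layer i is cube[:, :, i]
    print(cube.shape)                                # (4, 3, 14)
    print(np.array_equal(cube[:, :, 5], layers[5]))  # True
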
Title: Python memory use for DataFrames in python script and in Anaconda Spyder
Q_Id 43,473,556 · A_Id 43,482,677 · Created 2017-04-18T13:25:00.000 · Tags: python,memory-management,anaconda,spyder
Q_Score 0 · Users Score 0 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 844 · is_accepted: true
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: Python 101 question. A pandas dataframe is created as: df1 = pandas.DataFrame(data, index=index, columns=columns) # takes up, say, 100 MB of memory. Now df2 = df1: will memory usage be doubled? What is the effect in a script called something.py executed as python something.py? Is memory unloaded after completion of executio...
Answer: "What is the effect in a script called something.py and executed as python something.py? Memory is unloaded after completion of execution. Please confirm." Yes, I think memory is freed after execution. "What is the effect when I run something.py in, say, Anaconda Spyder?" The Spyder memory will not be unloaded unless I dis...
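
A quick sketch of the aliasing point behind the question: df2 = df1 binds a second name to the same object, so memory is not doubled; only .copy() allocates a new frame:

    import pandas as pd

    df1 = pd.DataFrame({"a": range(3)})
    df2 = df1                 # same object, no copy

    print(df2 is df1)         # True
    df2.loc[0, "a"] = 99
    print(df1.loc[0, "a"])    # 99 -- the change is visible through df1

    df3 = df1.copy()          # an independent copy is made only on request
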
Title: How to manage complexity while using IPython notebooks?
Q_Id 43,478,908 · A_Id 43,481,217 · Created 2017-04-18T17:43:00.000 · Tags: python,ipython-notebook,code-organization,project-organization
Q_Score 1 · Users Score 1 · Score 0.197375 · AnswerCount 1 · Available Count 1 · ViewCount 161 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: Imagine that you are working with a large dataset, distributed over a bunch of CSV files. You open an IPython notebook and explore stuff, do some transformations, reorder and clean up data. Then you start doing some experiments with the data, create some more notebooks, and in the end find yourself heaped up with a bunch of...
Answer: Well, I have this problem now and then when working with a big set of data. Complexity is something I learned to live with; sometimes it's hard to keep things simple. What I think helps me a lot is putting everything in a Git repository: if you manage it well and make frequent commits with well-written messages, you can ...

Title: Combining effects of dummy variables in a regression model
Q_Id 43,483,717 · A_Id 48,334,045 · Created 2017-04-18T23:14:00.000 · Tags: python,sas,regression,random-forest,dummy-variable
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 284 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I am building a regression model with about 300 features using Python scikit-learn. One of the features has over 100 categories, and I end up with ~100 dummy columns for this feature. Now each of the dummy columns has its own coefficient, or a feature-ranking score (if using Random Forest or xgb), which is something I don't ...
Answer: You may Google the SAS/STAT manual / user guide. Check out any major regression procedure there that supports the CLASS statement; underneath CLASS it details the Reference... option. They all detail how a design matrix is fashioned. The way you fed your 100 dummies must have been obvious enough to trigger JMP to roll back ...
Title: How to show only years on x axis of pandas histogram?
Q_Id 43,485,972 · A_Id 43,486,149 · Created 2017-04-19T03:45:00.000 · Tags: python,pandas
Q_Score 0 · Users Score 2 · Score 0.379949 · AnswerCount 1 · Available Count 1 · ViewCount 220 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a pandas histogram that shows the frequency with which specific years show up in a dataframe. The x-axis contains 2006.0, 2006.5, 2007.0, 2007.5, etc. However, I want my histogram x-axis to only have 2006, 2007, etc. This will make my histogram clearer, especially since in my df I only have values for particular years...
Answer: There are two ways to do it: 1) if you just want to change the display format, use the {:,.0f} format to explicitly display (rounded) floating-point values with no decimals: pd.options.display.float_format = '{:,.0f}'.format; 2) if you want to convert it to int: df.col = df.col.astype(int)
Title: anaconda cannot import matplotlib.pyplot
Q_Id 43,501,102 · A_Id 57,392,720 · Created 2017-04-19T16:19:00.000 · Tags: python
Q_Score 9 · Users Score 2 · Score 0.057081 · AnswerCount 7 · Available Count 2 · ViewCount 25,113 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I am getting this error when I am trying to import "matplotlib.pyplot". I can't even install matplotlib.pyplot through conda install. It shows this: import matplotlib.pyplot / Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'matplotlib.pyplot'
Answer: Just open the Anaconda prompt and use either of the commands below to install the package; this solved my issue: conda install -c plotly chart-studio or conda install -c plotly/label/test chart-studio

Title: anaconda cannot import matplotlib.pyplot
Q_Id 43,501,102 · A_Id 70,727,396 · Created 2017-04-19T16:19:00.000 · Tags: python
Q_Score 9 · Users Score 0 · Score 0 · AnswerCount 7 · Available Count 2 · ViewCount 25,113 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: Check if the ...../python3.x/site-packages is listed within sys.path. If not, append it with sys.path.append('.....python3.8/site-packages')

Title: ValueError: Clustering algorithm could not initialize. Consider assigning the initial clusters manually
Q_Id 43,513,662 · A_Id 62,819,510 · Created 2017-04-20T08:04:00.000 · Tags: algorithm,python-3.x
Q_Score 1 · Users Score 1 · Score 0.197375 · AnswerCount 1 · Available Count 1 · ViewCount 2,181 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have data containing a mixture of numeric values and categorical values. I used K-prototypes to cluster them: init = 'Huang'; n_clusters = 50; max_iter = 100; kproto = kprototypes.KPrototypes(n_clusters=n_clusters, init=init, n_init=5, verbose=verbose); clusters = kproto.fit_predict(data_cats_matrix, categorical=ca...
Answer: Your data might not warrant a larger number of clusters. Run the algorithm for smaller values of k and note the total cost at the end. If this stops decreasing, there is no need to increase k. This is called the elbow method; you can look it up.
Title: Integer overflow in universal hashing implementation
Q_Id 43,531,607 · A_Id 43,531,728 · Created 2017-04-20T23:40:00.000 · Tags: python,numpy,hash
Q_Score 1 · Users Score 1 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 189 · is_accepted: true
Topics: Other; Data Science and Machine Learning
Question: For one of my pet projects I would like to create an arbitrary number of different integer hashes. After a bit of research I figured that universal hashing is the way to go. However, I struggle with the (numpy) implementation. Say I'm trying to use the hash family h(a,b)(x) = ((a * x + b) mod p) mod m, and my x can be a...
Answer: (x + y) % z == ((x % z) + (y % z)) % z, so you can take the modulus before doing the sum: cast a and x to uint64 (multiplying two uint32 values can never overflow uint64); compute h = (a * x) % p + b; return (h - p) if h > p else h (alternatively: return h % p).
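
A numpy sketch of the answer's recipe; the prime, bucket count, and parameter draws are illustrative choices of mine:

    import numpy as np

    p = np.uint64(4294967311)   # a prime just above 2**32, suits 32-bit keys
    m = np.uint64(1 << 16)      # number of buckets (illustrative)

    rng = np.random.default_rng(0)
    # Keep a and b below 2**32 so a * x (two values < 2**32) cannot overflow uint64.
    a = rng.integers(1, 1 << 32, dtype=np.uint64)
    b = rng.integers(0, 1 << 32, dtype=np.uint64)

    def h(x):
        x = x.astype(np.uint64)
        t = (a * x) % p + b     # < 2**33, still safely inside uint64
        return (t % p) % m

    keys = np.arange(10, dtype=np.uint32)
    print(h(keys))
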
Title: Python Machine Learning classifier for just one label
Q_Id 43,531,951 · A_Id 43,532,000 · Created 2017-04-21T00:21:00.000 · Tags: python,machine-learning
Q_Score 0 · Users Score 2 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 438 · is_accepted: true
Topics: Data Science and Machine Learning
Question: I'm new to machine learning and looking to run a training/testing dataset through a few classifiers, but the problem I'm having is that I only have one label for my data (Legitimate, currently set as an int, so 1 for legit, 0 for not). Ideally I'm looking for a classifier that is going to run with just one label an...
Answer: The lack of negative examples does not make this unary classification; there is no such modelling, as one class has no discrimination and therefore derives no new information from the data set. As you've pointed out, there are two classes: legitimate and not. That's binary. Use any binary classifier from your resear...

Title: Dashboard Filter Superset
Q_Id 43,543,469 · A_Id 43,544,802 · Created 2017-04-21T13:03:00.000 · Tags: python,superset
Q_Score 1 · Users Score 2 · Score 0.379949 · AnswerCount 1 · Available Count 1 · ViewCount 1,424 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a test data table (called "eu") with around 5 million observations, where each observation belongs to a country in the EU. Now I'd like to implement a country filter on my dashboard for further analysis. There already exists a filter widget which does a query on my dataset "eu" to get distinct countries. However...
Answer: I found a way to create a fast filter. A table 'fcountry' is created, where all countries (DE, FR, etc.) are stored. This table is used to create the filter widget, which is added to the dashboard. However, I'm still looking for a handy solution for EU without DE (where Country != 'DE'). At the moment I need to s...
Title: Jupyter can't find keras' module
Q_Id 43,557,881 · A_Id 43,576,690 · Created 2017-04-22T10:03:00.000 · Tags: python,anaconda,keras,jupyter-notebook
Q_Score 9 · Users Score 0 · Score 0 · AnswerCount 9 · Available Count 5 · ViewCount 27,759 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have installed TensorFlow and Keras via Anaconda (on Windows 10) and created an environment where I am using Python 3.5.2 (the original one in Anaconda was Python 3.6). When I try to execute import keras as ks, I get ModuleNotFoundError: No module named 'keras'. I have tried to solve this issue with sys.path.append...
Answer: I realized that I had two different Jupyter directories, so I manually deleted one of them. Finally, I reinstalled Anaconda. Now Keras works properly.

Title: Jupyter can't find keras' module
Q_Id 43,557,881 · A_Id 43,557,975 · Created 2017-04-22T10:03:00.000 · Tags: python,anaconda,keras,jupyter-notebook
Q_Score 9 · Users Score 1 · Score 0.022219 · AnswerCount 9 · Available Count 5 · ViewCount 27,759 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: (Not an answer, but some troubleshooting hints.) sys.path is not the path to your Python executable, but the path to the libraries. Check where Keras is installed and check your sys.path. How exactly did you install Keras? Try to execute the same command from the Python interpreter. Do you have the same issue? How did...

Title: Jupyter can't find keras' module
Q_Id 43,557,881 · A_Id 60,808,850 · Created 2017-04-22T10:03:00.000 · Tags: python,anaconda,keras,jupyter-notebook
Q_Score 9 · Users Score 0 · Score 0 · AnswerCount 9 · Available Count 5 · ViewCount 27,759 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: Here's how I solved this problem. First, the diagnosis: when I run which python in a terminal window on my Mac (the same terminal I used to launch Jupyter), I get /Users/myusername/.conda/envs/myenvname/bin/python, but when I run the same command from a terminal within Jupyter, I get /usr/bin/python. So Jupyter isn't us...

Title: Jupyter can't find keras' module
Q_Id 43,557,881 · A_Id 51,338,496 · Created 2017-04-22T10:03:00.000 · Tags: python,anaconda,keras,jupyter-notebook
Q_Score 9 · Users Score 0 · Score 0 · AnswerCount 9 · Available Count 5 · ViewCount 27,759 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: If you are a Windows/Mac user working in a Jupyter notebook and "pip install keras" doesn't help you, try the steps below; they solved it for me: 1. In the command prompt, navigate to the "site-packages" directory of your Anaconda installation. 2. Now use "conda install tensorflow" and then "conda install keras". 3. Restart ...

Title: Jupyter can't find keras' module
Q_Id 43,557,881 · A_Id 56,765,580 · Created 2017-04-22T10:03:00.000 · Tags: python,anaconda,keras,jupyter-notebook
Q_Score 9 · Users Score 0 · Score 0 · AnswerCount 9 · Available Count 5 · ViewCount 27,759 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: same as above.
Answer: Actually, I tried pip install keras, sudo -H pip3 install keras, and pip3 install keras. None of them worked. Then I ran the following command and everything worked like a charm: pip install Keras. Yes, a capital 'K'.
Title: How to iterate/loop through a large (>2GB) JSON dataset in R/Python?
Q_Id 43,563,447 · A_Id 43,563,552 · Created 2017-04-22T19:05:00.000 · Tags: python,json,r,bigdata
Q_Score 1 · Users Score 1 · Score 0.197375 · AnswerCount 1 · Available Count 1 · ViewCount 609 · is_accepted: false
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I was trying to do some exploratory analyses on a large (2.7 GB) JSON dataset using R; however, the file doesn't even load in the first place. When looking for solutions, I saw that I could process the data in smaller chunks, namely by iterating through the larger file or by down-sampling it. But I'm not really sure ho...
Answer: The jsonlite R package supports streaming your data, so there is no need to read all the JSON data into memory. See the documentation of jsonlite for more details, the stream_in function in particular. Alternatively, I would dump the JSON into a MongoDB database and process the data from there. You need to insta...

Title: AttributeError: module 'numpy' has no attribute 'core'
Q_Id 43,578,533 · A_Id 69,907,112 · Created 2017-04-24T01:41:00.000 · Tags: python,numpy,pyspark,anaconda
Q_Score 7 · Users Score 0 · Score 0 · AnswerCount 2 · Available Count 1 · ViewCount 22,785 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I was wondering if anyone has had this issue when running Spark and trying to import numpy. Numpy imports properly in a standard notebook, but when I try importing it via a notebook running Spark, I get this error. I have the most recent version of numpy and am running the most recent Anaconda Python 3.6. Thanks! --------...
Answer: Apart from upgrading and re-installing, sometimes it is caused by your pandas: it might depend on an older numpy, so you may have to upgrade or reinstall pandas if upgrading numpy alone didn't resolve the problem.
Title: Pandas Plot With Positive Values One Color And Negative Values Another
Q_Id 43,579,626 · A_Id 43,579,860 · Created 2017-04-24T04:05:00.000 · Tags: python,pandas,matplotlib,plot
Q_Score 5 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 1 · ViewCount 7,728 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a pandas dataframe where I am plotting two columns out of the 12, one as the x-axis and one as the y-axis. The x-axis is simply a time series, and the y-axis values are random integers between roughly -5000 and 5000. Is there any way to make a scatter plot using only these 2 columns where the positive values of...
Answer: Make two separate dataframes using boolean masking and the where keyword; the condition is whether the value is > 0 or not. Then plot both dataframes one by one, one on top of the other, with different color parameters.
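
A runnable sketch of the split-and-overlay idea (data and colors are illustrative; I use boolean indexing for the split):

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"t": np.arange(100),  # stand-in time axis
                       "y": rng.integers(-5000, 5000, size=100)})

    pos = df[df["y"] >= 0]   # boolean masking splits the frame in two
    neg = df[df["y"] < 0]

    ax = pos.plot.scatter(x="t", y="y", color="green", label=">= 0")
    neg.plot.scatter(x="t", y="y", color="red", label="< 0", ax=ax)
    plt.show()
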
Title: Trained Machine Learning model is too big
Q_Id 43,591,621 · A_Id 43,592,085 · Created 2017-04-24T15:04:00.000 · Tags: python,machine-learning,pickle,random-forest
Q_Score 13 · Users Score 4 · Score 1.2 · AnswerCount 2 · Available Count 1 · ViewCount 9,137 · is_accepted: true
Topics: Data Science and Machine Learning
Question: We have trained an Extra Trees model for some regression task. Our model consists of 3 extra trees, each having 200 trees of depth 30. On top of the 3 extra trees, we use a ridge regression. We trained our model for several hours and pickled the trained model (the entire class object) for later use. However, the size o...
Answer: In the best case (binary trees), you would have 3 * 200 * (2^30 - 1) = 644,245,093,800 nodes, i.e. roughly 600 GB even if each node cost only 1 byte to store. I think that 140 GB is a pretty decent size in comparison.
Title: python: one kernel density plot which includes multiple columns in a single dataframe
Q_Id 43,596,345 · A_Id 43,597,467 · Created 2017-04-24T19:34:00.000 · Tags: python,kernel-density
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 465 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I need to make a single Gaussian kernel density plot of a dataframe with multiple columns, which includes all columns of the dataframe. Does anyone know how to do this? So far I have only found how to draw a Gaussian kernel plot of a single column with seaborn: ax = sns.kdeplot(df['shop1']). However, neither ax = sns.kdeplot(...
Answer: I found a workaround: transform the dataframe's columns into one single column with df.stack().

Title: Python: linguistic normalization
Q_Id 43,611,550 · A_Id 43,612,292 · Created 2017-04-25T13:05:00.000 · Tags: python,nlp,nltk
Q_Score 0 · Users Score 2 · Score 0.197375 · AnswerCount 2 · Available Count 1 · ViewCount 593 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I'm looking for a lemmatization module/library that will transform a sentence like "this is great" into "this is good". I'm familiar with some of the tools available in nltk, such as stemming and lemmatization; however, it's not exactly what I'm looking for. My goal is to minimize the variety of ways of saying the same thing.
Answer: This is a bit more experimental, but another possibility is to use word embeddings. The words "great" and "good" should have similar occurrence contexts, so their vectors should be similar; you cluster your words like that and aggregate them into the same word/concept. Of course this will greatly depend on the corpus and ...
Title: How to extract COMPLAINT features from texts in order to classify complaints from non-complaints texts
Q_Id 43,624,308 · A_Id 43,625,960 · Created 2017-04-26T03:07:00.000 · Tags: python,nlp,classification,feature-extraction,sentiment-analysis
Q_Score 1 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 319 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a corpus of around 6000 texts with comments from social networks (FB, Twitter), news content from general and regional news and magazines, etc. I have gone through the first 300 of these texts and tagged each of them as either customer complaint or non-complaint. Instead of the naive bag-of-words...
Answer: I think you will find that bag-of-words is not so naive. It's actually a perfectly valid way of representing your data to give it to an SVM. If that's not giving you enough accuracy, you can always include bigrams, i.e. word pairs, in your feature vector instead of just unigrams.

Title: Is scipy.misc.imread safe/efficient to run from multiple threads?
Q_Id 43,624,618 · A_Id 44,355,165 · Created 2017-04-26T03:41:00.000 · Tags: python,multithreading,scipy,multiprocessing,python-imaging-library
Q_Score 2 · Users Score 1 · Score 1.2 · AnswerCount 1 · Available Count 1 · ViewCount 186 · is_accepted: true
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have set up a producer/consumer model using Python queues. In one producer I'm reading images using scipy.misc.imread. Reading images in one thread is not fast enough; it takes ~0.2 s per image, about 20 MB/sec reading from an SSD. I tried adding another identical thread using Python's threading module. Howeve...
Answer: scipy.misc.imread is safe to call from multiple threads, but each call holds the global interpreter lock, so performance won't benefit from multithreading. It works well with multiprocessing; no unexpected issues.
Title: Getting worse result using keras 2 than keras 1
Q_Id 43,637,515 · A_Id 43,637,652 · Created 2017-04-26T14:47:00.000 · Tags: python,keras,mse
Q_Score 4 · Users Score 0 · Score 0 · AnswerCount 3 · Available Count 1 · ViewCount 417 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I ran the same code (with the same data) on CPU, first using Keras 1.2.0 and then Keras 2.0.3. In both cases Keras ran with the TensorFlow backend, and I used sklearn for model selection plus pandas to read data. I was surprised to get an MSE (mean squared error) of 42 using Keras 2.0.3 and 21 using Keras 1.2.0. Can...
Answer: Probably some default values have changed since Keras 1.2. You should check the default values for your 1.2 code and set the same values in your new code.

Title: Numexpr detecting number of threads less than number of cores
Q_Id 43,641,247 · A_Id 46,969,923 · Created 2017-04-26T17:50:00.000 · Tags: python,arrays,cluster-computing,python-multithreading,numexpr
Q_Score 2 · Users Score -1 · Score -0.197375 · AnswerCount 1 · Available Count 1 · ViewCount 5,204 · is_accepted: false
Topics: Other; Data Science and Machine Learning
Question: I am using numexpr for simple array addition on a remote cluster. My computer has 8 cores and the remote cluster has 28 cores. The numexpr documentation says that "During initialization time Numexpr sets this number to the number of detected cores in the system", but the cluster gives this output: detect_number_of_cores() =...
Answer: I'm not sure how numexpr actually works internally when detect_number_of_threads is called, but maybe it reads out the number of threads available to OpenMP rather than the number that was set locally.
Title: How to add proper nouns as vocab to Spacy models?
Q_Id 43,650,119 · A_Id 48,454,088 · Created 2017-04-27T06:26:00.000 · Tags: python,nlp,named-entity-recognition,spacy
Q_Score 1 · Users Score 2 · Score 0.379949 · AnswerCount 1 · Available Count 1 · ViewCount 806 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I am using spaCy 1.8.0 with Python, and I would like to use it to analyze medical documents. There is a way of adding new entity types to spaCy's named entity recognizer. However, is it possible to add the names of medicines/drugs as proper nouns to spaCy's vocab? Or do they need to be added by training the s...
Answer: I'm not sure exactly what you want to do, but below are solutions for a few possibilities. You have a complete list of the medicines/drugs of interest, and (i) you want a special rule for tokenisation of these strings: I would not recommend this approach, but in principle you could add special cases to the Tokenizer. ...
Title: How to Combine 2 integer columns in a dataframe and keep the type as integer itself in python
Q_Id 43,651,802 · A_Id 43,651,884 · Created 2017-04-27T07:54:00.000 · Tags: python,pandas,dataframe
Q_Score 3 · Users Score 3 · Score 0.291313 · AnswerCount 2 · Available Count 1 · ViewCount 3,502 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have two columns, df[year] and df[month]. Year values range from 2000 to 2017 and month values from 1 to 12. How do I combine these into another column containing the combined output? E.g. year 2000, month 1 gives Y0M 200001; 2000, 2 gives 200002; 2000, 3 gives 200003; 2000, 10 gives 200010. Note: there is a 0 added in betw...
Answer: Maybe something like df[year] * 100 + df[month] would help.
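
A small sketch of the suggested arithmetic; it stays in integer dtype, so no string formatting is needed:

    import pandas as pd

    df = pd.DataFrame({"year": [2000, 2000, 2000, 2000],
                       "month": [1, 2, 3, 10]})

    df["Y0M"] = df["year"] * 100 + df["month"]  # 2000*100 + 1 == 200001
    print(df["Y0M"].tolist())   # [200001, 200002, 200003, 200010]
    print(df["Y0M"].dtype)      # int64
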
Title: Python py2exe executable silent crash with scipy.linalg or numpy.linalg
Q_Id 43,661,027 · A_Id 43,688,972 · Created 2017-04-27T14:45:00.000 · Tags: python,numpy,scipy,py2exe
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 189 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I've been using py2exe to package some scripts as executables, which has worked well until this error. In one script I need to solve a straightforward system of linear equations, which I've been doing with scipy.linalg.lstsq. The problem is that any script I package with any scipy.linalg or numpy.linalg command crashes a...
Answer: I managed to solve this problem and wanted to post the solution. The link posted by J.J. Hakala showed that an error with the same symptoms can occur with matplotlib, because py2exe does not transfer some necessary DLLs to the new dist directory. I was already transferring the DLLs mentioned in that post manually, ho...

Title: Feeding a queue from a list of arrays of different lengths
Q_Id 43,661,651 · A_Id 43,662,378 · Created 2017-04-27T15:12:00.000 · Tags: python,input,tensorflow,queue,pipeline
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 119 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have a list of arrays (could be numpy arrays or just lists of int) of different lengths, and I would like to feed it into a TensorFlow queue to test a larger input pipeline, instead of serializing them and re-reading from the file system. Is it possible?
Answer: I assume you're using an RNN here; for an RNN to take a variable-length array, you're going to need to pad the arrays with zeros to the batch length and pass in a value that tells the RNN how long each sequence is. I'd suggest taking the same approach here: pad them before passing them into the TF queue, and pass in another va...
Title: Fast algorithm to find the n1-th smallest number to the n2-th smallest number in an array
Q_Id 43,668,307 · A_Id 43,733,509 · Created 2017-04-27T21:35:00.000 · Tags: python,algorithm,sorting,columnsorting
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 4 · Available Count 2 · ViewCount 115 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I have an array, and I attempt to find the n1-th through n2-th smallest numbers in the array and store them in another array of size n2-n1+1. Here n2 > n1 and both are relatively small compared to the size of the array (for example, the array size is 10000, n1=5, n2=20). I can sort this array first, and the...
Answer: If n2 (and therefore n1) are both small, then you can find the n2 smallest elements and ignore the first n1 of them. These approaches are described by Arun Kumar and user448810 and will be efficient as long as n1 remains small. However, you may be describing a situation in which n1 (and therefore n2) can grow (perhaps ev...

Title: Fast algorithm to find the n1-th smallest number to the n2-th smallest number in an array
Q_Id 43,668,307 · A_Id 43,670,278 · Created 2017-04-27T21:35:00.000 · Tags: python,algorithm,sorting,columnsorting
Q_Score 0 · Users Score 1 · Score 0.049958 · AnswerCount 4 · Available Count 2 · ViewCount 115 · is_accepted: false
Topics: Data Science and Machine Learning
Question: same as above.
Answer: Since n1 and n2 are really small compared to the size of the array, say N, we can have an O(n2 * log N) implementation using a min-heap data structure. Steps: construct a min heap (complexity O(N)); loop over a range of 0 to n2, each time taking the root element and calling heapify; ignore the first n1 ele...
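
A sketch of the first answer's suggestion using the standard library (array contents are illustrative):

    import heapq
    import random

    arr = [random.randrange(10**6) for _ in range(10_000)]
    n1, n2 = 5, 20

    # heapq.nsmallest is O(N log n2); dropping the first n1-1 items leaves
    # the n1-th through n2-th smallest values.
    result = heapq.nsmallest(n2, arr)[n1 - 1:]
    print(len(result))                        # n2 - n1 + 1 == 16
    print(result == sorted(arr)[n1 - 1:n2])   # True
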
Title: How to run python code at prescribed time and store output in database
Q_Id 43,670,334 · A_Id 43,670,482 · Created 2017-04-28T01:07:00.000 · Tags: python,database-design,web-scraping
Q_Score 0 · Users Score 0 · Score 0 · AnswerCount 1 · Available Count 1 · ViewCount 73 · is_accepted: false
Topics: Database and SQL; Data Science and Machine Learning
Question: I have written a piece of Python code that scrapes the odds of horse races from a bookmaker's site. I now wish to: run the code at prescribed, increasingly frequent times as the race draws closer; and store the scraped data in a database fit for extraction and statistical analysis in R. Apologies if the question is poorly...
Answer: On Windows you can use Task Scheduler, on Linux crontab. You can configure these to run Python with your script at set intervals of time. This way you don't have a Python script continuously running, which prevents a hang-up in a single call from impacting all subsequent attempts to scrape or store in the database. To stor...

Title: How to build a machine learning system (smart system) that updates automatically?
Q_Id 43,671,556 · A_Id 44,968,742 · Created 2017-04-28T03:36:00.000 · Tags: python,machine-learning,artificial-intelligence,data-analysis
Q_Score 0 · Users Score 1 · Score 0.099668 · AnswerCount 2 · Available Count 1 · ViewCount 405 · is_accepted: false
Topics: Data Science and Machine Learning
Question: I am kind of a newbie in machine learning. I have done lots of learning related to data analysis and ML algorithms, and I've got very good results and a good understanding of the algorithms. However, my approach is normally to get datasets and write scripts or notebooks to solve the problem, and that's pretty much it. Which ...
Answer: You are asking about incremental learning! It depends on what task you are doing; you cannot apply incremental learning everywhere. The real problem is that during training there are many other related (interdependent) things that get changed during a new cycle. So if you ca...
0
44,010,620
0
0
0
0
1
false
1
2017-04-28T03:55:00.000
1
1
0
Extracting weights and tree structures from xgboost - plot trees
43,671,704
0.197375
python,algorithm,tree,random-forest,xgboost
You can dump the xgboost model into a text file and parse it yourself. The file looks like this: booster[0 (<- tree id)]: 0 (<- node id):[f317 (<- feature name)<0.187154] yes=1 (<- child node id),no=2,missing=1 1:[f317<0.071172] yes=3,no=4,missing=3 ... 6379:leaf=0.125 (<- weight of that leaf) At the e...
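A hedged sketch of producing that dump, assuming a trained Booster object named bst:

# Write the text dump described above; parse it line by line afterwards.
bst.dump_model("model_dump.txt")   # one line per node, as shown above
trees = bst.get_dump()             # or: a list of per-tree dump strings
print(trees[0])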
I'm using xgboost package in python and I'm going to extract tree structures after training. For instance, I'm going to know the feature and threshold that has been selected at each node to export the tree structures to a function. Moreover, I need to know the weight of each tree after training. (As we know, the result...
0
1
1,889
0
43,674,438
0
0
0
0
1
false
0
2017-04-28T07:24:00.000
0
2
0
I need to implement a ​SparseArray​ with a ​linked list​ implementation
43,674,393
0
python,arrays,algorithm,linked-list
I'd bet these are already implemented in numpy or scipy.
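For instance, scipy's "list of lists" sparse format is close in spirit; a sketch (note it won't satisfy an assignment that requires your own linked list):

from scipy.sparse import lil_matrix

m = lil_matrix((1, 10))   # fixed logical size; only non-zeros are stored
m[0, 3] = 7.0
print(m[0, 3])            # 7.0
try:
    m[0, 99] = 1.0        # out of range -> IndexError, as the spec requires
except IndexError:
    print("IndexError, as expected")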
Sparse array objects will have a fixed size n set when they are created; attempting to set or get elements beyond the size of the array should raise an IndexError.
0
1
238
0
54,512,055
0
0
0
0
1
false
5
2017-05-01T18:45:00.000
0
3
0
Rendering a 3D mesh into an image using python
43,724,600
0
python,3d,rendering,render
nVidia's iRay is excellent; however, 50k-plus triangles is a serious task for any rendering engine. If you're looking for photorealistic quality, you'll be looking at several minutes per render at 50k triangles from a simple lighting source. The more complex your lighting becomes, the slower the process. A simple texture ...
I'm working on a neural network that does face reconstruction from a single image using tensorflow. I'm trying to figure out how to render the output of the network (which is a 3D mesh) into an image in python. Most of the libraries I found do 3D rendering in real time; I'm only looking to render a single frame. I als...
0
1
11,128
0
43,735,408
0
0
0
0
1
true
1
2017-05-02T05:33:00.000
1
1
0
How to avoid padded indices during calculation of softmax using tf.nn.softmax?
43,730,740
1.2
python-3.x,tensorflow,deep-learning
You could set your padded cells to a value close to -Infinity (i.e. very small compared to your other logits), so that their contributions are just negligible.
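A minimal sketch of that masking trick (mask is 1 for real cells, 0 for padding):

# Push padded logits toward -infinity so they contribute ~0 probability.
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.5, 0.0], [3.0, 0.2, 0.0, 0.0]])
mask = tf.constant([[1.0, 1.0, 1.0, 0.0], [1.0, 1.0, 0.0, 0.0]])
masked_logits = logits + (1.0 - mask) * -1e9   # very small vs. real logits
probs = tf.nn.softmax(masked_logits)           # padded cells get ~0 weight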
I have a matrix where each row is padded to maxlen. If I apply softmax on the matrix, the padded indices are also considered. How to apply softmax on the matrix while considering only the non-padded indices? Note: padded length in each row of the matrix would differ.
0
1
302
0
65,209,790
0
0
0
0
1
false
1
2017-05-02T13:51:00.000
0
1
1
opencv_annotation not recognised by windows
43,739,684
0
windows,python-2.7,opencv
You need to build it using CMake.
I am using OpenCV in Windows. When I run this command in the Windows command prompt, I get an error saying opencv_annotation is not recognised as an internal or external command. opencv_annotation --annotations=/C:\Users\harsh\Desktop/annotation.txt --images=C:\Users\harsh\Desktop/pos/
0
1
743
0
44,855,960
0
0
0
0
1
false
0
2017-05-03T20:06:00.000
0
1
0
ValueError: unsupported pickle protocol: 3, python 2.7 cannot load pickle file even dump with protocol = 2
43,769,071
0
python,pickle
It is more likely a pickle file created with Python 3, whose default protocol is 3. I guess you mainly use Python 2.7. Either re-create the pickle file with Python 2.7, or use Python 3 to load it.
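A quick sketch of producing a Python 2-readable file from Python 3 (file names are placeholders):

# Run under Python 3: reload the data and re-dump it with protocol 2,
# which Python 2.7 understands.
import pickle

with open("data.pkl", "rb") as f:          # hypothetical file name
    data = pickle.load(f)
with open("data_py2.pkl", "wb") as f:
    pickle.dump(data, f, protocol=2)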
I have used pickle.dump(data, f, protocol=2) and tried to open the pickle file with Python 2.7; however, it still pops up the error "ValueError: unsupported pickle protocol: 3".
0
1
2,769
0
57,201,952
0
0
0
0
1
false
2
2017-05-03T20:24:00.000
0
1
0
Selecting test and train sets in Keras for Python
43,769,357
0
python,deep-learning,classification,keras,conv-neural-network
Your method is not impossible, but it is impractical. A Neural Network learns from the training dataset. If you give the same data again in the test set, it will generate accurate predictions, which will have no relation with the network's predictive power.
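A sketch of the usual practice instead: hold out a fixed test set once and let Keras carve a validation split from the training portion (X, y and model are placeholders for your arrays and compiled model):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train, epochs=10, validation_split=0.1)
score = model.evaluate(X_test, y_test)   # evaluate on unseen data only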
I am trying to use Keras in Python for time series classification. I have made a simple CNN and was wondering if it is possible to randomly select a subsection of the full data set at each epoch to serve as the test set? I am quite new to neural nets so if there are any reasons this is impossible or impractical I woul...
0
1
95
0
43,772,528
0
0
0
0
1
false
106
2017-05-04T01:22:00.000
7
5
0
How to print a specific row of a pandas DataFrame?
43,772,362
1
python,python-3.x,pandas,indexing
Sounds like you're calling df.plot(). That error indicates that you're trying to plot a frame that has no numeric data. The data types shouldn't affect what you print(). Use print(df.iloc[159220])
I have a massive dataframe, and I'm getting the error: TypeError: ("Empty 'DataFrame': no numeric data to plot", 'occurred at index 159220') I've already dropped nulls, and checked dtypes for the DataFrame so I have no guess as to why it's failing on that row. How do I print out just that row (at index 159220) of the d...
0
1
543,751
0
43,775,529
0
1
0
0
1
false
1
2017-05-04T06:09:00.000
2
3
0
Store points by x, y and multiple z
43,775,108
0.132549
python,arrays,list,dictionary,store
Here are a couple of ways of doing this: As Geotop suggested, use a dictionary indexed with a tuple. This, in my opinion, is an elegant solution. Depending on the purpose and ease of traversal, you may convert the same to a nested list as follows: a = [ [ (1,2), (3,4), (5,6) ], [ [123,234,345], [567,678,543],...
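A sketch of the dictionary variant, including the averaging the question asks for:

from collections import defaultdict

points = [(0, 1, 244), (0, 1, 255), (1, 2, 133)]
zs = defaultdict(list)
for x, y, z in points:
    zs[(x, y)].append(z)           # collect every z for the same (x, y)
averaged = {xy: sum(v) / len(v) for xy, v in zs.items()}
print(averaged)   # {(0, 1): 249.5, (1, 2): 133.0}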
I will write an example so you can understand me : For instance, I have these 3 points : (0, 1, 244) - (0, 1, 255) - (1, 2, 133) Actually I need to average the Z when 2 points have the same (x, y). My idea is to store them in something (an array, a dictionary ?) with a double index/key where the value is an array of th...
0
1
85
0
68,590,093
0
0
0
0
1
false
14
2017-05-04T08:06:00.000
2
3
0
How to split a DataFrame in pandas in predefined percentages?
43,777,243
0.132549
python-3.x,pandas
Creating a dataframe with 70% of the values of the original dataframe: part_1 = df.sample(frac=0.7). Creating a dataframe with the remaining 30% of the values: part_2 = df.drop(part_1.index)
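For the 20% / 30% / 50% split in the question while preserving the sort order, a sketch with iloc (df is a placeholder for your sorted dataframe):

n = len(df)
seg1 = df.iloc[: int(0.2 * n)]                 # first 20% of rows
seg2 = df.iloc[int(0.2 * n): int(0.5 * n)]     # next 30%
seg3 = df.iloc[int(0.5 * n):]                  # remaining 50%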
I have a pandas dataframe sorted by a number of columns. Now I'd like to split the dataframe in predefined percentages, so as to extract and name a few segments. For example, I want to take the first 20% of rows to create the first segment, then the next 30% for the second segment and leave the remaining 50% to the th...
0
1
22,451
0
43,782,019
0
0
0
0
2
false
1
2017-05-04T11:28:00.000
0
2
0
Python 3.5 32 & 64 bit -- Scipy & Tensorflow issues
43,781,633
0
python,python-3.x,tensorflow,scipy,64-bit
Did you try using Anaconda or similar for the installation? If this is an option in your case, I would highly recommend it under Windows.
I'm trying to install Python 3.5 both 32 & 64 bit and also be able to transfer between the two as needed, but am not having any luck. Scipy will only install when I use the 32bit (Various issues when trying to install 64bit version even with physical .whl files). Meanwhile Tensorflow only works on x64. I'm using windo...
0
1
384
0
43,782,112
0
0
0
0
2
false
1
2017-05-04T11:28:00.000
0
2
0
Python 3.5 32 & 64 bit -- Scipy & Tensorflow issues
43,781,633
0
python,python-3.x,tensorflow,scipy,64-bit
Installing Linux is highly recommended. As for your issue with SciPy & TensorFlow: why don't you use the Anaconda installer for Windows x64 (it already contains almost all major packages for data science, SciPy included), and then install TensorFlow?
I'm trying to install Python 3.5 both 32 & 64 bit and also be able to transfer between the two as needed, but am not having any luck. Scipy will only install when I use the 32bit (Various issues when trying to install 64bit version even with physical .whl files). Meanwhile Tensorflow only works on x64. I'm using windo...
0
1
384
0
46,379,569
0
0
0
0
1
false
6
2017-05-05T04:53:00.000
1
2
1
Loading local file from client onto dask distributed cluster
43,796,774
0.099668
python,python-3.x,dask
Network solution: under Windows only, it should work with a shared folder: dd.read_csv("\\server\shared_dir"). Under Unix/Linux only, it should work with HDFS: import hdfs3 and then hdfs.read_csv('/server/data_dir'...). But if you want to use Windows AND Linux workers at the same time, I don't know, since dd.read_csv...
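One hedged workaround sketch (scheduler address and file path are placeholders): read the file locally in chunks with pandas, scatter the chunks to the workers, and stitch them back into a dask dataframe:

import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client

client = Client("scheduler-address:8786")                    # assumption
chunks = pd.read_csv("local_data.csv", chunksize=1_000_000)  # assumption
futures = [client.scatter(chunk) for chunk in chunks]        # ship to workers
ddf = dd.from_delayed(futures)      # from_delayed also accepts futures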
A bit of a beginner question, but I was not able to find a relevant answer on this. Essentially my data (about 7 GB) is located on my local machine. I have a distributed cluster running on the local network. How can I get this file onto the cluster? The usual dd.read_csv() or read_parquet() fails, as the workers aren't ...
0
1
2,868
0
43,798,752
0
0
0
0
1
false
24
2017-05-05T06:54:00.000
0
4
0
One-Hot-Encode categorical variables and scale continuous ones simultaneouely
43,798,377
0
python,scikit-learn
I can't get your point: OneHotEncoder is used for nominal data, and StandardScaler is used for numeric data, so you shouldn't use them together on the same columns of your data.
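If the intent is to apply each transformer to its own columns and concatenate the results, newer scikit-learn versions provide ColumnTransformer; a hedged sketch with made-up column names:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

ct = ColumnTransformer([
    ("onehot", OneHotEncoder(), ["color", "city"]),   # nominal columns
    ("scale", StandardScaler(), ["age", "income"]),   # numeric columns
])
X_transformed = ct.fit_transform(df)   # df is your dataframe (assumption)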
I'm confused: it's going to be a problem if you first apply OneHotEncoder and then StandardScaler, because the scaler will also scale the columns previously transformed by OneHotEncoder. Is there a way to perform encoding and scaling at the same time and then concatenate the results together?
0
1
26,114
0
43,805,354
0
0
0
0
2
true
1
2017-05-05T11:51:00.000
1
2
0
Tensor-flow training and Testing on different OS
43,804,250
1.2
linux,windows,python-3.x,tensorflow
No, the model will be exactly the same. You'll only have to make sure that your TF versions on Linux and Windows are compatible ones, but this is not made more difficult by the different OS; it's only a matter of which version you install on which device.
Will the performance of the model be affected if I train it on a Linux system and then use that model in a Windows application or a Python script?
0
1
496
0
43,805,554
0
0
0
0
2
false
1
2017-05-05T11:51:00.000
0
2
0
Tensor-flow training and Testing on different OS
43,804,250
0
linux,windows,python-3.x,tensorflow
Akshay, as gdelab said, make sure about the versions. If you are running the same model on the same data (training and testing) again and again, that might affect the results, but using a different OS should not. I faced the same issue.
Will the performance of the model be affected if I train it on a Linux system and then use that model in a Windows application or a Python script?
0
1
496
0
43,895,291
0
0
0
0
1
true
0
2017-05-06T13:43:00.000
0
1
0
How does one input images and labels for Semantic Instance Segmentation with neural networks?
43,821,167
1.2
python,machine-learning,tensorflow,conv-neural-network,image-segmentation
You'll want to train your NN in such a way that you'll be able to use it for prediction. If you want to just predict the classes from the image, then all you need to send to your NN is the original image (probably color-balanced), and predict the classes from the XML (convert that into a one-hot class encoding) if ...
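A tiny sketch of the one-hot class encoding mentioned above, with a made-up label set:

import numpy as np

classes = ["cat", "dog", "person"]                 # hypothetical label set
label = "dog"
one_hot = np.eye(len(classes))[classes.index(label)]
print(one_hot)  # [0. 1. 0.]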
So I know for a standard convolutional neural network you can provide the neural net (NN) a file with a list of labels or simply separate your classes by folders but for instance segmentation I imagine it's different right? For example using a site like labelme2 you can annotate and segment images and then download th...
0
1
667
0
43,835,158
0
0
0
0
1
false
1
2017-05-07T18:15:00.000
0
3
0
How to change data types "object" in Pandas dataframe after importing a CSV?
43,835,016
0
python-3.x,pandas
While reading a csv file: use the dtype or converters argument of read_csv in pandas: import pandas as pd; import numpy as np; df = pd.read_csv('data.csv', dtype={'a': np.float64, 'b': np.int32}, header=None). Here the columns will automatically be read with the datatypes you specified. After having read the csv file: use the astype functio...
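A hedged sketch of the per-column conversions after reading (column names are made up; pd is pandas as imported above):

df["count"] = df["count"].astype(int)      # object -> int
df["when"] = pd.to_datetime(df["when"])    # object -> datetime
df["name"] = df["name"].astype(str)        # object -> string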
I have imported a CSV file as a Pandas dataframe. When I run df.dtypes I get most columns as "object", which is useless for taking into Bokeh for charts. I need to change a column as int, another column as date, and the rest as strings. I see the data types only once I import it. Would you recommend changing it durin...
0
1
7,317
0
68,993,454
0
0
0
0
1
false
1
2017-05-08T01:33:00.000
0
1
0
Draw the plot result in an image and save it as an image in python
43,838,497
0
python,image,plot
Use the command fig.savefig('line plot.jpg', bbox_inches='tight', dpi=150)
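A hedged sketch of overlaying the points on the picture before saving; image, xs and ys are placeholders for your data:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")        # show the picture first
ax.scatter(xs, ys, s=8, c="red")     # then overlay the Harris points
ax.axis("off")
fig.savefig("corners.png", bbox_inches="tight", dpi=150)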
I want to draw the plot result on an image and save it as an image in Python. I have a method that uses the Harris method; as a result there are a lot of points that I want to trace on a picture, but when I save the figure it gives me just the points, not the image. Please give me advice.
0
1
40
0
43,879,111
0
0
0
0
1
false
21
2017-05-09T00:18:00.000
3
2
0
Keras - is it possible to view the weights and biases of models in Tensorboard
43,859,133
0.291313
python,tensorflow,keras,tensorboard
I debugged this and found that the problem was I was not providing any validation data when I called fit(). The TensorBoard callback will only report on the weights when validation data is provided. That seems a bit restrictive, but I at least have something that works.
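A minimal sketch of the working setup, assuming a compiled Keras model and pre-split arrays (all names are placeholders):

from keras.callbacks import TensorBoard

tb = TensorBoard(log_dir="./logs", histogram_freq=1)
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),   # required for weight histograms
          epochs=10, callbacks=[tb])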
I just got started with Keras and built a Q-learning example program. I created a tensorboard callback and I include it in the call to model.fit, but the only things that appear in TensorBoard are the scalar summary for the loss and the network graph. Interestingly, if I open up the dense layer in the graph, I see a li...
0
1
17,275
0
43,860,724
0
0
0
1
1
true
0
2017-05-09T02:17:00.000
0
1
0
Python - convert mysql timestamps type to matplotlib and graph
43,859,988
1.2
python,mysql,matplotlib
You can use the datetime module. Although I use the now() function to extract datetimes from MySQL, I consider the format to be the same. For instance: import datetime as dt. I put the datetime data into a list named datelist, and now you can use the datetime.strptime function to convert the date format to what you want...
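A minimal sketch, assuming the rows come back from MySQL as strings in the sample format shown:

import datetime as dt
import matplotlib.pyplot as plt

rows = [("2017-05-08 18:25:10", 123.332), ("2017-05-08 18:30:10", 124.001)]
dates = [dt.datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S") for r in rows]
values = [r[1] for r in rows]
plt.plot(dates, values)          # matplotlib accepts datetime x-values
plt.gcf().autofmt_xdate()        # tilt the tick labels so they fit
plt.show()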
After doing a bit of research I am finding it difficult to find out how to use mysql timestamps in matplotlib. Mysql fields to plot X-axis: Field: entered Type: timestamp Null: NO Default: CURRENT TIMESTAMP Sample: 2017-05-08 18:25:10 Y-axis: Field: value Type: float(12,6) Null: NO Sample: 123.332 What date format is m...
0
1
202
0
43,870,685
0
0
0
0
2
false
2
2017-05-09T12:21:00.000
1
3
0
dynamically growing array in numba jitted functions
43,869,734
0.066568
python,numpy,dynamic-arrays,numba
Typically the strategy I employ is to allocate more than enough array storage to accommodate the calculation, keep track of the final index/indices used, and then slice the array down to the actual size before returning. This assumes that you know beforehand the maximum size you could possibly grow th...
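A minimal sketch of that strategy in nopython mode:

import numpy as np
from numba import njit

@njit
def collect_positive(values):
    out = np.empty(values.shape[0], dtype=np.float64)  # upper bound on size
    n = 0  # track how many slots were actually used
    for v in values:
        if v > 0.0:
            out[n] = v
            n += 1
    return out[:n]  # slice down to the real size before returning

print(collect_positive(np.array([1.0, -2.0, 3.0])))  # [1. 3.]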
It seems that numpy.resize is not supported in numba. What is the best way to use dynamically growing arrays with numba.jit in nopython mode? So far the best I could do is define and resize the arrays outside the jitted function; is there a better (and neater) option?
0
1
2,018