| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 53,747,543 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-12T15:51:00.000 | 1 | 1 | 0 | Is it possible to import bokeh figures from the html file they have been saved in? | 53,746,686 | 1.2 | python-3.x,plot,import,bokeh | As of Bokeh 1.0.2, there is not any existing API for this, and I don't think there is any simple technique that could accomplish this either. I think the only options are: some kind of (probably somewhat fragile) text scraping of the HTML files, or distributing all the HTML files and using something like <iframe> to co... | I've produced a few Bokeh output files as the result of a fairly time-intensive process. It would be really cool to pull the plots together from their respective files and build a new output file where I could visualize them all in a column. I know I should have thought to do this earlier on before producing all the in... | 1 | 1 | 90 |
0 | 53,771,189 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-12-12T16:00:00.000 | 3 | 1 | 0 | Why is watershed function from SciKit too slow? | 53,746,868 | 0.53705 | python,time,native,scikit-image,watershed | It's hard to know without more details why your particular application runs slowly. In general, though, the scikit-image code is not as optimized as OpenCV, but covers many more use cases. For example, it can work with floating point values as input, rather than just uint8, and it can work with 3D or even higher-dimens... | I have made a comparison between time of execution only for watershed functions in OpenCV, Skimage(SciPy) and BoofCV. Although OpenCV appears to be much faster than the other two (average time: 0.0031 seconds on 10 samples), Skimage time of execution varies significantly (from 0.03 to 0.554 seconds). I am wondering why... | 0 | 1 | 276 |
0 | 59,012,601 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-12-12T17:20:00.000 | 3 | 2 | 0 | How to compare two text documents with a tfidf vectorizer? | 53,748,236 | 1.2 | python,nltk,cosine-similarity,tfidfvectorizer | As G. Anderson already pointed out, and to help future readers: when we use the fit function of TFIDFVectorizer on document D1, it means that for D1 the bag of words is constructed. The transform() function then computes the tf-idf frequency of each word in that bag of words. Now our aim is to compare the docum... | I have two different texts which I want to compare using tf-idf vectorization. What I am doing is: (1) tokenizing each document, (2) vectorizing using TFIDFVectorizer.fit_transform(tokens_list). Now the vectors that I get after step 2 are of different shapes. But as per the concept, we should have the same shape for both the vect... | 0 | 1 | 2,848 |
0 | 53,748,320 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 1 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.049958 | python,algorithm,time-complexity | Alternative 1 of course, since that only requires you to go through the list once. If you are to sort the list, you have to traverse the list at least once for the sorting, and then some for the search. | Hey guys, so I've been studying for an upcoming test and I came across this question: If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest? (1) Use linear search on the unsorted list; (2) use insertion sort to so... | 0 | 1 | 1,753 |
0 | 53,748,319 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 2 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.099668 | python,algorithm,time-complexity | Sorting a list has O(N*log(N)) complexity at best. Linear search has O(N) complexity. So if you have to search more than once, you begin to gain time after some number of searches. If objects are hashable (e.g. integers), a nice alternative (when searching more than once only) to sorting + bisection search is to put them in a set.... | Hey guys, so I've been studying for an upcoming test and I came across this question: If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest? (1) Use linear search on the unsorted list; (2) use insertion sort to so... | 0 | 1 | 1,753 |
0 | 53,748,316 | 0 | 1 | 0 | 0 | 4 | true | 2 | 2018-12-12T17:21:00.000 | 3 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 1.2 | python,algorithm,time-complexity | Linear search takes just O(n), while sorting a list first takes O(n log n). Since you are going to search the list only once for a value, the fact that subsequent searches in the sorted list with a binary search take only O(log n) does not help overcome the overhead of the O(n log n) time complexity involved in the so... | Hey guys, so I've been studying for an upcoming test and I came across this question: If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest? (1) Use linear search on the unsorted list; (2) use insertion sort to so... | 0 | 1 | 1,753 |
0 | 53,748,430 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 1 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.049958 | python,algorithm,time-complexity | For solving these types of questions, it is simply necessary to see where you'd spend more time. For a million elements: insertion sort with 'n' inversions would take O(n), and then the search would take an additional O(log(n)) time, whereas linear search would take only O(n) time. Since there is only a single query, method 1 ... | Hey guys, so I've been studying for an upcoming test and I came across this question: If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest? (1) Use linear search on the unsorted list; (2) use insertion sort to so... | 0 | 1 | 1,753 |
0 | 53,756,944 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-12-13T06:58:00.000 | 1 | 2 | 0 | How to implement machine learning models on mobile phones? | 53,756,524 | 0.099668 | android,python,ios,machine-learning | I think I'm qualified to answer this because it was only yesterday that I viewed Google's "DevFestOnAir 2018". There was an "End to End Machine Learning" talk where the speaker mentioned what TensorFlow (TF) has to support AI on mobile devices. Now, TF is available for JS, Java and many other languages, so this captures th... | I've built machine learning models (Random Forest and XGBoost) in Python or R. How can I make my model work on a mobile phone (iOS/Android)? Not for training, just to predict the probability for users by properties and events. | 0 | 1 | 525 |
0 | 53,759,191 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2018-12-13T09:12:00.000 | 1 | 3 | 0 | Could you explain me the output of keras at each iteration? | 53,758,399 | 0.066568 | python,machine-learning,keras,deep-learning | As far as I can tell, the output of the keras function is a running average loss, and the loss is quite a lot larger at the beginning of the epoch than in the end. The loss is reset after each epoch and a new running average is formed. Therefore, the old running average is quite a bit higher (or at least different), tha... | When I train a sequential model with keras using the method fit_generator, I see this output: Epoch 1/N_epochs n/N [====================>..............] - ETA xxxx - loss: yyyy. I noticed that the loss decreased gradually with the number of steps, as expected. My problem is that I also noticed that when one epoch finis... | 0 | 1 | 740 |
0 | 53,759,336 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2018-12-13T09:12:00.000 | 4 | 3 | 0 | Could you explain me the output of keras at each iteration? | 53,758,399 | 1.2 | python,machine-learning,keras,deep-learning | The loss that Keras calculates during the epoch is accumulated and estimated online, so it includes the loss from the model after different weight updates. Let us clarify with an easy case: assume for a second that the model is only improving (every weight update results in better accuracy and loss), and that each epoc... | When I train a sequential model with keras using the method fit_generator, I see this output: Epoch 1/N_epochs n/N [====================>..............] - ETA xxxx - loss: yyyy. I noticed that the loss decreased gradually with the number of steps, as expected. My problem is that I also noticed that when one epoch finis... | 0 | 1 | 740 |
0 | 53,821,015 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-14T03:59:00.000 | 1 | 1 | 0 | Can sklearn.preprocessing.KBinsDiscretizer with strategy='quantile' drop the duplicated bins? | 53,773,352 | 1.2 | python-2.7,scikit-learn | That will not be possible. Set strategy='uniform' to achieve your goal. | I used sklearn.preprocessing.KBinsDiscretizer(n_bins=10, encode='ordinal') to discretize my continuous feature. The strategy is 'quantile', by default. But my data distribution is actually not uniform; for example, about 70% of the rows are 0. Then I got KBinsDiscretizer.bins_edges=[0.,0.,0.,0.,0.,0.,0.,256.,602., 1306., 18464.]. Th... | 0 | 1 | 858 |
0 | 53,821,335 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-12-17T12:36:00.000 | 5 | 1 | 0 | Value of alpha in gensim word-embedding (Word2Vec and FastText) models? | 53,815,402 | 1.2 | python-3.x,gensim,word2vec,word-embedding,fasttext | The default starting alpha is 0.025 in gensim's Word2Vec implementation. In the stochastic gradient descent algorithm for adjusting the model, the effective alpha affects how strong of a correction to the model is made after each training example is evaluated, and will decay linearly from its starting value (alpha) to... | I just want to know the effect of the value of alpha in gensim word2vec and fasttext word-embedding models. I know that alpha is the initial learning rate and that its default value is 0.075, from Radim's blog. What if I change this to a somewhat higher value, i.e. 0.5 or 0.75? What will its effect be? Is it allowed to change t... | 0 | 1 | 2,594 |
0 | 53,818,650 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-17T14:56:00.000 | 0 | 2 | 0 | Matplotlib erase figure and plot new series of subplots | 53,817,735 | 0 | python,matplotlib | Call plt.show() before the 10th chart, then start over with plt.subplot(3, 3, 1), followed by the code to plot the 10th chart | I want to make a series of figures with 3x3 subplots using matplotlib. I can make the first figure fine (9 total subplots), but when I try to make a tenth subplot I get this error: ValueError: num must be 1 <= num <= 9, not 10. What I think I want to do is plot the first 9 subplots, clear the figure, and then plot the ... | 0 | 1 | 84 |
0 | 53,825,670 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-12-18T02:24:00.000 | 1 | 1 | 0 | How to use cross validation after imputing on a training and validation set? | 53,825,586 | 0.197375 | python,cross-validation,imputation | Generally, you'll want to split your data into three sets: a training set, a testing set, and a validation set. The testing set should be completely left out of training (your concern is correct). When using cross validation, you don't need to worry about splitting your training and validation set; that's what cross valida... | So I've gotten myself a little confused. At the moment, I've got a dataset of about 800 instances. I've split it into a training and validation set because there were missing values, so I used SimpleImputer from sklearn and fit_transform-ed the training set and transformed the testing set. I did that because if I want ... | 0 | 1 | 726 |
0 | 53,830,482 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-12-18T09:50:00.000 | 1 | 2 | 0 | How to obtain small bins after FFT in python? | 53,830,329 | 1.2 | python,signal-processing,fft | Even if you use another transform, that will not make more data. If you have a sampling rate of 1 kHz and 2 s of samples, then your precision is 0.5 Hz. You can interpolate this with chirpz (or just use sinc(), that's the shape of your data between the samples of your comb), but the data you have on your current point is the d... | I'm using scipy.signal.fft.rfft() to calculate the power spectral density of a signal. The sampling rate is 1000 Hz and the signal contains 2000 points. So the frequency bin is (1000/2)/(2000/2)=0.5 Hz. But I need to analyze the signal in [0-0.1] Hz. I saw several answers recommending the chirp-Z transform, but I didn't find any too... | 0 | 1 | 279 |
0 | 53,860,082 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-18T09:50:00.000 | 1 | 2 | 0 | How to obtain small bins after FFT in python? | 53,830,329 | 0.099668 | python,signal-processing,fft | You can't get smaller frequency bins to separate out close spectral peaks unless you use more (a longer amount of) data. You can't just use a narrower filter, because the transient response of such a filter will be longer than your data. You can get smaller frequency bins that are just a smooth interpolation between nea... | I'm using scipy.signal.fft.rfft() to calculate the power spectral density of a signal. The sampling rate is 1000 Hz and the signal contains 2000 points. So the frequency bin is (1000/2)/(2000/2)=0.5 Hz. But I need to analyze the signal in [0-0.1] Hz. I saw several answers recommending the chirp-Z transform, but I didn't find any too... | 0 | 1 | 279 |
0 | 53,830,550 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-18T09:51:00.000 | 0 | 1 | 0 | Centroid of a contour | 53,830,357 | 0 | python,opencv | You can use cv2.connectedComponentsWithStats; it returns the centroid and size of each component. | In OpenCV under Python, is there no better way to compute the centroid of the inside of a contour than with the function cv2.moments, which computes all moments up to order 3 (and is overkill)? | 0 | 1 | 153 |
0 | 53,831,640 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-18T11:01:00.000 | 1 | 2 | 0 | Sum of different CSV columns in python | 53,831,555 | 0.099668 | python,csv | Why don't you: create a columnTotal integer array (one index for each column), then read the file line by line; per line, split the line using the comma as separator, convert the split string parts to integers, and add the value of each column to the corresponding index of the columnTotal array. | I am quite new to Python, and therefore this might seem easy, but I am really stuck here. I have a CSV file with values in a [525599 x 74] matrix. For each of the 74 columns I would like to have the total sum of all 525599 values saved in one list. I could not figure out the right way to iterate over each column a... | 0 | 1 | 106 |
0 | 53,844,061 | 0 | 1 | 0 | 0 | 2 | true | 2 | 2018-12-18T23:09:00.000 | 1 | 2 | 0 | install numpy on python 3.5 Mac OS High sierra | 53,842,426 | 1.2 | python,python-3.x,macos,numpy | First, you need to activate the virtual environment for the version of python you wish to run. After you have done that, just run "pip install numpy" or "pip3 install numpy". If you used Anaconda to install python, then, after activating your environment, type conda install numpy. | I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. I have it on python 2.7, but I would also like to install it for the next versions. Currently, I have installed python 2.7, python 3.5, and python 3.7. I tried to install numpy using: brew install numpy --w... | 0 | 1 | 3,167 |
0 | 53,928,674 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2018-12-18T23:09:00.000 | 0 | 2 | 0 | install numpy on python 3.5 Mac OS High sierra | 53,842,426 | 0 | python,python-3.x,macos,numpy | If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment, and you should be able to install numpy with pip3 install numpy. | I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. I have it on python 2.7, but I would also like to install it for the next versions. Currently, I have installed python 2.7, python 3.5, and python 3.7. I tried to install numpy using: brew install numpy --w... | 0 | 1 | 3,167 |
0 | 59,532,530 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-12-19T01:32:00.000 | 1 | 5 | 0 | Spark copying dataframe columns best practice in Python/PySpark? | 53,843,406 | 0.039979 | python,apache-spark,pyspark | Use dataframe.withColumn(), which returns a new DataFrame by adding a column or replacing the existing column that has the same name. | This is for Python/PySpark using Spark 2.3.2. I am looking for a best-practice approach for copying columns of one data frame to another data frame using Python/PySpark for a very large data set of 10+ billion rows (partitioned by year/month/day, evenly). Each row has 120 columns to transform/copy. The output data frame... | 0 | 1 | 9,487 |
0 | 53,884,751 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-12-19T01:32:00.000 | 0 | 5 | 0 | Spark copying dataframe columns best practice in Python/PySpark? | 53,843,406 | 0 | python,apache-spark,pyspark | Bit of a noob on this (python), but might it be easier to do that in SQL (or whatever source you have) and then read it into a new/separate dataframe? | This is for Python/PySpark using Spark 2.3.2. I am looking for a best-practice approach for copying columns of one data frame to another data frame using Python/PySpark for a very large data set of 10+ billion rows (partitioned by year/month/day, evenly). Each row has 120 columns to transform/copy. The output data frame... | 0 | 1 | 9,487 |
0 | 53,875,465 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-19T13:11:00.000 | 1 | 1 | 0 | Linear algebra with Pyomo | 53,851,982 | 0.197375 | python,optimization,pyomo | Pyomo is mainly a package for optimization, i.e. specifying data -> building the problem -> sending it to the solver -> waiting for the solver's results -> retrieving the solution. Even if it can handle matrix-like data, it cannot manipulate it with matrix operations. This should be done using a good external library before you send yo... | I'm trying to put my optimization problem into Pyomo, but it is strongly dependent upon standard linear algebra operations - qr, inverse, transpose, product. Actually, this is a Kalman filter problem: recursive linear algebra for a long time series. I failed to find pyomo functions to implement it like I could in TensorFlow.... | 0 | 1 | 600 |
0 | 53,903,711 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-20T03:25:00.000 | 0 | 2 | 0 | Would a Logistic Regression Machine Learning Model Work Here? | 53,862,029 | 0 | python,machine-learning,neural-network,classification,logistic-regression | It's true that you need a lot of data for applying neural networks. It would have been helpful if you could be more precise about your dataset and the features. You can also try implementing K-Means clustering for your project. If your aim is to find out whether the patient took the medicine or not, then it can be done us... | I am in 10th grade and I am looking to use a machine learning model on patient data to find a correlation between the time of week and patient adherence. I have separated the week into 21 time slots, three for each time of day (1 is Monday morning, 2 is Monday afternoon, etc.). Adherence values will be binary (0 means ... | 0 | 1 | 78 |
0 | 53,902,459 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-20T03:25:00.000 | 1 | 2 | 0 | Would a Logistic Regression Machine Learning Model Work Here? | 53,862,029 | 0.099668 | python,machine-learning,neural-network,classification,logistic-regression | In my opinion a logistic regression won't be enough for this, as you are going to use a single parameter as input. When I imagine a decision line for this problem, I don't think it can be achieved by a single neuron (a logistic regression). It may need a few more neurons, or even a few layers of them, to do so. And you may need a ... | I am in 10th grade and I am looking to use a machine learning model on patient data to find a correlation between the time of week and patient adherence. I have separated the week into 21 time slots, three for each time of day (1 is Monday morning, 2 is Monday afternoon, etc.). Adherence values will be binary (0 means ... | 0 | 1 | 78 |
0 | 53,866,818 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-20T10:15:00.000 | 2 | 1 | 0 | I installed CUDA10 but Anaconda installs CUDA9. Can I remove the former? | 53,866,591 | 0.379949 | python,tensorflow,cuda,anaconda,gpu | If you installed via conda install tensorflow-gpu, all dependencies are in the Conda environment (e.g., the CUDA dlls are in the lib subfolder in the environment), so yes, you can safely uninstall CUDA 10. Note: at least on Ubuntu I saw that XLA JIT optimization of code (which is still an experimental feature) requires CUDA ... | As a starter with GPU programming, CUDA and Python, I decided to install the latest version of CUDA (10) in order to experiment with ML. After spending considerable time installing (huge downloads), I ended up with a version that Tensorflow doesn't support. I discovered the tensorflow-gpu meta package using Anaconda tho... | 0 | 1 | 1,375 |
0 | 53,871,442 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-20T15:10:00.000 | 5 | 1 | 0 | How to choose between keras.backend and keras.layers? | 53,871,303 | 1.2 | python,tensorflow,keras,deep-learning,keras-layer | You should definitely use keras.layers if there is a layer that achieves what you want to do. That's because, when building a model, Keras layers only accept Keras Tensors (i.e. the output of layers) as the inputs. However, the output of methods in keras.backend.* is not a Keras Tensor (it is the backend Tensor, such a... | I found there are a lot of same names in keras.backend or keras.layers, for example keras.backend.concatenate and keras.layers.Concatenate. I know vaguely that one is for tensor while the other is for layer. But when the code is so big, so many function made me confused that which is tensor or which is layer. Anybody h... | 0 | 1 | 523 |
0 | 53,885,612 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-20T16:20:00.000 | 0 | 1 | 0 | Image segmentation with 3 classes but one of them is easy to find, How can I write the network to not train on the easy one? | 53,872,418 | 0 | python-2.7,keras,neural-network,image-segmentation | I'm not sure if you can do that. I think you should apply some regularization and/or dropout to the network and feed it more data. But what you could do is label all the empty pixels as noise, as signal is usually in the middle and noise is on the outer side of the signal graph. Then you train the network that way. Y... | I am using an MS-D or UNet network for image segmentation. My image has three classes: noise, signal and empty. The empty class is easy to find because the pixel values for the empty class are mainly -1, while for the two other classes they are between 0 and 1. Is there a way that I can ask the network to find only the noise and signal clas... | 0 | 1 | 158 |
0 | 53,992,065 | 0 | 0 | 1 | 0 | 1 | false | 2 | 2018-12-20T20:15:00.000 | 3 | 2 | 0 | How to find row-echelon matrix form (not reduced) in Python? | 53,875,432 | 0.291313 | python,python-3.x,matrix | Bill M's answer is correct. When you find the LU decomposition, the U matrix is a correct way of writing M in REF (note that REF is not unique, so there are multiple possible ways to write it). To see why, remember that the LU decomposition finds P, L, U such that PLU = M. When L is full rank we can write this as U = (PL)... | I am working on a project for my Linear Algebra class. I am stuck with a small problem. I could not find any method for finding the row-echelon matrix form (not reduced) in Python (not MATLAB). Could someone help me out? Thank you. (I use python3.x) | 0 | 1 | 3,252 |
0 | 53,876,998 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-12-20T20:55:00.000 | 1 | 1 | 0 | Word Embedding Interpretation | 53,875,910 | 1.2 | python,tensorflow | Yes and yes. So, if you have "I" [4.55, 6.78], "like" [3.12, 8.17], and "dogs" [1.87, 10.95], each embedded representation roughly equates directly to each word, and thus the order isn't lost when the embedding is done. And yes, the shape would be (batch_size, 600, 15) for batches of 600-word-sentences and embedding di... | Before I ask the question, let me preface this by stating that this question has been answered in many articles, but I still struggle to understand the basic format of word embeddings. Let's start with the sentence "I like dogs". Assuming a simple hashing approach, "I like dogs" can be represented in the vector [1, 4,... | 0 | 1 | 85 |
0 | 53,965,459 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2018-12-21T02:43:00.000 | 0 | 1 | 0 | Backtesting a Universe of Stocks | 53,878,551 | 0 | python,excel,stocks,universe,back-testing | The amount of data is too much for Excel or Calc. Even if you want to screen only the 500 stocks from the S&P 500, you will get about 2.2 million rows (approx. 220 days/year * 20 years * 500 stocks). For this amount of data, you should use a SQL database like MySQL. It is performant enough to handle this amount of data. But you h... | I would like to develop a trend following strategy via back-testing a universe of stocks; let's just say all NYSE or S&P500 equities. I am asking this question today because I am unsure how to handle the storage/organization of the massive amounts of historical price data. After multiple hours of research I am here, as... | 0 | 1 | 603 |
0 | 53,882,507 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-21T09:36:00.000 | 1 | 1 | 0 | How to create a tensorflow network of two saved tensorflow networks? | 53,882,317 | 1.2 | python,python-3.x,tensorflow | I am not sure if I got your point correctly, but Block Based Neural Networks might be what you are searching for. In BBNN each node can be a neural network, and w.r.t. what you describe, a one-layer BBNN is what you need. | Let's say I've trained and saved 6 different networks where all of the values for hidden layer counts, neuron counts, and learn rates differ. For example: one with 8 hidden layers with 16 neurons in each, trained at .1 learn rate; one with 4 hidden layers with 4 neurons in each, trained at .01 learn rate; one with 4 hidden la... | 0 | 1 | 36 |
0 | 53,905,344 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-23T10:29:00.000 | 0 | 1 | 0 | Slicing Array of four matrices | 53,902,843 | 1.2 | python,numpy,numpy-slicing | It's not really clear what RM and M are based on your description. Is M the ndarray containing all 4 images, and RM the 2x2 array for a given pixel containing the data from the 4 images? You can put the 4 images into the same ndarray so it has shape (4,N,M) and then reshape slices. For example, to get the (0,0) entry ... | I've got an array of 4 images; each image, let's say, is NxM (all images share this same size). (I'm implementing a Harris corner detector, by the way.) Now I made a matrix M = ([Ix^2, Ixy],[Ixy, Iy^2]).reshape(2,2), and I'd like to compute my response, which is usually Det(RM) - k*(trace(RM)**2), RM being a 2x2 matrix ... | 0 | 1 | 54 |
0 | 56,282,819 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-12-24T07:37:00.000 | 1 | 2 | 0 | module 'seaborn' has no attribute 'relplot' | 53,910,548 | 0.099668 | python,seaborn,google-colaboratory | Change directory to where pip3.exe is located (for me: cd C:\Users\sam\AppData\Local\Programs\Python\Python37-32\Scripts), then use .\ as in: .\pip3 install seaborn==0.9.0 | I'm having trouble running the relplot function in a Colab notebook, but it works fine in a Jupyter notebook. Getting the following error in Colab: AttributeError Traceback (most recent call last) in () ----> 1 sns.relplot(x="total_bill", y="tip", 2 col="time", # Categorical v... | 0 | 1 | 11,284 |
0 | 54,089,983 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-25T10:26:00.000 | 0 | 4 | 0 | How to train your own model in AWS Sagemaker? | 53,921,454 | 0 | python,amazon-web-services,tensorflow,keras,amazon-sagemaker | You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in Sagemaker. This conversion is pretty basic though; I reimplemented my models in TensorFlow using the tf.keras API, which makes the model nearly identical, and trained with the Sagemaker TF estimator in script mode. My ... | I just started with AWS and I want to train my own model with my own dataset. I have my model as a keras model with a tensorflow backend in Python. I read some documentation; it says I need a Docker image to load my model. So, how do I convert a keras model into a Docker image? I searched through the internet but found nothing that... | 1 | 1 | 2,089 |
0 | 53,925,094 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-25T13:11:00.000 | 0 | 1 | 0 | What data structure to use for ranking system which divides itself in groups? | 53,922,685 | 0 | django,python-3.x,data-structures | Just save the ranking score for every student. Calculate their group when displaying them. | I have a quiz app where students can take tests. There is a ranking based on every test. It's implemented with simple lists (every new score is inserted into the list and then sorted; index+1 is the rank). But I want to add another abstraction, i.e. suppose 1000 students took the test and my ranking was 890. But those ... | 0 | 1 | 27 |
0 | 53,944,352 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-27T09:49:00.000 | 0 | 1 | 0 | how to constrain scipy curve_fit in positive result | 53,942,983 | 0 | python,scipy | One of the simpler ways to handle negative values in y is to make a log transformation. Get the best fit for the log-transformed y, then do an exponential transformation for the actual error in the fit or for any new value prediction. | I'm using scipy curve_fit to fit a curve for retention; however, I found the resulting curve may produce negative numbers. How can I add some constraint? The 'bounds' argument only constrains the parameters, not the results y. | 0 | 1 | 149 |
0 | 53,962,197 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-12-27T20:37:00.000 | 1 | 2 | 0 | Create dataframe from Excel attachment in Outlook | 53,950,601 | 1.2 | python,excel,pandas,outlook | Attachments are MIME-encoded and have to be decoded back into the original format (which essentially means making a disk copy) for programs that are expecting that format. What you want is to give pandas the identifier of the email, the name of the attachment, the details of the message store, and suitable authenticati... | Is it possible to read an Excel file from an Outlook attachment without saving it, and return a pandas dataframe from the attached file? The file will always be in the same format. | 0 | 1 | 2,158 |
0 | 53,964,532 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-12-28T09:02:00.000 | 0 | 1 | 0 | How to get list of context words in Gensim | 53,955,958 | 0 | python,gensim,word2vec,fasttext | The plain model doesn't retain any such co-occurrence statistics from the original corpus. It just has the trained results: vectors per word. So, the ranked list of most_similar() vectors – which isn't exactly words that appeared together, but strongly correlates to that – is the best you'll get from that file. Only ... | How do I get the most frequent context words from a pretrained fasttext model? For example: for the word 'football' and corpus ["I like playing football with my friends"], get the list of context words: ['playing', 'with', 'my', 'like']. I tried to use model_wiki = gensim.models.KeyedVectors.load_word2vec_format("wiki.ru.vec") and model.most_s... | 0 | 1 | 473 |
0 | 63,126,102 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-12-28T10:28:00.000 | 1 | 2 | 0 | AttributeError: 'AxesSubplot' object has no attribute 'hold' | 53,957,042 | 0.099668 | python-3.x | The API Changes document says:
Setting or unsetting hold (deprecated in version 2.0) has now been completely removed. Matplotlib now always behaves as if hold=True. To clear an axes you can manually use cla(), or to clear an entire figure use clf(). | I change a new computer and install Python3.6 and matplotlib,When I run the code last month in the old computer, I get the following error:
ax.hold(True)
AttributeError: 'AxesSubplot' object has no attribute 'hold' | 0 | 1 | 5,619 |
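Since Matplotlib 2.0 always behaves as if hold=True, the fix is simply to delete the ax.hold(True) line; a small headless sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Successive plot calls accumulate by default -- no ax.hold(True) needed.
ax.plot([0, 1, 2], [0, 1, 4])
ax.plot([0, 1, 2], [0, 2, 3])
assert len(ax.lines) == 2  # both curves kept

ax.cla()  # explicit clear replaces the old hold(False) behaviour
assert len(ax.lines) == 0
```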
0 | 53,958,035 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-28T11:27:00.000 | 0 | 2 | 0 | What is the difference between Methods and Properties for an object in python? | 53,957,850 | 0 | python,methods,properties | In the example you mentioned, you can pass an argument to the df.head() function, whereas you cannot pass arguments to properties.
For the same example, if you had written df.head(20), it would return the first 20 rows. | Suppose I have a dataframe object named df; head() is a method that can be applied to df to see the first 5 records of the dataframe, and df.size is a property to get the size of the dataframe.
For the property we are not using '()' as we do for a method. This was a little confusing initially.
Could anyone expla... | 0 | 1 | 311 |
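A toy class showing the difference (a hypothetical Frame class loosely mimicking a dataframe, not pandas itself):

```python
class Frame:
    def __init__(self, rows):
        self._rows = rows

    @property
    def size(self):
        # Attribute-style access: no parentheses, cannot take arguments.
        return len(self._rows)

    def head(self, n=5):
        # Method call: parentheses, optional argument.
        return self._rows[:n]

f = Frame(list(range(10)))
assert f.size == 10            # property -- looks like plain data
assert f.head(3) == [0, 1, 2]  # method -- behaves like a function
```

Under the hood a property is still a function, but the `@property` descriptor makes Python call it automatically on attribute access, which is why `()` is neither needed nor allowed.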
0 | 53,968,421 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-29T08:57:00.000 | 0 | 1 | 0 | Tuple treated as single value in group by statement, any workaround? | 53,968,141 | 0 | python,pandas | First I had to use a list as suggested by Gennady Kandaurov, and to later rename the columns I just had to add the two lists.
target = ['Shop', 'Route']
DF1.columns = target + ['static columns'] | I have some calculations roughly looking like this: trip_count = DF_Trip.groupby([target], as_index=False)['Delivery'].count()
All my DFs could possibly be grouped by Shop, Route and Driver. When I enter a single value for target, e.g. target = 'Route', it works fine.
But when I want to enter multiple values, e.g. target... | 0 | 1 | 35 |
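A small sketch of the list-based target with made-up data (column names follow the question):

```python
import pandas as pd

df = pd.DataFrame({
    "Shop": ["A", "A", "B", "B"],
    "Route": [1, 2, 1, 1],
    "Delivery": ["x", "y", "z", "w"],
})

target = ["Shop", "Route"]  # a list, not a tuple or a single string
counts = df.groupby(target, as_index=False)["Delivery"].count()

assert list(counts.columns) == ["Shop", "Route", "Delivery"]
assert len(counts) == 3  # (A,1), (A,2), (B,1)
```

A tuple would be interpreted as one (composite) key, which is why the list form is required for grouping by multiple columns.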
0 | 53,976,520 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-29T16:27:00.000 | 1 | 1 | 0 | Analyze just Pretty_Midi Instruments | 53,971,287 | 1.2 | python,artificial-intelligence,midi,music21,midi-instrument | In MIDI files, bank and program numbers uniquely identity instruments.
In General MIDI, drums are on channel 10 (and, in theory, should not use a Program Change message).
In GM2/GS/XG, the defaults for drums are the same, but can be changed with bank select messages. | Trying to figure out a good way of solving this problem but wanted to ask for the best way of doing this.
In my project, I am looking at multiple instrument note pairs for a neural network. The only problem is that there are multiple instruments with the same name and just because they have the same name doesn't mean t... | 0 | 1 | 431 |
0 | 53,976,226 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-29T23:04:00.000 | 0 | 2 | 0 | Reinforcement Learning Using Multiple Stock Ticker’s Datasets? | 53,974,005 | 0 | python-3.x,tensorflow,reinforcement-learning,stocks,openai-gym | Thanks to @Primusa I normalized my separate datasets by dividing each value by their respective maximums, then combined the datasets into one for training. Thanks! | Here’s a general question that maybe someone could point me in the right direction.
I’m getting into Reinforcement Learning with Python 3.6/Tensorflow and I have found/tweaked my own model to train on historical data from a particular stock. My question is, is it possible to train this model on more than just one stoc... | 0 | 1 | 296 |
0 | 62,697,169 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-29T23:04:00.000 | 0 | 2 | 0 | Reinforcement Learning Using Multiple Stock Ticker’s Datasets? | 53,974,005 | 0 | python-3.x,tensorflow,reinforcement-learning,stocks,openai-gym | I think normalizing the datasets with the % change from the previous close could be a good start. In that way, any stock at any price level is normalized. | Here’s a general question that maybe someone could point me in the right direction.
I’m getting into Reinforcement Learning with Python 3.6/Tensorflow and I have found/tweaked my own model to train on historical data from a particular stock. My question is, is it possible to train this model on more than just one stoc... | 0 | 1 | 296 |
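A quick numpy illustration of the %-change normalization idea, with made-up prices at very different levels:

```python
import numpy as np

# Two hypothetical closing-price series at very different price levels.
stock_a = np.array([10.0, 10.5, 10.2, 10.8])
stock_b = np.array([1000.0, 1050.0, 1020.0, 1080.0])

def pct_change(prices):
    # Relative change from the previous close, so the price scale drops out.
    return (prices[1:] - prices[:-1]) / prices[:-1]

# After normalisation the two series are directly comparable,
# so one model can be trained on both.
assert np.allclose(pct_change(stock_a), pct_change(stock_b))
```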
0 | 67,994,767 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-12-31T15:23:00.000 | 1 | 3 | 0 | Get training hyperparameters from a trained keras model | 53,988,984 | 0.066568 | python,keras,hdf5 | Configuration - model.get_config()
Optimizer config - model.optimizer.get_config()
Training config - model.history.params (this will be empty if the model is saved and reloaded)
Loss function - model.loss | I am trying to figure out some of the hyperparameters used for training some old Keras models I have. They were saved as .h5 files. When using model.summary(), I get the model architecture, but no additional metadata about the model.
When I open this .h5 file in notepad++, most of the file is not human readable, but t... | 0 | 1 | 3,271 |
0 | 54,002,191 | 0 | 1 | 0 | 0 | 1 | true | 70 | 2019-01-01T19:23:00.000 | 71 | 1 | 0 | How does the "number of workers" parameter in PyTorch dataloader actually work? | 53,998,282 | 1.2 | python,memory-management,deep-learning,pytorch,ram | When num_workers>0, only these workers will retrieve data; the main process won't. So when num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3.
Well, our CPU can usually run around 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than c... | If num_workers is 2, does that mean that it will put 2 batches in the RAM and send 1 of them to the GPU, or does it put 3 batches in the RAM and then send 1 of them to the GPU?
What does actually happen when the number of workers is higher than the number of CPU cores? I tried it and it worked fine but How does it work? (I... | 0 | 1 | 54,707 |
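The producer/consumer mechanics can be illustrated with a toy threading analogue (this is not DataLoader's actual implementation, just the idea of workers prefetching batches into a bounded buffer while the main process only consumes):

```python
import queue
import threading
import time

batch_queue = queue.Queue(maxsize=2)  # at most 2 prefetched batches sit in RAM

def worker(worker_id, batches):
    for b in batches:
        time.sleep(0.01)               # simulate slow data loading
        batch_queue.put((worker_id, b))

# Two workers fill the queue concurrently, like num_workers=2.
threads = [threading.Thread(target=worker, args=(i, range(3))) for i in range(2)]
for t in threads:
    t.start()

# The main thread never loads data itself -- it only pulls finished batches.
consumed = [batch_queue.get() for _ in range(6)]
for t in threads:
    t.join()

assert len(consumed) == 6
```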
0 | 54,008,906 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-01-02T06:13:00.000 | 0 | 2 | 0 | Dask Dataframe View Entire Row | 54,002,006 | 0 | python-3.x,dask | Dask does not normally display the data in a dataframe at all, because it represents lazily-evaluated values. You may want to get a specific row by index, using the .loc accessor (same as in Pandas, but only efficient if the index is known to be sorted).
If you meant to get the whole list of columns only, you can get t... | I want to see the entire row for a dask dataframe without the fields being cutoff, in pandas the command is pd.set_option('display.max_colwidth', -1), is there an equivalent for dask? I was not able to find anything. | 0 | 1 | 2,155 |
0 | 54,002,342 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-02T06:46:00.000 | 0 | 1 | 0 | What is use of function mnist.train.next_batch() in training dataset? | 54,002,301 | 0 | python,tensorflow | The function samples batch_size examples from a shuffled training dataset, then returns the batch for training.
You could write your own next_batch() method that does the same thing, or modify it as you wish. Then use it similarly when you're training your model. | I am using TensorFlow for training my own dataset using capsule network. While training mnist dataset, it contains function mnist.train.next_batch(batch size). How to replace this function for training own dataset using TensorFlow? | 0 | 1 | 472 |
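A minimal next_batch() replacement for your own numpy arrays (array names and shapes are illustrative):

```python
import numpy as np

def next_batch(images, labels, batch_size, rng=np.random):
    # Sample batch_size rows at random from the training arrays,
    # mirroring what mnist.train.next_batch() does internally.
    idx = rng.choice(len(images), size=batch_size, replace=False)
    return images[idx], labels[idx]

# Toy dataset: 10 examples with 2 features each.
X = np.arange(20).reshape(10, 2).astype(float)
y = np.arange(10)

xb, yb = next_batch(X, y, batch_size=4)
assert xb.shape == (4, 2) and yb.shape == (4,)
```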
0 | 54,019,576 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-01-03T09:31:00.000 | 1 | 2 | 0 | Documentations for Numpy Functions in Jupyter | 54,019,510 | 0.099668 | python,jupyter-notebook | Highlight and press SHIFT + TAB. | Is it possible to display documentation of numpy functions from jupyter notebook?
help(linspace) did not work for me | 0 | 1 | 55 |
0 | 54,025,044 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-01-03T14:52:00.000 | 0 | 2 | 0 | error while installing tensorflow in conda environment (CondaError: Cannot link a source that does not exist.) | 54,024,671 | 1.2 | python,tensorflow,anaconda,conda | Try to run conda clean --all --yes and conda update anaconda.
Do you have a conda.exe file in the following folder C:\ProgramData\Anaconda3\Scripts\?
Do you use the latest Conda?
Another solution could be to create a conda environments conda create -n name_environment pip python=3.5 and using pip to install tensorflow ... | trying to install tensorflow using conda package manager
using following command
conda install -c conda-forge tensorflow
but it gives following error while executing transaction
CondaError: Cannot link a source that does not exist.
C:\ProgramData\Anaconda3\Scripts\conda.exe | 0 | 1 | 2,069 |
0 | 54,032,226 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-03T23:38:00.000 | 1 | 1 | 0 | k means clustering with fixed constraints (sum of specific attribute should be less than or equal 90,000) | 54,031,283 | 0.197375 | python,cluster-analysis,mean,arcgis | A turnkey solution will not work for you.
You'll have to formulate this as a standard constrained optimization problem and run a solver to optimize it. It's fairly straightforward: take the k-means objective and add your constraints... | Suppose I have 20,000 features on a map, and each feature has many attributes (as well as the latitude and longitude). One of the attributes is called population.
I want to split these 20,000 features into 3 clusters where the total sum of population of each cluster are equal to specific value 90,000 and features in each c... | 0 | 1 | 791 |
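One simple (non-optimal) sketch is a greedy capacity-constrained assignment around fixed centres: it enforces the ≤ 90,000 population bound per cluster, but getting the sums exactly equal to 90,000 would need a real solver, as the answer says. All points, centres, and populations below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(30, 2))          # hypothetical (x, y) features
pop = rng.integers(1000, 9000, size=30)          # hypothetical populations
centers = np.array([[20.0, 20.0], [50.0, 80.0], [80.0, 30.0]])
CAP = 90_000

# Greedy: assign each feature to the closest centre that still has
# population capacity left (plain k-means has no such constraint).
load = np.zeros(len(centers))
assign = np.full(len(pts), -1)
for i in range(len(pts)):
    order = np.argsort(np.linalg.norm(centers - pts[i], axis=1))
    for c in order:
        if load[c] + pop[i] <= CAP:
            assign[i] = c
            load[c] += pop[i]
            break

assert np.all(load <= CAP)
```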
0 | 54,173,014 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-04T08:03:00.000 | 2 | 1 | 0 | Do Dash apps reload all data upon client log in? | 54,035,114 | 1.2 | python,performance,plotly-dash | The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page.
Everything else is just executed once when the app is starting.
This means if you load your data outside the app.layout (which I assum... | I'm wondering about how a dash app works in terms of loading data, parsing and doing initial calcs when serving to a client who logs onto the website.
For instance, my app initially loads a bunch of static local csv data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed ... | 1 | 1 | 50 |
0 | 54,041,995 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-01-04T15:20:00.000 | 1 | 4 | 0 | Shuffling with constraints on pairs | 54,041,705 | 0.049958 | python,shuffle | A possible solution is to think of your number set as n chunks of items, each chunk having length m. If you randomly select for each chunk exactly one item from each list, then you will never hit dead ends. Just make sure that the first item in each chunk (except the first chunk) comes from a different list than t... | I have n lists, each of length m. Assume n*m is even. I want to get a randomly shuffled list with all elements, under the constraint that the elements in locations i,i+1 where i=0,2,...,n*m-2 never come from the same list. Edit: other than this constraint I do not want to bias the distribution of random lists. That is, ...
0 | 54,088,978 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-01-04T15:20:00.000 | 1 | 4 | 0 | Shuffling with constraints on pairs | 54,041,705 | 0.049958 | python,shuffle | A variation of b above that avoids dead ends: At each step you choose twice. First, randomly chose an item. Second, randomly choose where to place it. At the Kth step there are k optional places to put the item (the new item can be injected between two existing items). Naturally, you only choose from allowed places.
Mo... | I have n lists each of length m. assume n*m is even. i want to get a randomly shuffled list with all elements, under the constraint that the elements in locations i,i+1 where i=0,2,...,n*m-2 never come from the same list. edit: other than this constraint i do not want to bias the distribution of random lists. that is, ... | 0 | 1 | 247 |
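A different, easy-to-verify approach that keeps the distribution unbiased is rejection sampling: shuffle uniformly and retry until the pair constraint holds. This is not the chunk or placement scheme from the answers above, just a sketch that satisfies the stated constraint:

```python
import random

def constrained_shuffle(lists, rng=random):
    # Tag each element with the index of the list it came from, then
    # uniformly shuffle and retry until no pair at positions (0,1),
    # (2,3), ... shares an origin list. Rejection sampling keeps the
    # distribution uniform over all valid orderings.
    pool = [(li, x) for li, lst in enumerate(lists) for x in lst]
    while True:
        rng.shuffle(pool)
        if all(pool[i][0] != pool[i + 1][0] for i in range(0, len(pool) - 1, 2)):
            return [x for _, x in pool]

out = constrained_shuffle([[1, 2], [3, 4], [5, 6]])
assert sorted(out) == [1, 2, 3, 4, 5, 6]
```

For many short lists the rejection rate is low; for pathological inputs (one list holding most elements) a constructive method like the answers above scales better.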
0 | 54,049,978 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-04T16:12:00.000 | 0 | 1 | 0 | TimeDistribution Wrapper Fails the Compilation | 54,042,532 | 1.2 | python,tensorflow,video,keras | The problem was that input_shape must be specified outside Conv2D and inside TimeDistributed. Keep in mind it must be 4D '(batch_size, width, height, channels)' | I have an extremely simple cnn which i will be trying to bind to an rnn (but that in the future). For now, all I have is conv2D->maxpool>conv2d->maxpool->dense->dense. The CNN works well, no problems, compiles, runs.
model.add(TimeDistributed(Conv2D(..., input_shape=(32,32,1))))
RuntimeError: You must compile your model... | 0 | 1 | 74 |
0 | 59,746,722 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-05T08:26:00.000 | 0 | 1 | 0 | Invalid Argument error:Load a (frozen) Tensorflow model into memory (While testing the model on local machine) | 54,050,290 | 0 | python,python-3.x,tensorflow,object-detection-api | I had a similar issue. The solution for me was to take my GPU training files from TF1.9 and move them to my local TF1.5 CPU environment (which doesn't support AVX instructions). I then created the frozen model on the local environment from the training files and was successfully able to use it. | I am using the tensorflow object detection API.
I have performed the training on the remote server GPU and saved the frozen model and checkpoints.
After that i took that frozen model along with checkpoints and copied to my local machine and then performed the testing on my test data using the the script "object_detec... | 0 | 1 | 300 |
0 | 54,056,410 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | If you copied your data from your previous computer to this one, you may have copied the python installation (and thereby the libraries you had installed before) in your appdata folder.
Another possibility is that you have installed Anaconda, which is targeted especially at scientific computing, and comes with numpy, scipy ... | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, con...
0 | 54,056,390 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | Python does not ship with these libraries unless you are using a pre-packaged distribution such as Anaconda. | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, con... | 0 | 1 | 379 |
0 | 54,056,384 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | Although this is not the place for these types of questions, yes, there is no need to install libraries, as most of the times when you download Python in a distribution, such as Anaconda, they are also included. | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, con... | 0 | 1 | 379 |
0 | 54,060,749 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-06T09:25:00.000 | 0 | 1 | 0 | Python K-Means clustering and maximum distance | 54,060,208 | 0 | python,scikit-learn,cluster-analysis | Use hierarchical clustering.
With complete linkage.
Finding the true minimum cover is NP hard. So you don't want to do this. But this should produce a fairly good approximation in "just" O(n³).
This is basic knowledge. When looking for a clustering algorithm, at least read the Wikipedia article. Better even some book, ... | I would like to start by saying that my knowledge of clustering techniques is extremely limited, please don’t shoot me down too harshly.
I have a sizable set of 3D points (around 8,000) - think of a X, Y, Z triplets, for which the Z coordinate represents a point in the earth underground (negative). I would like to clus... | 0 | 1 | 1,912 |
0 | 54,090,809 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-01-07T09:06:00.000 | 0 | 2 | 0 | How to fix upload csv file in bigquery using python | 54,071,304 | 0 | python,google-cloud-platform,google-bigquery,google-cloud-storage | Thanks to all for a response.
Here is my solution to this problem:
with open('/path/to/csv/file', 'r') as f:
    text = f.read()
converted_text = text.replace('"', "'")
print(converted_text)
with open('/path/to/csv/file', 'w') as f:
    f.write(converted_text)
CSV table encountered too many errors, giving up. Rows: 5; errors: 1. Please look into the error stream for more details.
In schema , I am using all parameter as string.
In csv file,I have below data:
It's Time. Say "I Do" in my style.
I a... | 0 | 1 | 1,178 |
0 | 54,081,952 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-01-07T20:59:00.000 | 1 | 2 | 0 | np.linalg.qr(A) or scipy.linalg.orth(A) for finding the orthogonal basis (python) | 54,081,800 | 1.2 | python,numpy,matrix,vector | Note that sp.linalg.orth uses the SVD while np.linalg.qr uses a QR factorization. Both factorizations are obtained via wrappers for LAPACK functions.
I don't think there is a strong preference for one over the other. The SVD will be slightly more stable but also a bit slower to compute. In practice I don't think you wi... | If I have a vector space spanned by five vectors v1....v5, to find the orthogonal basis for A where A=[v1,v2...v5] and A is 5Xn
should I use np.linalg.qr(A) or scipy.linalg.orth(A)??
Thanks in advance | 0 | 1 | 4,408 |
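Both routines give orthonormal columns spanning the same space, which you can check directly (random full-rank test matrix; for rank-deficient input, orth would return fewer columns than qr):

```python
import numpy as np
from scipy.linalg import orth

A = np.random.default_rng(0).normal(size=(5, 3))

Q, R = np.linalg.qr(A)  # QR factorisation (LAPACK)
B = orth(A)             # SVD-based orthonormal basis of range(A)

# Both sets of columns are orthonormal.
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(B.T @ B, np.eye(3))
# The projectors onto the column space agree, so the spans match.
assert np.allclose(Q @ Q.T, B @ B.T)
```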
0 | 54,087,388 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-08T07:32:00.000 | 1 | 1 | 0 | In gradient checking, do we add/subtract epsilon (a tiny value) to both theta and constant parameter b? | 54,087,106 | 1.2 | python,neural-network,backpropagation,gradient-descent | You should do it regardless, even for constants. The reason is simple: being constants, you know their gradient is zero, so you still want to check you "compute" it correctly. You can see it as an additional safety net | I've been doing Andrew Ng's DeepLearning AI course (course 2).
For the exercise in gradient checking, he implements a function converting a dictionary containing all of the weights (W) and constants (b) into a single, one-hot encoded vector (of dimensions 47 x 1).
The starter code then iterates through this vector, ad... | 0 | 1 | 222 |
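The core of the check, applied to a bias term exactly as to a weight (a toy two-parameter model, not the course's code):

```python
import numpy as np

# Tiny model: loss(w, b) = mean((w*x + b - y)^2).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

def loss(w, b):
    return np.mean((w * x + b - y) ** 2)

w, b, eps = 1.5, 0.3, 1e-7

# "Backprop" gradient for b, and its centred finite-difference estimate:
analytic_db = np.mean(2 * (w * x + b - y))
numeric_db = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)

assert abs(analytic_db - numeric_db) < 1e-5
```

For a genuinely constant quantity the analytic gradient is zero, and the same perturbation check confirms the numerical estimate is (near) zero too, which is the safety net the answer describes.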
0 | 54,102,609 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-08T17:44:00.000 | 1 | 1 | 0 | TF-IDF + Multiple Regression Prediction Problem | 54,097,067 | 1.2 | python,scikit-learn,nlp,regression,prediction | As you mentioned, you can only do so much with the body of text, which itself indicates how much influence the text has on selling the cars.
Even though the model gives very poor prediction accuracy, you can still go ahead and inspect the feature importances to understand which words drive the sales.
Include phrases in your t... | I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. "accident free, serviced regularly").
I would like to find out whic... | 0 | 1 | 406 |
0 | 54,102,957 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-01-09T03:32:00.000 | 0 | 2 | 0 | No module named... Issue | 54,102,868 | 0 | python | TensorFlow is not supported by Python 3.7; it is only supported by 3.6. Use a virtual environment to deal with multiple Python versions. | Hi, so I'm trying to get started on machine learning by installing tensorflow; however, it's only supported by Python 3.6.x as of now.
I guess you can say this was a failed attempt to downgrade python.
My installed version of python is 3.7.2 which has all my modules installed.
I just installed Python 3.6.8.
The IDE i us... | 0 | 1 | 61 |
0 | 55,400,526 | 0 | 1 | 0 | 0 | 1 | true | 8 | 2019-01-09T16:18:00.000 | 4 | 1 | 0 | Python "See help(type(self)) for accurate signature." | 54,114,270 | 1.2 | python,documentation,docstring | There is a convention that the signature for constructing a class instance is put in the __doc__ on the class (since that is what the user calls) rather than on __init__ (or __new__) which determines that signature. This is especially true for extension types (written in C) whose __init__ cannot have its signature dis... | I have seen the following statement in a number of docstrings when help()ing a class: "See help(type(self)) for accurate signature."
Notably, it is in the help() for scipy.stats.binom.__init__ and for stockfish.Stockfish.__init__ at the very least. I assume, therefore, that it is some sort of stock message.
In any cas... | 0 | 1 | 1,623 |
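A minimal reproduction of the convention (a toy Binom class, not scipy's):

```python
class Binom:
    """Binom(n, p) -- toy frozen-distribution object.

    The construction signature is documented here, on the class,
    which is exactly what help(type(self)) shows the user.
    """

    def __init__(self, *args, **kwds):
        # Generic (*args, **kwds) signature -- introspection cannot
        # recover the real one, hence the stock message
        # "See help(type(self)) for accurate signature."
        self.n, self.p = args

b = Binom(10, 0.5)
assert "Binom(n, p)" in type(b).__doc__
```

Calling `help(Binom)` (i.e. `help(type(b))`) prints the class docstring with the usable signature, which is why the stock message points there.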
0 | 54,129,738 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-01-10T13:02:00.000 | 1 | 2 | 0 | installed pandas but still can't import it | 54,129,321 | 1.2 | python-3.x,pandas | If you're using pycharm you can go to File -> Settings -> Project -> Project Interpreter.
There you'll get a list of all the packages installed with the current python that pycharm is using. There is a '+' sign on the right of the window that you can use to install new packages, just enter pandas there. | I already installed it with pip3 install pandas and using python3.7 but when I try to import pandas and run the code error popping up.
Traceback (most recent call last): File
"/Users/barbie/Python/Test/test.py", line 1, in
import pandas as pd ModuleNotFoundError: No module named 'pandas'
and if I try to i... | 0 | 1 | 6,513 |
0 | 54,131,475 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-01-10T14:05:00.000 | 0 | 2 | 0 | Scipy Weibull parameter confidence intervals | 54,130,419 | 0 | python,scipy,weibull | You could use scipy.optimize.curve_fit to fit the weibull distribution to your data. This will also give you the covariance and thus you can estimate the error of the fitted parameters. | I've been using Matlab to fit data to a Weibull distribution using [paramhat, paramci] = wblfit(data, alpha). This gives the shape and scale parameters for a Weibull distribution as well as the confidence intervals for each value.
I'm trying to use Scipy to accomplish the same task and can easily get the parameters wi... | 0 | 1 | 496 |
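A sketch of the curve_fit route: fit the Weibull CDF to the empirical CDF of synthetic data and read approximate standard errors off the covariance matrix. Note this ECDF-based interval is only a rough analogue of Matlab's wblfit intervals, not statistically equivalent:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

# Synthetic sample with known shape k=1.5 and scale lam=2.0.
data = weibull_min.rvs(1.5, scale=2.0, size=2000, random_state=0)

xs = np.sort(data)
ecdf = np.arange(1, len(xs) + 1) / len(xs)

def weib_cdf(x, k, lam):
    return 1.0 - np.exp(-(x / lam) ** k)

(k, lam), pcov = curve_fit(weib_cdf, xs, ecdf, p0=[1.0, 1.0],
                           bounds=(0, np.inf))
stderr = np.sqrt(np.diag(pcov))                 # std. errors of k and lam
ci_k = (k - 1.96 * stderr[0], k + 1.96 * stderr[0])  # approx. 95% interval

assert abs(k - 1.5) < 0.2 and abs(lam - 2.0) < 0.2
```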
0 | 62,532,493 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-10T14:20:00.000 | 0 | 3 | 0 | Role of activation function in calculating the cost function for artificial neural networks | 54,130,706 | 0 | python,neural-network,activation-function | A cost function is a measure of the error between the value your model predicts and the value it actually is. For example, say we wish to predict the value y_i for data point x_i. Let f_θ(x_i) represent the prediction or output of some arbitrary model for the point x_i with parameters θ. One of many cost function... | I have some difficulty with understanding the role of activation functions and cost functions. Let's take a look at a simple example. Let's say I am building a neural network (artificial neural network). I have 5 „x“ variables and one „y“ variable.
If I do usual feature scaling and then apply, for example, Relu activati... | 0 | 1 | 179 |
0 | 54,130,955 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-10T14:20:00.000 | -1 | 3 | 0 | Role of activation function in calculating the cost function for artificial neural networks | 54,130,706 | -0.066568 | python,neural-network,activation-function | The value you're comparing your actual results to for the cost function doesn't (intrinsically) have anything to do with the input you used to get the output. It doesn't get transformed in any way.
Your expected value is [10,200,3] but you used Softmax on the output layer and RMSE loss? Well, too bad, you're gonna have... | I have some difficulty with understanding the role of activation functions and cost functions. Lets take a look at a simple example. Lets say I am building a neural network (artificial neural network). I have 5 „x“ variables and one „y“ variable.
If I do usual feature scaling and then apply, for example, Relu activati... | 0 | 1 | 179 |
0 | 54,142,820 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T08:09:00.000 | 0 | 2 | 0 | How can I read a file having different column for each rows? | 54,142,589 | 0 | python,jupyter-notebook | Use something like this to split it
split2 = []
split1 = txt.split("\n")
for item in split1:
    split2.append(item.split(" "))
0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598
1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411
2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953
10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425
How can I read this file structure i... | 0 | 1 | 55 |
0 | 69,165,691 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-11T10:43:00.000 | 1 | 3 | 0 | How can I convert unicode to string of a dataframe column? | 54,144,887 | 0.066568 | python,apache-spark,pyspark,pyspark-sql,unicode-string | Since it's a string, you could remove the first and last characters:
From '[23,4,77,890,455]' to '23,4,77,890,455'
Then apply the split() function to generate an array, taking , as the delimiter. | I have a spark dataframe which has a column 'X'.The column contains elements which are in the form:
u'[23,4,77,890,455,................]'
. How can I convert this unicode string to a list? That is, my output should be
[23,4,77,890,455...................]
. I have to apply it to each element in the 'X' column.
I have tried df.w... | 0 | 1 | 8,294 |
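In plain Python the per-element conversion is just stripping the brackets and splitting on commas; a sketch (the PySpark column form is noted in comments, assuming the column is named "X"):

```python
s = u'[23,4,77,890,455]'
values = [int(v) for v in s.strip('[]').split(',')]
assert values == [23, 4, 77, 890, 455]

# Column-wise in PySpark, the same idea (hypothetical dataframe/column):
# from pyspark.sql import functions as F
# df = df.withColumn("X", F.split(F.regexp_replace("X", r"[\[\]]", ""), ","))
```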
0 | 54,145,335 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T11:02:00.000 | 1 | 1 | 0 | How to align training and test set when using pandas `get_dummies` with `drop_first=True`? | 54,145,226 | 0.197375 | python,machine-learning,sklearn-pandas,one-hot-encoding | When not using drop_first=True you have two options:
Perform the one-hot encoding before splitting the data in training and test set. (Or combine the data sets, perform the one-hot encoding, and split the data sets again).
Align the data sets after one-hot encoding: an inner join removes the features that are not pres... | I have a data set from telecom company having lots of categorical features. I used the pandas.get_dummies method to convert them into one hot encoded format with drop_first=True option. Now how can I use the predict function, test input data needs to be encoded in the same way, as the drop_first=True option also droppe... | 0 | 1 | 1,013 |
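A sketch of the align-after-encoding option using reindex (toy data; column names follow from pandas' dummy naming):

```python
import pandas as pd

train = pd.DataFrame({"plan": ["a", "b", "c"]})
test = pd.DataFrame({"plan": ["a", "c"]})  # category 'b' never appears

X_train = pd.get_dummies(train, drop_first=True)  # columns: plan_b, plan_c
X_test = pd.get_dummies(test, drop_first=True)    # columns: plan_c only

# Reindex the test set onto the training columns: missing dummies become
# 0, unseen ones are dropped -- prediction-time inputs now line up.
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)

assert list(X_test.columns) == list(X_train.columns)
```

Fitting the encoding once on the training data (option 1 in the answer) avoids this step entirely, which is why sklearn's OneHotEncoder with `handle_unknown='ignore'` is often preferred in production pipelines.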
0 | 54,777,689 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T13:02:00.000 | 0 | 1 | 1 | Dask.distributed cluster administration | 54,147,096 | 0 | python,dask,dask-distributed | Usually people use a cluster manager like Kubernetes, Yarn, SLURM, SGE, PBS or something else. That system handles user authentication, resource management, and so on. A user then uses the one of the Dask-kubernetes, Dask-yarn, Dask-jobqueue projects to create their own short-lived scheduler and workers on the cluste... | I'm setting up Dask Python cluster at work (30 machines, 8 cores each in average). People use only a portion of their CPU power, so dask-workers will be running on background at low priority. All workers are listening to dask-scheduler on my master node. It works perfect if only I who use it, however it's gonna be used... | 0 | 1 | 101 |
0 | 54,161,780 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-12T01:03:00.000 | 0 | 1 | 0 | How can I train dlib shape predictor using a very large training set | 54,155,910 | 0 | python,computer-vision,face-recognition,dlib | I posted this as an issue on the dlib github and got this response from the author:
It's not reasonable to change the code to cycle back and forth between disk and ram like that. It will make training very slow. You should instead buy more RAM, or use smaller images.
As designed, large training sets need tons of RAM. | I'm trying to use the python dlib.train_shape_predictor function to train using a very large set of images (~50,000).
I've created an xml file containing the necessary data, but it seems like train_shape_predictor loads all the referenced images into RAM before it starts training. This leads to the process getting ter... | 0 | 1 | 738 |
0 | 57,248,756 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-01-12T16:15:00.000 | 2 | 1 | 0 | Python3.6 add audio to cv2 processed video | 54,161,418 | 0.379949 | python-3.x,opencv,audio,video-processing,cv2 | Stephen Meschke is right ! Use FFMPEG to extract and import audio.
Type in cmd:
Extract audio:
ffmpeg -i yourvideo.avi -f mp3 -ab 192000 -vn sound.mp3
Import audio:
ffmpeg -i yourvideo.avi -i sound.mp3 -c copy -map 0:v:0 -map 1:a:0 output.avi | I have a code that takes in a video then constructs a list of frames from that video. then does something with each frame then put the frames back together into cv2 video writer. However, when the video is constructed again, it loses all its audio. | 0 | 1 | 2,044 |
0 | 56,676,328 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-13T01:16:00.000 | 3 | 2 | 0 | What is the Keras 2.0 equivalent of `similarity = keras.layers.merge([target, context], mode='cos', dot_axes=0)` | 54,165,333 | 1.2 | python,tensorflow,keras | I tried with:
similarity = dot([target, context], axes=1, normalize=True) | Keras 2.0 has removed keras.layers.merge, and now we should use keras.layers.Concatenate.
I was wondering what the equivalent is of the 'cos' and 'dot_axes=0' arguments, for example
similarity = keras.layers.merge([target, context], mode='cos', dot_axes=0)
How would I write that in keras 2.0? | 0 | 1 | 787 |
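normalize=True makes the dot layer compute cosine similarity, since both inputs are L2-normalised before the dot product; a numpy check of that equivalence:

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0])
c = np.array([0.5, -1.0, 2.0])

# L2-normalise each vector first, then take the dot product...
cos = np.dot(t / np.linalg.norm(t), c / np.linalg.norm(c))

# ...which equals the textbook cosine-similarity formula.
assert np.isclose(cos, np.dot(t, c) / (np.linalg.norm(t) * np.linalg.norm(c)))
```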
0 | 54,192,306 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-13T06:55:00.000 | 2 | 1 | 0 | Make the data from the second column stay at the second column | 54,166,726 | 1.2 | python,reportlab | If I understand your question correctly, the problem is that you use a spacer to control the contents' visual placement in two columns/frames. By this, you see it as a single long column split in two, whereas you need to see it as two separate columns (two separate frames).
Therefore you will get greater control if you end th... | I'm making a form using reportlab and its in two columns. The second columns is just a copy of the first column.
I used Frame() function to create two columns and I used a Spacer() function to separate the original form from the copied form into two columns.
My expected result is to make the data from the second colu... | 0 | 1 | 44 |
0 | 54,260,769 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T02:41:00.000 | 0 | 1 | 0 | Can we use scipy to do faster LU decomposition for band matrices? | 54,175,192 | 0 | python,scipy,linear-algebra | Lapack's *gbsv routine computes the LU decomp of an input banded matrix.
From python, you can use either its f2py wrapper (see e.g. the source of scipy.linalg.solve_banded for example usage) or drop to Cython and use scipy.linalg.cython_lapack bindings. | We know that elimination requires roughly 1/3 n^3 operations, and if we use LU decomposition stored in memory, it is reduced to n^2 operations. If we have a band matrix with w upper and lower diagonals, we can skip the zeros and bring it down to about nw^2 operations, and if we use LU decomposition, it can be done in a... | 0 | 1 | 407 |
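A sketch using scipy.linalg.solve_banded, which wraps the banded LAPACK solver, on a tridiagonal system (w = 1):

```python
import numpy as np
from scipy.linalg import solve_banded

n = 6
main = 4.0 * np.ones(n)
off = 1.0 * np.ones(n - 1)

# Banded storage: (l + u + 1) x n matrix with diagonals stacked as rows.
ab = np.zeros((3, n))
ab[0, 1:] = off    # superdiagonal
ab[1, :] = main    # main diagonal
ab[2, :-1] = off   # subdiagonal

rhs = np.arange(1.0, n + 1)
x = solve_banded((1, 1), ab, rhs)  # banded LU under the hood, O(n*w^2)

# Verify against the dense system.
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
assert np.allclose(A @ x, rhs)
```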
0 | 54,182,847 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T13:41:00.000 | 1 | 2 | 0 | Convert grayscale png to RGB png image | 54,182,675 | 0.099668 | python,rgb,grayscale,medical,image-preprocessing | GIMP, Menu image -> Mode -> RGB mode | I have a dataset of medical images in grayscale Png format which must be converted to RGB format. Tried many solutions but in vain. | 0 | 1 | 1,046 |
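Programmatically (rather than in GIMP), replicating the single channel three times yields an RGB array; with Pillow, `Image.open(path).convert("RGB")` does the same conversion. A numpy sketch on a synthetic image:

```python
import numpy as np

# Synthetic 64x64 grayscale image standing in for a loaded PNG.
gray = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)

# Stack the channel three times to get an (H, W, 3) RGB array.
rgb = np.stack([gray, gray, gray], axis=-1)

assert rgb.shape == (64, 64, 3)
assert np.array_equal(rgb[..., 0], gray)
```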
0 | 54,188,845 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-14T19:17:00.000 | 1 | 1 | 0 | Python multiprocessing and NLTK wordnet path similarity | 54,187,798 | 0.197375 | python,nltk,python-multiprocessing,pool,wordnet | It is very likely that the module in separate processes attempts to access the very same file with WordNet data. This would result in contention on the GIL when accessing the file, or in the use of OS-level file locks. Both cases would explain the behaviour you are observing. | I am using a multiprocessing pool to speed up the title extraction process on a text corpus. At one stage of the code, I am using the WordNet path similarity module to determine the similarity of two words.
If I run my code sequentially, i.e. without the multiprocessing pool, I get normal times when calculating this pat... | 0 | 1 | 173 |
0 | 54,777,735 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T19:53:00.000 | 1 | 1 | 0 | joblib parallel_backend with dask resources | 54,188,251 | 0.197375 | python,dask,joblib | As of 2019-02-19, there is no way to do this. | Whenever I submit a dask task, I can specify the requisite resources for that task. e.g. client.submit(process, d, resources={'GPU': 1})
However, If I abstract my dask scheduler away as a joblib.parallel_backend, it is not clear how to specify resources when I do so.
How do I call joblib.parallel_backend('dask') and s... | 0 | 1 | 193 |
0 | 54,190,671 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-14T22:49:00.000 | 0 | 2 | 0 | How to decrypt a columnar transposition cipher | 54,190,370 | 1.2 | python,encryption,cryptography | I figured it out. Once you know the number of rows and columns, you can write the ciphertext into the rows, then permute the rows according to the key. Please correct me if my explanation is wrong. The plain text is "execlent work you have cracked the code" | My question is not one of coding per se, but of understanding the algorithm.
Conceptually I understand how the column transposition deciphers text with a constant key value, for example 10.
My confusion occurs, when the key is a permutation. For example key = [2,4,6,8,10,1,3,5,7,9] and a message like "XOV EK HLYR NUCO... | 0 | 1 | 1,074 |
0 | 54,197,292 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-15T06:52:00.000 | 1 | 1 | 0 | Good resources for video processing in Python? | 54,193,968 | 1.2 | python,opencv,video,video-streaming,video-processing | Of course there are alternatives to OpenCV in Python when it comes to video capture, but in my experience none of them performed better. | I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillance streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming.
So far I haven't seen people use anything but opencv in python. Are there any places I should... | 0 | 1 | 51 |
0 | 54,209,309 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2019-01-15T06:54:00.000 | 0 | 4 | 0 | Automate File loading from s3 to snowflake | 54,193,979 | 0 | python,amazon-s3,snowflake-cloud-data-platform | There are some aspects to consider, such as whether it is batch or streaming data, whether you want to retry loading the file in case of wrong data or a wrong format, and whether you want to make it a generic process able to handle different file formats/file types (csv/json) and stages.
In our case we have built a generic s3 to ... | New JSON files are dumped into an S3 bucket daily. I have to create a solution which picks up the latest file when it arrives, parses the JSON, and loads it into the Snowflake data warehouse. Could someone please share your thoughts on how we can achieve this? | 0 | 1 | 1,762 |
0 | 54,207,491 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-15T19:46:00.000 | 1 | 1 | 0 | Save the exact state of Tensorflow model, random state, and Datasets API pointer for debugging | 54,205,857 | 0.197375 | python,tensorflow | From my personal experience I would approach it in the following ways.
Running the code with the -i flag (python -i), which takes you to the interpreter with the state preserved at the moment the script stops, OR (even better) calling the problematic parts of the code from a Jupyter notebook, which will also preserve the state after the e...
I have searched this quite a lot and couldn't find this exact scenario:
I have a tensorflow model which is receiving inputs ... | 0 | 1 | 163 |
0 | 54,207,278 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-15T21:38:00.000 | 0 | 1 | 0 | Tensorflow does not see gpu on pycharm | 54,207,221 | 0 | python-3.x,tensorflow,pycharm | Go to File -> Settings -> Project Interpreter and set the same python environment used by Anaconda. | Specifications:
System: Ubuntu 18.0.4
Tensorflow:1.9.0,
cudnn=7.2.1
Interpreter project: anaconda environment.
When I run the script in the terminal with the same anaconda env, it works fine. Using PyCharm, it does not work! What is the issue? | 0 | 1 | 94 |
0 | 56,453,634 | 0 | 0 | 0 | 0 | 1 | false | 26 | 2019-01-16T03:37:00.000 | 0 | 3 | 0 | pd.read_hdf throws 'cannot set WRITABLE flag to True of this array' | 54,210,073 | 0 | python,pandas,pytables,hdf | It seems that date-time strings were causing the problem: when I converted these from text to numpy (pd.to_datetime()) and stored the table, the problem went away, so perhaps it has something to do with text data? | When running
pd.read_hdf('myfile.h5')
I get the following traceback error:
[[...some longer traceback]]
~/.local/lib/python3.6/site-packages/pandas/io/pytables.py in
read_array(self, key, start, stop) 2487 2488 if
isinstance(node, tables.VLArray):
-> 2489 ret = node[0][start:stop] 2... | 0 | 1 | 15,113 |
0 | 68,094,492 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-01-17T08:50:00.000 | 1 | 1 | 0 | Big Data Load in Pandas Data Frame | 54,232,066 | 0.197375 | python-3.x,oracle,jupyter-notebook,bigdata | pandas is not a good fit if you have GBs of data; it would be better to use a distributed architecture to improve speed and efficiency. There is a library called Dask that can load large data and use a distributed architecture. | As I am new to the Big Data platform, I would like to do some feature engineering work with my data. The database size is about 30-50 GB. Is it possible to load the full data (30-50 GB) into a data frame like a pandas data frame?
The Database used here is Oracle. I tried to load it but I am getting out of memory error. Fu... | 0 | 1 | 224 |
0 | 54,235,046 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2019-01-17T08:51:00.000 | 16 | 1 | 1 | Dask: delayed vs futures and task graph generation | 54,232,080 | 1.2 | python,distributed-computing,dask | 1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how's that number-crunching coming along, and Futures represent more or less exactly that.
2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon... | I have a few basic questions on Dask:
Is it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)?
In that case, i.e. when working with futures, are task graphs still the way to reason about computations. If yes, how do I create them.
How can I generally, i.e. no m... | 0 | 1 | 1,864 |
0 | 54,235,779 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-17T12:10:00.000 | 0 | 2 | 0 | Access dict columns of a csv file in python pandas | 54,235,643 | 1.2 | python,python-3.x,pandas | You can only set the delimiter to one character, so you can't use square brackets in this way. You would need to use a single character such as " so that it knows to ignore the commas between the delimiters. | I have a dataset in a csv file which contains one column as a list (or dict, which further includes several semicolons and commas because of key, value pairs). Now the trouble is accessing it with Pandas: it returns mixed values because it has several commas in the list, which is in fact a single colum... | 0 | 1 | 105 |
0 | 54,241,747 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-01-17T17:59:00.000 | 5 | 1 | 0 | I get an ImportError: No module named 'numpy' when trying to use numpy in PyCarm, but it works fine in the interactive console | 54,241,710 | 1.2 | python,numpy | You probably arent using the same python installation in pycharm and in your console. Did you double-check in project settings ?
If you just want to install numpy, you can create a requirements.txt file and add numpy to it; PyCharm will suggest installing it if it is not already installed.
Alternatively, you could use a venv | I'm already installed numpy and it works in cmd.
my Python version is 3.7.2 and numpy version is 1.16.0
When I use numpy in windows cmd, It works.
import numpy is working well in the python interactive console.
But in pyCharm, it doesn't work and errors with No module named 'numpy'.
How can I solve it? | 0 | 1 | 229 |
0 | 54,247,129 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-18T02:42:00.000 | 0 | 2 | 0 | finding a string after identifying group | 54,247,079 | 0 | python,regex,pandas | re.match(r'(?:TEL)?:? ?([0-9 ]{9,12})', text).group(1)
(?:...) makes it a non-capturing group
([0-9 ]{9,12}) captures that part as group(1) | I am iterating through a few thousand lines of some really messy data from a csv file using pandas. I'm iterating through one of the dataframe columns which contains generally fairly short strings of disparate, concatenated customer information (name, location, customer numbers, telephone numbers, etc).
There's not a ... | 0 | 1 | 51 |
0 | 54,271,206 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-18T21:17:00.000 | 0 | 1 | 0 | Machine learning through R/Python in the Netezza server | 54,261,531 | 0 | python,r,machine-learning,netezza | It is possible to install R and, to my knowledge, all kinds of R packages can be installed. Some of the code will only run on the HOST, but all the basics (like apply and filtering) run on all the SPUs. | Is it possible to run machine learning through R (RStudio) or Python on a Netezza server? More specifically, can I train models and make predictions using the Netezza server? Has anybody been able to install TensorFlow, Keras or Pytorch on the Netezza server for these ML tasks?
I appreciate any feedback whether this is... | 0 | 1 | 159 |
0 | 66,045,559 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-19T00:00:00.000 | -1 | 2 | 0 | Python how to get labels of a generated adjacency matrix from networkx graph? | 54,262,904 | -0.099668 | python-3.x,networkx,adjacency-matrix | If the adjacency matrix is generated without passing a nodelist, then you can call G.nodes to obtain the default node list, which should correspond to the rows of the adjacency matrix. | If I have a networkx graph from a python dataframe and I've generated the adjacency matrix from it.
So basically, how do I get the labels of that adjacency matrix? | 0 | 1 | 910 |
0 | 54,274,980 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-01-19T00:00:00.000 | 0 | 2 | 0 | Python how to get labels of a generated adjacency matrix from networkx graph? | 54,262,904 | 1.2 | python-3.x,networkx,adjacency-matrix | Assuming you refer to nodes' labels, networkx only keeps the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All of a node's attributes except for the index are kept in a dictionary. When generating a graph's adjacency matrix only ... | If I have a networkx graph from a python dataframe and I've generated the adjacency matrix from it.
So basically, how do I get the labels of that adjacency matrix? | 0 | 1 | 910 |
0 | 54,268,086 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-01-19T12:40:00.000 | 0 | 2 | 0 | How to represent bounds of variables in scipy.optimization where bound is function of another variable | 54,267,193 | 0 | python,scipy | You can try it the following way:
for i in range(0, 100):
    for j in range(0, i):
        for k in range(0, j):
            print(k) | I want to solve an LP optimization problem where the upper bounds of a few variables are not integers but functions of another variable. As an example, i, j and k are three variables and the bounds are 0<=i<=100, 0<=j<=i-1 and 0<=k<=j-1. How can we represent such non-integer bounds in the scipy LP solver? | 0 | 1 | 145 |
0 | 54,267,964 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-01-19T12:40:00.000 | 0 | 2 | 0 | How to represent bounds of variables in scipy.optimization where bound is function of another variable | 54,267,193 | 0 | python,scipy | Currently none of scipy's methods allows for applying dynamic bounds. You can make a non-standard extension to scipy.optimize.minimize or fsolve, or implement your own optimiser with dynamic bounds.
Now on whether it is a good idea to do so: NO!
That is because for a well-formulated optimisation problem you want the de... | I want to solve an LP optimization problem where the upper bounds of a few variables are not integers but functions of another variable. As an example, i, j and k are three variables and the bounds are 0<=i<=100, 0<=j<=i-1 and 0<=k<=j-1. How can we represent such non-integer bounds in the scipy LP solver? | 0 | 1 | 145 |
0 | 54,303,968 | 0 | 0 | 0 | 0 | 1 | false | 54 | 2019-01-19T17:33:00.000 | 82 | 3 | 0 | Is there a head and tail method for Numpy array? | 54,269,647 | 1 | python,numpy | For a head-like function you can just slice the array using dataset[:10].
For a tail-like function you can just slice the array using dataset[-10:]. | I loaded a csv file into 'dataset' and tried to execute dataset.head(), but it reports an error. How do I check the head or tail of a numpy array without specifying specific lines? | 0 | 1 | 65,506 |
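The slicing idiom from the last row can be written out as a short runnable sketch. The array contents below are invented purely for illustration; any 2-D ndarray loaded from a CSV would behave the same:

```python
import numpy as np

# Stand-in for a dataset loaded from a CSV file; the values are illustrative only.
dataset = np.arange(30).reshape(15, 2)

head = dataset[:10]    # head-like: first 10 rows
tail = dataset[-10:]   # tail-like: last 10 rows

print(head.shape)  # (10, 2)
print(tail.shape)  # (10, 2)
```

Unlike pandas' DataFrame.head(), basic slicing of a numpy array returns a view rather than a copy, so these operations are essentially free regardless of array size.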
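The telephone-number extraction idea from the regex answer in one of the rows above can be sketched as follows. Note that Python's bounded-repetition syntax is {9,12} (a hyphen as in {9-12} is not valid regex), and re.search is used here instead of re.match so the pattern can match anywhere in the string; the sample string is invented for illustration:

```python
import re

# Hypothetical messy customer record; contents are invented for illustration.
text = "John Smith, Springfield, TEL: 0123 456 789, cust#4411"

# Optional "TEL" label, optional colon and space, then 9-12 digits/spaces.
m = re.search(r'(?:TEL)?:? ?([0-9 ]{9,12})', text)
if m:
    print(m.group(1))  # → 0123 456 789
```

re.match anchors at the start of the string, which is why the one-liner from the answer would return None on a record like this one; re.search scans the entire string for the first match.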