GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 63,844,295 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-08-22T04:46:00.000 | 0 | 1 | 0 | Dash graph selected legend items as input to callback | 57,602,189 | 0 | python,callback,plotly-dash,legend-properties,plotly-python | Use restyleData input in the callback: Input("graph-id", "restyleData") | I have a Dash app with a dcc.Graph object and a legend for multiple deselectable traces. How can I pass the list of traces selected in the legend as an input to a callback? | 0 | 1 | 542 |
0 | 57,614,302 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-22T10:10:00.000 | 0 | 2 | 0 | Specify log-normal family in BAMBI model | 57,606,946 | 0 | python-3.x,bambi | Unless I'm misunderstanding something, I think all you need to do is specify link='log' in the fit() call. If your assumption is correct, the exponentiated linear prediction will be normally distributed, and the default error distribution is gaussian, so I don't think you need to build a custom family for this—the defa... | I'm trying to fit a simple Bayesian regression model to some right-skewed data. Thought I'd try setting family to a log-normal distribution. I'm using pymc3 wrapper BAMBI. Is there a way to build a custom family with a log-normal distribution? | 0 | 1 | 162 |
0 | 57,712,280 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-23T00:00:00.000 | 1 | 1 | 1 | Dask worker seem die but cannot find the worker log to figure out why | 57,618,323 | 0.197375 | python,dask | Worker logs are usually managed by whatever system you use to set up Dask.
Perhaps you used something like Kubernetes or Yarn or SLURM?
These systems all have ways to get logs back.
Unfortunately, once a Dask worker is no longer running, Dask itself has no ability to collect logs for you. You need to use the sys... | I have a piece of Dask code that runs on my local machine and works 90% of the time, but it sometimes gets stuck. By stuck I mean: no crash, no error printout, no CPU usage; it just never ends.
I googled and think it may be because some worker died. It would be very useful if I could see the worker log and figure out why.
But I cannot find my wor... | 0 | 1 | 149 |
0 | 57,627,460 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-23T12:22:00.000 | 0 | 1 | 0 | TFIDF vs Word2Vec | 57,626,276 | 0 | python,machine-learning,data-science,word2vec,tf-idf | In example one, the word2vec model may not have the words "Bills" and "CHAPS" in its vocabulary. That being said, taking out these words the sentences are the same*.
In Example 2, maybe in the tokenization of the word2vec algorithm, it took the "requirements:" as one token and the "requirements" as a different one, Tha... | I am trying to find similarity score between two documents (containing around 15000 records).
I am using two methods in python:
1. TFIDF (Scikit learn) 2. Word2Vec (gensim, google pre-trained vectors)
Example1
Doc1- Click on "Bills" tab
Doc2- Click on "CHAPS" tab
First method gives 0.9 score.
Second method gives 1 scor... | 0 | 1 | 2,587 |
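A quick way to see the behaviour the answer describes is to compare the two sentences under a plain TF-IDF representation; a minimal sketch using scikit-learn (the default vectorizer settings are an assumption, not the asker's exact pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# "Bills" and "CHAPS" are distinct tokens to TF-IDF, so the similarity
# is high (shared surrounding words) but strictly below 1.
docs = ['Click on "Bills" tab', 'Click on "CHAPS" tab']
X = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(X[0], X[1])[0, 0]
print(sim)  # a value strictly between 0 and 1
```

A word2vec-based score can instead come out near 1 when the out-of-vocabulary tokens are dropped before averaging.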
0 | 57,636,395 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T07:50:00.000 | 0 | 2 | 0 | How to get a callback when the specified epoch number is over? | 57,636,091 | 0 | python,keras | Actually, the way keras works this is probably not the best way to go, it would be much better to treat this as fine tuning, meaning that you finish the 10 epochs, save the model and then load the model (from another script) and continue training with the lr and data you fancy.
There are several reasons for this.
It i... | I want to fine-tune my model using Keras: when training reaches epoch 10, I want to change my training data and learning rate. So how do I get a callback when the specified epoch number is over? | 0 | 1 | 484 |
0 | 57,638,917 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T11:40:00.000 | 0 | 1 | 0 | OpenCV feature pairs to point cloud | 57,637,608 | 0 | python,opencv | As you say, you have no calibration, so let’s forget about rectification. What you want is the depth of the points, so you can project them into 3D (which then uses just the intrinsic calibration of one camera, mainly the focal length).
Since you have no rectification, you cannot expect exact results, so let’s try to g... | I have some SIFT features in two stereo images, and I'm trying to place them in 3D space. I've found triangulatePoints, which seems to be what I want, however, I'm having trouble with the arguments.
triangulatePoints takes 4 arguments, projMatr1 and projMatr2, which is where my issues start, and projPoints1 and projPoi... | 0 | 1 | 135 |
0 | 57,641,090 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T18:30:00.000 | 1 | 3 | 0 | Best data type (in terms of speed/RAM) for millions of pairs of a single int paired with a batch (2 to 100) of ints | 57,640,595 | 0.066568 | python,numpy | Use numpy. It us the most efficient and you can use it easily with a machine learning model. | I have about 15 million pairs that consist of a single int, paired with a batch of (2 to 100) other ints.
If it makes a difference, the ints themselves range from 0 to 15 million.
I have considered using:
Pandas, storing the batches as python lists
Numpy, where the batch is stored as its own numpy array (since numpy... | 0 | 1 | 71 |
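One compact layout for this kind of data, sketched below under the assumption that plain NumPy is acceptable: concatenate all batches into one flat int32 array and keep a CSR-style offsets array. This avoids the per-object overhead of millions of small Python lists or tiny NumPy arrays:

```python
import numpy as np

# Hypothetical sketch: three keys, each paired with a variable-length
# batch of ints. All batches live in one flat array; batch i is the
# slice flat[offsets[i]:offsets[i + 1]].
keys = np.array([10, 11, 12], dtype=np.int32)
flat = np.array([1, 2, 3, 4, 5, 6], dtype=np.int32)
offsets = np.array([0, 2, 3, 6], dtype=np.int64)

def batch(i):
    return flat[offsets[i]:offsets[i + 1]]

print(batch(0).tolist())  # [1, 2]
print(batch(2).tolist())  # [4, 5, 6]
```

Since the values fit in 0 to 15 million, int32 halves the memory relative to the default int64.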
0 | 67,599,276 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-24T20:59:00.000 | 0 | 2 | 0 | Detecting questions in text | 57,641,504 | 0 | python-3.x,machine-learning,text,nlp,analytics | Please use the NLP methods before processing the sentiment analysis. Use the TFIDF, Word2Vector to create vectors on the given dataset. And them try the sentiment analysis. You may also need glove vector for the conducting analysis. | I have a project where I need to analyze a text to extract some information if the user who post this text need help in something or not, I tried to use sentiment analysis but it didn't work as expected, my idea was to get the negative post and extract the main words in the post and suggest to him some articles about t... | 0 | 1 | 578 |
0 | 57,797,731 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 1 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0.066568 | python,pip,pytorch | I've been in the same situation.
My problem was the Python version, specifically its bitness: the Python I had installed was 32-bit.
You should check which build of Python you installed.
In the Settings app, search for Python, and you will see whether you have the 32-bit or 64-bit build.
After I installed the 64 bit... | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I... | 0 | 1 | 2,971 |
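Since the answers to this question point at Python's bitness (the CPU wheels on that index are 64-bit only), a quick way to check the interpreter from Python itself:

```python
import struct

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(bits)
```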
0 | 57,642,037 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 0 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0 | python,pip,pytorch | It looks like it can't find a version called "1.2.0+cpu" in its list of versions that it can find (0.1.2, 0.1.2.post1, 0.1.2.post2). Try looking for one of those versions on the PyTorch website. | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I... | 0 | 1 | 2,971 |
0 | 57,648,698 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 0 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0 | python,pip,pytorch | So it looks like I can't install PyTorch because I am running 32-bit Python. This may or may not be the problem, but it is the only possible cause of the error that I could see. | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I... | 0 | 1 | 2,971 |
0 | 57,658,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T02:56:00.000 | 0 | 1 | 0 | Can someone explain or summarize the input shape of keras under different type of neural networks? | 57,651,278 | 0 | python,keras | Generally, a CNN takes 4-dimensional input data. Whenever you train a CNN model in Keras it will automatically convert the input data into 4D. If you want to predict using your CNN model, you have to make sure that the data you want to run inference on has the same dimensions as... | I am very new to Keras in Python, and I'm confused about the input shape. I feel that under different neural networks, I need to reshape my data into different shapes.
For example, if I'm building a simple ANN, my train data should be a matrix like [m, n], the m is the numb... | 0 | 1 | 54 |
0 | 57,660,353 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T13:46:00.000 | 0 | 1 | 0 | combined different objects contains plots and data frames (tables) and paragraphs (markdowns) in to single html report | 57,659,156 | 0 | python,html,reporting | Use Jupyter or Zeppelin notebooks. They provide all of the functionality you described and can export to PDF. Reports can even be run/emailed on a predetermined schedule. | I want to generate a single report that integrates different objects, as my analysis includes plots (matplotlib, seaborn and bokeh), pandas data frames (tables) and paragraphs (markdowns), into an HTML report in python | 1 | 1 | 16 |
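Besides notebooks, a plain-Python alternative is to assemble the report yourself; a minimal sketch (the file name and the plot image path are illustrative assumptions) that stitches headings, a DataFrame table via to_html, and a saved plot image into one HTML string:

```python
import os
import tempfile
import pandas as pd

# Build the report as a list of HTML fragments, then join and save.
df = pd.DataFrame({"metric": ["auc", "acc"], "value": [0.91, 0.87]})
parts = [
    "<h1>Analysis report</h1>",
    "<p>Summary of model performance.</p>",
    df.to_html(index=False),          # the table section
    '<img src="plot.png">',           # assumed path to a saved figure
]
report = "\n".join(parts)

out_path = os.path.join(tempfile.mkdtemp(), "report.html")
with open(out_path, "w") as f:
    f.write(report)
print("<table" in report)  # True
```

Interactive bokeh figures can be embedded the same way via their own HTML export.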
0 | 57,661,488 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-26T15:52:00.000 | 1 | 1 | 0 | Does the simple parameters also change during Hyper-parameter tuning | 57,661,188 | 1.2 | python,machine-learning,cross-validation | The short answer is NO, they are not fixed.
That is because hyper-parameters directly influence your simple parameters. For a neural network, the number of hidden layers to use is a hyper-parameter, while the weights and biases in each layer can be called simple parameters. Of course, you can't keep the weights of individual layers constant when th... | During hyper-parameter tuning, are the parameters (weights already learned during model training) also optimized, or are they fixed so that only optimal values for the hyper-parameters are found? Please explain. | 0 | 1 | 34 |
0 | 57,663,266 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-08-26T17:30:00.000 | 2 | 1 | 0 | Is it appropriate to train W2V model on entire corpus? | 57,662,405 | 1.2 | python,machine-learning,nlp,word2vec | The answer to most questions like these in NLP is "try both" :-)
Contamination of test vs train data is not relevant or a problem in generating word vectors. That is a relevant issue in the model you use the vectors with. I found performance to be better with whole corpus vectors in my use cases.
Word vectors improve ... | I have a corpus of free text medical narratives, for which I am going to use for a classification task, right now for about 4200 records.
To begin, I wish to create word embeddings using w2v, but I have a question about a train-test split for this task.
When I train the w2v model, is it appropriate to use all of the ... | 0 | 1 | 1,011 |
0 | 57,681,370 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T22:11:00.000 | 0 | 1 | 0 | PySpark Group and apply UDF row by row operation | 57,665,530 | 0 | python,pyspark | Do you need an aggregation?
df.groupBy("tag").agg({"date":"min"})
What about that? | I have a dataset that contains 'tag' and 'date'. I need to group the data by 'tag' (this is pretty easy), then within each group count the number of rows whose date is smaller than the date in that specific row. I basically need to loop over the rows after grouping the data. I don't know how to write a UDF w... | 0 | 1 | 62 |
0 | 58,252,317 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-27T08:01:00.000 | 0 | 1 | 0 | Estimator.train() and .predict() are too slow for small data sets | 57,670,160 | 0 | python,tensorflow-estimator | Convert to a tf.keras.Model instead of an Estimator, and use tf.keras.Model.fit() instead of Estimator.train(). fit() doesn't have the fixed delay that train() does. The Keras predict() doesn't either. | I'm trying to implement a DQN which makes many calls to Estimator.train() followed by Estimator.predict() on the same model with a small number of examples each. But each call takes a minimum of a few hundred milliseconds to over a second which is independent of the number of examples for small numbers like 1-20.
I thi... | 0 | 1 | 148 |
0 | 62,469,494 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-27T12:03:00.000 | 0 | 1 | 0 | Poor performance transfer learning ResNet50 | 57,674,274 | 0 | python,tensorflow,keras,deep-learning | I have read a few articles about the same topic - i have 12k jpeg images from 3 classes and after 3 epochs the accuracy dropped to 0. I am awaiting delivery of a new graphics card to improve performance (it's currently taking 90 - 120 minutes per epoch) and hope to give more feedback. I am just wondering if the face ... | I have a dataset of 11k images labeled for semantic segmentation. About 8.8k belong to 'group 1' and the rest to 'group 2'
I am trying to simulate what would happen if we lost access to 'group 1' imagery but not a network trained from them.
So I trained ResNet50 on group 1 only. Then used that network as a starting poi... | 0 | 1 | 150 |
0 | 57,680,106 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-08-27T17:48:00.000 | 2 | 1 | 0 | Saving large numpy 2d arrays | 57,679,863 | 1.2 | python,numpy | As pointed out in the comments, 1e6 rows * 4800 columns * 4 bytes per float32 is 18GiB. Writing a float to text takes ~9 bytes of text (estimating 1 for integer, 1 for decimal, 5 for mantissa and 2 for separator), which comes out to 40GiB. This takes a long time to do, since just the conversion to text itself is non-tr... | I have an array with ~1,000,000 rows, each of which is a numpy array of 4,800 float32 numbers.
I need to save this as a csv file, however using numpy.savetxt has been running for 30 minutes and I don't know how much longer it will run for.
Is there a faster method of saving the large array as a csv?
Many thanks,
Josh | 0 | 1 | 266 |
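A small sketch of the alternatives, using a scaled-down stand-in array (file names are arbitrary): pandas' C-based to_csv is typically much faster than np.savetxt for large float arrays, and binary np.save skips the float-to-text conversion entirely if CSV is not strictly required:

```python
import os
import tempfile
import numpy as np
import pandas as pd

# Stand-in for the 1,000,000 x 4800 float32 array.
arr = np.random.rand(1000, 48).astype(np.float32)
tmp = tempfile.mkdtemp()

# CSV via pandas (fast C writer), versus lossless binary .npy.
pd.DataFrame(arr).to_csv(os.path.join(tmp, "out.csv"),
                         index=False, header=False)
np.save(os.path.join(tmp, "out.npy"), arr)

back = np.load(os.path.join(tmp, "out.npy"))
print(back.shape)  # (1000, 48)
```

Chunking the write (one slab of rows at a time) also keeps memory bounded for arrays of the size in the question.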
0 | 59,026,590 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-29T04:06:00.000 | 2 | 2 | 0 | How to implement fixed-point binary support in numpy | 57,702,835 | 0.197375 | python,numpy-ndarray | For anyone interested, this turned out to be too hard to do in Python extended Numpy, or just didn't fit the data model. I ended up writing a separate Python library of types implementing the behaviours I wanted, that use Numpy arrays of integers under the hood for speed.
It works OK and does the strict binary range ca... | I have a homebrew binary fixed-point arithmetic support library and would like to add numpy array support. Specifically I would like to be able to pass around 2D arrays of fixed-point binary numbers and do various operations on them such as addition, subtraction, multiplication, rounding, changing of fixed point format... | 0 | 1 | 2,338 |
0 | 57,712,269 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-08-29T13:17:00.000 | 8 | 1 | 0 | Difference between tf.data.Dataset.repeat() vs iterator.initializer | 57,711,103 | 1.2 | python,tensorflow,repeat | As we know, each epoch in the training process of a model takes in the whole dataset and breaks it into batches. This happens on every epoch.
Suppose, we have a dataset with 100 samples. On every epoch, the 100 samples are broken into 5 batches ( of 20 each ) for feeding them to the model. But, if I have to train the m... | Tensorflow has tf.data.Dataset.repeat(x) that iterates through the data x number of times. It also has iterator.initializer which when iterator.get_next() is exhausted, iterator.initializer can be used to restart the iteration. My question is is there difference when using tf.data.Dataset.repeat(x) technique vs iterato... | 0 | 1 | 1,504 |
0 | 57,732,477 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-29T16:15:00.000 | 0 | 1 | 0 | Spark Arrow, toPandas() and wide transformation | 57,714,161 | 1.2 | python,pandas,apache-spark,apache-arrow | toPandas() takes your spark dataframe object and pulls all partitions on the client driver machine as a pandas dataframe. Any operations on this new object (pandas dataframe) will be running on a single machine with python therefore no wide transformations will be possible because you aren't using spark cluster distri... | What does toPandas() actually do when using arrows optimization?
Is the resulting pandas dataframe safe for wide transformations (that requires data shuffling) on the pandas dataframe eg..merge operations? what about group and aggregate? What kind of performance limitation should I expect?
I am trying to standardize to... | 0 | 1 | 279 |
0 | 66,201,863 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-29T21:30:00.000 | 0 | 1 | 0 | 'Chart' object has no attribute 'configure_facet_cell' | 57,717,942 | 0 | python,bar-chart,configure,facet,altair | As per the comments this was created on an early version of the software and is no longer reproducible on current Altair versions. | I am using Altair package when I use following objects I have following error message.
AttributeError: 'Chart' object has no attribute 'configure_facet_cell'
In order to use attribute above, what should I install or add?
Thank you in advance. | 0 | 1 | 549 |
0 | 57,727,108 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-08-30T08:15:00.000 | 0 | 1 | 0 | Update Tensorboard while keeping Tensorflow old with conda | 57,723,038 | 1.2 | python,tensorflow,conda,tensorboard | as @jdehesa suggested in the comments, it's better to have a different conda environment for pytorch and then install just the tb there
!pip install tb-nightly | I have some legacy Keras/Tensorflow code, which is unstable using latest Tensorflow versions (1.13+). It works just fine with previous versions. However i want to use Pytorch's Tensorboard support which requires it to be 1.14+. I've installed all Tensorflow-related packages to 1.10 and wanted to do just conda install t... | 0 | 1 | 1,279 |
0 | 57,726,399 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-30T11:39:00.000 | 2 | 1 | 0 | Binary cross entropy Vs categorical cross entropy with 2 classes | 57,726,064 | 0.379949 | python,pytorch,cross-entropy | If you are using softmax on top of the two output network you get an output that is mathematically equivalent to using a single output with sigmoid on top.
Do the math and you'll see.
In practice, from my experience, if you look at the raw "logits" of the two outputs net (before softmax) you'll see that one is exactly ... | When considering the problem of classifying an input to one of 2 classes, 99% of the examples I saw used a NN with a single output and sigmoid as their activation followed by a binary cross-entropy loss. Another option that I thought of is having the last layer produce 2 outputs and use a categorical cross-entropy wit... | 0 | 1 | 636 |
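The claimed equivalence is easy to check numerically: a softmax over the logits [0, z] assigns the positive class exactly sigmoid(z). A small NumPy sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(v):
    # Shift by the max for numerical stability.
    e = np.exp(v - np.max(v))
    return e / e.sum()

z = 1.7
# softmax([0, z])[1] = e^z / (1 + e^z) = sigmoid(z)
print(np.isclose(softmax(np.array([0.0, z]))[1], sigmoid(z)))  # True
```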
0 | 57,743,094 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-30T22:08:00.000 | 0 | 1 | 0 | what are the options to implement random search? | 57,733,690 | 0 | python,ray | Generally, you don't need to use ray.tune.suggest.BasicVariantGenerator().
For the other two choices, it's up to what suits your need. tune.randint() is just a thin wrapper around tune.sample_from(lambda spec: np.random.randint(...)). You can do more expressive/conditional searches with the latter, but the former is e... | So i want to implement random search but there is no clear cut example as to how to do this. I am confused between the following methods:
tune.randint()
ray.tune.suggest.BasicVariantGenerator()
tune.sample_from(lambda spec: blah blah np.random.choice())
Can someone please explain how and why these methods are same/di... | 0 | 1 | 60 |
0 | 57,747,241 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-01T14:17:00.000 | 0 | 1 | 0 | How to get the day difference between date-column and maximum date of same column or different column in Python? | 57,746,737 | 0 | python,pandas,datetime64 | Although I was busy with this question for 2 days, I now realize that I made a big mistake. Sorry to everyone.
The reason it could not take the maximum value as a date is shown below.
Existing one: t=data_signups[["date_joined"]].max()
Must-be one: t=data_signups["date_joined"].max()
So it works as below.
dat... | I am setting up a new column as the day difference in Python (on Jupyter notebook).
I computed the day difference between the date column and the current day. I also computed the day difference between the date column and a new date derived from the current day (current day -/+ input days, using the timedelta function).
... | 0 | 1 | 50 |
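A minimal sketch of the corrected approach (column name from the answer, dates invented): taking .max() on the Series with single brackets gives a scalar Timestamp, so the subtraction broadcasts cleanly over the column:

```python
import pandas as pd

df = pd.DataFrame({"date_joined": pd.to_datetime(
    ["2019-01-01", "2019-03-01", "2019-06-15"])})

# Single brackets -> Series -> scalar max; double brackets would give
# a one-column DataFrame and a Series max, which does not broadcast.
max_date = df["date_joined"].max()
df["days_before_max"] = (max_date - df["date_joined"]).dt.days
print(df["days_before_max"].tolist())  # [165, 106, 0]
```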
0 | 57,791,481 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-04T09:45:00.000 | 0 | 1 | 0 | Creating a python algorithm to train a keras model to predict a large sequence of integers | 57,785,752 | 0 | python,tensorflow,machine-learning,keras | One hot encoding increases number of columns according to unique categories in data set. I think you should check the performance of model with just using tokenizer not both. Because most of the time tokenizer alone performs very well. | I'm new to machine learning but I'm trying to apply it to a project I have. I was able to train a model to convert words from one language to another using LSTM layers. Say I use A as input to my model and I get B as output. What I do is:
'original word' -> word embedding -> one-hot encode (A) -> MODEL -> one-hot enc... | 0 | 1 | 201 |
0 | 57,786,810 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-04T10:38:00.000 | 0 | 2 | 0 | Preprocessing Dataset with Large Categorical Variables | 57,786,660 | 0 | python,pandas,machine-learning,data-analysis,preprocessor | You can check whether your categorical variables are suitable for a Spearman rank correlation, which ranks the categorical variables and calculates the correlation coefficient. However, be careful about collinearity between the categorical variables.
I have a dataset with 40 columns and 55,000 rows. Only 8 out of these columns are numerical. The remaining 32 are categorical with string values in each.
Now I wish to do an exploratory data analysis for a predictive... | 0 | 1 | 173 |
0 | 62,684,892 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-09-04T10:38:00.000 | 0 | 2 | 0 | Preprocessing Dataset with Large Categorical Variables | 57,786,660 | 1.2 | python,pandas,machine-learning,data-analysis,preprocessor | You have to check the correlation. There are two scenarios I can think of:
if the target variable is continuous and independent variable is categorical, you can go with Kendall Tau correlation
if both target and independent variable are categorical, you can go with CramersV correlation
There's a package in python wh... | I have tried to find out basic answers for this question, but none on Stack Overflow seems a best fit.
I have a dataset with 40 columns and 55,000 rows. Only 8 out of these columns are numerical. The remaining 32 are categorical with string values in each.
Now I wish to do an exploratory data analysis for a predictive... | 0 | 1 | 173 |
0 | 57,807,337 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2019-09-04T13:56:00.000 | 0 | 2 | 0 | Improving sound latencies for video presentation in python | 57,790,029 | 0 | python,audio,video,latency,psychopy | Ultimately, yes, the issues are the same for audio-visual sync whether or not they are embedded in a movie file. By the time the computer plays them they are simply visual images on a graphics card and an audio stream on a sound card. The streams just happen to be bundled into a single (mp4) file. | I am creating an experiment in python. It includes the presentation of many mp4 videos, that include both image and sound. The sound is timed so that it appears at the exact same time as a certain visual image in the video. For the presentation of videos, I am using psychopy, namely the visual.MovieStim3 function.
Beca... | 0 | 1 | 82 |
0 | 57,849,959 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-04T22:29:00.000 | 1 | 1 | 0 | Pandas Interpolation Method 'Cubic' - spline or polynomial? | 57,796,327 | 1.2 | python-3.x,pandas,interpolation | In interpolation methods, 'polynomial' generally means that you generate a polynomial with the same number of coefficients as you have data points. So, for 10 data points you would get an order 9 polynomial.
'cubic' generally means piecewise 3rd order polynomials. A sliding window of 4 data points is used to generate t... | I am trying to understand interpolation in pandas and I don't seem to understand if the method 'cubic' is a polynomial interpolation of order 3 or a spline. Does anybody know what pandas uses behind that method? | 0 | 1 | 180 |
0 | 60,118,279 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-05T03:55:00.000 | 0 | 1 | 0 | How to take multi-GPU support to the OpenNMT-py (pytorch)? | 57,798,219 | 0 | python-2.7,pytorch,opennmt | Maybe you can check whether your torch and Python versions fit the OpenNMT requirements.
I remember their torch requirement is 1.0 or 1.2 (1.0 is better). You may have to downgrade your torch to an older version. Hope that works. | I used Python 2.7 to run PyTorch with GPU support. I used this command to train the dataset using multiple GPUs.
Can someone please tell me how can I fix this error with PyTorch in OpenNMT-py or is there a way to take pytorch support for multi-GPU using python 2.7?
Here is the command that I tried.
CUDA_VISI... | 0 | 1 | 166 |
0 | 57,807,062 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-05T12:59:00.000 | 0 | 1 | 0 | cannot import name '_pywrap_utils' from 'tensorflow.python' | 57,806,063 | 0 | python,python-3.x,tensorflow,tensorrt | Try pip3 install pywrap and pip3 install tensorflow pywrap utils should be included with tensorflow. If it is not found, that means TF was not installed correctly. | I am working on pose estimation using OpenPose. For that I installed TensorFlow GPU and installed all the requirements including CUDA development kit.
While running the Python script:
C:\Users\abhi\Anaconda3\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py, I encountered the following error:
ImportErr... | 0 | 1 | 1,170 |
0 | 57,814,580 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-05T19:56:00.000 | 1 | 2 | 0 | Time Series model for predicting online student grades? | 57,812,231 | 0.099668 | python,pandas,scikit-learn,time-series,forecasting | Look at the fbprophet module. This can separate a time series into components such as trend, seasonality and noise. The module was originally developed for web traffic.
You can incorporate this into your regression model in a number of ways by constructing additional variables, for example:
Ratio of trend at start of ... | I have a dataset with daily activities for online students (time spent, videos watched etc). based on this data I want to predict if each student will pass or not. Until this point I have been treating it as a classification problem, training a model for each week with the student activity to date and their final outco... | 0 | 1 | 217 |
0 | 58,536,847 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-09-06T16:27:00.000 | 2 | 1 | 0 | Tensorflow train.py throws Windows fatal exception | 57,825,630 | 0.379949 | python,tensorflow,machine-learning,tensorflow-datasets | I decided to share what solved my problem, might help the others. I reinstalled Tensorflow itself in a virtual environment, and upgraded it to version 1.8 (Requires Python 3.6, it is not compatible with higher versions (mine is 3.6.5 in particular)), make sure your PYTHONPATH variable is pointing to the right folder. A... | I've been working with Tensorflow for quite a while now and have had some issues, but they never remained unresolved. Today I wanted to train a new model, and that's when things got interesting. At first, the training stopped after one step without any reason. It happened before, and opening a new cmd window solved it. Not this time though. Af... | 0 | 1 | 3,049 |
0 | 57,835,318 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-06T17:54:00.000 | 1 | 1 | 0 | Calculate camera matrix with KNOWN parameters (Python)? | 57,826,605 | 1.2 | python,opencv,camera,computer-vision,projection-matrix | The projection matrix is simply a 3x4 matrix whose [0:3,0:3] left square is occupied by the product K.dot(R) of the camera intrinsic calibration matrix K and its camera-from-world rotation matrix R, and the last column is K.dot(t), where t is the camera-from-world translation. To clarify, R is the matrix that brings in... | OpenCV provides methods to calibrate a camera. I want to know if it also has a way to simply generate a view projection matrix if and when the parameters are known.
i.e I know the camera position, rotation, up, FOV... and whatever else is needed, then call MagicOpenCVCamera(parameters) and obtain a 4x4 transformation m... | 0 | 1 | 2,898 |
0 | 57,833,508 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-07T11:50:00.000 | 1 | 1 | 0 | LightGBM unexpected behaviour outside of jupyter | 57,833,411 | 0.197375 | python,jupyter-notebook,lightgbm | It can't be a jupyter problem since jupyter is just an interface to communicate with python. The problem could be that you are using a different Python environment and a different version of lgbm... Check import lightgbm as lgb and lgb.__version__ in both jupyter and your python terminal and make sure they are the same (o... | I have this strange bug when I'm using a LightGBM model to calculate some predictions.
I trained a LightGBM model inside of jupyter and dumped it into a file using pickle. This model is used in an external class.
My problem is that when I call my prediction function from this external class outside of jupyter, it always pred... | 0 | 1 | 100 |
0 | 57,839,165 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-08T00:50:00.000 | 2 | 1 | 0 | Elastic Beanstalk won't recognize absolute path to file, returns FileNotFoundError | 57,838,423 | 1.2 | python,flask,amazon-elastic-beanstalk | Problem solved! My process was being carried out by rq workers. The rq workers are running from my local machine and I did not realize that it would be the workers looking for the file path. I figured this out by printing os.getcwd() and noticing that the current working directory was still my local path. So, I threw a... | I am running a Flask application using AWS Elastic Beanstalk. The application deploys successfully, but there is a task in my code where I use pandas read_csv to pull data out of a csv file. The code line is:
form1 = pd.read_csv('/opt/python/current/app/application/model/static2/form1.csv')
When I try to execute that t... | 1 | 1 | 432 |
0 | 57,839,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-08T00:59:00.000 | 1 | 1 | 0 | Is it feasible to perform feature extraction in a Flutter app? | 57,838,465 | 1.2 | python,android,tensorflow,machine-learning,flutter | If you are targeting mobile, check the integration with “native” code. E.g. look for a java/kotlin library that can do the same on android. And a swift/objC one for iOS.
Then, you could wrap that functionality in a platform-specific module. | I am attempting to implement an audio classifier in my mobile app. When training the data, I used melspectrogram extracted from the raw audio. I am using Tensorflow Lite to integrate the model into the app.
The problem is that I need to perform the same feature extraction on the input audio from the mic before passin... | 0 | 1 | 552 |
0 | 57,859,057 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-09T00:45:00.000 | 0 | 1 | 0 | How can I change a column of a df to the index of the df? | 57,846,743 | 0 | python-3.x,date,dataframe,indexing | Using set_index we can make a column the index of the df
df.set_index('Date') | I have a "Date" column in my df and I wish to use it as the index of df
Date values in 'Date' column are in the correct format as per DateTime (yyyy-mm-dd) | 0 | 1 | 40 |
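A minimal sketch of the set_index approach from the answer, with the date strings parsed into real datetimes first (the sample values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2020-01-01", "2020-01-02"],
    "value": [10, 20],
})

# Parse the yyyy-mm-dd strings into datetimes, then promote the column
# to the index. set_index returns a new frame unless inplace=True.
df["Date"] = pd.to_datetime(df["Date"])
df = df.set_index("Date")
```

With a DatetimeIndex in place, date-based selection like `df.loc["2020-01-02"]` works directly.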
0 | 57,863,047 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-09T04:58:00.000 | 0 | 1 | 0 | Getting the parameters from lmfit | 57,848,079 | 0 | python,lmfit | Including a complete, minimal example that shows what you are doing is always a good idea. In addition, your subject is not a good reflection of your question. You have now asked enough questions about lmfit here on SO to know better.
You probably want to use ModelResult.eval() to evaluate the ModelResult (probably y... | I am doing a fit in python with lmfit and after I define my model (i.e. the functio I want to use for the fit) I do out = model.fit(...) and in order to visualize the result I do plt.plot(x, out.best_fit). This works fine, however this computes the value of the function only at the points used for the fit. How can I ap... | 0 | 1 | 114 |
0 | 66,486,493 | 0 | 0 | 0 | 0 | 1 | false | 38 | 2019-09-09T20:26:00.000 | -1 | 2 | 0 | pandas pd.options.display.max_rows not working as expected | 57,860,775 | -0.099668 | python,pandas | min_rows sets the number of rows to be displayed from the top (head) and from the bottom (tail); it will be evenly split, despite putting in an odd number. If you only want a set number of rows to be displayed without reading it all into memory,
another way is to use nrows = 'putnumberhere'.
e.g. results = pd.read_c... | I’m using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to.
However, if pd.options.display.max_rows is set to less than 10 it takes effect and if pd.options.display.max_rows = None then all rows show.
Any idea how I can get a pd.o... | 0 | 1 | 32,532 |
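Since pandas 0.25 there are two interacting options, which matches the behaviour described in the question: `display.max_rows` decides *when* truncation kicks in, while `display.min_rows` decides *how many* rows a truncated repr actually shows (default 10). A small sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": range(100)})

# 100 rows > max_rows, so the repr is truncated; min_rows then controls
# how many rows are shown, split between head and tail.
pd.set_option("display.max_rows", 60)
pd.set_option("display.min_rows", 20)

text = repr(df)
```

Setting `display.min_rows` to `None` makes the truncated display fall back to `max_rows` rows, which restores the pre-0.25 behaviour.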
0 | 57,863,569 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-10T02:59:00.000 | 1 | 3 | 0 | DataFrame each column multiply param then sum | 57,863,464 | 0.066568 | python,pandas,dataframe | I think df * param.to_list() is good. | I have a Dataframe whose columns are ['a','b','c'] and a Series param containing three values which are the params of the Dataframe. The param.index is ['a','b','c']. I want to realize df['a'] * param['a'] + df['b'] * param['b'] + df['c'] * param['c']. Because there are too many columns and params in my code. So is there any concis... | 0 | 1 | 41 |
0 | 57,864,150 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-10T02:59:00.000 | 1 | 3 | 0 | DataFrame each column multiply param then sum | 57,863,464 | 0.066568 | python,pandas,dataframe | df*param is enough, it will auto determine according to the index.
You can change the series indexes to ['b','c','a'] for testing | I have a Dataframe whose columns are ['a','b','c'] and a Series param containing three values which are the params of the Dataframe. The param.index is ['a','b','c']. I want to realize df['a'] * param['a'] + df['b'] * param['b'] + df['c'] * param['c']. Because there are too many columns and params in my code. So is there any concis... | 0 | 1 | 41 |
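The label alignment the answers rely on can be sketched concretely (the values are illustrative): multiplication aligns the DataFrame's column labels against the Series index, so the order of the Series entries does not matter.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
param = pd.Series({"a": 10.0, "b": 1.0, "c": 0.1})

# df * param scales each column by its own parameter via label
# alignment; summing across columns then gives
# df['a']*param['a'] + df['b']*param['b'] + df['c']*param['c'].
result = (df * param).sum(axis=1)
```

This stays concise no matter how many columns and params there are.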
0 | 57,882,320 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-10T10:29:00.000 | 1 | 1 | 0 | Uploading a file from client to server in python bokeh | 57,868,893 | 1.2 | javascript,python,webserver,bokeh,bokehjs | I'm not sure where you are getting your information. The FileInput widget added in Bokeh 1.3.0 can upload any file the user chooses, not just JSON. | We have set up a bokeh server in our institute, which works properly. We also have a python-based code to analyse fMRI data which at the moment uses matplotlib to plot and save. But I want to transfer the code to bokeh server and allow everybody to upload files into the server from the client and when the analysis is d... | 1 | 1 | 184 |
0 | 62,965,796 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-10T16:13:00.000 | 0 | 1 | 0 | How to change location of .flair directory | 57,874,651 | 0 | python,python-3.x | When importing datasets into flair, one can specify a custom path to import from. Copy the flair datasets to a folder you choose on your larger harddrive and then specify that path when loading a dataset.
flair.datasets.WASSA_FEAR(data_folder="E:/flair_datasets/") | I'm currently using flair for sentiment analysis and it's datasets. The datasets for flair are quite large in size and are currently installed on my quite small SSD in my user folder. Is there anyway that I can move the .flair folder from my user folder on my SSD to my other drive without breaking anything.
Thanks in a... | 0 | 1 | 90 |
0 | 57,887,051 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-10T21:47:00.000 | 1 | 1 | 0 | Error with tf.nn.sparse_softmax_cross_entropy_with_logits | 57,878,623 | 0.197375 | python,tensorflow,neural-network,entropy | I don't understand how having a shape [50,1] is not the same as being 1D.
While you can reshape a [50, 1] 2D matrix into a [50] 1D matrix just with a simple squeeze, Tensorflow will never do that automatically.
The only heuristic the tf.nn.sparse_softmax_cross_entropy_with_logits uses to check if the input shape is c... | I am using tf.nn.sparse_softmax_cross_entropy_with_logits and when I pass through the labels and logits I get the following error
tensorflow.python.framework.errors_impl.InvalidArgumentError: labels
must be 1-D, but got shape [50,1]
I don't understand how having a shape [50,1] is not the same as being 1D | 0 | 1 | 108 |
0 | 57,888,733 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-11T00:46:00.000 | 0 | 1 | 0 | Using tensorflow object detection for either or detection | 57,879,708 | 1.2 | python-3.x,tensorflow,object-detection | In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect.
The API adds a background class for everything that is not one of the other classes.
So you simply need to state one class - defect; tiles which are not detected as such are not defective.
So in your tra... | I have used Tensorflow object detection for quite awhile now. I am more of a user, I dont really know how it works. I am wondering is it possible to train it to recognize an object is something and not something? For example, I want to detect cracks on the tiles. Can i use object detection to do so where i show an imag... | 0 | 1 | 92 |
0 | 57,907,511 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-09-12T09:18:00.000 | 4 | 2 | 0 | Interpreting a sigmoid result as probability in neural networks | 57,903,518 | 0.379949 | python,tensorflow,sigmoid | As pointed out by Teja, the short answer is no, however, depending on the loss you use, it may be closer to truth than you may think.
Imagine you try to train your network to differentiate numbers into two arbitrary categories that are beautiful and ugly. Say your input number are either 0 or 1 and 0s have a 0.2 probab... | I've created a neural network with a sigmoid activation function in the last layer, so I get results between 0 and 1. I want to classify things in 2 classes, so I check "is the number > 0.5, then class 1 else class 0". All basic.
However, I would like to say "the probability of it being in class 0 is x and in class 1 i... | 0 | 1 | 1,575 |
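As the answers note, whether these numbers are *calibrated* probabilities depends on the loss and the data; the arithmetic itself, though, is simple. With a single sigmoid output p, the two class probabilities are p and 1 - p, and a lone sigmoid over logit x is exactly a 2-class softmax over the logits (0, x):

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


# Class probabilities from one sigmoid output: they sum to 1 by construction.
p1 = sigmoid(0.8)
p0 = 1.0 - p1

# Equivalence to a 2-class softmax over logits (0, x), which is why
# reading the sigmoid output as a probability is natural.
denom = math.exp(0.0) + math.exp(0.8)
softmax_p1 = math.exp(0.8) / denom
assert abs(softmax_p1 - p1) < 1e-12
```

So "probability of class 1 is x, probability of class 0 is 1 - x" is the standard reading of the sigmoid output.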
0 | 57,905,788 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-12T09:51:00.000 | 0 | 2 | 0 | Is it possible to explain sklearn isolation forest prediction? | 57,904,088 | 1.2 | python,unsupervised-learning,anomaly-detection | You are creating an ensemble of trees, so the path of a given instance will be different for each tree in the ensemble. To detect an anomaly the isolation forest takes the average path length (number of splits to isolate a sample) of all the trees for a given instance and uses this to determine if it is an anomaly (... | I'm using the isolation forest algorithm from sklearn to do some unsupervised anomaly detection.
I need to explain the predictions and I was wondering if there is any way to get the paths that lead to the decision for each sample.
I usually used SHAP or ELI5 but I'd like to do something more custom. So I need the ex... | 0 | 1 | 2,005 |
0 | 57,910,171 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-12T15:26:00.000 | 0 | 2 | 0 | Best practice convert dataframe to dictionary? | 57,909,996 | 0 | python,pandas,dataframe,dictionary | I think easier to plot from pandas than a dict. Try using df.plot(). You can subset your df as required to only plot the information you're interested it. | I'm trying to take x number of columns from an existing df and convert them into a dictionary.
My questions are:
Is the method shown below considered good practice? I think it's repetitive and I'm sure it can be more elegant code.
Should I convert from df to dictionary if my idea is to build a plot? Or is it an unn... | 0 | 1 | 52 |
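For plotting, `df.plot()` on the frame (or a column subset) is usually enough, as the answer says. If a dict really is needed, `to_dict` supports several orientations; the frame below is an illustrative stand-in:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "score": [0.5, 0.7, 0.9]})

# 'list' gives {column -> list of values}; 'records' gives one dict per
# row. Pick the orientation that matches how you will consume the data.
by_column = df.to_dict(orient="list")
by_row = df.to_dict(orient="records")
```

Either form avoids hand-written loops over columns, which is where the repetitive code usually comes from.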
0 | 57,921,590 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-13T09:40:00.000 | 0 | 1 | 0 | Storing/Loading huge numpy array with less memory | 57,921,092 | 0 | python,numpy,ram | I think that you can do a lot of things.
First of all, you can change the format in which the data is stored:
in a file in your secondary memory to be read iteratively (dumping a python object on secondary memory is not efficient. You need to find a better format. For example a text file in which the lines are... | I have a numpy array of shape (20000, 600, 768). I need to store it, so later I could load it back to my code.
The main problem is memory usage when you load it back.
I have just 16GB RAM.
For example, I tried pickle. When it loads it all I almost have no memory left to do anything else. Especially to train the model.
... | 0 | 1 | 947 |
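One concrete way to avoid loading the whole array back into RAM, in the spirit of the answer's "read iteratively" suggestion, is numpy's own `.npy` format with memory mapping (the small array below stands in for the (20000, 600, 768) one):

```python
import os
import tempfile

import numpy as np

# Small stand-in for the big array from the question.
arr = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

path = os.path.join(tempfile.mkdtemp(), "big.npy")
np.save(path, arr)

# mmap_mode='r' maps the file instead of reading it into RAM, so only
# the slices you touch are pulled from disk; the full array never has
# to fit in memory at once.
view = np.load(path, mmap_mode="r")
chunk = view[0]  # loads just this slice lazily
```

Training code can then iterate over slices of `view` batch by batch instead of materialising all 20000 samples at once.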
0 | 57,937,143 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-14T15:55:00.000 | 2 | 2 | 0 | Is it possible to validate a deep learning model by training small data subset? | 57,937,097 | 0.197375 | python,tensorflow,keras,resnet,vgg-net | Short answer: No, because Deep Learning works well on huge amounts of data.
Long answer: No. The problem is that learning only one face could overfit your model on that specific face, without learning features not present in your examples. Because, for example, the model has learned to detect your face thanks to a specific... | I am looking to train a large model (resnet or vgg) for face identification.
Is it a valid strategy to train on a few faces (1..3) to validate a model?
In other words - if a model learns one face well - is it evidence that the model is good for the task?
The point here is that I don't want to spend a week of expensive GPU time... | 0 | 1 | 221 |
0 | 57,948,589 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-15T21:33:00.000 | 0 | 3 | 0 | structured numpy ndarray, how to get values | 57,948,331 | 0 | python,numpy,dictionary,key-value,numpy-ndarray | Just found out that I can use la.tolist() and it returns a dictionary, somehow, when I wanted a list; from there on I was able to solve my problem. | I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1, which should have been straightforward; however my attempts have failed, as I am not familiar with this structure.
How do I return only t... | 0 | 1 | 2,969 |
0 | 57,962,750 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-16T11:58:00.000 | 1 | 1 | 0 | Is there any alternative for pandas.DataFrame function for Python? | 57,956,382 | 1.2 | python-3.x,pandas,numpy,kivy,buildozer | Similar to Pandas.DataFrame.
As a database you likely know SQLite (in Python, see SQLAlchemy and sqlite3).
For raw tables (i.e., pure matrix-like data) there is Numpy (numpy.ndarray); it lacks some database functionality compared to Pandas, but it is fast and you could easily implement what you need. You can find many comparison...
0 | 58,013,231 | 1 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-16T16:45:00.000 | 1 | 1 | 0 | How to create button based chatbot | 57,961,205 | 1.2 | python,networkx,flowchart,rasa | Sure, you can.
You just need each button to point to another intent. Each button should have /intent_value as its payload, which will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method.
Hope that helps. | I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement i need to build button based chatbot which should follow flowchart kind of structure. I don't know how to do that what i thought is to convert the flowchart into graph data structure using networkx but i am not ... | 0 | 1 | 614 |
0 | 57,965,103 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-16T22:04:00.000 | 0 | 2 | 0 | Should a function accept/return values in (row,col) or (col,row) order? | 57,964,943 | 0 | c#,python,c++,conventions | The convention is to address (column, row) like (x, y).
Column refers to the type of value and row indicates the value for the column.
Column is a key and with Row you have the value.
But using (row, column) is fine too.
It depends of what you're doing.
In SQL/Linq, you always refers to Columns to get rows. | Say I have a function that accepts a row and a column as parameters, or returns a tuple of a row and a column as its return value. I know it doesn't actually make a difference, but is there a convention as to whether put the row first or the column first? Coming from math, if I think of the pair as coordinates into the... | 0 | 1 | 75 |
0 | 57,965,076 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-16T22:04:00.000 | 1 | 2 | 0 | Should a function accept/return values in (row,col) or (col,row) order? | 57,964,943 | 0.099668 | c#,python,c++,conventions | You shouldn't think of what is "horizontal" and what is "vertical". You should think of which convention is widely used not to introduce a lot of surprise to the developers who would use your code. The same is true for naming the parameters: use (x, y, z) for coordinates, (i, j) for indexes in matrix and (row, column) ... | Say I have a function that accepts a row and a column as parameters, or returns a tuple of a row and a column as its return value. I know it doesn't actually make a difference, but is there a convention as to whether put the row first or the column first? Coming from math, if I think of the pair as coordinates into the... | 0 | 1 | 75 |
0 | 57,980,615 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-17T07:53:00.000 | 0 | 1 | 0 | Python3, word2vec, How can I get the list of similarity rank about "price" in my model | 57,969,707 | 1.2 | python,gensim,word2vec,similarity,cosine-similarity | If you call wv.most_similar('price', topn=len(wv)), with a topn argument of the full vocabulary count of your model, you'll get back a ranked list of every word's similarity to 'price'.
If you call with topn=0, you'll get the raw similarities with all model words, unsorted (in the order the words appear inside wv.inde... | In gensim's word2vec python, I want to get the list of cosine similarity for "price".
I read the gensim word2vec documentation, but it only describes the most_similar and n_similarity functions.
I want the whole list of similarity between price and all others. | 0 | 1 | 204 |
0 | 57,975,781 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-17T13:31:00.000 | 0 | 1 | 0 | how to deal with high cardinal categorical feature into numeric for predictive machine learning model? | 57,975,387 | 0 | python,machine-learning,data-science,data-cleaning,data-processing | One approach could be to group your categorical levels into smaller buckets using business rules. In your case for the feature area_id you could simply group them based on their geographical location, say all area_ids from a single district (or for that matter any other level of aggregation) will be replaced by a singl... | I have two columns of having high cardinal categorical values, one column(area_id) has 21878 unique values and other has(page_entry) 800 unique values. I am building a predictive ML model to predict the hits on a webpage.
column information:
area_id: all the locations that were visited during the session. (has location... | 0 | 1 | 150 |
0 | 57,978,592 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-17T16:17:00.000 | 0 | 1 | 0 | Why not evaluate over test fit results in RandomizedSearchCV? | 57,978,263 | 0 | python,optimization,hyperparameters,gridsearchcv | When we are training a model we usually divide data into train, validation and test sets. Let's look at the purpose of each set:
Train Set: It is used by the model to learn its parameters. Usually the model reduces its cost on the train set and selects parameters that give minimum cost.
Validation Set: By name, validat... | I'm trying to optimize hyperparameters for classifiers and regression methods in sklearn. And I have a question. Why, when you evaluate the results, do you choose for example the best train accuracy, instead of evaluating this result over the test, and iterating other values with other train accuracies to obtain the best test ac... | 0 | 1 | 266 |
0 | 57,999,219 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-18T18:44:00.000 | 0 | 1 | 0 | how to display plot images outside of jupyter notebook? | 57,999,071 | 0 | python,jupyter-notebook | You can try to run an matplotlib example code with python console or ipython console. They will show you a window with your plot.
Also, you can use Spyder instead of those consoles. It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder. | So, this might be an utterly dumb question, but I have just started working with python and it's data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm instead of Jupyter notebook. Of course, when I run the python code, the c... | 0 | 1 | 427 |
0 | 58,011,558 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-19T11:49:00.000 | 1 | 1 | 0 | How do I feed data into my neural network? | 58,010,363 | 1.2 | python,neural-network,backpropagation | So, you're saying you implemented a neural network on your own?
Well, in this case, basically each neuron on the input layer must be assigned a feature of a certain row; then just iterate through each layer and each neuron in that layer and calculate as instructed.
I'm sure you are familiar with the back-propagati... | I've coded a simple neural network for XOR in python. While there is loads of information online about how to program this, there isn't much on how to feed the data through it. I've tested the change in weights after one cycle for inputs [1,1] to compare my results with my lecture slides and it's 100% the same, so I be... | 0 | 1 | 704 |
0 | 58,028,414 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-09-20T11:50:00.000 | 3 | 2 | 0 | How to fine-tune a keras model with existing plus newer classes? | 58,027,839 | 0.291313 | python,tensorflow,keras,deep-learning,classification | With transfer learning, you can make the trained model classify among the new classes on which you just trained using the features learned from the new dataset and the features learned by the model from the dataset on which it was trained in the first place. Unfortunately, you can not make the model to classify between... | Good day!
I have a celebrity dataset on which I want to fine-tune a keras built-in model. So far, what I have explored and done: we remove the top layers of the original model (or preferably, pass the include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers froze... | 0 | 1 | 1,312 |
0 | 58,042,103 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-20T13:43:00.000 | 2 | 1 | 0 | Nvenc session limit per GPU | 58,029,589 | 1.2 | python,ffmpeg,python-imageio,nvenc | Nvidia limits it to 2 per system, not 2 per GPU. The limitation is in the driver, not the hardware. There have been unofficial drivers posted to GitHub which remove the limitation | I'm using Imageio, the python library that wraps around ffmpeg to do hardware encoding via nvenc. My issue is that I can't get more than 2 sessions to launch (I am using non-quadro GPUs). Even using multiple GPUs. I looked over NVIDIA's support matrix and they state only 2 sessions per gpu, but it seems to be per syste... | 0 | 1 | 967 |
0 | 58,031,057 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-20T14:25:00.000 | 0 | 1 | 0 | Do a vlookup with pandas in python | 58,030,255 | 1.2 | python,pandas,replace,vlookup | results = df2.merge(df1,on="sku", how="outer") | I am struggling with a vlookup in python.
I have two datasets.
The first is called "output_apu_stock1". Here I have quantities and prices that should update the second dataset.
Second is called "Angebote_Master_File".
Now, if I run my code, the new dataset "results" contains only the values that match. Leads to the ... | 0 | 1 | 85 |
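The "only matching rows survive" symptom usually comes from the merge's `how` argument. A sketch with illustrative stand-in frames and column names (`sku` follows the answer's snippet; `qty`/`price` are assumptions):

```python
import pandas as pd

stock = pd.DataFrame({"sku": ["A", "B"], "qty": [5, 0]})
offers = pd.DataFrame({"sku": ["A", "B", "C"], "price": [9.9, 4.5, 2.0]})

# how='inner' (the default) keeps only matching skus, which drops rows.
# how='left' keeps every row of the left frame, like a classic VLOOKUP,
# and how='outer' keeps unmatched rows from both sides, filling NaN.
vlookup = offers.merge(stock, on="sku", how="left")
```

Unmatched rows get NaN in the looked-up columns instead of disappearing, which is usually the VLOOKUP behaviour people expect.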
0 | 58,035,832 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-09-20T21:44:00.000 | 0 | 2 | 0 | Why can't python vectorize map() or list comprehensions | 58,035,479 | 0 | python,parallel-processing,vectorization,python-multiprocessing,simd | If you want vectorization or JIT compilation, use numba, pypy or cython, but be warned: the speed comes at the cost of flexibility.
numba is a python module that will jit compile certain functions for you but it does not support many kinds of input and barfs on some (many) python constructs. It is really fast when it works ... | I don't know that much about vectorization, but I am interested in understanding why a language like python can not provide vectorization on iterables through a library interface, much like it provides threading support. I am aware that many numpy methods are vectorized, but it can be limiting to have to work with nump... | 0 | 1 | 637 |
0 | 58,045,304 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-21T03:59:00.000 | 0 | 1 | 0 | What is difference between the result of using GPU or not? | 58,037,171 | 0 | python-3.x,keras,gpu,conv-neural-network | You probably don't have enough memory to fit all the images in the CPU during training. Using a GPU will only help if it has more memory. If this is happening because you have too many images or they're resolution is too high, you can try using keras' ImageDataGenerator and any of the flow methods to feed your data in ... | I have a CNN with 2 hidden layers. When i use keras on cpu with 8GB RAM, sometimes i have "Memory Error" or sometimes precision class was 0 but some classes at the same time were 1.00. If i use keras on GPU,will it solve my problem? | 0 | 1 | 72 |
0 | 62,098,032 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-09-22T04:50:00.000 | 12 | 2 | 0 | Can someone give a good math/stats explanation as to what the parameter var_smoothing does for GaussianNB in scikit learn? | 58,046,129 | 1.2 | python,machine-learning,scikit-learn,gaussian | A Gaussian curve can serve as a "low pass" filter, allowing only the samples close to its mean to "pass." In the context of Naive Bayes, assuming a Gaussian distribution is essentially giving more weights to the samples closer to the distribution mean. This might or might not be appropriate depending if what you want t... | I am aware of this parameter var_smoothing and how to tune it, but I'd like an explanation from a math/stats aspect that explains what tuning it actually does - I haven't been able to find any good ones online. | 0 | 1 | 4,054 |
0 | 58,234,695 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-22T06:19:00.000 | 1 | 2 | 0 | How to Compare Sentences with an idea of the positions of keywords? | 58,046,570 | 0.099668 | python,nlp,nltk | Semantic Similarity is a bit tricky this way, since even if you use context counts (which would be n-grams > 5) you cannot cope with antonyms (e.g. black and white) well enough. Before using different methods, you could try using a shallow parser or dependency parser for extracting subject-verb or subject-verb-object r... | I want to compare the two sentences. As an example,
sentence1="football is good,cricket is bad"
sentence2="cricket is good,football is bad"
Generally these sentences have no relationship, which means they have different meanings. But when I compare them with python nltk tools it will give 100% similarity. How can I fix this Issue... | 0 | 1 | 566 |
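A rough stdlib-only sketch of one order-sensitive alternative (not the parser-based approach the answer recommends): compare adjacent word pairs instead of bags of words, so swapped keywords lower the score even though the vocabulary is identical.

```python
def bigram_overlap(s1, s2):
    """Order-sensitive similarity: Jaccard overlap of adjacent word
    pairs, so 'football is good' vs 'cricket is good' differ."""
    def bigrams(s):
        words = s.replace(",", " ").split()
        return set(zip(words, words[1:]))
    b1, b2 = bigrams(s1), bigrams(s2)
    return len(b1 & b2) / max(len(b1 | b2), 1)


s1 = "football is good,cricket is bad"
s2 = "cricket is good,football is bad"
```

On these two sentences the score drops below 1.0, whereas a bag-of-words comparison reports 100% similarity.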
0 | 58,048,268 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-22T09:08:00.000 | 1 | 1 | 0 | What's the meaning of the number before the progress bar when tensorflow is training | 58,047,736 | 1.2 | python-3.x,tensorflow,tensor,tensorflow-estimator | 10 and 49 corresponds to the number of batches which your dataset has been divided into in each epoch.
For example, in your train dataset, there are totally 10000 images and your batch size is 64, then there will be totally math.ceil(10000/64) = 157 batches possible in each epoch. | Could anyone tell me what's the meaning of '10' and '49' in the following log of tensorflow?
Much Thanks
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 5.899410247802734 secs
10/10 [==============================] - 23s 2s/step - loss: 2.6726 - acc: 0.1459
49/49 [==================... | 0 | 1 | 173 |
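The "10/10" and "49/49" in the log are batches completed out of total batches per epoch; the total follows directly from the answer's formula:

```python
import math


def batches_per_epoch(num_samples, batch_size):
    """The number on the right of Keras's progress bar, e.g. the 10 in
    '10/10': batches needed to cover the dataset once."""
    return math.ceil(num_samples / batch_size)
```

For instance, 10000 samples with batch size 64 gives 157 batches per epoch, matching the answer's example.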
0 | 58,049,249 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-22T12:12:00.000 | 1 | 2 | 0 | How do i retrain the model without losing the earlier model data with new set of data | 58,049,090 | 0.099668 | python-3.x,tensorflow,keras,deep-learning,face-recognition | With transfer learning you would copy an existing pre-trained model and use it for a different, but similar, dataset from the original one. In your case this would be what you need to do if you want to train the model to recognize your specific 100 people.
If you already did this and you want to add another person to t... | for my current requirement, I'm having a dataset of 10k+ faces from 100 different people from which I have trained a model for recognizing the face(s). The model was trained by getting the 128 vectors from the facenet_keras.h5 model and feeding those vector value to the Dense layer for classifying the faces.
But the is... | 0 | 1 | 675 |
0 | 58,049,979 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-22T13:36:00.000 | 0 | 1 | 0 | Linear models : Given the fact that both the models perform equally well on the test data set, which one would you prefer and why? | 58,049,729 | 0 | python,linear-regression | I'd probably go with the second one, just because the numbers in the second one are rounded more, and if they still do equally well, the extra digits in the first one are unnecessary and just make it look worse.
(As a side note, this question doesn't seem related to programming so you may want to post it in a different... | Consider two linear models:
L1: y = 39.76x + 32.648628
And
L2: y = 43.2x + 19.8
Given the fact that both the models perform equally well on the test data set, which one would you prefer and why? | 0 | 1 | 183 |
0 | 58,056,413 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-23T05:52:00.000 | 1 | 1 | 0 | Check inputs in csv file | 58,056,352 | 0.197375 | python,pandas | Simple approach that can be modified:
Open df using df = pandas.read_csv(<path_to_csv>)
For each column, use df['<column_name>'] = df['<column_name>'].astype(str) (str = string, int = integer, float = float64, ..etc).
You can check column types using df.dtypes | I'm new to python. I have a csv file. I need to check whether the inputs are correct or not. The code should scan through each row.
All columns for a particular row should contain values of same type: Eg:
All columns of second row should contain only string,
All columns of third row should contain only numbers... etc... | 0 | 1 | 59 |
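The answer's `df.dtypes` checks per-column types, but the question asks about per-row homogeneity. A rough sketch of such a row check (the sample CSV and the deliberately simple numeric test are illustrative):

```python
from io import StringIO

import pandas as pd

csv_text = "a,b,c\nx,y,z\n1,2,3\n1,two,3\n"
df = pd.read_csv(StringIO(csv_text), header=None)


def looks_numeric(value):
    # Crude check: allow an optional sign and one decimal point.
    text = str(value).lstrip("-").replace(".", "", 1)
    return text.isdigit()


def row_is_homogeneous(row):
    # True when every cell parses the same way: all numeric or all not.
    flags = [looks_numeric(v) for v in row]
    return all(flags) or not any(flags)


flags = [row_is_homogeneous(row) for _, row in df.iterrows()]
```

Here the last row mixes a number and a word, so it is flagged as inhomogeneous while the others pass.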
0 | 58,059,345 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-09-23T09:20:00.000 | 0 | 1 | 0 | Remove "days 00:00:00"from dataframe | 58,059,278 | 1.2 | python,pandas,dataframe,days | Check this format,
df['date'] = pd.to_timedelta(df['date'], errors='coerce').days
also, check .normalize() function in pandas. | So, I have a pandas dataframe with a lot of variables including start/end date of loans.
I subtract these two in order to get their difference in days.
The result I get is a timedelta, e.g. 349 days 00:00:00.
How can I keep only for example the number 349 from this column? | 0 | 1 | 1,164 |
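A sketch of the `.dt.days` route (the dates below are illustrative, chosen so the first difference is 349 days as in the question):

```python
import pandas as pd

df = pd.DataFrame({
    "start": pd.to_datetime(["2019-01-01", "2019-03-01"]),
    "end": pd.to_datetime(["2019-12-16", "2019-03-31"]),
})

# Subtracting two datetime columns yields a timedelta column; .dt.days
# extracts just the integer day count, dropping 'days 00:00:00'.
df["duration_days"] = (df["end"] - df["start"]).dt.days
```

The resulting column is plain integers, so it can be used directly in arithmetic or filtering.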
0 | 58,080,853 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-23T17:28:00.000 | 0 | 1 | 0 | Tensorflow: OneHot-encoding with variable sized length | 58,067,427 | 0 | python,tensorflow,machine-learning,one-hot-encoding | This is how I solved the issue: The problem was that I was using someone else's code and the depth argument in tf.one_hot was derived from someTensor.get_shape().as_list()[1]. The problem here is that if the shape of someTensor is unknown, the argument is a Python-None which is not a valid argument for tf.one_hot. Howe... | I need to onehot-encode some positions with TensorFlow.
However, the length of the input sequences (and therefore the depth-argument in tf.one_hot) is None as I work with variable sized inputs.
This throws the following error:
"ValueError: Tried to convert 'depth' to a tensor and failed. Error: None values not suppor... | 0 | 1 | 129 |
0 | 58,072,232 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-23T20:00:00.000 | 1 | 3 | 0 | gensim word2vec extremely big and what are the methods to make file size smaller? | 58,069,421 | 1.2 | python,gensim,word2vec | The size of a full Word2Vec model is chiefly determined by the chosen vector-size, and the size of the vocabulary.
So your main options for big savings are to train smaller vectors, or a smaller vocabulary.
Discarding a few hundred stop-words or punctuation-tokens won't make a noticeable dent in the model size.
Disca... | I have a pre-trained word2vec bin file by using skipgram. The file is pretty big (vector dimension of 200 ), over 2GB. I am thinking some methods to make the file size smaller. This bin file contains vectors for punctuation, some stop words. So, I want to know what are the options to decrease the file size for this wor... | 0 | 1 | 2,014 |
0 | 58,070,875 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-23T22:18:00.000 | 0 | 2 | 0 | how to remove duplicates when using pandas concat to combine two dataframe | 58,070,840 | 0 | python,pandas,concat | drop_duplicates() only removes rows that are completely identical.
What you're looking for is pd.merge().
pd.merge(df1, df2, on='id') | I have two data frames.
df1 with columns: id,x1,x2,x3,x4,....xn
df2 with columns: id,y.
df3 =pd.concat([df1,df2],axis=1)
when I use pandas concat to combine them, it became
id,y,id,x1,x2,x3...xn.
there are two id columns here. How can I get rid of one?
I have tried:
df3=pd.concat([df1,df2],axis=1).drop_duplicates().reset_index(d... | 0 | 1 | 1,418 |
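A minimal sketch of the merge the answer suggests; the column names follow the question and the values are illustrative:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2], "x1": [0.1, 0.2], "x2": [7, 8]})
df2 = pd.DataFrame({"id": [1, 2], "y": [1, 0]})

# pd.concat([df1, df2], axis=1) pastes the frames side by side and keeps
# both id columns; merging on 'id' aligns rows by key and keeps one.
df3 = df1.merge(df2, on="id")
```

drop_duplicates() cannot help here because it removes duplicate *rows*, not duplicate *columns*.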
0 | 58,075,290 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | What is the size of your dataset?How many rows are we talking here?
Your dataset is not balanced and so its kind of normal for a simple classification algorithm to predict the 'majority-class' most of the times and give you an accuracy of 90%. Can you collect more data that will have more positive examples in it.
Or, ... | I am new to Machine Learning
I have a dataset which has highly unbalanced classes(dominated by negative class) and contains more than 2K numeric features and the target is [0,1]. I have trained a logistics regression though I am getting an accuracy of 89% but from confusion matrix, it was found the model True positive ... | 0 | 1 | 4,004 |
0 | 58,082,997 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | You can try many different solutions.
If you have quite a lot of data points, for instance 2k 1s and 20k 0s, you can try just dumping those extra 0s and keeping only 2k 0s, then training on that. You can also try using a different set of 2k 0s with the same set of 2k 1s to train multiple models, and make a decision based on multiple ...
I have a dataset which has highly unbalanced classes (dominated by the negative class) and contains more than 2K numeric features, and the target is [0,1]. I have trained a logistic regression, and though I am getting an accuracy of 89%, from the confusion matrix it was found that the model's True positive ... | 0 | 1 | 4,004 |
0 | 58,074,603 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | I'm assuming that your purpose is to obtain a model with good classification accuracy on some test set, regardless of the form of that model.
In that case, if you have access to the computational resources, try Gradient-Boosted Trees. That's an ensemble classifier using multiple decision trees on subsets of your data, ...
I have a dataset which has highly unbalanced classes (dominated by the negative class) and contains more than 2K numeric features, and the target is [0,1]. I have trained a logistic regression, and though I am getting an accuracy of 89%, from the confusion matrix it was found that the model's True positive ... | 0 | 1 | 4,004 |
0 | 58,074,754 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 4 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0.197375 | python,machine-learning,statistics,data-science | There are several ways to do this:
You can change your model and test whether it performs better or not
You can fix a different prediction threshold: here I guess you predict 0 if the output of your regression is <0.5; you could change the 0.5 into 0.25, for example. It would increase your True Positive rate, but of ...
I have a dataset which has highly unbalanced classes (dominated by the negative class) and contains more than 2K numeric features, and the target is [0,1]. I have trained a logistic regression, and though I am getting an accuracy of 89%, from the confusion matrix it was found that the model's True positive ... | 0 | 1 | 4,004 |
0 | 58,078,010 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-09-24T09:39:00.000 | 1 | 3 | 0 | Use numpy structured array instead of dict to save space and keep speed | 58,077,373 | 0.066568 | python,numpy,dictionary,time-complexity,structured-array | NumPy arrays use a contiguous block of memory and can store only one type of object, like int, float, string, or another object, where each item is allocated a fixed number of bytes in memory.
NumPy also provides a set of functions for operations like traversing an array, arithmetic operations, and some string operations on those stored items, which...
I would like to save memory and I cannot affort much of a performance decline.
In my case, the keys are str and the values are int.
Can you give a quick conversion line in case they actually are an alternative?
I also don't mind if you can suggest a different a... | 0 | 1 | 1,173 |
0 | 58,079,908 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-24T11:39:00.000 | 0 | 1 | 0 | How does decision tree recognize the features from a given text dataset? | 58,079,493 | 1.2 | python,machine-learning,scikit-learn,decision-tree,text-processing | The Decision Tree won't recognize from which features the attributes are coming. | I have a binary classification text data in which there are 10 text features.
I use various techniques like Bag of words, TFIDF etc. to convert them to numerical.
I use hstack() to stack all those features together again after processing them.
After converting them to numerical feature, each feature now has large numb... | 0 | 1 | 145 |
0 | 58,088,639 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-24T19:49:00.000 | 0 | 2 | 0 | Create array from image of chessboard | 58,087,263 | 0 | python,opencv,computer-vision | Based on either the edge detector or the red/green square detector, calculate the center coordinates of each square on the game board. For example, average the x-coordinate of the left and right edge of a square to get the x-coordinate of the square's center. Similarly, average the y-coordinate of the top and bottom ... | Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not really a fast-paced game I can just take a picture every few seconds and go from there).
I need to find... | 0 | 1 | 424 |
0 | 58,087,392 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-24T19:49:00.000 | 2 | 2 | 0 | Create array from image of chessboard | 58,087,263 | 0.197375 | python,opencv,computer-vision | Well, first of all remember that chess always starts with the same pieces on the same positions e.g. black knight starts at 8-B which can be [1][7] in your 2D array. If I were you I would start with a 2D array with the begin positions of all the chess pieces.
As to knowing which pieces are where: you do not need to re... | Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not really a fast-paced game I can just take a picture every few seconds and go from there).
I need to find... | 0 | 1 | 424 |
0 | 58,090,124 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-25T00:25:00.000 | 0 | 1 | 0 | Supremum Metric in Python for Knn with Uncertain Data | 58,089,636 | 1.2 | python,knn | I found out that using scipy spatial distance and tweaking the for-loops in standard kNN helps a lot | I'm trying to make a classifier for uncertain data (e.g. ranged data) using Python. In the certain dataset, the list is a 2D array or array of records (containing float numbers for data and a string for labels), whereas in the uncertain dataset the list is a 3D array (containing ranges of float numbers for data and a string for labels)...
0 | 58,096,338 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-25T05:01:00.000 | 0 | 1 | 0 | OpenCV camera calibration - Intrinsic matrix values are off | 58,091,445 | 0 | python-3.x,opencv,camera,camera-calibration | Cx and Cy are the coordinates (in pixels) of the principal point in your image. Usually a good approximation is (image_width/2, image_height/2).
An average reprojection error of 0.08 pixels seems quite good. | I used OpenCV's camera calibration function for calibrating my camera. I captured around 50 images with different angles and patterns near the image borders.
The Cx and Cy values in the intrinsic matrix are around 300 px off. Is that alright? My average reprojection error is around 0.08, though. | 0 | 1 | 476 |
0 | 58,106,916 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-25T13:28:00.000 | 2 | 1 | 0 | Training a model from multiple corpus | 58,099,559 | 1.2 | python,artificial-intelligence,gensim,training-data,fasttext | Adjusting more-generic models with your specific domain training data is often called "fine-tuning".
The gensim implementation of FastText allows an existing model to expand its known-vocabulary via what's seen in new training data (via build_vocab(..., update=True)) and then for further training cycles including that... | Imagine I have a fasttext model that had been trained thanks to the Wikipedia articles (like explained on the official website).
Would it be possible to train it again with another corpus (scientific documents) that could add new / more pertinent links between words, especially for the scientific ones?
To summarize, I... | 0 | 1 | 254 |
0 | 58,140,537 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-26T10:05:00.000 | 0 | 1 | 0 | How to get data from elastic search, if new data came then update it, and again inject it? | 58,114,367 | 0 | python-3.x,pandas,elasticsearch,elasticsearch-py | I'd recommend not worrying about it and just loading everything into Elasticsearch. As long as your _ids are consistent, the existing documents will be overwritten instead of duplicated. So just be sure to specify an _id for each document and you are fine; the bulk helpers in the elasticsearch-py client all support you set...
0 | 58,118,967 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-09-26T12:35:00.000 | 1 | 1 | 0 | Close all variable explorer windows in Spyder | 58,116,958 | 1.2 | python,window,spyder | (Spyder maintainer here) We don't have a command to do that, sorry. | Does anyone know of a quick way to close all open variable explorer windows in Spyder? (i.e. the windows that open when you click on a variable).
In Matlab, you can close all pop-up windows with close all. Does anything like that exist for Spyder? | 0 | 1 | 603 |
0 | 58,117,742 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-26T12:46:00.000 | 0 | 1 | 0 | ParserError: Error tokenizing data. C error | 58,117,142 | 1.2 | python-3.x,csv,dataframe | Did you try save these two .csv files as ANSI? I had problem with .csv when they were saved as UTF-8. | i'm using a script ScriptGlobal.py that will call and execute 2 other scripts script1.py and script2.py exec(open("./script2.py").read()) AND exec(open("./script1.py").read())
The output of my script1 is the creation of a CSV file.
df1.to_csv('file1.csv',index=False)
The output of my script2 is the creation of another cs... | 0 | 1 | 160 |
0 | 58,326,018 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-27T02:10:00.000 | 0 | 2 | 0 | What object types can be used for features in decision trees? Do I need to convert my "object" type to another type? | 58,126,842 | 0 | python,types,scikit-learn,decision-tree | I used one hot encoding to convert my categorical data because the scikit-learn decision tree packages do not support categorical data. | I imported a table using pandas and I was able to set independent variables (features) and my dependent variable (target). Two of my independent variables are "object type" and my others are int64 and float64. Do I need to convert my "object" type features to "class" or another type? How can I handle these in Sci-kit l... | 0 | 1 | 346 |
0 | 58,137,213 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-27T15:05:00.000 | 0 | 2 | 0 | find the row of a matrix with the largest element in column i | 58,137,114 | 0 | python | To my knowledge, there is not a built-in function in Python itself. I would recommend just building the utility yourself, since it's basically just a max over a specified column of the matrix, which isn't hard to implement. | Beginner in Python,
I want to create a method like: max(mat,i)= the row with the maximum value in the column i of matrix mat.
For example, I have a matrix a=[[1,2,3],[4,5,6],[7,8,9]], then the largest value of the i=3 column is 9 and so max(a,3)=[7,8,9].
I'm wondering if there is a built-in function in Python? | 0 | 1 | 43 |
0 | 58,138,420 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-09-27T16:26:00.000 | 0 | 2 | 0 | Change column from Pandas date object to python datetime | 58,138,314 | 1.2 | python,pandas,datetime | type(data_raw['pandas_date']) will always return pandas.core.series.Series, because the object data_raw['pandas_date'] is of type pandas.core.series.Series. What you want is to get the dtype, so you could just do data_raw['pandas_date'].dtype.
data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])
This is ... | I have a dataset with the first column as date in the format: 2011-01-01 and type(data_raw['pandas_date']) gives me pandas.core.series.Series
I want to convert the whole column into datetime objects so I can extract and process the year/month/day from each row as required.
I used pd.to_datetime(data_raw['pandas_date']) and... | 0 | 1 | 384 |
0 | 58,139,262 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-27T17:37:00.000 | 1 | 2 | 0 | scipy ndimage has no attribute filter? | 58,139,189 | 0.099668 | python | Found it!
It should have been b=scipy.ndimage.filters instead of filter | I installed ndimage with sudo pip3 install scipy
then I'm importing it as import scipy.ndimage
then I'm running the following line: b=scipy.ndimage.filter.gaussian_filter(i,sigma=10)
and I get AttributeError: module 'scipy.ndimage' has no attribute 'filter'
Anyone encountered this before? | 0 | 1 | 384 |
0 | 58,364,117 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-30T06:16:00.000 | 1 | 1 | 0 | pin and allocate tensorflow on specific NUMA node | 58,162,375 | 0.197375 | python,tensorflow,numa | no, answer ...
I'm using numactl --cpunodebind=1 --membind=1, which binds execution and memory allocation to NUMA node 1.
The NN models are trained via single-machine multi-GPU data parallelism using Keras' multi_gpu_model.
How can TF be instructed to allocate memory and execute the TF workers (merging weights) only on NUMA node 1? For performance reasons ... | 0 | 1 | 366 |
0 | 58,167,125 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-30T11:20:00.000 | 4 | 2 | 0 | Why does iloc use [] and not ()? | 58,166,876 | 0.379949 | python,pandas | import pandas as pd: here pd is the Python module.
pd.DataFrame(...): if you pay attention to the naming convention, DataFrame is a class here.
df.reindex() is a method called on the instance itself.
df.columns has no brackets because it is an attribute of the object, not a method.
df.iloc is meant to get item by index so to show it's i... | I am relatively new to python and it seems to me (probably because I don't understand) that the syntax is sometimes slightly inconsistent.
Suppose we are working with the pandas package import pandas as pd. Then any method within this package can be accessed by pd.method, i.e. pd.DataFrame(...). Now, there are certain ... | 0 | 1 | 363 |
0 | 58,176,536 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-09-30T23:30:00.000 | 1 | 1 | 0 | replace do not work even with inplace=True | 58,176,409 | 1.2 | python,pandas | Without setting the regex flag to True, replace will look for an exact match.
To get a partial match, just use df.likes = df.likes.replace(' others', '', regex=True). | Replace function failed to work even with inplace=True.
data:
0 245778 others
1 245778 others
2 245778 others
4 245778 others
code:
df.likes=df.likes.astype('str')
df.likes.replace('others','',inplace=True)
Result:
0 245778 others
1 245778 others
2 245778 others
4 ... | 0 | 1 | 43 |