| GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 55,169,600 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2019-03-14T18:08:00.000 | 5 | 3 | 0 | How to make Altair plots responsive | 55,169,344 | 1.2 | python,vega-lite,altair | There is no way to do this. The dimensions of Altair/Vega-Lite charts are pre-determined by the chart specification and data, and cannot be made to scale with the size of the browser window. | Can one make Altair plots fit the screen size, rather than have a pixel-defined width and height? I've read things about autosize "fit", but I am unsure about where to specify these. | 0 | 1 | 2,245 |
0 | 55,172,773 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-14T21:59:00.000 | 0 | 2 | 0 | No module named scipy, spacy, nltk | 55,172,651 | 0 | python,pip,jupyter-notebook | Did you install Anaconda alongside a plain Python? Plain Python doesn't come with these packages; maybe you're running Jupyter with your system Python path instead of Anaconda's. | (base) C:\Users\Kevin>pip install scipy Requirement already satisfied:
scipy in c:\programdata\anaconda3\lib\site-packages (1.1.0)
etc
Suddenly my Jupyter notebook refuses to import several packages. pandas and numpy work, but all the other packages do not (spacy, nltk, scipy, requests).
I tried reinstalling packages, b... | 0 | 1 | 279 |
0 | 55,189,726 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-15T19:42:00.000 | 1 | 1 | 0 | Weird Indexing by Python and Numpy | 55,189,686 | 0.197375 | python,numpy | X[:100] means "slice X from index 0 up to index 100 or the end, whichever comes first", so it can never go out of range.
But X[100] means the single element of X at index 100, and if it doesn't exist it throws an index-out-of-range error. | I have a variable X; it contains a Python list of 10 NumPy 1-D arrays (basically vectors).
If I ask for X[100], it throws an error saying: IndexError: list index out of range
Which makes total sense, but, when I ask for X[:100], it doesn't throw an error and it returns the entire list!
Why is that? | 0 | 1 | 156 |
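A minimal sketch of the slicing behavior described in the answer above, using made-up data:

```python
# X here stands in for the questioner's list of 10 arrays
X = [list(range(5)) for _ in range(10)]

print(len(X[:100]))   # 10 -- a slice is clamped to the list length, never an error
try:
    X[100]            # direct indexing past the end raises
except IndexError as exc:
    print(exc)        # "list index out of range"
```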
0 | 55,225,618 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-03-17T09:02:00.000 | 1 | 2 | 0 | Pandas timestamp to and from json | 55,205,436 | 0.099668 | python,json,pandas,numpy | If I have correctly understood your problem, you are looking for a serialization format that preserves the data types of a dataframe.
The problem is that the interchange formats internally use only a few types: just strings for CSV, strings and numbers for JSON. Of course there are ways to give formatting hints at read time (date fo... | Objects cannot be serialised to JSON and therefore need to be converted or parsed through a custom JsonEncoder class.
pandas Dataframe has a number of methods, like from_records to read json data. Yet when you read that json data back it is returned as int64 instead of timestamp.
There are many ways to skin a cat in pan... | 0 | 1 | 2,822 |
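A sketch of one round-trip that preserves timestamps using pandas' own JSON hooks (the column names are made up):

```python
import pandas as pd

df = pd.DataFrame({"ts": pd.to_datetime(["2019-06-01", "2019-06-02"]), "v": [1, 2]})

payload = df.to_json(date_format="iso")                 # dates as ISO-8601 strings, not epoch ints
restored = pd.read_json(payload, convert_dates=["ts"])  # parse them back to datetime64
print(restored.dtypes)                                  # ts is datetime64[ns] again, not int64
```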
0 | 55,211,330 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-03-17T19:55:00.000 | 5 | 1 | 0 | Mask RCNN uses CPU instead of GPU | 55,211,277 | 1.2 | python,tensorflow,machine-learning,keras | It is either because GPU_COUNT is set to 0 in config.py or because you don't have tensorflow-gpu installed (which is required for TensorFlow to run on a GPU) | I'm using the Mask RCNN library, which is based on TensorFlow, and I can't seem to get it to run on my GPU (1080 Ti). The inference time is 4-5 seconds, during which I see a usage spike on my CPU but not my GPU. Any possible fixes for this? | 0 | 1 | 3,990 |
0 | 55,231,076 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-03-18T19:47:00.000 | 0 | 2 | 0 | Python tensorflow: asyncio or threading | 55,229,009 | 0 | python,tensorflow,websocket,python-asyncio,python-multithreading | I am not an expert in threading/asyncio but maybe it would be easier to spawn an instance of Kafka and have a piece of code that would listen to a Kafka topic? To this topic you would push images or paths to images if you already store them locally. Moreover, using consumer-groups you would get a load balancing like th... | I am implementing a server for recognizing objects in photos using tensorflow-gpu in "semi-real" time. It will listen for new photos on a websocket connection, then enqueue it into a list for the detector run when it is free. Would it be simpler to use asyncio or threading to handle the websocket listener and the rec... | 0 | 1 | 924 |
0 | 61,762,278 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-03-18T19:47:00.000 | 0 | 2 | 0 | Python tensorflow: asyncio or threading | 55,229,009 | 1.2 | python,tensorflow,websocket,python-asyncio,python-multithreading | Ultimately I used asyncio to handle the websocket connection, enqueuing incoming images to a queue. I used threading which had a thread to read the image into RAM, extracted some metadata, and queued it for the object detector. The detector, running in another thread, tagged the images and queued the tags in the datab... | I am implementing a server for recognizing objects in photos using tensorflow-gpu in "semi-real" time. It will listen for new photos on a websocket connection, then enqueue it into a list for the detector run when it is free. Would it be simpler to use asyncio or threading to handle the websocket listener and the rec... | 0 | 1 | 924 |
0 | 55,440,002 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-18T19:56:00.000 | 0 | 1 | 0 | Get this error when trying to run tensorboard? | 55,229,123 | 0 | python,tensorflow | Is your tensorboard 1.13.1?
If so, downgrade it to 1.12.1; that solved the problem for me,
though I couldn't find out the underlying reason. | File "C:\ProgramData\Anaconda3\Scripts\tensorboard-script.py", line 10, in
sys.exit(run_main())
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\main.py", line 57, in run_main
app.run(tensorboard.main, flags_parser=tensorboard.configure)
File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app... | 0 | 1 | 669 |
0 | 55,229,729 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-18T20:10:00.000 | 0 | 1 | 0 | How is quantization applied/simulated in software? | 55,229,311 | 0 | python,c,approximation | In general there are three approaches:
Analysis
Simulation
Testing
To analyze you must, of course, understand the calculation, and be a skilled mathematician.
To simulate you must still understand the calculation since you need to re-write it in the simulation language, but you don't need to be so good at math ;-)
Te... | How is quantization applied/simulated in software in practice? Suppose for example that I'd like to compute how much error in an output of some function I will get if instead of using 16 bit floating point values I were to use 6 bit integer values in the parameters of the function. If it matters for this question, I am... | 0 | 1 | 77 |
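A small simulation sketch in the spirit of the simulation approach: quantize the inputs to 6-bit integer levels and measure the resulting error against a 16-bit float baseline (the function f is a stand-in, not from the question):

```python
import numpy as np

def quantize(x, bits, lo, hi):
    """Snap x onto 2**bits uniformly spaced levels over [lo, hi]."""
    levels = 2 ** bits - 1
    codes = np.round((x - lo) / (hi - lo) * levels)   # integer codes 0..levels
    return codes / levels * (hi - lo) + lo            # dequantized values

def f(a, b):                       # the calculation under study (a stand-in)
    return a * b + np.sin(a)

rng = np.random.default_rng(0)
a, b = rng.uniform(-1, 1, 10_000), rng.uniform(-1, 1, 10_000)

ref = f(a.astype(np.float16), b.astype(np.float16))       # 16-bit float baseline
approx = f(quantize(a, 6, -1, 1), quantize(b, 6, -1, 1))  # 6-bit integer inputs
print("max abs error:", np.max(np.abs(ref - approx)))
```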
0 | 55,230,793 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-18T21:53:00.000 | 1 | 1 | 0 | Is there an alternative to fully loading pre-trained word embeddings in memory? | 55,230,575 | 1.2 | python,machine-learning,memory-management,nlp,word-embedding | What task do you have in mind? If this is a similarity based task, you could simply use the load_word2vec_format method in gensim, this allows you to pass in a limit to the number of vectors loaded. The vectors in something like the Googlenews set are ordered by frequency, this will give you the critical vectors.
This... | I want to use pre-trained word embeddings in my machine learning model. The word embedings file I have is about 4GB. I currently read the entire file into memory in a dictionary and whenever I want to map a word to its vector representation I perform a lookup in that dictionary.
The memory usage is very high and I woul... | 0 | 1 | 237 |
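A sketch of the gensim approach the answer describes; the file name and the limit of 200k are assumptions to adapt:

```python
from gensim.models import KeyedVectors

# Load only the first 200k vectors (the GoogleNews file is ordered by
# frequency), keeping memory bounded instead of reading all ~3M vectors.
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True, limit=200_000
)
print(kv["king"].shape)                 # (300,)
print(kv.most_similar("king", topn=3))  # similarity queries work as usual
```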
0 | 55,238,524 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-18T22:14:00.000 | 0 | 5 | 0 | Python, how to combine integer matrix to a list | 55,230,862 | 0 | python,numpy | using numpy:
list(np.array(a).flatten()) | say I have a matrix : a = [[1,2,3],[4,5,6],[7,8,9]]. How can I combine it to b = [1,2,3,4,5,6,7,8,9]?
Many thanks | 0 | 1 | 61 |
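Both the NumPy route from the answer and a pure-Python alternative, side by side:

```python
import numpy as np
from itertools import chain

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

b = list(np.array(a).flatten())     # via numpy, as in the answer above
c = list(chain.from_iterable(a))    # pure Python, no intermediate ndarray
print(b == c == [1, 2, 3, 4, 5, 6, 7, 8, 9])  # True
```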
0 | 55,241,306 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-19T12:35:00.000 | -1 | 1 | 0 | can we generate loss curve for mlpregressor with lbfgs solver | 55,241,237 | -0.197375 | python,regression | You can plot model.loss_curve_ and you're good to go! | Is it possible to generate a loss curve for MLPRegressor with the lbfgs solver? It has been specified that it can be generated only for the 'adam' solver.
If it can be done, kindly help me in this regard. | 0 | 1 | 286 |
0 | 57,216,578 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-19T12:43:00.000 | 0 | 1 | 0 | WinError 193] %1 is not a valid Win32 application | 55,241,360 | 1.2 | pandas,python-3.7 | Another thing might have happened. VS Code automatically searches for numpy and other packages in predefined OS locations. It might have found a 32-bit version of numpy instead of a 64-bit one.
To fix this, uninstall numpy from all OS locations:
* In the VS Code terminal, type pip uninstall numpy or conda uninst... | I'm using Spyder and trying to import pandas as pd, and it's giving me the following error:
import pandas as pd
Traceback (most recent call last):
File "", line 1, in
import pandas as pd
File "C:\Users\omer qureshi\AppData\Roaming\Python\Python37\site-packages\pandas\__init__.py", line 13, in
__import__(dependency)
... | 0 | 1 | 1,376 |
0 | 55,662,072 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-03-19T16:50:00.000 | 0 | 3 | 0 | Remove default formatting in header when converting pandas DataFrame to excel sheet | 55,246,202 | 0 | python,excel,pandas,dataframe,xlsxwriter | The key explanation is that: pandas writes a df's header with set_cell(). A cell format (in xlsxwriter speak, a "format" is a FormatObject that you have to add to the worksheetObject) can NOT be overridden with set_row(). If you are using set_row() to your header row, it will not work, you have to use set_cell(). | This is something that has been answered and re-answered time and time again because the answer keeps changing with updates to pandas. I tried some of the solutions I found here and elsewhere online and none of them have worked for me on the current version of pandas. Does anyone know the current, March 2019, pandas 0.... | 0 | 1 | 11,224 |
0 | 55,250,331 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-19T21:08:00.000 | 1 | 1 | 0 | Pyomo warm start | 55,250,019 | 0.197375 | python,cplex,pyomo | Make sure that the solution you give to CPLEX is feasible. Otherwise, CPLEX will reject it and start from scratch.
If your solution is feasible, it is possible that CPLEX simply found a better solution than yours, since, after all, that is CPLEX's job, and in my own experience CPLEX is very good at it. Is this a maximi... | I have a MIP to solve with Pyomo and I want to give CPLEX an initial solution.
Googling, I found that I can set some variables of the instance to values and then execute this:
solver.solve(instance, warmstart=True, tee=True)
But when I run CPLEX it seems that it doesn't use the warm start, because for example I pass a so... | 0 | 1 | 1,303 |
0 | 55,257,099 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-20T01:46:00.000 | 0 | 1 | 0 | What are the reasons to use MonitoredTrainingSession vs Estimator in TensorFlow | 55,252,406 | 1.2 | python,tensorflow,machine-learning,tensorflow-estimator | The short answer is that MonitoredTrainingSession allows the user to access the Graph and Session objects and the training loop, while Estimator hides the details of graphs and sessions from the user and, generally, makes it easier to run training, especially with train_and_evaluate, if you need to evaluate periodically.
MonitoredT... | I see many examples with either MonitoredTrainingSession or tf.Estimator as the training framework. However it's not clear why I would use one over the other. Both are configurable with SessionRunHooks. Both integrate with tf.data.Dataset iterators and can feed training/val datasets. I'm not sure what the benefits of o... | 0 | 1 | 283 |
0 | 55,256,209 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-20T04:25:00.000 | 0 | 1 | 0 | Is Feature Scaling recommended for AutoEncoder? | 55,253,587 | 1.2 | python,neural-network,deep-learning,pytorch,autoencoder | With a few exceptions, you should always apply feature scaling in machine learning, especially when working with gradient descent as in your SAE. Scaling your features will ensure a much smoother cost function and thus faster convergence to global (hopefully) minima.
Also worth noting that your much smaller loss afte... | Problem:
The Stacked Auto Encoder is being applied to a dataset with 25K rows and 18 columns, all float values.
SAE is used for feature extraction with encoding & decoding.
When I train the model without feature scaling, the loss is around 50K, even after 200 epochs. But, when scaling is applied the loss is around 3 fr... | 0 | 1 | 535 |
0 | 55,263,805 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-20T11:16:00.000 | 1 | 1 | 0 | Hash computation vs bucket walkthrough | 55,259,486 | 0.197375 | python,algorithm,optimization | You need to benchmark the code in a somewhat realistic scenario.
The reason why it's so hard to say is that you are not just comparing division (by the way, modern compilers avoid divisions with a large number of tricks). On modern CPUs you have large caches so likely the list will fit into L2 or L3 which decreases the r... | I have a nested r-tree like datastructure in Python (list of lists). The key is a large number (about 10 digits). On each level there are about x number of items (eg:10) in the list. Then within each list, it recurses and has x items and so on. The height of the tree is h levels (eg: 5). Each level also has an indicati... | 0 | 1 | 24 |
0 | 55,267,001 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 1 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0.039979 | python,python-3.x,linear-regression | In terms of a function approximating your population, high bias means underfitting and high variance means overfitting. To detect which, partition the dataset into training, cross-validation and test sets.
A low training error but a high cross-validation error means it's overfit.
A high training error means it's underfit.
High Bias: add ... | If we have 4 parameters of X_train, y_train, X_test, and y_test, how can we calculate the bias and variance of a machine learning algorithm like linear regression?
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
0 | 62,696,336 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 0 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0 | python,python-3.x,linear-regression | In real life, we cannot calculate bias & variance. Recap: bias measures how wrong the estimator (which can be any machine learning algorithm) is with respect to varying samples, and similarly variance measures how much the estimator fluctuates around its expected value. To calculate the bias & variance, ...
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
0 | 61,523,772 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 0 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0 | python,python-3.x,linear-regression | Evaluation of Variance:
Variance = np.var(Prediction) # Where Prediction is a vector variable obtained post the
# predict() function of any Classifier.
SSE = np.mean((np.mean(Prediction) - Y)** 2) # Where Y is your dependent variable.
# SSE : S... | If we have 4 parameters of X_train, y_train, X_test, and y_test, how can we calculate the bias and variance of a machine learning algorithm like linear regression?
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
0 | 55,269,360 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-20T19:26:00.000 | 0 | 2 | 0 | Decision Tree Learning | 55,268,748 | 1.2 | python,decision-tree | Lets assume after some splits you are left with two records with 3 features/attributes (last column being the truth label)
1 1 1 2
2 2 2 1
Now you are about to select the next best feature to split on, so you call this method remainder(examples, attribute) as part of selection, which internally calls nk1, p... | I want to implement the decision-tree learning algorithm.
I am pretty new to coding, so I know it's not the best code, but I just want it to work. Unfortunately I get the error: e2 = b(pk2/(pk2 + nk2))
ZeroDivisionError: division by zero
Can someone explain to me what I am doing wrong? | 0 | 1 | 104 |
0 | 55,297,814 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-21T14:22:00.000 | 0 | 1 | 0 | Training with mixed-precision using the tensorflow estimator api | 55,282,603 | 0 | python,tensorflow,tensorflow-estimator | I found the issue: I used tf.get_variable to store the learning rate. This variable has no gradient. Normal optimizers do not care, but tf.contrib.mixed_precision.LossScaleOptimizer crashes. Therefore, make sure these variables are not added to tf.GraphKeys.TRAINABLE_VARIABLES. | Does anyone have experience with mixed-precision training using the TensorFlow Estimator API?
I tried casting my inputs to tf.float16 and the results of the network back to tf.float32. For scaling the loss I used tf.contrib.mixed_precision.LossScaleOptimizer.
The error messages I get are relatively uninformative: "Trie... | 0 | 1 | 293 |
0 | 55,283,054 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-21T14:35:00.000 | 0 | 1 | 0 | How to prevent Dataframe.to_dict() to generate timestamps | 55,282,891 | 0 | python,dataframe,dictionary,timestamp | Simply convert those columns to object type with .astype(str) before calling to_dict(). | I am trying to use the pandas DataFrame to_dict() method without generating timestamps.
My problem: I have a dataframe with cells containing dates such as this: "2019-06-01". When I call the dataframe method "to_dict()" to generate a dictionary, it converts the date value into something like: "Timestamp('2019-06-01 00:... | 0 | 1 | 252 |
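A sketch of the conversion the answer suggests, plus a strftime variant for explicit control over the output format:

```python
import pandas as pd

df = pd.DataFrame({"when": pd.to_datetime(["2019-06-01"]), "v": [1]})
print(df.to_dict())        # date values come out as Timestamp objects

plain = df.assign(when=df["when"].astype(str))     # the answer's approach
print(plain.to_dict())     # the dates are now plain strings

explicit = df.assign(when=df["when"].dt.strftime("%Y-%m-%d"))
print(explicit.to_dict())  # {'when': {0: '2019-06-01'}, 'v': {0: 1}}
```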
0 | 55,286,759 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-21T17:23:00.000 | 2 | 1 | 0 | Handling Categorical Data with Many Values in sklearn | 55,285,986 | 1.2 | python,pandas,scikit-learn,categorical-data | org_id does not seem to be a feature that brings any info for the classification, you should drop this value and not pass it into the classifier.
In a classifier you only want to pass features that are discriminative for the task that you are trying to perform: here the elements that can impact the retention or churn. ... | I am trying to predict customer retention with a variety of features.
One of these is org_id which represents the organization the customer belongs to. It is currently a float column with numbers ranging from 0.0 to 416.0 and 417 unique values.
I am wondering what the best way of preprocessing this column is before fe... | 0 | 1 | 257 |
0 | 55,295,961 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-21T20:01:00.000 | 0 | 1 | 0 | Python: Iterate through every pixel in an image for image recognition | 55,288,421 | 0 | python-3.x,algorithm,image-recognition | Comparing every pixel with a "pattern" can be done with convolution. You should take a look at the Haar cascade algorithm. | I'm a newbie in image processing and Python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example "[28, 78, 72]") and creates some kind of values through comparing it to every ot... | 0 | 1 | 63 |
0 | 55,289,028 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-21T20:38:00.000 | 1 | 1 | 0 | numpy.savetxt() rounding values | 55,288,883 | 0.197375 | python,save | You can set the precision by changing the fmt parameter. For example np.savetxt('tmp.txt', a, fmt='%1.3f') writes each value with three decimal places. | I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Anyone have any clue how to change this? | 0 | 1 | 789 |
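A short sketch of the fmt parameter in action:

```python
import numpy as np

a = np.array([1.23456789, 2.98765432])
np.savetxt("three_places.txt", a, fmt="%1.3f")  # 1.235, 2.988
np.savetxt("full.txt", a, fmt="%.10e")          # scientific, 10 decimal digits
```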
0 | 55,294,058 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-22T03:06:00.000 | 0 | 1 | 0 | Training SVM in Python with pictures | 55,292,341 | 0 | python,svm | As far as I understood, you want to train your SVM to classify these images into the classes named a, b, c, d. For that you can use any good image processing technique to extract features from your images (such as HOG, which is nicely implemented in OpenCV) and then use these features, and the label, as the input to yo... | I have basic knowledge of SVM, but now I am working with images. I have images in 5 folders, each folder, for example, has images for letters a, b, c, d, e. The folder 'a' has images of handwriting letters for 'a', folder 'b' has images of handwriting letters for 'b' and so on.
Now how can I use the images as my train... | 0 | 1 | 72 |
0 | 55,298,046 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-22T10:44:00.000 | 1 | 1 | 0 | Adding numpy arrays of different shape | 55,297,902 | 0.197375 | python,numpy | With .squeeze() you can convert an (n,1) array into an (n,) vector; then the addition works as expected. | I would like to add two vectors, one of which is (n,1) and the other (n,), such that the result is of type (n,).
Just adding them with + gives the type (n,1).
What is the function to convert it to a vector (same type as np.zeros(n))?
Or to compute the sum directly into this format? | 0 | 1 | 65 |
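A sketch of the squeeze fix, and of the broadcasting surprise that can happen without it:

```python
import numpy as np

n = 4
col = np.ones((n, 1))          # shape (n, 1)
vec = np.zeros(n)              # shape (n,)

oops = col + vec               # broadcasting gives shape (n, n)!
good = col.squeeze() + vec     # shape (n,), same type as np.zeros(n)
print(oops.shape, good.shape)  # (4, 4) (4,)
```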
0 | 55,310,781 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-22T11:09:00.000 | 0 | 1 | 0 | Can we find the required string in an image using CNN/LSTM? Or do we need to apply NLP after extracting text using CNN/LSTM? | 55,298,360 | 0 | deep-learning,lstm,python-tesseract | NLP is used to allow the network to try and "understand" text. I think what you want here is to see if a picture contains text. For this, NLP would not be required, since you are not trying to get the network to analyze or understand the text. Instead, this should be more of an object detection type problem.
There are ... | I'm building a parsing algorithm for images. Tesseract is not giving good accuracy, so I'm thinking of building a CNN+LSTM-based model for image-to-text conversion. Is my approach the right one? Can we extract only the required string directly from the CNN+LSTM model instead of using NLP? Or do you see any other ways to improve Tesseract's accur... | 0 | 1 | 39 |
0 | 55,303,353 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2019-03-22T13:48:00.000 | 1 | 1 | 0 | opencv_createsamples command is not recognized in pycharm | 55,301,045 | 0.197375 | python,opencv,pycharm | I gave this a quick try and got the same message. I also use PyCharm. Are you sure you're using the right version of OpenCV? I'm on 2.4, which is quite old. Maybe this is a method that's been added in a later version. If you can import cv2 it shouldn't be the pythonpath. | I just started fiddling with OpenCV and Python using PyCharm. I followed a tutorial on how to create a Haar Cascade file, but when I reached the step where I had to use the 'opencv_createsamples' command, it returned:
"is not recognized as an internal or external command"
I searched for a solution. Most of them said to a... | 0 | 1 | 1,017 |
0 | 67,337,028 | 0 | 0 | 0 | 0 | 2 | false | 203 | 2019-03-23T12:14:00.000 | 2 | 14 | 0 | ImportError: libGL.so.1: cannot open shared object file: No such file or directory | 55,313,610 | 0.028564 | python,ubuntu-14.04 | Had the same issue on CentOS 8 after using pip3 install opencv on a non-GUI server lacking all sorts of graphics libraries.
dnf install opencv
pulls in all needed dependencies. | I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this ... | 0 | 1 | 205,048 |
0 | 71,321,056 | 0 | 0 | 0 | 0 | 2 | false | 203 | 2019-03-23T12:14:00.000 | 0 | 14 | 0 | ImportError: libGL.so.1: cannot open shared object file: No such file or directory | 55,313,610 | 0 | python,ubuntu-14.04 | For me, the problem was related to proxy setting. For pypi, I was using nexus mirror to pypi, for opencv nothing worked. Until I connected to a different network. | I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this ... | 0 | 1 | 205,048 |
0 | 55,328,727 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-24T21:05:00.000 | 0 | 1 | 0 | Pandas Data Frame Data Type Recognized as object and not Numeric | 55,328,568 | 0 | python | After digging around some more, I found a better way to debug the data:
pd.to_numeric(model_data['Value2SPY'])
did the trick, because when it bombed out it told me the offending line item:
ValueError: Unable to parse string "#DIV/0!" at position 241396
The code I was using before, "if not isinstance(val, int):", just was a b... | I looked at the data and it seemed numeric. I wrote a little loop and it displays values like 84 as not int, or 214.56 as not float. It just seems broken. Do pandas DataFrames just have a randomness to them?
My data set has this shape:
(622380, 45)
When I isolate the column it still has a problem. But when I shorten ... | 0 | 1 | 349 |
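Beyond locating the bad row, pd.to_numeric can also repair the column in one pass; a sketch with made-up data:

```python
import pandas as pd

s = pd.Series(["84", "214.56", "#DIV/0!"])   # spreadsheet-export residue

# errors='coerce' turns unparseable strings into NaN instead of raising,
# so the column becomes numeric and the bad rows are easy to find.
clean = pd.to_numeric(s, errors="coerce")
print(clean.dtype)        # float64
print(s[clean.isna()])    # the offending values, e.g. '#DIV/0!'
```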
0 | 55,332,549 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-25T03:15:00.000 | 0 | 1 | 0 | how to drop multiple (~5000) columns in the pandas dataframe? | 55,330,844 | 0 | python-3.x,pandas,dataframe | Let us assume your DataFrame is named as df and you have a list cols of column indices you want to retain. Then you should use:
df1 = df.iloc[:, cols]
This statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame. | I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the columns names (that I wanna keep) in a dataframe as well, with the names as the row index. Is there any way to do this? | 0 | 1 | 84 |
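A self-contained sketch covering both the label-based selection the question implies and the positional form from the answer (the names are made up):

```python
import pandas as pd

df = pd.DataFrame(1, index=range(3), columns=[f"c{i}" for i in range(10)])
names_df = pd.DataFrame(index=["c1", "c4", "c7"])  # wanted names as row index

keep = names_df.index.tolist()
print(df[keep].shape)                              # (3, 3) -- by label

cols = [df.columns.get_loc(c) for c in keep]       # positions for .iloc
print(df.iloc[:, cols].shape)                      # (3, 3) -- by position
```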
0 | 72,260,042 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2019-03-25T13:31:00.000 | 0 | 4 | 0 | OSError: [E050] Can't find model 'fr_core_web_md'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory | 55,338,972 | 0 | python,spacy | First install the package, then load it (not vice versa).
First: !python3 -m spacy download fr_core_news_md
Then: nlp = spacy.load("fr_core_news_md")
OSError: [E050] Can't find model 'fr_core_news_md'. It doesn't seem to
be a shortcut link, a Python package or a valid path to a data
directory."
Despite the us... | 1 | 1 | 7,779 |
0 | 69,374,319 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2019-03-25T13:31:00.000 | 1 | 4 | 0 | OSError: [E050] Can't find model 'fr_core_web_md'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory | 55,338,972 | 0.049958 | python,spacy | It is worth mentioning the bug I had recently.
I had installed fr_core_news_md, and then I tried to load fr_core_news_sm.
It was around 2:00 AM and I wasn't able to spot the mismatch.
I slept, came back in the morning, and found the solution.
OSError: [E050] Can't find model 'fr_core_news_md'. It doesn't seem to
be a shortcut link, a Python package or a valid path to a data
directory."
Despite the us... | 1 | 1 | 7,779 |
0 | 55,343,775 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-25T17:42:00.000 | 0 | 1 | 0 | Count Name list (mixed with Number) | 55,343,651 | 0 | python | Courtesy of @JohnGordon:
Use if val == 'Jacob Lee' or val.startswith('Jacob Lee ') or val == '30220' or val.startswith('30220 '): | I'm trying to count customer names in my data.
For example, if there are ["Jacob Lee", "Jacob Lee 30220", "30220"] in the column, I want to count these cases as the same person, because 30220 is Jacob Lee's account number.
I'm not sure how to code this function.
FYI: I'm using python 3. | 0 | 1 | 53 |
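Applied to a pandas column, the suggested test counts all spellings of the same customer; a sketch with the question's sample values:

```python
import pandas as pd

names = pd.Series(["Jacob Lee", "Jacob Lee 30220", "30220", "Jane Doe"])

def is_jacob(val):
    # exact name/account number, or either followed by more text
    return (val == "Jacob Lee" or val.startswith("Jacob Lee ")
            or val == "30220" or val.startswith("30220 "))

print(names.apply(is_jacob).sum())   # 3 -- all three spellings, one person
```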
0 | 55,377,993 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-26T13:57:00.000 | 0 | 1 | 0 | Update trained object detection Models to correspond to TF updates | 55,358,952 | 1.2 | python,tensorflow,object-detection-api | After doing some looking, the graph has to be updated. Since I did not still have the training checkpoints, I was successful in updating the graph by exporting from the previously frozen graph as the checkpoint.
python3 export_inference_graph.py --input_type image_tensor --pipeline_config_path FROZENGRAPHDIRECTORY/pi... | I am transitioning to new version of TF for stability reasons (I was using a nightly docker build on Ubuntu 18.04 from before mainline switched to CUDA 10). When I attempt to run my models in the new version I get the following error, which I assume to mean that there is an incompatibility with the models trained on t... | 0 | 1 | 1,299 |
0 | 58,124,093 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-26T16:48:00.000 | 0 | 1 | 0 | Highest Polarity Score (Sentiment Analysis) using the TextBlob library | 55,362,335 | 0 | python,textblob | You can use .sentiment_assessments to get some more idea of how your sentence is being evaluated.
Sentiment(polarity=0.6, subjectivity=0.6000000000000001, assessments=[(['really', 'really', 'really', 'love'], 0.5, 0.6, None), (['good'], 0.7, 0.6000000000000001, None)]) | I've started to use the TextBlob library; for sentiment analysis.
I have run a few tests on a few phrases and I have the polarity and subjectivity score - fine.
What sentence would return the highest polarity value within TextBlob?
For instance
"I really, really, really love and admire your beauty, my good friend"
... | 0 | 1 | 247 |
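A sketch of inspecting both the overall score and the per-phrase assessments the answer quotes:

```python
from textblob import TextBlob

blob = TextBlob("I really, really, really love and admire your beauty, my good friend")
print(blob.sentiment)              # overall (polarity, subjectivity)
print(blob.sentiment_assessments)  # per-phrase scores, as shown above
```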
0 | 55,368,146 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-26T23:03:00.000 | 0 | 1 | 0 | Tower of colored cubes | 55,367,429 | 0 | python,artificial-intelligence,evolutionary-algorithm | First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get.
Apparently, the first thing you need to build is something (a class?) that accurately represents a cube in any of the available or... | Consider a set of n cubes with colored facets (each one with a specific color
out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k ≤ n ) properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using and evolutionary algorith... | 0 | 1 | 227 |
0 | 55,368,249 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-27T00:15:00.000 | 1 | 1 | 0 | Can I get [0 0] from Categorical Labeling in CNN's? | 55,367,994 | 1.2 | python,keras,conv-neural-network | That should not be possible. Your "garbage" would be a third class, requiring labels of [1 0 0], [0 1 0], and [0 0 1].
Very simply, the model you've described will return one of two categories, whichever has a higher rating in your final layer. This happens whether the input values are 0.501 and 0.499, or 0.011 and 0... | From what I understand from keras labeling, one hot encoding does not permit the values to be [0 0]? is this assumption correct?
We are trying to classify 2 classes and we want to be able to detect garbage when a garbage image is fed. However, it always detects either
[0 1] or [1 0]. Is it possible to get [0 0] as a l... | 0 | 1 | 74 |
0 | 55,376,254 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-27T09:49:00.000 | 0 | 3 | 0 | What's the difference between shape(150,) and shape (150,1)? | 55,374,185 | 0 | python,numpy | Although they both occupy the same space and positions in memory,
"I think they are the same, I mean they both represent a column vector."
No, they are not the same, certainly not according to NumPy (ndarrays).
The main difference is that
shape (150,) => a 1D array, whereas
shape (150,1) => a 2D array | What's the difference between shape(150,) and shape (150,1)?
I think they are the same, I mean they both represent a column vector. | 0 | 1 | 331 |
0 | 55,629,323 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-28T13:20:00.000 | 1 | 1 | 0 | Feed complex64 data into Keras sequential model | 55,398,697 | 1.2 | python,keras,sequential | Adding an InputLayer(... dtype='complex64') layer, i.e. an InputLayer() with data type specified as 'complex64' as the first layer of the sequential model allowed me to pass complex64 data to the model. | I am working in training a CNN in fourier domain. To speed up training, I thought of taking the fft of the entire dataset before training and feeding this data to the sequential model. But inside the first layer of the model, which is a custom Keras layer, the training data is shown to have float32 data type. Does the ... | 0 | 1 | 127 |
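A minimal sketch of the setup described in the answer; the input shape, the Lambda split into real/imaginary parts, and the Dense head are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    # declare complex inputs explicitly; without this Keras assumes float32
    tf.keras.layers.InputLayer(input_shape=(128,), dtype="complex64"),
    # split into real/imaginary parts so ordinary float layers can follow
    tf.keras.layers.Lambda(
        lambda z: tf.concat([tf.math.real(z), tf.math.imag(z)], axis=-1)),
    tf.keras.layers.Dense(10),
])

x = np.fft.fft(np.random.rand(4, 128)).astype(np.complex64)
print(model(x).shape)   # (4, 10) -- complex input accepted end to end
```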
0 | 55,557,061 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-03-28T15:05:00.000 | 1 | 1 | 0 | Finding any feasible flow in graph as fast as possible | 55,400,911 | 1.2 | python,graph,network-flow | So I finally got time to sum this up. The solution I used is to take the initial graph and transform it in these steps.
(Weights are in this order: lower bound, current flow, upper bound.)
1. Connect t to s by edge of (0, 0, infinity).
2. To each node of the
initial graph add balance value equal to: (sum of low... | I have a flow graph with lower and upper bounds and my task is to find any feasible solution as fast as possible. I found many algorithms and approaches to maximum/minimum flow and so on (also many times uses feasible solution as start point) but nothing specific for any feasible solution. Is there any algorithm/approa... | 0 | 1 | 861 |
0 | 55,402,030 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-28T15:43:00.000 | 1 | 1 | 0 | How does image resolution affect result and accuracy in Keras? | 55,401,716 | 1.2 | python-2.7,tensorflow,keras | 1- Of course it will affect the training speed, as the spatial dimensions are one of the most important factors in a model's speed.
2- It will surely affect the accuracy too, but how much exactly depends on many other aspects, like what objects you are classifying and what dataset you are working with. | I'm using Keras (with Tensorflow backend) for an image classification project. I have a total of almost 40 000 hi-resolution (1920x1080) images that I use as training input data. Training takes about 45 minutes and this is becoming a problem so I was thinking that I might be able to speed things up by lowering the reso... | 0 | 1 | 404 |
0 | 55,794,331 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-28T15:58:00.000 | 1 | 1 | 0 | What is TargetEncoder and BinaryEncoder in sklearn category_encoders? | 55,402,010 | 0.197375 | python,python-3.x,scikit-learn,categorical-data | Target encoding maps the categorical variable to the mean of the target variable. As it uses the target, steps must be taken to avoid overfitting (usually done with smoothing).
Binary encoding converts each integer into binary digits, with each binary digit getting its own column. It is essentially a form of feature has... | I've been looking for a way to vectorize categorical variables and came across category_encoders. It supports multiple ways to categorize.
I tried TargetEncoder and BinaryEncoder, but the docs don't explain much about how they work.
I'd really appreciate it if anyone could explain how target encoder and binary... | 0 | 1 | 754 |
0 | 55,404,739 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2019-03-28T17:45:00.000 | 1 | 3 | 0 | Concepts to measure text "relevancy" to a subject? | 55,403,920 | 0.066568 | python,machine-learning,nlp,data-science | There are many many ways to do this, and the best method changes depending on the project. Perhaps the easiest way to do this is to keyword search in your articles and then empirically choose a cut off score. Although simple, this actually works pretty well, especially in a topic like this one where you can think of a ... | I do side work writing/improving a research project web application for some political scientists. This application collects articles pertaining to the U.S. Supreme Court and runs analysis on them, and after nearly a year and half, we have a database of around 10,000 articles (and growing) to work with.
One of the prim... | 0 | 1 | 336 |
0 | 55,405,781 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-28T19:44:00.000 | 1 | 1 | 0 | Sometimes it is necessary to show my dataframe to properly ask the question. How can I do that? | 55,405,730 | 0.197375 | python,pandas,dataframe | You could either provide code that generates sample data, or you could do print(df) and paste the result, formatted as code, as part of your question. For readers it is possible to copy a dataframe as text and load it into a proper dataframe. Usually fewer than 20 rows of sample data is enough t... | I need to ask a question related to a DataFrame. I tried to add screenshots before but I got -3 reputation and it says I am not allowed to upload the image. What is the best way then? I am new to Stack Overflow. Please help. | 0 | 1 | 531 |
0 | 55,413,219 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-03-29T06:50:00.000 | 5 | 2 | 0 | Is there a pytorch method to check the number of cpus? | 55,411,921 | 0.462117 | python,neural-network,deep-learning,pytorch | At present pytorch doesn't support multiple cpu cluster in DistributedDataParallel implementation. So, I am assuming you mean number of cpu cores.
There's no direct equivalent for the gpu count method but you can get the number of threads which are available for computation in pytorch by using
torch.get_num_threads() | I can use this torch.cuda.device_count() to check the number of GPUs. I was wondering if there was something equivalent to check the number of CPUs. | 0 | 1 | 6,996 |
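A quick sketch combining the PyTorch call with the standard-library view of the machine:

```python
import os
import torch

print(torch.get_num_threads())  # intra-op threads PyTorch will use on CPU
print(os.cpu_count())           # logical cores visible to the OS

torch.set_num_threads(4)        # optionally cap PyTorch's CPU parallelism
```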
0 | 55,742,886 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-29T08:44:00.000 | 0 | 1 | 0 | Installing tensorflow-gpu on a Laptop with two graphic cards | 55,413,488 | 1.2 | python,tensorflow,installation | At what phase are you getting the "Download wasn't..." message? Did
you try manually downloading the wheel file and installing it directly
and locally? – Ido_f
Downloading the local CUDA installer solved my issues. | A lot of people have issues installing tensorflow-gpu on their computers and I have read a lot of them and tried out a lot of them as well. So I'm not coming for an easy answer without searching the web beforehand.
I'm running W10 with an NVIDIA Quadro P600 which can supposedly run CUDA.
The thing is whenever I'm try... | 0 | 1 | 235 |
0 | 55,415,405 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-29T10:17:00.000 | 0 | 3 | 0 | Number of 2D array to 3D array, extending third dimension | 55,415,158 | 0 | python,arrays,numpy,append | Try creating a new empty array that you then fill with your 2D arrays:
new3DArray = numpy.empty((10, 60, 100))  # note: the shape must be passed as a single tuple | I have 10 different matrices of size (60, 100). I want to stack them along the third dimension inside a for loop, so that the final shape is (10, 60, 100).
I tried with concatenate and end up with size (600, 100). | 0 | 1 | 402 |
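A sketch of both routes: np.stack, which adds the new leading axis in one call, and the preallocate-and-fill loop from the answer (with the tuple fix):

```python
import numpy as np

mats = [np.random.rand(60, 100) for _ in range(10)]  # ten (60, 100) matrices

stacked = np.stack(mats, axis=0)
print(stacked.shape)                                  # (10, 60, 100)

out = np.empty((10, 60, 100))                         # shape as a single tuple
for i, m in enumerate(mats):
    out[i] = m                                        # fill slice by slice
```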
1 | 56,324,221 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-03-30T18:02:00.000 | 2 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 0.049958 | android,python,matplotlib,pydroid | I also had this problem a while back, and managed to fix it by using plt.show()
at the end of your code (with matplotlib.pyplot imported as plt). | I'm currently using an Android device (Samsung) with Pydroid 3.
I tried to view graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window
(meaning that I can't even see the terminal screen, which always showed me [Program Finis... | 0 | 1 | 8,119 |
1 | 60,702,515 | 0 | 0 | 0 | 0 | 3 | true | 4 | 2019-03-30T18:02:00.000 | 0 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 1.2 | android,python,matplotlib,pydroid | After reinstalling it worked.
The problem was that I forced Pydroid to update matplotlib via Terminal, not the official PIP tab.
The version of matplotlib was too high for Pydroid | I'm currently using an Android device (Samsung) with Pydroid 3.
I tried to view graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window
(meaning that I can't even see the terminal screen, which always showed me [Program Finis... | 0 | 1 | 8,119 |
1 | 66,386,763 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-03-30T18:02:00.000 | 0 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 0 | android,python,matplotlib,pydroid | You just need to add a line
plt.show()
Then it will work. You can also save the file before showing
plt.savefig("imageName.png") | I'm currently using an Android device (Samsung) with Pydroid 3.
I tried to view graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window
(meaning that I can't even see the terminal screen, which always showed me [Program Finis... | 0 | 1 | 8,119 |
0 | 55,441,333 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-03-31T12:57:00.000 | 0 | 2 | 0 | My trained image classifier model classifies all the images that are not even in the category | 55,441,123 | 0 | java,android,python-3.x,tensorflow | What about checking the probability score? Even though a cup is classified as a dog, it will have a low score, so you can set a threshold: if the probability score > threshold, display it as an animal; otherwise not. | I have already trained a model to recognise animals and it is working, deployed in an Android application. I'm looking for a solution to make the image classifier classify only the trained categories. I'm not sure whether to do this through the model training or through code added on top.
Example, if a pictur... | 0 | 1 | 115 |
0 | 55,451,075 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-01T08:34:00.000 | -1 | 2 | 0 | Do I need a test-train split for K-means clustering even if I'm not looking to predict anything? | 55,450,949 | -0.099668 | python-3.x,machine-learning,cluster-analysis,k-means | No, in clustering (i.e., unsupervised learning) you do not need to split the data | I have a set of 2000 points which are basically x,y coordinates of pass origins from association football. I want to run a k-means clustering algorithm on it to just classify it to get which 10 passes are the most common (k=10). However, I don't want to predict any points for future values. I simply want to work with t... | 0 | 1 | 8,449 |
0 | 55,471,123 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-01T20:01:00.000 | 1 | 2 | 0 | converting an array of size (n,n,m) to (None,n,n,m) | 55,462,718 | 0.099668 | python,arrays,numpy,conv-neural-network,reshape | The shape (None, 14, 14, 3) represents (batch_size, imgH, imgW, imgChannel); imgH and imgW can be used interchangeably depending on the network and the problem.
But the batch size is given as "None" in the neural network because we don't want to restrict our batch size to some specific value, as our batch size depends on a lot... | I am trying to reshape an array of size (14,14,3) to (None, 14,14,3). I have seen that the output of each layer in a convolutional neural network has a shape in the format (None, n, n, m).
Consider that the name of my array is arr
I tried arr[None,:,:] but it converts it to a dimension of (1,14,14,3).
How should I do it? | 0 | 1 | 163 |
0 | 55,472,980 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-02T08:12:00.000 | 1 | 1 | 0 | I need to remove multi co-linearity between features | 55,469,874 | 0.197375 | python,machine-learning | You don't need to transform the data. Instead, change the way you calculate correlation between variables: as these are categorical features, use the Chi-Squared test of independence. Then you won't face this issue. | I have categorical variables such as Gender, Anxiety, Alcoholic, and when I convert these categorical variables into numerical values using encoding techniques, all these variables end up looking the same in value and multicollinearity appears. How can I convert these variables to numbers so that multicollinearity do... | 0 | 1 | 35 |
0 | 55,488,118 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-02T09:36:00.000 | 0 | 2 | 0 | Code conversion from python 2 to python 3 | 55,471,459 | 0 | python-2.7,pytorch | The problem may be that a CPU object is expected but a GPU object is given. Try moving the object to the CPU:
mask.cpu() | I'm setting up a new algorithm which combines an object detector(bounding box detector) which is in python 3 and a mask generator which is in python 2. The problem here is I have several python 2 files which is required for the mask generation algorithm. So I tried 2to3 to convert all my python 2 files to python 3. The... | 0 | 1 | 174 |
0 | 55,475,569 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-04-02T12:52:00.000 | 3 | 3 | 0 | Creating a big matrix where every element is the same | 55,475,303 | 1.2 | python,matrix,sage | @JamesKPolk gave me a working solution.
T = matrix(RDF, 6000, 6000, lambda i,j: 1/6000) | I'm trying to create a matrix of dimension nxn in Sage. But every element in the matrix has to be 1/n. The size of n is around 7000.
First I tried using creating a matrix of ones with the build in sagemethod, and then multiplying the matrix with 1/n. This is very slow and crashes my jupyter notebook kernel.
T =matrix.... | 0 | 1 | 140 |
0 | 55,482,982 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-02T20:11:00.000 | -1 | 2 | 0 | Python: (uniform) sampling from a rectangle | 55,482,837 | -0.099668 | python,random | I think modulo operator (%) is your friend to check if x and y are in [a,c] and [b,d]
If you can't draw a random number between two arbitrary numbers (other than 0 and 1), you can try x = random() * (c - a) + a
Same with y :)
EDIT: Oh, I sent it just after Merig | Say I have a rectangle [a,b]x[c,d], where a,b,c,d are reals.
I would like to produce k points (x,y) sampled uniformly from this rectangle, i.e. a <= x <= c and b <= y <= d.
Obviously, if sampling from [0,1]x[0,1] is possible, then
the problem is solved. How to achieve any of the two goals, in python?
Or, another tool s... | 0 | 1 | 577 |
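A sketch of the reduction the answer hints at: stretch and shift U(0, 1) draws onto the rectangle (the function name and arguments are our own):

```python
import random

def sample_rect(a, b, c, d, k):
    """k points (x, y) uniform on [a, c] x [b, d], built from U(0, 1) draws."""
    return [(a + (c - a) * random.random(),
             b + (d - b) * random.random()) for _ in range(k)]

print(sample_rect(0.0, 0.0, 2.0, 1.0, 5))
```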
0 | 55,487,120 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-03T04:28:00.000 | 0 | 2 | 0 | Does the accuracy of the deep learning program drop if I do not put in the default input shape into the pretrained model? | 55,487,087 | 1.2 | python,keras,conv-neural-network,pre-trained-model,transfer-learning | Usually, with convolutional neural networks, differences in the image shape (the width/height of an image) will not matter. However, differences in the # of channels in the image (equivalently the depth of the image), will affect the performance. In fact, there will usually be dimension mismatch errors you get if the m... | As the title says, I want to know whether input shape affects the accuracy of the deep learning model.
Also, can pre-trained models (like Xception) be used on grayscale images?
P.S. : I recently started learning deep learning so if possible please explain in simple terms. | 0 | 1 | 54 |
0 | 55,496,284 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-03T12:55:00.000 | 0 | 1 | 0 | Shape of passed values blah indices imply blah | 55,495,746 | 1.2 | python-3.x,pandas,scikit-learn | So, after implementing @Quang Hoang's suggestion panda.reshape(array_name, (-1, 78)) on x_test, y_test and x_train, this finally converted all the necessary arrays into the required 2D format. | I am attempting to pass my custom dataset, which is loaded in from a CSV file using
panda.readcsv(), through sklearn's MLPRegressor.
My initial error was my 1D array needed to become a 2D array. Expected 2D array, got 1D array instead: array=[0. 0. 1. ... 0. 0. 1.].
So I used panda.reshape(x_test, (-1, 1)) on both the x... | 0 | 1 | 44 |
0 | 55,506,296 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-04T00:37:00.000 | 0 | 1 | 0 | Need to find the MEAN of Col 6 based on the value of Col 5 (Col 5 is 0/1) | 55,506,207 | 0 | python | If I understand your question correctly:
First you need to import your spreadsheet into Python with the csv module.
Then you need a "for" loop to sum the column values per group.
Calculate the mean of each group.
If the result is greater than half of the total, assign the student 1; else assign 0. | I have a spreadsheet with 10 columns and 727 obs. Col 5 is 0/1 whether a student is economically disadvantaged or not. I need to find the mean of Col 6 (test score) based on whether the student is economically disadvantaged or not. Help!
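With pandas this whole recipe collapses to a single groupby; the column names below are hypothetical stand-ins for the spreadsheet's:

```python
import pandas as pd

df = pd.DataFrame({"disadvantaged": [0, 1, 1, 0, 1],
                   "test_score":   [88, 72, 65, 91, 70]})

print(df.groupby("disadvantaged")["test_score"].mean())
# disadvantaged
# 0    89.5
# 1    69.0
```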
0 | 55,534,189 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-04-05T11:04:00.000 | 2 | 2 | 0 | Choosing the right NN model (speed / performance) | 55,533,997 | 0.197375 | python,machine-learning,neural-network | TinyYOLO is a smaller version of the original YOLO network. You could try that one. | I'm a beginner and these are my first steps.
I'm learning about different neural network architectures and I have a question:
Which model should I choose for Raspberry Pi / Android?
I already tried "ResNet" with 98x98-resolution images, and that model requires almost the full power of my PC. Exactly:
Model takes 2 GB of... | 0 | 1 | 78 |
0 | 55,534,144 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-04-05T11:04:00.000 | 3 | 2 | 0 | Choosing the right NN model (speed / performance) | 55,533,997 | 1.2 | python,machine-learning,neural-network | Object detection on a Raspberry Pi at 5-10 FPS is highly unrealistic.
You can have a look at YOLO or SSD; for example, YOLO also has a smaller implementation which can run on an RPi, but you will have to be happy with 1 FPS. | I'm a beginner and these are my first steps.
I'm learning about different neural network architectures and I have a question:
Which model should I choose for Raspberry Pi / Android?
I already tried "ResNet" with 98x98-resolution images, and that model requires almost the full power of my PC. Exactly:
Model takes 2 GB of... | 0 | 1 | 78 |
0 | 55,540,865 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-04-05T16:28:00.000 | 2 | 1 | 0 | How do I apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step? | 55,539,820 | 0.379949 | python,reinforcement-learning,openai-gym,q-learning | You can take one of two approaches, depending on the problem:
Think of the set of actions you need to pass to the environment as independent, and make the network output action values for each one (apply softmax separately) - so if you need to pass two actions, the network will have two heads, one for each axis.
Think of... | I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (i.e. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However I cannot find a way to solve problems where multiple actions are taken simultaneously at each time step... | 0 | 1 | 937 |
0 | 61,877,221 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-06T10:08:00.000 | 0 | 1 | 0 | Is there any difference between del and pop for column in python pandas dataframe? | 55,548,010 | 0 | python,python-3.x,pandas,dataframe,del | The difference between deleting and popping a column is that pop will return the deleted column back to you while del method won't. | I have just learned how to work with DataFrame in python's Pandas through an online course and there is this question:
"What is the difference between deleting and popping column?"
I thought they work the same way but most of the answers are
"You can store a popped column"
What does that mean?
I saw from the documenta... | 0 | 1 | 939 |
0 | 55,552,428 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-06T18:52:00.000 | 3 | 1 | 0 | Python - Error at equals sign for no reason? | 55,552,402 | 0.53705 | python | eval evaluates expressions. Assignment in Python is not an expression, it's a statement.
But you don't need this anyway. Make a list or dict to hold all of your values. | I'm using eval and pybrain to make neural networks. Here's it stripped down. Using python 3.6
from pybrain import *
numnn = 100
eval("neuralNetwork" + chr(numnn) + " = buildNetwork(2, 3, 1, bias=True)") | 0 | 1 | 127 |
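A sketch of the dict-based fix the answer suggests, replacing eval entirely (the canonical pybrain import path is assumed):

```python
from pybrain.tools.shortcuts import buildNetwork

# one container instead of 100 generated variable names
networks = {i: buildNetwork(2, 3, 1, bias=True) for i in range(100)}
print(networks[42])   # any network is reachable by key -- no eval needed
```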
0 | 66,080,521 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-04-07T08:18:00.000 | 1 | 5 | 0 | Importing COCO datasets to google colaboratory | 55,556,965 | 0.039979 | python,computer-vision,google-colaboratory,semantic-segmentation | Using Drive is better for further use. Also, unzip the archive inside Colab (!unzip), because using the zip extractor on Drive takes longer. I've tried :D | The COCO dataset is very large for me to upload it to google colab. Is there any way I can directly download the dataset to google colab? | 0 | 1 | 10,788 |
0 | 55,602,989 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-08T06:39:00.000 | 0 | 1 | 0 | Windowed writes in python, e.g. to NetCDF | 55,567,542 | 0 | python,large-data,netcdf | This can be done using netCDF4 (the python library of low level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file.
Note this approach also provides the opportunity to progressively g... | In python how can I write subsets of an array to disk, without holding the entire array in memory?
The xarray input/output docs note that xarray does not support incremental writes, only incremental reads except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the con... | 0 | 1 | 100 |
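A minimal netCDF4 sketch of the slice-assignment pattern the answer describes (the file name and sizes are made up):

```python
import numpy as np
from netCDF4 import Dataset

ds = Dataset("big.nc", "w")
ds.createDimension("t", None)              # unlimited dimension grows on write
ds.createDimension("x", 1000)
var = ds.createVariable("field", "f4", ("t", "x"))

for t in range(100):
    chunk = np.random.rand(1000).astype("f4")  # only one window in memory
    var[t, :] = chunk                          # assign to a slice of the variable
    ds.sync()                                  # flush this window to disk
ds.close()
```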
0 | 55,602,759 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-08T11:11:00.000 | 1 | 2 | 0 | Delay in Savitzky-Golay filtering | 55,572,128 | 0.099668 | python,scipy,filtering,signal-processing | You are asking about lag/latency of a digital filter: the only possible answer for a real-time filter is that the latency is determined entirely by the window size of the filter.
Non-realtime filters (e.g., where the full set of samples is provided to the filter, as for the scipy Savitzky-Golay filter) can pretend/... | I am applying a Savitzky-Golay filter to a signal, using the scipy function.
I need to calculate the lag of the filtered signal, and how much is it behind the original signal.
Could someone shed some light on this matter? How could I calculate it with scipy? How should I interpret the result correctly?
I would be very ... | 0 | 1 | 1,092 |
0 | 55,577,000 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-08T14:47:00.000 | 0 | 3 | 0 | How to check the p values of parameters in OLS | 55,576,097 | 0 | python,statsmodels | The p-value corresponds to the probability of observing this value of a under the null hypothesis (which is typically 0 as this is the case when there is no effect of the covariate x on the outcome y).
This is under the assumptions of linear regression, which among other things state that a follows a normal distribution... | When running a linear regression, like y=a*x+b, the summary gives me the p-values of whether the parameters equal zero. What if I would like to see the p-value of whether the parameter a equals 2, or some value other than zero?
I expect the OLS summary gives me the p value of whether a is different from 2. | 0 | 1 | 3,001 |
0 | 55,655,008 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-08T15:01:00.000 | 1 | 2 | 0 | Hardware for python multiprocessing | 55,576,373 | 0.099668 | python,pandas,multiprocessing,gpu,xeon-phi | Took a while, but after changing it all to numpy and achieving a little more vectorization I managed to get a speed increase of over 20x - so thanks Paul.
max9111 thanks too, I'll have a look into numba. | I have a task where I need to run the same function on many different pandas dataframes. I load all the dataframes into a list then pass it to Pool.map using the multiprocessing module. The function code itself has been vectorized as much as possible, contains a few if/else clauses and no matrix operations.
I'm current... | 0 | 1 | 428 |
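A minimal sketch of the kind of rewrite described: per-row pandas if/else logic replaced by a vectorized numpy equivalent (the column names and condition are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.random.rand(1_000_000),
                   "b": np.random.rand(1_000_000)})

# Slow row-wise version:
#   df.apply(lambda r: r.a * 2 if r.a > r.b else r.b, axis=1)
# Vectorized version, operating on whole arrays at once:
a, b = df["a"].to_numpy(), df["b"].to_numpy()
result = np.where(a > b, a * 2, b)
```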
0 | 55,679,352 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-09T14:44:00.000 | 0 | 1 | 0 | YOLO v2 bad accuracy in Tensorflow | 55,595,512 | 1.2 | python,tensorflow,keras,computer-vision,artificial-intelligence | Okay, so it turned out that YOLOv2 was performing very well on unseen data, except that the unseen images have to be the same size as the ones it was trained on. Don't feed YOLO 800x800 images if it's been trained on 400x400 and 300x400 images. Also, the Keras accuracy measure is meaningless for detection. It mi... | I'm currently using a custom version of YOLO v2 from pjreddie.com written with Tensorflow and Keras. I've successfully got the model to start and finish training over 100 epochs with 10000 training images and 2400 testing images, which I randomly generated along with the associated JSON files, all on some Titan X GPUs wi... | 0 | 1 | 260 |
0 | 55,598,802 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-09T18:02:00.000 | 1 | 1 | 0 | How to apply a single fully connected layer to each point in an image | 55,598,702 | 1.2 | python,tensorflow,machine-learning,keras | What you're describing is a 1x1 convolution with output depth 1. You can implement it just as you implement the rest of the convolution layers. You might want to apply tf.squeeze afterwards to remove the depth, which should have size 1. | I'm trying to set up a non-conventional neural network using keras, and am having trouble efficiently setting this up.
The first few layers are standard convolutional layers, and their output has d channels, each an n x n feature map.
What I want to do is use a single dense layer to map this d x n ... | 0 | 1 | 234 |
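A minimal sketch of the suggested approach: a 1x1 convolution with a single output channel applies the same dense mapping at every spatial position, and tf.squeeze drops the resulting size-1 channel axis:

```python
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 64))              # (batch, n, n, d)
per_pixel_dense = tf.keras.layers.Conv2D(filters=1, kernel_size=1)(x)
out = tf.squeeze(per_pixel_dense, axis=-1)         # (batch, n, n)
print(out.shape)                                   # (1, 32, 32)
```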
0 | 70,400,820 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-04-09T19:37:00.000 | 0 | 2 | 0 | how do I implement ssim for loss function in keras? | 55,600,106 | 0 | python,tensorflow,keras,loss-function | Another choice would be
ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=self.max_val))
then
combine_loss = mae (or mse) + ssim_loss
In this way, you are minimizing both of them. | I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the second. The loss function is a combination of them. However, I need a higher SSIM and a lower cross-entropy, so I think simply adding them isn't right. Another problem is th... | 0 | 1 | 3,145 |
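A minimal sketch of that idea in Keras, assuming image outputs scaled to [0, 1]; the output names "recon" and "label" are hypothetical. Turning SSIM into 1 - SSIM makes both terms quantities to minimize:

```python
import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # SSIM is a similarity (higher is better), so minimize 1 - SSIM instead.
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

# For a two-output model, Keras accepts one loss per output by name:
# model.compile(optimizer="adam",
#               loss={"recon": ssim_loss, "label": "categorical_crossentropy"},
#               loss_weights={"recon": 1.0, "label": 0.5})
```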
0 | 69,070,357 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-04-10T03:32:00.000 | 1 | 1 | 0 | How to limit the number of threads that opencv uses in Python? | 55,604,373 | 0.197375 | python,multithreading,opencv | You can use cv2.setNumThreads(n) (where n is the number of threads).
But it didn't work for me; it's still using all the CPU. | I am designing a program which will run continuously on a ROC64. It includes the usage of BackgroundSubtractorMOG2 (a background-subtraction algorithm implemented in OpenCV). OpenCV seems to use multithreading optimization in this algorithm, and it eats up all the CPU resources. I understand that in C++ we can limit the n... | 0 | 1 | 3,919 |
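A minimal sketch combining the suggestion with an OS-level fallback for when OpenCV's own setting is not honored; os.sched_setaffinity is Linux-only, and the core IDs are examples:

```python
import os
import cv2

cv2.setNumThreads(2)                  # cap OpenCV's internal thread pool
print(cv2.getNumThreads())

# Fallback: pin the whole process to cores 0 and 1, capping total CPU use.
os.sched_setaffinity(0, {0, 1})
```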
0 | 55,613,124 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-10T12:40:00.000 | 0 | 2 | 0 | Found duplicate column when trying to query with Spark SQL | 55,612,813 | 1.2 | python,apache-spark,dataframe,pyspark,apache-spark-sql | I've got the solution now. What I needed to use was:
Add from pyspark.sql.functions import * at the file header
Simply use col()'s alias function like so:
filtered_df2 = filtered_df.select(col("li"), col("result.li").alias("result_li"), col("fw")).orderBy("fw") | I want to do a filter on a dataframe like so:
filtered_df2 = filtered_df.select("li", "result.li", "fw").orderBy("fw")
However, the nested column, result.li has the same name as li and this poses a problem. I get the following error:
AnalysisException: 'Found duplicate column(s) when inserting into hdfs://...: `li`;'
H... | 0 | 1 | 2,453 |
0 | 55,619,709 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-10T19:02:00.000 | 0 | 1 | 0 | Is there a module which tries to fit different functions to set of data points? | 55,619,624 | 0 | python,curve-fitting,data-fitting | In scipy there is curve_fit, but I believe you have to define the candidate function yourself. | Let’s say I have 100 data points, consisting of two values (x,y or V1, V2).
Right now I am defining a bunch of functions (like log, exp, poly, sigmoid etc.) with a bunch of parameters to scale the data and/or adapt the base-equation. Then I use scipy.optimize.minimize to fit them to the data. After that I compare th... | 0 | 1 | 65 |
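A minimal sketch of automating the comparison with scipy.optimize.curve_fit: define a few candidate functions, fit each, and rank them by residual sum of squares (the candidates and data are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):      return a * x + b
def logarithmic(x, a, b): return a * np.log(x) + b
def power(x, a, b):       return a * np.power(x, b)

x = np.linspace(1, 10, 100)
y = 2.0 * np.log(x) + 1.0 + 0.1 * np.random.randn(100)

for f in (linear, logarithmic, power):
    params, _ = curve_fit(f, x, y, maxfev=10_000)
    rss = np.sum((y - f(x, *params)) ** 2)
    print(f"{f.__name__}: params={params}, RSS={rss:.3f}")
```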
0 | 55,621,089 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-04-10T19:53:00.000 | 1 | 1 | 0 | Algorithm used in KNeighborsClassifier with sparse input? | 55,620,365 | 1.2 | python,scikit-learn | No, it means that if the input is sparse, whichever value is passed to the algorithm argument will be ignored and the brute-force algorithm will be used (which is equivalent to algorithm='brute') | For the classification algorithm KNeighborsClassifier, what does fitting on a sparse input mean?
Does it mean that if I have x_train and x_test as sparse CSR matrices and I fit on x_train without specifying an algorithm, it will automatically choose brute? Can anyone clear up this confusion?
algorithm : {‘auto’, ‘ball_tree’, ‘kd_tree’... | 0 | 1 | 133 |
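A minimal sketch demonstrating the fallback; note that _fit_method is a private scikit-learn attribute used here purely for inspection, so it may change between versions (recent versions also emit a warning about the fallback):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import KNeighborsClassifier

X = csr_matrix(np.random.rand(100, 20))
y = np.random.randint(0, 2, 100)

clf = KNeighborsClassifier(algorithm="kd_tree").fit(X, y)
print(clf._fit_method)   # "brute": the requested kd_tree was ignored for sparse input
```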
0 | 58,826,866 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-04-11T00:35:00.000 | 1 | 1 | 0 | Suggestions for feature engineering | 55,623,095 | 0.197375 | python,machine-learning,data-science,feature-engineering | You can extract the following features:
Simple Moving Averages for day 2 and day 3 respectively. This means you now have two extra columns.
Percentage Change from previous day
Percentage Change from day 1 to day 3 (a sketch of these features follows below) | I am having a problem during feature engineering and am looking for some suggestions. Problem statement: I have usage data for multiple customers over 3 days. Some have just 1 day of usage, some 2, and some 3. The data covers the number of emails sent / contacts added on each day, etc.
I am converting this time series data to colum... | 0 | 1 | 106 |
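A minimal sketch of those three features in pandas; the column names emails_d1..emails_d3 are hypothetical per-day usage counts:

```python
import pandas as pd

df = pd.DataFrame({"emails_d1": [5, 0, 3],
                   "emails_d2": [7, 2, 3],
                   "emails_d3": [9, 2, 6]})

df["sma_d2"] = df[["emails_d1", "emails_d2"]].mean(axis=1)
df["sma_d3"] = df[["emails_d1", "emails_d2", "emails_d3"]].mean(axis=1)

denom = df["emails_d1"].where(df["emails_d1"] != 0)   # NaN where day 1 is zero
df["pct_change_d2"] = (df["emails_d2"] - df["emails_d1"]) / denom
df["pct_change_d1_d3"] = (df["emails_d3"] - df["emails_d1"]) / denom
```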
0 | 55,625,183 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2019-04-11T02:25:00.000 | 0 | 2 | 0 | Syntax Error In Python When Trying To Refer To Range Of Columns | 55,623,798 | 0 | python,pandas | Steven Burnap's explanation is correct, but the solution can be simplified - just remove the inner pair of brackets:
db = db.drop(db.columns[12:23], axis=1)
this way, db.columns[12:23] is a 'slice' of the columns array (actually an Index, but that doesn't matter here) covering positions 12 through 22, since the slice end is exclusive; it goes straight into the drop method. | I am trying to remove the last several columns from a data frame. However, I get a syntax error when I do this:
db = db.drop(db.columns[[12:22]], axis = 1)
This works but it seems clumsy...
db = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)
How do I refer to a range of columns? | 0 | 1 | 99 |
0 | 55,623,865 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2019-04-11T02:25:00.000 | 1 | 2 | 0 | Syntax Error In Python When Trying To Refer To Range Of Columns | 55,623,798 | 0.099668 | python,pandas | In the first example, [12:22] on its own is a 'slice' of nothing. It's not a meaningful statement, so, as you say, it gives a syntax error. It seems that what you want is a list containing the numbers 12 through 22. You need to either write it out fully as you did, or use some generator function to create it.
The simplest is r... | I am trying to remove the last several columns from a data frame. However I get a syntax error when I do this:
db = db.drop(db.columns[[12:22]], axis = 1)
This works but it seems clumsy...
db = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)
How do I refer to a range of columns? | 0 | 1 | 99 |
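A minimal sketch of the slice-based fix both answers point toward; a Python slice end is exclusive, so matching the explicit list 12 through 22 requires 12:23:

```python
import numpy as np
import pandas as pd

db = pd.DataFrame(np.random.rand(3, 25))

db = db.drop(db.columns[12:23], axis=1)   # a slice, not a nested list
# equivalently: db = db.drop(columns=db.columns[12:23])
print(db.shape)                           # (3, 14): 11 columns removed
```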
0 | 70,600,860 | 0 | 0 | 0 | 0 | 1 | false | 53 | 2019-04-11T08:16:00.000 | 1 | 3 | 0 | Evaluating pytorch models: `with torch.no_grad` vs `model.eval()` | 55,627,780 | 0.066568 | python,machine-learning,deep-learning,pytorch,autograd | If you're reading this post because you've been encountering RuntimeError: CUDA out of memory, then with torch.no_grad(): will likely help save memory.
The reason for this is that torch.no_grad() disables autograd completely (you can no longer backp... | When I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad: or model.eval()? | 0 | 1 | 16,936 |
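A minimal sketch of the standard evaluation pattern, which uses both together: model.eval() switches layer behavior (dropout off, batch norm in inference mode), while torch.no_grad() stops graph construction and thereby saves memory:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.5), nn.Linear(10, 2))
x = torch.randn(4, 10)

model.eval()                 # layers switch to inference behavior
with torch.no_grad():        # autograd off: no graph is recorded, less memory used
    out = model(x)

print(out.requires_grad)     # False: there is nothing to backprop through
```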
0 | 55,628,823 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-11T09:06:00.000 | 0 | 2 | 0 | how can I fix Memory Error on np.arange(0.01*1e10,100*1e10,0.5)? | 55,628,673 | 0 | python,numpy,memo | You are trying to create an array of roughly 2e12 elements. Even if every element were a single byte, you would need approximately 2 TB of free memory to allocate it. You are unlikely to have that much RAM available, which is why you get the memory error.
Note: the array you are trying to allocate contains floats, so it is even bigger... | I get a MemoryError when I run np.arange() with large numbers like 1e10.
how can I fix Memory Error on np.arange(0.01*1e10,100*1e10,0.5) | 0 | 1 | 195 |
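A minimal sketch of a workaround: iterate over the range in fixed-size chunks instead of materializing all ~2e12 float64 values (roughly 16 TB) at once:

```python
import numpy as np

start, stop, step = 0.01 * 1e10, 100 * 1e10, 0.5
chunk = 10_000_000                   # elements held in memory at any one time

pos = start
while pos < stop:
    block = np.arange(pos, min(pos + chunk * step, stop), step)
    # ... process `block` here, e.g. accumulate block.sum() ...
    pos += chunk * step
```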
0 | 55,629,272 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-04-11T09:30:00.000 | 1 | 3 | 0 | Shape of tensor | 55,629,163 | 0.066568 | python,tensorflow,machine-learning,pytorch | Your understanding of the shapes is correct. From the context, x_train is probably 60k images of handwritten digits (at a resolution of 28x28 pixels), and y_train is simply the 60k true digits that the images show.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: " + str(x_train.shape))
print("Shape of y_train: " + str(y_train.shape))
And found that the output looks like this
(60000, 28, 28)
(60000,)
For the first line of output
So far my understanding, does it mea... | 0 | 1 | 67 |
0 | 55,629,310 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2019-04-11T09:30:00.000 | 1 | 3 | 0 | Shape of tensor | 55,629,163 | 1.2 | python,tensorflow,machine-learning,pytorch | You are right: the first line gives 60K items of 28x28 data, hence (60000, 28, 28).
The y_train are the labels of x_train. Thus they are one-dimensional and 60k in number.
For example: if the first item of x_train is a handwritten image of 3, then the first item of y_train will be '3', which is the label.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: " + str(x_train.shape))
print("Shape of y_train: " + str(y_train.shape))
And found that the output looks like this
(60000, 28, 28)
(60000,)
For the first line of output
So far my understanding, does it mea... | 0 | 1 | 67 |
0 | 55,648,613 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-11T11:57:00.000 | 1 | 1 | 0 | KMeans: Extracting the parameters/rules that fill up the clusters | 55,631,944 | 0.197375 | python,scikit-learn,k-means | Got the answer in a different topic:
Just record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean. | I have created a 4-cluster k-means customer segmentation in scikit learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster.
My question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clus... | 0 | 1 | 454 |
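A minimal sketch of making the boundaries durable with scikit-learn: persist the fitted model (whose centroids are the recorded means) and reuse predict() on each new month's data instead of refitting:

```python
import joblib
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(500, 4)                    # stand-in customer features
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
joblib.dump(km, "kmeans_segments.joblib")     # freeze the cluster means

km = joblib.load("kmeans_segments.joblib")
new_month = np.random.rand(50, 4)
labels = km.predict(new_month)                # assign to nearest stored centroid
print(np.bincount(labels))                    # cluster sizes this month
```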
0 | 55,637,346 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-11T16:35:00.000 | 2 | 1 | 0 | Applying a permutation along one axis in TensorFlow | 55,637,345 | 1.2 | python,tensorflow | tf.gather can be used to that end. In fact, it is even more general, as the indices it takes as one of its inputs don't need to represent a permutation. | How to permute "dimensions" along a single axis of a tensor?
Something akin to tf.transpose, but at the level of "dimensions" along an axis, instead of at the level of axes.
To permute them randomly (along the first axis), there is tf.random.shuffle, and to shift them, there is tf.roll. But I can't find a more general ... | 0 | 1 | 321 |
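A minimal sketch of the accepted suggestion: tf.gather selects indices along one axis, so passing a permutation of 0..n-1 reorders the entries along that axis:

```python
import tensorflow as tf

x = tf.reshape(tf.range(12), (4, 3))
perm = tf.constant([2, 0, 3, 1])       # a permutation of the row indices

print(tf.gather(x, perm, axis=0))      # rows reordered to 2, 0, 3, 1
```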
0 | 55,645,447 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-11T18:22:00.000 | 1 | 2 | 0 | similarity score between phrases | 55,638,949 | 0.099668 | python,similarity,levenshtein-distance,sentence-similarity | You can also measure the similarity between two phrases using Levenshtein distance, treating each word as a single element. When you have sequences of unequal sizes, you can use the Smith-Waterman or the Needleman-Wunsch algorithm. Those algorithms are widely used in bioinformatics, and the implementation can be found in ... | Levenshtein distance is an approach for measuring the difference between words, but not so for phrases.
Is there a good distance metric for measuring differences between phrases?
For example, suppose phrase 1 is made of n words x1 x2 ... x_n, and phrase 2 is made of m words y1 y2 ... y_m. I'd think they should be fuzzy-aligned by w... | 0 | 1 | 574 |
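A minimal sketch of word-level Levenshtein distance: the standard dynamic program over tokens instead of characters, as the answer suggests:

```python
def word_levenshtein(p1: str, p2: str) -> int:
    a, b = p1.split(), p2.split()
    dp = list(range(len(b) + 1))       # distances against the empty prefix
    for i, wa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, wb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,           # delete a word
                                     dp[j - 1] + 1,       # insert a word
                                     prev + (wa != wb))   # substitute a word
    return dp[-1]

print(word_levenshtein("the quick brown fox", "the slow brown fox"))  # 1
```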
0 | 55,657,266 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-12T17:54:00.000 | 1 | 1 | 0 | How to use Pandas in Pycharm | 55,657,228 | 0.197375 | python,pandas,pycharm | It simply means that pandas is not installed properly, or not installed at all.
The timeout error generally indicates a connection problem; retry after some time, or try resetting your connection. | I tried to install pandas in Project Interpreter under my project -> clicked on '+'... but it says "Time out" and shows nothing. So I installed it using "py -m pip install pandas" in cmd, but I don't see it under Project Interpreter - there is only pip and setuptools.
What should I do to make it work?
I am still gettin... | 0 | 1 | 130 |
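A common cause is that the command-line pip installed pandas into a different interpreter than the one PyCharm's project uses. A minimal sketch to diagnose the mismatch, run from inside PyCharm:

```python
import sys
print(sys.executable)    # compare this path with Settings -> Project Interpreter

import pandas as pd      # raises ModuleNotFoundError if it went to another env
print(pd.__version__)
```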
0 | 55,660,859 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-13T00:02:00.000 | 1 | 2 | 0 | How to select columns from a matrix with an algorithm | 55,660,812 | 1.2 | python,numpy-slicing | The column index i should satisfy 0 <= i mod (70 + 210) <= 70 - 1, i.e. i % 280 < 70. | I am writing a user-defined function in Python to extract specific chunks of columns from a matrix efficiently.
My matrix is 48 by 16240. The data is organised in some pattern column-wise.
My objective is to make 4 matrices out of it. The first matrix is extracted by selecting the first 70 columns, skipping the next 210, s... | 0 | 1 | 82 |
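A minimal sketch of that condition in numpy, using a boolean mask over the column indices; it assumes the four matrices are the four interleaved 70-column groups within each 280-column block:

```python
import numpy as np

M = np.random.rand(48, 16240)
idx = np.arange(M.shape[1])

# Columns whose index modulo 280 falls in [0, 70) form the first matrix.
first = M[:, idx % (70 + 210) < 70]
print(first.shape)                    # (48, 4060): 58 blocks x 70 columns

# All four matrices at once, one per 70-column offset inside each block:
blocks = [M[:, (idx % 280) // 70 == k] for k in range(4)]
```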
0 | 55,670,156 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-13T11:57:00.000 | 3 | 1 | 0 | How do I calculate the similarity of a word or couple of words compared to a document using a doc2vec model? | 55,665,180 | 1.2 | python,gensim,doc2vec | There's a number of possible approaches, and what's best will likely depend on the kind/quality of your training data and ultimate goals.
With any Doc2Vec model, you can infer a vector for a new text that contains known words – even a single-word text – via the infer_vector() method. However, like Doc2Vec in general, ... | In gensim I have a trained doc2vec model. If I have a document and either a single word or two or three words, what would be the best way to calculate the similarity of the words to the document?
Do I just do the standard cosine similarity between them as if they were 2 documents? Or is there a better approach for compar... | 0 | 1 | 343 |
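A minimal sketch with gensim 4.x, assuming a trained model at a hypothetical path and a hypothetical document tag; infer_vector() works even for one- or two-word texts, and the result can be compared against stored document vectors:

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("my_doc2vec.model")       # hypothetical model file
query_vec = model.infer_vector(["machine", "learning"])

doc_vec = model.dv["doc_42"]                   # hypothetical document tag
cos = np.dot(query_vec, doc_vec) / (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec))
print(cos)

# Or rank all stored documents against the inferred vector:
print(model.dv.most_similar([query_vec], topn=5))
```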
0 | 55,680,593 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-14T20:15:00.000 | 0 | 1 | 0 | Comparing feature extractors (or comparing aligned images) | 55,679,644 | 1.2 | python,opencv,computer-vision | From your question, it seems like the task is not to compare the feature extractors themselves, but rather to find which type of feature extractor leads to the best alignment.
For this, you need two things:
a way to perform the alignment using the features from different extractors
a way to check the accuracy of the a... | I'd like to compare ORB, SIFT, BRISK, AKAZE, etc. to find which works best for my specific image set. I'm interested in the final alignment of images.
Is there a standard way to do it?
I'm considering this solution: take each algorithm, extract the features, compute the homography and transform the image.
Now I need to... | 0 | 1 | 89 |
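A minimal sketch of one way to run that comparison in OpenCV: for each detector, match features, estimate a homography, and score it by reprojection error on ground-truth correspondences that you supply. NORM_HAMMING fits binary descriptors (ORB/BRISK/AKAZE); SIFT would need NORM_L2:

```python
import cv2
import numpy as np

def alignment_error(detector, img1, img2, gt_pts1, gt_pts2, norm=cv2.NORM_HAMMING):
    k1, d1 = detector.detectAndCompute(img1, None)
    k2, d2 = detector.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(norm, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.perspectiveTransform(gt_pts1.reshape(-1, 1, 2), H).reshape(-1, 2)
    return float(np.mean(np.linalg.norm(warped - gt_pts2, axis=1)))

# Usage (img1, img2 and float32 ground-truth point arrays assumed given):
# for det in (cv2.ORB_create(), cv2.BRISK_create(), cv2.AKAZE_create()):
#     print(type(det).__name__, alignment_error(det, img1, img2, gt1, gt2))
```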
0 | 55,689,959 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-15T12:45:00.000 | 0 | 1 | 0 | Neural Network with different input shapes | 55,689,510 | 0 | python,tensorflow,machine-learning,deep-learning,computer-vision | In my experience, you cannot train any network with samples of different sizes in the same batch.
A fully convolutional network is similar to a fully connected network with fully connected layers at the end. As such, any input image in the batch must have the same dims (w, h, d).
The difference is that fully connected layers ... | I'm currently designing the architecture of a neural network for the colorization of grayscale images. Later on, it should be able to colorize images with different sizes and different aspect ratios. I read that this would not be possible with a common CNN. I also read that the only options are downscaling the images to... | 0 | 1 | 863 |
0 | 55,692,831 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-04-15T14:53:00.000 | 1 | 1 | 0 | Python Array Data Structure with History | 55,691,861 | 1.2 | python,data-structures | Have you considered writing a log file? A good use of memory would be to have the arrays contain only the current relevant values but build in a procedure where the update statement could trigger a logging function. This function could write to a text file, database or an array/dictionary of some sort. These types o... | I recently needed to store large array-like data (sometimes numpy, sometimes key-value indexed) whose values would be changed over time (t=1 one element changes, t=2 another element changes, etc.). This history needed to be accessible (some time in the future, I want to be able to see what t=2’s array looked like).
An... | 0 | 1 | 272 |
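A minimal sketch of the logging idea: keep only the current values in memory, append every update to a log, and replay the log to reconstruct any past state:

```python
current = {}    # key -> latest value
log = []        # append-only history of updates

def update(t, key, value):
    log.append((t, key, value))
    current[key] = value

def state_at(t_query):
    # Replay the log up to time t_query to rebuild that snapshot.
    state = {}
    for t, key, value in log:
        if t > t_query:
            break
        state[key] = value
    return state

update(1, "a", 3.0)
update(2, "b", 7.5)
print(state_at(1))    # {'a': 3.0}: the structure as it looked at t=1
```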
0 | 55,694,836 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-15T17:53:00.000 | 0 | 2 | 0 | Why does ~pd.isnull() return -2? | 55,694,800 | 0 | python | It's because you used the arithmetic/bitwise negation operator instead of a logical negation. | I was doing some quick tests for handling missing values and came across this weird behavior. When looking at ~pd.isnull(np.nan), I expect it to return False, but instead it returns -2. Why is this? | 0 | 1 | 59 |
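A minimal sketch of what happens: pd.isnull(np.nan) returns a plain Python bool, and ~ on a bool is integer bitwise NOT (~True == ~1 == -2); on a boolean Series, ~ inverts element-wise as intended:

```python
import numpy as np
import pandas as pd

print(~pd.isnull(np.nan))                   # -2: bool -> int, then bitwise NOT
print(not pd.isnull(np.nan))                # False: logical negation
print(~pd.isnull(pd.Series([np.nan, 1])))   # element-wise: [False, True]
```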
0 | 55,711,407 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-04-16T14:48:00.000 | 1 | 2 | 0 | Dealing with new words in gensim not found in model | 55,710,967 | 0.099668 | python,nlp,gensim | The models are defined on vectors which, by default, depend only on old words, so I do not expect them to depend on new words.
It is still possible, depending on the code, for new words to affect results. To be on the safe side, I recommend testing your particular model and/or metrics on a small text (with and without a b... | Let's say I am trying to compute the average distance between a word and a document using distances() or compute cosine similarity between two documents using n_similarity(). However, let's say these new documents contain words that the original model did not. How does gensim deal with that?
I have been reading through t... | 0 | 1 | 179 |