GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 37,311,742 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-05-18T23:33:00.000 | 3 | 3 | 0 | High-dimensional data structure in Python | 37,311,699 | 0.197375 | python,numpy,pandas,machine-learning,multi-index | If you need labelled arrays and pandas-like smart indexing, you can use the xarray package, which is essentially an n-dimensional extension of the pandas Panel (Panels are being deprecated in pandas in favour of xarray).
Otherwise, it may sometimes be reasonable to use plain numpy arrays, which can be of any dimensiona... | What is the best way to store and analyze high-dimensional data in Python? I like the pandas DataFrame and Panel, where I can easily manipulate the axes. Now I have a hyper-cube (dim >= 4) of data. I have been thinking of things like a dict of Panels, or tuples as panel entries. I wonder if there is a high-dim panel thing in Python.
... | 0 | 1 | 2,444 |
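A minimal sketch of the xarray suggestion above. The axis names, sizes, and coordinate labels are invented for illustration; only the `DataArray`/`sel` API itself comes from xarray:

```python
import numpy as np
import xarray as xr

# A 4-D labelled "hyper-cube" with pandas-like indexing (hypothetical axes).
cube = xr.DataArray(
    np.random.rand(3, 4, 2, 5),
    dims=("time", "asset", "field", "scenario"),
    coords={"field": ["price", "volume"]},
)

# Smart indexing by label, much like a pandas Panel:
prices = cube.sel(field="price")
print(prices.shape)  # (3, 4, 5)
```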
0 | 37,313,404 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-05-19T03:10:00.000 | 1 | 2 | 0 | How to convert two channel audio into one channel audio | 37,313,320 | 0.099668 | python,audio | I handle this by using Matlab; Python can do the same: (left_channel + right_channel) / 2.0 | I am playing around with some audio processing in python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to ... | 0 | 1 | 3,971 |
0 | 37,313,414 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2016-05-19T03:10:00.000 | 1 | 2 | 0 | How to convert two channel audio into one channel audio | 37,313,320 | 1.2 | python,audio | To convert any stereo audio to mono, what I have always seen is the following:
For each pair of left and right samples:
Add the values of the samples together in a way that will not overflow
Divide the resulting value by two
Use this resulting value as the sample in the mono track - make sure to round it properly if ... | I am playing around with some audio processing in python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to ... | 0 | 1 | 3,971 |
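A short NumPy sketch of the accepted recipe above, assuming an int16 PCM signal shaped (2, n_samples) (the shape, dtype, and random stand-in data are assumptions):

```python
import numpy as np

# Hypothetical stereo signal: 2 channels of int16 PCM samples.
stereo = np.random.randint(-2**15, 2**15, size=(2, 1000), dtype=np.int16)

# Average in float64 so the addition cannot overflow, round properly,
# then cast back to the original sample type.
mono = np.round(stereo.astype(np.float64).mean(axis=0)).astype(np.int16)
```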
0 | 37,333,670 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-05-19T18:40:00.000 | 0 | 1 | 0 | Annotating bokeh chart plot | 37,331,559 | 1.2 | python,plot,bokeh,glyph | It's always possible to use .add_glyph directly, but it is a bit of a pain. The feature to add all the "glyph methods" e.g. .circle, .rect, etc. is in GitHub master, and will be available in the upcoming 0.12 release. | Is it possible to annotate or add markers of any form to Bokeh charts (specifically Bar graphs)? Say I want to add an * (asterisks) on top of a bar; is this possible? | 0 | 1 | 240 |
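For illustration, a hedged sketch of the glyph-method route the answer mentions (Bokeh 0.12+). The bar data and the 0.2 vertical offset are made up:

```python
from bokeh.plotting import figure, show

x = [1, 2, 3]          # hypothetical bar positions
top = [4, 7, 5]        # hypothetical bar heights

p = figure()
p.vbar(x=x, top=top, width=0.8)
# Draw an asterisk just above each bar using the text glyph method.
p.text(x=x, y=[t + 0.2 for t in top], text=["*"] * len(x), text_align="center")
show(p)
```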
0 | 37,352,954 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-05-20T17:49:00.000 | 1 | 1 | 0 | Python arrays for objects | 37,352,895 | 0.197375 | python,multidimensional-array | NumPy arrays actually do allow non-numerical contents, so you can just use NumPy. | I would like to make a numpy-like multi-dimensional array of non-numerical objects in Python. I believe NumPy arrays only allow numerical values. Lists of lists are much less convenient to index; for example, I'd like to be able to ask for myarray[1,:,2], which requires much more complicated calls with lists of lists.
I... | 0 | 1 | 42 |
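A tiny sketch of the object-dtype approach from the answer above; the shape and contents are arbitrary:

```python
import numpy as np

# A 3-D NumPy array whose cells hold arbitrary Python objects.
arr = np.empty((2, 3, 4), dtype=object)
arr[0, :, 2] = ["a", "b", "c"]
arr[1, 2, 3] = {"any": "object"}

# The convenient multi-axis indexing from the question works as usual:
print(arr[1, :, 2])
```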
0 | 37,416,493 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2016-05-23T08:52:00.000 | 4 | 1 | 0 | What is the optimal topic-modelling workflow with MALLET? | 37,386,595 | 1.2 | python,r,text-mining,lda,mallet | Thank you for this thorough summary!
As an alternative to topicmodels, try the mallet package in R. It runs Mallet in a JVM directly from R and allows you to pull out results as R tables. I expect to release a new version soon, and compatibility with tm constructs is something others have requested.
To clarify, it's a g... | Introduction
I'd like to know what other topic modellers consider to be an optimal topic-modelling workflow all the way from pre-processing to maintenance. While this question consists of a number of sub-questions (which I will specify below), I believe this thread would be useful for myself and others who are interest... | 0 | 1 | 1,478 |
0 | 53,760,310 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-05-24T12:15:00.000 | -1 | 4 | 0 | f1 score of all classes from scikits cross_val_score | 37,413,302 | -0.049958 | python,scikit-learn,cross-validation | For individual scores of each class, use this:
f1 = f1_score(y_test, y_pred, average=None)
print("f1 list non intent:", f1) | I'm using cross_val_score from scikit-learn (package sklearn.cross_validation) to evaluate my classifiers.
If I use f1 for the scoring parameter, the function will return the f1-score for one class. To get the average I can use f1_weighted but I can't find out how to get the f1-score of the other class. (precision and ... | 0 | 1 | 13,883 |
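One hedged way to get a per-class f1 under cross-validation (rather than one averaged score per fold) is to collect out-of-fold predictions first. The toy data, classifier, and the `model_selection` import path (a recent scikit-learn) are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=200, random_state=0)  # toy data
clf = LogisticRegression()

# Each sample is predicted while held out, then scored once for all classes.
y_pred = cross_val_predict(clf, X, y, cv=5)
print(f1_score(y, y_pred, average=None))  # one f1 value per class
```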
0 | 37,845,330 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-05-24T18:17:00.000 | 1 | 1 | 0 | H5 file with images in Python: Want to randomly select without replacement | 37,421,035 | 0.197375 | python,file,vectorization,hdf5,h5py | An elegant way for sampling without replacement is computing a random permutation of the numbers 1..N (numpy.random.permutation) and then using chunks of size M from it.
Storing data in an h5py file is kind of arbitrary. You could use a single higher dimensional data set or a group containing the N two dimensional data... | I have familiarized myself with the basics of H5 in python. What I would like to do now is two things:
Write images (numpy arrays) into an H5 file.
Once that is done, be able to pick out $M$ randomly.
What is meant here is the following: I would like to write a total of $N=100000$ numpy arrays (images), into one H5 ... | 0 | 1 | 274 |
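A minimal sketch of the permutation idea from the answer above, using a single big h5py dataset. The file name, image shape, and N, M are assumptions:

```python
import h5py
import numpy as np

N, M = 100000, 64                 # total images, batch size (assumed)
with h5py.File("images.h5", "w") as f:
    f.create_dataset("images", shape=(N, 32, 32), dtype="uint8")
    # ... fill the dataset in chunks here ...

# Sampling without replacement: one permutation, consumed M at a time.
perm = np.random.permutation(N)
with h5py.File("images.h5", "r") as f:
    idx = np.sort(perm[:M])       # h5py fancy indexing wants increasing indices
    batch = f["images"][idx, ...]
```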
0 | 37,522,101 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-05-24T20:25:00.000 | 1 | 2 | 0 | orderedDict vs pandas series | 37,423,208 | 0.099668 | python,python-3.x,pandas,series,ordereddictionary | OrderedDict is implemented as part of the Python collections library. These collections are very fast containers for specific use cases. If you are looking for only dictionary-related functionality (like ordering, in this case), I would go for that. While you say you are going to do more deep analysis in an area where pand... | Still new to this, sorry if I ask something really stupid. What are the differences between a Python ordered dictionary and a pandas series?
The only difference I could think of is that an orderedDict can have nested dictionaries within the data. Is that all? Is that even true?
Would there be a performance differenc... | 0 | 1 | 1,177 |
0 | 38,984,971 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 0 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 0 | python-2.7,importerror,quandl | I am following a Youtube tutorial where they use 'Quandl'. It should be quandl. Change it and it won't throw an error. | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 38,992,511 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 12 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 1 | python-2.7,importerror,quandl | Use the syntax below, all in lower case:
import quandl | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 60,087,731 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 1 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 0.013333 | python-2.7,importerror,quandl | Check whether it exists among the installed modules by typing
pip list
in the command prompt, and if there is no module with the name quandl, then type
pip install quandl
in the command prompt. Worked for me in Jupyter. | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 59,360,553 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 0 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 0 | python-2.7,importerror,quandl | quandl has now changed; you require an API key. Go to the site and register your email.
import quandl
quandl.ApiConfig.api_key = 'your_api_key_here'
df = quandl.get('WIKI/GOOGL') | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 52,893,601 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 0 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 0 | python-2.7,importerror,quandl | With an Anaconda\Jupyter notebook, go to the install directory (C:\Users\<USER_NAME>\AppData\Local\Continuum\anaconda3), where <USER_NAME> is your logged-in username. Then execute in the command prompt:
python -m pip install Quandl
import quandl | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 43,598,162 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | 0 | 15 | 0 | import error; no module named Quandl | 37,426,196 | 0 | python-2.7,importerror,quandl | Install quandl for version 3.1.0.
Check the package path where you installed it, and make sure its name is quandl, not Quandl (my previous name was Quandl, so when I used import quandl, it always said "no module named quandl").
If your package's name is Quandl, delete it and reinstall it. (I use anaconda to install my package, it's c... | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 45,563,219 | 0 | 1 | 0 | 0 | 7 | false | 20 | 2016-05-25T00:55:00.000 | -1 | 15 | 0 | import error; no module named Quandl | 37,426,196 | -0.013333 | python-2.7,importerror,quandl | Sometimes the quandl module is present as "Quandl" in the following location:
C:\Program Files (x86)\Anaconda\lib\site-packages\Quandl.
But the scripts from Quandl refer to quandl in import statements.
So, renaming the folder Quandl to quandl worked for me.
New path:
"C:\Program Files (x86)\Anaconda\lib\site-packages\quandl... | I am trying to run the Quandl module on a virtualenv in which I have installed only the packages pandas and Quandl.
I am running Python 2.7.10 - I have uninstalled all other Python versions, but it's still giving me the issue of 'ImportError: No module named Quandl'. Do you know what might be wrong? Thanks | 0 | 1 | 49,345 |
0 | 37,480,887 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-27T10:10:00.000 | 0 | 1 | 0 | how to access a particular column of a particular csv file from many imported csv files | 37,480,728 | 0 | python,r,csv | You can use list.data[[1]]$name1 | Suppose that I have multiple .csv files with columns of the same kind. If I wanted to access data of a particular column from a specified .csv file, how is it possible?
All .csv files have been stored in list.data.
For example:
Suppose that here, list.data[1] gives me the first .csv file.
How will I access a column of this fi... | 0 | 1 | 51 |
0 | 37,485,073 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-27T13:36:00.000 | 1 | 1 | 0 | Import error: Error importing scipy | 37,484,932 | 0.197375 | python-2.7,pandas,matplotlib,anaconda,importerror | The csv_test.py file you have "added by hand" tries to import scipy; as the error message says, don't do that.
You can probably place your test code in a private location, without messing with the scipy installation directory.
I suggest uninstalling and reinstalling at least pandas and scipy, possibly everything, to e... | import pandas
Traceback (most recent call last):
File "", line 1, in
File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/init.py", line 37
, in
import pandas.core.config_init
File "/Users/.../anaconda/lib/python2.7/site-packages/pandas/core/config_init.py",
line 18, in
from ... | 0 | 1 | 466 |
0 | 37,596,921 | 0 | 0 | 0 | 1 | 1 | false | 4 | 2016-05-27T20:59:00.000 | 0 | 3 | 0 | loss of precision when using pandas to read excel | 37,492,173 | 0 | python,excel,pandas,dataframe,precision | Excel might be truncating your values, not pandas. If you export to .csv from Excel and are careful about how you do it, you should then be able to read the file with pandas.read_csv and maintain all of your data. pandas.read_csv also has an undocumented float_precision kwarg that may or may not be useful. | I tried to use pandas to read an Excel sheet into a dataframe, but for floating-point columns the data is read incorrectly. I use the function read_excel() to do the task.
In Excel, the value is 225789.479905466, while in the dataframe the value is 225789.47990546614, which creates a discrepancy when I import data from ... | 0 | 1 | 4,533 |
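If you take the CSV route the answer suggests, a hedged one-liner (the file name is hypothetical):

```python
import pandas as pd

# float_precision="round_trip" asks pandas for the slower parser that
# reproduces the printed decimal text exactly.
df = pd.read_csv("exported_from_excel.csv", float_precision="round_trip")
```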
0 | 37,499,607 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-05-28T10:19:00.000 | 0 | 1 | 0 | Classification of sparse data | 37,497,795 | 1.2 | python,r,classification,data-mining,text-classification | There is nothing wrong with using this coding strategy for text and support vector machines.
For your actual objective:
support vector regression (SVR) may be more appropriate
beware of the journal impact factor. It is very crude. You need to take temporal aspects into account, and much very good work is not published... | I am struggling with the best choice for a classification/prediction problem. Let me explain the task - I have a database of keywords from abstracts of different research papers; I also have a list of journals with specified impact factors. I want to build a model for article classification based on their keywords, th... | 0 | 1 | 539 |
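To make the SVR suggestion concrete, a hedged sketch with a bag-of-keywords encoding; the keyword strings and impact factors below are invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Hypothetical training data: keyword strings and journal impact factors.
keywords = ["neural network classification", "protein folding dynamics",
            "graph theory optimisation", "deep learning vision"]
impact = [3.2, 5.1, 1.4, 4.8]

model = make_pipeline(TfidfVectorizer(), SVR())
model.fit(keywords, impact)
print(model.predict(["neural network vision"]))
```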
0 | 37,506,002 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2016-05-28T21:51:00.000 | 0 | 2 | 0 | Finding multiple roots on an interval with Python | 37,504,035 | 0 | python,optimization,scipy | Depends on your function, but it might be possible to solve symbolically using SymPy. That would give all roots. It can find eigenvalues symbolically if necessary.
Finding all extrema is the same as finding all roots of the derivative of your function, so it won't be any easier than finding all roots (as WarrenWeckesse... | I'm looking for an efficient method to find all the roots of a function f on an interval [a,b].
The problem I have is that all the nice methods from scipy.optimize require either that f(a) and f(b) have different signs, or that I provide an initial guess x0, but I know nothing about my roots before running the code.
N... | 0 | 1 | 1,874 |
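A small sketch of the SymPy route mentioned above, for a function it can actually solve in closed form (the polynomial and interval are just examples; transcendental functions may come back unsolved as a ConditionSet):

```python
import sympy as sp

x = sp.symbols("x")
f = x**3 - 2 * x                      # example function with several roots

# All roots on the interval, found symbolically: no sign changes or
# initial guesses required.
roots = sp.solveset(f, x, domain=sp.Interval(-10, 10))
print(roots)                          # {-sqrt(2), 0, sqrt(2)}
```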
0 | 37,531,473 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-05-29T03:49:00.000 | 0 | 2 | 0 | Error importing numpy & graphlab after installing ipython | 37,505,970 | 0 | python,python-2.7,ipython,graphlab | Please remember that console applications and GUI application do not share the same environment on OS X.
This also means that if you install a Python package from the console, this would probably not be visible to PyCharm.
Usually you need to install packages using PyCharm in order to be able to use them. | I have got a strange issue.
I am now using graphlab/numpy to develop a project via Pycharm 5. OS is Mac OS 10.11.5. I created a p2.7 virtual environment for the project. Programme runs well. But after I install ipython, I can no longer import graphlab and numpy correctly.
Error message:
AttributeError: 'module' object... | 0 | 1 | 398 |
0 | 37,516,498 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-05-29T08:26:00.000 | 1 | 1 | 0 | Is there any way to use libgpuarray with Intel GPU? | 37,507,758 | 1.2 | python,gpu,gpgpu,theano | libgpuarray has been made to support OpenCL, but we haven't had time to finish it. Many things work, but we don't have the time to make sure it works everywhere.
In any case, you must find an OpenCL version that supports that GPU, install it, and reinstall libgpuarray to have it use it.
Also, I'm not sure that GPU will g... | I'm looking for a way to use an Intel GPU as a GPGPU with Theano.
I've already installed Intel OpenCL and libgpuarray, but the test code 'python -c "import pygpu;pygpu.test()"' crashed the process. And I found out the devname method caused it. It seems there would be a lot more errors.
Is it easy to fix them so it works well? I u... | 0 | 1 | 920 |
0 | 43,774,149 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-30T02:03:00.000 | 0 | 1 | 0 | Backtesting with Data | 37,516,730 | 0 | python,pandas,back-testing | The problem in optimising the loop is that, say, you have 3 years of data and only 3 events you are interested in. Then you can use event-based backtesting with only 3 iterations.
The problem here is that you have to precompute the events, which will need the data anyway, and most of the time you will need statisti... | In a hypothetical scenario where I have a large amount of data that is received, and is generally in chronological order upon receipt, is there any way to "play" the data forward or backward, thereby recreating the flow of new information on demand? I know that in a simplistic sense, I can always have a script (with wh... | 0 | 1 | 392 |
0 | 37,538,260 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-05-31T06:11:00.000 | 0 | 1 | 0 | giving more weight to a feature using sklearn svm | 37,538,068 | 0 | python,svm | You can create one more feature in your training data: if the title of the book contains your predefined words, set it to one, otherwise zero. | I'm using SVM to predict the label from the title of a book. However, I want to give more weight to some predefined features. For example, if the title of the book contains words like fairy, Alice, I want to label them as children's books. I'm using word n-gram SVM. Please suggest how to achieve this using sklearn. | 0 | 1 | 156 |
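A hedged sketch of that idea: stack a hand-made, up-scaled binary indicator next to the n-gram features. The toy titles, keyword set, and the 10.0 scale factor are all assumptions:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

titles = ["alice and the fairy garden", "a history of rome"]  # toy titles
labels = [1, 0]                     # 1 = children's book
keywords = {"fairy", "alice"}

X_ngrams = CountVectorizer(ngram_range=(1, 2)).fit_transform(titles)

# Binary indicator, multiplied by a large constant so the SVM weights it more.
flag = np.array([[10.0 * any(w in t.split() for w in keywords)] for t in titles])
X = sp.hstack([X_ngrams, sp.csr_matrix(flag)])

clf = LinearSVC().fit(X, labels)
```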
0 | 37,543,931 | 0 | 0 | 0 | 0 | 2 | false | 32 | 2016-05-31T10:50:00.000 | 0 | 10 | 0 | How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0 | 37,543,647 | 0 | python,pandas,dataframe | Use: df.fillna(0)
to fill NaN with 0. | I have a dataframe with 71 columns and 30597 rows. I want to replace all non-nan entries with 1 and the nan values with 0.
Initially I tried a for-loop over each value of the dataframe, which was taking too much time.
Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to its... | 0 | 1 | 42,508 |
0 | 65,940,835 | 0 | 0 | 0 | 0 | 2 | false | 32 | 2016-05-31T10:50:00.000 | 0 | 10 | 0 | How to replace all non-NaN entries of a dataframe with 1 and all NaN with 0 | 37,543,647 | 0 | python,pandas,dataframe | Generally there are two steps: substitute all non-NaN values, and then substitute all NaN values.
dataframe.where(~dataframe.notna(), 1) - this line will replace all non-NaN values with 1.
dataframe.fillna(0) - this line will replace all NaNs with 0.
Side note: if you take a look at the pandas documentation, .where replaces a... | I have a dataframe with 71 columns and 30597 rows. I want to replace all non-NaN entries with 1 and the NaN values with 0.
Initially I tried a for-loop over each value of the dataframe, which was taking too much time.
Then I used data_new=data.subtract(data) which was meant to subtract all the values of the dataframe to its... | 0 | 1 | 42,508 |
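A compact alternative that covers both substitutions at once, assuming a pandas recent enough to have .notna() (older versions spell it .notnull()):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan], "b": [np.nan, 2.0]})

# notna() is True for non-NaN entries; casting bools to int gives 1/0.
binary = df.notna().astype(int)
print(binary)
```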
0 | 37,553,823 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-31T17:24:00.000 | 0 | 1 | 0 | B-splines with Scipy : can I add a datapoint without full recompute? | 37,552,035 | 0 | python,numpy,scipy,bspline | Short answer: No.
Spline construction is a global process, so if you add a data point, you really need to recompute the whole spline. Which involves solving an N-by-N linear system etc.
If you're adding many knots sequentially, you probably can construct a process where you're using a factorization of the colocation m... | I have a bspline created with scipy.interpolate.splrep with points (x_0,y_0) to (x_n,y_n). Usual story. But I would like to add a data point (x_n+1,y_n+1) and appropriate knot without recomputing the entire spline. Can anyone think of a way of doing this elegantly?
I could always take the knot list returned by splrep,... | 0 | 1 | 56 |
0 | 43,214,847 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-05-31T21:24:00.000 | 6 | 3 | 0 | Python OpenCV import error with python 3.5 | 37,555,890 | 1 | python,opencv | No need to change the Python version; you can just use the pip command.
Open cmd (admin mode) and type
pip install opencv-python | I am having some difficulties installing opencv with python 3.5.
I have linked the cv files, but upon import cv2 I get an error saying ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type or more specifically:
/Library/Framework... | 0 | 1 | 12,712 |
0 | 37,556,551 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-05-31T22:00:00.000 | 0 | 2 | 0 | Is there a way to prefilter data before using read_csv to download data into pandas dataframe? | 37,556,334 | 0 | python,csv,pandas,dataframe,yelp | Typically yes, load everything, then filter your dataset.
But if you really want to pre-filter, and you're on a Unix-like system, you can prefilter using grep before even starting Python.
A compromise between the two is to write the prefilter using Python and pandas; this way you download the data and prefilter it (write p... | I'm working with the yelp dataset and of course it is millions of entries long, so I was wondering if there is any way to just download what you need, or do you have to pick away at it manually? For example, yelp has reviews on everything from auto repair to beauty salons, but I only want the reviews on the re... | 0 | 1 | 412 |
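A hedged sketch of that compromise using pandas chunking; the file name and column name are hypothetical stand-ins for the real Yelp export:

```python
import pandas as pd

# Stream the large CSV in chunks so the full file never sits in RAM,
# keeping only rows whose category mentions restaurants.
chunks = pd.read_csv("yelp_reviews.csv", chunksize=100000)
restaurants = pd.concat(
    chunk[chunk["categories"].str.contains("Restaurants", na=False)]
    for chunk in chunks
)
```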
0 | 37,557,227 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-05-31T23:24:00.000 | -1 | 2 | 0 | How do I get python to import pandas? | 37,557,174 | -0.099668 | python,python-3.x,pandas,packages | Anaconda includes its own version of Python. You have to point your system environment path at Anaconda's Python instead of the former one to avoid conflicts. Also, if you want to make the whole process easy, it is recommended to use PyCharm, which will ask you to choose the Python interpreter you want. | I installed Python 3.5.1 from www.python.org. Everything works great. Except that you can't install pandas using pip (it needs Visual Studio to compile, which I don't have). So I installed Anaconda (www.continuum.io/downloads). Now I can see pandas in the list of installed modules, but when I run python programs... | 0 | 1 | 1,823 |
0 | 37,574,806 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-06-01T14:11:00.000 | 0 | 4 | 0 | How can I make my neural network emphasize that some data is more important than the rest? | 37,571,165 | 0 | python,neural-network,tensorflow,data-science | To emphasize any elements of your input vector, you will have to give your neural network less information about the unimportant ones.
Try to encode the less important 285 numbers into one number (or any vector size you like) with a multilayer neural network, then use that number with the other 4 numbers a... | I looked around online but couldn't find anything, but I may well have missed a piece of literature on this. I am running a basic neural net on a 289-component vector to produce a 285-component vector. In my input, the last 4 pieces of data are critical to change the rest of the input into the resultant 285 for the o... | 0 | 1 | 1,982 |
0 | 37,575,927 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-06-01T17:32:00.000 | 0 | 2 | 0 | What Arguments to use while doing a KS test in python with student's t distribution? | 37,575,270 | 0 | python,scipy,kolmogorov-smirnov | The args argument must be a tuple but it can be a single variable. You can do your test using ks_statistic, pvalue = scipy.stats.kstest(x, 't', (10,)) if 10 is the degrees of freedom. | I have data regarding metallicity in stars, I want to compare it with a student's t distribution. To do this I am running a Kolmogorov-Smirnov test using scipy.stats.kstest on python
KSstudentst = scipy.stats.kstest(data,"t",args=(a,b))
But I am unable to find what the arguments are supposed to be. I know the stude... | 0 | 1 | 3,626 |
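Putting the answer's call into a runnable sketch; the sample data is simulated and 10 degrees of freedom is just an example value:

```python
import numpy as np
from scipy import stats

data = np.random.standard_t(df=10, size=500)   # stand-in for the real data

# args is the tuple of shape parameters of the reference distribution;
# for scipy.stats.t that is (degrees_of_freedom,).
ks_statistic, pvalue = stats.kstest(data, "t", args=(10,))
print(ks_statistic, pvalue)
```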
0 | 37,580,756 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-06-01T23:14:00.000 | 1 | 4 | 0 | numpy Boolean array representation of an integer | 37,580,272 | 0.049958 | python,numpy | Maybe it is not the easiest, but a compact way is
from numpy import array
array([i for i in bin(5)[2:]]) == '1' | What's the easiest way to produce a numpy Boolean array representation of an integer? For example, map 6 to np.array([False, True, True], dtype=np.bool). | 0 | 1 | 653 |
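An alternative bit-twiddling sketch that matches the question's least-significant-bit-first example for 6; the width parameter is an assumption:

```python
import numpy as np

def int_to_bool_array(n, width):
    # Shift n right by each bit position (LSB first) and keep the low bit.
    return ((n >> np.arange(width)) & 1).astype(bool)

print(int_to_bool_array(6, 3))   # [False  True  True]
```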
0 | 37,593,003 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-06-02T13:00:00.000 | 1 | 2 | 0 | Can I export RapidMiner model to integrate with python? | 37,592,608 | 1.2 | python,machine-learning,scikit-learn,rapidminer | Practically, I would say no - just train your model in sklearn from the beginning if that's where you want it.
Your RapidMiner model is some kind of object. The two formats you are exporting as are just storage methods. Sklearn models are a different kind of object. You can't directly save one and load it into the o... | I have trained a classifier model using RapidMiner after a trying a lot of algorithms and evaluate it on my dataset.
I also exported the model from RapidMiner as XML and pkl files, but I can't read them in my Python program (scikit-learn).
Is there any way to import RapidMiner classifier/model in a python program and use ... | 0 | 1 | 2,438 |
0 | 37,614,696 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-02T19:54:00.000 | 0 | 1 | 0 | anaconda env couldn't import any of the packages | 37,600,960 | 0 | python,python-3.x,anaconda,conda | Finally, I figured out the answer. It is all about the PATH variable. It was pointing to the OS Python rather than the Anaconda Python. Thanks all for your time. | pip list inside conda env:
pip list
matplotlib (1.4.0)
nose (1.3.7)
numpy (1.9.1)
pandas (0.15.2)
pip (8.1.2)
pyparsing (2.0.1)
python-dateutil (2.4.1)
pytz (2016.4)
scikit-learn (0.15.2)
scipy (0.14.0)
setuptools (21.2.1)
six (1.10.0)
wheel (0.29.0)
which python:
/Users/xxx/anaconda/envs/pythonenvname/bin/python
(pyth... | 0 | 1 | 204 |
0 | 48,292,419 | 0 | 0 | 0 | 0 | 1 | false | 25 | 2016-06-03T10:50:00.000 | 34 | 3 | 0 | What are ways to speed up seaborns pairplot | 37,612,434 | 1 | python,performance,parallel-processing,seaborn | Rather than parallelizing, you could downsample your DataFrame to say, 1000 rows to get a quick peek, if the speed bottleneck is indeed occurring there. 1000 points is enough to get a general idea of what's going on, usually.
i.e. sns.pairplot(df.sample(1000)). | I have a dataframe with 250.000 rows but 140 columns and I'm trying to construct a pair plot. of the variables.
I know the number of subplots is huge, as well as the time it takes to do the plots. (I'm waiting for more than an hour on an i5 with 3,4 GHZ and 32 GB RAM).
Remebering that scikit learn allows to construct r... | 0 | 1 | 15,683 |
0 | 60,295,986 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2016-06-03T16:06:00.000 | 3 | 4 | 0 | PySpark computing correlation | 37,618,977 | 0.148885 | python,apache-spark,pyspark,apache-spark-sql,apache-spark-mllib | df.stat.corr("column1","column2") | I want to use pyspark.mllib.stat.Statistics.corr function to compute correlation between two columns of pyspark.sql.dataframe.DataFrame object. corr function expects to take an rdd of Vectors objects. How do I translate a column of df['some_name'] to rdd of Vectors.dense object? | 0 | 1 | 31,767 |
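For completeness, a hedged sketch showing both the DataFrame route in the answer and the MLlib route the question asks about (assuming Spark 2.x for SparkSession; the column names and data are invented):

```python
from pyspark.sql import SparkSession
from pyspark.mllib.stat import Statistics

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1.0, 2.0), (2.0, 4.1), (3.0, 6.2)], ["c1", "c2"])

# DataFrame API: one call, no RDD conversion needed.
print(df.stat.corr("c1", "c2"))

# MLlib API: two RDDs of plain floats work for the two-series form of corr,
# so no Vectors.dense conversion is required.
x = df.select("c1").rdd.map(lambda row: float(row[0]))
y = df.select("c2").rdd.map(lambda row: float(row[0]))
print(Statistics.corr(x, y, method="pearson"))
```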
0 | 37,682,991 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-04T16:13:00.000 | 1 | 1 | 0 | how to close tensorboard server with jupyter notebook | 37,632,393 | 0.197375 | python,tensorflow,jupyter-notebook,tensorboard | The jupyter stuff seems fine. In general, if you don't close TensorBoard properly, you'll find out as soon as you try to turn on TensorBoard again and it fails because port 6006 is taken. If that isn't happening, then your method is fine.
As regards the logdir, passing in the top level logdir is generally best because ... | what is the proper way to close tensorboard with jupyter notebook?
I'm coding tensorflow in my jupyter notebook. To launch, I'm doing:
1. !tensorboard --logdir=logs/
2. open a new browser tab and type in localh... | 0 | 1 | 2,652 |
0 | 37,693,149 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2016-06-07T17:54:00.000 | 0 | 3 | 0 | TFLearn pip installation bug | 37,686,139 | 0 | python,tensorflow | The last tflearn update had a compatibility issue with old TensorFlow versions (like mrry said, caused by 'variance_scaling_initializer()', which was only compatible with TensorFlow 0.9).
That error has already been fixed, so you can just update TFLearn and it should work fine with any TensorFlow version over 0.7. | I've tried installing tflearn through pip as follows
pip install tflearn
and now when I open python, the following happens:
>>> import tflearn
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/lib/python2.7/site-packages/tflearn/__init__.py", line 22, in <module>
from . im... | 0 | 1 | 3,984 |
0 | 65,464,031 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2016-06-08T07:00:00.000 | 1 | 2 | 0 | Python and OpenCV - getting the duration time of a video at certain points | 37,695,376 | 0.099668 | python,opencv,video | You can simply measure a certain position in the video in milliseconds using
time_milli = cap.get(cv2.CAP_PROP_POS_MSEC)
and then divide time_milli by 1000 to get the time in seconds. | Let's say I have made a program to detect a green ball in a video. Whenever a green ball is detected, I want to print out the elapsed time of the video at the moment of detection. Is that possible? | 0 | 1 | 5,963 |
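A runnable sketch around that call; the file name, the HSV range, and the pixel-count threshold for "green ball detected" are all assumptions:

```python
import cv2

def looks_green(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 70, 70), (80, 255, 255))  # rough green band
    return cv2.countNonZero(mask) > 500                    # assumed threshold

cap = cv2.VideoCapture("video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if looks_green(frame):
        print("green at %.2f s" % (cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0))
cap.release()
```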
0 | 46,964,844 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-08T12:33:00.000 | 0 | 1 | 0 | Convert a 16-bit image from rgb to Lab without losing precision | 37,702,630 | 0 | python,image,16-bit,lab-color-space | The skimage version of rgb2lab accepts floating-point input where the color range is 0-1. You can normalize your 16-bit image to this range and use the rgb2lab routine. | I have a 16-bit image in ProPhoto RGB color space. For equalization I want to convert it to Lab colorspace and then equalize the L-channel without losing precision. I have used skimage.color.rgb2lab but this converts the image to float64.
Help me!! | 0 | 1 | 455 |
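A short sketch of the normalization step. Note that rgb2lab assumes sRGB primaries, so a true ProPhoto image would also need a primaries conversion first; the random image below merely stands in for real data:

```python
import numpy as np
from skimage import color

img16 = (np.random.rand(4, 4, 3) * 65535).astype(np.uint16)  # stand-in image

# Scale to float in [0, 1] so rgb2lab's floating-point path keeps precision.
lab = color.rgb2lab(img16.astype(np.float64) / 65535.0)
```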
0 | 37,721,273 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-06-09T08:26:00.000 | 0 | 1 | 0 | Efficient representing sub-graphs (data structure) in Python | 37,720,588 | 0 | python,performance,data-structures,graph | The key point here is the data format of the graphs already generated by your algorithm. Does it construct a new graph by adding vertices and edges? Is it rewritable? Does it use a given format (matrix, adjacency list, vertex and edge sets, etc.)?
If you have the choice, however, because your subgraphs have a "low" ca... | What is the efficient way of keeping and comparing generated sub-graphs from given input graph G in Python?
Some details:
Input graph G is a directed, simple graph with number of vertices varying from n=100-10000. Number of edges - it can be assumed that maximum would be 10% of complete graph (usually less) so it give... | 0 | 1 | 437 |
0 | 37,729,976 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-09T13:51:00.000 | 0 | 1 | 0 | creating new CSV by duplicating and modifying existing records multiple times from the source CSV | 37,727,899 | 0 | python,csv,hive,apache-pig | It really depends on what you want to achieve and what hardware you use.
If you need to process this file fast and you actually have a real Hadoop cluster (bigger than 1 or 2 nodes), then probably the best way would be to write a Pig script or even a simple Hadoop MapReduce job to process this file. With this approach... | I am a newbie in big data. I have an assignment in which I was given a CSV file, and a date field is one of the fields in that file. The file size is only 10GB, but I need to create a much larger file, 2TB in size, for big data practice purposes, by duplicating the file's content but increasing the date in order, making th... | 0 | 1 | 59 |
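If a Hadoop cluster isn't available, a plain pandas sketch of the duplicate-and-shift idea follows; the file names, the "date" column, and the one-year shift per pass are assumptions:

```python
import os
import pandas as pd

src, dst = "input.csv", "big.csv"          # hypothetical paths
target_bytes = 2 * 1024 ** 4               # 2 TB

shift, first = pd.Timedelta(days=0), True
while not os.path.exists(dst) or os.path.getsize(dst) < target_bytes:
    for chunk in pd.read_csv(src, parse_dates=["date"], chunksize=100000):
        chunk["date"] += shift             # push this copy forward in time
        chunk.to_csv(dst, mode="a", header=first, index=False)
        first = False
    shift += pd.Timedelta(days=365)        # next pass gets a later year
```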
0 | 37,752,325 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-10T15:00:00.000 | 1 | 1 | 0 | K-Means Implementation in Python | 37,751,430 | 0.197375 | python,machine-learning,scikit-learn,computer-science,k-means | Before answering which is better, here is a quick reminder of the algorithm:
"Choose" the number of clusters K
Initialize your first centroids
For each point, find the closest centroid according to a distance function D
When all points are attributed to a cluster, calculate the barycenter of the cluster, which becomes its... | Is it better to implement my own K-means algorithm in Python or use the pre-implemented K-means algorithm from Python libraries such as scikit-learn? | 0 | 1 | 614 |
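For reference, a compact NumPy implementation of exactly those steps (random initialisation, Euclidean distance, and no empty-cluster handling are all simplifying assumptions):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]      # initialise
    for _ in range(n_iter):
        # assign each point to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the barycenter of its cluster
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

In practice scikit-learn's KMeans adds smarter initialisation (k-means++), empty-cluster handling, and C-level speed, which is why reusing it is usually the better choice.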
0 | 37,768,933 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-11T19:33:00.000 | 1 | 1 | 0 | Scikit-learn KNN (K Nearest Neighbors) parallelize using Apache Spark | 37,767,790 | 0.197375 | python,scala,apache-spark,machine-learning,scikit-learn | Well, according to the discussion at https://issues.apache.org/jira/browse/SPARK-2336, MLlib (the Machine Learning Library for Apache Spark) does not have an implementation of KNN.
You could try https://github.com/saurfang/spark-knn. | I have been working on the machine learning KNN (K Nearest Neighbors) algorithm with Python and Python's Scikit-learn machine learning API.
I have created sample code with a toy dataset simply using Python and Scikit-learn, and my KNN is working fine. But as we know, the Scikit-learn API is built to work on a single machine, and henc... | 0 | 1 | 5,059 |
0 | 37,796,440 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-13T08:50:00.000 | 0 | 1 | 0 | Sync Choregraphe and Matlab | 37,785,380 | 0 | python,c++,matlab,nao-robot | Using NAO C++ SDK, it may be possible to make a MEX-FILE in Matlab that "listens" to NAO. Then NAO just has to raise an event in its memory (ALMemory) that Matlab would catch to start running the script. | I have a Wizard of Oz experiment using Choregraphe to make a NAO perform certain tasks running on machine A. The participant interacting with the NAO also interacts with a machine B. When I start the experiment (in Choregraphe on machine A) I want a certain MATLAB script to start on machine B. I.e. Choregraphe will ini... | 0 | 1 | 107 |
0 | 37,821,215 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-06-14T19:36:00.000 | 1 | 2 | 0 | Using Multiple Languages while developing a Spark application | 37,820,668 | 0.099668 | python,scala,apache-spark,pyspark | An experienced developer will be able to pick up a new language and become productive fairly quickly.
I would only consider using the two languages together if:
The deadlines are too tight to allow for the developer to get up to speed,
The integration between the modules is quite limited (and you're confident that won... | I'm working on a project with another person. My part of the project involves analytics with Spark's Machine Learning, while my teammate is using Spark Streaming to pipeline data from the source to the program and out to an interface.
I am planning to use Scala since it has the best support for Spark. However, my teamm... | 0 | 1 | 452 |
0 | 37,846,825 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2016-06-15T07:46:00.000 | 2 | 4 | 0 | Neural network only learns most common training image | 37,829,169 | 0.099668 | python,neural-network,tensorflow | Trying to squeeze blood from a stone!
I'm skeptical that with 4283 training examples your net will learn 62 categories...that's a big ask for such a small amount of data. Especially since your net is not a conv net...and it's forced to reduce its dimensionality to 100 at the first layer. You may as well pca it and sa... | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hi... | 0 | 1 | 379 |
0 | 37,846,462 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2016-06-15T07:46:00.000 | 0 | 4 | 0 | Neural network only learns most common training image | 37,829,169 | 0 | python,neural-network,tensorflow | Which optimizer are you using? If you've only tried gradient descent, try using one of the adaptive ones (e.g. adagrad/adadelta/adam). | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hi... | 0 | 1 | 379 |
0 | 37,829,933 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2016-06-15T07:46:00.000 | 0 | 4 | 0 | Neural network only learns most common training image | 37,829,169 | 0 | python,neural-network,tensorflow | Your learning rate is way too high. It should be around 0.01; you can experiment around that, but 0.5 is too high.
With a high learning rate, the network is likely to get stuck in a configuration and output something fixed, like you observed.
EDIT
It seems the real problem is the unbalanced classes in the dataset. You ca... | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hi... | 0 | 1 | 379 |
0 | 37,876,437 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-06-15T10:33:00.000 | 0 | 1 | 0 | Adding h5 files in a zip to use with PySpark | 37,832,937 | 1.2 | python,pyspark,caffe | Found that you can add the additional files to all the workers by using --files argument in spark-submit. | I am using PySpark 1.6.1 for my spark application. I have additional modules which I am loading using the argument --py-files. I also have a h5 file which I need to access from one of the modules for initializing the ApolloNet.
Is there any way I could access those files from the modules if I put them in the same arch... | 1 | 1 | 84 |
0 | 37,839,374 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-15T15:08:00.000 | 0 | 1 | 1 | How to plot data while it's being processed | 37,839,265 | 0 | python,plot,gnuplot | You can plot the data as it is being processed, but there are a couple of issues that come along with it in terms of efficiency.
Gnuplot needs to do work each time to process your data
Gnuplot needs to wait for your operating system to paint your screen whenever it updates
Your program needs to wait for Gnuplot to do an... | I'm in the process of converting a large (several GBs) bin file to csv format using Python so the resulting data can be plotted. I'm doing this conversion because the bin file is not in a format that a plotting tool/module could understand, so there needs to be some decoding/translation. Right now it seems like Gnuplot... | 0 | 1 | 67 |
0 | 37,890,119 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-06-15T20:33:00.000 | 0 | 2 | 0 | Plotting a point cloud and moving the camera | 37,845,256 | 1.2 | python,matplotlib,gnuplot,visualization | No, gnuplot cannot really move the viewing point, for the good reason that the viewing point is at infinity: all you can do is set an angle and magnification (using set view) and an offset within the viewing window (with set origin). That means, you can move the viewing point on a sphere at infinity, but not among the... | I have a list of points given by their x, y, z coordinates. I would like to plot these points on a computer. I have managed to do this with gnuplot and with the python library matplotlib separately. However for these two solutions, it seems hard to change the 'viewing point', or the point from which the projection of t... | 0 | 1 | 1,343 |
0 | 37,851,865 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-16T06:50:00.000 | 1 | 1 | 0 | low_memory parameter in read_csv function | 37,851,796 | 0.197375 | python,pandas,ipython,spyder | This comes from the docs themselves. Have you read them?
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
... | What does the low_memory parameter do in the read_csv function from the pandas library? | 0 | 1 | 113 |
0 | 37,855,371 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-06-16T09:24:00.000 | 4 | 2 | 0 | Why does accumulate work for numpy.maximum but not numpy.argmax | 37,855,059 | 1.2 | python,numpy,numpy-ufunc | Because max is associative, but argmax is not:
max(a, max(b, c)) == max(max(a, b), c)
argmax(a, argmax(b, c)) != argmax(argmax(a, b), c) | These two look like they should be very much equivalent and therefore what works for one should work for the other? So why does accumulate only work for maximum but not argmax?
EDIT: A natural follow-up question is then how does one go about creating an efficient argmax accumulate in the most pythonic/numpy-esque way? | 0 | 1 | 1,719 |
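On the follow-up in the question, one possible vectorised "argmax accumulate" builds on maximum.accumulate itself; the strict comparison preserves argmax's first-occurrence behaviour on ties (this is a sketch, not the only way):

```python
import numpy as np

def argmax_accumulate(a):
    running_max = np.maximum.accumulate(a)
    # True where a value strictly beats the previous running maximum.
    is_new = np.r_[True, a[1:] > running_max[:-1]]
    return np.maximum.accumulate(np.where(is_new, np.arange(len(a)), 0))

a = np.array([3, 1, 4, 4, 2])
print(argmax_accumulate(a))   # [0 0 2 2 2]
```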
0 | 37,879,801 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-06-17T10:42:00.000 | 1 | 1 | 0 | Creating a 3D grid using X,Y,Z coordinates at cell centers | 37,879,558 | 1.2 | python,numpy,scipy | If your grid is regular:
You have to calculate dx = x[i+1]-x[i], dy = y[i+1]-y[i], dz = z[i+1]-z[i].
Then calculate new arrays of points:
x1[i] = x[i]-dx/2, y1[i] = y[i]-dy/2, z1[i] = z[i]-dz/2.
If the mesh is irregular you have to do the same, but dx, dy, dz have to be defined for every grid cell. | I have a question: I have been given x,y,z coordinate values at the cell centers of a grid. I would like to create a structured grid using these cell-center coordinates.
Any ideas how to do this? | 0 | 1 | 425 |
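A one-dimensional NumPy sketch of that recipe for a regular grid (apply it per axis; the coordinates are made up):

```python
import numpy as np

xc = np.array([0.5, 1.5, 2.5, 3.5])   # cell-center coordinates (example)
dx = xc[1] - xc[0]

# Node coordinates: shift centers back by dx/2 and append the far edge.
x_nodes = np.concatenate([xc - dx / 2, [xc[-1] + dx / 2]])
print(x_nodes)                         # [0. 1. 2. 3. 4.]
```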
0 | 37,905,017 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2016-06-17T21:49:00.000 | 5 | 2 | 0 | Pandas Series: Log Normalize | 37,890,849 | 0.462117 | python,pandas,normalization | If your data is in the range (-1;+1) (assuming you lost the minus in your question) then log transform is probably not what you need. At least from a theoretical point of view, it's obviously the wrong thing to do.
Maybe your data has already been preprocessed (inadequately)? Can you get the raw data? Why do you think ... | I have a Pandas Series that needs to be log-transformed to be normally distributed. But I can't log-transform yet, because there are values = 0 and values below 1 (range 0-4000). Therefore I want to normalize the Series first. I heard of StandardScaler (scikit-learn), Z-score standardization and Min-Max scaling (normalization).
... | 0 | 1 | 64,605 |
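Since the question's data contains zeros, one hedged option is the log(1 + x) transform, which is defined at 0 and needs no prior rescaling (whether it is statistically appropriate depends on the data, as the answer above cautions):

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 0.5, 10, 4000])

# log1p(x) = log(1 + x): maps 0 to 0 and keeps small values finite.
transformed = np.log1p(s)
print(transformed)
```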
0 | 64,930,005 | 0 | 0 | 0 | 0 | 1 | false | 87 | 2016-06-18T02:51:00.000 | -2 | 8 | 0 | Using Keras & Tensorflow with AMD GPU | 37,892,784 | -0.049958 | python,python-2.7,opencl,tensorflow,keras | Technically you can if you use something like OpenCL, but Nvidia's CUDA is much better and OpenCL requires other steps that may or may not work. I would recommend if you have an AMD gpu, use something like Google Colab where they provide a free Nvidia GPU you can use when coding. | I'm starting to learn Keras, which I believe is a layer on top of Tensorflow and Theano. However, I only have access to AMD GPUs such as the AMD R9 280X.
How can I set up my Python environment such that I can make use of my AMD GPUs through Keras/Tensorflow support for OpenCL?
I'm running on OSX. | 0 | 1 | 121,342 |
0 | 37,902,814 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-06-18T10:46:00.000 | 1 | 3 | 0 | How to represent a 3D .obj object as a 3D array? | 37,896,090 | 0.066568 | python,c++,arrays,opencv,3d | If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays?
I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ.
Then draw the object with a fragm... | Is there any way by which 3D models can be represented as 3D arrays? Are there any libraries that take .obj or .blend files as input and give an array representation of the same?
I thought that I would slice the object and export the slices to images. I would then use those images in OpenCV to build arrays for each slice... | 0 | 1 | 3,543 |
0 | 37,947,823 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2016-06-21T14:42:00.000 | 6 | 2 | 0 | Neural Network composed of multiple activation functions | 37,947,558 | 1.2 | python,neural-network,scikits,activation-function | A neural network is just a (big) mathematical function. You could even use different activation functions for different neurons in the same layer. Different activation functions allow for different non-linearities which might work better for solving a specific function. Using a sigmoid as opposed to a tanh will only ma... | I am using the sknn package to build a neural network. In order to optimize the parameters of the neural net for the dataset I am using, I use an evolutionary algorithm. Since the package allows me to build a neural net where each layer has a different activation function, I was wondering if that is a practical cho... | 0 | 1 | 5,281 |
0 | 37,971,709 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-06-21T15:39:00.000 | 1 | 2 | 0 | spyder, numpy, anaconda : cannot import name multiarray | 37,948,852 | 0.099668 | python-2.7,numpy,spyder | I solved the problem by executing the spyder version of the python2 environment.
It is located in Anaconda3\envs\python2\Scripts\spyder.exe | I am on Windows 10, 64bits, use Anaconda 4 and I created an environment with python 2.7 (C:/Anaconda3/envs/python2/python.exe)
In this environment, I successfully installed numpy and when I type "python", enter, "import numpy", enter, it works perfectly in the anaconda prompt window.
In spyder however, when I open a py... | 0 | 1 | 833 |
0 | 42,165,864 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-06-21T15:39:00.000 | 1 | 2 | 0 | spyder, numpy, anaconda : cannot import name multiarray | 37,948,852 | 0.099668 | python-2.7,numpy,spyder | I have encountered the same issue. I have followed every possible solution stated on Stack Overflow, but no luck. The cause of the error might be the Python console. I have installed a 3.5 Anaconda, and the default console is the Python 2.7, which I had installed primarily with PyDev. I did this and now it is workin... | I am on Windows 10, 64 bits, use Anaconda 4, and I created an environment with python 2.7 (C:/Anaconda3/envs/python2/python.exe)
In this environment, I successfully installed numpy and when I type "python", enter, "import numpy", enter, it works perfectly in the anaconda prompt window.
In spyder however, when I open a py... | 0 | 1 | 833 |
0 | 38,003,329 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2016-06-21T20:49:00.000 | -3 | 3 | 0 | How to load one line at a time from a pickle file? | 37,954,324 | -0.197375 | python,numpy,pickle | Thanks everyone. I ended up finding a workaround (a machine with more RAM so I could actually load the dataset into memory). | I have a large dataset: 20,000 x 40,000 as a numpy array. I have saved it as a pickle file.
Instead of reading this huge dataset into memory, I'd like to only read a few (say 100) rows of it at a time, for use as a minibatch.
How can I read only a few randomly-chosen (without replacement) lines from a pickle file? | 0 | 1 | 14,112 |
0 | 37,971,491 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-22T14:22:00.000 | 0 | 1 | 0 | Neural Network converging and accurate in training, but failing in real world | 37,970,895 | 0 | python,neural-network,prediction | What Ashafix says was my first thought: you should post your training and test data, and also the data that you use for the 'real world'.
Another problem could be that when you are testing, you are using only previous correct weather data (data you already have), whereas in practice you are using your predictions and cor... | This is my first question on this site. I'm attempting to practice neural networks by having my program predict whether the temperature will go up or down on a given day relative to the previous day. My training data set consists of the previous ten days and whether they went up or down relative to the day before the... | 0 | 1 | 68 |
0 | 37,997,580 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-23T12:48:00.000 | 0 | 3 | 0 | Java calling python function with tensorflow graph | 37,992,129 | 0 | java,python-2.7,tensorflow | I've had the same problem: Java + Python + TensorFlow. I ended up setting up a simple HTTP server. If that's too slow for you, you can shave off some overhead by employing sockets directly. | So I have a neural network in tensorflow (python2.7) and I need to retrieve its output using Java. I have a simple python function getValue(input) which starts the session and retrieves the value. I am open to any suggestions. I believe Jython won't work because tensorflow is not in the library. I need the call to be as f... | 1 | 1 | 1,916 |
0 | 37,996,674 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-06-23T16:02:00.000 | 3 | 3 | 0 | Extremely low p-values from non-parametric tests | 37,996,628 | 1.2 | python,scipy,statistics,distribution,kolmogorov-smirnov | You do not need to worry about something going wrong with the scipy functions. P values that low just mean that it's really unlikely that your samples have the same parent populations.
That said, if you were not expecting the distributions to be (that) different, now is a good time to make sure you're measuring what y... | I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples... | 0 | 1 | 1,931 |
0 | 38,006,265 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-06-23T16:02:00.000 | 0 | 3 | 0 | Extremely low p-values from non-parametric tests | 37,996,628 | 0 | python,scipy,statistics,distribution,kolmogorov-smirnov | Well, you've bumped into a well-known feature of significance tests, which is that the p-value typically goes to zero as the sample size increases without bound. If the null hypothesis is false (which can often be established a priori), then you can get as small a p-value as you wish, just by increasing the sample size... | I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples... | 0 | 1 | 1,931 |
0 | 46,231,053 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-06-26T20:41:00.000 | 0 | 2 | 0 | matplotlib: How to assign the dpi of your figure to meet some set maximum file size? | 38,042,987 | 0 | python,image,matplotlib,save | You should do a for loop over different dpi values in decreasing order; in every iteration save the image, check the file size, and delete the image if the file size is > 15 MB. Once the file size is < 15 MB, break the loop. | Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB. | 0 | 1 | 80 |
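A runnable sketch of that loop; the dpi ladder, the output file name, and the 15 MB cap are the only assumptions:

```python
import os
import matplotlib.pyplot as plt

MAX_BYTES = 15 * 1024 * 1024          # 15 MB cap

fig, ax = plt.subplots()
ax.plot(range(1000))

# Try decreasing dpi values until the saved file fits under the cap.
for dpi in (600, 300, 150, 72):
    fig.savefig("figure.png", dpi=dpi)
    if os.path.getsize("figure.png") <= MAX_BYTES:
        break
```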
0 | 38,043,037 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-06-26T20:41:00.000 | 0 | 2 | 0 | matplotlib: How to assign the dpi of your figure to meet some set maximum file size? | 38,042,987 | 0 | python,image,matplotlib,save | There cannot be such a mechanism, because the file size can only be determined by actually rendering the finished drawing to a file, and that must happen after setting up the figure (where you set the DPI).
How, for example, should anyone know, before rendering your curve, how well it's compressible as PNG? Or how bi... | Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB. | 0 | 1 | 80 |
0 | 38,061,355 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-27T18:27:00.000 | 0 | 2 | 0 | Trying to find frequent patterns in a sequence python | 38,060,783 | 0 | python,algorithm,pattern-matching,sequence,apriori | There are a couple of business decisions you have to take before you will have a workable algorithm. The first and most important decision is what size of set you want. Clearly, if {a, b, ... x} is the most frequent set, then every subset (like {a, x} or {c, d, y}) will occur with at least the same frequen... | I am trying to find frequent (ordered or unordered) patterns in a column. The column contains numeric IDs. For example:
s=[1 2 3 4 1 2 6 7 8 2 1 10 11]
Here 1 2 or 2 1, taken as the same case, is the most frequent set.
Please help me to solve this problem, I could think of apriori, FP algorithms but I don't have any transact... | 0 | 1 | 735 |
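For the fixed case of adjacent, order-insensitive pairs from the question, a tiny counting sketch is enough (larger or non-adjacent set sizes would need a real frequent-itemset miner such as Apriori or FP-growth):

```python
from collections import Counter

s = [1, 2, 3, 4, 1, 2, 6, 7, 8, 2, 1, 10, 11]

# frozenset({1, 2}) == frozenset({2, 1}), so order inside a pair is ignored.
pairs = Counter(frozenset(p) for p in zip(s, s[1:]))
print(pairs.most_common(1))   # [(frozenset({1, 2}), 3)]
```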
0 | 38,065,115 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-06-27T23:42:00.000 | 0 | 1 | 0 | Python: multi-dimensional lists - appending one 2D list into another list | 38,064,885 | 1.2 | python,arrays,list,multidimensional-array | Python's fairly friendly about this sort of thing and will let you have lists as elements of lists . Here's an example of one way to do it.
TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']]
If you entered this straight into the python interpreter, you'd define TableA as a list with two elements. Both of these ele... | I'm looking to create a master array/list that takes several two dimensional lists and integrates them into the larger list.
For example I have a TableA[] which has dates as one array/list and prices as another array/list. I have another as TableB[] which has the same. TableA[0][0] has the first date; TableA[0][1] has ... | 0 | 1 | 1,397 |
0 | 38,067,029 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-28T00:57:00.000 | 0 | 1 | 0 | Tensorflow Copy Weights Issue | 38,065,448 | 0 | python-3.x,neural-network,tensorflow | If you could have your code/ more detail here that would be beneficial. However, you can return the session you're using to train N1 and access it while you want to train N2. | I am using Tensorflow 0.8 to train the Deep Neural Networks. Currently, I encounter an issue that I want to define two exact same Neural Networks N1 and N2, and I train N1, during the training loop, I copy updated weights from N1 to N2 every 4 iterations. In fact, I know there is way using tf.train.saver.save() to save... | 0 | 1 | 539 |
0 | 38,072,512 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-28T08:52:00.000 | 0 | 1 | 0 | Plot 2 boxplots , each from different pandas dataframe in a figure? | 38,071,436 | 0 | python,pandas,matplotlib,boxplot | Use return_type='axes' to get data1.boxplot to return a matplotlib Axes object. Then pass that axes to the second call to boxplot using ax=ax. This will cause both boxplots to be drawn on the same axes.
Alternatively, if you just want them plotted side by side, use matplotlib subplots. | I want to plot boxplots for each of the dataframes side by side. Below is an example dataset.
data 1:
id | type | activity | feature1
1 | A | ACTIVE | 12
2 | B | INACTIVE | 10
3 | C | ACTIVE | 9
data 2:
id | type | activity | feature1
1 | A | ACTIVE | 13
2 | B | INACTIVE | 14
3 | C | ACTIVE | 15
First boxplot should be to... | 0 | 1 | 865 |
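A sketch of the side-by-side approach using the example data (feature values taken from the question; the layout choices are mine):

```python
import matplotlib.pyplot as plt
import pandas as pd

data1 = pd.DataFrame({'feature1': [12, 10, 9]})
data2 = pd.DataFrame({'feature1': [13, 14, 15]})

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
data1.boxplot(column='feature1', ax=ax1)  # first boxplot on the left axes
data2.boxplot(column='feature1', ax=ax2)  # second boxplot on the right axes
plt.show()
```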
0 | 38,833,074 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-28T13:59:00.000 | 0 | 1 | 0 | Implicit DAE Mass Matrix Python | 38,078,299 | 0 | python,matrix,ode | Since the mass matrix is singular, this is a "differential-algebraic equation". You can find off-the-shelf solvers for DAEs, such as the IDA solver from the SUNDIALS library. SUNDIALS has Python bindings in the scikits.odes package. | I have a problem M*y' = f(y) that is going to be solved in Python, where M is the mass matrix, y' the derivative and y is a vector, such that y1, y2 etc. refer to different points in r.
Has anyone used a mass matrix on a similar problem in Python?
The problem is a 2D-problem in r- and z-direction. The r-directio... | 0 | 1 | 680 |
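A rough sketch of posing M*y' = f(y) in the residual form F(t, y, y') = M*y' - f(y) = 0 that IDA works with (the toy system is made up, and the exact scikits.odes call signature is an assumption to verify against the package docs):

```python
import numpy as np
from scikits.odes import dae  # SUNDIALS bindings; assumed installed

M = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # singular mass matrix -> second equation is algebraic

def f(y):
    return np.array([-y[0] + y[1],
                     y[0] + y[1] - 1.0])

def residual(t, y, ydot, result):
    result[:] = M.dot(ydot) - f(y)  # IDA drives this residual to zero
    return 0

y0 = np.array([1.0, 0.0])    # consistent: the algebraic row f(y0)[1] == 0
yp0 = np.array([-1.0, 0.0])  # satisfies the differential row M*yp0 = f(y0)

solver = dae('ida', residual)
solution = solver.solve(np.linspace(0.0, 1.0, 11), y0, yp0)
```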
0 | 54,546,005 | 0 | 0 | 0 | 0 | 1 | false | 51 | 2016-06-28T15:05:00.000 | 0 | 10 | 0 | How can I implement incremental training for xgboost? | 38,079,853 | 0 | python,machine-learning,xgboost | Regarding paulperry's code: if you change one line from "train_split = round(len(train_idx) / 2)" to "train_split = len(train_idx) - 50", model 1+update2 changes from 14.2816257268 to 45.60806270012028, and a lot of "leaf=0" entries appear in the dump file.
The updated model is not good when the update sample set is relatively small.
For binary:l... | The problem is that my training data cannot be placed into RAM due to its size. So I need a method which first builds one tree on the whole training data set, calculates residuals, builds another tree, and so on (as gradient boosted trees do). Obviously if I call model = xgb.train(param, batch_dtrain, 2) in some loop - it... | 0 | 1 | 48,270 |
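A sketch of batch-incremental training via the xgb_model continuation parameter (the synthetic data and batch size are illustrative only):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 5)
y = X.sum(axis=1)
params = {'objective': 'reg:linear'}  # era-appropriate name; newer releases use 'reg:squarederror'

booster = None
for start in range(0, 1000, 250):  # pretend each slice is a batch that fits in RAM
    batch = xgb.DMatrix(X[start:start + 250], label=y[start:start + 250])
    booster = xgb.train(params, batch, num_boost_round=2,
                        xgb_model=booster)  # continue from the previous trees
```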
0 | 38,084,884 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-06-28T17:58:00.000 | 0 | 2 | 0 | Python 3.5.1 Unable to import numpy after update | 38,083,176 | 0 | python,numpy,pandas,anaconda | I was able to resolve this issue using conda to remove and reinstall the packages that were failing to import. I will leave the question marked unanswered to see if anyone else has a better solution, or guidance on how to prevent this in the future. | I'm running Python 3.5.1 on a Windows 7 machine. I've been using Anaconda without issue for several months now. This morning, I updated my packages (conda update --all) and now I can't import numpy (version 1.11.0) or pandas(version 0.18.1).
The error I get from Python is:
Syntax Error: (unicode error) 'unicodeescape' ... | 0 | 1 | 604 |
0 | 38,106,001 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-06-29T16:45:00.000 | 0 | 1 | 0 | Odd results from MATLAB function called in python | 38,105,605 | 1.2 | python,matlab,python-2.7,cplex | So it turns out that, on occasion, the code will run if some variables aren't specified as double, while in other cases integer division or the like produces false results. I have no idea how this correlates to the input, as it really shouldn't, but I just went and specified all variables in the relevant section of code to ... | So I have a rather complicated MATLAB function (it calls a simulation that in turn calls an external optimisation suite (cplex or gurobi)). And for certain settings and inputs the MATLAB function and the Python function called from MATLAB give the same result, but for others they differ (correct answer is ~4500) python ... | 0 | 1 | 62 |
0 | 38,110,151 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-06-29T20:55:00.000 | 0 | 1 | 0 | OpenCV Python - cv2 Module not found | 38,109,860 | 1.2 | python,python-2.7,opencv | Try reinstalling it with sudo apt-get install python-opencv.
But first, check something you might be skipping:
Make sure the script you are running in the terminal uses the same Python version/location as IDLE.
Maybe your IDLE is running on a different interpreter (different location).
Open IDLE and check... | Even though I believe I have installed OpenCV correctly, I cannot overcome the following problem. When I start a new Python project from IDLE (2.7) the cv2 module is imported successfully. If I close IDLE and try to run the .py file, an error message is displayed that says "ImportError: No module named cv2". Then if I ... | 0 | 1 | 2,805 |
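A quick diagnostic you can run in both environments to confirm they use the same interpreter (a generic sketch, not from the answer):

```python
import sys
print(sys.executable)  # which Python binary is running
print(sys.version)

import cv2             # fails if the bindings aren't on this interpreter's path
print(cv2.__version__)
```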
0 | 63,753,350 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-30T06:05:00.000 | 1 | 2 | 0 | Install python CV2 on spark cluster(data bricks) | 38,115,108 | 0.099668 | python,opencv,apache-spark,pyspark,databricks | Try installing numpy first, followed by opencv-python; it will work.
Steps:
Navigate to Install Library-->Select PyPI----->In Package--->numpy
(after installation completes, proceed to step 2)
Navigate to Install Library-->Select PyPI----->In Package--->opencv-python | i want to install pythons library CV2 on a spark cluster using databricks community edition and i'm going to:
workspace-> create -> library , as the normal procedure and then selecting python in the Language combobox, but in the "PyPi Package" textbox , i tried "cv2" and "opencv" and had no luck. Does anybody has tried... | 0 | 1 | 1,543 |
0 | 38,124,167 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-06-30T07:04:00.000 | 2 | 1 | 0 | python sklearn: what is the difference between "sklearn.preprocessing.normalize(X, norm='l2')" and "sklearn.svm.LinearSVC(penalty='l2')" | 38,116,078 | 0.379949 | python,scikit-learn | These two are different things, and you normally need both in order to make a good SVC model.
1) The first one means that in order to scale (normalize) the X data matrix, you divide each sample (each row, with the default axis=1) by its L2 norm, which is just sqrt(sum(abs(X[i, :])**2)), where i is a row in your data matrix X. ... | Here are two methods of normalization:
1: this one is used in data pre-processing: sklearn.preprocessing.normalize(X, norm='l2')
2: the other is used in the classifier: sklearn.svm.LinearSVC(penalty='l2')
I want to know: what is the difference between them? And must both steps be used in a compl... | 0 | 1 | 378 |
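A side-by-side sketch of the two calls (tiny made-up data, just to show where each sits in the pipeline):

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

X = np.array([[3.0, 4.0], [1.0, 2.0], [0.5, 0.5], [2.0, 0.0]])
y = np.array([0, 1, 1, 0])

X_norm = normalize(X, norm='l2')  # pre-processing: each row scaled to unit L2 norm
clf = LinearSVC(penalty='l2')     # training: L2 regularization on the weight vector
clf.fit(X_norm, y)
```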
0 | 38,119,245 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-06-30T09:24:00.000 | 0 | 2 | 0 | Use hard drive instead of RAM in Python | 38,118,942 | 0 | python,pandas,memory,pydev | If all you need is a virtualization of the disk as a large RAM memory, you might set up a swap file on the system. The kernel will then automatically swap pages in and out as needed, using heuristics to figure out which pages should be swapped and which should stay on disk. | I'd like to know if there's a method or a Python package that lets me use a large dataset without holding it all in RAM.
I'm also using pandas for statistical functions.
I need access to the entire dataset because many statistical functions need the entire dataset to return credible results.
I'm using PyDev (wi... | 0 | 1 | 5,299 |
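A related disk-backed technique (not from the answer, which concerns OS swap): a memory-mapped NumPy array lives on disk, and the OS pages chunks in on demand, so the full dataset stays addressable without residing in RAM.

```python
import numpy as np

# a disk-backed array of 10 million float64 values (~80 MB file)
arr = np.memmap('big_data.dat', dtype='float64', mode='w+',
                shape=(10_000_000,))
arr[:1000] = np.arange(1000)  # writes go to the file, not resident memory
print(arr[:1000].mean())      # array-style operations still work
```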
0 | 57,539,584 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2016-06-30T15:59:00.000 | 0 | 2 | 0 | Pycharm debugger, view as array option | 38,128,164 | 0 | python,matlab,debugging,numpy,pycharm | You need to ensure that after you "view as array" you then enter the correct slice. That is, if you view a color image with shape (500, 1000, 3) as an array, the default slicing option will be image[0]. This is the first row of pixels and will appear as a (1000, 3) array. In order to see one of the three color channel...
I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently... | 0 | 1 | 2,669 |
0 | 41,962,870 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2016-06-30T15:59:00.000 | 6 | 2 | 0 | Pycharm debugger, view as array option | 38,128,164 | 1 | python,matlab,debugging,numpy,pycharm | I encountered the same problem when I tried to view a complex array with the 'Color' checkbox checked. Unchecking the checkbox showed the array. Maybe an inf or nan value present in your array prevents the colored array from being shown.
I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently... | 0 | 1 | 2,669 |
0 | 38,130,043 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-06-30T17:44:00.000 | 3 | 3 | 0 | Python equivalent for matlab's perms | 38,130,008 | 0.197375 | python,matlab,numpy,scipy | Python's standard library provides itertools.permutations. You can call it on any iterable, and it returns all full-length permutations. | Is there an equivalent method in numpy or scipy for matlab's perms function? In matlab, perms returns a matrix of all possible permutations of the input in reverse lexicographical order. | 0 | 1 | 865 |
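A small sketch that also matches MATLAB's reverse-lexicographic ordering (the flip is my addition; itertools yields lexicographic order for sorted input):

```python
from itertools import permutations
import numpy as np

perms = np.array(list(permutations([1, 2, 3])))  # lexicographic for sorted input
perms_matlab_order = perms[::-1]                  # reverse to match MATLAB's perms
print(perms_matlab_order)
```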
0 | 38,144,150 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-07-01T11:13:00.000 | 1 | 2 | 0 | importing whole python module doesn't allow using submodules | 38,143,991 | 0.099668 | python,numpy,import,module,scikit-learn | Numpy conveniently imports its submodules in its __init__.py file and adds them to __all__. There's not much you can do about it when using a library - it either does this or it doesn't. sklearn apparently doesn't. | My question is specific to the scikit-learn python module, but I had similar issues with matplotlib as well.
When I want to use sklearn, if I just do 'import sklearn' and then call whatever submodule I need, like ' sklearn.preprocessing.scale()', I get an error
"AttributeError: 'module' object has no attribute 'preprocess... | 0 | 1 | 1,297 |
0 | 38,145,044 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2016-07-01T11:56:00.000 | 7 | 2 | 0 | Does Python cache repeatedly accessed files? | 38,144,825 | 1.2 | python,pandas | No, Python is just a language and doesn't really do anything on its own. A particular Python library might implement caching, but the standard functions you use to open and read files don't do so. The higher-level file-loading functions in Pandas and the CSV module don't do any caching either.
The operating system migh... | I was wondering if Python is smart enough enough to cache repeatedly accessed files, e.g. when reading the same CSV with pandas or unpickling the same file multiple times.
Is this even Python's responsibility, or should the operating system take care of it? | 0 | 1 | 1,776 |
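If you do want application-level caching on top of this, one minimal sketch (note the cached DataFrame is shared and mutable, so callers must not modify it):

```python
import functools
import pandas as pd

@functools.lru_cache(maxsize=8)
def load_csv(path):
    # repeated calls with the same path reuse the already-parsed DataFrame
    return pd.read_csv(path)
```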
0 | 38,439,059 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-07-01T13:23:00.000 | 0 | 2 | 1 | Where can i cache pandas dataframe in tornado requesthandler | 38,146,607 | 1.2 | python,caching,tornado,requesthandler | Depends on how and where you want to be able to access this cache in the future, and how you want to handle invalidation. If the CSV files don't change then this could be as simple as @functools.lru_cache or a global dict. If you need one cache shared across multiple processes then you could use something like memcache... | I want to cache a pandas dataframe into tornado requesthandler. So i don't want to repeat the pd.read_csv() for every hit to that particular url. | 0 | 1 | 906 |
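A sketch of the simplest option the answer mentions, a process-level dict cache (the handler and file name are hypothetical):

```python
import pandas as pd
import tornado.web

_CACHE = {}

def get_frame(path):
    if path not in _CACHE:
        _CACHE[path] = pd.read_csv(path)  # parsed once per process
    return _CACHE[path]

class DataHandler(tornado.web.RequestHandler):
    def get(self):
        df = get_frame('data.csv')
        self.write(df.head().to_json())
```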
0 | 57,898,590 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-07-01T13:33:00.000 | 0 | 2 | 0 | Estimation of fundamental matrix or essential matrix from feature matching | 38,146,821 | 0 | python,opencv,matrix,computer-vision | Yes, computing the fundamental matrix gives a different matrix every time, as it is defined only up to a scale factor.
It is a rank-2 matrix with 7 DOF (3 rotation, 3 translation, 1 scaling).
The fundamental matrix is a 3x3 matrix; F33 (3rd row, 3rd column) is the scale factor.
You may ask why we fix the matrix with a constant at F33: because of... | I am estimating the fundamental matrix and the essential matrix by using the built-in functions in OpenCV. I provide input points to the function by using ORB and a brute-force matcher. These are the problems that I am facing:
1. The essential matrix that I compute from the built-in function does not match the one I find fr... | 0 | 1 | 1,100 |
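A sketch of estimating F and normalizing away the scale ambiguity before comparing results (the synthetic point correspondences exist only to make the call runnable):

```python
import numpy as np
import cv2

# synthetic matched points; real code would use ORB + BFMatcher output
pts1 = (np.random.rand(30, 2) * 400).astype(np.float32)
pts2 = pts1 + np.random.randn(30, 2).astype(np.float32)

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
if F is not None and abs(F[2, 2]) > 1e-12:
    F_normalized = F / F[2, 2]  # fix the free scale before comparing estimates
    print(F_normalized)
```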
0 | 38,615,979 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-07-01T13:33:00.000 | 1 | 2 | 0 | Estimation of fundamental matrix or essential matrix from feature matching | 38,146,821 | 0.099668 | python,opencv,matrix,computer-vision | Both F and E are defined up to a scale factor. It may help to normalize the matrices, e.g. by dividing by the last element.
RANSAC is a randomized algorithm, so you will get a different result every time. You can test how much it varies by triangulating the points, or by computing the reprojection errors. If the resul... | I am estimating the fundamental matrix and the essential matrix by using the inbuilt functions in opencv.I provide input points to the function by using ORB and brute force matcher.These are the problems that i am facing:
1.The essential matrix that i compute from in built function does not match with the one i find fr... | 0 | 1 | 1,100 |
0 | 38,156,630 | 0 | 1 | 0 | 0 | 1 | false | 54 | 2016-07-01T23:34:00.000 | 39 | 3 | 0 | What is the difference between native int type and the numpy.int types? | 38,155,039 | 1 | python,numpy | There are several major differences. The first is that Python integers are flexible-sized (at least in Python 3.x). This means they can grow to accommodate any number of any size (within memory constraints, of course). The NumPy integers, on the other hand, are fixed-size. This means there is a maximum value they c... | Can you please help understand what are the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types? | 0 | 1 | 41,529 |
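A quick demonstration of the size difference (the overflow behavior shown is typical of NumPy's fixed-width integers; exact warnings vary by version):

```python
import numpy as np

print(2 ** 100)      # Python int: arbitrary precision, never overflows

x = np.int64(2 ** 62)
print(x.itemsize)    # 8 bytes: fixed width
print(x * 4)         # exceeds the int64 range: wraps/overflows, may warn
```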
0 | 38,158,929 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-07-02T05:30:00.000 | 0 | 4 | 0 | opencv-python object detection | 38,156,827 | 0 | python,python-2.7,opencv | Your question is way too general. Feature matching is a very vast field.
The type of algorithm to be used totally depends on the object you want to detect, its environment etc.
So if your object won't change its size or angle in the image then use Template Matching.
If the image will change its size and orientation, you... | I'm a beginner in OpenCV using Python. I have many 16-bit grayscale images and need to detect the same object every time in the different images. I tried template matching in OpenCV Python but needed to take different templates for different images, which is not desirable. Can anyone suggest an algorithm in Py... | 0 | 1 | 905 |
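A minimal template-matching sketch for the fixed-size/fixed-angle case the answer describes (the file names are hypothetical placeholders):

```python
import cv2

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)

res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc  # best-match corner for TM_CCOEFF_NORMED
print(top_left, max_val)
```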
0 | 38,674,476 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-07-02T05:30:00.000 | 0 | 4 | 0 | opencv-python object detection | 38,156,827 | 0 | python,python-2.7,opencv | You can try the sliding-window method, if your object is the same in all samples. | I'm a beginner in OpenCV using Python. I have many 16-bit grayscale images and need to detect the same object every time in the different images. I tried template matching in OpenCV Python but needed to take different templates for different images, which is not desirable. Can anyone suggest an algorithm in Py... | 0 | 1 | 905 |
0 | 38,176,821 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-04T03:42:00.000 | 0 | 1 | 0 | Does pandas.read_csv load all data at once? | 38,176,645 | 0 | python,pandas | The way to check is len(df), which gives you the number of rows in the DataFrame. Then compare that against the number of lines in the csv file. On Linux, use wc -l; otherwise, use an editor such as Notepad++, Nano, or Sublime Text to find the line count. | I want to know: when I use the pandas.read_csv('file.csv') function to read a csv file, does it load all the data of file.csv into the DataFrame? | 0 | 1 | 106 |
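A short sketch of both behaviors (the file name is a placeholder):

```python
import pandas as pd

df = pd.read_csv('file.csv')  # yes: parses the whole file into memory
print(len(df))                # row count of the loaded DataFrame

# to avoid loading everything at once, iterate over chunks instead
for chunk in pd.read_csv('file.csv', chunksize=100_000):
    pass  # process each chunk here
```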
0 | 38,181,950 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-04T06:22:00.000 | 0 | 1 | 0 | Regression: Variable influence | 38,178,028 | 0 | python,sas,regression | In SAS, apart from the correlation (Pearson index), you can use a ranking index like the Spearman coefficient (proc corr).
In addition, supposing you have the correct modules (STAT/MINER) licensed, you can use the following (a Python analogue is sketched after this entry):
a linear (logistic) regression on standardized regressors and compare the betas
a tree and compare the variabl... | Is there any way in SAS , Python to find the most influential variables in rank order apart from correlation ?
I might be missing something and any suggestion would be appreciated how to interpret it. | 0 | 1 | 64 |
0 | 38,220,858 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-07-05T12:52:00.000 | 1 | 2 | 0 | TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn | 38,203,983 | 0.099668 | python-2.7,scikit-learn,tf-idf | There is no reason why idf would give more information for a classification task. It performs well for search and ranking, but classification needs to gather similarity, not singularities.
IDF is meant to spot the singularity between one sample and the rest of the corpus; what you are looking for is the singularity betw... | I am working on a multilabel text classification problem with 10 labels.
The dataset is small, +-7000 items and +-7500 labels in total. I am using Python scikit-learn and something strange came up in the results. As a baseline I started out with using the countvectorizer and was actually planning on using the tfidf ... | 0 | 1 | 1,238 |
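A minimal sketch of how the two vectorizers line up in code (toy documents; the comparison itself is mine):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat", "the dog sat", "cats and dogs bark"]

X_counts = CountVectorizer().fit_transform(docs)  # raw term counts
X_tfidf = TfidfVectorizer().fit_transform(docs)   # counts reweighted by idf
print(X_counts.shape, X_tfidf.shape)              # same vocabulary, different weights
```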
0 | 38,204,179 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-07-05T12:52:00.000 | 1 | 2 | 0 | TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn | 38,203,983 | 0.099668 | python-2.7,scikit-learn,tf-idf | The question is, why not? They are different solutions.
What is your dataset, how many words, how are they labelled, how do you extract your features?
CountVectorizer simply counts the words; if it does a good job, so be it. | I am working on a multilabel text classification problem with 10 labels.
The dataset is small, +-7000 items and +-7500 labels in total. I am using Python scikit-learn and something strange came up in the results. As a baseline I started out with using the countvectorizer and was actually planning on using the tfidf ... | 0 | 1 | 1,238 |
0 | 44,310,145 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-07-05T12:52:00.000 | 1 | 3 | 0 | Encoding in .sas7bdat | 38,203,988 | 0.066568 | python,pandas,encoding,sas | read_sas from pandas seems not to like encoding="utf-8". I had a similar problem. Using SAS7BDAT('foo.sas7bdat').to_data_frame() solved the decoding issues of SAS files for me. | I am trying to import a SAS dataset (.sas7bdat format) using the pandas function read_sas (pandas version 0.17), but it is giving me the following error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 12: ordinal not in range(128) | 0 | 1 | 9,854 |
0 | 38,206,084 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-07-05T13:10:00.000 | 1 | 2 | 0 | Viewing a portion of a very large CSV file? | 38,204,346 | 0.099668 | python,excel,csv | If you want to do somewhat more selective fishing for particular rows, then the Python csv module will allow you to read the csv file row by row into Python data structures. Consult the documentation.
This may be useful if just grabbing the first hundred lines reveals nothing about many of the columns because they are ... | I have a ~1.0gb CSV file, and when trying to load it into Excel just to view, Excel crashes. I don't know the schema of the file, so it's difficult for me to load it into R or Python. The file contains restaurant reviews and has commas in it.
How can I open just a portion of the file (say, the first 100 rows, or 1.0mb'... | 0 | 1 | 1,514 |
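A minimal sketch of grabbing just the first 100 rows (the file name is a placeholder; pandas' nrows parameter is an equivalent shortcut):

```python
import csv
from itertools import islice

with open('reviews.csv', newline='') as f:
    reader = csv.reader(f)           # handles quoted fields containing commas
    for row in islice(reader, 100):  # stop after the first 100 rows
        print(row)
```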
0 | 38,773,030 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-07-05T16:03:00.000 | 1 | 1 | 0 | Error in loadNamespace(name) Cairo | 38,207,942 | 0.197375 | python,r,jupyter,cairo | I had precisely the same issue and fixed it by installing the Cairo package:
install.packages("Cairo")
library(Cairo)
Thanks to my colleague Bartek for providing this solution | It's my first time using R in Jupyter. I've installed everything: Jupyter, Python, R and IRkernel. I can do plain typing and calculation in Jupyter, but whenever I want to use a graphics library like plot or ggplot2 it shows
Traceback: plot without titl... | 0 | 1 | 411 |