Dataset schema (22 columns; numeric columns show min - max, string columns show length range):

  Column                              Dtype     Range
  GUI and Desktop Applications        int64     0 - 1
  A_Id                                int64     5.3k - 72.5M
  Networking and APIs                 int64     0 - 1
  Python Basics and Environment       int64     0 - 1
  Other                               int64     0 - 1
  Database and SQL                    int64     0 - 1
  Available Count                     int64     1 - 13
  is_accepted                         bool      2 classes
  Q_Score                             int64     0 - 1.72k
  CreationDate                        string    lengths 23 - 23
  Users Score                         int64     -11 - 327
  AnswerCount                         int64     1 - 31
  System Administration and DevOps    int64     0 - 1
  Title                               string    lengths 15 - 149
  Q_Id                                int64     5.14k - 60M
  Score                               float64   -1 - 1.2
  Tags                                string    lengths 6 - 90
  Answer                              string    lengths 18 - 5.54k
  Question                            string    lengths 49 - 9.42k
  Web Development                     int64     0 - 1
  Data Science and Machine Learning   int64     1 - 1
  ViewCount                           int64     7 - 3.27M

Rows:
Row 1
Title: What is the output of Spark MLLIB LDA topicsmatrix?
Q_Id: 38,210,820 | Q_Score: 3 | AnswerCount: 1 | ViewCount: 391 | CreationDate: 2016-07-05T19:04:00.000
A_Id: 53,956,744 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,apache-spark-mllib,bayesian,lda
Categories: Data Science and Machine Learning
Question: The output of LDAModel.topicsMatrix() is unclear to me. I think I understand the concept of LDA and that each topic is represented by a distribution over terms. In the LDAModel.describeTopics() it is clear (I think): The highest sum of likelihoods of words of a sentence per topic, indicates the evidence of this tweet b...
Answer: i think the matrix is m*n m is the words number and n is the topic number

Row 2
Title: jep for using scikit model in java
Q_Id: 38,223,546 | Q_Score: 0 | AnswerCount: 2 | ViewCount: 860 | CreationDate: 2016-07-06T12:04:00.000
A_Id: 43,283,828 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 1
Tags: java,python-2.7,machine-learning,scikit-learn,jepp
Categories: Data Science and Machine Learning
Question: I am using jep for running python script in java, I basically need to run the script that uses scikit package. But it shows me error when I try to run, which I couldn't understand. This is the piece of code in my program, Jep jep = new Jep(); jep.eval("import sklearn"); It shows the below error,but sklearn works perfe...
Answer: The _PyThreadState_Current error implies that it's using the wrong Python. You should be able to fix it by setting PATH and LD_LIBRARY_PATH to the python/bin and python/lib directories you want to use (and built Jep and sklearn against) before launching the process. That will ensure that Python, Jep, and sklearn are ...

Row 3
Title: Difference between Matlab spectrogram and matplotlib specgram?
Q_Id: 38,223,687 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 865 | CreationDate: 2016-07-06T12:10:00.000
A_Id: 38,223,850 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 2
Tags: python,matlab,matplotlib,spectrogram
Categories: Data Science and Machine Learning
Question: I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so?
Answer: The value in Matlab is a scalar as it represents the size of the window, and Matlab uses a Hamming window by default. The Window argument also accepts a vector, so you can pass in any windowing function you want.

Row 4
Title: Difference between Matlab spectrogram and matplotlib specgram?
Q_Id: 38,223,687 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 865 | CreationDate: 2016-07-06T12:10:00.000
A_Id: 38,225,574 | Users Score: 1 | Score: 1.2 | is_accepted: true | Available Count: 2
Tags: python,matlab,matplotlib,spectrogram
Categories: Data Science and Machine Learning
Question: I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so?
Answer: The arguments are just organized differently. In matplotlib, the window size is specified using the NFFT argument. The window argument, on the other hand, is only for specifying the window itself, rather than the size. So, like MATLAB, the window argument accepts a vector. However, unlike MATLAB, it also accepts a fu...

Row 5
Title: Same Python code, same data, different results on different machines
Q_Id: 38,228,088 | Q_Score: 10 | AnswerCount: 2 | ViewCount: 6,808 | CreationDate: 2016-07-06T15:40:00.000
A_Id: 38,233,222 | Users Score: 2 | Score: 0.197375 | is_accepted: false | Available Count: 1
Tags: python,numpy,scipy,scikit-learn,anaconda
Categories: Data Science and Machine Learning
Question: I have a very strange problem that I get different results on the same code and same data on different machines. I have a python code based on numpy/scipy/sklearn and I use anaconda as my base python distribution. Even when I copy the entire project directory (which includes all the data and code) from my main machine ...
Answer: If your code uses linear algebra, check it. Generally, roundoff errors are not deterministic, and if you have badly conditioned matrices, it can be it.

Row 6
Title: Why use matplotlib instead of some existing software/grapher
Q_Id: 38,230,462 | Q_Score: 0 | AnswerCount: 2 | ViewCount: 146 | CreationDate: 2016-07-06T17:46:00.000
A_Id: 38,230,601 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 2
Tags: python,matplotlib,graph,data-science
Categories: Data Science and Machine Learning
Question: Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it. What are the advantages, of using a plotter like matplotlib, instead of an existing software, or grapher. For now, I have guessed that although it takes a lot more time to use such a library, you ...
Answer: Matplotlib gives you a nice level of access: you can change all details of the plots, modify ticks, labels, spacing, ... it has many sensible defaults, so a oneliner plot(mydata) produces fairly nice plots it plays well with numpy and other numerical tools, so you can pass your data science objects directly to the plo...

Row 7
Title: Why use matplotlib instead of some existing software/grapher
Q_Id: 38,230,462 | Q_Score: 0 | AnswerCount: 2 | ViewCount: 146 | CreationDate: 2016-07-06T17:46:00.000
A_Id: 38,230,638 | Users Score: 3 | Score: 0.291313 | is_accepted: false | Available Count: 2
Tags: python,matplotlib,graph,data-science
Categories: Data Science and Machine Learning
Question: Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it. What are the advantages, of using a plotter like matplotlib, instead of an existing software, or grapher. For now, I have guessed that although it takes a lot more time to use such a library, you ...
Answer: Adding to Robin's answer, I think reproducibility is key. When you make your graphs with matplotlib, since you are coding everything rather than using an interface, all of you work is reproducible, you can just run your script again. Using other software, specifically programs with user interfaces, means that each time...

Row 8
Title: Dump Python sklearn model in Windows and read it in Linux
Q_Id: 38,252,931 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 610 | CreationDate: 2016-07-07T18:35:00.000
A_Id: 39,263,619 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,linux,windows,scikit-learn,pickle
Categories: Data Science and Machine Learning
Question: I am trying to save a sklearn model on a Windows server using sklearn.joblib.dump and then joblib.load the same file on a linux server (centOS71). I get the error below: ValueError: non-string names in Numpy dtype unpickling This is what I have tried: Tried both python27 and python35 Tried the built in open() with 'wb...
Answer: Python pickle should run between windows/linux. There may be incompatibilities if: python versions on the two hosts are different (If so, try installing same version of python on both hosts); AND/OR if one machine is 32-bit and another is 64-bit (I dont know any fix so far for this problem)

Row 9
Title: Difference(s) between merge() and concat() in pandas
Q_Id: 38,256,104 | Q_Score: 141 | AnswerCount: 7 | ViewCount: 114,669 | CreationDate: 2016-07-07T22:12:00.000
A_Id: 65,132,518 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2
Tags: python,pandas,join,merge,concat
Categories: Data Science and Machine Learning
Question: What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()? So far, this is what I found, please comment on how complete and accurate my understanding is: .merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with eith...
Answer: Only concat function has axis parameter. Merge is used to combine dataframes side-by-side based on values in shared columns so there is no need for axis parameter.

Row 10
Title: Difference(s) between merge() and concat() in pandas
Q_Id: 38,256,104 | Q_Score: 141 | AnswerCount: 7 | ViewCount: 114,669 | CreationDate: 2016-07-07T22:12:00.000
A_Id: 49,564,930 | Users Score: 14 | Score: 1 | is_accepted: false | Available Count: 2
Tags: python,pandas,join,merge,concat
Categories: Data Science and Machine Learning
Question: What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()? So far, this is what I found, please comment on how complete and accurate my understanding is: .merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with eith...
Answer: pd.concat takes an Iterable as its argument. Hence, it cannot take DataFrames directly as its argument. Also Dimensions of the DataFrame should match along axis while concatenating. pd.merge can take DataFrames as its argument, and is used to combine two DataFrames with same columns or index, which can't be done with ...

Row 11
Title: Using pandas over csv library for manipulating CSV files in Python3
Q_Id: 38,291,701 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 114 | CreationDate: 2016-07-10T12:07:00.000
A_Id: 38,291,737 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 1
Tags: python,csv
Categories: Data Science and Machine Learning
Question: Forgive me if my questions is too general, or if its been asked before. I've been tasked to manipulate (e.g. copy and paste several range of entries, perform calculations on them, and then save them all to a new csv file) several large datasets in Python3. What are the pros/cons of using the aforementioned libraries? ...
Answer: You should always try to use as much as possible the work that other people have already been doing for you (such as programming the pandas library). This saves you a lot of time. Pandas has a lot to offer when you want to process such files so this seems to me to be the the best way to deal with such files. Since the ...

Row 12
Title: Grey Level Co-Occurrence Matrix // Python
Q_Id: 38,297,765 | Q_Score: 1 | AnswerCount: 1 | ViewCount: 2,092 | CreationDate: 2016-07-11T00:52:00.000
A_Id: 38,297,891 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,image-processing,scikit-image,glcm
Categories: Data Science and Machine Learning
Question: I am trying to find the GLCM of an image using greycomatrix from skimage library. I am having issues with the selection of levels. Since it's an 8-bit image, the obvious selection should be 256; however, if I select values such as 8 (for the purpose of binning and to prevent sparse matrices from forming), I am getting ...
Answer: The simplest way for binning 8-bits images is to divide each value by 32. Then each pixel value is going to be in [0,8[. Btw, more than avoiding sparse matrices (which are not really an issue), binning makes the GLCM more robust to noise.

Row 13
Title: Ensembling with dynamic weights
Q_Id: 38,304,942 | Q_Score: 1 | AnswerCount: 1 | ViewCount: 139 | CreationDate: 2016-07-11T10:42:00.000
A_Id: 53,769,151 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,scikit-learn,classification,multilabel-classification,voting
Categories: Data Science and Machine Learning
Question: I was wondering if it is possible to use dynamic weights in sklearn's VotingClassifier. Overall i have 3 labels 0 = Other, 1 = Spam, 2 = Emotion. By dynamic weights I mean the following: I have 2 classifiers. First one is a Random Forest which performs best on Spam detection. Other one is a CNN which is superior for to...
Answer: I thing Voting Classifier only accepts different static weights for each estimator. However you may solve the problem by assigning class weights with the class_weight parameter of the random forest estimator by calculating the class weights on your train set.

Row 14
Title: Speeding up TensorFlow Cifar10 Example for Experimentation
Q_Id: 38,314,964 | Q_Score: 1 | AnswerCount: 1 | ViewCount: 174 | CreationDate: 2016-07-11T19:43:00.000
A_Id: 38,317,151 | Users Score: 1 | Score: 0.197375 | is_accepted: false | Available Count: 1
Tags: python,tensorflow
Categories: Data Science and Machine Learning
Question: The TensorFlow tutorial for using CNN for the cifar10 data set has the following advice: EXERCISE: When experimenting, it is sometimes annoying that the first training step can take so long. Try decreasing the number of images that initially fill up the queue. Search for NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN in cifar10.py. ...
Answer: Note that this exercise only speeds up the first step time by skipping the prefetching of a larger from of the data. This exercise does not speed up the overall training That said, the tutorial text needs to be updated. It should read Search for min_fraction_of_examples_in_queue in cifar10_input.py. If you lower thi...

Row 15
Title: how to use dot production on batch data?
Q_Id: 38,321,248 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 291 | CreationDate: 2016-07-12T06:15:00.000
A_Id: 38,353,930 | Users Score: 1 | Score: 0.197375 | is_accepted: false | Available Count: 1
Tags: python,numpy,theano,deep-learning,keras
Categories: Data Science and Machine Learning
Question: I am trying to apply tanh(dot(x,y)); x and y are batch data of my RNN. x,y have shape (n_batch, n_length, n_dim) like (2,3,4) ; 2 samples with 3 sequences, each is 4 dimensions. I want to do inner or dot production to last dimension. Then tanh(dot(x,y)) should have shape of (n_batch, n_length) = (2, 3) Which functio...
Answer: This expression should do the trick: theano.tensor.tanh((x * y).sum(2)) The dot product is computed 'manually' by doing element-wise multiplication, then summing over the last dimension.

Row 16
Title: n_jobs don't work in sklearn-classes
Q_Id: 38,328,159 | Q_Score: 4 | AnswerCount: 1 | ViewCount: 1,402 | CreationDate: 2016-07-12T11:52:00.000
A_Id: 43,412,660 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,scikit-learn
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: Does anybody use "n_jobs" of sklearn-classes? I am work with sklearn in Anaconda 3.4 64 bit. Spyder version is 2.3.8. My script can't finish its execution after setting "n_jobs" parameter of some sklearn-class to non-zero value.Why is this happening?
Answer: Several scikit-learn tools such as GridSearchCV and cross_val_score rely internally on Python’s multiprocessing module to parallelize execution onto several Python processes by passing n_jobs > 1 as argument. Taken from Sklearn documentation: The problem is that Python multiprocessing does a fork system call withou...

Row 17
Title: How to do semantic keyword search with nlp
Q_Id: 38,344,740 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 296 | CreationDate: 2016-07-13T07:03:00.000
A_Id: 38,437,943 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: java,python,search,nlp,semantics
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I want to do SEMANTIC keyword search on list of topics with NLP(Natural Language Processing ). It would be very appreciable if you post any reference links or ideas.
Answer: Your questions is somewhat vague but I will try nonetheless... If I understand you correctly then what you want to do (depending on the effort you want to spend) is the following: Expand the keyword to a synonym list that you will use to search for in the topics (you can use WordNet for this). Use collocations (n-gram...

Row 18
Title: Sample orientation in the class, clustered by k-means in Python
Q_Id: 38,375,062 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 41 | CreationDate: 2016-07-14T13:04:00.000
A_Id: 38,375,229 | Users Score: 0 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,scikit-learn,k-means
Categories: Data Science and Machine Learning
Question: I've got some clustered classes, and a sample with a prediction. Now, i want to know the "orientation" of the sample, which varies from 0 to 1, where 0 - right in the class center, 1 - right on the class border(radius). I guess, it's going to be orientation=dist_from_center/class_radius So, I'm struggled to find class ...
Answer: The way you're defining the orientation to us seems like you've got the right idea. If you use the farthest distance from the center as the denominator, then you'll get 0 as your minimum (cluster center) and 1 as your maximum (the farthest distance) and a linear distance in-between.

Row 19
Title: Changing the scale of a tensor in tensorflow
Q_Id: 38,376,478 | Q_Score: 22 | AnswerCount: 6 | ViewCount: 25,931 | CreationDate: 2016-07-14T14:08:00.000
A_Id: 38,376,532 | Users Score: 5 | Score: 0.16514 | is_accepted: false | Available Count: 1
Tags: python,tensorflow,conv-neural-network
Categories: Data Science and Machine Learning
Question: Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255, (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to down scale ...
Answer: sigmoid(tensor) * 255 should do it.

Row 20
Title: Sort A list of Strings Based on certain field
Q_Id: 38,388,799 | Q_Score: 5 | AnswerCount: 3 | ViewCount: 358 | CreationDate: 2016-07-15T05:57:00.000
A_Id: 38,389,853 | Users Score: 1 | Score: 0.066568 | is_accepted: false | Available Count: 1
Tags: python,list,python-2.7,sorting
Categories: Database and SQL, Data Science and Machine Learning
Question: Overview: I have data something like this (each row is a string): 81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M 3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:...
Answer: you can use string.split(),string.split(',')[1]

Row 21
Title: How to write data to Redshift that is a result of a dataframe created in Python?
Q_Id: 38,402,995 | Q_Score: 32 | AnswerCount: 7 | ViewCount: 57,271 | CreationDate: 2016-07-15T18:33:00.000
A_Id: 42,047,026 | Users Score: 5 | Score: 0.141893 | is_accepted: false | Available Count: 1
Tags: python,pandas,dataframe,amazon-redshift,psycopg2
Categories: Database and SQL, Data Science and Machine Learning
Question: I have a dataframe in Python. Can I write this data to Redshift as a new table? I have successfully created a db connection to Redshift and am able to execute simple sql queries. Now I need to write a dataframe to it.
Answer: Assuming you have access to S3, this approach should work: Step 1: Write the DataFrame as a csv to S3 (I use AWS SDK boto3 for this) Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a create table script and push it to Redshift to crea...

Row 22
Title: ipython : get access to current figure()
Q_Id: 38,415,774 | Q_Score: 37 | AnswerCount: 2 | ViewCount: 55,702 | CreationDate: 2016-07-16T21:25:00.000
A_Id: 38,433,583 | Users Score: 86 | Score: 1 | is_accepted: false | Available Count: 1
Tags: python,matplotlib,ipython,axis,figure
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I want to add more fine grained grid on a plotted graph. The problem is all of the examples require access to the axis object. I want to add specific grid to already plotted graph (from inside ipython). How do I gain access to the current figure and axis in ipython ?
Answer: plt.gcf() to get current figure plt.gca() to get current axis

Row 23
Title: Need to disable Sympy output of 'False' (0, False) in logical operator 'not'
Q_Id: 38,416,381 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 35 | CreationDate: 2016-07-16T22:56:00.000
A_Id: 38,424,476 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2
Tags: python,sympy,logical-operators
Categories: Data Science and Machine Learning
Question: I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'. 'and' and 'or' work well: a = 0 b = 1 a and b 0 a or b 1 But 'not' introduces a 2nd term 'False' in addition to the desired value: a, not b (0, False) When processed by Sympy (where 'data' (b...
Answer: If you use the operators &, |, and ~ for and, or, and not, respectively, you will get a symbolic boolean expression. I also recommend using sympy.true and sympy.false instead of 1 and 0.

Row 24
Title: Need to disable Sympy output of 'False' (0, False) in logical operator 'not'
Q_Id: 38,416,381 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 35 | CreationDate: 2016-07-16T22:56:00.000
A_Id: 38,416,665 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 2
Tags: python,sympy,logical-operators
Categories: Data Science and Machine Learning
Question: I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'. 'and' and 'or' work well: a = 0 b = 1 a and b 0 a or b 1 But 'not' introduces a 2nd term 'False' in addition to the desired value: a, not b (0, False) When processed by Sympy (where 'data' (b...
Answer: a, not b doesn't do what you think it does. You are actually asking for, and correctly receiving, a tuple of two items containing: a not b As the result shows, a is 0 and not b is False, 1 being truthy and the not of truthy being False. The fact that a happens to be the same value as the result you want doesn't mean ...

Row 25
Title: How to create random orthonormal matrix in python numpy
Q_Id: 38,426,349 | Q_Score: 30 | AnswerCount: 7 | ViewCount: 28,284 | CreationDate: 2016-07-17T22:01:00.000
A_Id: 47,932,683 | Users Score: 4 | Score: 0.113791 | is_accepted: false | Available Count: 1
Tags: python,numpy,linear-algebra,orthogonal
Categories: Data Science and Machine Learning
Question: Is there a method that I can call to create a random orthonormal matrix in python? Possibly using numpy? Or is there a way to create a orthonormal matrix using multiple numpy methods? Thanks.
Answer: if you want a none Square Matrix with orthonormal column vectors you could create a square one with any of the mentioned method and drop some columns.

Row 26
Title: Python - Plotting vertical line
Q_Id: 38,433,584 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 4,256 | CreationDate: 2016-07-18T09:49:00.000
A_Id: 38,433,637 | Users Score: 3 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,matplotlib
Categories: Data Science and Machine Learning
Question: I have a curve of some data that I am plotting using matplotlib. The small value x-range of the data consists entirely of NaN values, so that my curve starts abruptly at some value of x>>0 (which is not necessarily the same value for different data sets I have). I would like to place a vertical dashed line where the ...
Answer: Assuming you know where the curve begins, you can just use: plt.plot((x1, x2), (y1, y2), 'r-') to draw the line from the point (x1, y1) to the point (x2, y2) Here in your case, x1 and x2 will be same, only y1 and y2 should change, as it is a straight vertical line that you want.

Row 27
Title: How to print top ten topics using Gensim?
Q_Id: 38,442,161 | Q_Score: 0 | AnswerCount: 2 | ViewCount: 836 | CreationDate: 2016-07-18T16:53:00.000
A_Id: 38,469,723 | Users Score: 2 | Score: 0.197375 | is_accepted: false | Available Count: 1
Tags: python,lda,gensim,topic-modeling
Categories: Data Science and Machine Learning
Question: In the official explanation, there is no natural ordering between the topics in LDA. As for the method show_topics(), if it returned num_topics <= self.num_topics subset of all topics is therefore arbitrary and may change between two LDA training runs. But I tends to find the top ten frequent topics of corpus. Is there...
Answer: Like the documentation says, there is no natural ordering between topics in LDA. If you have your own criterion for ordering the topics, such as frequency of appearance, you can always retrieve the entire list of topics from your model and sort them yourself. However, even the notion of "top ten most frequent topics" ...

Row 28
Title: Print out summaries in console
Q_Id: 38,446,706 | Q_Score: 2 | AnswerCount: 1 | ViewCount: 1,812 | CreationDate: 2016-07-18T22:05:00.000
A_Id: 38,447,138 | Users Score: 2 | Score: 0.379949 | is_accepted: false | Available Count: 1
Tags: python,tensorflow,protocol-buffers,tensorboard
Categories: Data Science and Machine Learning
Question: Tensorflow's scalar/histogram/image_summary functions are very useful for logging data for viewing with tensorboard. But I'd like that information printed to the console as well (e.g. if I'm a crazy person without a desktop environment). Currently, I'm adding the information of interest to the fetch list before callin...
Answer: Overall, there isn't first class support for your use case in TensorFlow, so I would parse the merged summaries back into a tf.Summary() protocol buffer, and then filter / print data as you see fit. If you come up with a nice pattern, you could then merge it back into TensorFlow itself. I could imagine making this an ...

Row 29
Title: Python Glueviz - is there a way to replace ie update the imported data
Q_Id: 38,459,234 | Q_Score: 1 | AnswerCount: 1 | ViewCount: 464 | CreationDate: 2016-07-19T12:55:00.000
A_Id: 38,460,641 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python
Categories: Data Science and Machine Learning
Question: I am using Glueviz 0.7.2 as part of the Anaconda package, on OSX. Glueviz is a data visualization and exploration tool. I am regularly regenerating an updated version of the same data set from an external model, then importing that data set into Glueviz. Currently I can not find a way to have Glueviz refresh or updat...
Answer: As it turns out, the data is not stored in the Glueviz session file, but rather loaded fresh each time the saved session is opened from the original data source file. Hence the solution is simple: Replace the data source file with a new file (of the same type) in with the updated data. The updated data file must ha...

Row 30
Title: Horizontally layering LSTM cells
Q_Id: 38,490,811 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 301 | CreationDate: 2016-07-20T21:03:00.000
A_Id: 38,638,969 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 2
Tags: python,tensorflow,neural-network,recurrent-neural-network,lstm
Categories: Data Science and Machine Learning
Question: I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow. In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. ...
Answer: However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another. This is the definition of recurrence. All RNNs do this.

Row 31
Title: Horizontally layering LSTM cells
Q_Id: 38,490,811 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 301 | CreationDate: 2016-07-20T21:03:00.000
A_Id: 71,838,557 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 2
Tags: python,tensorflow,neural-network,recurrent-neural-network,lstm
Categories: Data Science and Machine Learning
Question: I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow. In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. ...
Answer: Horizontally stacked is useless in any case I can think of. A common confusion is that there are multiple cells (with different parameters) due to the visualization of the process within an RNN. RNNs loop over themselves so for every input they generate new input for the cell itself. So they use the same weights over a...

Row 32
Title: Python Anaconda - no module named numpy
Q_Id: 38,493,608 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 1,790 | CreationDate: 2016-07-21T01:54:00.000
A_Id: 38,503,517 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,python-2.7,numpy,anaconda
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I recently installed Anaconda on Arch Linux from the Arch repositories. By default, it was set to Python3, whereas I would like to use Python2.7. I followed the Anaconda documentation to create a new Python2 environment. Upon running my Python script which uses Numpy, I got the error No module named NumPy. I found this...
Answer: The anaconda package in the AUR is broken. If anyone encounters this, simply install anaconda from their website. The AUR attempts to do a system-wide install, which gets rather screwy with the path.

Row 33
Title: module error in multi-node spark job on google cloud cluster
Q_Id: 38,515,096 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 206 | CreationDate: 2016-07-21T22:03:00.000
A_Id: 38,572,975 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python-3.x,numpy,pyspark,google-cloud-platform,gcp
Categories: System Administration and DevOps, Data Science and Machine Learning
Question: This code runs perfect when I set master to localhost. The problem occurs when I submit on a cluster with two worker nodes. All the machines have same version of python and packages. I have also set the path to point to the desired python version i.e. 3.5.1. when I submit my spark job on the master ssh session. I get ...
Answer: Not sure if this qualifies as a solution. I submitted the same job using dataproc on google platform and it worked without any problem. I believe the best way to run jobs on google cluster is via the utilities offered on google platform. The dataproc utility seems to iron out any issues related to the environment.

Row 34
Title: Global dataframes - good or bad
Q_Id: 38,517,334 | Q_Score: 1 | AnswerCount: 1 | ViewCount: 151 | CreationDate: 2016-07-22T02:43:00.000
A_Id: 38,517,371 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,pandas,global
Categories: Data Science and Machine Learning
Question: I have a program that i load millions of rows into dataframes, and i declare them as global so my functions (>50) can all use them like i use a database in the past. I read that using globals are a bad, and due to the memory mapping for it, it is slower to use globals. I like to ask if globals are bad, how would the ...
Answer: Yes. Instead of using globals, you should wrap your data into an object and pass that object around to your functions instead (see dependency injection). Wrapping it in an object instead of using a global will : Allow you to unit test your code. This is absolutely the most important reason. Using globals will make it ...

Row 35
Title: Date field in SAS imported in Python pandas Dataframe
Q_Id: 38,518,000 | Q_Score: 0 | AnswerCount: 1 | ViewCount: 398 | CreationDate: 2016-07-22T04:09:00.000
A_Id: 38,518,356 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 1
Tags: python,pandas,dataframe,import,sas
Categories: Data Science and Machine Learning
Question: I have imported a SAS dataset in python dataframe using Pandas read_sas(path) function. REPORT_MONTH is a column in sas dataset defined and saved as DATE9. format. This field is imported as float64 datatype in dataframe and having numbers which is basically a sas internal numbers for storing a date in a sas dataset. N...
Answer: I don't know how python stores dates, but SAS stores dates as numbers, counting the number of days from Jan 1, 1960. Using that you should be able to convert it in python to a date variable somehow. I'm fairly certain that when data is imported to python the formats aren't honoured so in this case it's easy to work aro...

Row 36
Title: AttributeError: 'module' object has no attribute '__version__'
Q_Id: 38,537,125 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 4,833 | CreationDate: 2016-07-23T00:58:00.000
A_Id: 38,537,431 | Users Score: 2 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,module,dataset,attributeerror,lda
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I have installed LDA plibrary (using pip) I have a very simple test code (the next two rows) import lda print lda.datasets.load_reuters() But i keep getting the error AttributeError: 'module' object has no attribute 'datasets' in fact i get that each time i access any attribute/function under lda!
Answer: Do you have a module named lda.py or lda.pyc in the current directory? If so, then your import statement is finding that module instead of the "real" lda module.

Row 37
Title: Represent sparse matrix in Python without library usage
Q_Id: 38,547,996 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 331 | CreationDate: 2016-07-24T01:44:00.000
A_Id: 38,548,024 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 3
Tags: python,data-structures
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Answer: Dict with tuples as keys might work.

Row 38
Title: Represent sparse matrix in Python without library usage
Q_Id: 38,547,996 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 331 | CreationDate: 2016-07-24T01:44:00.000
A_Id: 38,548,640 | Users Score: 1 | Score: 0.066568 | is_accepted: false | Available Count: 3
Tags: python,data-structures
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Answer: The scipy.sparse library uses different formats depending on the purpose. All implement a 2d matrix dictionary of keys - the data structure is a dictionary, with a tuple of the coordinates as key. This is easiest to setup and use. list of lists - has 2 lists of lists. One list has column coordinates, the other colum...

Row 39
Title: Represent sparse matrix in Python without library usage
Q_Id: 38,547,996 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 331 | CreationDate: 2016-07-24T01:44:00.000
A_Id: 38,548,006 | Users Score: 0 | Score: 0 | is_accepted: false | Available Count: 3
Tags: python,data-structures
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.
Answer: Lots of ways to do it. For example you could keep a list where each list element is either one of your data objects, or an integer representing N blank items.

Row 40
Title: Hardware requirements to deal with a big matrix - python
Q_Id: 38,555,120 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 230 | CreationDate: 2016-07-24T18:03:00.000
A_Id: 38,555,266 | Users Score: 4 | Score: 0.26052 | is_accepted: false | Available Count: 2
Tags: python,numpy,matrix
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000. Considering that: The matrix will be dense, and should be stored in the RAM. I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are...
Answer: Well, the first question is, wich type of value will you store in your matrix? Suposing it will be of integers (and suposing that every bytes uses the ISO specification for size, 4 bytes), you will have 4*10^12 bytes to store. That's a large amount of information (4 TB), so, in first place, I don't know from where you ...

Row 41
Title: Hardware requirements to deal with a big matrix - python
Q_Id: 38,555,120 | Q_Score: 1 | AnswerCount: 3 | ViewCount: 230 | CreationDate: 2016-07-24T18:03:00.000
A_Id: 38,555,206 | Users Score: 1 | Score: 0.066568 | is_accepted: false | Available Count: 2
Tags: python,numpy,matrix
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000. Considering that: The matrix will be dense, and should be stored in the RAM. I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are...
Answer: Actually, the memory would be a big issue here. Depending on the type of the matrix elements. Each float takes 24 bytes for example as it is a boxed object. As your matrix is 10^12 you can do the math. Switching to C would probably make it more memory-efficient, but not faster, as numpy is essentially written in C wit...

Row 42
Title: In Tensorflow, what is the difference between a Variable and a Tensor?
Q_Id: 38,556,078 | Q_Score: 11 | AnswerCount: 1 | ViewCount: 3,747 | CreationDate: 2016-07-24T19:47:00.000
A_Id: 38,556,752 | Users Score: 15 | Score: 1.2 | is_accepted: true | Available Count: 1
Tags: python,tensorflow
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: The Tensorflow documentation states that a Variable can be used any place a Tensor can be used, and they seem to be fairly interchangeable. For example, if v is a Variable, then x = 1.0 + v becomes a Tensor. What is the difference between the two, and when would I use one over the other?
Answer: It's true that a Variable can be used any place a Tensor can, but the key differences between the two are that a Variable maintains its state across multiple calls to run() and a variable's value can be updated by backpropagation (it can also be saved, restored etc as per the documentation). These differences mean tha...

Row 43
Title: Feed Mxnet Rec to Tensorflow
Q_Id: 38,561,304 | Q_Score: 1 | AnswerCount: 2 | ViewCount: 365 | CreationDate: 2016-07-25T06:48:00.000
A_Id: 45,135,108 | Users Score: 1 | Score: 0.099668 | is_accepted: false | Available Count: 1
Tags: python,tensorflow,mxnet
Categories: Data Science and Machine Learning
Question: I have created Mxnet Rec data through Im2rec. I would like to feed this into Tensorflow. Is it possible ? and How would i do that? Any idea ?
Answer: You can probably feed the data. You will need to use MXNet Iterators to get the data out of the records, and then each record you will need to cast to something that Tensorflow understands.

Row 44
Title: Convert Column Name from int to string in pandas
Q_Id: 38,577,126 | Q_Score: 42 | AnswerCount: 3 | ViewCount: 62,888 | CreationDate: 2016-07-25T20:45:00.000
A_Id: 48,574,315 | Users Score: 16 | Score: 1 | is_accepted: false | Available Count: 1
Tags: python,pandas
Categories: Python Basics and Environment, Data Science and Machine Learning
Question: I have a pandas dataframe with mixed column names: 1,2,3,4,5, 'Class' When I save this dataframe to h5file, it says that the performance will be affected due to mixed types. How do I convert the integer to string in pandas?
Answer: You can simply use df.columns = df.columns.map(str) DSM's first answer df.columns = df.columns.astype(str) didn't work for my dataframe. (I got TypeError: Setting dtype to anything other than float64 or object is not supported)

0
45,575,576
0
1
0
0
1
false
1
2016-07-27T00:03:00.000
0
1
0
Programming on PySpark (local) vs. Python on Jupyter Notebook
38,601,730
0
python,apache-spark,pyspark
I'm in a similar situation. We've done most of our development in Python (primarily Pandas) and now we're moving into Spark as our environment has matured to the point that we can use it. The biggest disadvantage I see to PySpark is when we have to perform operations across an entire DataFrame but PySpark doesn't dire...
Recently I've been working a lot with pySpark, so I've been getting used to its syntax, the different APIs and the HiveContext functions. Many times when I start working on a project I'm not fully aware of what its scope will be, or the size of the input data, so sometimes I end up requiring the full power of distribu...
0
1
1,310
0
38,612,762
0
0
0
0
1
true
3
2016-07-27T10:53:00.000
4
1
0
Dealing with big data to perform random forest classification
38,610,955
1.2
python,pandas,scikit-learn,sparse-matrix,bigdata
I would suggest you give CloudxLab a try. Though it is not free, it is quite affordable ($25 for a month). It provides a complete environment to experiment with various tools such as HDFS, Map-Reduce, Hive, Pig, Kafka, Spark, Scala, Sqoop, Oozie, Mahout, MLLib, Zookeeper, R etc. Many of the popular trainers are us...
I am currently working on my thesis, which involves dealing with quite a sizable dataset: ~4mln observations and ~260k features. It is a dataset of chess games, where most of the features are player dummies (130k for each colour). As for the hardware and the software, I have around 12GB of RAM on this computer. I a...
0
1
392
0
43,754,593
0
1
0
0
1
false
0
2016-07-27T11:43:00.000
0
1
0
Import Theano on Anaconda of platform windows10
38,611,999
0
python,anaconda,theano
Just found a temporary solution: rename configparser.py to config_parser (or any other name that doesn't conflict), and change the name in each module that includes it to config_parser.
I download the theano from github, and install it. But when I try to import the theano in ipython, I meet this problem In [1]: import theano ImportError Traceback (most recent call last) <ipython-input-1-3397704bd624> in <module>() ----> 1 import theano C:\Anaconda3\lib\site-packages\thean...
0
1
437
0
38,615,418
0
0
0
0
1
true
2
2016-07-27T13:58:00.000
1
1
0
Tf-Idf vectorizer analyze vectors from lines instead of words
38,615,088
1.2
python,scikit-learn,vectorization,tf-idf,text-analysis
You seem to be misunderstanding what the TF-IDF vectorization is doing. For each word (or N-gram), it assigns a weight to the word which is a function of both the frequency of the term (TF) and of its inverse frequency of the other terms in the document (IDF). It makes sense to use it for words (e.g. knowing how often ...
I'm trying to analyze a text which is given by lines, and I wish to vectorize the lines using sckit-learn package's TF-IDF-vectorization in python. The problem is that the vectorization can be done either by words or n-grams but I want them to be done for lines, and I already ruled out a work around that just vectorize...
0
1
791
0
54,919,826
0
0
0
0
1
false
8
2016-07-28T21:52:00.000
-3
3
0
Tensorflow: Convert Tensor to numpy array WITHOUT .eval() or sess.run()
38,647,353
-0.197375
python,numpy,tensorflow
.numpy() will convert tensor to an array.
How can you convert a tensor into a Numpy ndarray, without using eval or sess.run()? I need to pass a tensor into a feed dictionary and I already have a session running.
0
1
13,763
0
41,493,134
0
0
0
0
1
false
6
2016-07-29T10:55:00.000
3
2
0
how to save jupyter output into a pdf file
38,657,054
0.291313
python-2.7,pdf,jupyter-notebook
When I want to save a Jupyter Notebook I right click the mouse, select print, then change Destination to Save as PDF. This does not save the analysis outputs though. So if I want to save a regression output, for example, I highlight the output in Jupyter Notebook, right click, print, Save as PDF. This process creates f...
I am doing some data science analysis on jupyter and I wonder how to get all the output of my cell saved into a pdf file ? thanks
1
1
19,046
0
38,720,955
0
0
0
0
1
true
0
2016-08-01T18:50:00.000
1
1
0
fastest format to load saved graph structure into python-igraph
38,706,050
1.2
python,profiling,igraph
If you don't have vertex or edge attributes, your best bet is a simple edge list, i.e. Graph.Read_Edgelist(). The disadvantage is that it assumes that vertex IDs are in the range [0; |V|-1], so you'll need to have an additional file next to it where line i contains the name of the vertex with ID=i.
I have a very large network structure which I am working with in igraph. There are many different file formats which igraph Graph objects can write to and then be loaded from. I ran into memory problems when using g.write_picklez, and Graph.Read_Lgl() takes about 5 minutes to finish. I was wondering if anyone had alrea...
0
1
484
0
38,718,933
0
0
0
0
1
true
2
2016-08-01T23:11:00.000
3
1
0
Upgraded Seaborn 0.7.0 to 0.7.1, getting AttribueError for missing axlabel
38,709,439
1.2
python,seaborn
Changes were made in 0.7.1 to clean up the top-level namespace a bit. axlabel was not used anywhere in the documentation, so it was moved to make the main functions more discoverable. You can still access it with sns.utils.axlabel. Sorry for the inconvenience. Note that it's usually just as easy to do ax.set(xlabel=".....
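For reference, a minimal sketch of the ax.set replacement suggested above (a plain matplotlib Axes; the Agg backend keeps it headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# The ax.set(...) route, instead of the removed top-level sns.axlabel
ax.set(xlabel="SAMPLE GROUP", ylabel="values")
```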
Having trouble with my upgrade to Seaborn 0.7.1. Conda only has 0.7.0 so I removed it and installed 0.7.1 with pip. I am now getting this error: AttributeError: module 'seaborn' has no attribute 'axlabel' from this line of code sns.axlabel(xlabel="SAMPLE GROUP", ylabel=y_label, fontsize=16) I removed and reinstalled 0....
0
1
853
0
38,721,746
0
0
0
0
1
false
2
2016-08-02T04:48:00.000
0
2
0
Function that depends on the row number
38,711,966
0
python,pandas,numbers,row
Sorry I couldn't add a code sample, but I'm on my phone. piRSquared confirmed my fears when he said the info is lost. I guess I'll have to do a loop every time or add a column with numbers (that will get scrambled if I sort them :/). Thanks everyone.
In pandas, is it possible to reference the row number for a function. I am not talking about .iloc. iloc takes a location i.e. a row number and returns a dataframe value. I want to access the location number in the dataframe. For instance, if the function is in the cell that is 3 rows down and 2 columns across, I wa...
0
1
118
0
38,724,313
0
0
0
0
2
false
2
2016-08-02T15:10:00.000
0
2
0
Sending pandas dataframe to java application
38,724,255
0
java,python,pandas,numpy,jython
Have you tried using XML to transfer the data between the two applications? My next suggestion would be to output the data in JSON format to a text file and then call the Java application, which will read the JSON from the text file.
I have created a python script for predictive analytics using pandas, numpy etc. I want to send my result set to a java application. Is there a simple way to do it? I found we can use Jython for java-python integration but it doesn't use many data analysis libraries. Any help will be great. Thank you.
1
1
2,769
0
57,166,461
0
0
0
0
2
false
2
2016-08-02T15:10:00.000
0
2
0
Sending pandas dataframe to java application
38,724,255
0
java,python,pandas,numpy,jython
A better approach here is to use a pipe, like python pythonApp.py | java read. The output of the Python application can be used as input for the Java application as long as the data format is consistent and known. The above solution of creating a file and then reading it also works, but is more error-prone.
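A minimal sketch of the Python side of such a pipe, assuming the result set is serialized as newline-delimited JSON (the record fields here are made up for illustration):

```python
import json

# Hypothetical result set from the pandas/numpy analysis
records = [{"id": 1, "score": 0.87}, {"id": 2, "score": 0.43}]

def to_ndjson(rows):
    # One JSON object per line: easy for a Java BufferedReader to consume
    return "\n".join(json.dumps(r) for r in rows)

payload = to_ndjson(records)
print(payload)
```

On the Java side each line can then be parsed with any JSON library (Jackson, Gson, etc.).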
I have created a python script for predictive analytics using pandas, numpy etc. I want to send my result set to a java application. Is there a simple way to do it? I found we can use Jython for java-python integration but it doesn't use many data analysis libraries. Any help will be great. Thank you.
1
1
2,769
0
38,729,045
0
1
0
0
1
false
0
2016-08-02T17:35:00.000
0
1
0
How to turn off matplotlib inline function and install pygtk?
38,727,035
0
python,matplotlib,ipython
You need to install pyGTK. How to do so depends on what you're using to run Python. You could also not use '%matplotlib inline' and then it'll default to whatever is installed on your system.
I got two questions when I was plotting graphs in ipython. Once I run %matplotlib inline, I don't know how to switch back to using floating windows. When I search for the method to switch back, people tell me to run %matplotlib osx or %matplotlib, however, I finally get an error, which is Gtk* backend re...
0
1
336
0
38,740,100
0
0
0
0
1
false
1
2016-08-02T19:00:00.000
4
1
0
Inputs not a sequence wth RNNs and TensorFlow
38,728,501
0.664037
python,neural-network,tensorflow,recurrent-neural-network
I think when you use the tf.nn.rnn function it is expecting a list of tensors and not just a single tensor. You should unpack input in the time direction so that it is a list of tensors of shape [?, 22501]. You could also use tf.nn.dynamic_rnn which I think can handle this unpack for you.
I have some very basic lstm code with tensorflow and python, where my code is output = tf.nn.rnn(tf.nn.rnn_cell.BasicLSTMCell(10), input_flattened, initial_state=tf.placeholder("float", [None, 20])) where my input flattened is shape [?, 5, 22501] I'm getting the error TypeError: inputs must be a sequence on the state ...
0
1
3,118
0
38,733,854
0
0
0
0
1
true
57
2016-08-03T02:07:00.000
59
2
0
Difference between scikit-learn and sklearn
38,733,220
1.2
python,python-2.7,scikit-learn
You might need to reinstall numpy. It doesn't seem to have installed correctly. sklearn is how you type the scikit-learn name in python. Also, try running the standard tests in scikit-learn and check the output. You will have detailed error information there. Do you have nosetests installed? Try: nosetests -v sklearn. ...
On OS X 10.11.6 and python 2.7.10 I need to import from sklearn manifold. I have numpy 1.8 Orc1, scipy .13 Ob1 and scikit-learn 0.17.1 installed. I used pip to install sklearn(0.0), but when I try to import from sklearn manifold I get the following: Traceback (most recent call last): File "", line 1, in File...
0
1
81,337
0
38,751,473
0
0
0
0
2
false
1
2016-08-03T18:42:00.000
1
2
0
MiniBatchKMeans gives different centroids after subsequent iterations
38,751,364
0.099668
python,statistics,scikit-learn,cluster-analysis,k-means
The behavior you are experiencing probably has to do with the under the hood implementation of k-means clustering that you are using. k-means clustering is an NP-hard problem, so all the implementations out there are heuristic methods. What this means practically is that for a given seed, it will converge toward a loca...
I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this: data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...]) I fit the data using the process below. from sk...
0
1
1,020
0
38,754,035
0
0
0
0
2
false
1
2016-08-03T18:42:00.000
1
2
0
MiniBatchKMeans gives different centroids after subsequent iterations
38,751,364
0.099668
python,statistics,scikit-learn,cluster-analysis,k-means
Read up on what mini-batch k-means is. It will never even converge. Do one more iteration, and the result will change again. It is designed for data sets so huge you cannot load them into memory at once. So you load a batch, pretend this were the full data set, and do one iteration. Repeat with the next batch. If your batches ar...
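To make the batch-by-batch behaviour concrete, here is a rough numpy sketch of mini-batch k-means updates (a simplification of what sklearn does internally; the data, seed, and batch size are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_kmeans_step(centroids, batch, counts):
    """One mini-batch update: pull each centroid toward the batch points
    assigned to it, with a per-centroid step size 1/count, so later
    batches move the centroids less and less."""
    for x in batch:
        j = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))  # nearest centroid
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids, counts

data = rng.normal(size=(100, 2))
centroids = data[:3].copy()          # naive init from the first points
counts = np.zeros(3)
for i in range(0, 100, 10):          # stream the data in 10-point batches
    centroids, counts = minibatch_kmeans_step(centroids, data[i:i + 10], counts)
```

Each extra batch nudges the centroids again, which is exactly why the result keeps changing between calls.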
I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this: data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...]) I fit the data using the process below. from sk...
0
1
1,020
0
38,763,173
0
0
0
0
2
true
0
2016-08-04T06:08:00.000
2
3
0
How should we set the number of the neurons in the hidden layer in neural network?
38,759,647
1.2
python-2.7,machine-learning,neural-network
Yes - this is a really important issue. Basically there are two ways to do that: Try different topologies and choose the best: due to the fact that the numbers of neurons and layers are discrete parameters, you cannot differentiate your loss function with respect to these parameters in order to use gradient descent methods. ...
In neural network theory - setting up the size of hidden layers seems to be a really important issue. Is there any criteria how to choose the number of neurons in a hidden layer?
0
1
638
0
38,776,068
0
0
0
0
2
false
0
2016-08-04T06:08:00.000
1
3
0
How should we set the number of the neurons in the hidden layer in neural network?
38,759,647
0.066568
python-2.7,machine-learning,neural-network
You have to set the number of neurons in the hidden layer in such a way that it isn't more than the number of your training examples. There is no rule of thumb for the number of neurons. Ex: If you are using the MNIST Dataset then you might have ~ 78K training examples. So make sure that the combination of Neural Network (784-30-10) = 784*...
In neural network theory - setting up the size of hidden layers seems to be a really important issue. Is there any criteria how to choose the number of neurons in a hidden layer?
0
1
638
0
38,788,040
0
0
0
0
1
false
3
2016-08-04T12:34:00.000
1
1
0
Scikit-Learn- How to add an 'unclassified' category?
38,767,481
0.197375
python,scikit-learn,text-classification
In the supervised learning approach as it is, you cannot add an extra category. Therefore I would use some heuristics. Try to predict the probability for each category. Then, if all 4 or at least 3 probabilities are approximately equal, you can say that the sample is "unknown". For this approach LinearSVC or other type of Su...
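A sketch of that heuristic as a post-processing step (the margin value and label names are arbitrary; the probabilities would come from a probability-capable classifier):

```python
def label_or_unknown(probs, labels, margin=0.1):
    """Return the top label only if it beats the runner-up by `margin`,
    otherwise fall back to 'unclassified'."""
    ranked = sorted(zip(probs, labels), reverse=True)
    (p1, top), (p2, _runner_up) = ranked[0], ranked[1]
    return top if p1 - p2 >= margin else "unclassified"

print(label_or_unknown([0.7, 0.2, 0.1], ["sport", "politics", "cinema"]))
```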
I am using Scikit-Learn to classify texts (in my case tweets) using LinearSVC. Is there a way to classify texts as unclassified when they are a poor fit with any of the categories defined in the training set? For example if I have categories for sport, politics and cinema and attempt to predict the classification on a ...
0
1
198
1
38,774,932
0
0
0
0
1
false
0
2016-08-04T18:23:00.000
0
1
0
Difference between Kivy camera and opencv camera
38,774,748
0
python,opencv,camera,kivy,motion-detection
opencv is a computer vision framework (hence the c-v) which can interact with device cameras. Kivy is a cross-platform development tool which can interact with device cameras. It makes sense that there are good motion detection tutorials for opencv but not the kivy camera, since this isn't really what kivy is for.
What is the difference between Kivy Camera and opencv ? I am asking this because in Kivy Camera the image gets adjusted according to frame size but in opencv this does not happen. Also I am not able to do motion detection in kivy camera whereas I found a great tutorial for motion detection on opencv. If someone can cla...
0
1
386
0
38,794,707
0
0
0
0
1
false
0
2016-08-05T17:14:00.000
0
3
0
How to get constant function to keep shape in NumPy
38,794,622
0
python,numpy,array-broadcasting
Use x.fill(1). Make sure to return it properly as fill doesn't return a new variable, it modifies x
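A short sketch of both options, so the constant function keeps the input's shape (the array contents here are arbitrary):

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(2, 3)

# Option 1: build the constant result explicitly, matching A's shape/dtype
f = lambda x: np.full_like(x, 1.0)
out = f(A)

# Option 2: in-place, as suggested above -- note fill() returns None,
# so return the array itself, not the result of fill()
B = np.empty_like(A)
B.fill(1.0)
```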
I have a NumPy array A with shape (m,n) and want to run all the elements through some function f. For a non-constant function such as for example f(x) = x or f(x) = x**2 broadcasting works perfectly fine and returns the expected result. For f(x) = 1, applying the function to my array A however just returns the scalar 1...
0
1
1,182
0
38,803,263
0
0
0
0
1
false
0
2016-08-05T18:43:00.000
2
1
0
Is it possible to increase the number of centroids in KMeans during fitting?
38,795,912
0.379949
python,scikit-learn,cluster-analysis,k-means
It is not a good idea to do this during optimization, because it changes the optimization procedure substantially. It will essentially reset the whole optimization. There are strategies such as bisecting k-means that try to learn the value of k during clustering, but they are a bit more tricky than increasing k by one ...
I am attempting to use MiniBatchKMeans to stream NLP data in and cluster it, but have no way of determining how many clusters I need. What I would like to do is periodically take the silhouette score and if it drops below a certain threshold, increase the number of centroids. But as far as I can tell, n_clusters is set...
0
1
77
0
38,819,049
1
0
1
0
1
false
3
2016-08-07T22:03:00.000
0
1
0
Most efficient way to check twitter friendship? (over 5000 check)
38,818,981
0
python,twitter,tweepy
I don't know much about the limits with Tweepy, but you can always write a basic web scraper with urllib and BeautifulSoup to do so. You could take a website such as www.doesfollow.com which accomplishes what you are trying to do. (not sure about request limits with this page, but there are dozens of other websites tha...
I'm facing a problem like this. I used tweepy to collect +10000 tweets, I used nltk naive-bayes classification and filtered the tweets down to +5000. I want to generate a graph of user friendship from those classified 5000 tweets. The problem is that I am able to check it with tweepy.api.show_frienship(), but it takes so much...
0
1
592
0
39,021,770
0
1
0
0
1
true
2
2016-08-08T12:08:00.000
1
3
0
How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm?
38,828,829
1.2
python,pycharm,tensorflow,anaconda,ubuntu-16.04
Anaconda defaults doesn't provide tensorflow yet, but conda-forge do, conda install -c conda-forge tensorflow should see you right, though (for others reading!) the installed tensorflow will not work on CentOS < 7 (or other Linux Distros of a similar vintage).
I am using Ubuntu 16.04. I tried to install Tensorflow using Anaconda 2, but it installed an environment inside Ubuntu, so I had to create a virtual environment and then use Tensorflow. Now how can I use both Tensorflow and Sci-kit learn together in a single environment?
0
1
2,492
0
39,119,230
0
0
0
1
1
true
0
2016-08-09T10:03:00.000
1
1
0
Exporting R data.frame/tbl to Google BigQuery table
38,847,743
1.2
python,r,dataframe,google-bigquery
It looks like the bigrquery package does the job with insert_upload_job(). In the package documentation, it says this function "is only suitable for relatively small datasets", but it doesn't specify any size limits. For me, it's been working for tens of thousands of rows.
I know it's possible to import Google BigQuery tables to R through bigrquery library. But is it possible to export tables/data frames created in R to Google BigQuery as new tables? Basically, is there an R equivalent of Python's temptable.insert_data(df) or df.to_sql() ? thanks for your help, Kasia
0
1
635
0
38,916,691
0
0
0
0
1
true
1
2016-08-11T17:53:00.000
1
1
0
Find the Number of Distinct Topics After LDA in Python/ R
38,903,061
1.2
python,r,lda,topic-modeling,text-analysis
First, your question kind of assumes that topics identified by LDA correspond to real semantic topics - I'd be very careful about that assumption and take a look at the documents and words assigned to topics you want to interpret that way, as LDA often have random extra words assigned, can merge two or more actual topi...
As far as I know, I need to fix the number of topics for LDA modeling in Python/ R. However, say I set topic=10 while the results show that, for a document, nine topics are all about 'health' and the distinct number of topics for this document is 2 indeed. How can I spot it without examining the key words of each topic...
0
1
618
0
38,924,162
0
1
0
0
1
false
3
2016-08-12T15:51:00.000
1
2
0
How can I implement a dictionary with a NumPy array?
38,921,975
0.099668
python,arrays,numpy,dictionary,red-black-tree
The most basic form of a dictionary is a structure called a HashMap. Implementing a hashmap relies on turning your key into a value that can be quickly looked up. A pathological example would be using ints as keys: The value for key 1 would go in array[1], the value for key 2 would go in array[2], the Hash Function ...
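To make the "key maps to an array slot" idea concrete, here is a toy chained hash map in plain Python (purely illustrative, not a substitute for dict or a proper sparse structure):

```python
class TinyHashMap:
    """Toy hash map: hash the key into a fixed bucket array, chain
    collisions in a list per bucket. Missing keys default to 0, which
    matches the 'most second values are 0' use case above."""
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _slot(self, key):
        return hash(key) % len(self.buckets)   # key -> array index

    def put(self, key, value):
        bucket = self.buckets[self._slot(key)]
        for i, (k, _v) in enumerate(bucket):
            if k == key:                       # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=0):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        return default
```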
I need to write a huge amount number-number pairs into a NumPy array. Since a lot of these pairs have a second value of 0, I thought of making something akin to a dictionary. The problem is that I've read through the NumPy documentation on structured arrays and it seems like dictionaries built like those on the page ca...
0
1
4,918
0
38,945,813
0
0
0
0
1
false
1
2016-08-14T19:11:00.000
0
2
0
How does cv2.fitEllipse handle width/height with regards to rotation?
38,945,695
0
python,opencv,ellipse,data-fitting
Empirically, I ran code matching thousands of ellipses, and I never got one return value where the returned width was greater than the returned height. So it seems OpenCV normalizes the ellipse such that height >= width.
An ellipse of width 50, height 100, and angle 0, would be identical to an ellipse of width 100, height 50, and angle 90 - i.e. one is the rotation of the other. How does cv2.fitEllipse handle this? Does it return ellipses in some normalized form (i.e. angle is picked such that width is always < height), or can it provi...
0
1
1,827
0
52,064,081
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
24
10
0
Dataframe not showing in Pycharm
38,956,660
1
python,pandas,pycharm
I have faced the same problem with PyCharm 2018.2.2. The reason was having a special character in a column's name, as mentioned by Yunzhao. If you're having a column name like 'R&D', changing it to 'RnD' will fix the problem. It's really strange JetBrains hasn't solved this problem for over 2 years.
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
51,483,568
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
9
10
0
Dataframe not showing in Pycharm
38,956,660
1
python,pandas,pycharm
I have met the same problems. I figured it was because of the special characters in column names (in my case) In my case, I have "%" in the column name, then it doesn't show the data in View as DataFrame function. After I remove it, everything was correctly shown. Please double check if you also have some special char...
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
57,003,355
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
2
10
0
Dataframe not showing in Pycharm
38,956,660
0.039979
python,pandas,pycharm
In my situation, the problem was caused by two identical column names in my dataframe. Check it with: df.columns.shape[0] == len(set(df.columns))
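A quick sketch of that check (the duplicated label here is fabricated to reproduce the symptom):

```python
import pandas as pd

# A frame with a duplicated column label, as described above
df = pd.DataFrame([[1, 2, 3]], columns=["a", "b", "a"])

has_unique_columns = df.columns.shape[0] == len(set(df.columns))
print(has_unique_columns)  # False -> likely to break the PyCharm viewer
```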
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
55,593,342
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
2
10
0
Dataframe not showing in Pycharm
38,956,660
0.039979
python,pandas,pycharm
I use PyCharm 2019.1.1 (Community Edition) and I run Python 3.7. When I first click on "View as DataFrame" there seems to be the same issue, but if I wait a few seconds the content pops up. For me it is a matter of loading.
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
57,313,249
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
2
10
0
Dataframe not showing in Pycharm
38,956,660
0.039979
python,pandas,pycharm
For the sake of completeness: I faced the same problem, due to the fact that some elements in the index of the dataframe contain a question mark '?'. One should avoid that too, if you still want to use the data viewer. The data viewer still worked if the index strings contained hashes or less-than/greater-than signs, though.
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
64,172,890
0
0
0
0
6
false
57
2016-08-15T14:07:00.000
1
10
0
Dataframe not showing in Pycharm
38,956,660
0.019997
python,pandas,pycharm
As of 2020-10-02, using PyCharm 2020.1.4, I found that this issue also occurs if the DataFrame contains a column containing a tuple.
I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame. However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothin...
0
1
27,005
0
38,966,174
0
1
0
0
1
true
1
2016-08-15T17:49:00.000
1
1
0
Mutable indexed heterogeneous data structure?
38,960,221
1.2
python,pandas,tuples
You could use a dict of dicts instead of a dict of namedtuples. Dicts are mutable, so you'll be able to modify the inner dicts. Given what you said in the comments about the structures of each DataFrame-1 and -2 being comparable, you could also group all of each into one big DataFrame, by adding a column to each DataF...
Is there a data class or type in Python that matches these criteria? I am trying to build an object that looks something like this: ExperimentData ID 1 sample_info_1: character string sample_info_2: character string Dataframe_1: pandas data frame Dataframe_2: pandas data frame ID 2 (etc.) Right now, I am usin...
0
1
68
0
45,422,652
0
0
0
0
1
false
2
2016-08-15T22:35:00.000
0
2
0
How to load images in S3 for a deep learning model with EC2 instance (GPU)
38,964,041
0
python,amazon-web-services,amazon-s3,amazon-ec2,keras
You can do it using a Jupyter notebook; otherwise use Duck for Mac or PuTTY for Windows. I hope it helps.
I'm trying to train a Keras model on AWS GPU. How would you load images (training data) in S3 for a deep learning model with EC2 instance (GPU)?
0
1
764
0
39,922,584
0
0
0
0
1
false
22
2016-08-16T10:22:00.000
0
3
0
Keras: How to use fit_generator with multiple outputs of different type
38,972,380
0
python,deep-learning,keras
The best way to achieve this seems to be to create a new generator class expanding the one provided by Keras that parses the data augmenting only the images and yielding all the outputs.
In a Keras model with the Functional API I need to call fit_generator to train on augmented images data using an ImageDataGenerator. The problem is my model has two outputs: the mask I'm trying to predict and a binary value. I obviously only want to augment the input and the mask output and not the binary value. How ca...
0
1
21,904
0
38,976,616
0
0
0
0
1
false
1
2016-08-16T13:35:00.000
0
1
0
Initializing a very large pandas dataframe
38,976,431
0
python,pandas,numpy,large-data
Out of curiosity, is there a reason you want to use Pandas for this? Image analysis is typically handled in matrices making NumPy a clear favorite. If I'm not mistaken, both sk-learn and PIL/IMAGE use NumPy arrays to do their analysis and operations. Another option: avoid the in-memory step! Do you need to access a...
Background: I have a sequence of images. In each image, I map a single pixel to a number. Then I want to create a pandas dataframe where each pixel is in its own column and images are rows. The reason I want to do that is so that I can use things like forward fill. Challenge: I have transformed each image into a one di...
0
1
310
0
38,980,686
0
0
0
0
1
true
0
2016-08-16T16:57:00.000
0
1
0
What does Random Forest do with unseen data?
38,980,544
1.2
python,machine-learning,scikit-learn,random-forest
They will be treated in the same manner as the minimal value already encountered in the training set. RF is just a bunch of voting decision trees, and (basic) DTs can only form decisions in the form "if feature X is greater than T go left, otherwise go right". Consequently, if you fit it to data which, for a given feature, ha...
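A tiny sketch of why: a single split only compares against a threshold, so every value below it (however far below) lands in the same branch. The threshold and class names here are made up:

```python
def stump_predict(x, threshold=0.0, left="neg", right="pos"):
    # One decision-tree split: 'if feature > T go right, else go left'
    return right if x > threshold else left

# -1000 is far below anything "seen in training", but it routes exactly
# like any other value at or below the threshold
print(stump_predict(-1000.0), stump_predict(-0.001))
```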
When I built my random forest model using scikit learn in python, I set a condition (where clause in sql query) so that the training data only contain values whose value is greater than 0. I am curious to know how random forest handles test data whose value is less than 0, which the random forest model has never seen b...
0
1
709
0
38,984,364
0
0
0
0
1
true
2
2016-08-16T20:39:00.000
3
1
0
add training data to existing LinearSVC
38,984,069
1.2
python,machine-learning,scikit-learn
You cannot add data to SVM and achieve the same result as if you would add it to the original training set. You can either retrain with extended training set starting with the previous solution (should be faster) or train on new data only and completely diverge from the previous solution. There are only few models that...
I am scraping approximately 200,000 websites, looking for certain types of media posted on the websites of small businesses. I have a pickled linearSVC, which I've trained to predict the probability that a link found on a web page contains media of the type that I'm looking for, and it performs rather well (overall acc...
0
1
902
0
38,987,964
0
0
0
0
1
false
2
2016-08-17T03:05:00.000
1
2
0
How to detect ending location (x,y,z) of certain sequence in 3D domain
38,987,464
0.099668
python,algorithm,graph,analytics,d3dimage
One approach would be to choose a threshold density, convert all voxels below this threshold to 0 and all above it to 1, and then look for the pair of 1-voxels whose shortest path is longest among all pairs of 1-voxels. These two voxels should be near the ends of the longest "rope", regardless of the exact shape that ...
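One way to sketch this endpoint search in plain Python, assuming the volume has already been thresholded into a set of occupied voxels (brute-force BFS from every start voxel, so only viable on small or downsampled grids):

```python
from collections import deque

def farthest_pair(voxels):
    """Find the pair of occupied voxels with the longest shortest path
    under 6-connectivity; these should sit near the rope's two ends."""
    occupied = set(voxels)

    def bfs(start):
        dist = {start: 0}
        q = deque([start])
        while q:
            x, y, z = q.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if n in occupied and n not in dist:
                    dist[n] = dist[(x, y, z)] + 1
                    q.append(n)
        return dist

    best = (0, None, None)
    for v in occupied:
        d = bfs(v)
        far = max(d, key=d.get)
        if d[far] > best[0]:
            best = (d[far], v, far)
    return best  # (path length, endpoint A, endpoint B)

# A straight 5-voxel "rope" along the x axis
length, a, b = farthest_pair([(i, 0, 0) for i in range(5)])
```

On a real volume a cheaper two-sweep BFS heuristic (BFS to the farthest voxel from an arbitrary start, then BFS again from there) usually finds the same pair.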
I have protein 3D creo-EM scan, such that it contains a chain which bends and twists around itself - and has in 3-dimension space 2 chain endings (like continuous rope). I need to detect (x,y,z) location within given cube space of two or possibly multiplier of 2 endings. Cube space of scan is presented by densities in ...
0
1
97
0
46,249,521
0
1
0
0
2
false
1
2016-08-17T06:49:00.000
0
2
0
How to install scikit-learn
38,989,896
0
python,windows,scikit-learn
Old post, but the right answer is 'sudo pip install -U numpy matplotlib --upgrade' for python2 or 'sudo pip3 install -U numpy matplotlib --upgrade' for python3
I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well. How can I install these modules using the pip command?
0
1
2,435
0
38,990,089
0
1
0
0
2
false
1
2016-08-17T06:49:00.000
-1
2
0
How to install scikit-learn
38,989,896
-0.099668
python,windows,scikit-learn
Using Python 3.4, I run the following from the command line: c:\python34\python.exe -m pip install package_name So you would substitute "numpy" and "matplotlib" for 'package_name'
I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well. How can I install these modules using the pip command?
0
1
2,435
0
38,994,681
0
0
0
0
1
false
0
2016-08-17T08:36:00.000
0
1
0
Scikit-learn and pyspark integration
38,991,799
0
python,apache-spark,scikit-learn,pyspark
The fact that you are using Spark shouldn't keep you from using external Python libraries. You can import the sklearn library in your spark-python code and use the sklearn logistic regression model with the saved pkl file.
I have trained a logistic regression model in sklearn and saved the model to .pkl files. Is there a method of using this pkl file from within spark?
0
1
374
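The non-Spark core of the answer above can be sketched as follows: pickle a fitted sklearn model, then unpickle it where you need it - inside Spark you would typically do the loading once per partition (in a mapPartitions function) or load on the driver and broadcast. The tiny training data and the file name are hypothetical:

```python
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train and save, as presumably done earlier in sklearn.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
with open("model.pkl", "wb") as f:
    pickle.dump(LogisticRegression().fit(X, y), f)

# Inside Spark you would run this load once per partition
# (mapPartitions), or load on the driver and broadcast the model.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

rows = [[0.5], [2.5]]            # stand-in for one partition of an RDD
preds = [int(model.predict([r])[0]) for r in rows]
print(preds)
```

The only real requirement is that the sklearn version on the executors can unpickle what the driver-side version saved.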
0
39,093,328
0
0
0
0
1
false
4
2016-08-17T15:48:00.000
3
1
0
In Keras, If samples_per_epoch is less than the 'end' of the generator when it (loops back on itself) will this negatively affect result?
39,001,104
0.53705
python,machine-learning,deep-learning,theano,keras
I'm dealing with something similar right now. I want to make my epochs shorter so I can record more information about the loss or adjust my learning rate more often. Without diving into the code, I think the fact that .fit_generator works with the randomly augmented/shuffled data produced by the Keras built-in ImageDat...
I'm using Keras with Theano to train a basic logistic regression model. Say I've got a training set of 1 million entries, it's too large for my system to use the standard model.fit() without blowing away memory. I decide to use a python generator function and fit my model using model.fit_generator(). My generator fu...
0
1
1,961
0
69,125,753
0
0
0
0
1
false
47
2016-08-18T00:56:00.000
0
3
0
How to transform Dask.DataFrame to pd.DataFrame?
39,008,391
0
python,pandas,dask
MRocklin's answer is correct and this answer gives more details on when it's appropriate to convert from a Dask DataFrame to a Pandas DataFrame (and how to predict when it'll cause problems). Each partition in a Dask DataFrame is a Pandas DataFrame. Running df.compute() will coalesce all the underlying partitions in...
How can I transform my resulting dask.DataFrame into pandas.DataFrame (let's say I am done with heavy lifting, and just want to apply sklearn to my aggregate result)?
0
1
32,382
0
39,018,076
0
0
0
0
1
true
2
2016-08-18T12:15:00.000
2
1
0
Deploy caffe regression model
39,017,998
1.2
python,neural-network,deep-learning,caffe,conv-neural-network
For deployment you only need to discard the loss layer, in your case the "EuclideanLoss" layer. The output of your net is then the "bottom" you fed to the loss layer. For the "SoftmaxWithLoss" layer (and "SigmoidCrossEntropy") you need to replace the loss layer, since the loss layer includes an extra layer inside it (for computational...
I have trained a regression network with caffe. I use "EuclideanLoss" layer in both the train and test phase. I have plotted these and the results look promising. Now I want to deploy the model and use it. I know that if SoftmaxLoss is used, the final layer must be Softmax in the deploy file. What should this be in t...
0
1
594
0
39,046,078
0
0
1
0
1
true
0
2016-08-19T18:42:00.000
0
1
0
Tweepy import Error on HDFS running on Centos 7
39,045,825
1.2
python-2.7,hadoop,hdfs,tweepy,centos7
It looks like you're using Anaconda's Python to run your script, but you installed tweepy into CentOS's system installation of Python using pip. Either use conda to install tweepy, or use Anaconda's pip executable to install tweepy onto your Hadoop cluster.
I have a Hadoop Cluster running on Centos 7. I am running a program (sitting on HDFS) to extract tweets and I need to import tweepy for that. I did pip install tweepy as root on all the nodes of the cluster but i still get an import error when I run the program. Error says: ImportError: No module named tweepy I am su...
0
1
192
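A quick way to confirm the mismatch described in the answer above is to ask the interpreter that actually runs your script which Python it is and where it imports packages from; if these point at Anaconda while pip installed tweepy under the system Python (e.g. /usr/lib/python2.7/site-packages), the import will fail:

```python
import sys
import sysconfig

# The interpreter actually running this script ...
print(sys.executable)

# ... and the site-packages directory it installs into / imports from.
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)

# If sys.executable is under an anaconda directory but `pip` wrote
# tweepy somewhere else, install with Anaconda's own pip or with conda.
```

Run this on each node of the cluster; the fix is to install tweepy into whichever site-packages path is printed.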
0
39,119,151
0
0
0
0
1
false
0
2016-08-24T07:22:00.000
0
1
0
python2.7 histogram comparison - white background anomaly
39,116,877
0
python,python-2.7,opencv,image-processing,histogram
You can remove the white color, rebin the histogram and then compare: Compute a histogram with 256 bins. Remove the white bin (or make it zero). Regroup the bins to have 64 bins by adding the values of 4 consecutive bins. Perform the compareHist(). This would work for any "predominant color". To generalize, you can...
My program's purpose is to take 2 images and decide how similar they are. I'm not talking here about identical, but similarity. For example, if I take 2 screenshots of 2 different pages of the same website, their theme colors would probably be very similar and therefore I want the program to declare that they are similar...
0
1
161
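The four steps in the answer above can be sketched in pure NumPy; the final comparison here uses the same correlation metric as OpenCV's cv2.compareHist(..., HISTCMP_CORREL), and the grayscale test images are made up:

```python
import numpy as np

def color_signature(img_gray):
    """256-bin histogram, white bin zeroed, regrouped into 64 bins of 4."""
    hist, _ = np.histogram(img_gray, bins=256, range=(0, 256))
    hist = hist.astype(float)
    hist[255] = 0.0                       # drop the white background bin
    return hist.reshape(64, 4).sum(axis=1)

def correlation(h1, h2):
    """Same metric as cv2.compareHist with HISTCMP_CORREL."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
dark = rng.integers(0, 60, size=(50, 50))      # two "dark themed" pages
dark2 = rng.integers(0, 60, size=(50, 50))
light = rng.integers(180, 250, size=(50, 50))  # a "light themed" page

sim_same = correlation(color_signature(dark), color_signature(dark2))
sim_diff = correlation(color_signature(dark), color_signature(light))
print(sim_same > sim_diff)
```

With real screenshots you would compute the signature per color channel (or on a quantized color image) rather than on grayscale, but the bin-removal and regrouping logic is the same.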
0
39,177,157
0
0
0
0
1
false
5
2016-08-26T13:54:00.000
2
1
0
Tensorflow: show or save forget gate values in LSTM
39,168,025
0.379949
python,neural-network,tensorflow,lstm
If you are using tf.rnn_cell.BasicLSTMCell, the variable you are looking for will have the following suffix in its name: <parent_variable_scope>/BasicLSTMCell/Linear/Matrix. This is a concatenated matrix for all four gates. Its first dimension matches the sum of the second dimensions of the input matrix and the ...
I am using the LSTM model that comes by default in tensorflow. I would like to check or to know how to save or show the values of the forget gate in each step, has anyone done this before or at least something similar to this? Till now I have tried with tf.print but many values appear (even more than the ones I was exp...
0
1
1,695
0
39,174,418
0
0
0
0
1
false
0
2016-08-26T18:23:00.000
0
1
0
Python: How to interpolate errors using scipy interpolate.interp1d
39,172,559
0
python,scipy,interpolation
As long as you can assume that your errors represent one-sigma intervals of normal distributions, you can always generate synthetic datasets, resample and interpolate those, and compute the 1-sigma errors of the results. Or just interpolate values+err and values-err, if all you need is a quick and dirty rough estimate.
I have a number of data sets, each containing x, y, and y_error values, and I'm simply trying to calculate the average value of y at each x across these data sets. However, the data sets are not quite the same length. I thought the best way to get them to an equal length would be to use scipy's interpolate.interp1d for ...
0
1
403
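The resampling idea in the answer above can be sketched with np.interp (scipy's interpolate.interp1d would work the same way): draw synthetic datasets from N(y, y_err), interpolate each onto a common grid, and take the per-point spread of the results as the interpolated error. The dataset below is made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up dataset with one-sigma errors on y.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
y_err = np.array([0.1, 0.2, 0.1, 0.3, 0.2])

x_common = np.linspace(0.0, 4.0, 9)      # grid shared by all datasets

# Resample: perturb y within its errors, interpolate, repeat.
draws = np.array([
    np.interp(x_common, x, rng.normal(y, y_err))
    for _ in range(2000)
])
y_interp = draws.mean(axis=0)            # interpolated values
y_interp_err = draws.std(axis=0)         # their one-sigma errors
print(y_interp_err.round(2))
```

Repeating this per dataset puts every dataset (with errors) on the same grid, after which averaging across datasets at each x is straightforward.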
0
39,229,405
0
1
0
0
1
false
0
2016-08-30T01:45:00.000
0
1
0
Convert Python dict files into MATLAB struct
39,217,618
0
python,matlab,dictionary,struct
Python -> MATLAB is a bit tricky with dictionaries/structs, because the object MATLAB expects is a dictionary where each key is a single variable you want from Python as a simple data type (array, int, etc.). It doesn't like having nested dictionaries. I recommend 1: Store each dictionary s...
I have a function in Python that outputs a dict. I run this function into MATLAB and save the output to a parameter (say tmp) which is a dict of nested other dicts itself. Now I want to convert this file into a useful format such as structure. To elaborate: tmp is a dict. data = struct(tmp) is a structure but the field...
0
1
1,329
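Following the answer's advice to avoid nested dictionaries, one common route is to flatten the dict and write it with scipy.io.savemat, which MATLAB loads directly with load(); the parameter names below are hypothetical:

```python
import os
import tempfile
from scipy.io import savemat, loadmat

def flatten(d, prefix=""):
    """Flatten nested dicts into single-level keys MATLAB can digest."""
    out = {}
    for k, v in d.items():
        key = prefix + k
        if isinstance(v, dict):
            out.update(flatten(v, prefix=key + "_"))
        else:
            out[key] = v
    return out

tmp = {"gain": 2.5, "filter": {"order": 4, "cutoff": [0.1, 0.3]}}
flat = flatten(tmp)

path = os.path.join(tempfile.mkdtemp(), "params.mat")
savemat(path, flat)          # in MATLAB: load('params.mat')
back = loadmat(path)         # round-trip check from the Python side
print(sorted(k for k in back if not k.startswith("__")))
```

Each flattened key becomes a separate MATLAB variable; if you prefer a single struct, wrap the flat dict as savemat(path, {"tmp": flat}) and access tmp.gain etc. in MATLAB.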
0
39,255,667
0
0
0
0
1
false
4
2016-08-30T19:05:00.000
1
1
0
In python apache beam, is it possible to write elements in a specific order?
39,235,274
0.197375
python,google-cloud-dataflow,apache-beam
While this isn't part of the base distribution, this is something you could implement by processing these elements and sorting them as part of a global window before writing out to a file, with the following caveats: The entire contents of the window would need to fit in memory, or you would need to chunk up the file ...
I'm using beam to process time series data over overlapping windows. At the end of my pipeline I am writing each element to a file. Each element represents a csv row and one of the fields is a timestamp of the associated window. I would like to write the elements in order of that timestamp. Is there a way to do this us...
0
1
998
0
40,301,138
0
0
0
0
1
false
1
2016-08-31T12:26:00.000
1
2
0
python 3D numpy array time index
39,249,639
0.099668
python,arrays,datetime,numpy,multidimensional-array
Posting the pseudo-solution I used: The problem here is the lack of date-time indexing for 3D array data (i.e. satellite, radar). While there are time-series functions in pandas, there are none for arrays (as far as I'm aware). This solution was possible because the data files I use have the date-time in the name, e.g. '20040...
Is there a way to index a 3-dimensional array using some form of time index (datetime etc.) on the 3rd dimension? My problem is that I am doing time series analysis on several thousand radar images and I need to get, for example, monthly averages. However, if I simply average over every 31 arrays in the 3rd dimension it...
0
1
949
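The date-from-filename approach in the answer above can be paired with a pandas DatetimeIndex to get correct monthly averages (31 days in January, 29 in February 2004, and so on): label each slice along the time axis with its calendar month, then average the matching slices. The image stack below is made up:

```python
import numpy as np
import pandas as pd

# Made-up stack of daily 2x2 "radar images" spanning two months;
# in practice the dates would be parsed from the file names.
times = pd.date_range("2004-01-01", "2004-02-29", freq="D")
data = np.arange(len(times) * 4, dtype=float).reshape(len(times), 2, 2)

# Group the time axis by calendar month and average the matching slices.
periods = times.to_period("M")
monthly = {}
for month in periods.unique():
    monthly[str(month)] = data[periods == month].mean(axis=0)

print(sorted(monthly))
```

Because the boolean mask selects exactly the slices belonging to each month, uneven month lengths are handled automatically, unlike averaging every fixed block of 31 arrays.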
0
56,981,228
0
1
0
0
1
false
8
2016-08-31T18:47:00.000
0
3
0
Plotly + iPython Notebook - Plots Disappear on Reopen
39,256,913
0
python,ipython,jupyter-notebook,plotly
I also ran into this annoying issue. I found no way to make the plots reappear in the notebook, but a workable compromise is to display them on an HTML page via File -> Print Preview.
When I create a notebook with plotly plots, save and reopen the notebook, the plots fail to render upon reopening - there are just blank blocks where the plots should be. Is this expected behavior? If not, is there a known fix?
0
1
1,438
0
39,273,086
0
0
0
0
1
false
0
2016-09-01T13:34:00.000
1
2
0
Suggestions to handle multiple python pandas scripts
39,273,012
0.099668
python,pandas
Instead of writing a CSV output which you have to re-parse, you can write and read the pandas.DataFrame in efficient binary format with the methods pandas.DataFrame.to_pickle() and pandas.read_pickle(), respectively.
I currently have several python pandas scripts that I keep separate because of 1) readability, and 2) sometimes I am interested in the output of these partial individual scripts. However, generally, the CSV file output of one of these scripts is the CSV input of the next and in each I have to re-read datetimes which i...
0
1
66
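The pickle round trip suggested in the answer above preserves dtypes, so the datetimes parsed by the first script need no re-parsing in the next; a minimal sketch with a hypothetical intermediate file:

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({
    "t": pd.to_datetime(["2016-09-01 13:34", "2016-09-02 08:00"]),
    "value": [1.5, 2.5],
})

path = os.path.join(tempfile.mkdtemp(), "stage1.pkl")
df.to_pickle(path)            # script 1 writes its intermediate result
df2 = pd.read_pickle(path)    # script 2 reads it back

# Unlike a CSV round trip, the datetime64 dtype survives unchanged.
print(df2.dtypes["t"])
```

Pickle files are pandas-version-sensitive, so this works best when the scripts in the chain run against the same environment.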