Aside: Image processing via convolutions

As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conve...
# Colab users only!
%mkdir -p cs231n/notebook_images
%cd drive/My\ Drive/$FOLDERNAME/cs231n
%cp -r notebook_images/ /content/cs231n/
%cd /content/

from imageio import imread
from PIL import Image

kitten = imread('cs231n/notebook_images/kitten.jpg')
puppy = imread('cs231n/notebook_images/puppy.jpg')
# kitten is wide, a...
MIT
assignment2/ConvolutionalNetworks.ipynb
pranav-s/Stanford_CS234_CV_2017
Convolution: Naive backward pass

Implement the backward pass for the convolution operation in the function `conv_backward_naive` in the file `cs231n/layers.py`. Again, you don't need to worry too much about computational efficiency.

When you are done, run the following to check your backward pass with a numeric gradient...
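One way to approach the backward pass is to replay the forward pass's loop over output positions, accumulating gradients into a padded copy of the input. The sketch below is not the assignment's reference solution; it assumes the cache layout `(x, w, b, conv_param)` and includes a matching naive forward pass so it runs standalone (`np.tensordot` is just a compact way to write the per-position contractions):

```python
import numpy as np

def conv_forward_naive(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    out = np.zeros((N, F, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            patch = xp[:, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
            # contract over (C, HH, WW) to get an (N, F) block of outputs
            out[:, :, i, j] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3])) + b
    return out, (x, w, b, conv_param)

def conv_backward_naive(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    dxp, dw = np.zeros_like(xp), np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))  # bias touches every output position
    _, _, H_out, W_out = dout.shape
    for i in range(H_out):
        for j in range(W_out):
            patch = xp[:, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
            d = dout[:, :, i, j]  # (N, F) upstream gradient at this position
            dw += np.tensordot(d, patch, axes=([0], [0]))
            dxp[:, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += \
                np.tensordot(d, w, axes=([1], [0]))
    dx = dxp[:, :, pad:pad + H, pad:pad + W]  # strip the padding
    return dx, dw, db

out, cache = conv_forward_naive(np.random.randn(2, 3, 4, 4), np.random.randn(2, 3, 3, 3),
                                np.random.randn(2), {'stride': 1, 'pad': 1})
print(out.shape)  # (2, 2, 4, 4)
```

Note the symmetry: each output position reads one patch on the way forward, so on the way backward it writes its gradient into exactly that patch (for `dx`) and into the filter bank (for `dw`).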
np.random.seed(231) x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 2, 'pad': 3} out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) dx_num = eval_numerical_gradient_array(...
Max-Pooling: Naive forward

Implement the forward pass for the max-pooling operation in the function `max_pool_forward_naive` in the file `cs231n/layers.py`. Again, don't worry too much about computational efficiency.

Check your implementation by running the following:
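The forward pass is a pair of loops over output positions, taking the max over each window. A minimal sketch (assuming the cache is simply `(x, pool_param)`):

```python
import numpy as np

def max_pool_forward_naive(x, pool_param):
    # x: (N, C, H, W)
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    s = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // s
    W_out = 1 + (W - pw) // s
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*s:i*s+ph, j*s:j*s+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over the spatial window
    return out, (x, pool_param)

x = np.arange(16.0).reshape(1, 1, 4, 4)
out, _ = max_pool_forward_naive(x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
print(out[0, 0])  # [[ 5.  7.] [13. 15.]]
```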
x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], ...
Testing max_pool_forward_naive function: difference: 4.1666665157267834e-08
Max-Pooling: Naive backward

Implement the backward pass for the max-pooling operation in the function `max_pool_backward_naive` in the file `cs231n/layers.py`. You don't need to worry about computational efficiency.

Check your implementation with numeric gradient checking by running the following:
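Since max is locally a selection, the backward pass routes each upstream gradient to the argmax of its window and nowhere else. A sketch, assuming the forward pass cached `(x, pool_param)`:

```python
import numpy as np

def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    s = pool_param['stride']
    dx = np.zeros_like(x)
    N, C, H_out, W_out = dout.shape
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*s:i*s+ph, j*s:j*s+pw]
            # mask is True only at each window's max; gradient flows only there
            mask = window == window.max(axis=(2, 3), keepdims=True)
            dx[:, :, i*s:i*s+ph, j*s:j*s+pw] += mask * dout[:, :, i, j][:, :, None, None]
    return dx

# upstream gradient of ones routes 1.0 to each window's argmax
x = np.arange(16.0).reshape(1, 1, 4, 4)
cache = (x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
dx = max_pool_backward_naive(np.ones((1, 1, 2, 2)), cache)
print(dx[0, 0])
```

With ties inside a window this sketch splits the gradient across the tied entries, which still passes a numeric check to within the usual tolerance.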
np.random.seed(231) x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_back...
Testing max_pool_backward_naive function: dx error: 3.27562514223145e-12
Fast layers

Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file `cs231n/fast_layers.py`. The fast convolution implementation depends on a Cython extension; to compile i...
%cd drive/My\ Drive/$FOLDERNAME/cs231n/
!python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gra...
# Rel errors should be around e-9 or less
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
%load_ext autoreload
%autoreload 2

np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16...
Testing pool_forward_fast: Naive: 0.248409s fast: 0.001589s speedup: 156.347839x difference: 0.0 Testing pool_backward_fast: Naive: 0.323171s fast: 0.007276s speedup: 44.415689x dx difference: 0.0
Convolutional "sandwich" layers

Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file `cs231n/layer_utils.py` you will find sandwich layers that implement a few commonly used patterns for convolutional networks. Run the cells below to sanity ...
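The pattern behind every sandwich layer is the same: run the pieces in sequence on the way forward, keep a tuple of the pieces' caches, then unwind in reverse on the way backward. A sketch with two toy layers (a hypothetical elementwise `scale` layer plus ReLU, standing in for the assignment's conv/pool code) shows the shape of it:

```python
import numpy as np

def relu_forward(x):
    out = np.maximum(0, x)
    return out, x  # cache the input for the backward pass

def relu_backward(dout, cache):
    x = cache
    return dout * (x > 0)  # gradient passes only where the input was positive

def scale_forward(x, a):          # toy layer: multiply by a scalar parameter
    return a * x, (x, a)

def scale_backward(dout, cache):
    x, a = cache
    return a * dout, np.sum(dout * x)  # (dx, da)

def scale_relu_forward(x, a):
    # "sandwich": run the two layers in sequence and keep both caches
    s, scale_cache = scale_forward(x, a)
    out, relu_cache = relu_forward(s)
    return out, (scale_cache, relu_cache)

def scale_relu_backward(dout, cache):
    # unwind in reverse order, feeding each backward its own cache
    scale_cache, relu_cache = cache
    ds = relu_backward(dout, relu_cache)
    return scale_backward(ds, scale_cache)

out, cache = scale_relu_forward(np.array([-1.0, 2.0]), 3.0)
print(out)  # [0. 6.]
```

`conv_relu_pool_forward`/`backward` in `layer_utils.py` follow the same cache-tuple convention, just with three layers instead of two.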
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward np.random.seed(231) x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': ...
Testing conv_relu: dx error: 1.5218619980349303e-09 dw error: 2.702022646099404e-10 db error: 1.451272393591721e-10
Three-layer ConvNet

Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.

Open the file `cs231n/classifiers/cnn.py` and complete the implementation of the `ThreeLayerConvNet` class. Remember you can use the fast/sandwich layers (already imported for you) in...
model = ThreeLayerConvNet()

N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)

loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)

model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
Initial loss (no regularization): 2.3025850635890874 Initial loss (with regularization): 2.508599728507643
Gradient check

After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors ...
num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 np.random.seed(231) X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, ...
W1 max relative error: 1.103635e-05 W2 max relative error: 1.521379e-04 W3 max relative error: 1.763147e-05 b1 max relative error: 3.477652e-05 b2 max relative error: 2.516375e-03 b3 max relative error: 7.945660e-10
Overfit small data

A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
np.random.seed(231) num_train = 100 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } model = ThreeLayerConvNet(weight_scale=1e-2) solver = Solver(model, small_data, num_epochs=15, batch_size=50, ...
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')

plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Train the net

By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)

solver = Solver(model, data,
                num_epochs=1, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=20)...
Visualize Filters

You can visualize the first-layer convolutional filters from the trained network by running the following:
from cs231n.vis_utils import visualize_grid

grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Spatial Batch Normalization

We already saw that batch normalization is a very useful technique for training deep fully-connected networks. As proposed in the original paper (link in `BatchNormalization.ipynb`), batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modificat...
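The key tweak is which axes the statistics are computed over: every spatial position is treated as an independent sample, so each channel is normalized over all `N*H*W` values. A train-time-only sketch (no running averages or cache, which the assignment's version also needs; the `_sketch` name is ours):

```python
import numpy as np

def spatial_batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    N, C, H, W = x.shape
    # fold N, H, W into one "sample" axis so each channel is normalized
    # over N*H*W values
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    mu = x_flat.mean(axis=0)
    var = x_flat.var(axis=0)
    x_hat = (x_flat - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    return out.reshape(N, H, W, C).transpose(0, 3, 1, 2)

x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
print(out.mean(axis=(0, 2, 3)))  # per-channel means, all approximately 0
```

Reusing the vanilla `batchnorm_forward` on the reshaped array, as the assignment suggests, gives exactly this behavior plus the running averages for free.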
np.random.seed(231)

# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10

print('Before spatial batch normalization:')
print('  Shape: ', x.shape)
print('  Means: ', x.mean...
After spatial batch normalization (test-time): means: [-0.08446363 0.08091916 0.06055194 0.04564399] stds: [1.0241906 1.09568294 1.0903571 1.0684257 ]
Spatial batch normalization: backward

In the file `cs231n/layers.py`, implement the backward pass for spatial batch normalization in the function `spatial_batchnorm_backward`. Run the following to check your implementation using a numeric gradient check:
np.random.seed(231) N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gam...
[[[ 4.82500057e-08 -6.18616346e-07 3.49900510e-08 -1.01656172e-06 2.21274578e-04] [-8.18296259e-07 1.06936237e-07 2.56172679e-08 6.08105008e-08 -7.21749724e-06] [-1.88721588e-05 3.50657245e-07 -3.24214426e-06 -9.50347872e-08 3.30202440e-07] [-4.69853077e-06 -3.05069739e-07 9.78361879e-08 7.28839...
Group Normalization

In the previous notebook, we mentioned that Layer Normalization is an alternative normalization technique that mitigates the batch size limitations of Batch Normalization. However, as the authors of [2] observed, Layer Normalization does not perform as well as Batch Normalization when used with Conv...
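Group normalization sits between layer norm (G=1) and instance norm (G=C): channels are split into G groups and each of the `N*G` groups is normalized over its own `C//G * H * W` values, independent of the batch. A forward-only sketch (the `_sketch` name is ours; the assignment's version also returns a cache):

```python
import numpy as np

def spatial_groupnorm_forward_sketch(x, gamma, beta, G, eps=1e-5):
    N, C, H, W = x.shape
    # each of the N*G groups is normalized over its own C//G * H * W values
    x_g = x.reshape(N * G, -1)
    mu = x_g.mean(axis=1, keepdims=True)
    var = x_g.var(axis=1, keepdims=True)
    x_hat = ((x_g - mu) / np.sqrt(var + eps)).reshape(N, C, H, W)
    # gamma and beta are per channel, shape (C,), broadcast over N, H, W
    return gamma.reshape(1, C, 1, 1) * x_hat + beta.reshape(1, C, 1, 1)

x = 4 * np.random.randn(2, 6, 4, 5) + 10
out = spatial_groupnorm_forward_sketch(x, np.ones(6), np.zeros(6), G=2)
print(out.shape)  # (2, 6, 4, 5)
```

Because no statistic crosses the batch axis, the result is identical at train and test time, which is the property that makes group norm attractive for small batches.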
np.random.seed(231)

# Check the training-time forward pass by checking means and variances
# of features both before and after spatial group normalization
N, C, H, W = 2, 6, 4, 5
G = 2
x = 4 * np.random.randn(N, C, H, W) + 10
x_g = x.reshape((N*G, -1))

print('Before spatial group normalization:')
print('  Shape: ', x.s...
[[ 0 1] [ 2 3] [ 4 5] [ 6 7] [ 8 9] [10 11] [12 13] [14 15] [16 17] [18 19] [20 21] [22 23] [24 25] [26 27] [28 29] [30 31] [32 33] [34 35]] [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23] [24 25 26 27 28 29] [30 31 32 33 34 35]] [[ 0 1 2 3 4 5] [ 6 7 ...
Spatial group normalization: backward

In the file `cs231n/layers.py`, implement the backward pass for spatial group normalization in the function `spatial_groupnorm_backward`. Run the following to check your implementation using a numeric gradient check:
np.random.seed(231) N, C, H, W = 2, 6, 4, 5 G = 2 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) gn_param = {} fx = lambda x: spatial_groupnorm_quick_forward(x, gamma, beta, G, gn_param)[0] fg = lambda a: spatial_groupnorm_quick_forward(...
dx error: 7.413109542981906e-08 dgamma error: 9.468085754206675e-12 dbeta error: 3.35440867127888e-12
Pre-processing and analysis for one-source with distance 25

Load or create R scripts
get.data <- dget("get_data.r") # script to read data files
get.pars <- dget("get_pars.r") # script to extract relevant parameters from raw data
get.mv.bound <- dget("get_mvbound.r") # script to look at movement of boundary across learning
plot.cirib <- dget("plot_cirib.r") # script to plot confidence intervals as ribbon
p...
MIT
Experiment1_Main/Components/One25/one25.ipynb
ttrogers/frigo-chen-rogers
Load data
fnames <- list.files(pattern = "*.csv") # create a vector of data file names, assuming all csv files are data
nfiles <- length(fnames) # number of data files
alldat <- list(get.data(fnames[1])) # initialize list containing all data with first subject
for(i1 in c(2:nfiles)) alldat[[i1]] <- get.data(fnames[i1]) # populate li...
[1] "Processing sj: 1"
NOTE that `get.pars` will produce warnings whenever the subject data has a perfectly strict boundary; these can be safely ignored.
head(allpars) dim(allpars)
KEY

**PID**: Unique tag for each participant
**cond**: Experiment condition
**axlab**: What shape (spiky/smooth) got the "Cooked" label? For counterbalancing, not interesting
**closebound**: What was the location of the closest source boundary?
**cbside**: Factor indicating what side of the range midpoint has the...
zthresh <- 2.5
First check t1bound and t2bound to see if there are any impossible values.
plot(allpars$t1bound, allpars$t2bound)
There is an impossible t2bound value, so let's remove it.
dim(allpars)
sjex <- as.character(allpars$PID[allpars$t2bound < 0]) # Add impossible value to exclude list
sjex <- unique(sjex) # remove any accidental repeats
noo <- allpars[is.na(match(allpars$PID, sjex)),] # Copy remaining subjects to noo object
dim(noo)
Write "no impossible" (nimp) file for later agglomeration in mega-data
write.csv(noo, "summary/one25_grids_nimp.csv", row.names = F, quote=F)
Check to make sure "aligned" shift computation worked (should be an X pattern)
plot(noo$alshift, noo$bshift)
Check initial boundary for outliers
plot(zscore(noo$t1bound)) abline(h=c(-zthresh,0,zthresh))
Add any outliers to the exclusion list and recompute no-outlier data structure
sjex <- c(sjex, as.character(allpars$PID[abs(zscore(allpars$t1bound)) > zthresh]))
sjex <- unique(sjex) # remove accidental repeats
noo <- noo[is.na(match(noo$PID, sjex)),]
dim(noo)
Now compute the Z-score of the aligned shift for all subjects and look for outliers
noo$Zalshift <- zscore(noo$alshift) # Compute Z scores for the aligned shift
plot(noo$Zalshift); abline(h = c(-zthresh, 0, zthresh)) # plot Z scores
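The screening logic here — z-score a measure, flag anything beyond a fixed threshold, rebuild the data structure without the flagged subjects — can be sketched in Python with made-up numbers (the `alshift` values below are hypothetical, not from this experiment):

```python
import numpy as np

def zscore(v):
    # R-style z-score: sd() uses the n-1 denominator
    return (v - v.mean()) / v.std(ddof=1)

zthresh = 2.5
# hypothetical aligned-shift values: 19 well-behaved subjects and one wild one
alshift = np.array([1.2, -0.8, 0.3, 2.1, -1.5, 0.9, -0.2, 1.8, -2.3, 0.4,
                    -0.6, 1.1, 0.0, -1.9, 2.4, -0.7, 1.5, -1.2, 0.8, 50.0])
outliers = np.abs(zscore(alshift)) > zthresh
print(alshift[outliers])   # only the wild value is flagged
kept = alshift[~outliers]  # analogue of rebuilding the noo object
```

One caveat of z-score screening: with very few subjects a single outlier inflates the standard deviation so much that it can never exceed the threshold, so the sample needs to be reasonably large for `zthresh = 2.5` to bite.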
Again, add any outliers to the exclusion list and remove them from `noo`
sjex <- c(sjex, as.character(noo$PID[abs(noo$Zalshift) > zthresh]))
sjex <- unique(sjex) # remove accidental repeats
noo <- noo[is.na(match(noo$PID, sjex)),]
dim(noo)
Data analysis

Does the initial (t1) boundary differ between the two groups? It shouldn't, since they have exactly the same experience up to this point.
t.test(t1bound ~ closebound, data = noo)
Reassuringly, it doesn't. So what is the location of the initial boundary on average?
t.test(noo$t1bound) #NB t.test of a single vector is a good way to compute mean and CIs
The mean boundary is shifted a bit positive relative to the midpoint between labeled examples. Next, looking across all subjects, does the aligned boundary shift differ reliably from zero? Also, what are the confidence limits on the mean shift?
t.test(noo$alshift)
The boundary shifts reliably toward the close source. The mean amount of shift is 18, and the confidence interval spans 9 to 27. Next, where does the test 2 boundary lie for each group, and does this differ depending on where the source was?
t.test(t2bound ~ closebound, data = noo)
When the source was at 125, the boundary ends up at 134; when the source is at 175, the boundary ends up at 166. Is the boundary moving all the way to the source?
t.test(noo$t2bound[noo$closebound==125]) # compute confidence intervals for source-at-125 subgroup
t.test(noo$t2bound[noo$closebound==175]) # compute confidence intervals for source-at-175 subgroup
In both cases boundaries move toward the source. When the initial boundary is closer to the source (source at 175), the final boundary ends up at the source. When it is farther away (source at 125), the final boundary ends up a little short of the source. Another way of looking at the movement is to compute, for each su...
# Predict the boundary shift from the distance between initial bound and source
m <- lm(bshift ~ t1dist, data = noo) # fit linear model predicting shift from distance
summary(m) # look at model parameters
Distance predicts shift significantly. The intercept is not reliably different from zero, so with zero distance the boundary does not shift. The slope of 0.776 suggests that the boundary shifts about 78 percent of the way toward the close source. Let's visualize:
plot(noo$t1dist, noo$bshift) # plot distance of source against boundary shift
abline(lm(bshift~t1dist, data = noo)$coefficients) # add least squares line
abline(0, 1, col = 2) # add line with slope 1 and intercept 0
The black line shows the least-squares linear fit; the red line shows the expected slope if the learner moved all the way to the source. The true slope is quite a bit shallower. If we compute confidence limits on the slope we get:
confint(m, 't1dist', level = 0.95)
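Under the hood, `confint` is just slope ± (t quantile) × (standard error of the slope), and the same quantities can be reproduced from scratch. A Python sketch with synthetic data (hypothetical numbers, true slope 0.8; the normal 1.96 stands in for the exact t quantile):

```python
import numpy as np

rng = np.random.default_rng(0)
t1dist = rng.uniform(0, 50, 40)                # hypothetical distances from bound to source
bshift = 0.8 * t1dist + rng.normal(0, 5, 40)  # hypothetical shifts, true slope 0.8

# ordinary least squares: bshift = b0 + b1 * t1dist
X = np.column_stack([np.ones_like(t1dist), t1dist])
beta, *_ = np.linalg.lstsq(X, bshift, rcond=None)

resid = bshift - X @ beta
sigma2 = resid @ resid / (len(bshift) - 2)    # residual variance (n - 2 parameters)
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
ci = (beta[1] - 1.96 * se_slope, beta[1] + 1.96 * se_slope)  # normal approximation
print(beta[1], ci)
```

With the experiment's n this normal approximation and R's t-based `confint` agree to two decimal places or so.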
So the confidence limit extends very close to 1.

Export parameter data
write.csv(noo, paste("summary/onesrc25_noo_z", zthresh*10, ".csv", sep=""), row.names=F, quote=F)
Further analyses

Movement of boundary over the course of learning
nsj <- length(alldat) # Number of subjects is length of alldat object
mvbnd <- matrix(0, nsj, 301) # Initialize matrix of 0s to hold boundary-movement data, with 301 windows
for(i1 in c(1:nsj)) mvbnd[i1,] <- get.mv.bound(alldat, sj=i1) # Compute move data for each sj and store in matrix rows
Warning message: "glm.fit: fitted probabilities numerically 0 or 1 occurred" (repeated many times)...
Again, ignore warnings here
tmp <- cbind(allpars[,1:6], mvbnd) # Add subject and condition data columns
mvb.noo <- tmp[is.na(match(tmp$PID, sjex)),] # Remove excluded subjects
head(mvb.noo)
tmp <- mvb.noo[,7:307] # Copy movement data into temporary object
tmp[abs(tmp) > 250] <- NA # Remove boundary estimates that are extreme (outside 50-250 range)
t...
About

This notebook covers:
* basic usage of Keras
* feature crosses
* building a dataset
* saving and loading models (2 ways)
* adding checkpoints
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import math
from tensorflow.keras.utils import plot_model
import os

# fea_x = [i for i in np.arange(0, math.pi * 2.0, 0.01)]
# print(fea_x[:50])
x0 = np.random.randint(0, math.pi * 6.0 * 100.0, 5000) / 100.0
x...
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
Save Model
!pip install h5py pyyaml model_path = os.getcwd() + "/mymodel.h5" model_path
The default optimizer is used here. A default optimizer cannot be saved with the model directly; after loading the model you need to create an optimizer again and compile. Optimizers built into keras can be saved and loaded directly, e.g. `tf.keras.optimizers.Adam()`.
model.save(model_path)
new_model = tf.keras.models.load_model(model_path)
opt = tf.train.AdagradOptimizer(learning_rate=0.1)
# opt = tf.train.RMSPropOptimizer(0.1)
new_model.compile(optimizer=opt,
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
new_model.summary(...
[array([[ 0.06555444, -0.5212194 ], [-0.5949386 , -0.09970472], [-0.37951565, -0.21464492], [-0.13808419, 0.24510457], [ 0.36669165, -0.2663816 ], [ 0.45086718, -0.26410016], [-0.04899281, -0.6156222 ]], dtype=float32), array([-0.4162824, 0.4162828], dtype=float32)]
Save as a pb file
pb_model_path = os.getcwd() + '/pbmdoel'
pb_model_path
tf.contrib.saved_model.save_keras_model(new_model, pb_model_path)
!ls {pb_model_path}
assets saved_model.pb variables
Load the pb file
model2 = tf.contrib.saved_model.load_keras_model(pb_model_path)
model2.summary()

# must compile before use
model2.compile(optimizer=opt,
               loss=tf.keras.losses.sparse_categorical_crossentropy,
               metrics=['accuracy'])
loss, acc = model2.evaluate(dataset, steps=200)
print("Restored model, accuracy: {:5.2f}%"...
200/200 [==============================] - 0s 2ms/step - loss: 0.2203 - acc: 0.9250 Restored model, accuracy: 92.50%
[Strings](https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str)
my_string = 'Python is my favorite programming language!'
my_string
type(my_string)
len(my_string)
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
Respecting [PEP8](https://www.python.org/dev/peps/pep-0008/#maximum-line-length) with long strings
long_story = ('Lorem ipsum dolor sit amet, consectetur adipiscing elit.'
              'Pellentesque eget tincidunt felis. Ut ac vestibulum est.'
              'In sed ipsum sit amet sapien scelerisque bibendum. Sed '
              'sagittis purus eu diam fermentum pellentesque.')
long_story
`str.replace()`

If you don't know how it works, you can always check the `help`:
help(str.replace)
Help on method_descriptor: replace(self, old, new, count=-1, /) Return a copy with all occurrences of substring old replaced by new. count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences. If the optional argument count is given, onl...
This will not modify `my_string` because replace is not done in-place.
my_string.replace('a', '?')
print(my_string)
Python is my favorite programming language!
You have to store the return value of `replace` instead.
my_modified_string = my_string.replace('is', 'will be')
print(my_modified_string)
Python will be my favorite programming language!
`str.format()`
secret = '{} is cool'.format('Python')
print(secret)

print('My name is {} {}, you can call me {}.'.format('John', 'Doe', 'John'))
# is the same as:
print('My name is {first} {family}, you can call me {first}.'.format(first='John', family='Doe'))
My name is John Doe, you can call me John. My name is John Doe, you can call me John.
`str.join()`
help(str.join)

pandas = 'pandas'
numpy = 'numpy'
requests = 'requests'
cool_python_libs = ', '.join([pandas, numpy, requests])
print('Some cool python libraries: {}'.format(cool_python_libs))
Some cool python libraries: pandas, numpy, requests
Alternatives (not as [Pythonic](http://docs.python-guide.org/en/latest/writing/style/#idioms) and [slower](https://waymoot.org/home/python_string/)):
cool_python_libs = pandas + ', ' + numpy + ', ' + requests
print('Some cool python libraries: {}'.format(cool_python_libs))

cool_python_libs = pandas
cool_python_libs += ', ' + numpy
cool_python_libs += ', ' + requests
print('Some cool python libraries: {}'.format(cool_python_libs))
Some cool python libraries: pandas, numpy, requests Some cool python libraries: pandas, numpy, requests
`str.upper(), str.lower(), str.title()`
mixed_case = 'PyTHoN hackER'
mixed_case.upper()
mixed_case.lower()
mixed_case.title()
`str.strip()`
help(str.strip)

ugly_formatted = ' \n \t Some story to tell '
stripped = ugly_formatted.strip()
print('ugly: {}'.format(ugly_formatted))
print('stripped: {}'.format(ugly_formatted.strip()))
ugly: Some story to tell stripped: Some story to tell
`str.split()`
help(str.split)

sentence = 'three different words'
words = sentence.split()
print(words)
type(words)

secret_binary_data = '01001,101101,11100000'
binaries = secret_binary_data.split(',')
print(binaries)
['01001', '101101', '11100000']
Calling multiple methods in a row
ugly_mixed_case = ' ThIS LooKs BAd '
pretty = ugly_mixed_case.strip().lower().replace('bad', 'good')
print(pretty)
this looks good
Note that execution order is from left to right. Thus, this won't work:
pretty = ugly_mixed_case.replace('bad', 'good').strip().lower()
print(pretty)
this looks bad
[Escape characters](http://python-reference.readthedocs.io/en/latest/docs/str/escapes.html#escape-characters)
two_lines = 'First line\nSecond line'
print(two_lines)

indented = '\tThis will be indented'
print(indented)
This will be indented
ADVANCED TEXT MINING

- This material was created for research and teaching purposes based on text mining.
- If you would like to use this material for lectures, please contact the email address below.
- Unauthorized distribution of this material is prohibited.
- For inquiries about lectures, copyright, publication, patents, or co-authorship, please contact us.
- **Contact : ADMIN(admin@teanaps.com)**

---

WEEK 02-2. Understanding Python data structures

- Covers Python data structures for handling text data.

---

1. Understanding the LIST data structure
-...
# 1) Create a list.
new_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(new_list)

# 2) Append a new element after the last element of the list.
new_list.append(100)
print(new_list)

# 3) Concatenate two lists with the + operator.
new_list = new_list + [101, 102]
print(new_list)

# 4-1) Remove the first matching occurrence of a given element from the list.
new_list.remove(3)
print(new_list)

# 4-2) The list...
False
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
1.2. LIST indexing: access specific elements of a list.

---
new_list = [0, 1, 2, 3, 4, 5, 6, 7, "hjvjg", 9]

# 1) Access the N-th element of the list.
print("0th element :", new_list[0])
print("1st element :", new_list[1])
print("4th element :", new_list[4])

# 2) Access the N-th through (M-1)-th elements as a list.
print("Elements 0-3 :", new_list[0:3])
print("Elements 4-9 :", new_list[4:9])
print("Elements 2-3 :", new_list[2...
All elements before the |-1|-1-th from the end : [0, 1, 2, 3, 4, 5, 6, 7, 8]
All elements from the |-1|-1-th from the end : [9]
All elements before the |-2|-1-th from the end : [0, 1, 2, 3, 4, 5, 6, 7]
All elements from the |-2|-1-th from the end : [8, 9]
1.3. Multidimensional LIST: list elements can hold various values or data structures.

---
# 1-1) List elements can mix values and data structures of different types.
new_list = ["텍스트", 0, 1.9, [1, 2, 3, 4], {"서울": 1, "부산": 2, "대구": 3}]
print(new_list)

# 1-2) Check the type of each element with the type() function.
print("Type of new_list[0] :", type(new_list[0]))
print("Type of new_list[1] :", type(new_list[1]))
print("Type of new_list[2] :...
new_list : [[4, 5, 1], [0, 1, 2], [2, 3, 7], [9, 6, 8]] new_list[0] : [4, 5, 1] new_list[1] : [0, 1, 2] new_list[2] : [2, 3, 7] new_list[3] : [9, 6, 8]
2. Understanding the DICTIONARY data structure

---

2.1. DICTIONARY: declares a structure that can store values or other data structures.

---
# 1) Create a dictionary.
new_dict = {"마케팅팀": 98, "개발팀": 78, "데이터분석팀": 83, "운영팀": 33}
print(new_dict)

# 2) Each element is a KEY:VALUE pair; look up the VALUE for a given KEY.
print(new_dict["마케팅팀"])

# 3-1) Add a new KEY:VALUE pair to the dictionary.
new_dict["미화팀"] = 55
print(new_dict)

# 3-2) Each KEY in a dictionary must be unique, so when a duplicate KEY is added...
{'마케팅팀': 98, '개발팀': '재평가', '데이터분석팀': {'등급': 'A'}, '운영팀': ['A'], '미화팀': 55, 0: '오타'}
2.2. DICTIONARY indexing: retrieve the dictionary's elements in list form.

---
# 1-1) Use various functions to access the dictionary in indexable form.
new_dict = {"마케팅팀": 98, "개발팀": 78, "데이터분석팀": 83, "운영팀": 33}
print("KEY List of new_dict :", new_dict.keys())
print("VALUE List of new_dict :", new_dict.values())
print("(KEY, VALUE) List of new_dict :", new_dict.items())
for i, j in new_dict.items():
    print(i, j)

# 1-2) Load...
KEY List of new_dict : ['마케팅팀', '개발팀', '데이터분석팀', '운영팀'] VALUE List of new_dict : [98, 78, 83, 33] (KEY, VALUE) List of new_dict : [('마케팅팀', 98), ('개발팀', 78), ('데이터분석팀', 83), ('운영팀', 33)]
Euler Problem 148
=================

We can easily verify that none of the entries in the first seven rows of Pascal's triangle are divisible by 7. However, if we check the first one hundred rows, we will find that only 2361 of the 5050 entries are not divisible by 7. Find the number of entries which are not divisible by 7 in...
def f(n):
    if n == 0:
        return 1
    return (1 + (n % 7)) * f(n // 7)

def F(n):
    if n == 0:
        return 0
    r = n % 7
    return 28 * F(n // 7) + r * (r + 1) // 2 * f(n // 7)

print(F(10**9))
2129970655314432
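The digit-recursive solution can be cross-checked against a literal count over the first one hundred rows, which should reproduce the 2361 figure quoted above (`math.comb` requires Python 3.8+):

```python
from math import comb

def count_not_div7(rows):
    # brute-force count of binomial coefficients not divisible by 7
    return sum(1 for n in range(rows) for k in range(n + 1) if comb(n, k) % 7 != 0)

# digit-based formulas from the solution above: by Lucas' theorem, row n has
# prod(d_i + 1) entries not divisible by 7, where d_i are n's base-7 digits;
# f computes that product and F is its prefix sum over rows 0..n-1
def f(n):
    if n == 0:
        return 1
    return (1 + (n % 7)) * f(n // 7)

def F(n):
    if n == 0:
        return 0
    r = n % 7
    return 28 * F(n // 7) + r * (r + 1) // 2 * f(n // 7)

print(count_not_div7(100), F(100))  # both 2361
```

Agreement on small inputs is strong evidence for the closed form, since the recursion is exact once the per-row count is.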
MIT
Euler 148 - Exploring Pascal's triangle.ipynb
Radcliffe/project-euler
Gradient CheckingWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want t...
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
1) How does gradient checking work?Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.Because forward propagation is relatively easy to implement, you're confident you got that...
# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value...
J = 8
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
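The two-sided difference formula described above can be checked numerically on J(θ) = θx, whose true derivative with respect to θ is x. A minimal sketch with made-up values for x and θ:

```python
def J(theta, x):
    return theta * x          # the toy cost from Figure 1

x, theta, eps = 2.0, 4.0, 1e-7

# Two-sided (centered) difference approximation of dJ/dtheta.
gradapprox = (J(theta + eps, x) - J(theta - eps, x)) / (2 * eps)
print(abs(gradapprox - x))    # true derivative is x; the error should be tiny
```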
**Expected Output**: **J** = 8. **Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J ...
# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with res...
dtheta = 2
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
**Expected Output**: **dtheta** = 2. **Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking. **Instructions**: - First compute "gradapprox" using the formula above (1) and a sma...
# GRADED FUNCTION: gradient_check

def gradient_check(x, theta, epsilon=1e-7):
    """
    Implement the backward propagation presented in Figure 1.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute approximated gradien...
The gradient is correct! difference = 2.91933588329e-10
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
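The difference metric used here is the relative one, ||grad − gradapprox||₂ / (||grad||₂ + ||gradapprox||₂). A sketch with hypothetical gradient values standing in for the analytic and numerical estimates:

```python
import numpy as np

# Hypothetical values: analytic gradient vs. numerical estimate.
grad = np.array([2.0])
gradapprox = np.array([2.0 + 3e-10])

numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
difference = numerator / denominator
print(difference < 1e-7)      # below the threshold -> gradient looks correct
```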
**Expected Output**: The gradient is correct! **difference** = 2.9193358103083e-10. Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cos...
def forward_propagation_n(X, Y, parameters):
    """
    Implements the forward propagation (and computes the cost) presented in Figure 3.

    Arguments:
    X -- training set for m examples
    Y -- labels for m examples
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3",...
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
Now, run backward propagation.
def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation presented in figure 2.

    Arguments:
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- A dictionary with the grad...
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**.As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagati...
# GRADED FUNCTION: gradient_check_n

def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", ...
Your backward propagation works perfectly fine! difference = 1.69916980932e-07
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
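The one-parameter-at-a-time loop described above generalizes directly to a parameter vector. A compact, self-contained sketch on a toy quadratic cost J(θ) = ½||θ||², whose exact gradient is θ itself (the cost and values are made up for illustration, not the assignment's network):

```python
import numpy as np

def cost(theta):
    return 0.5 * np.sum(theta ** 2)   # gradient of this cost is exactly theta

def grad_check(theta, grad, eps=1e-7):
    approx = np.zeros_like(theta)
    for i in range(theta.size):       # perturb one parameter at a time
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        approx[i] = (cost(plus) - cost(minus)) / (2 * eps)
    num = np.linalg.norm(grad - approx)
    den = np.linalg.norm(grad) + np.linalg.norm(approx)
    return num / den

theta = np.array([1.0, -2.0, 3.0])
diff = grad_check(theta, theta)       # analytic gradient == theta here
print(diff < 1e-7)
```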
Code along 4: Scale, Standardize, or Normalize with scikit-learn. When should you use MinMaxScaler, RobustScaler, StandardScaler, and Normalizer? Attribution: Jeff Hale. Why is it often necessary to perform so-called variable transformation/feature scaling, that is, to standardize, normalize, or otherwise ...
# Import the libraries we need
import numpy as np
import pandas as pd
from sklearn import preprocessing
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')

# This code sets up how matplotlib should display graphs and plots
%matplotlib inline
ma...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Original Distributions. Data as it may look in its original form, i.e., as collected, before any pre-processing has been performed. To provide data for the exercises, the code below creates a number of randomized distributions.
# create columns with different distributions
df = pd.DataFrame({
    'beta': np.random.beta(5, 1, 1000) * 60,          # beta
    'exponential': np.random.exponential(10, 1000),   # exponential
    'normal_p': np.random.normal(10, 2, 1000),        # normal platykurtic
    'normal_l': np.random.normal(10, 10, 1000),       # normal ...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Task 1: a. Plot the curves created in the cell above in one and the same coordinate system using the [seaborn library](https://seaborn.pydata.org/api.html#distribution-api).>Make sure it is clear which curve represents which distribution.>>The code for the coordinate system itself is given; continue cod...
# plot the original distributions
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')

# The five curves
sns.kdeplot(df['beta'], ax=ax1)
sns.kdeplot(df['exponential'], ax=ax1)
sns.kdeplot(df['normal_p'], ax=ax1)
sns.kdeplot(df['normal_l'], ax=ax1)
sns.kdeplot(df['bimodal'], ax=ax1...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
b. Show the first five rows of the dataframe that contains all the distributions.
df.head()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
c. For all five attributes, compute: * mean * median. What handy method can be used to get a number of statistical measures for a dataframe? Retrieve this information with that method.
df.describe()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
d. In pandas you can plot your dataframe in a few different ways. Make a plot to find out what the scales of the different attributes look like; are all five on roughly the same scale?
df.plot()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
* All values lie within similar intervals. e. What happens if the following column of randomized values is added?
new_column = np.random.normal(1000000, 10000, (1000,1))
df['new_column'] = new_column
col_names.append('new_column')
df['new_column'].plot(kind='kde')

# plot our original values together with the new one
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')
sns.kdeplot(df['be...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
How did it go? Let's test some different ways to scale dataframes. MinMaxScaler: subtracts the minimum of each column from every value and then divides by the column's range (max minus min), so each column ends up in the interval [0, 1].
mm_scaler = preprocessing.MinMaxScaler()
df_mm = mm_scaler.fit_transform(df)
df_mm = pd.DataFrame(df_mm, columns=col_names)

fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After MinMaxScaler')
sns.kdeplot(df_mm['beta'], ax=ax1)
sns.kdeplot(df_mm['exponential'], ax=ax1)
sns.kdeplot(df_mm['normal_p'...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
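For intuition, what MinMaxScaler computes can be reproduced by hand with NumPy, (x − min) / (max − min) per column; the column values below are made up for illustration:

```python
import numpy as np

col = np.array([10.0, 20.0, 35.0, 50.0])
# Same formula MinMaxScaler applies per column: map onto [0, 1].
scaled = (col - col.min()) / (col.max() - col.min())
print(scaled.min(), scaled.max())   # endpoints land exactly on 0 and 1
```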
What has happened to the values?
df_mm['beta'].min()
df_mm['beta'].max()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Let's compare with the min and max of each column before we scaled our dataframe.
mins = [df[col].min() for col in df.columns]
maxs = [df[col].max() for col in df.columns]
print(mins)
print(maxs)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Let's check the minimums and maximums for each column after MinMaxScaler.
mins = [df_mm[col].min() for col in df_mm.columns]
maxs = [df_mm[col].max() for col in df_mm.columns]
print(mins)
print(maxs)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? RobustScaler subtracts the column median and divides by the interquartile range (the difference between the 75th and 25th percentiles).
r_scaler = preprocessing.RobustScaler()
df_r = r_scaler.fit_transform(df)
df_r = pd.DataFrame(df_r, columns=col_names)

fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After RobustScaler')
sns.kdeplot(df_r['beta'], ax=ax1)
sns.kdeplot(df_r['exponential'], ax=ax1)
sns.kdeplot(df_r['normal_p'], ax=ax...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
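The same transform can be reproduced by hand, (x − median) / IQR; the toy column below is made up, with a deliberate outlier to show why the median/IQR combination is robust:

```python
import numpy as np

col = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # note the outlier
q1, q3 = np.percentile(col, [25, 75])
scaled = (col - np.median(col)) / (q3 - q1)
print(scaled)   # the median maps to 0; the outlier stays far out
```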
We check min and max again afterwards (NB: compare with the original at the top, before we applied the different scaling methods).
mins = [df_r[col].min() for col in df_r.columns]
maxs = [df_r[col].max() for col in df_r.columns]
print(mins)
print(maxs)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? StandardScaler rescales each column to have mean 0 and standard deviation 1.
s_scaler = preprocessing.StandardScaler()
df_s = s_scaler.fit_transform(df)
df_s = pd.DataFrame(df_s, columns=col_names)

fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After StandardScaler')
sns.kdeplot(df_s['beta'], ax=ax1)
sns.kdeplot(df_s['exponential'], ax=ax1)
sns.kdeplot(df_s['normal_p'], a...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
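By hand this is (x − mean) / std; note that StandardScaler uses the population standard deviation (ddof=0), which is also NumPy's default. Toy values for illustration:

```python
import numpy as np

col = np.array([2.0, 4.0, 6.0, 8.0])
scaled = (col - col.mean()) / col.std()   # np.std defaults to ddof=0, like StandardScaler
print(scaled.mean(), scaled.std())       # mean ~0, std ~1 after scaling
```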
We check min and max after scaling once again.
mins = [df_s[col].min() for col in df_s.columns]
maxs = [df_s[col].max() for col in df_s.columns]
print(mins)
print(maxs)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened, compared with the two before? Normalizer transforms rows instead of columns by (by default) dividing each row by its Euclidean norm, the square root of the sum of the squared values. This is known as l2 normalization.
n_scaler = preprocessing.Normalizer()
df_n = n_scaler.fit_transform(df)
df_n = pd.DataFrame(df_n, columns=col_names)

fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After Normalizer')
sns.kdeplot(df_n['beta'], ax=ax1)
sns.kdeplot(df_n['exponential'], ax=ax1)
sns.kdeplot(df_n['normal_p'], ax=ax1)
s...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
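Row-wise l2 normalization by hand, on a made-up 2-element row where the norm comes out to a round number:

```python
import numpy as np

row = np.array([3.0, 4.0])
normalized = row / np.linalg.norm(row)    # norm = sqrt(3**2 + 4**2) = 5
print(normalized)                          # each row ends up with unit length
```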
Min and max after scaling.
mins = [df_n[col].min() for col in df_n.columns]
maxs = [df_n[col].max() for col in df_n.columns]
print(mins)
print(maxs)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? Now let's look at all the scaling methods side by side; we skip the Normalizer, though, since rescaling rows is very uncommon. Combined plot.
# The figure itself
fig, (ax0, ax1, ax2, ax3) = plt.subplots(ncols=4, figsize=(20, 8))
ax0.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax0)
sns.kdeplot(df['exponential'], ax=ax0)
sns.kdeplot(df['normal_p'], ax=ax0)
sns.kdeplot(df['normal_l'], ax=ax0)
sns.kdeplot(df['bimodal'], ax=ax0)
sns.kdeplot(df['...
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Tutorial 2. Solving a 1D diffusion equation
# Document Author: Dr. Vishal Sharma
# Author email: sharma_vishal14@hotmail.com
# License: MIT
# This tutorial is applicable for NAnPack version 1.0.0-alpha4
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
I. Background. The objective of this tutorial is to present the step-by-step solution of a 1D diffusion equation using NAnPack such that users can follow the instructions to learn using this package. The numerical solution is obtained using the Forward Time Central Spacing (FTCS) method. The detailed description of the...
import matplotlib.pyplot as plt
from nanpack.benchmark import ParallelPlateFlow
import nanpack.preprocess as pre
from nanpack.grid import RectangularGrid
from nanpack.parabolicsolvers import FTCS
import nanpack.postprocess as post
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
As the first step in the simulation, we have to tell our script to read the inputs and assign them to the variables/objects that we will use throughout the code. For this purpose, there is a class `RunConfig` in the `nanpack.preprocess` module. We will call this class and assign an object (instance) to it so that we can...
FileName = "path/to/project/input/config.ini"  # specify the correct file path
cfg = pre.RunConfig(FileName)  # cfg is an instance of the RunConfig class, used to access class variables; you may choose any name in place of cfg
*******************************************************
*******************************************************
Starting configuration.
Searching for simulation configuration file in path:
"D:/MyProjects/projectroot/nanpack/input/config.ini"
SUCCESS: Configuration file parsing.
Checking whether all sections are includ...
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
You will obtain several configuration messages on your output screen so that you can verify that your inputs are correct and that the configuration has completed successfully. The next step is the assignment of the initial conditions and the boundary conditions. For assigning boundary conditions, I have created a function `BC()...
# Assign initial conditions
cfg.U[0] = 40.0
cfg.U[1:] = 0.0

# Assign boundary conditions
U = BC(cfg.U)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Next, we will calculate the locations of all grid points within the domain using the function `RectangularGrid()` and save the values into X. We also need to calculate the diffusion number in the X direction. In nanpack, for 1D applications the program treats the diffusion number as equal to the CFL value that we entered in the configuration ...
X, _ = RectangularGrid(cfg.dX, cfg.iMax)
diffX, _ = pre.DiffusionNumbers(cfg.Dimension, cfg.diff, cfg.dT, cfg.dX)
Calculating diffusion numbers: Completed.
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Next, we will initialize some local variables before starting the time stepping:
Error = 1.0  # variable to keep track of the error
n = 0        # variable to advance in time
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Start the time loop using a while loop, such that if one of the conditions returns False, the time stepping stops. For an explanation of each line, see the comments. Please note the indentation of the code within the while loop; take extra care with it, as Python is very particular about indentation.
while n <= cfg.nMax and Error > cfg.ConvCrit:  # start loop
    Error = 0.0            # reset error to 0.0 at the beginning of each step
    n += 1                 # advance the value of n at each step
    Uold = U.copy()        # store solution at time level n
    U = FTCS(Uold, diffX)  # solve for U using the FTCS method at time level n+1
    Error = pos...
ITER   ERROR
----   -----
10     4.92187500
20     3.52394104
30     2.88928896
40     2.50741375
50     2.24550338
60     2.05156084
70     1.90048503
80     1.77844060
90     1.67704721
100    1.59085792
110    1.51614304
120    ...
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
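Independently of nanpack, the FTCS update performed inside the loop above is just the explicit scheme U_i^{n+1} = U_i^n + d (U_{i+1}^n − 2 U_i^n + U_{i-1}^n). A self-contained NumPy sketch (the grid size, diffusion number, and step count are made-up values, not the tutorial's configuration; FTCS is stable while d ≤ 0.5):

```python
import numpy as np

d = 0.4                      # diffusion number; FTCS is stable for d <= 0.5
U = np.zeros(21)
U[0] = 40.0                  # fixed boundary value, as in the tutorial

for _ in range(500):
    Un = U.copy()            # solution at time level n
    # Explicit FTCS update on the interior points.
    U[1:-1] = Un[1:-1] + d * (Un[2:] - 2.0 * Un[1:-1] + Un[:-2])

print(U[:4])                 # profile decays smoothly away from the wall
```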
In the above convergence monitor, it is worth noting that the solution error gradually moves towards zero, which is what we need to confirm stability of the solution. If the solution becomes unstable, the errors will rise, probably up to the point where your code crashes. As you know, the solution obtained is ...
# Write output to file
post.WriteSolutionToFile(U, n, cfg.nWrite, cfg.nMax, cfg.OutFileName, cfg.dX)

# Write convergence history log to a file
post.WriteConvHistToFile(cfg, n, Error)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Verify that the files are saved in the target directory. Now let us obtain the analytical solution of this flow, which will help us validate our code.
# Obtain analytical solution
Uana = ParallelPlateFlow(40.0, X, cfg.diff, cfg.totTime, 20)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Next, we will validate our results by plotting them using the matplotlib package that we imported above. Type the following lines of code:
plt.rc("font", family="serif", size=8)  # assign fonts in the plot
fig, ax = plt.subplots(dpi=150)         # create an axis for plotting
plt.plot(U, X, ">-.b", linewidth=0.5, label="FTCS",
         markersize=5, markevery=5)   # plot data with the required labels and markers; customize the plot however you like
plt.plot(Uana, X, "o...
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS