Function for the boundary conditions.
def BC(U):
    """Return the dependent variable with the updated values at the boundaries."""
    U[0] = 40.0
    U[-1] = 0.0
    return U
_____no_output_____
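A quick usage sketch of `BC` (the NumPy import and grid values here are illustrative assumptions, not from the original notebook):

import numpy as np

# Hypothetical 1D temperature grid; BC pins the Dirichlet boundary values.
U = np.zeros(11)
U = BC(U)
print(U[0], U[-1])  # 40.0 0.0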
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Lesson goals
* Intro to Markdown, a plain text-based syntax for formatting docs
* Markdown is integrated into the Jupyter notebook

What is Markdown?
* developed in 2004 by John Gruber
  - a way of formatting text
  - a Perl utility for converting Markdown into HTML

**Plain text files** have many advantages over other forma...
from IPython import display

display.YouTubeVideo('Rc4JQWowG5I')

%whos

display.YouTubeVideo??

help(display.YouTubeVideo)
Help on class YouTubeVideo in module IPython.lib.display:

class YouTubeVideo(IFrame)
 |  Class for embedding a YouTube Video in an IPython session, based on its video id.
 |
 |  e.g. to embed the video from https://www.youtube.com/watch?v=foo , you would
 |  do::
 |
 |      vid = YouTubeVideo("foo")
 |      displa...
CC0-1.0
Markdown 101-class.ipynb
uc-data-services/elag2016-jupyter-jumpstart
Trade Strategy

__Summary:__ In this code we test the results of the given model.
# Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os

np.random.seed(0)

import warnings
warnings.filterwarnings('ignore')

# User defined names
index = "BTC-USD"
filename_whole = "whole_dataset" + index + "_xgboost_model.csv"
filename_trending = "Trending_dataset" + index...
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Functions
def initialize(df):
    days, Action1, Action2, current_status, Money, Shares = ([] for i in range(6))
    Open_price = list(df['Open'])
    Close_price = list(df['Adj Close'])
    Predicted = list(df['Predicted'])
    Action1.append(Predicted[0])
    Action2.append(0)
    current_status.append(Predicted[0])
    if(Pre...
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
# Change to Images directory
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Images")
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Whole Dataset
df_whole_train = df_whole[df_whole["Sample"] == "Train"]
df_whole_test = df_whole[df_whole["Sample"] == "Test"]
df_whole_test_2019 = df_whole_test[df_whole_test.index.year == 2019]
df_whole_test_2020 = df_whole_test[df_whole_test.index.year == 2020]

output_train_whole = Get_TradeData(df_whole_train)
output_test_whole =...
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
__Comments:__ Based on its performance on the Train Sample, the model has definitely learnt the pattern rather than over-fitting. But the performance of the model on the Test Sample is very poor.

Segment Model
df_model_train = df_model[df_model["Sample"] == "Train"]
df_model_test = df_model[df_model["Sample"] == "Test"]
df_model_test_2019 = df_model_test[df_model_test.index.year == 2019]
df_model_test_2020 = df_model_test[df_model_test.index.year == 2020]

output_train_model = Get_TradeData(df_model_train)
output_test_model =...
_____no_output_____
MIT
Dev/BTC-USD/Codes/07 XgBoost Performance Results .ipynb
Sidhus234/WQU-Capstone-Project-2021
Interacting with a Car Object

In this notebook, you've been given some of the starting code for creating and interacting with a car object.

Your tasks are to:
1. Become familiar with this code.
   - Know how to create a car object, and how to move and turn that car.
2. Constantly visualize.
   - To make sure your code i...
import numpy as np
import car

%matplotlib inline
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Define the initial variables
# Create a 2D world of 0's
height = 4
width = 6
world = np.zeros((height, width))

# Define the initial car state
initial_position = [0, 0]  # [y, x] (top-left corner)
velocity = [0, 1]  # [vy, vx] (moving to the right)
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Create a car object
# Create a car object with these initial params
carla = car.Car(initial_position, velocity, world)

print('Carla\'s initial state is: ' + str(carla.state))
Carla's initial state is: [[0, 0], [0, 1]]
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Move and track state
# Move in the direction of the initial velocity
carla.move()

# Track the change in state
print('Carla\'s state is: ' + str(carla.state))

# Display the world
carla.display_world()
Carla's state is: [[0, 1], [0, 1]]
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
TODO: Move in a square path

Using the `move()` and `turn_left()` functions, make carla traverse a 4x4 square path.

The output should look like:
## TODO: Make carla traverse a 4x4 square path
# Move along each of the four sides, turning left at every corner
for side in range(4):
    for step in range(3):
        carla.move()
    carla.turn_left()

## Display the result
carla.display_world()
_____no_output_____
MIT
Object Tracking and Localization/Representing State and Motion/Interacting with a Car Object/Interacting with a Car Object.ipynb
brand909/Computer-Vision
Monte Carlo Integration with Python

Dr. Tirthajyoti Sarkar ([LinkedIn](https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/), [Github](https://github.com/tirthajyoti)), Fremont, CA, July 2020

---

Disclaimer

The inspiration for this demo/notebook stemmed from [Georgia Tech's Online Masters in Analytics (OMSA) program...
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
A simple function which is difficult to integrate analytically

While the general Monte Carlo simulation technique is much broader in scope, we focus particularly on the Monte Carlo integration technique here. It is nothing but a numerical method for computing complex definite integrals, which lack closed-form analytical...
def f1(x):
    return (15*x**3 + 21*x**2 + 41*x + 3)**(1/4) * np.exp(-0.5*x)
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Plot
x = np.arange(0, 4.1, 0.1)
y = f1(x)

plt.figure(figsize=(8,4))
plt.title("Plot of the function: $\sqrt[4]{15x^3+21x^2+41x+3}.e^{-0.5x}$", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.grid(True)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.show()
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Riemann sums?

There are many such techniques under the general category of [Riemann sum](https://medium.com/r/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FRiemann_sum). The idea is just to divide the area under the curve into small rectangular or trapezoidal pieces, approximate them by the simple geometrical calculatio...
rect = np.linspace(0, 4, 5)

plt.figure(figsize=(8,4))
plt.title("Area under the curve: With Riemann sum", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rect[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xticks(fontsize=14...
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
What if I go random?

What if I told you that I do not need to pick the intervals so uniformly, and, in fact, I can go completely probabilistic, and pick 100% random intervals to compute the same integral?

Crazy talk? My choice of samples could look like this…
rand_lines = 4*np.random.uniform(size=5)

plt.figure(figsize=(8,4))
plt.title("With 5 random sampling intervals", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rand_lines[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xti...
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Or, this?
rand_lines = 4*np.random.uniform(size=5)

plt.figure(figsize=(8,4))
plt.title("With 5 random sampling intervals", fontsize=18)
plt.plot(x, y, '-', c='k', lw=2)
plt.fill_between(x, y1=y, y2=0, color='orange', alpha=0.6)
for i in range(5):
    plt.vlines(x=rand_lines[i], ymin=0, ymax=2, color='blue')
plt.grid(True)
plt.xti...
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
It just works!

We don't have the time or scope to prove the theory behind it, but it can be shown that with a reasonably high number of random samples, we can, in fact, compute the integral with sufficiently high accuracy!

We just choose random numbers (between the limits), evaluate the function at those points, add th...
def monte_carlo(func, a=0, b=1, n=1000):
    """
    Monte Carlo integration
    """
    u = np.random.uniform(size=n)
    #plt.hist(u)
    u_func = func(a + (b-a)*u)
    s = ((b-a)/n) * u_func.sum()
    return s
_____no_output_____
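A quick usage sketch (the limits match the plots above; the specific call is illustrative):

estimate = monte_carlo(f1, a=0, b=4, n=1000)
print(estimate)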
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Another version, with the sampling forced to spread over 10 equal sub-intervals
def monte_carlo_uniform(func, a=0, b=1, n=1000):
    """
    Monte Carlo integration with more uniform spread (forced)
    """
    subsets = np.arange(0, n+1, n/10)
    steps = n/10
    u = np.zeros(n)
    for i in range(10):
        start = int(subsets[i])
        end = in...
5.73321706375046
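For reference, a minimal self-contained sketch of the same stratified idea (the name `mc_stratified` and the parameter `k` are illustrative, not from the original notebook):

import numpy as np

def mc_stratified(func, a=0., b=1., n=1000, k=10):
    """Draw n/k uniform samples in each of k equal sub-intervals of [a, b]."""
    edges = np.linspace(a, b, k + 1)
    m = n // k
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        u = np.random.uniform(lo, hi, size=m)
        # each stratum contributes its own rectangle-rule estimate
        total += (hi - lo) / m * func(u).sum()
    return total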
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
How good is the calculation anyway?

This integral cannot be calculated analytically. So, we need to benchmark the accuracy of the Monte Carlo method against another numerical integration technique anyway. We chose the Scipy `integrate.quad()` function for that.

Now, you may also be thinking - **what happens to the accur...
inte_lst = []
for i in range(100, 2100, 50):
    inte = monte_carlo_uniform(f1, a=0, b=4, n=i)
    inte_lst.append(inte)

result, _ = quad(f1, a=0, b=4)

plt.figure(figsize=(8,4))
plt.plot([i for i in range(100, 2100, 50)], inte_lst, color='blue')
plt.hlines(y=result, xmin=0, xmax=2100, linestyle='--', lw=3)
plt.xticks(fontsize=14)
pl...
_____no_output_____
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Not bad at all...

We observe some small perturbations in the low sample density phase, but they smooth out nicely as the sample density increases. In any case, the absolute error is extremely small compared to the value returned by the Scipy function - on the order of 0.02%.

The Monte Carlo trick works fantas...
%%timeit -n100 -r100
inte = monte_carlo_uniform(f1, a=0, b=4, n=500)
107 µs ± 6.57 µs per loop (mean ± std. dev. of 100 runs, 100 loops each)
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Speed of the Scipy function
%%timeit -n100 -r100
quad(f1, a=0, b=4)
216 µs ± 5.31 µs per loop (mean ± std. dev. of 100 runs, 100 loops each)
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
Repeat

For a probabilistic technique like Monte Carlo integration, it goes without saying that mathematicians and scientists almost never stop at just one run but repeat the calculation a number of times and take the average.

Here is a distribution plot from a 10,000 run experiment. As you can see, the plot almost ...
inte_lst = []
for i in range(10000):
    inte = monte_carlo_uniform(f1, a=0, b=4, n=500)
    inte_lst.append(inte)

plt.figure(figsize=(8,4))
plt.title("Distribution of the Monte Carlo runs", fontsize=18)
plt.hist(inte_lst, bins=50, color='orange', edgecolor='k')
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.x...
_____no_output_____
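A one-line summary of such an experiment, assuming `inte_lst` holds the 10,000 estimates from the cell above:

print('mean =', np.mean(inte_lst), 'std =', np.std(inte_lst))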
MIT
Monte-Carlo-integration.ipynb
00inboxtest/Stats-Maths-with-Python
This illustrates the datasets.make_multilabel_classification dataset generator. Each sample consists of counts of two features (up to 50 in total), which are differently distributed in each of two classes.

Points are labeled as follows, where Y means the class is present:

| 1 | 2 | 3 | Color |
|--- |--- |--- |-------...
import sklearn
sklearn.__version__
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Imports

This tutorial imports [make_ml_clf](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html#sklearn.datasets.make_multilabel_classification).
from __future__ import print_function

import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification as make_ml_clf
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Calculations
COLORS = np.array(['!',
                   '#FF3333',  # red
                   '#0198E1',  # blue
                   '#BF5FFF',  # purple
                   '#FCD116',  # yellow
                   '#FF7216',  # orange
                   '#4DBD33',  # green
                   '#87421F'   # brown
                   ])
...
_____no_output_____
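For reference, a minimal sketch of generating such a dataset directly (the parameter values are illustrative assumptions, not necessarily those used inside the `plot_2d` helper):

# Returns the counts X, label indicators Y, plus the class and word distributions
X, Y, p_c, p_w_c = make_ml_clf(n_samples=150, n_features=2, n_classes=3,
                               n_labels=1, length=50, allow_unlabeled=False,
                               return_distributions=True, random_state=42)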
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Plot Results

n_labels=1
data, p_c, p_w_c = plot_2d(n_labels=1)

layout = go.Layout(title='n_labels=1, length=50',
                   xaxis=dict(title='Feature 0 count',
                              showgrid=False),
                   yaxis=dict(title='Feature 1 count',
                              showgrid=False),
                   )
fig = go.Figure(d...
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
n_labels=3
data = plot_2d(n_labels=3)

layout = go.Layout(title='n_labels=3, length=50',
                   xaxis=dict(title='Feature 0 count',
                              showgrid=False),
                   yaxis=dict(title='Feature 1 count',
                              showgrid=False),
                   )
fig = go.Figure(data=data[0],...
_____no_output_____
CC-BY-3.0
_posts/scikit/randomly-generated-multilabel-dataset/randomly-generated-multilabel-dataset.ipynb
bmb804/documentation
Log-transform the concentrations and learn the models for CaCO3 again, to avoid zeros occurring in the prediction.
import numpy as np
import pandas as pd
import dask.dataframe as dd
import matplotlib.pyplot as plt
import seaborn as sns

plt.style.use('ggplot')
#plt.style.use('seaborn-whitegrid')
plt.style.use('seaborn-colorblind')
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300
plt.rcParams['savefig.bbox'] = 'ti...
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Launch deployment
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    project="aslee@10.110.16.5",
    queue='main',
    cores=40,
    memory='10 GB',
    walltime="00:10:00",
    log_directory='job_logs'
)

# When re-running this cell, close any previous session first:
# client.close()
# cluster.close()

client = Client(cluster)
cluster.scale(100)
#clu...
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Build model for CaCO3
from dask_ml.model_selection import train_test_split

merge_df = dd.read_csv('data/spe+bulk_dataset_20201008.csv')
X = merge_df.iloc[:, 1:-5].to_dask_array(lengths=True)
X = X / X.sum(axis=1, keepdims=True)
y = merge_df['CaCO3%'].to_dask_array(lengths=True)

X_train, X_test, y_train, y_test = train_test_split(X, y,...
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Grid search

We know the relationship between the spectra and bulk measurements might not be linear; and based on the pilot_test.ipynb, the SVR algorithm with NMF transformation provides the better cv score. So we focus the grid search on NMF transformation (4, 5, 6, 7, 8 components, based on the PCA result) and SVR. Fi...
from dask_ml.model_selection import GridSearchCV
from sklearn.decomposition import NMF
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.compose import TransformedTargetRegressor

pipe = make_pipeline(NMF(max_iter=2000, random_state=24), SVR())
params = {
    'nmf__n_components': [...
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Visualization
#result_df = pd.DataFrame(grid.cv_results_)
#result_df.to_csv('results/caco3_grid_nmf+svr_{}.csv'.format(date))
result_df = pd.read_csv('results/caco3_grid_nmf+svr_20201013.csv', index_col=0)
#result_df = result_df[result_df.mean_test_score > -1].reset_index(drop=True)

from mpl_toolkits.mplot3d import Axes3D
from m...
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
Check prediction
spe_df = pd.read_csv('data/spe_dataset_20201008.csv', index_col=0)
X = spe_df.iloc[:, :2048].values
X = X / X.sum(axis=1, keepdims=True)
y_caco3 = np.exp(grid.best_estimator_.predict(X))

len(y_caco3[y_caco3 < 0])
len(y_caco3[y_caco3 > 100])
len(y_caco3[y_caco3 > 100]) / len(y_caco3)
_____no_output_____
MIT
build_models_04.ipynb
dispink/CaCO3_NWP
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)

Kalman Filter Math
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
If you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I hand waved some equations away, but I hope implementation has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement...
import numpy as np
from scipy.linalg import expm

dt = 0.1
A = np.array([[0, 1], [0, 0]])
expm(A*dt)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Time Invariance

If the behavior of the system depends on time we can say that a dynamic system is described by the first-order differential equation

$$ g(t) = \dot x$$

However, if the system is *time invariant* the equation is of the form:

$$ f(x) = \dot x$$

What does *time invariant* mean? Consider a home stereo. If you i...
import sympy
from sympy import (init_printing, Matrix, MatMul, integrate, symbols)

init_printing(use_latex='mathjax')
dt, phi = symbols('\Delta{t} \Phi_s')
F_k = Matrix([[1, dt, dt**2/2],
              [0,  1,      dt],
              [0,  0,       1]])
Q_c = Matrix([[0, 0, 0],
              [0, 0, 0...
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
For completeness, let us compute the equations for the 0th order and 1st order systems.
F_k = sympy.Matrix([[1]])
Q_c = sympy.Matrix([[phi]])

print('0th order discrete process noise')
sympy.integrate(F_k*Q_c*F_k.T, (dt, 0, dt))

F_k = sympy.Matrix([[1, dt],
                    [0, 1]])
Q_c = sympy.Matrix([[0, 0],
                    [0, 1]]) * phi

Q = sympy.integrate(F_k * Q_c * F_k.T, (dt, 0, dt))
print('...
1st order discrete process noise
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Piecewise White Noise Model

Another model for the noise assumes that the highest order term (say, acceleration) is constant for the duration of each time period, but differs for each time period, and each of these is uncorrelated between time periods. In other words there is a discontinuous jump in acceleration at...
var = symbols('sigma^2_v')
v = Matrix([[dt**2 / 2], [dt]])

Q = v * var * v.T

# factor variance out of the matrix to make it more readable
Q = Q / var
sympy.MatMul(Q, var)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
The second order system proceeds with the same math.

$$\mathbf{F} = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$

Here we will assume that the white noise is a discrete time Wiener process. This gives us

$$\Gamma = \begin{bmatrix}\frac{1}{2}\Delta t^2 \\ \Delta t\\ 1\end{bmatrix...
var = symbols('sigma^2_v')
v = Matrix([[dt**2 / 2], [dt], [1]])

Q = v * var * v.T

# factor variance out of the matrix to make it more readable
Q = Q / var
sympy.MatMul(Q, var)
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
We cannot say that this model is more or less correct than the continuous model - both are approximations to what is happening to the actual object. Only experience and experiments can guide you to the appropriate model. In practice you will usually find that either model provides reasonable results, but typically one ...
from filterpy.common import Q_continuous_white_noise
from filterpy.common import Q_discrete_white_noise

Q = Q_continuous_white_noise(dim=2, dt=1, spectral_density=1)
print(Q)

Q = Q_continuous_white_noise(dim=3, dt=1, spectral_density=1)
print(Q)
[[ 0.05   0.125  0.167]
 [ 0.125  0.333  0.5  ]
 [ 0.167  0.5    1.0  ]]
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
The function `Q_discrete_white_noise()` computes $\mathbf Q$ assuming a piecewise model for the noise.
Q = Q_discrete_white_noise(2, var=1.)
print(Q)

Q = Q_discrete_white_noise(3, var=1.)
print(Q)
[[ 0.25  0.5   0.5 ]
 [ 0.5   1.0   1.0 ]
 [ 0.5   1.0   1.0 ]]
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Simplification of Q

Many treatments use a much simpler form for $\mathbf Q$, setting it to zero except for a noise term in the lower rightmost element. Is this justified? Well, consider the value of $\mathbf Q$ for a small $\Delta t$
import numpy as np

np.set_printoptions(precision=8)
Q = Q_continuous_white_noise(
    dim=3, dt=0.05, spectral_density=1)
print(Q)
np.set_printoptions(precision=3)
[[ 0.00000002  0.00000078  0.00002083]
 [ 0.00000078  0.00004167  0.00125   ]
 [ 0.00002083  0.00125     0.05      ]]
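A sketch of the simplification under discussion: keep only the lower-right noise term and zero out the rest (the name `Q_simple` is illustrative):

Q_simple = np.zeros((3, 3))
Q_simple[2, 2] = 0.05  # the dominant term from the printout above
print(Q_simple)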
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
We can see that most of the terms are very small. Recall that the only equation using this matrix is

$$ \mathbf P=\mathbf{FPF}^\mathsf{T} + \mathbf Q$$

If the values for $\mathbf Q$ are small relative to $\mathbf P$ then it will be contributing almost nothing to the computation of $\mathbf P$. Setting $\mathbf Q$ to the z...
import matplotlib.pyplot as plt

t = np.linspace(-1, 1, 10)
plt.plot(t, np.exp(t))
t = np.linspace(-1, 1, 2)
plt.plot(t, t+1, ls='--', c='k');
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
You can see that the slope is very close to the curve at $t=0.1$, but far from it at $t=1$. But let's continue with a step size of 1 for a moment. We can see that at $t=1$ the estimated value of $y$ is 2. Now we can compute the value at $t=2$ by taking the slope of the curve at $t=1$ and adding it to our initial estimat...
import code.book_plots as book_plots

t = np.linspace(-1, 2, 20)
plt.plot(t, np.exp(t))
t = np.linspace(0, 1, 2)
plt.plot([1, 2, 4], ls='--', c='k')
book_plots.set_labels(x='x', y='y');
_____no_output_____
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Here we see the next estimate for y is 4. The errors are getting large quickly, and you might be unimpressed. But 1 is a very large step size. Let's put this algorithm in code, and verify that it works by using a small step size.
def euler(t, tmax, y, dx, step=1.):
    ys = []
    while t < tmax:
        y = y + step*dx(t, y)
        ys.append(y)
        t += step
    return ys

def dx(t, y):
    return y

print(euler(0, 1, 1, dx, step=1.)[-1])
print(euler(0, 2, 1, dx, step=1.)[-1])
2.0
4.0
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
This looks correct. So now let's plot the result of a much smaller step size.
ys = euler(0, 4, 1, dx, step=0.00001)

plt.subplot(1, 2, 1)
plt.title('Computed')
plt.plot(np.linspace(0, 4, len(ys)), ys)

plt.subplot(1, 2, 2)
t = np.linspace(0, 4, 20)
plt.title('Exact')
plt.plot(t, np.exp(t));

print('exact answer=', np.exp(4))
print('euler answer=', ys[-1])
print('difference =', np.exp(4) - ys[-1])
print(...
exact answer= 54.5981500331
euler answer= 54.59705808834125
difference = 0.00109194480299
iterations = 400000
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Here we see that the error is reasonably small, but it took a very large number of iterations to get three digits of precision. In practice Euler's method is too slow for most problems, and we use more sophisticated methods.Before we go on, let's formally derive Euler's method, as it is the basis for the more advanced ...
def runge_kutta4(y, x, dx, f):
    """computes 4th order Runge-Kutta for dy/dx.

    y is the initial value for y
    x is the initial value for x
    dx is the difference in x (e.g. the time step)
    f is a callable function (y, x) that you supply
    to compute dy/dx for the specified values.
    """
    k1 = d...
_____no_output_____
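For reference, a minimal sketch of the standard fourth-order Runge-Kutta update with the same call signature (this is the textbook algorithm; the name `runge_kutta4_sketch` is illustrative):

def runge_kutta4_sketch(y, x, dx, f):
    """One classical 4th-order Runge-Kutta step for dy/dx = f(y, x)."""
    k1 = dx * f(y, x)
    k2 = dx * f(y + 0.5*k1, x + 0.5*dx)
    k3 = dx * f(y + 0.5*k2, x + 0.5*dx)
    k4 = dx * f(y + k3, x + dx)
    # weighted average of the four slope estimates
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6.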
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Let's use this for a simple example. Let

$$\dot{y} = t\sqrt{y(t)}$$

with the initial values

$$\begin{aligned}t_0 &= 0\\y_0 &= y(t_0) = 1\end{aligned}$$
import math
import numpy as np

t = 0.
y = 1.
dt = .1
ys, ts = [], []

def func(y, t):
    return t*math.sqrt(y)

while t <= 10:
    y = runge_kutta4(y, t, dt, func)
    t += dt
    ys.append(y)
    ts.append(t)

exact = [(t**2 + 4)**2 / 16. for t in ts]
plt.plot(ts, ys)
plt.plot(ts, exact)

error = np.array(exact) - np...
max error 5.206970035942504e-05
CC-BY-4.0
07-Kalman-Filter-Math.ipynb
esvhd/Kalman-and-Bayesian-Filters-in-Python
Complex Numbers

Q1. Return the angle of `a` in radians.
import numpy as np

a = 1+1j
output = np.angle(a)
print(output)
0.785398163397
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q2. Return the real part and imaginary part of `a`.
a = np.array([1+2j, 3+4j, 5+6j])
real = a.real
imag = a.imag
print("real part=", real)
print("imaginary part=", imag)
real part= [ 1.  3.  5.]
imaginary part= [ 2.  4.  6.]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q3. Replace the real part of a with `9`, the imaginary part with `[5, 7, 9]`.
a = np.array([1+2j, 3+4j, 5+6j])
a.real = 9
a.imag = [5, 7, 9]
print(a)
[ 9.+5.j 9.+7.j 9.+9.j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q4. Return the complex conjugate of `a`.
a = 1+2j
output = np.conjugate(a)
print(output)
(1-2j)
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Discrete Fourier Transform

Q5. Compute the one-dimensional DFT of `a`.
a = np.exp(2j * np.pi * np.arange(8))
output = np.fft.fft(a)
print(output)
[  8.00000000e+00 -6.85802208e-15j   2.36524713e-15 +9.79717439e-16j
   9.79717439e-16 +9.79717439e-16j   4.05812251e-16 +9.79717439e-16j
   0.00000000e+00 +9.79717439e-16j  -4.05812251e-16 +9.79717439e-16j
  -9.79717439e-16 +9.79717439e-16j  -2.36524713e-15 +9.79717439e-16j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q6. Compute the one-dimensional inverse DFT of the `output` in the above question.
print("a=", a) inversed = ... print("inversed=", a)
a= [ 1. +0.00000000e+00j  1. -2.44929360e-16j  1. -4.89858720e-16j
  1. -7.34788079e-16j  1. -9.79717439e-16j  1. -1.22464680e-15j
  1. -1.46957616e-15j  1. -1.71450552e-15j]
inversed= [ 1. +0.00000000e+00j  1. -2.44929360e-16j  1. -4.89858720e-16j
  1. -7.34788079e-16j  1. -9.79717439e-16j  1. -1...
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q7. Compute the one-dimensional discrete Fourier Transform for real input `a`.
a = [0, 1, 0, 0]
output = np.fft.rfft(a)
print(output)
assert output.size == (len(a)//2 + 1 if len(a) % 2 == 0 else (len(a)+1)//2)

# cf.
output2 = np.fft.fft(a)
print(output2)
[ 1.+0.j  0.-1.j -1.+0.j]
[ 1.+0.j  0.-1.j -1.+0.j  0.+1.j]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q8. Compute the one-dimensional inverse DFT of the output in the above question.
inversed = np.fft.irfft(output)
print("inversed=", inversed)
inversed= [0, 1, 0, 0]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Q9. Return the DFT sample frequencies of `signal`.
signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32)
fourier = np.fft.fft(signal)
n = signal.size
freq = np.fft.fftfreq(n)
print(freq)
[ 0. 0.125 0.25 0.375 -0.5 -0.375 -0.25 -0.125]
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Window Functions
fig = plt.figure(figsize=(19, 10))

# Hamming window
window = np.hamming(51)

plt.plot(np.bartlett(51), label="Bartlett window")
plt.plot(np.blackman(51), label="Blackman window")
plt.plot(np.hamming(51), label="Hamming window")
plt.plot(np.hanning(51), label="Hanning window")
plt.plot(np.kaiser(51, 14), label="Kaiser w...
_____no_output_____
MIT
Discrete_Fourier_Transform.ipynb
server73/numpy_excercises
Multiple linear regression

In many data sets there may be several predictor variables that have an effect on a response variable. In fact, the *interaction* between variables may also be used to predict response. When we incorporate these additional predictor variables into the analysis the model is called *multiple ...
import pandas as pd
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Load the dataframe:

- Create variable `dataframe`
- Set it to `with pd do read_csv using "datasets/trees2.csv"`
- `dataframe` (to display)
dataframe = pd.read_csv('datasets/trees2.csv')
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
We know that later on, we'd like to use `Type` as a predictor, so we need to convert it into a dummy variable. However, we'd also like to keep `Type` as a column for our plot labels. There are several ways to do this, but probably the easiest is to save `Type` and then put it back in the dataframe. It will make sense as ...
treeType = dataframe[['Type']]
treeType
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
To do the dummy conversion:

- Set `dataframe` to `with pd do get_dummies using` a list containing
  - `dataframe`
  - freestyle `drop_first=True`
- `dataframe` (to display)
dataframe = pd.get_dummies(dataframe, drop_first=True)
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Notice that `cherry` is now the base level, so `Type_plum` is `0` where `cherry` was before and `1` where `plum` was before.

To put `Type` back in, use `assign`:

- Set `dataframe` to `with dataframe do assign using` a list containing
  - freestyle `Type=treeType`
- `dataframe` (to display)
dataframe = dataframe.assign(Type=treeType)
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is nice - we have our dummy code for modeling but also the nice original label in `Type` so we don't get confused.

Explore data

Let's start with some *overall* descriptive statistics:

- `with dataframe do describe using`
dataframe.describe()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is nice, but we suspect there might be some differences between cherry trees and plum trees that this doesn't show. We can `describe` each group as well:

- Create variable `groups`
- Set it to `with dataframe do groupby using "Type"`
groups = dataframe.groupby('Type')
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Now `describe` groups:

- `with groups do describe using`
groups.describe()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Notice this results table has been rotated compared to the normal `describe`. The rows are our two tree types, and the columns are **stacked columns** where the header (e.g. `Girth`) applies to everything below it and to the left (it is not centered). From this we see that the `Girth` is about the same across trees, the ...
import plotly.express as px
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Create the scatterplot:

- Create variable `fig`
- Set it to `with px do scatter using` a list containing
  - `dataframe`
  - freestyle `x="Height"`
  - freestyle `y="Volume"`
  - freestyle `color="Type"`
  - freestyle `size="Girth"`
fig = px.scatter(dataframe, x="Height", y="Volume", color="Type", size="Girth") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><bloc...
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show the figure:

- `with fig do show using`
fig.show()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Modeling 1

Last time we looked at `trees`, we used `Height` to predict `Volume`. With multiple linear regression, we can use more than one variable. Let's start with using `Girth` and `Height` to predict `Volume`.

But first, the imports:

- `import sklearn.linear_model as linear_model`
- `import numpy as np`
import sklearn.linear_model as linear_model
import numpy as np
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Create the model:

- Create variable `lm` (for linear model)
- Set it to `with linear_model create LinearRegression using`
lm = linear_model.LinearRegression()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Train the model using all the data:

- `with lm do fit using` a list containing
  - `dataframe [ ]` (use {dictVariable} from LISTS) containing a list containing
    - `"Girth"` (this is $X_1$)
    - `"Height"` (this is $X_2$)
  - `dataframe [ ]` containing a list containing
    - `"Volume"` (this is $Y$)
lm.fit(dataframe[['Girth', 'Height']], dataframe[['Volume']])
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Go ahead and get the $r^2$; you can just copy the blocks from the last cell and change `fit` to `score`.
lm.score(dataframe[['Girth', 'Height']], dataframe[['Volume']])
_____no_output_____
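To peek at the fitted model itself, a minimal sketch using standard scikit-learn attributes (not part of the original lesson):

print(lm.coef_)       # one slope per predictor (Girth, Height)
print(lm.intercept_)  # the fitted intercept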
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Based on that $r^2$, we'd think we have a really good model, right?

Diagnostics 1

To check the model, the first thing we need to do is get the predictions from the model. Once we have the predictions, we can `assign` them to a column in the `dataframe`:

- Set `dataframe` to `with dataframe do assign using` a list contai...
dataframe = dataframe.assign(predictions1=lm.predict(dataframe[['Girth', 'Height']]))
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Similarly, we want to add the residuals to `dataframe`:

- Set `dataframe` to `with dataframe do assign using` a list containing
  - freestyle `residuals1=` *followed by* `dataframe [ "Volume" ] - dataframe [ "predictions1" ]`
- `dataframe` (to display)

**Hint: use {dictVariable}[] and the + block from MATH**
dataframe = dataframe.assign(residuals1=(dataframe['Volume'] - dataframe['predictions1']))
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Now let's do some plots! Let's check linearity and equal variance:

- Linearity means the residuals will be close to zero
- Equal variance means residuals will be evenly away from zero
- Set `fig` to `with px do scatter using` a list containing
  - `dataframe`
  - freestyle `x="predictions1"`
  - freestyle `y="residuals1...
fig = px.scatter(dataframe, x="predictions1", y="residuals1") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_...
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show it:

- `with fig do show using`
fig.show()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
We see something very, very wrong here: a "U" shape from left to right. This means our residuals are positive for low predictions, go negative for mid predictions, and go positive again for high predictions. The only way this can happen is if something is quadratic (squared) in the phenomenon we're trying to model.

Mode...
dataframe = dataframe.assign(GGH=(dataframe['Girth'] * (dataframe['Girth'] * dataframe['Height'])))
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
As you might have noticed, `GGH` is an interaction. Often when we have interactions, we include the variables that the interactions are made of (also known as **main effects**). However, in this case, that doesn't make sense because we know the interaction is close to the definition of `Volume`. So let's fit a new model...
lm.fit(dataframe[['GGH']], dataframe[['Volume']])
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Diagnostics 2

Save the predictions:

- Set `dataframe` to `with dataframe do assign using` a list containing
  - freestyle `predictions2=` *followed by*
    - `with lm do predict using` a list containing
      - `dataframe [ ]` containing a list containing
        - `"GGH"`
- `dataframe` (to display)
dataframe = dataframe.assign(predictions2=lm.predict(dataframe[['GGH']]))
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Save the residuals:

- Set `dataframe` to `with dataframe do assign using` a list containing
  - freestyle `residuals2=` *followed by* `dataframe [ "Volume" ] - dataframe [ "predictions2" ]`
- `dataframe` (to display)
dataframe = dataframe.assign(residuals2=(dataframe['Volume'] - dataframe['predictions2']))
dataframe
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And now plot the predicted vs residuals to check linearity and equal variance:

- Set `fig` to `with px do scatter using` a list containing
  - `dataframe`
  - freestyle `x="predictions2"`
  - freestyle `y="residuals2"`
fig = px.scatter(dataframe, x="predictions2", y="residuals2") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="w|!1_/S4wRKF4S1`6Xg+">fig</variable><variable id="k#w4n=KvP~*sLy*OW|Jl">px</variable><variable id="B5p-Xul6IZ.0%nd96oa%">dataframe</variable></variables><block type="variables_...
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
And show it:

- `with fig do show using`
fig.show()
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
This is a pretty good plot. Most of the residuals are close to zero, and those that aren't are fairly evenly spread. We want to see an evenly spaced band above and below 0 as we scan from left to right, and we do.

With this new model, calculate $r^2$:
lm.score(dataframe[['GGH']], dataframe[['Volume']])
_____no_output_____
Apache-2.0
Multiple-linear-regression.ipynb
memphis-iis/datawhys-intern-solutions-2020
Debugging the gradient computation
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(666)
X = np.random.random(size=(1000, 10))
true_theta = np.arange(1, 12, dtype=float)
X_b = np.hstack([np.ones((len(X), 1)), X])
y = X_b.dot(true_theta) + np.random.normal(size=1000)

true_theta
X.shape
y.shape

def J(theta, X_b, y):
    try:
        retu...
CPU times: user 1.57 s, sys: 30.6 ms, total: 1.6 s
Wall time: 856 ms
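The debugging idea is to approximate each partial derivative with a centered difference and compare it against the analytic gradient; a minimal sketch, assuming the cost function `J` above (the name `dJ_debug` and the `epsilon` default are illustrative):

def dJ_debug(theta, X_b, y, epsilon=0.01):
    res = np.empty(len(theta))
    for i in range(len(theta)):
        theta_1 = theta.copy()
        theta_1[i] += epsilon
        theta_2 = theta.copy()
        theta_2[i] -= epsilon
        # centered difference approximation of dJ/dtheta_i
        res[i] = (J(theta_1, X_b, y) - J(theta_2, X_b, y)) / (2 * epsilon)
    return res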
Apache-2.0
06-Gradient-Descent/08-Debug-Gradient/08-Debug-Gradient.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
Inaugural Project

> **Note the following:**
> 1. This is an example of how to structure your **inaugural project**.
> 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging).
> 1. Remember this [guide](https://www.mark...
import numpy as np

# autoreload modules when code is run
%load_ext autoreload
%autoreload 2

# local modules
import inauguralproject
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Question 1 BRIEFLY EXPLAIN HOW YOU SOLVE THE MODEL.
# code for solving the model (remember documentation and comments)

a = np.array([1, 2, 3])
b = inauguralproject.square(a)
print(b)
[1 4 9]
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Question 2 ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Question 3 ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Question 4 ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Question 5 ADD ANSWER.
# code
_____no_output_____
MIT
inauguralproject/inauguralproject.ipynb
henrikkyndal/projects-2020-slangerne
Generative Adversarial Network

In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!

GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab...
%matplotlib inline

import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Model Inputs

First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.

>**Exercise:** Finish the `mo...
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name="discriminator_inputs")
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name="generator_inputs")
    return inputs_real, inputs_z
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Generator network

![GAN Network](assets/gan_network.png)

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, e...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables...
_____no_output_____
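A minimal sketch of a generator matching this description, in TF1 style (the name `generator_sketch` and the tanh output layer are assumptions consistent with the exercise text, not necessarily the author's exact solution):

def generator_sketch(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('generator', reuse=reuse):
        h1 = tf.layers.dense(z, n_units, activation=None)  # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                  # leaky ReLU
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)                           # outputs in [-1, 1]
        return out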
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.

>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code i...
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak pa...
_____no_output_____
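A minimal sketch following the exercise text, returning both the sigmoid output and the logits (the name `discriminator_sketch` is illustrative):

def discriminator_sketch(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        h1 = tf.layers.dense(x, n_units, activation=None)  # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                  # leaky ReLU
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        return out, logits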
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Hyperparameters
# Size of input image to discriminator
input_size = 784  # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Build network

Now we're building the network from the functions defined above.

First is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z. Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes...
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output

# Discriminator network here
d_model_real, d_logits_real = discriminato...
_____no_output_____
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist
Discriminator and Generator Losses

Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_...
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_f...
_____no_output_____
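A sketch of the remaining loss pieces described above, assuming `d_logits_fake` comes from the discriminator applied to `g_model`:

# Discriminator total loss is the sum of the real and fake losses
# d_loss = d_loss_real + d_loss_fake

# Generator loss: labels are all ones, since the generator wants the
# discriminator to believe its images are real
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))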
Apache-2.0
Intro_to_GANs_Exercises.ipynb
agoila/gan_mnist