Setting limits
Now we want to widen the axes slightly so that all the data points are visible.
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
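A self-contained version of the cell above can be run as a script; the notebook assumes `x`, `c`, `s` were defined in an earlier cell, so they are recreated here (the `Agg` backend is an assumption, used only so the sketch runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np

# recreate the arrays the notebook defines earlier
x = np.linspace(-np.pi, np.pi, 256)
c, s = np.cos(x), np.sin(x)

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid")
# pad both axes by 10% so no data point sits on the border
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
xlo, xhi = plt.gca().get_xlim()
```

Because the limits are set explicitly, `get_xlim()` returns exactly the padded values rather than matplotlib's autoscaled range.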
Setting ticks
Current ticks are not ideal because they do not show the interesting values ($\pm\pi$, $\pm\pi/2$) for sine and cosine.
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
plt.yticks([-1, 0, +1])
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Setting tick labels
* Ticks are correctly placed, but their labels are not very explicit
* We can guess that 3.142 is $\pi$, but it would be better to make it explicit
* When we set tick values, we can also provide a corresponding label in the second argument list
* We can use $\LaTeX$ when defining the labels
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
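The truncated cell above can be sketched end-to-end; the $\LaTeX$ label list is completed here under the natural assumption that it mirrors the tick values (the last three labels are inferred, not visible in the cell):

```python
import matplotlib
matplotlib.use("Agg")  # assumed backend so the sketch runs as a script
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-np.pi, np.pi, 256)  # assumed from an earlier cell
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, np.cos(x), color="blue", linewidth=2.5)

# tick positions and their LaTeX labels, passed as parallel lists
ticks = [-np.pi, -np.pi/2, 0, np.pi/2, np.pi]
labels = [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$']
plt.xticks(ticks, labels)
got = [t.get_text() for t in plt.gca().get_xticklabels()]
```

Passing labels explicitly installs a fixed formatter, so the label text is available immediately without drawing the figure.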
Moving spines
* **Spines** are the lines connecting the axis tick marks and marking the boundaries of the data area
* Spines can be placed at arbitrary positions
* Until now they have been on the border of the axes, but we want to have them in the middle
* There are four of them: top, bottom, left, right
* Therefore, the top and t...
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
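The spine-moving step described above can be sketched with the standard matplotlib spine API (hiding the top/right spines and anchoring the other two at the data origin); the backend choice is an assumption for headless running:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-np.pi, np.pi, 256)
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, np.cos(x), color="blue", linewidth=2.5)

ax = plt.gca()
# hide the top and right spines
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# move the remaining two spines to x=0 and y=0 in data coordinates
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
pos = ax.spines['left'].get_position()
```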
Adding a legend
* Let us include a legend in the upper right of the plot
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0,...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
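The legend step can be sketched on its own: each `plot` call carries a `label`, and `plt.legend(loc='upper right')` collects them (backend choice assumed for headless running):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-np.pi, np.pi, 256)
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, np.cos(x), color="blue", linewidth=2.5, label="cosine")
plt.plot(x, np.sin(x), color="red", linewidth=2.5, label="sine")

leg = plt.legend(loc='upper right')  # legend in the upper right, as in the text
legend_texts = [t.get_text() for t in leg.get_texts()]
```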
Annotate some points
* The `annotate` command allows us to include annotations in the plot
* For instance, to annotate the value $\frac{2\pi}{3}$ of both the sine and the cosine, we have to:
  1. draw a marker on the curve as well as a straight dotted line
  2. use the annotate command to display some text with an arrow
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
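The two annotation steps listed above can be sketched for the sine curve at $\frac{2\pi}{3}$; the offsets, fontsize, and arrow style are illustrative assumptions, not values visible in the truncated cell:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-np.pi, np.pi, 256)
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, np.sin(x), color="red", linewidth=2.5)

t = 2 * np.pi / 3
# step 1: dotted guide line from the axis to the curve, plus a marker
plt.plot([t, t], [0, np.sin(t)], color='red', linewidth=2.5, linestyle='--')
plt.scatter([t], [np.sin(t)], 50, color='red')
# step 2: text with an arrow pointing at the annotated point
plt.annotate(r'$\sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
             xy=(t, np.sin(t)), xycoords='data',
             xytext=(+10, +30), textcoords='offset points',
             fontsize=16,
             arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=.2'))
```

The annotated identity itself is exact: $\sin(2\pi/3) = \sqrt{3}/2$.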
* The tick labels are now hardly visible because of the blue and red lines
* We can make them bigger, and we can also adjust their properties so that they are rendered on a semi-transparent white background
* This will allow us to see both the data and the labels
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine")
plt.xlim(x.min() * 1.1, x.max() * 1.1)
plt.ylim(c.min() * 1.1, c.max() * 1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
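The tick-label styling described above is typically done by looping over the label `Text` objects; the fontsize and alpha below are illustrative assumptions:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-np.pi, np.pi, 256)
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(x, np.cos(x), color="blue", linewidth=2.5)

ax = plt.gca()
# enlarge every tick label and give it a semi-transparent white background
for label in ax.get_xticklabels() + ax.get_yticklabels():
    label.set_fontsize(16)
    label.set_bbox(dict(facecolor='white', edgecolor='none', alpha=0.65))
sizes = {lab.get_fontsize() for lab in ax.get_xticklabels()}
```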
Scatter plots
n = 1024
x = np.random.normal(0, 1, n)
y = np.random.normal(0, 1, n)
t = np.arctan2(y, x)

plt.axes([0.025, 0.025, 0.95, 0.95])
plt.scatter(x, y, s=75, c=t, alpha=.5)
plt.xlim(-1.5, 1.5)
plt.xticks(())
plt.ylim(-1.5, 1.5)
plt.yticks(())
ax = plt.gca()
ax.spines['top'].set_color('none')
ax.spines['right'].set_color(...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Bar plots
* Create two bar plots overlaid on the same axis
* Include the value of each bar
n = 12
xs = np.arange(n)
y1 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n)
y2 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n)

plt.axes([0.025, 0.025, 0.95, 0.95])
plt.bar(xs, +y1, facecolor='#9999ff', edgecolor='white')
plt.bar(xs, -y2, facecolor='#ff9999', edgecolor='white')
for x, y in zip(xs, y1):...
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
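The value-labelling loop is truncated in the cell above; a runnable sketch labels each bar with `plt.text` (the `0.05` offsets and two-decimal format are assumptions):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

n = 12
xs = np.arange(n)
y1 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n)
y2 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n)

plt.axes([0.025, 0.025, 0.95, 0.95])
plt.bar(xs, +y1, facecolor='#9999ff', edgecolor='white')
plt.bar(xs, -y2, facecolor='#ff9999', edgecolor='white')
# write each bar's value just above (upper bars) or below (lower bars) it
for xv, yv in zip(xs, y1):
    plt.text(xv, yv + 0.05, f'{yv:.2f}', ha='center', va='bottom')
for xv, yv in zip(xs, y2):
    plt.text(xv, -yv - 0.05, f'{yv:.2f}', ha='center', va='top')
```

One text artist is created per bar, so the axes end up with `2 * n` labels.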
Classes
For more information on the magic methods of Python classes, consult the docs: https://docs.python.org/3/reference/datamodel.html
class DumbClass:
    """This class is just meant to demonstrate the magic __repr__ method"""
    def __repr__(self):
        """I'm giving this method a docstring"""
        return "I'm representing an instance of my dumbclass"

dc = DumbClass()
print(dc)
dc
help(DumbClass)

class Stack:
    """ A ...
_____no_output_____
MIT
.ipynb_checkpoints/12-4_review-checkpoint.ipynb
willdoucet/Classwork
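The cell above is truncated at the `Stack` class; a runnable restatement is below. The `__repr__` demo follows the cell, while the `Stack` body is a hypothetical minimal sketch (the notebook's actual implementation is not visible):

```python
class DumbClass:
    """Demonstrates the magic __repr__ method."""
    def __repr__(self):
        return "I'm representing an instance of my DumbClass"

# print() falls back to __repr__ when no __str__ is defined
dc = DumbClass()

class Stack:
    """A minimal LIFO stack (hypothetical sketch; the original is truncated)."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def __repr__(self):
        return f"Stack({self._items!r})"

s = Stack()
s.push(1)
s.push(2)
```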
Using the SQLAlchemy ORM
For more information, check out the documentation: https://docs.sqlalchemy.org/en/latest/orm/tutorial.html
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Float, ForeignKey
from sqlalchemy.orm import Session, relationship
import pymysql

pymysql.install_as_MySQLdb()

# Sets an object to utilize the default declarative base in SQL Alch...
_____no_output_____
MIT
.ipynb_checkpoints/12-4_review-checkpoint.ipynb
willdoucet/Classwork
Estimation on real data using MSM
from consav import runtools
runtools.write_numba_config(disable=0, threads=4)
%matplotlib inline
%load_ext autoreload
%autoreload 2

# Local modules
from Model import RetirementClass
import figs
import SimulatedMinimumDistance as SMD

# Global modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as p...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Data
data = pd.read_excel('SASdata/moments.xlsx')
mom_data = data['mom'].to_numpy()
se = data['se'].to_numpy()
obs = data['obs'].to_numpy()
se = se/np.sqrt(obs)
se[se>0] = 1/se[se>0]
factor = np.ones(len(se))
factor[-15:] = 4
W = np.eye(len(se))*se*factor
cov = pd.read_excel('SASdata/Cov.xlsx')
Omega = cov*obs
Nobs = np.med...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Set up estimation
single_kwargs = {'simN': int(1e5), 'simT': 68-53+1}
Couple = RetirementClass(couple=True, single_kwargs=single_kwargs,
                        simN=int(1e5), simT=68-53+1)
Couple.solve()
Couple.simulate()

def mom_fun(Couple):
    return SMD.MomFun(Couple)

est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "par...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Estimate
theta0 = SMD.start(9, bounds=[(0,1), (0,1), (0.2,0.8), (0.2,0.8), (0,2)])
theta0
smd.MultiStart(theta0, W)
theta = smd.est
Iteration: 50 (11.08 minutes) alpha_0_male=0.5044 alpha_0_female=0.4625 sigma_eta=0.8192 pareto_w=0.7542 phi_0_male=0.1227 -> 21.6723 Iteration: 100 (11.19 minutes) alpha_0_male=0.5703 alpha_0_female=0.5002 sigma_eta=0.7629 pareto_w=0.7459 phi_0_male=0.1575 -> 17.7938 Iteration: 150 (10.73 minutes) alpha_0_male=0.55...
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Save parameters
est_par.append('phi_0_female')
thetaN = list(theta)
thetaN.append(Couple.par.phi_0_male)
SMD.save_est(est_par, thetaN, name='baseline2')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Standard errors
est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "pareto_w", "phi_0_male"]
smd = SMD.SimulatedMinimumDistance(Couple, mom_data, mom_fun, est_par=est_par)
theta = list(SMD.load_est('baseline2').values())
theta = theta[:5]
smd.obj_fun(theta, W)
np.round(theta, 3)
Nobs = np.quantile(obs, 0.25)
smd.std_error(theta, Omega...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Model fit
smd.obj_fun(theta, W)
jmom = pd.read_excel('SASdata/joint_moments_ad.xlsx')
for i in range(-2, 3):
    data = jmom[jmom.Age_diff==i]['ssh'].to_numpy()
    plt.bar(np.arange(-7, 8), data, label='Data')
    plt.plot(np.arange(-7, 8), SMD.joint_moments_ad(Couple, i), 'k--', label='Predicted')
    #plt.ylim(0,0.4)
    plt.legend(...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Sensitivity
est_par_tex = [r'$\alpha^m$', r'$\alpha^f$', r'$\sigma$', r'$\lambda$', r'$\phi$']
fixed_par = ['R', 'rho', 'beta', 'gamma', 'v',
             'priv_pension_male', 'priv_pension_female',
             'g_adjust', 'pi_adjust_m', 'pi_adjust_f']
fixed_par_tex = [r'$R$', r'$\rho$', r'$\beta$', r'$\gamma$', r'$v$', r'$PP...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Recalibrate model (phi=0)
Couple.par.phi_0_male = 0
Couple.par.phi_0_female = 0
est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "pareto_w"]
smd = SMD.SimulatedMinimumDistance(Couple, mom_data, mom_fun, est_par=est_par)
theta0 = SMD.start(4, bounds=[(0,1), (0,1), (0.2,0.8), (0.2,0.8)])
smd.MultiStart(theta0, W)
theta = smd.est
est_par.appen...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Recalibrate model (phi high)
Couple.par.phi_0_male = 1.187
Couple.par.phi_0_female = 1.671
Couple.par.pareto_w = 0.8
est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta"]
smd = SMD.SimulatedMinimumDistance(Couple, mom_data, mom_fun, est_par=est_par)
theta0 = SMD.start(4, bounds=[(0.2,0.6), (0.2,0.6), (0.4,0.8)])
theta0
smd.MultiStart(theta0, W)
the...
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
More Dimensionality Reduction Exercises
Based on the book "Python Data Science Handbook" by Jake VanderPlas: https://jakevdp.github.io/PythonDataScienceHandbook/
Using the scikit-learn faces dataset, apply manifold learning techniques for comparison.
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=30)
faces.data.shape
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The dataset has 2,300 face images with 2,914 pixels each (47×62). Let's visualize the first of these images in a 5×8 grid.
import numpy as np
from numpy import random
from matplotlib import pyplot as plt
%matplotlib inline

fig, ax = plt.subplots(5, 8, subplot_kw=dict(xticks=[], yticks=[]))
for i, axi in enumerate(ax.flat):
    axi.imshow(faces.images[i], cmap='gray')
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
We can check whether dimensionality reduction makes it possible to understand some of the characteristics of the images.
from sklearn.decomposition import PCA

model0 = PCA(n_components=0.95)
X_pca = model0.fit_transform(faces.data)
plt.plot(np.cumsum(model0.explained_variance_ratio_))
plt.xlabel('n components')
plt.ylabel('cumulative variance')
plt.grid(True)
print("Numero de componentes para 95% de variância preservada:", model0.n_compo...
Numero de componentes para 95% de variância preservada: 171
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
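The "number of components for 95% preserved variance" computed above can be sketched with plain NumPy: the explained-variance ratios are the normalized squared singular values of the centered data. The synthetic rank-3 matrix below is a hypothetical stand-in for `faces.data`:

```python
import numpy as np

# synthetic data: 3 orthonormal directions with spreads 10, 5, 3, plus tiny noise
rng = np.random.RandomState(0)
basis = np.linalg.qr(rng.normal(size=(50, 3)))[0]
coords = rng.normal(size=(200, 3)) * np.array([10.0, 5.0, 3.0])
X = coords @ basis.T + 0.01 * rng.normal(size=(200, 50))

# explained-variance ratios from the singular values of the centered data
Xc = X - X.mean(axis=0)
svals = np.linalg.svd(Xc, compute_uv=False)
evr = svals**2 / np.sum(svals**2)
# smallest number of components whose cumulative ratio reaches 95%
n95 = int(np.searchsorted(np.cumsum(evr), 0.95) + 1)
```

With per-direction variances of roughly 100, 25, and 9, two components cover about 93% of the variance and three cover essentially all of it, so the 95% cutoff lands at 3.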
This means that to preserve 95% of the variance in the reduced space we need more than 170 dimensions. The new "coordinates" can be viewed as 9×19-pixel frames.
def plot_faces(instances, **options):
    fig, ax = plt.subplots(5, 8, subplot_kw=dict(xticks=[], yticks=[]))
    sizex = 9
    sizey = 19
    images = [instance.reshape(sizex, sizey) for instance in instances]
    for i, axi in enumerate(ax.flat):
        axi.imshow(images[i], cmap="gray", **options)
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Let's visualize the compression of these images.
plot_faces(X_pca,aspect="auto")
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The option ```svd_solver="randomized"``` makes PCA find the $d$ principal components faster when $d \ll n$, but $d$ is fixed. Is there any advantage to using it to compress the face images? Try it!
Applying Isomap to visualize in 2D
from sklearn.manifold import Isomap

iso = Isomap(n_components=2)
X_iso = iso.fit_transform(faces.data)
X_iso.shape

from matplotlib import offsetbox

def plot_projection(data, proj, images=None, ax=None, thumb_frac=0.5, cmap="gray"):
    ax = ax or plt.gca()
    ax.plot(proj[:, 0], proj[:, 1], '.k')
    ...
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The images further to the right are darker than those on the left (whether due to lighting or skin tone); the images at the bottom are oriented with the face to the left, and those at the top with the face to the right.
Exercises:
1. Apply LLE to the faces dataset and visualize it in a 2D map, in particular the "modified" version ([link](...
import numpy as np
from numpy import random
from matplotlib import pyplot as plt
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import make_swiss_roll

X, t = make_swiss_roll(n_samples=1000, noise=0.2, random_state=42)
axes = [-11.5, 14, -2, 23, -12, 15]
fig = plt.figure(figsize=(12,...
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
As in the SVM case, a *kernel* transformation can be applied to obtain a new *feature* space in which PCA can then be applied. Below is an example of kernel PCA with linear (equivalent to plain PCA), RBF (*radial basis function*), and *sigmoid* (i.e. logistic) kernels.
from sklearn.decomposition import KernelPCA

lin_pca = KernelPCA(n_components=2, kernel="linear", fit_inverse_transform=True)
rbf_pca = KernelPCA(n_components=2, kernel="rbf", gamma=0.0433, fit_inverse_transform=True)
sig_pca = KernelPCA(n_components=2, kernel="sigmoid", gamma=0.001, coef0=1, fit_inverse_transfor...
/usr/local/lib/python3.6/dist-packages/sklearn/utils/extmath.py:516: RuntimeWarning: invalid value encountered in multiply v *= signs[:, np.newaxis]
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Selecting a kernel and optimizing hyperparameters
Since these are unsupervised algorithms, there is no "obvious" way to measure their performance. However, dimensionality reduction is often a preparatory step for another supervised learning task. In that case it is possible to use ```GridSe...
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

y = t > 6.9
clf = Pipeline([
    ("kpca", KernelPCA(n_components=2)),
    ("log_reg", LogisticRegression(solver="liblinear"))
])
param_grid = [{
    "kpca__gamma": ...
{'kpca__gamma': 0.043333333333333335, 'kpca__kernel': 'rbf'}
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Exercise: vary the cutoff value on ```t``` and see whether it makes any difference to the ideal kernel and hyperparameters.
Inverting the transformation and the reconstruction error
Another option would be to choose the kernel and hyperparameters with the lowest reconstruction error. The following code, with the option ```fit_inverse_transform=True``...
rbf_pca = KernelPCA(n_components=2, kernel="rbf", gamma=13./300., fit_inverse_transform=True)
X_reduced = rbf_pca.fit_transform(X)
X_preimage = rbf_pca.inverse_transform(X_reduced)
X_preimage.shape

axes = [-11.5, 14, -2, 23, -12, 15]
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(111, pr...
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
It is then possible to compute the "error" (MSE) between the reconstructed dataset and the original.
from sklearn.metrics import mean_squared_error as mse
print(mse(X, X_preimage))
32.79523578725337
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Examples of applications of probability distributions
Binomial example
An option-pricing model that tries to model the price of an asset $S(t)$ in a simplified way, instead of using stochastic differential equations. According to this simplified model, given the current asset price $S(0)=...
# Import the libraries used in all the simulations
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st  # statistics library
from math import factorial as fac  # import the factorial operation
from scipy.special import comb  # import the combination function
%matplotlib inline
# ...
_____no_output_____
MIT
TEMA-2/Clase12_EjemplosDeAplicaciones.ipynb
kitziafigueroa/SPF-2019-II
Exercise
Reference problem: Introduction to Operations Research (Chap. 10.1, pp. 471 and 1118)
> Download the exercise from the following link:
> https://drive.google.com/file/d/19GvzgEmYUNXrZqlmppRyW5t0p8WfUeIf/view?usp=sharing
![imagen.png](attachment:imagen.png)
######### Case study 1 ################
up = 44
sigma = np.sqrt(9)
d = 47
P = st.norm(up, sigma).cdf(d)
print('P(T<=d)=', P)
P2 = st.beta
P(T<=d)= 0.8413447460685429
MIT
TEMA-2/Clase12_EjemplosDeAplicaciones.ipynb
kitziafigueroa/SPF-2019-II
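The probability computed above is the normal CDF $\Phi\!\left(\frac{d-\mu}{\sigma}\right)$ with $\mu=44$, $\sigma=3$, $d=47$, i.e. $\Phi(1)$. It can be cross-checked without scipy, using only the standard-library error function:

```python
from math import erf, sqrt

mu, sigma, d = 44.0, 3.0, 47.0
z = (d - mu) / sigma  # = 1: d sits one standard deviation above the mean
# normal CDF expressed via erf: Phi(z) = (1 + erf(z / sqrt(2))) / 2
P = 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

This reproduces the printed output `P(T<=d)= 0.8413447460685429`.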
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's goin...
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-...
_____no_output_____
MIT
module3-databackedassertions/Sanjay_Krishna_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
sanjaykmenon/DS-Unit-1-Sprint-1-Dealing-With-Data
I can't seem to find a relationship because there is too much data to analyze here. I will try plotting it to see if I can get a better understanding.
import seaborn as sns
sns.pairplot(df)
_____no_output_____
MIT
module3-databackedassertions/Sanjay_Krishna_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
sanjaykmenon/DS-Unit-1-Sprint-1-Dealing-With-Data
Working with Pytrees
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb)
*Author: Vladimir Mikulik*
Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or ot...
import jax
import jax.numpy as jnp

example_trees = [
    [1, 'a', object()],
    (1, (2, 3), ()),
    [1, {'k1': 2, 'k2': (3, 4)}, 5],
    {'a': 2, 'b': (2, 3)},
    jnp.array([1, 2, 3]),
]

# Let's see how many leaves they have:
for pytree in example_trees:
    leaves = jax.tree_leaves(pytree)
    print(f"{repr(pytree):<...
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>] (1, (2, 3), ()) has 3 leaves: [1, 2, 3] [1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5] {'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3] ...
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
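The traversal `jax.tree_leaves` performs over lists, tuples, and dicts can be sketched in plain Python. This is a simplified stand-in, not JAX's implementation (JAX, for instance, iterates dict entries in sorted-key order and supports registered custom nodes):

```python
def tree_leaves(pytree):
    """Collect leaves from nested lists/tuples/dicts, depth-first."""
    if isinstance(pytree, (list, tuple)):
        return [leaf for child in pytree for leaf in tree_leaves(child)]
    if isinstance(pytree, dict):
        # plain insertion order here, for simplicity
        return [leaf for v in pytree.values() for leaf in tree_leaves(v)]
    return [pytree]  # anything else is a leaf

leaves = tree_leaves([1, {'k1': 2, 'k2': (3, 4)}, 5])
```

On the third example tree above this yields the same five leaves the notebook prints: `[1, 2, 3, 4, 5]`.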
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees.
Why pytrees?
In machine learning, some places where you commonly find pytrees are:
* Model parameters
* Dataset entries
* RL agent observations
They also often arise naturally when working in bulk with da...
list_of_lists = [
    [1, 2, 3],
    [1, 2],
    [1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
To use functions with more than one argument, use `jax.tree_multimap`:
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc.
Example: ML model parameters
A simple example of training an MLP displays some ways in which pytree operations come in useful:
import numpy as np

def init_mlp_params(layer_widths):
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        params.append(
            dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
                 biases=np.ones(shape=(n_out,))
            )
        )
    return params

params = in...
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
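The init cell above is truncated at the call site; a runnable restatement is below, with `[1, 128, 128, 1]` as an assumed layer-width list (the notebook's actual widths are not visible). The parameter pytree is a list of dicts, one per layer:

```python
import numpy as np

def init_mlp_params(layer_widths):
    """He-style init: one {'weights', 'biases'} dict per consecutive width pair."""
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        params.append(
            dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2 / n_in),
                 biases=np.ones(shape=(n_out,)))
        )
    return params

params = init_mlp_params([1, 128, 128, 1])  # hypothetical widths
# the shape check the text performs with jax.tree_map, done by hand:
shapes = [{k: v.shape for k, v in layer.items()} for layer in params]
```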
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
jax.tree_map(lambda x: x.shape, params)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Now, let's train our MLP:
def forward(params, x):
    *hidden, last = params
    for layer in hidden:
        x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
    return x @ last['weights'] + last['biases']

def loss_fn(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

LEARNING_RATE = 0.0001

@jax.jit
def update(params, x, y):
    gr...
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Custom pytree nodes
So far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
class MyContainer:
    """A named container."""
    def __init__(self, name: str, a: int, b: int, c: int):
        self.name = name
        self.a = a
        self.b = b
        self.c = c

jax.tree_leaves([
    MyContainer('Alice', 1, 2, 3),
    MyContainer('Bob', 4, 5, 6)
])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
jax.tree_map(lambda x: x + 1, [
    MyContainer('Alice', 1, 2, 3),
    MyContainer('Bob', 4, 5, 6)
])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
from typing import Tuple, Iterable

def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
    """Returns an iterable over container contents, and aux data."""
    flat_contents = [container.a, container.b, container.c]
    # we don't want the name to appear as a child, so it is auxiliary data.
    # auxiliary data ...
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
from typing import NamedTuple, Any

class MyOtherContainer(NamedTuple):
    name: str
    a: Any
    b: Any
    c: Any

# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
    MyOtherContainer('Alice', 1, 2, 3),
    MyOtherContainer('Bob', 4, 5, 6)
])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way.
Common pytree gotchas and patterns
Gotchas
Mistaking nodes for leaves
A common problem to look out for is accidentally introducing tree nodes instead of lea...
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]

# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`.The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid ...
jax.tree_leaves([None, None, None])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Patterns
Transposing trees
If you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
def tree_transpose(list_of_trees):
    """Convert a list of trees of identical structure into a single tree of lists."""
    return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)

# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
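For the dict-of-scalars case above, the same row-major to column-major transpose can be sketched in plain Python, without JAX, which makes the mechanics of `lambda *xs: list(xs)` concrete (this handles only dicts of identical structure, not general pytrees):

```python
def dict_transpose(list_of_dicts):
    """Turn a list of dicts (rows) into a dict of lists (columns)."""
    # each key collects the values it has across all rows
    return {k: [d[k] for d in list_of_dicts] for k in list_of_dicts[0]}

episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
columns = dict_transpose(episode_steps)
```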
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose but allows you to specify the structure of the inner and outer pytree for more flexibility:
jax.tree_transpose(
    outer_treedef = jax.tree_structure([0 for e in episode_steps]),
    inner_treedef = jax.tree_structure(episode_steps[0]),
    pytree_to_transpose = episode_steps
)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Setup
from google.colab import drive
drive.mount('/content/drive')

import os
os.chdir('/content/drive/My Drive/Colab Notebooks/PyTorch/data/pycorrector-words/pycorrector-master-new-abs')
!pip install -r requirements.txt
!pip install pyltp
import pycorrector
_____no_output_____
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Test results
sent, detail = pycorrector.correct('我是你的眼')
print(sent, detail)
sentences = [
    '他们都很饿了,需要一些食物来充饥',
    '关于外交事务,我们必须十分谨慎才可以的',
    '他们都很饿了,需要一些事物来充饥',
    '关于外交事物,我们必须十分谨慎才可以的',
    '关于外交食务,我们必须十分谨慎才可以的',
    '这些方法是非常实用的',
    '这些方法是非常食用的',
    '高...
[('关于', 0, 2), ('外交', 2, 4), ('事务', 4, 6), (',', 6, 7), ('我们', 7, 9), ('必须', 9, 11), ('十分', 11, 13), ('谨慎', 13, 15), ('才', 15, 16), ('可以', 16, 18), ('的', 18, 19)] ngram: n=2 [-3.050492286682129, -7.701910972595215, -6.242913246154785, -6.866119384765625, -5.359715938568115, -6.163232326507568, -7.367890357971191, -6.52...
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Correction debugging (unrelated to the results)
import jieba
words = '权力和义务是对等的'
word = jieba.cut(words)
print(' '.join(word))

!pip install pyltp
import os
from pyltp import Segmentor

LTP_DATA_DIR = '/content/drive/My Drive/Colab Notebooks/PyTorch/data/ltp_data_v3.4.0'
cws_model_path = os.path.join(LTP_DATA_DIR, 'cws.model')
segmentor = Segmentor()
segmentor.load(cws_model...
_____no_output_____
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Week 5 Quiz Perrin Anto - paj2117
# import the datasets module from sklearn
from sklearn import datasets
# use datasets.load_boston() to load the Boston housing dataset
boston = datasets.load_boston()
# print the description of the dataset in boston.DESCR
print(boston.DESCR)
# copy the dataset features from boston.data to X
X = boston.data
# copy the d...
_____no_output_____
CC0-1.0
weekly_quiz/Week_5_Quiz-paj2117.ipynb
perrindesign/data-science-class
from google.colab import drive
drive.mount('/gdrive')

import cv2
import numpy as np
from google.colab.patches import cv2_imshow

circles = cv2.imread('/gdrive/My Drive/Colab Notebooks/opencv/circles.png')
cv2_imshow(circles)
blue_channel = circles[:,:,0]
green_channel = circles[:,:,1]
red_channel = circles[:,:,2]
cv2_im...
_____no_output_____
MIT
opencv_class_2.ipynb
hrnn/image-processing-practice
Creating a client
import boto3
ec2 = boto3.resource('ec2')  # high level client
instances = ec2.instances.all()
for i in instances:
    print(i)

i1 = ec2.Instance(id='i-0cda56764352ef50e')
tag = i1.tags
print(tag)
next((t['Value'] for t in i1.tags if t['Key'] == 'Name'), None)
b = next((t['Value'] for t in i1.tags if t['Key'] == 'dd'), No...
i-0cda56764352ef50e False i-0e0c4fa77f5a678b2 True i-07e52d2fbc2ebd266 False i-0d022de22510a69b7 False i-0e701a6507dbae898 False
MIT
aws/python/AWS boto3 ec2 various test.ipynb
honux77/practice
Methodology
Objective
**Use FAERS data on drug safety to identify possible risk factors associated with patient mortality and other serious adverse events associated with approved uses of a drug or drug class**
Data
**_Outcome table_**
1. Start with outcome_c table to define unit of analysis (primaryid)
2. Reshape outco...
# read outcome_c.csv & drop unnecessary fields
infile = '../input/Outc20Q1.csv'
cols_in = ['primaryid', 'outc_cod']
df = pd.read_csv(infile, usecols=cols_in)
print(df.head(), '\n')
print(f'Total number of rows: {len(df):,}\n')
print(f'Unique number of primaryids: {df.primaryid.nunique():,}')
# distribution of outcomes fr...
_____no_output_____
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
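The long-to-wide reshape of the outcome table described above (one row per `primaryid`, one indicator column per `outc_cod`) can be sketched with `pd.get_dummies`; the three-row frame is a hypothetical stand-in for the real file, and `n_outc` is the per-patient outcome count the pipeline uses later:

```python
import pandas as pd

# hypothetical mini-frame mirroring the outcome table's long format
outc = pd.DataFrame({'primaryid': [1, 1, 2],
                     'outc_cod': ['DE', 'HO', 'HO']})

# one indicator column per outcome code, then collapse to one row per primaryid
wide = pd.get_dummies(outc, columns=['outc_cod'],
                      prefix='outc_cod', prefix_sep='__')
wide = wide.groupby('primaryid').max().reset_index()
# number of distinct outcomes reported for each patient
wide['n_outc'] = wide.filter(like='outc_cod__').sum(axis=1)
```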
Data Pipeline - Demo Table
# step 0: read demo.csv & check fields for missing values
infile = '../input/DEMO20Q1.csv'
#%timeit df_demo = pd.read_csv(infile)  # 1 loop, best of 5: 5.19 s per loop
df_demo = pd.read_csv(infile)
print(df_demo.columns, '\n')
print(f'Percent missing by column:\n{(pd.isnull(df_demo).sum()/len(df_demo))*100}')
# step 1: e...
KG 65844 LBS 72 Name: wt_cod, dtype: int64 count mean std min 25% 50% 75% max wt_cod KG 65844.0 73.377305 26.078758 0.0 59.00 72.00 86.26 720.18 LBS 72.0 171.151389 60.3161...
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
Insight: no correlation between wt and age, and the age range looks wrong. Check the age distributions.
# step 4: check age fields
# age_grp
print('age_grp')
print(df_demo_outc.age_grp.value_counts(), '\n')
# age_cod
print('age_cod')
print(df_demo_outc.age_cod.value_counts(), '\n')
# age
print('age')
print(df_demo_outc.groupby(['age_grp', 'age_cod'])['age'].describe())
age_grp A 17048 E 8674 N 1004 C 626 T 503 I 344 Name: age_grp, dtype: int64 age_cod YR 168732 DY 2289 MON 1434 DEC 1377 WK 134 HR 11 Name: age_cod, dtype: int64 age count mean std min 25% 50% 75% \ age_grp age_co...
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
age_grp, age_cod, age: Distributions by age group & code look reasonable. Create age in yrs.
age_grp
* N - Neonate
* I - Infant
* C - Child
* T - Adolescent (teen?)
* A - Adult
* E - Elderly
age_cod
* DEC - decade (yrs = 10*DEC)
* YR - year (yrs = 1*YR)
* MON - month (yrs = MON/12)
* WK - week (yrs = WK/52)
* DY - day (yrs = DY/3...
# step 5: calculate age_yrs and check corr with wt_lbs
df_demo_outc['age_yrs'] = np.where(df_demo_outc['age_cod']=='DEC', df_demo_outc['age']*10,
                           np.where(df_demo_outc['age_cod']=='MON', df_demo_outc['age']/12,
                           np.where(df_demo_outc['age_cod']=='WK', df_...
age_yrs count mean std min 25% \ age_grp age_cod A DEC 73.0 44.246575 13.114645 20.000000 30.000000 MON 1.0 1.583333 NaN 1.583333 1.583333 YR 10548.0...
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
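The unit conversions listed above can also be expressed as a single `map` over a code-to-factor dict, which is easier to extend than nested `np.where` calls; the five-row frame is a hypothetical example covering each listed code:

```python
import pandas as pd

# hypothetical rows, one per age_cod listed in the table above
ages = pd.DataFrame({'age': [5.0, 30.0, 18.0, 26.0, 730.0],
                     'age_cod': ['DEC', 'YR', 'MON', 'WK', 'DY']})

# years per unit of each age code
to_years = {'DEC': 10.0, 'YR': 1.0, 'MON': 1 / 12, 'WK': 1 / 52, 'DY': 1 / 365}
ages['age_yrs'] = ages['age'] * ages['age_cod'].map(to_years)
```

Codes absent from the dict (e.g. `HR`, which appears in the value counts) would map to `NaN`, which makes unhandled units easy to spot.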
Halis checked and wt in 400-800 range (and max wt of 1,400 lbs) is correct
# review data where wt_lbs > 800 lbs?
print(df_demo_outc[df_demo_outc['wt_lbs'] > 800])

# step 6: Number of AE's reported in 2020Q1 by manufacturer
print('Number of patients with adverse events by manufacturer reported in 2020Q1 from DEMO table:')
print(df_demo_outc.mfr_sndr.value_counts())

# step 7: save updated fi...
Index(['primaryid', 'caseversion', 'i_f_code', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt', 'rept_cod', 'mfr_num', 'mfr_sndr', 'age', 'age_cod', 'age_grp', 'sex', 'e_sub', 'wt', 'wt_cod', 'rept.dt1', 'occp_cod', 'reporter_country', 'occr_country', 'outc_cod__CA', 'outc_cod__DE', 'outc_cod...
ML Pipeline: Preprocessing
# step 0: check cat vars for one-hot coding cat_lst = ['i_f_code','rept_cod','sex','occp_cod'] [print(df_demo_outc[x].value_counts(),'\n') for x in cat_lst] print(df_demo_outc[cat_lst].describe(),'\n') # sex, occp_cod have missing values # step 1: create one-hot dummies for multilabel outcomes cat_cols = ['i_f_code'...
Index(['primaryid', 'caseversion', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt', 'mfr_num', 'mfr_sndr', 'age', 'age_cod', 'age_grp', 'e_sub', 'wt', 'wt_cod', 'rept.dt1', 'reporter_country', 'occr_country', 'outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'out...
check sklearn for imputation options
# step 2: use means to impute the missing values of the features with missing records # calculate percent missing print(df.columns,'\n') print(f'Percent missing by column:\n{(pd.isnull(df).sum()/len(df))*100}') num_inputs = ['n_outc', 'wt_lbs', 'age_yrs'] cat_inputs = ['n_outc', 'wt_lbs', 'age_yrs', 'i_f_code__I', '...
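The mean-imputation step above can be sketched with plain pandas (hypothetical columns standing in for `wt_lbs` / `age_yrs`; the notebook may use sklearn's imputers instead):

```python
import numpy as np
import pandas as pd

# Hypothetical frame with missing numeric values.
df_demo = pd.DataFrame({"wt_lbs": [150.0, np.nan, 200.0],
                        "age_yrs": [30.0, 40.0, np.nan]})

# Mean imputation: replace each column's NaNs with that column's mean.
df_imputed = df_demo.fillna(df_demo.mean())
```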
STOPPED HERE - 1.13.2021

ML Pipeline: Model Selection
### define functions for evaluating each of 8 types of supervised learning algorithms def evaluate_model(predictors, targets, model, param_dict, passes=500): seed = int(round(random()*1000,0)) print(seed) # specify minimum test MSE, best hyperparameter set test_err = [] min_test_err = 1e1...
988 Pass 1/1 for model <class 'sklearn.ensemble.forest.RandomForestClassifier'>
STOPPED HERE - 1.12.2021

TODOs:
1. Multicore processing: Setup Dask for multicore processing in Jupyter Notebook
2. Distributed computing: Check Dask Distributed for local cluster setup
from joblib import dump, load dump(rf, 'binary_rf.obj') # rf_model features2 = pd.DataFrame(data=rf.feature_importances_, index=data.columns) features2.sort_values(by=0,ascending=False, inplace=True) print(features2[:50]) import seaborn as sns ax_rf = sns.barplot(x=features2.index, y=features2.iloc[:,0], order=featu...
END BG RF ANALYSIS - 8.31.2020

OTHER MODELS NOT RUN
# LASSO lasso = evaluate_model(data, Lasso, {'alpha': np.arange(0, 1.1, 0.001), 'normalize': [True], 'tol' : [1e-3, 1e-4, 1e-5], 'max_iter': [1000, 4000, 7000]}, passes=250) # Ridge regression ridg...
Problem Simulation Tutorial
import pyblp import numpy as np import pandas as pd pyblp.options.digits = 2 pyblp.options.verbose = False pyblp.__version__
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
Before configuring and solving a problem with real data, it may be a good idea to perform Monte Carlo analysis on simulated data to verify that it is possible to accurately estimate model parameters. For example, before configuring and solving the example problems in the prior tutorials, it may have been a good idea to...
id_data = pyblp.build_id_data(T=50, J=20, F=10)
Next, we'll create an :class:`Integration` configuration to build agent data according to a Gauss-Hermite product rule that exactly integrates polynomials of degree $2 \times 9 - 1 = 17$ or less.
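As a one-dimensional sanity check (a sketch only; pyblp builds the multidimensional product rule itself), numpy's Gauss-Hermite nodes show the claimed exactness:

```python
import numpy as np

# A 9-node Gauss-Hermite rule is exact for polynomials of degree
# 2 * 9 - 1 = 17 against the weight exp(-x**2).
nodes, weights = np.polynomial.hermite.hermgauss(9)

# Integral of x**4 * exp(-x**2) over the real line is (3/4) * sqrt(pi);
# degree 4 <= 17, so the quadrature should match to machine precision.
approx = np.sum(weights * nodes**4)
exact = 0.75 * np.sqrt(np.pi)
```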
integration = pyblp.Integration('product', 9) integration
We'll then pass these data to :class:`Simulation`. We'll use :class:`Formulation` configurations to create an $X_1$ that consists of a constant, prices, and an exogenous characteristic; an $X_2$ that consists only of the same exogenous characteristic; and an $X_3$ that consists of the common exogenous characteristic an...
simulation = pyblp.Simulation( product_formulations=( pyblp.Formulation('1 + prices + x'), pyblp.Formulation('0 + x'), pyblp.Formulation('0 + x + z') ), beta=[1, -2, 2], sigma=1, gamma=[1, 4], product_data=id_data, integration=integration, seed=0 ) simulation
When :class:`Simulation` is initialized, it constructs :attr:`Simulation.agent_data` and simulates :attr:`Simulation.product_data`. The :class:`Simulation` can be further configured with other arguments that determine how unobserved product characteristics are simulated and how marginal costs are specified. At this stage...
simulation_results = simulation.replace_endogenous() simulation_results
Now, we can try to recover the true parameters by creating and solving a :class:`Problem`. The convenience method :meth:`SimulationResults.to_problem` constructs some basic "sums of characteristics" BLP instruments that are functions of all exogenous numerical variables in the problem. In this example, excluded demand-...
problem = simulation_results.to_problem() problem
We'll choose starting values that are half the true parameters so that the optimization routine has to do some work. Note that since we're jointly estimating the supply side, we need to provide an initial value for the linear coefficient on prices because this parameter cannot be concentrated out of the problem (unlike...
results = problem.solve( sigma=0.5 * simulation.sigma, pi=0.5 * simulation.pi, beta=[None, 0.5 * simulation.beta[1], None], optimization=pyblp.Optimization('l-bfgs-b', {'gtol': 1e-5}) ) results
The parameters seem to have been estimated reasonably well.
np.c_[simulation.beta, results.beta] np.c_[simulation.gamma, results.gamma] np.c_[simulation.sigma, results.sigma]
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analo...
import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt from __future__ import print_function %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gr...
Train data shape: (49000, 3073) Train labels shape: (49000,) Validation data shape: (1000, 3073) Validation labels shape: (1000,) Test data shape: (1000, 3073) Test labels shape: (1000,) dev data shape: (500, 3073) dev labels shape: (500,)
MIT
assignment1/softmax.ipynb
rahul1990gupta/bcs231n
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
# First implement the naive softmax loss function with nested loops. # Open the file cs231n/classifiers/softmax.py and implement the # softmax_loss_naive function. from cs231n.classifiers.softmax import softmax_loss_naive import time # Generate a random softmax weight matrix and use it to compute the loss. W = np.ran...
loss: 2.339283 sanity check: 2.302585
Inline Question 1: Why do we expect our loss to be close to -log(0.1)? Explain briefly.

**Your answer:** *Because the weights are random, each of the 10 classes is predicted with roughly equal probability, so the correct class gets probability ~0.1 and the expected cross-entropy loss is -log(0.1).*
# Complete the implementation of softmax_loss_naive and implement a (naive) # version of the gradient that uses nested loops. loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0) # As we did for the SVM, use numeric gradient checking as a debugging tool. # The numeric gradient should be close to the analytic gradient...
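For reference, a vectorized, numerically stable softmax loss can be sketched as follows (a sketch, not the official cs231n solution; the row-max shift is the standard log-sum-exp trick to avoid overflow). With zero weights it reproduces the log(C) sanity check discussed above:

```python
import numpy as np

def softmax_loss(W, X, y):
    """Average cross-entropy loss for scores X @ W (no regularization)."""
    scores = X @ W
    # Shift each row by its max before exponentiating, for numerical stability.
    scores -= scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # Pick out the log-probability of each example's correct class.
    return -log_probs[np.arange(len(y)), y].mean()
```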
`timeseries` package for fastai v2
> **`timeseries`** is a timeseries classification and regression package for fastai v2.
> It mimics the fastai v2 vision module (fastai2.vision).
> This notebook is a tutorial that builds and trains an end-to-end model on a timeseries dataset.
> The example dataset is the NATOPS dataset (see de...
!pip install git+https://github.com/fastai/fastai2.git
Collecting git+https://github.com/fastai/fastai2.git Cloning https://github.com/fastai/fastai2.git to /tmp/pip-req-build-icognque Running command git clone -q https://github.com/fastai/fastai2.git /tmp/pip-req-build-icognque Collecting fastcore Downloading https://files.pythonhosted.org/packages/5d/e4/62d66b9530a...
Apache-2.0
index.ipynb
Massachute/TS
Installing `timeseries` package from github
!pip install git+https://github.com/ai-fast-track/timeseries.git
Collecting git+https://github.com/ai-fast-track/timeseries.git Cloning https://github.com/ai-fast-track/timeseries.git to /tmp/pip-req-build-2010puda Running command git clone -q https://github.com/ai-fast-track/timeseries.git /tmp/pip-req-build-2010puda Requirement already satisfied: matplotlib in /usr/local/lib/p...
*pip Installing - End Here*

`Usage`
%reload_ext autoreload %autoreload 2 %matplotlib inline from fastai2.basics import * # hide # Only for Windows users because symlink to `timeseries` folder is not recognized by Windows import sys sys.path.append("..") from timeseries.all import *
Tutorial on timeseries package for fastai v2

Example: NATOPS dataset

Right Arm vs Left Arm (3: 'Not clear' Command (see picture here above))

Description

The data is generated by sensors on the hands, elbows, wrists and thumbs. The data are the x, y, z coordinates for each of the eight locations. The order of the data...
dsname = 'NATOPS' #'NATOPS', 'LSST', 'Wine', 'Epilepsy', 'HandMovementDirection' # url = 'http://www.timeseriesclassification.com/Downloads/NATOPS.zip' path = unzip_data(URLs_TS.NATOPS) path
Why do I have to concatenate train and test data? Both the train and test datasets contain 180 samples each. We concatenate them to form one big dataset, then split it into train and valid sets using our own split percentage (20%, 30%, or whatever number you see fit).
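The concatenate-then-resplit idea can be sketched with plain numpy (hypothetical arrays standing in for the NATOPS samples; the notebook itself uses fastai's splitters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical train/test sets of 180 samples each, as in NATOPS.
train, test = rng.normal(size=(180, 3)), rng.normal(size=(180, 3))

# Concatenate into one big dataset, then re-split with our own percentage.
full = np.concatenate([train, test])
idx = rng.permutation(len(full))
n_valid = int(0.2 * len(full))          # 20% validation split
valid_idx, train_idx = idx[:n_valid], idx[n_valid:]
```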
fname_train = f'{dsname}_TRAIN.arff' fname_test = f'{dsname}_TEST.arff' fnames = [path/fname_train, path/fname_test] fnames data = TSData.from_arff(fnames) print(data) items = data.get_items() idx = 1 x1, y1 = data.x[idx], data.y[idx] y1 # You can select any channel to display buy supplying a list of channels and pas...
Using `Datasets` class Creating a Datasets object
tfms = [[ItemGetter(0), ToTensorTS()], [ItemGetter(1), Categorize()]] # Create a dataset ds = Datasets(items, tfms, splits=splits) ax = show_at(ds, 2, figsize=(1,1))
3.0
Create a `Dataloader` objects 1st method : using `Datasets` object
bs = 128 # Normalize at batch time tfm_norm = Normalize(scale_subtype = 'per_sample_per_channel', scale_range=(0, 1)) # per_sample , per_sample_per_channel # tfm_norm = Standardize(scale_subtype = 'per_sample') batch_tfms = [tfm_norm] dls1 = ds.dataloaders(bs=bs, val_bs=bs * 2, after_batch=...
Using `DataBlock` class 2nd method : using `DataBlock` and `DataBlock.get_items()`
getters = [ItemGetter(0), ItemGetter(1)] tsdb = DataBlock(blocks=(TSBlock, CategoryBlock), get_items=get_ts_items, getters=getters, splitter=RandomSplitter(seed=seed), batch_tfms = batch_tfms) tsdb.summary(fnames) # num_workers=0 is Microsoft...
3rd method : using `DataBlock` and passing `items` object to the `DataBlock.dataloaders()`
getters = [ItemGetter(0), ItemGetter(1)] tsdb = DataBlock(blocks=(TSBlock, CategoryBlock), getters=getters, splitter=RandomSplitter(seed=seed)) dls3 = tsdb.dataloaders(data.get_items(), batch_tfms=batch_tfms, num_workers=0, device=default_device()) dls3.show_batch(max_n=9, chs=ran...
4th method : using `TSDataLoaders` class and `TSDataLoaders.from_files()`
dls4 = TSDataLoaders.from_files(fnames, batch_tfms=batch_tfms, num_workers=0, device=default_device()) dls4.show_batch(max_n=9, chs=range(0,12,3))
Train Model
# Number of channels (i.e. dimensions in ARFF and TS files jargon) c_in = get_n_channels(dls2.train) # data.n_channels # Number of classes c_out= dls2.c c_in,c_out
Create model
model = inception_time(c_in, c_out).to(device=default_device()) model
Create Learner object
#Learner opt_func = partial(Adam, lr=3e-3, wd=0.01) loss_func = LabelSmoothingCrossEntropy() learn = Learner(dls2, model, opt_func=opt_func, loss_func=loss_func, metrics=accuracy) print(learn.summary())
Sequential (Input shape: ['64 x 24 x 51']) ================================================================ Layer (type) Output Shape Param # Trainable ================================================================ Conv1d 64 x 32 x 51 29,952 True ___________________...
LR find
lr_min, lr_steep = learn.lr_find() lr_min, lr_steep
Train
#lr_max=1e-3 epochs=30; lr_max=lr_steep; pct_start=.7; moms=(0.95,0.85,0.95); wd=1e-2 learn.fit_one_cycle(epochs, lr_max=lr_max, pct_start=pct_start, moms=moms, wd=wd) # learn.fit_one_cycle(epochs=20, lr_max=lr_steep)
Plot loss function
learn.recorder.plot_loss()
Show results
learn.show_results(max_n=9, chs=range(0,12,3)) #hide from nbdev.export import notebook2script # notebook2script() notebook2script(fname='index.ipynb') # #hide # from nbdev.export2html import _notebook2html # # notebook2script() # _notebook2html(fname='index.ipynb')
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score from sklear...
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
As we can see from the above cell, there is no correlation between **water** and our target variable.
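A correlation claim like this can be checked numerically with `np.corrcoef` (synthetic data here; in the notebook you would pass the actual water and strength columns):

```python
import numpy as np

rng = np.random.default_rng(0)
water = rng.normal(size=200)
strength = rng.normal(size=200)         # independent -> correlation near 0

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(water, strength)[0, 1]
```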
sns.boxplot(x='Age_day', y = 'Cement',data=df) sns.regplot(x='Age_day', y = 'Cement',data=df) X = df.drop('Concrete_compressive_strength',axis=1) y = df.Concrete_compressive_strength X.head() y.head() X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=.2, random_state=seed) X_train.shape ,y_train.shape
In our analysis we noticed the presence of outliers; although there are not many, we will use RobustScaler from sklearn to scale the data. RobustScaler removes the median and scales the data according to the interquartile range (25th-75th percentile), which makes it robust to outliers.
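What RobustScaler computes can be illustrated by hand with numpy (a sketch of the default behavior, not sklearn itself):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # one feature with an outlier

# RobustScaler's default transform: subtract the median, divide by the IQR
# (75th minus 25th percentile), so a single outlier barely affects the scale.
median = np.median(x)
iqr = np.percentile(x, 75) - np.percentile(x, 25)
x_scaled = (x - median) / iqr
```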
scale = RobustScaler() # note we have to fit_transform only on the training data. On your test data you only have to transform. X_train = scale.fit_transform(X_train) X_test = scale.transform(X_test) X_train
Model creation

Linear Regression
lr = LinearRegression() lr.fit(X_train,y_train) pred_lr = lr.predict(X_test) pred_lr[:10] mae_lr = mean_absolute_error(y_test,pred_lr) r2_lr = r2_score(y_test,pred_lr) print(f'Mean absolute error of linear regression is {mae_lr}') print(f'R2 score of Linear Regression is {r2_lr}')
Mean absolute error of linear regression is 7.745559243921439 R2 score of Linear Regression is 0.6275531792314843
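Both metrics reported above can be reproduced by hand with numpy, matching sklearn's definitions of `mean_absolute_error` and `r2_score`:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average of |y - y_hat|."""
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot
```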
**Graph for linear regression**: the graph below shows the relationship between the actual and the predicted values.
fig, ax = plt.subplots() ax.scatter(pred_lr, y_test) ax.plot([y_test.min(), y_test.max()],[y_test.min(), y_test.max()], color = 'red', marker = "*", markersize = 10)
Decision tree Regression
dt = DecisionTreeRegressor(criterion='mae') dt.fit(X_train,y_train) pred_dt = dt.predict(X_test) mae_dt = mean_absolute_error(y_test,pred_dt) r2_dt = r2_score(y_test,pred_dt) print(f'Mean absolute error of linear regression is {mae_dt}') print(f'R2 score of Decision tree regressor is {r2_dt}') fig, ax = plt.subplots(...