Random Forest Regression
from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() rf.fit(X_train, y_train) # prediction pred_rf = rf.predict(X_test) mae_rf = mean_absolute_error(y_test, pred_rf) r2_rf = r2_score(y_test, pred_rf) print(f'Mean absolute error of Random forest regression is {mae_rf}') print(f'R2 score of Ra...
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Lasso Regression
laso = Lasso() laso.fit(X_train, y_train) pred_laso = laso.predict(X_test) mae_laso = mean_absolute_error(y_test, pred_laso) r2_laso = r2_score(y_test, pred_laso) print(f'Mean absolute error of Lasso regression is {mae_laso}') print(f'R2 score of Lasso regression is {r2_laso}') fig, ax = plt.subplots() ...
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Stacking Regressor: combines multiple regression models with a final estimator that produces the final prediction. In our case we used k-fold cross-validation to make sure that the model is not overfitting.
estimators = [('lr',LinearRegression()), ('gb',GradientBoostingRegressor()),\ ('dt',DecisionTreeRegressor()), ('laso',Lasso())] from sklearn.model_selection import KFold kf = KFold(n_splits=10,shuffle=True, random_state=seed) stacking = StackingRegressor(estimators=estimators, final_estimator=RandomFore...
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
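The stacking setup described above can be reconstructed as a self-contained sketch. The synthetic data, the fixed `seed`, and the train/test split below are illustrative assumptions; the notebook defines its own data and seed elsewhere.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import r2_score

seed = 42  # assumed; stands in for the notebook's own seed
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=seed)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)

estimators = [('lr', LinearRegression()),
              ('gb', GradientBoostingRegressor(random_state=seed)),
              ('dt', DecisionTreeRegressor(random_state=seed)),
              ('laso', Lasso())]

# k-fold cross-validation inside the stacker guards against overfitting
kf = KFold(n_splits=10, shuffle=True, random_state=seed)
stacking = StackingRegressor(estimators=estimators,
                             final_estimator=RandomForestRegressor(random_state=seed),
                             cv=kf)
stacking.fit(X_train, y_train)
print(r2_score(y_test, stacking.predict(X_test)))
```

Each base estimator is fit on the k-fold training splits, and the random forest meta-model is trained on their out-of-fold predictions.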
Working with XSPEC models: One of the most powerful aspects of **XSPEC** is its huge modeling community. While in 3ML we are focused on building a powerful and modular data analysis tool, we cannot neglect the need for the many models that already exist in **XSPEC**, and thus provide support for them via **astromodel...
%matplotlib notebook import matplotlib.pyplot as plt import numpy as np
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
We do not load the models by default as this takes some time and 3ML should load quickly. However, if you need the **XSPEC** models, they are imported from astromodels like this:
from astromodels.xspec.factory import *
Loading xspec models...done
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The models are indexed with *XS_* before the typical **XSPEC** model names.
plaw = XS_powerlaw() phabs = XS_phabs() phabs
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The spectral models behave just as any other **astromodels** spectral model and can be used in combination with other **astromodels** spectral models.
from astromodels import Powerlaw am_plaw = Powerlaw() plaw_with_abs = am_plaw*phabs fig, ax =plt.subplots() energy_grid = np.linspace(.1,10.,1000) ax.loglog(energy_grid,plaw_with_abs(energy_grid)) ax.set_xlabel('energy') ax.set_ylabel('flux')
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
XSPEC Settings: Many **XSPEC** models depend on external abundances, cross-sections, and cosmological parameters. We provide an interface to control these directly. Simply import the **XSPEC** settings like so:
from astromodels.xspec.xspec_settings import *
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
Calling the functions without arguments simply returns their current settings
xspec_abund() xspec_xsect() xspec_cosmo()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
To change the settings for abundance and cross-section, provide strings with the normal **XSPEC** naming conventions.
xspec_abund('wilm') xspec_abund() xspec_xsect('bcmc') xspec_xsect()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
To alter the cosmological parameters, one passes either the parameters that should be changed, or all three:
xspec_cosmo(H0=68.) xspec_cosmo() xspec_cosmo(H0=68.,q0=.1,lambda_0=70.) xspec_cosmo()
_____no_output_____
BSD-3-Clause
docs/notebooks/xspec_models.ipynb
ke-fang/3ML
The Extended Kalman Filter: Building on the theory of the linear Kalman filter, we now apply the Kalman filter to nonlinear problems. The extended Kalman filter (EKF) allows the prediction and update steps to be nonlinear, linearizes the system about the current estimate, and then applies the linear Kalman filter. Although algorithms with better performance on nonlinear problems exist (UKF, H_infinity), the EKF is still widely used and remains relevant.
%matplotlib inline # HTML(""" # <style> # .output_png { # display: table-cell; # text-align: center; # vertical-align: middle; # } # </style> # """)
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
Linearizing the Kalman Filter: Non-linear models. Because the Kalman filter assumes the system is linear, it cannot be applied directly to nonlinear problems. Nonlinearity can arise from two sources: nonlinearity in the process model and nonlinearity in the measurement model. For example, a falling object has a nonlinear process model because its acceleration is determined by air drag proportional to the square of its velocity, and when a radar measures a target's range and bearing, the target's position is computed with trigonometric functions, which are nonlinear, giving a nonlinear measurement ...
import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt mu, sigma = 0, 0.1 x = np.linspace(mu - 3*sigma, mu + 3*sigma, 10000) gaussian = stats.norm.pdf(x, mu, sigma) def nonlinearFunction(x): return np.sin(x) def linearFunction(x): return 0.5*x nonlinearOutput = nonlinearFunction(gau...
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
System Equations: For the linear Kalman filter, the process and measurement models can be written as $$\begin{aligned}\dot{\mathbf x} &= \mathbf{Ax} + w_x\\\mathbf z &= \mathbf{Hx} + w_z\end{aligned}$$ where $\mathbf A$ is the dynamic matrix describing the system's dynamics (in continuous time). Discretizing the equations above gives $$\begin{aligned}\bar{\mathbf x}_k &= \mathbf{F} \math...
import kf_book.ekf_internal as ekf_internal ekf_internal.plot_bicycle()
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
$$\begin{aligned} \beta &= \frac d w \tan(\alpha) \\\bar x_k &= x_{k-1} - R\sin(\theta) + R\sin(\theta + \beta) \\\bar y_k &= y_{k-1} + R\cos(\theta) - R\cos(\theta + \beta) \\\bar \theta_k &= \theta_{k-1} + \beta\end{aligned}$$ Based on the equations above, we define the state vector as $\mathbf{x}=[x, y, \theta]^T$ and the input vector as $\mathbf{u}=[v, \alpha]^T$ ...
import sympy from sympy.abc import alpha, x, y, v, w, R, theta from sympy import symbols, Matrix sympy.init_printing(use_latex="mathjax", fontsize='16pt') time = symbols('t') d = v*time beta = (d/w)*sympy.tan(alpha) r = w/sympy.tan(alpha) fxu = Matrix([[x-r*sympy.sin(theta) + r*sympy.sin(theta+beta)], [y...
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
Measurement Model: When measuring range $(r)$ and bearing ($\phi$) with a radar, we use the following sensor model, where $\mathbf p$ is the position of the landmark: $$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}\;\;\;\;\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$$$\begin{aligned}\mathbf z& = h(\bar{\mathbf x}, \mathbf p) &+ \mathcal{N}(0, R)\\&= \begin{bmatrix}\sqrt{(p_x...
import sympy from sympy.abc import alpha, x, y, v, w, R, theta px, py = sympy.symbols('p_x, p_y') z = sympy.Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)], [sympy.atan2(py-y, px-x) - theta]]) z.jacobian(sympy.Matrix([x, y, theta])) # print(sympy.latex(z.jacobian(sympy.Matrix([x, y, theta]))) from math import...
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
μΈ‘μ • λ…Έμ΄μ¦ˆλŠ” λ‹€μŒκ³Ό 같이 λ‚˜νƒ€λ‚΄μ€λ‹ˆλ‹€.$$\mathbf R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$ Implementation`FilterPy` 의 `ExtendedKalmanFilter` class λ₯Ό ν™œμš©ν•΄μ„œ EKF λ₯Ό κ΅¬ν˜„ν•΄λ³΄λ„λ‘ ν•˜κ² μŠ΅λ‹ˆλ‹€.
from filterpy.kalman import ExtendedKalmanFilter as EKF from numpy import array, sqrt, random import sympy class RobotEKF(EKF): def __init__(self, dt, wheelbase, std_vel, std_steer): EKF.__init__(self, 3, 2, 2) self.dt = dt self.wheelbase = wheelbase self.std_vel = std_vel s...
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
To compute the correct residual $y$, we normalize the bearing so that $-\pi \leq \phi < \pi$.
def residual(a, b): """ compute residual (a-b) between measurements containing [range, bearing]. Bearing is normalized to [-pi, pi)""" y = a - b y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi) if y[1] > np.pi: # move to [-pi, pi) y[1] -= 2 * np.pi return y from filte...
_____no_output_____
Apache-2.0
.ipynb_checkpoints/Extended Kalman Filter-checkpoint.ipynb
ai-robotics-kr/sensor_fusion_study
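The bearing normalization above can be checked end-to-end. This self-contained snippet repeats the residual function with illustrative sample values (the measurement numbers are made up):

```python
import numpy as np

def residual(a, b):
    """Compute residual (a - b) between measurements containing
    [range, bearing]. Bearing is normalized to [-pi, pi)."""
    y = a - b
    y[1] = y[1] % (2 * np.pi)   # force into [0, 2*pi)
    if y[1] > np.pi:            # move to [-pi, pi)
        y[1] -= 2 * np.pi
    return y

# A raw bearing difference of -6.1 rad wraps around to about +0.183 rad
y = residual(np.array([10.0, 0.1]), np.array([10.2, 6.2]))
print(y)
```

Without the normalization, the raw difference of -6.1 rad would produce a huge Kalman update for what is actually a tiny angular error.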
Method for visualizing warping over training steps
import os import imageio import numpy as np import matplotlib.pyplot as plt np.random.seed(0)
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Construct warping matrix
g = 1.02 # scaling parameter # Matrix for rotating 45 degrees rotate = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)], [np.sin(np.pi/4), np.cos(np.pi/4)]]) # Matrix for scaling along x coordinate scale_x = np.array([[g, 0], [0, 1]]) # Matrix for scaling along y coordinate scale_...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Warp grid slowly over time
# Construct 4x4 grid s = 1 # initial scale locs = [[x,y] for x in range(4) for y in range(4)] grid = s*np.array(locs) # Matrix to collect data n_steps = 50 warp_data = np.zeros([n_steps, 16, 2]) # Initial timestep has no warping warp_data[0,:,:] = grid # Warp slowly over time for i in range(1,n_steps): grid = gri...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Unwarp grid slowly over time
# Matrix to collect data unwarp_data = np.zeros([n_steps, 16, 2]) # Start with warped grid unwarp_data[0,:,:] = grid # Unwarp slowly over time for i in range(1,n_steps): grid = grid @ unwarp unwarp_data[i,:,:] = grid fig, ax = plt.subplots(1,3, figsize=(15, 5), sharex=True, sharey=True) ax[0].scatter(unwarp_d...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
High-dimensional vectors with random projection matrix
# data = [warp_data, unwarp_data] data = np.concatenate([warp_data, unwarp_data], axis=0) # Random projection matrix hidden_dim = 32 random_mat = np.random.randn(2, hidden_dim) data = data @ random_mat # Add noise to each time step sigma = 0.2 noise = sigma*np.random.randn(2*n_steps, 16, hidden_dim) data = data + noi...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Parameterize scatterplot with average "congruent" and "incongruent" distances
loc2idx = {i:(loc[0],loc[1]) for i,loc in enumerate(locs)} idx2loc = {v:k for k,v in loc2idx.items()}
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Function for computing distance matrix
def get_distances(M): n,m = M.shape D = np.zeros([n,n]) for i in range(n): for j in range(n): D[i,j] = np.linalg.norm(M[i,:] - M[j,:]) return D D = get_distances(data[0]) plt.imshow(D) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Construct same-rank groups for "congruent" and "incongruent" diagonals
c_rank = np.array([loc[0] + loc[1] for loc in locs]) # rank along "congruent" diagonal i_rank = np.array([3 + loc[0] - loc[1] for loc in locs]) # rank along "incongruent" diagonal G_idxs = [] # same-rank group for "congruent" diagonal H_idxs = [] # same-rank group for "incongruent" diagonal for i in range(7): # total ...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Function for estimating $ \alpha $ and $ \beta $ $$ \bar{x_i} = \sum_{x \in G_i} \frac{1}{n} x $$$$ \alpha_{i, i+1} = || \bar{x}_i - \bar{x}_{i+1} || $$$$ \bar{y_i} = \sum_{y \in H_i} \frac{1}{n} y $$$$ \beta_{i, i+1} = || \bar{y}_i - \bar{y}_{i+1} || $$
def get_parameters(M): # M: [16, hidden_dim] alpha = [] beta = [] for i in range(6): # total number of parameters (01,12,23,34,45,56) # alpha_{i, i+1} x_bar_i = np.mean(M[G_idxs[i],:], axis=0) x_bar_ip1 = np.mean(M[G_idxs[i+1],:], axis=0) x_dist = np.linalg.norm(x_bar_i -...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Use parameters to plot idealized 2D representations
idx2g = {} for idx in range(16): for g, group in enumerate(G_idxs): if idx in group: idx2g[idx] = g idx2h = {} for idx in range(16): for h, group in enumerate(H_idxs): if idx in group: idx2h[idx] = h def generate_grid(alpha, beta): cum_alpha = np.zeros(7) cum_bet...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Get reconstructed grid for each time step
reconstruction = np.zeros([data.shape[0], data.shape[1], 2]) for t,M in enumerate(data): alpha, beta = get_parameters(M) X = generate_grid(alpha, beta) reconstruction[t,:,:] = X t = 50 plt.scatter(reconstruction[t,:,0], reconstruction[t,:,1]) plt.show()
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Make .gif
plt.scatter(M[:,0], M[:,1]) reconstruction.shape xmin = np.min(reconstruction[:,:,0]) xmax = np.max(reconstruction[:,:,0]) ymin = np.min(reconstruction[:,:,1]) ymax = np.max(reconstruction[:,:,1]) for t,M in enumerate(reconstruction): plt.scatter(M[:,0], M[:,1]) plt.title("Reconstructed grid") plt.xlim([xmi...
_____no_output_____
Apache-2.0
notebooks/fake_simulations/Visualize_warped_learning.ipynb
MaryZolfaghar/WCSLS
Rerun jobs to achieve better magmom matching --- We will take the most magnetic slab of the OER set and apply those magmoms to the other slabs. Import Modules
import os print(os.getcwd()) import sys # ######################################################### from methods import get_df_features_targets from methods import get_df_magmoms
/mnt/f/Dropbox/01_norskov/00_git_repos/PROJ_IrOx_OER/dft_workflow/run_slabs/rerun_magmoms
MIT
dft_workflow/run_slabs/rerun_magmoms/rerun_magmoms.ipynb
raulf2012/PROJ_IrOx_OER
Read Data
df_features_targets = get_df_features_targets() df_magmoms = get_df_magmoms() df_magmoms = df_magmoms.set_index("job_id") for name_i, row_i in df_features_targets.iterrows(): tmp = 42 # ##################################################### job_id_o_i = row_i[("data", "job_id_o", "", )] job_id_oh_i = row_i[("data"...
_____no_output_____
MIT
dft_workflow/run_slabs/rerun_magmoms/rerun_magmoms.ipynb
raulf2012/PROJ_IrOx_OER
Documenting Classes: It is almost as easy to document a class as it is to document a function. Simply add docstrings to all of the class's functions, and also below the class name itself. For example, here is a simple documented class.
class Demo: """This class demonstrates how to document a class. This class is just a demonstration, and does nothing. However the principles of documentation are still valid! """ def __init__(self, name): """You should document the constructor, saying what it expects ...
Help on Demo in module __main__ object: class Demo(builtins.object) | This class demonstrates how to document a class. | | This class is just a demonstration, and does nothing. | | However the principles of documentation are still valid! | | Methods defined here: | | __init__(self, name) | ...
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Often, when you write a class, you want to hide member data or member functions so that they are only visible within an object of the class. For example, above, the `self._name` member data should be hidden, as it should only be used by the object.You control the visibility of member functions or member data using an u...
class Demo: """This class demonstrates how to document a class. This class is just a demonstration, and does nothing. However the principles of documentation are still valid! """ def __init__(self, name): """You should document the constructor, saying what it expects ...
Help on Demo in module __main__ object: class Demo(builtins.object) | This class demonstrates how to document a class. | | This class is just a demonstration, and does nothing. | | However the principles of documentation are still valid! | | Methods defined here: | | __init__(self, name) | ...
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Member functions or data that are hidden are called "private". Member functions or data that are visible are called "public". You should document all public member functions of a class, as these are visible and designed to be used by other people. It is helpful, although not required, to document all of the private mem...
class Person1: """Class that holds a person's height""" def __init__(self): """Construct a person who has zero height""" self.height = 0 class Person2: """Class that holds a person's height""" def __init__(self): """Construct a person who has zero height""" self._height =...
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
The first example is quicker to write, but it does little to protect itself against a user who attempts to use the class badly.
p = Person1() p.height = -50 p.height p.height = "cat" p.height
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
The second example takes more lines of code, but these lines are valuable as they check that the user is using the class correctly. These checks, when combined with good documentation, ensure that your classes can be safely used by others, and that incorrect use will not create difficult-to-find bugs.
p = Person2() p.setHeight(-50) p.getHeight() p.setHeight("cat") p.getHeight()
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
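Since the `Person2` cell above is truncated, here is a minimal self-contained sketch of the same pattern; the exact validation rules are assumptions for illustration:

```python
class Person2:
    """Class that holds a person's height as private data."""

    def __init__(self):
        """Construct a person who has zero height."""
        self._height = 0

    def setHeight(self, height):
        """Set the person's height, rejecting invalid values."""
        # Assumed checks: height must be a non-negative number
        if not isinstance(height, (int, float)) or height < 0:
            raise ValueError("Invalid height: %s" % height)
        self._height = height

    def getHeight(self):
        """Return the person's height."""
        return self._height

p = Person2()
p.setHeight(180)
print(p.getHeight())
```

With this design, `p.setHeight(-50)` or `p.setHeight("cat")` raises a `ValueError` instead of silently corrupting the object.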
Exercise Exercise 1Below is the completed `GuessGame` class from the previous lesson. Add documentation to this class.
class GuessGame: """ This class provides a simple guessing game. You create an object of the class with its own secret, with the aim that a user then needs to try to guess what the secret is. """ def __init__(self, secret, max_guesses=5): """Create a new guess game ...
Help on class GuessGame in module __main__: class GuessGame(builtins.object) | This class provides a simple guessing game. You create an object | of the class with its own secret, with the aim that a user | then needs to try to guess what the secret is. | | Methods defined here: | | __init__(self, secr...
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Exercise 2Below is a poorly-written class that uses public member data to store the name and age of a Person. Edit this class so that the member data is made private. Add `get` and `set` functions that allow you to safely get and set the name and age.
class Person: """Class that represents a Person, holding their name and age""" def __init__(self, name="unknown", age=0): """Construct a person with unknown name and an age of 0""" self.setName(name) self.setAge(age) def setName(self, name): """Set the person's name t...
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Exercise 3Add a private member function called `_splitName` to your `Person` class that breaks the name into a surname and first name. Add new functions called `getFirstName` and `getSurname` that use this function to return the first name and surname of the person.
class Person: """Class that represents a Person, holding their name and age""" def __init__(self, name="unknown", age=0): """Construct a person with unknown name and an age of 0""" self.setName(name) self.setAge(age) def setName(self, name): """Set the person's name t...
_____no_output_____
MIT
answers/08_class_documentation.ipynb
CCPBioSim/python_and_data_workshop
Lab 4: EM Algorithm and Single-Cell RNA-seq Data Name: Your Name Here (Your netid here) Due April 2, 2021 11:59 PM Preamble (Don't change this) Important Instructions - 1. Please implement all the *graded functions* in main.py file. Do not change function names in main.py.2. Please read the description of every gr...
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %run main.py module = Lab4()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Part 1 : Expectation-Maximization (EM) algorithm for transcript quantification. Introduction: The EM algorithm is a very helpful tool to compute maximum likelihood estimates of parameters in models that have some latent (hidden) variables. In the case of the transcript quantification problem, the model parameters we want...
n_reads=30000 n_transcripts=30 read_mapping=[] with open("read_mapping_data.txt",'r') as file : lines_reads=file.readlines() for line in lines_reads : read_mapping.append([int(x) for x in line.split(",")]) read_mapping[:10]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Rather than giving you a giant binary matrix, we encoded the read mapping data in a more concise way. read_mapping is a list of lists. The $i$th list contains the indices of the transcripts that the $i$th read maps to. Reading true abundances and transcript lengths
with open("transcript_true_abundances.txt",'r') as file : lines_gt=file.readlines() ground_truth=[float(x) for x in lines_gt[0].split(",")] with open("transcript_lengths.txt",'r') as file : lines_gt=file.readlines() tr_lengths=[float(x) for x in lines_gt[0].split(",")] ground_truth[:5] tr_lengths[:5]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 1 : expectation_maximization (10 marks) Purpose : To implement the EM algorithm to obtain abundance estimates for each transcript. E-step : In this step, we calculate the fraction of each read that is assigned to each transcript (i.e., the estimate of $Z_{ik}$). For read $i$ and transcript $k$, this is cal...
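The E- and M-steps described here can be sketched generically as follows. The exact update rule of the graded function is truncated above, so the length normalization in this sketch is an assumption:

```python
import numpy as np

def em_abundances(read_mapping, tr_lengths, n_iterations):
    """Generic EM sketch: read_mapping[i] lists the transcripts read i maps to."""
    n_transcripts = len(tr_lengths)
    rho = np.ones(n_transcripts) / n_transcripts   # uniform starting abundances
    lengths = np.array(tr_lengths, dtype=float)
    for _ in range(n_iterations):
        counts = np.zeros(n_transcripts)
        # E-step: split each read among the transcripts it maps to,
        # in proportion to the current abundance estimates
        for transcripts in read_mapping:
            idx = np.array(transcripts)
            w = rho[idx]
            counts[idx] += w / w.sum()
        # M-step: re-estimate abundances from the expected counts,
        # normalized by transcript length (assumed normalization)
        rho = counts / lengths
        rho /= rho.sum()
    return rho

# Toy example: the second read is ambiguous between transcripts 0 and 1
print(em_abundances([[0], [0, 1], [1]], [1.0, 1.0], 5))
```

On this symmetric toy input the estimates converge to equal abundances for both transcripts.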
history=module.expectation_maximization(read_mapping,tr_lengths,20) print(len(history)) print(len(history[0])) print(history[0][-5:]) print(history[1][-5:]) print(history[2][-5:])
30 21 [0.033769639494636614, 0.03381298624783303, 0.03384568373972949, 0.0338703482393148, 0.03388895326082054] [0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502] [0.0660581789629968, 0.06606927656035864, 0.0660765012689558, 0.06608120466668756, 0.0660842...
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output - ``30`` ``21`` ``[0.033769639494636614, 0.03381298624783303, 0.03384568373972948, 0.0338703482393148, 0.03388895326082054]`` ``[0.0020082674603036053, 0.0019649207071071456, 0.0019322232152109925, 0.0019075587156241912, 0.0018889536941198502]`` ``[0.0660581789629968, 0.06606927656035864, 0.06607650126895578, 0.066081204666...``
def visualize_em(history,n_iterations) : #start code here fig, ax = plt.subplots(figsize=(8,6)) for j in range(n_transcripts): ax.plot([i for i in range(n_iterations+1)],[history[j][i] - ground_truth[j] for i in range(n_iterations+1)],marker='o') #end code here visualize_em(history,20)
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Part 2 : Exploring Single-Cell RNA-seq data. In a study published in 2015, Zeisel et al. used single-cell RNA-seq data to explore the cell diversity in the mouse brain. We will explore the data used for their study. You can read more about it [here](https://science.sciencemag.org/content/347/6226/1138).
#reading single-cell RNA-seq data lines_genes=[] with open("Zeisel_expr.txt",'r') as file : lines_genes=file.readlines() lines_genes[0][:300]
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Each line in the file Zeisel_expr.txt corresponds to one gene. The columns correspond to different cells (notice that this is the opposite of how we looked at this matrix in class). The entries of this matrix correspond to the number of reads mapping to a given gene in the corresponding cell.
# reading true labels for each cell with open("Zeisel_labels.txt",'r') as file : true_labels = file.read().splitlines()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
The study also provides us with true labels for each of the cells. For each cell, the vector true_labels contains the name of the cell type. There are nine different cell types in this dataset.
set(true_labels)
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 2 : prepare_data (10 marks) : Purpose - To create a dataframe where each row corresponds to a specific cell and each column corresponds to the expression levels of a particular gene across all cells. You should name the columns as "Gene_1", "Gene_2", and so on. We will iterate through all the lines in l...
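A minimal sketch of this transformation: parse each gene's line, then transpose so rows are cells. The whitespace delimiter and the zero-based column naming are assumptions about the file format (the expected output below suggests columns start at "Gene_0"):

```python
import numpy as np
import pandas as pd

def prepare_data(lines_genes):
    """Each input line holds one gene's values across all cells;
    transpose so each row is a cell and each column a gene."""
    matrix = np.array([[float(v) for v in line.split()]
                       for line in lines_genes])
    return pd.DataFrame(matrix.T,
                        columns=["Gene_%d" % i for i in range(matrix.shape[0])])

# Toy example: 2 genes measured in 3 cells
df = prepare_data(["0 1 2", "3 4 5"])
print(df.shape)
```

The transpose is the key step: the file stores genes as rows, while the analysis wants cells as rows.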
data_df=module.prepare_data(lines_genes) print(data_df.shape) print(data_df.iloc[0:3,:5]) print(data_df.columns)
Index(['Gene_0', 'Gene_1', 'Gene_2', 'Gene_3', 'Gene_4', 'Gene_5', 'Gene_6', 'Gene_7', 'Gene_8', 'Gene_9', ... 'Gene_19962', 'Gene_19963', 'Gene_19964', 'Gene_19965', 'Gene_19966', 'Gene_19967', 'Gene_19968', 'Gene_19969', 'Gene_19970', 'Gene_19971'], dtype='object', length=19972)
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output :``(3005, 19972)```` Gene_0 Gene_1 Gene_2 Gene_3 Gene_4`` ``0 0.0 1.38629 1.38629 0.0 0.69315````1 0.0 0.69315 0.69315 0.0 0.69315````2 0.0 0.00000 1.94591 0.0 0.69315`` Graded Function 3 : identify_less_expressive_genes (10 marks) Purpose : To identify g...
drop_columns = module.identify_less_expressive_genes(data_df) print(len(drop_columns)) print(drop_columns[:10])
5120 Index(['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152', 'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173'], dtype='object')
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output : ``5120`` ``['Gene_28', 'Gene_126', 'Gene_145', 'Gene_146', 'Gene_151', 'Gene_152', 'Gene_167', 'Gene_168', 'Gene_170', 'Gene_173']`` Filtering less expressive genes: We will now create a new dataframe in which genes expressed in fewer than 25 cells will not be present.
df_new = data_df.drop(drop_columns, axis=1) df_new.head()
_____no_output_____
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Graded Function 4 : perform_pca (10 marks) Purpose - Perform Principal Component Analysis on the new dataframe and take the top 50 principal components. Input - df_new. Output - numpy array containing the top 50 principal components of the data. Note - All the values in the output should be rounded off to 5 digits after th...
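A sketch of this step with scikit-learn; the solver defaults are assumptions, and component signs can differ between sklearn versions:

```python
import numpy as np
from sklearn.decomposition import PCA

def perform_pca(df, n_components=50):
    """Project the data onto its top principal components,
    rounded to 5 decimal places as required above."""
    pca = PCA(n_components=n_components)
    return np.round(pca.fit_transform(df), 5)

# Toy check on random data with 60 features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 60))
print(perform_pca(X).shape)
```

`fit_transform` centers the data, fits the components, and returns the projections in one call.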
pca_data=module.perform_pca(df_new) print(pca_data.shape) print(type(pca_data)) print(pca_data[0:3,:5])
(3005, 50) <class 'numpy.ndarray'> [[26.97148 -2.7244 0.62163 25.90148 -6.24736] [26.49135 -1.58774 -4.79315 24.01094 -7.25618] [47.82664 5.06799 2.15177 30.24367 -3.38878]]
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output : ``(3005, 50)````````[[26.97148 -2.7244 0.62163 25.90148 -6.24736]```` [26.49135 -1.58774 -4.79315 24.01094 -7.25618]`` `` [47.82664 5.06799 2.15177 30.24367 -3.38878]]`` (Non-graded) Function 5 : perform_tsne Purpose - Perform t-SNE on the pca_data and obtain 2 t-SNE components. We will use TSNE cl...
tsne_data50 = module.perform_tsne(pca_data) print(tsne_data50.shape) print(tsne_data50[:3,:])
(3005, 2) [[ 19.031317 -45.3434 ] [ 19.188553 -44.945473] [ 17.369982 -47.997364]]
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Expected Output :(These numbers can deviate a bit depending on your sklearn)``(3005, 2)````[[ 15.069608 -47.535984]```` [ 15.251476 -47.172073]`` `` [ 13.3932 -49.909657]]``
fig, ax = plt.subplots(figsize=(12,8)) sns.scatterplot(x=tsne_data50[:,0], y=tsne_data50[:,1], hue=true_labels) plt.show()
/usr/local/lib/python3.9/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. warnings.warn...
MIT
ECE365/genomics/Genomics_Lab4/ECE365-Genomics-Lab4-Spring21.ipynb
debugevent90901/courseArchive
Week 2 Tasks: During this week's meeting, we discussed if/else statements, loops, and lists. This notebook will guide you through reviewing those topics and help you become familiar with the concepts. Let's first create a list.
# Create a list that stores the multiples of 5, from 0 to 50 (inclusive) # initialize the list using list comprehension! # Set the list name to be 'l' # TODO: Make the cell return 'True' # Hint: Do you remember that you can apply arithmetic operators in the list comprehension? # Your code goes below here # Do not m...
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
If you are eager to learn more about list comprehension, you can look here -> https://www.programiz.com/python-programming/list-comprehension. You will find out how you can initialize `l` without using arithmetic operators, but using conditionals (if/else) instead. Now, simply run the cell below, and observe how `l` has chan...
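For reference, a comprehension can combine an arithmetic expression with an inline conditional, or use a trailing `if` as a filter (these toy lists are separate from the task's list `l`):

```python
# Arithmetic inside the expression
squares = [n * n for n in range(5)]
# Inline if/else inside the expression
parity = ["even" if n % 2 == 0 else "odd" for n in range(4)]
# A trailing if acts as a filter instead
evens = [n for n in range(10) if n % 2 == 0]
print(squares, parity, evens)
```

Note the difference: `expr if cond else other` chooses the value per element, while a trailing `if cond` decides whether the element appears at all.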
l[0] = 3 print(l) l[5]
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
As seen above, you can overwrite each element of the list. Using this fact, complete the task written below. If/elif/else practice
# Write a for loop such that: # For each elements in the list l, # If the element is divisible by 6, divide the element by 6 # Else if the element is divisible by 3, divide the element by 3 and then add 4 # Else if the element is divisible by 2, subtract 10. # Else, square the element # TODO: Make the cell return 'True...
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
Limitations of a ternary operator
# Write a for loop that counts the number of odd number elements in the list # and the number of even number elements in the list # These should be stored in the variables 'odd_count' and 'even_count', which are declared below. # Try to use the ternary operator inside the for loop and inspect why it does not work # TO...
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
If you have tried using the ternary operator in the cell above, you would have found that the cell fails to compile because of a syntax error. This is because you can only write *expressions* in ternary operators, specifically **the last segment of the three segments in the operator**, not *statements*. In other words, ...
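To make the distinction concrete (a small illustrative sketch, unrelated to the graded cells):

```python
n = 3
count = 0

# Works: both branches of the ternary are expressions producing a value
count = count + 1 if n % 2 == 1 else count

# Fails: 'count -= 1' is a statement, not an expression, so putting it
# in the last segment is a SyntaxError (shown commented out):
# count += 1 if n % 2 == 1 else count -= 1

# The idiomatic alternative is a plain if/else statement
if n % 2 == 1:
    count += 1
else:
    count -= 1
print(count)
```

This is why the counting task above is cleaner with an ordinary if/else inside the loop.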
# Write a while loop that finds an index of the element in 'l' which first exceeds 1000. # The index found should be stored in the variable 'large_index' # If there are no element in 'l' that exceeds 1000, 'large_index' must store -1 # Use the declared 'large_not_found' as the condition for the while loop # Use the dec...
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
Finding the minimum element
# For this task, you can use either for loop or while loop, depending on your preference # Find the smallest element in 'l' and store it in the declared variable 'min_value' # 'min_value' is initialized as a big number # Do not use min() # TODO: Make the cell return 'True' import sys min_value = sys.maxsize min_index ...
_____no_output_____
MIT
Week 2/Week 2 Tasks.ipynb
jihoonkang0829/Codable_FA20
launch scripts through SLURM The script in the cell below submits SLURM jobs running the requested `script`, with all parameters specified in `param_iterators` and the folder where to dump data as the last parameter. The generated SBATCH scripts (`.job` files) are saved in the `jobs` folder and then submitted. Output and er...
import numpy as np import os from itertools import product ####################### ### User parameters ### ####################### script = "TFIM-bangbang-WF.py" # name of the script to be run data_subdir = "TFIM/bangbang/WF" # subdirectory of `data` where to save results jobname_template = "BBWF-L{}JvB{}nit{}" # j...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
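The write-template-then-submit loop described above can be sketched as follows. This is a simplified, hypothetical version: the real launcher has more SBATCH directives and user parameters, and `submit_jobs`, `JOB_TEMPLATE`, and the `dry_run` flag are invented names for illustration.

```python
from itertools import product
from pathlib import Path
import subprocess

# Minimal SBATCH template; the real one carries more directives
# (partition, memory, output/error paths, ...).
JOB_TEMPLATE = """#!/bin/bash
#SBATCH --job-name={jobname}
#SBATCH --time={time}
python {script} {params} {data_dir}
"""

def submit_jobs(script, param_iterators, data_dir,
                time="1-00:00", jobs_dir="jobs", dry_run=True):
    """Write one .job file per parameter combination; submit unless dry_run."""
    Path(jobs_dir).mkdir(exist_ok=True)
    for params in product(*param_iterators):
        jobname = "-".join(str(p) for p in params)
        job_path = Path(jobs_dir) / f"{jobname}.job"
        job_path.write_text(JOB_TEMPLATE.format(
            jobname=jobname, time=time, script=script,
            params=" ".join(str(p) for p in params), data_dir=data_dir))
        if not dry_run:  # hand the generated script to SLURM
            subprocess.run(["sbatch", str(job_path)], check=True)
```

With `dry_run=True` the `.job` files are generated without calling `sbatch`, which is handy for checking the expansion of `param_iterators` before a real submission.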
History of parameters that have been run TFIM LogSweep density matrix
script = "TFIM-logsweep-DM.py" data_subdir = "TFIM/logsweep/DM" param_iterators = ( [2], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K ) param_iterators = ( [7], # L [0.2, 1, 5], # JvB np.arange(2, 50) # K ) param_iterators = ( np.arange(2, 11), # L [0.2, 1, 5], # JvB [2, 5, 10, 20, 40...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
Iterative, density matrix
script = "TFIM-logsweep-DM-iterative.py" # name of the script to be run data_subdir = "TFIM/logsweep/DM/iterative" # subdirectory of `data` where to save results jobname_template = "ItLS-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( [2, 7], # L [0.2, 1, 5], ...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
WF + Monte Carlo Old version of the script: it suffered from unnormalized final states due to numerical error.
script = "TFIM-logsweep-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF-raw" # subdirectory of `data` where to save results jobname_template = "WF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 15), # L [0.2, 1, 5], # JvB ...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
New version of the script, where normalization is forced.
script = "TFIM-logsweep-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF" # subdirectory of `data` where to save results jobname_template = "WF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 10), # L [0.2, 1, 5], # JvB [2...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
iterative, WF + Monte Carlo
script = "TFIM-logsweep-WF-iterative.py" # name of the script to be run data_subdir = "TFIM/logsweep/WF/iterative" # subdirectory of `data` where to save results jobname_template = "WFiter-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 14), # L [...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
continuous DM
script = "TFIM-logsweep-continuous-DM.py" # name of the script to be run data_subdir = "TFIM/logsweep/continuous/DM" # subdirectory of `data` where to save results jobname_template = "Rh-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2,7), # L [0.2,...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
continuous WF
script = "TFIM-logsweep-continuous-WF.py" # name of the script to be run data_subdir = "TFIM/logsweep/continuous/WF" # subdirectory of `data` where to save results jobname_template = "CWF-L{}JvB{}K{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 12), # L [0...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
TFIM bang-bang
data_subdir = "TFIM/bangbang/WF" # subdirectory of `data` where to save results jobname_template = "BBWF-L{}JvB{}nit{}" # job name will be created from this, inserting parameter values param_iterators = ( np.arange(2, 21), # L [0.2, 1, 5], # JvB [None], # nit [200] # n_samples ) time = "4-00:00" # f...
_____no_output_____
Apache-2.0
slurm-working-dir/SLURM-launcher.ipynb
aQaLeiden/QuantumDigitalCooling
Lambda School Data Science*Unit 2, Sprint 3, Module 1*--- Define ML problems- Choose a target to predict, and check its distribution- Avoid leakage of information from test to train or from target to features- Choose an appropriate evaluation metric Setup
%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/'
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose a target to predict, and check its distribution

Overview

This is the data science process at a high level:

— Renee Teate, [Becoming a Data Scientist, PyData DC 2016 Talk](https://www.becomingadatascientist.com/2016/10/11/pydata-dc-2016-talk/)

We've focused on the 2nd arrow in the diagram, by training predictive ...
import pandas as pd pd.options.display.max_columns = None df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose your target Which column in your tabular dataset will you predict?
df.head() df['overall'].describe() import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.distplot(df['overall']) df['Great'] = df['overall'] >= 4 df['Great']
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
How is your target distributed? For a classification problem, determine: How many classes? Are the classes imbalanced?
y = df['Great'] y.unique() y.value_counts(normalize=True) sns.countplot(y) y.value_counts(normalize=True).plot(kind="bar") # Stretch: how to fix imbalanced classes #. upsampling: randomly re-sample from the minority class to increase the sample in the minority class #. downsampling: random re-sampling from the maj...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Avoid leakage of information from test to train or from target to features

Overview

Overfitting is our enemy in applied machine learning, and leakage is often the cause.

> Make sure your training features do not contain data from the "future" (aka time traveling). While this might be easy and obvious in some cases, it...
df['Burrito'].nunique() df['Burrito'].unique() # Combine Burrito categories df['Burrito_rename'] = df['Burrito'].str.lower() # All burrito types that contain 'California' are grouped into the same #. category. Similar logic applied to asada, surf, and carnitas. # 'California Surf and Turf' california = df['Burrito']....
<class 'pandas.core.frame.DataFrame'> RangeIndex: 423 entries, 0 to 422 Data columns (total 61 columns): Burrito 423 non-null object Date 423 non-null object Yelp 423 non-null object Google 423 non-null object Chips 423 non-null object Cost 423 non...
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Next, do a **time-based split:**- Train on reviews from 2016 & earlier. - Validate on 2017. - Test on 2018 & later.
df['Date'] = pd.to_datetime(df['Date']) # create a subset of data for anything less than or equal to the year 2016, equal #. to 2017 for validation, and test set to include >= 2018 train = df[df['Date'].dt.year <= 2016] val = df[df['Date'].dt.year == 2017] test = df[df['Date'].dt.year >= 2018] train.shape, val.shape,...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Begin to choose which features, if any, to exclude. **Would some features β€œleak” future information?**What happens if we _DON’T_ drop features with leakage?
# Try a shallow decision tree as a fast, first model import category_encoders as ce from sklearn.pipeline import make_pipeline from sklearn.tree import DecisionTreeClassifier target = 'Great' features = train.columns.drop([target, 'Date', 'Data']) X_train = train[features] y_train = train[target] X_val = val[feature...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Drop the column with β€œleakage”.
target = 'Great' features = train.columns.drop([target, 'Date', 'Data', 'overall']) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] pipeline = make_pipeline( ce.OrdinalEncoder(), DecisionTreeClassifier() ) pipeline.fit(X_train, y_train) print('Validation Accuracy',...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose an appropriate evaluation metric

Overview

How will you evaluate success for your predictive model? You must choose an appropriate evaluation metric, depending on the context and constraints of your problem.

**Classification & regression metrics are different!**

- Don't use _regression_ metrics to evaluate _class...
# 1:3 -> 25%, 75% y.value_counts(normalize=True)
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Precision & Recall

Let's review Precision & Recall. What do these metrics mean, in scenarios like these?

- Predict great burritos
- Predict fraudulent transactions
- Recommend Spotify songs

[Are false positives or false negatives more costly? Can you optimize for dollars?](https://alexgude.com/blog/machine-learning-metrics...
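These definitions can be checked by hand on a toy set of labels and predictions (the values below are invented for illustration, not taken from the burrito data):

```python
# 1 = "great burrito", 0 = not great
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # of the burritos we called great, how many were?
recall = tp / (tp + fn)     # of the truly great burritos, how many did we find?
print(precision, recall)    # 0.75 0.75
```

A model that never predicts the positive class trivially avoids false positives, which is why precision and recall must be read together rather than in isolation.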
# High precision -> few false positives. # High recall -> few false negatives. # In lay terms, how would we translate our problem with burritos: #. high precision- 'Great burrito'. If we make a prediction of a great burrito, #. it probably IS a great burrito. # Which metric would you emphasize if you were choosing...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
ROC AUC Let's also review ROC AUC (Receiver Operating Characteristic, Area Under the Curve).[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier...
from sklearn.metrics import roc_auc_score y_pred_proba = pipeline.predict_proba(X_val)[:, -1] roc_auc_score(y_val, y_pred_proba) from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba) (fpr, tpr, thresholds) import matplotlib.pyplot as plt plt.scatter(fpr, tpr) plt.plot(fpr, tpr) plt...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Imbalanced classes

Do you have highly imbalanced classes? If so, you can try ideas from [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/):

- "Adjust the class weight (misclassification costs)" — most scikit-learn classifiers have a `class_weight` parameter.
- "Adjust the decision t...
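For the first idea, here is a minimal sketch of the `class_weight` knob on synthetic data (not the burrito or rental datasets; the 90/10 split below is made up for the example):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced problem: roughly 90% negatives, 10% positives
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# 'balanced' reweights classes inversely to their frequency, so mistakes
# on the rare class cost more during fitting
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X, y)
```

Compared with the default (equal weights), the balanced model usually trades a little accuracy for better recall on the minority class.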
# Read our NYC apartment rental listing dataset df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Choose your target

Which column in your tabular dataset will you predict?
y = df['price']
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
How is your target distributed? For a regression problem, determine: Is the target right-skewed?
# Yes, the target is right-skewed import seaborn as sns sns.distplot(y); y.describe()
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Are some observations outliers? Will you exclude them?
# Yes! There are outliers # Some prices are so high or low it doesn't really make sense. # Some locations aren't even in New York City # Remove the most extreme 1% prices, # the most extreme .1% latitudes, & # the most extreme .1% longitudes import numpy as np df = df[(df['price'] >= np.percentile(df['price'], 0.5)) ...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
Log-Transform

If the target is right-skewed, you may want to "log transform" the target.

> Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any [regression] model.
>
> The only problem is that, while easy to execute, understanding why taking th...
import numpy as np y_log = np.log1p(y) sns.distplot(y_log) sns.distplot(y) plt.title('Original target, in the unit of US dollars'); y_log = np.log1p(y) sns.distplot(y_log) plt.title('Log-transformed target, in log-dollars'); y_untransformed = np.expm1(y_log) sns.distplot(y_untransformed) plt.title('Back to the original...
_____no_output_____
MIT
module1-define-ml-problems/Unit_2_Sprint_3_Module_1_LESSON.ipynb
Vanagand/DS-Unit-2-Applied-Modeling
An RFSoC Spectrum Analyzer Dashboard with Voila----Please use Jupyter Labs http://board_ip_address/lab for this notebook.The RFSoC Spectrum Analyzer is an open source tool developed by the [University of Strathclyde](https://github.com/strath-sdr/rfsoc_sam). This notebook is specifically for Voila dashboards. If you ...
from rfsoc_sam.overlay import Overlay
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
Initialise Overlay
sam = Overlay(init_rf_clks = True)
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
Dashboard Display
sam.spectrum_analyzer_application()
_____no_output_____
BSD-3-Clause
boards/ZCU111/rfsoc_sam/notebooks/voila_rfsoc_spectrum_analyzer.ipynb
dnorthcote/rfsoc_sam
No 1 : Multiple Subplots

Using the data below, create a visualization matching the expected output:
x = np.linspace(2*-np.pi, 2*np.pi, 200) tan = np.tan(x)/10 cos = np.cos(x) sin = np.sin(x)
_____no_output_____
MIT
Task/Week 3 Visualization/Week 3 Day 3.ipynb
mazharrasyad/Data-Science-SanberCode
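Since the expected-output image is not reproduced here, the following is only one plausible arrangement of the three curves from the task above, drawn as side-by-side subplots (the 1x3 layout and titles are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(2 * -np.pi, 2 * np.pi, 200)
curves = {"tan/10": np.tan(x) / 10, "cos": np.cos(x), "sin": np.sin(x)}

# One axis per curve, laid out in a single row
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (name, y) in zip(axes, curves.items()):
    ax.plot(x, y)
    ax.set_title(name)
fig.tight_layout()
```

Compare the resulting figure against the expected output and adjust the grid shape (`plt.subplots(nrows, ncols)`) as needed.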
![image.png](attachment:image.png) No 2 : Nested Axis

Using the data below, create a visualization matching the expected output:
x = np.linspace(2*-np.pi, 2*np.pi, 100) y = np.cos(x) y2 = np.cos(x**2) y3 = np.cos(x**3) y4 = np.cos(x**4) y5 = np.cos(x**5)
_____no_output_____
MIT
Task/Week 3 Visualization/Week 3 Day 3.ipynb
mazharrasyad/Data-Science-SanberCode
Examples I - Inferring $v_{\rm rot}$ By Minimizing the Line WidthThis Notebook intends to demonstrate the method used in [Teague et al. (2018a)](https://ui.adsabs.harvard.edu/abs/2018ApJ...860L..12T) to infer the rotation velocity as a function of radius in the disk of HD 163296. The following [Notebook](Examples%20-%...
%matplotlib inline from eddy.annulus import ensemble from eddy.modelling import gaussian_ensemble annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=2.0, N=20, plot=True, return_ensemble=True)
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
We first want to shift all the points to the systemic velocity (here at 0 m/s). To do this we use the `deprojected_spectra()` function, which takes the rotation velocity as its only argument. It returns the new velocity of each pixel in the annulus and its value. Let's first deproject with the correct rotation velocity of 1500m/s t...
velocity, brightness = annulus.deprojected_spectra(1500.) import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.errorbar(velocity, brightness, fmt='.k', ms=4) ax.set_xlim(velocity[0], velocity[-1]) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity')
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
This highlights how this method can achieve such a high precision on determinations of the rotation velocity. Because we shift back all the spectra by a non-quantised amount, we end up sampling the intrinsic profile at a much higher rate (by a factor of the number of beams we have in our annulus). We can compare this ...
fig, ax = plt.subplots() velocity, brightness = annulus.deprojected_spectrum(1500.) ax.errorbar(velocity, brightness, fmt='.k', ms=4) ax.set_xlim(velocity[0], velocity[-1]) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'Intensity')
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
Now, if we deproject the spectra with an incorrect velocity, we can see that the stacked spectrum becomes broader. Note also that the broadening is symmetric about the correct velocity, meaning this is a convex problem, which makes minimization much easier.
import numpy as np fig, ax = plt.subplots() for vrot in np.arange(1100, 2100, 200): velocity, brightness = annulus.deprojected_spectrum(vrot) ax.plot(velocity, brightness, label='%d m/s' % vrot) ax.legend(markerfirst=False) ax.set_xlim(-1000, 1000) ax.set_xlabel(r'Velocity') ax.set_ylabel(r'In...
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
We can measure the width of the stacked lines by fitting a Gaussian using the `get_deprojected_width()` function.
vrots = np.linspace(1300, 1700, 150) widths = np.array([annulus.get_deprojected_width(vrot) for vrot in vrots]) fig, ax = plt.subplots() ax.plot(vrots, widths, label='Deprojected Widths') ax.axvline(1500., ls=':', color='k', label='Truth') ax.set_xlabel(r'Rotation Velocity (m/s)') ax.set_ylabel(r'Width of Stacked Line...
_____no_output_____
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
This shows that if we find the rotation velocity which minimizes the width of the stacked line, we should have a pretty good idea of what the rotation velocity is. The `get_vrot_dV()` function packages this all up, using the `bounded` method to search for the minimum width within a range of 0.7 to 1.3 times an initial guess. ...
vfit = annulus.get_vrot_dV() print("The linewidth is minimized for a rotation velocity of %.1f m/s" % vfit)
The linewidth is minimized for a rotation velocity of 1502.1 m/s
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
The power of this method is also that the fitting is performed on the stacked spectrum, meaning that in the noisy regions at the edges of the disk we stack over so many independent beams that we still get a reasonable line profile to fit. Let's try it with a signal-to-noise ratio of 4.
annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=10.0, N=20, plot=True, return_ensemble=True) fig, ax = plt.subplots() velocity, brightness = annulus.deprojected_spectrum(1500.) ax.step(velocity, brightness, color='k', where='mid', label='Shifted') ax.legend(markerfirst=False) ax.set_xlim(velocity[0], velo...
The linewidth is minimized for a rotation velocity of 1491.9 m/s
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
The final advantage of this method is that it is exceptionally quick. The convex nature of the problem means that a minimum width is readily found, so it can be applied very quickly, even with a large number of spectra. With 200 individual beams:
annulus = gaussian_ensemble(vrot=1500., Tb=40., dV=350., rms=10.0, N=200, plot=True, return_ensemble=True) %timeit annulus.get_vrot_dV()
10 loops, best of 3: 102 ms per loop
MIT
docs/Examples - I.ipynb
ryanaloomis/eddy
Data Extraction and Load from the FRED API
## Import packages for the process... import requests import pickle import os import mysql.connector import time
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
Using pickle to store the database credentials and FRED API keys
if not os.path.exists('fred_api_secret.pk1'): fred_key = {} fred_key['api_key'] = '' with open ('fred_api_secret.pk1','wb') as f: pickle.dump(fred_key,f) else: fred_key=pickle.load(open('fred_api_secret.pk1','rb')) if not os.path.exists('fred_sql.pk1'): fred_sql = {} fred_sql['user'] = '...
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
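With the API key unpickled as above, a request to the FRED observations endpoint could be sketched as follows. Note that `build_params` and `fetch_series` are invented helper names for this illustration; the endpoint URL and query parameters follow the public FRED API documentation.

```python
import requests

FRED_URL = "https://api.stlouisfed.org/fred/series/observations"

def build_params(series_id, api_key):
    # The observations endpoint expects these three query parameters
    return {"series_id": series_id, "api_key": api_key, "file_type": "json"}

def fetch_series(series_id, api_key):
    # Returns the list of {"date": ..., "value": ...} observation records
    resp = requests.get(FRED_URL, params=build_params(series_id, api_key),
                        timeout=30)
    resp.raise_for_status()
    return resp.json()["observations"]
```

The returned records could then be inserted into MySQL with the `mysql.connector` credentials loaded from the second pickle file.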