4. Get the data

The top-level element is the Query. Fields can be added to each query (usually statistics / measures) that you want information on. A Query can be run either on a single region or on multiple regions (e.g. all Bundesländer).

Single Region

If I want information - e.g. all births for the past y...
# create a query for the region 11
query = Query.region('11')

# add a field (the statistic) to the query
field_births = query.add_field('BEV001')

# get the data of this query
query.results().head()
MIT
use_case/01_intro_tutorial.ipynb
elekt/datenguide-python
To get the short description in the result data frame instead of the cryptic ID (e.g. "Lebend Geborene" instead of BEV001), set the argument verbose_statistics=True in the results:
query.results(verbose_statistics=True).head()
Now we only get the count of births per year and the source of the data (year, value and source are default fields). But there is more information in the statistic that we can query. Let's look at the metadata of the statistic:
# get information on the field
field_births.get_info()
kind: OBJECT
description: Lebend Geborene
arguments:
  year: LIST of type SCALAR(Int)
  statistics: LIST of type ENUM(BEV001Statistics)
    enum values:
    R12612: Statistik der Geburten
  ALTMT1: LIST of type ENUM(ALTMT1)
    enum values:
    ALT000B20: unter 20 Jahre
    ALT020B25: 20 bis u...
The arguments tell us what we can use for filtering (e.g. only data on baby girls (female)). The fields tell us what additional information can be displayed in our results.
# add a filter
field_births.add_args({'GES': 'GESW'})

# now only about half the amount of births are returned, as only the results for female babies are queried
query.results().head()

# add the field NAT (nationality) to the results
field_births.add_field('NAT')
**CAREFUL**: The information for the fields (e.g. nationality) is by default returned as a total amount. Therefore, if no argument "NAT" is specified in addition to the field, only "None" will be displayed. In order to get information on all possible values, the argument "ALL" needs to be added: (the rows with valu...
field_births.add_args({'NAT': 'ALL'})
query.results().head()
To display the short description of the enum values instead of the cryptic IDs (e.g. Ausländer(innen) instead of NATA), set the argument "verbose_enums = True" on the results:
query.results(verbose_enums=True).head()
Multiple Regions

To display data for multiple regions, a list of region IDs can be used:
query_multiple = Query.region(['01', '02'])
query_multiple.add_field('BEV001')
query_multiple.results().sort_values('year').head()
To display data for e.g. all Bundesländer, or for all regions within a Bundesland, you can use the function `all_regions()`:
- specify nuts level
- specify lau level
- specify parent ID (Careful: not only the regions on the next lower level will be returned, but all levels - e.g. if you specify a parent on nuts level 1 ...
# get data for all Bundesländer
query_all = Query.all_regions(nuts=1)
query_all.add_field('BEV001')
query_all.results().sort_values('year').head(12)

# get data for all regions within Brandenburg
query_all = Query.all_regions(parent='12')
query_all.add_field('BEV001')
query_all.results().head()

# get data for all nuts 3...
Chapter 4

`Original content created by Cam Davidson-Pilon`
`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`

The greatest theorem never told

This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit o...
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
figsize(12.5, 5)

sample_size = 100000
expected_value = lambda_ = 4.5
poi = np.random.poisson
N_samples = range(1, sample_size, 100)

for k in range(3):
    samples = poi(lambda_, sample_size)
    ...
MIT
Chapter4_TheGreatestTheoremNeverTold/Ch4_LawOfLargeNumbers_PyMC3.ipynb
quantopian/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have ano...
figsize(12.5, 4)

N_Y = 250  # use this many to approximate D(N)
N_array = np.arange(1000, 50000, 2500)  # use this many samples in the approx. to the variance
D_N_results = np.zeros(len(N_array))

lambda_ = 4.5
expected_value = lambda_  # for X ~ Poi(lambda), E[X] = lambda

def D_N(n):
    """
    This functio...
As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decreas...
N = 10000
print(np.mean([np.random.exponential(0.5) > 5 for i in range(N)]))
0.0001
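The $\frac{1}{\sqrt{N}}$ convergence rate discussed above can be checked directly: quadrupling the number of samples should roughly halve the Monte Carlo error. A minimal self-contained sketch of that check (the seed, trial count, and helper name are illustrative choices, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
lambda_ = 4.5

def mc_error(n, trials=200):
    """Average |sample mean - lambda| over many independent runs of size n."""
    samples = rng.poisson(lambda_, size=(trials, n))
    return np.mean(np.abs(samples.mean(axis=1) - lambda_))

err_small = mc_error(1_000)
err_large = mc_error(4_000)  # 4x the samples

# the ratio should sit near 2, i.e. error halves when N quadruples
print(err_small / err_large)
```

This is the same experiment the `D_N` function above runs, just reduced to two sample sizes.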
What does this all have to do with Bayesian statistics? *Point estimates* in Bayesian inference, to be introduced in the next chapter, are computed using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integral...
figsize(12.5, 4)

std_height = 15
mean_height = 150
n_counties = 5000

pop_generator = np.random.randint
norm = np.random.normal

# generate some artificial population numbers
population = pop_generator(100, 1500, n_counties)

average_across_county = np.zeros(n_counties)
for i in range(n_counties):
    # generate s...
What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties...
print("Population sizes of 10 'shortest' counties: ")
print(population[np.argsort(average_across_county)[:10]], '\n')
print("Population sizes of 10 'tallest' counties: ")
print(population[np.argsort(-average_across_county)[:10]])
Population sizes of 10 'shortest' counties:
[109 135 135 133 109 157 175 120 105 131]

Population sizes of 10 'tallest' counties:
[122 133 313 109 124 280 106 198 326 216]
Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.

Example: Kaggle's *U.S. Census Return Rate Challenge*

Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The ...
figsize(12.5, 6.5)

data = np.genfromtxt("./data/census_data.csv", skip_header=1, delimiter=",")
plt.scatter(data[:, 1], data[:, 0], alpha=0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 1...
The above is a classic phenomenon in statistics. I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact). I am perhaps overstressing the point and maybe I should have titled th...
# adding a number to the end of the %run call will get the ith top post
%run top_showerthoughts_submissions.py 2

print("Post contents: \n")
print(top_post)

"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submis...
Some Submissions (out of 98 total)
-----------
"Rappers from the 90's used guns when they had beef rappers today use Twitter."
upvotes/downvotes: [32 3]

"All polls are biased towards people who are willing to take polls"
upvotes/downvotes: [1918 101]

"Taco Bell should give customers an extra tortilla so they c...
For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bay...
import pymc3 as pm

def posterior_upvote_ratio(upvotes, downvotes, samples=20000):
    """
    This function accepts the number of upvotes and downvotes a particular
    submission received, and the number of posterior samples to return to
    the user. Assumes a uniform prior.
    """
    N = upvotes + downvotes
    w...
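The PyMC3 sampler above is more general than this problem needs: a uniform prior is Beta(1, 1), and the Beta family is conjugate to the Binomial likelihood, so with $u$ upvotes and $d$ downvotes the posterior is exactly Beta($1+u$, $1+d$) and can be drawn directly. A sketch of that shortcut (the function name and seed are illustrative, not the notebook's code):

```python
import numpy as np

def posterior_upvote_ratio_conjugate(upvotes, downvotes, samples=20000, seed=0):
    """Exact posterior draws for the upvote ratio under a uniform prior.

    Uniform prior = Beta(1, 1); Binomial likelihood => posterior is
    Beta(1 + upvotes, 1 + downvotes), so no MCMC is needed.
    """
    rng = np.random.default_rng(seed)
    return rng.beta(1 + upvotes, 1 + downvotes, size=samples)

# e.g. the poll submission above with 1918 up / 101 down
draws = posterior_upvote_ratio_conjugate(1918, 101)
print(draws.mean())  # close to the analytic mean (1+1918)/(2+1918+101)
```

The same conjugacy is what justifies the `intervals()` normal approximation used later in the chapter.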
Below are the resulting posterior distributions.
figsize(11., 8)

posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
    j = submissions[i]
    posteriors.append(posterior_upvote_ratio(votes[j, 0], votes[j, 1]))
    plt.hist(posteriors[i], bins=10, normed=True, alpha=.9, histtype=...
Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model. [-------100%-------] 20000 of 20000 in 1.4 sec. | SPS: 14595.5 | ETA: 0.0Applied interval-transform to upvote_ratio and added transformed upvote_ratio_interval_ to model. [-------100%-------] 20000 of 20000 in 1.3 sec. |...
Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty about what the true upvote ratio might be.

Sorting!

We have been ignoring the goal of this exercise: how do we sort the submissions from *best to worst*? Of course, we cannot sort distributions; we must sort s...
N = posteriors[0].shape[0]
lower_limits = []

for i in range(len(submissions)):
    j = submissions[i]
    plt.hist(posteriors[i], bins=20, normed=True, alpha=.9,
             histtype="step", color=colours[i], lw=3,
             label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
    ...
[1 0 2 3] [0.80034320917496615, 0.94092009444598201, 0.74660503350561902, 0.72190353389632911]
The best submissions, according to our procedure, are the submissions that are *most likely* to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1. Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are bei...
def intervals(u, d):
    a = 1. + u
    b = 1. + d
    mu = a / (a + b)
    std_err = 1.65 * np.sqrt((a * b) / ((a + b)**2 * (a + b + 1.)))
    return (mu, std_err)

print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:, 0], votes[:, 1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted...
Approximate lower bounds: [ 0.93349005 0.9532194 0.94149718 0.90859764 0.88705356 0.8558795 0.85644927 0.93752679 0.95767101 0.91131012 0.910073 0.915999 0.9140058 0.83276025 0.87593961 0.87436674 0.92830849 0.90642832 0.89187973 0.89950891 0.91295322 0.78607629 0.90250203 0.79950031 0.8...
We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
r_order = order[::-1][-40:]
plt.errorbar(posterior_mean[r_order], np.arange(len(r_order)),
             xerr=std_err[r_order], capsize=0, fmt="o",
             color="#7A68A6")
plt.xlim(0.3, 1)
plt.yticks(np.arange(len(r_order) - 1, -1, -1),
           map(lambda x: x[:30].replace("\n", ""), ordered_contents));
In the graphic above, you can see why sorting by mean would be sub-optimal.

Extension to Starred rating systems

The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply with simply taking the average: an item with two p...
## Enter code here
import scipy.stats as stats

exp = stats.expon(scale=4)
N = 1e5
X = exp.rvs(int(N))
## ...
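One way to carry the `intervals()` approximation over to 5-star ratings is to treat each rating as a fractional upvote, so an item with `n_ratings` ratings totalling `total_stars` stars contributes `total_stars / 5` equivalent upvotes. This mapping is an assumption for illustration, not the only possible choice:

```python
import numpy as np

def star_lower_bound(n_ratings, total_stars, z=1.65):
    """Approximate 95% lower bound for a 5-star item.

    Assumption: a rating of s stars counts as s/5 of an upvote, so the
    Beta-posterior normal approximation from intervals() applies unchanged.
    """
    u = total_stars / 5.0  # equivalent upvotes
    d = n_ratings - u      # equivalent downvotes
    a, b = 1.0 + u, 1.0 + d
    mu = a / (a + b)
    std_err = z * np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1.0)))
    return mu - std_err

lb_small = star_lower_bound(2, 10)     # two perfect 5-star ratings
lb_large = star_lower_bound(100, 480)  # one hundred ratings averaging 4.8
print(lb_small, lb_large)  # small n is heavily penalized despite the perfect average
```

As with up/down votes, the bound rewards items whose high average is backed by many ratings.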
2\. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made?

Kicker Careers Ranked by Make Percentage
Rank Kicker ...
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
PTN Template

This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started ...
%load_ext autoreload
%autoreload 2
%matplotlib inline

import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt

from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wr...
MIT
experiments/baseline_ptn/wisig/trials/2/trial.ipynb
stevester94/csc500-notebooks
Required Parameters

These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = {
    "experiment_name",
    "lr",
    "device",
    "seed",
    "dataset_seed",
    "labels_source",
    "labels_target",
    "domains_source",
    "domains_target",
    "num_examples_per_domain_per_label_source",
    "num_examples_per_domain_per_label_target",
    "n_shot",
    "n_way",
    "n_q...
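The check described above ("the notebook will raise an exception if they are not present") amounts to a set difference against the injected dict. A sketch of such a guard, using an abbreviated parameter set and a hypothetical `validate_parameters` helper (not code from this repo):

```python
required_parameters = {
    "experiment_name",
    "lr",
    "device",
    "seed",
}  # abbreviated; the notebook's full set has more entries

def validate_parameters(injected: dict, required: set) -> None:
    """Raise if any required parameter is missing from the papermill-injected dict."""
    missing = required - set(injected)
    if missing:
        raise RuntimeError(f"Missing required parameters: {sorted(missing)}")

# passes silently when everything required was injected
validate_parameters(
    {"experiment_name": "demo", "lr": 1e-3, "device": "cuda", "seed": 1337},
    required_parameters,
)
```

Running the guard before any training starts turns a silent NameError later on into an immediate, readable failure.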
Neural Networks
===============

Neural networks can be constructed using the ``torch.nn`` package. Now that you had a glimpse of ``autograd``, ``nn`` depends on ``autograd`` to define models and differentiate them. An ``nn.Module`` contains layers, and a method ``forward(input)`` that returns the ``output``. For example, look...
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16...
MIT
source/pytorch/deepLearningIn60mins/neural_networks_tutorial.ipynb
alphajayGithub/ai.online
You just have to define the ``forward`` function, and the ``backward`` function (where gradients are computed) is automatically defined for you using ``autograd``. You can use any of the Tensor operations in the ``forward`` function. The learnable parameters of a model are returned by ``net.parameters()``.
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
Let's try a random 32x32 input. Note: the expected input size of this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
Zero the gradient buffers of all parameters and backprop with random gradients:
net.zero_grad()
out.backward(torch.randn(1, 10))
Note: ``torch.nn`` only supports mini-batches. The entire ``torch.nn`` package only supports inputs that are a mini-batch of samples, and not a single sample. For example, ``nn.Conv2d`` will take in a 4D Tensor of ``nSamples x nChannels x Height x Width``. If you have a single sample, just use ``input.unsq...
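The fake batch dimension mentioned in the note above can be demonstrated in isolation: ``unsqueeze(0)`` turns a single ``nChannels x Height x Width`` sample into the ``1 x nChannels x Height x Width`` mini-batch that ``nn.Conv2d`` expects.

```python
import torch

single = torch.randn(1, 32, 32)   # one sample: channels x height x width
batched = single.unsqueeze(0)     # add a batch dimension of size 1
print(single.shape, batched.shape)
# torch.Size([1, 32, 32]) torch.Size([1, 1, 32, 32])
```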
output = net(input)
target = torch.randn(10)      # a dummy target, for example
target = target.view(1, -1)   # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
Now, if you follow ``loss`` in the backward direction, using its ``.grad_fn`` attribute, you will see a graph of computations that looks like this::

    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
          -> flatten -> linear -> relu -> linear -> relu -> linear
          -> MSELoss
          -> los...
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
Backprop
--------
To backpropagate the error all we have to do is ``loss.backward()``. You need to clear the existing gradients though, else gradients will be accumulated to existing gradients. Now we shall call ``loss.backward()``, and have a look at conv1's bias gradients before and after the backward.
net.zero_grad()  # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
Now, we have seen how to use loss functions. **Read Later:** The neural network package contains various modules and loss functions that form the building blocks of deep neural networks; a full list with documentation is in the ``torch.nn`` reference. **The only thing left to learn is:** updating the weights of the network. Update the we...
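The simplest update rule, SGD (``weight = weight - learning_rate * gradient``), can be written by hand before reaching for ``torch.optim``. A self-contained sketch on a tiny stand-in model (a one-layer ``nn.Linear`` instead of the tutorial's LeNet, purely for brevity):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# tiny stand-in model; the tutorial's `net` works the same way
net = nn.Linear(4, 2)
criterion = nn.MSELoss()
x, target = torch.randn(8, 4), torch.randn(8, 2)

learning_rate = 0.01
loss_before = criterion(net(x), target)
net.zero_grad()
loss_before.backward()

# hand-written SGD step: weight <- weight - learning_rate * gradient
with torch.no_grad():
    for p in net.parameters():
        p -= learning_rate * p.grad

loss_after = criterion(net(x), target)
print(loss_before.item(), loss_after.item())  # the loss shrinks after one step
```

``optimizer.step()`` in the next cell performs exactly this loop (plus bookkeeping for momentum, weight decay, etc.).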
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()  # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()  # does the update
Classifying Fashion-MNIST

Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks, where you can easily achieve better than 97% ...
import torch
from torchvision import datasets, transforms
import helper

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pyt...
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to /Users/Mia/.pytorch/F_MNIST_data/FashionMNIST/raw/train-images-idx3-ubyte.gz
MIT
intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb
xxiMiaxx/deep-learning-v2-pytorch
Here we can see one of the images.
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
Building the network

Here you should define your network. As with MNIST, each image is 28x28, for a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest using ReLU activations for the layers and returning the logits or log-softmax from the forward pass. It's up t...
from torch import nn, optim
import torch.nn.functional as F

# TODO: Define your network architecture here
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn...
Train the network

Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`). Then write t...
# TODO: Create the network, define the criterion and optimizer
model = Network()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
model

# TODO: Train the network here
epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloa...
Please run the IPython widget below. Using the checkboxes, you can:
* Download the training, validation and test datasets
* Extract all tarfiles
* Create the necessary PyTorch files for the training/validation/test datasets. We create 1 file for each datanet sample, resulting in exactly * ./dataset/co...
from convertDataset import process_in_parallel, download_dataset, extract_tarfiles
import ipywidgets as widgets

cbs = [widgets.Checkbox() for i in range(5)]
cbs[0].description = "Download dataset"
cbs[1].description = "Extract Tarfiles"
cbs[2].description = "Generate Pytorch Files - Training"
cbs[3].description = "Generate Py...
MIT
1) Download dataset, create .pt files.ipynb
brunoklaus/PS-001-ML5G-GNNetworkingChallenge2021-PARANA
Importing Dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_datareader
import pandas_datareader.data as web
import datetime
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
%matplotlib inline
MIT
Untitled.ipynb
gaben3722/Time-Series-Project
Importing Data
start = datetime.datetime(2016,1,1)
end = datetime.datetime(2021,1,1)
QQQ = web.DataReader("QQQ", "yahoo", start, end)
QQQ.head()

QQQ['Close'].plot(label='QQQ', figsize=(16,10), title='Closing Price')
plt.legend();

QQQ['Volume'].plot(label='QQQ', figsize=(16,10), title='Volume Traded')
plt.legend();

QQQ['MA...
Selecting The Close Column
QQQ["Close"] = pd.to_numeric(QQQ.Close, errors='coerce')  # turning the Close column to numeric
QQQ = QQQ.dropna()
trainData = QQQ.iloc[:, 3:4].values  # selecting closing prices for training
Scaling Values in the Range of 0-1 for Best Results
sc = MinMaxScaler(feature_range=(0,1))
trainData = sc.fit_transform(trainData)
trainData.shape
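MinMaxScaler maps each value x to (x - min) / (max - min), which is why every scaled price lands in [0, 1]. A self-contained check of that equivalence on dummy prices (not the QQQ data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

prices = np.array([[100.0], [150.0], [125.0], [200.0]])

sc = MinMaxScaler(feature_range=(0, 1))
scaled = sc.fit_transform(prices)

# same result by hand: (x - min) / (max - min)
by_hand = (prices - prices.min()) / (prices.max() - prices.min())
print(np.allclose(scaled, by_hand))  # True
```

Note that the scaler remembers min and max from `fit`, so the same transformation can later be applied (and inverted) on test data.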
Prepping Data for LSTM
X_train = []
y_train = []

for i in range(60, 1060):
    X_train.append(trainData[i-60:i, 0])
    y_train.append(trainData[i, 0])

X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))  # adding the batch_size axis
X_train.shape
Building The Model
model = Sequential()
model.add(LSTM(units=100, return_sequences=True, input_shape=(X_train.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=100, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=100, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=100, return...
Epoch 1/20
32/32 - 26s - loss: 0.0187
Epoch 2/20
32/32 - 3s - loss: 0.0036
Epoch 3/20
32/32 - 3s - loss: 0.0026
Epoch 4/20
32/32 - 3s - loss: 0.0033
Epoch 5/20
32/32 - 3s - loss: 0.0033
Epoch 6/20
32/32 - 3s - loss: 0.0028
Epoch 7/20
32/32 - 3s - loss: 0.0024
Epoch 8/20
32/32 - 3s - loss: 0.0024
Epoch 9/20
32/32 - 3s -...
Plotting The Training Loss
plt.plot(hist.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
Testing Model on New Data
start = datetime.datetime(2021,1,1)
end = datetime.datetime.today()
testData = web.DataReader("QQQ", "yahoo", start, end)  # importing new data for testing
testData["Close"] = pd.to_numeric(testData.Close, errors='coerce')  # turning the Close column to numeric
testData = testData.dropna()  # dropping the NA values
testData = t...
Plotting Results
plt.plot(y_test, color='blue', label='Actual Stock Price')
plt.plot(predicted_price, color='red', label='Predicted Stock Price')
plt.title('QQQ stock price prediction')
plt.xlabel('Time')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
Boolean Operators
print(10 > 9)
print(10 == 9)
print(10 < 9)

x = 1
y = 2
print(x > y)
print(10 > 11)
print(10 == 10)
print(10 != 11)

# using the bool() function
print(bool("Hello"))
print(bool(15))
print(bool(1))
print(bool(True))
print(bool(False))
print(bool(0))
print(bool([]))
True True True True False False False
Apache-2.0
Expressions and Operations.ipynb
CedricPengson/CPEN-21-A-ECE-2-1
Functions can return a Boolean
def myfunctionboolean():
    return True
print(myfunctionboolean())

def myfunction():
    return False

if myfunction():
    print("yes!")
else:
    print("no")
no
You Try
print(10 > 9)
a = 6
b = 7
print(a == b)
print(a != a)
True
False
False
Arithmetic Operators
print(10 + 5)
print(10 - 5)
print(10 * 5)
print(10 / 5)
print(10 % 5)   # modulo division, remainder
print(10 // 5)  # floor division
print(10 // 3)  # floor division
print(10 % 3)   # 3x3=9, remainder 1
print(10 ** 5)
15
5
50
2.0
0
2
3
1
100000
Bitwise Operators
a = 60  # 0011 1100
b = 13  # 0000 1101
print(a & b)
print(a | b)
print(a ^ b)
print(~a)
print(a << 1)  # 0111 1000
print(a << 2)  # 1111 0000
print(b >> 1)  # 0000 0110
print(b >> 2)  # 0000 0011
12
61
49
-61
120
240
6
3
Python Assignment Operators
a += 3  # same as a = a + 3, i.e. a = 60 + 3, so a = 63
print(a)
63
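Beyond +=, Python has a compound assignment form for every arithmetic operator. A few more, continuing from a = 63 above:

```python
a = 63
a -= 3    # a = a - 3   -> 60
a //= 7   # a = a // 7  -> 8
a **= 2   # a = a ** 2  -> 64
a %= 10   # a = a % 10  -> 4
print(a)  # 4
```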
Logical Operators
# logical operators
a = True
b = False
print(a and b)
print(not(a and b))
print(a or b)
print(not(a or b))
print(a is b)
print(a is not b)
False
True
Hyperparameter tuning with Cloud AI Platform

**Learning Objectives:**
* Improve the accuracy of a model by hyperparameter tuning
import os

PROJECT = 'qwiklabs-gcp-faf328caac1ef9a0'  # REPLACE WITH YOUR PROJECT ID
BUCKET = 'qwiklabs-gcp-faf328caac1ef9a0'   # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-east1'  # REPLACE WITH YOUR BUCKET REGION e.g. us-central1

# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION']...
Updated property [core/project].
Updated property [compute/region].
MIT
Coursera/Art and Science of Machine Learning/Improve model accuracy by hyperparameter tuning with AI Platform.ipynb
helpthx/Path_through_Data_Science_2019
Create command-line program

In order to submit to Cloud AI Platform, we need to create a distributed training program. Let's convert our housing example to fit that paradigm, using the Estimators API.
%%bash
rm -rf house_prediction_module
mkdir house_prediction_module
mkdir house_prediction_module/trainer
touch house_prediction_module/trainer/__init__.py

%%writefile house_prediction_module/trainer/task.py
import argparse
import os
import json
import shutil

from . import model

if __name__ == '__main__' and "get...
WARNING: Logging before flag parsing goes to stderr. W0809 20:42:02.240282 139715572925888 deprecation_wrapper.py:119] From /home/jupyter/training-data-analyst/courses/machine_learning/deepdive/05_artandscience/house_prediction_module/trainer/model.py:6: The name tf.logging.set_verbosity is deprecated. Please use tf.co...
Create hyperparam.yaml
%%writefile hyperparam.yaml
trainingInput:
  hyperparameters:
    goal: MINIMIZE
    maxTrials: 5
    maxParallelTrials: 1
    hyperparameterMetricTag: rmse
    params:
    - parameterName: batch_size
      type: INTEGER
      minValue: 8
      maxValue: 64
      scaleType: UNIT_LINEAR_SCALE
    - parameterName: learni...
createTime: '2019-08-09T20:42:55Z' etag: zU1W9lhyf0w= jobId: house_190809_204253 startTime: '2019-08-09T20:42:59Z' state: RUNNING trainingInput: args: - --output_dir=gs://qwiklabs-gcp-faf328caac1ef9a0/house_trained hyperparameters: goal: MINIMIZE hyperparameterMetricTag: rmse maxParallelTrials: 1 ...
Köhn
In this notebook I replicate Köhn (2015): _What's in an embedding? Analyzing word embeddings through multilingual evaluation_. This paper proposes to i) evaluate an embedding method on more than one language, and ii) evaluate an embedding model by how well its embeddings capture syntactic features. He uses an L2-...
%matplotlib inline import os import csv import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() from sklearn.linear_model import LogisticRegression, LogisticRegressionCV from sklearn.model_selection import train_test_split, StratifiedKFold from sklearn.metrics import roc_c...
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Learnt representations GloVe
size = 50 fname = 'embeddings/glove.6B.{}d.txt'.format(size) glove_path = os.path.join(data_path, fname) glove = pd.read_csv(glove_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE) glove.head()
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Features
fname = 'UD_English/features.csv' features_path = os.path.join(data_path, os.path.join('evaluation/dependency', fname)) features = pd.read_csv(features_path).set_index('form') features.head() df = pd.merge(glove, features, how='inner', left_index=True, right_index=True) df.head()
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Prediction
def prepare_X_and_y(feature, data): """Return X and y ready for predicting feature from embeddings.""" relevant_data = data[data[feature].notnull()] columns = list(range(1, size+1)) X = relevant_data[columns] y = relevant_data[feature] train = relevant_data['set'] == 'train' test = (relevant...
_____no_output_____
MIT
semrep/evaluate/koehn/koehn.ipynb
geoffbacon/semrep
Transfer Learning Template
%load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wr...
_____no_output_____
MIT
experiments/tl_1v2/cores-oracle.run1.limited/trials/29/trial.ipynb
stevester94/csc500-notebooks
Allowed Parameters
These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "n_shot", "n_query", "n_way", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_net", "datasets", "torch_default_dtype", ...
_____no_output_____
MIT
experiments/tl_1v2/cores-oracle.run1.limited/trials/29/trial.ipynb
stevester94/csc500-notebooks
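The presence check described above can be sketched as a small helper. This is a hypothetical illustration, not part of the notebook: `validate_parameters` and the reduced parameter set below are my own names, and Papermill actually injects parameters as notebook globals rather than a dict.

```python
# Hypothetical sketch of the check described above: verify that every
# required parameter was injected before the notebook proceeds.
required_parameters = {"experiment_name", "lr", "device", "seed"}  # subset for illustration

def validate_parameters(injected: dict, required: set) -> None:
    """Raise if any required parameter name is missing from the injected ones."""
    missing = required - injected.keys()
    if missing:
        raise RuntimeError(f"Missing required parameters: {sorted(missing)}")

# Passes silently when every required name is present
validate_parameters(
    {"experiment_name": "demo", "lr": 1e-3, "device": "cpu", "seed": 1},
    required_parameters,
)
```

In the real notebook the injected values live in the global namespace, so the same idea would compare `required_parameters` against `set(globals())`.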
Convert old input card
1. meta and experiment
from ruamel.yaml import YAML from cvm.utils import get_inp import sys yaml = YAML() yaml.indent(mapping=4, sequence=4, offset=2) yaml.default_flow_style = None yaml.width = 120 inp = get_inp('<old_input_card.json>') meta = dict(host=inp['host'], impurity=inp['impurity'], prefix=inp['prefix'], description=inp['descript...
_____no_output_____
BSD-3-Clause
samples/convert_old_input_card.ipynb
kidddddd1984/CVM
2. energies
def extractor(s, prefix): print(s['label']) print(s['transfer']) print(s['temp']) data = s['datas'] lattice = data['lattice_c'] host=data['host_en'] n_ens = {} for i in range(11): s_i = str(i + 1) l = 'pair' + s_i n_ens[s_i + '_II'] = data[l][0]['energy'] ...
$T_\mathrm{FD}=800$K [[1, 11, 2]] [400, 1290, 50] 0_normalizer.csv 0_clusters.csv $T_\mathrm{FD}=1000$K [[1, 11, 2]] [400, 1550, 50] 1_normalizer.csv 1_clusters.csv $T_\mathrm{FD}=1200$K [[1, 11, 2]] [400, 1700, 50] 2_normalizer.csv 2_clusters.csv $T_\mathrm{FD}=1400$K [[1, 11, 2]] [500, 1700, 50] 3_normalizer.csv 3...
BSD-3-Clause
samples/convert_old_input_card.ipynb
kidddddd1984/CVM
import torch from torchvision.transforms import ToTensor, Normalize, Compose from torchvision.datasets import MNIST import torch.nn as nn from torch.utils.data import DataLoader from torchvision.utils import save_image import os class DeviceDataLoader: def __init__(self, dl, device): self.dl = dl se...
_____no_output_____
MIT
simple_generative_adversarial_net/MNIST_GANs.ipynb
s-mostafa-a/a
Matrix
> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil

A matrix is a square or rectangular array of numbers or symbols (termed elements), arranged in rows and columns. For instance:$$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a...
# Import the necessary libraries import numpy as np from IPython.display import display np.set_printoptions(precision=4) # number of digits of precision for floating point A = np.array([[1, 2, 3], [4, 5, 6]]) A
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
To get information about the number of elements and the structure of the matrix (in fact, a Numpy array), we can use:
print('A:\n', A) print('len(A) = ', len(A)) print('np.size(A) = ', np.size(A)) print('np.shape(A) = ', np.shape(A)) print('np.ndim(A) = ', np.ndim(A))
A: [[1 2 3] [4 5 6]] len(A) = 2 np.size(A) = 6 np.shape(A) = (2, 3) np.ndim(A) = 2
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
We could also have accessed this information with the corresponding methods:
print('A.size = ', A.size) print('A.shape = ', A.shape) print('A.ndim = ', A.ndim)
A.size = 6 A.shape = (2, 3) A.ndim = 2
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
We used the array function in Numpy to represent a matrix. A [Numpy array is in fact different from a matrix](http://www.scipy.org/NumPy_for_Matlab_Users); if we want to use explicit matrices in Numpy, we have to use the function `mat`:
B = np.mat([[1, 2, 3], [4, 5, 6]]) B
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Both array and matrix types work in Numpy, but you should choose only one type and not mix them; the array is preferred because it is [the standard vector/matrix/tensor type of Numpy](http://www.scipy.org/NumPy_for_Matlab_Users). So, let's use the array type for the rest of this text. Addition and multiplication
The su...
A = np.array([[1, 2, 3], [4, 5, 6]]) B = np.array([[7, 8, 9], [10, 11, 12]]) print('A:\n', A) print('B:\n', B) print('A + B:\n', A+B);
A: [[1 2 3] [4 5 6]] B: [[ 7 8 9] [10 11 12]] A + B: [[ 8 10 12] [14 16 18]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
The multiplication of the m-by-n matrix $\mathbf{A}$ by the n-by-p matrix $\mathbf{B}$ is a m-by-p matrix:$$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} \\a_{2,1} & a_{2,2} \end{bmatrix}\;\;\; \text{and} \;\;\;\mathbf{B} =\begin{bmatrix} b_{1,1} & b_{1,2} & b_{1,3} \\b_{2,1} & b_{2,2} & b_{2,3} \end{bmatrix}$$$$\mat...
A = np.array([[1, 2], [3, 4]]) B = np.array([[5, 6, 7], [8, 9, 10]]) print('A:\n', A) print('B:\n', B) print('A x B:\n', np.dot(A, B));
A: [[1 2] [3 4]] B: [[ 5 6 7] [ 8 9 10]] A x B: [[21 24 27] [47 54 61]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
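The summation formula for matrix multiplication above can be verified with an explicit triple loop. This is a sketch for illustration only; `np.dot` computes the same thing far more efficiently:

```python
import numpy as np

def matmul_naive(A, B):
    """Element (i, j) is the sum over k of A[i, k] * B[k, j]."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
print(np.allclose(matmul_naive(A, B), np.dot(A, B)))  # True
```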
Note that because the array type is not truly a matrix type, we used the dot product to calculate matrix multiplication. We can use the matrix type to show the equivalent:
A = np.mat(A) B = np.mat(B) print('A:\n', A) print('B:\n', B) print('A x B:\n', A*B);
A: [[1 2] [3 4]] B: [[ 5 6 7] [ 8 9 10]] A x B: [[21 24 27] [47 54 61]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Same result as before. The order of multiplication matters, $\mathbf{AB} \neq \mathbf{BA}$:
A = np.array([[1, 2], [3, 4]]) B = np.array([[5, 6], [7, 8]]) print('A:\n', A) print('B:\n', B) print('A x B:\n', np.dot(A, B)) print('B x A:\n', np.dot(B, A));
A: [[1 2] [3 4]] B: [[5 6] [7 8]] A x B: [[19 22] [43 50]] B x A: [[23 34] [31 46]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
The addition or multiplication of a scalar (a single number) to a matrix is performed over all the elements of the matrix:
A = np.array([[1, 2], [3, 4]]) c = 10 print('A:\n', A) print('c:\n', c) print('c + A:\n', c+A) print('cA:\n', c*A);
A: [[1 2] [3 4]] c: 10 c + A: [[11 12] [13 14]] cA: [[10 20] [30 40]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Transposition
The transpose of the matrix $\mathbf{A}$ is the matrix $\mathbf{A^T}$ turning all the rows of matrix $\mathbf{A}$ into columns (or columns into rows):$$ \mathbf{A} = \begin{bmatrix} a & b & c \\d & e & f \end{bmatrix}\;\;\;\;\;\;\iff\;\;\;\;\;\;\mathbf{A^T} = \begin{bmatrix} a & d \\b & e \\c & f\end{bmat...
A = np.array([[1, 2], [3, 4]]) print('A:\n', A) print('A.T:\n', A.T) print('np.transpose(A):\n', np.transpose(A));
A: [[1 2] [3 4]] A.T: [[1 3] [2 4]] np.transpose(A): [[1 3] [2 4]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Determinant
The determinant is a number associated with a square matrix. The determinant of the following matrix: $$ \left[ \begin{array}{ccc}a & b & c \\d & e & f \\g & h & i \end{array} \right] $$is written as:$$ \left| \begin{array}{ccc}a & b & c \\d & e & f \\g & h & i \end{array} \right| $$And has the value:$$ (a...
A = np.array([[1, 2], [3, 4]]) print('A:\n', A); print('Determinant of A:\n', np.linalg.det(A))
Determinant of A: -2.0
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
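The 3×3 cofactor rule quoted above can be checked against `np.linalg.det`. This is my own verification sketch, not part of the original text:

```python
import numpy as np

def det3(M):
    """Cofactor expansion along the first row: a(ei - fh) - b(di - fg) + c(dh - eg)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

M = np.array([[2., 0., 1.],
              [1., 3., 2.],
              [0., 1., 1.]])
print(det3(M), np.linalg.det(M))  # both are 3 (up to floating-point error)
```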
Identity
The identity matrix $\mathbf{I}$ is a matrix with ones in the main diagonal and zeros otherwise. The 3x3 identity matrix is: $$ \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1 \end{bmatrix} $$In Numpy, instead of manually creating this matrix we can use the function `eye`:
np.eye(3) # identity 3x3 array
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Inverse
The inverse of the matrix $\mathbf{A}$ is the matrix $\mathbf{A^{-1}}$ such that the product between these two matrices is the identity matrix:$$ \mathbf{A}\cdot\mathbf{A^{-1}} = \mathbf{I} $$The calculation of the inverse of a matrix is usually not simple (the inverse of the matrix $\mathbf{A}$ is not $1/\math...
A = np.array([[1, 2], [3, 4]]) print('A:\n', A) Ainv = np.linalg.inv(A) print('Inverse of A:\n', Ainv);
A: [[1 2] [3 4]] Inverse of A: [[-2. 1. ] [ 1.5 -0.5]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
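A quick check of the defining property $\mathbf{A}\cdot\mathbf{A^{-1}} = \mathbf{I}$ (an illustration of my own, not from the original text):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
Ainv = np.linalg.inv(A)
# The product should be the 2x2 identity, up to floating-point error
print(np.allclose(A @ Ainv, np.eye(2)))  # True
```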
Pseudo-inverse
For a non-square matrix, the inverse is not defined. However, we can calculate what is known as the pseudo-inverse. Consider a non-square matrix, $\mathbf{A}$. To calculate its pseudo-inverse, note that the following manipulation results in the identity matrix:$$ \mathbf{A} \mathbf{A}^T (\mathbf{A}\mathbf{A}...
from scipy.linalg import pinv  # pinv2 was removed in SciPy 1.9; pinv is the SVD-based pseudo-inverse
A = np.array([[1, 0, 0], [0, 1, 0]])
Apinv = pinv(A)
print('Matrix A:\n', A)
print('Pseudo-inverse of A:\n', Apinv)
print('A x Apinv:\n', A@Apinv)
Matrix A: [[1 0 0] [0 1 0]] Pseudo-inverse of A: [[ 1. 0.] [ 0. 1.] [ 0. 0.]] A x Apinv: [[ 1. 0.] [ 0. 1.]]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
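The manipulation above yields the explicit right pseudo-inverse $\mathbf{A}^+ = \mathbf{A}^T(\mathbf{A}\mathbf{A}^T)^{-1}$, which can be compared with NumPy's `pinv`. A sketch of my own, valid when $\mathbf{A}$ has full row rank:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])
# Right pseudo-inverse for a full-row-rank matrix: A+ = A.T @ inv(A @ A.T)
Apinv = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(Apinv, np.linalg.pinv(A)))  # True
print(np.allclose(A @ Apinv, np.eye(2)))      # True
```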
Orthogonality
A square matrix is said to be orthogonal if:
1. No linear combination of its rows or columns leads to another row or column.
2. Its columns or rows form a basis of (independent) unit vectors (versors).
As a consequence:
1. Its determinant is equal to 1 or -1.
2. Its ...
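The orthogonality properties listed above can be illustrated with a rotation matrix, a classic orthogonal matrix. This example is my own, not from the original text:

```python
import numpy as np

theta = np.pi / 6  # a 30-degree rotation
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))     # columns are orthonormal
print(round(np.linalg.det(Q), 10))         # determinant is 1
print(np.allclose(np.linalg.inv(Q), Q.T))  # inverse equals transpose
```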
A = np.array([[1, 2], [3, 4]]) Ainv = np.linalg.inv(A) c = np.array([4, 10]) v = np.dot(Ainv, c) print('v:\n', v)
v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
What we expected.However, the use of the inverse of a matrix to solve equations is computationally inefficient. Instead, we should use `linalg.solve` for a determined system (same number of equations and unknowns) or `linalg.lstsq` otherwise: From the help for `solve`: numpy.linalg.solve(a, b)[source] Solv...
v = np.linalg.solve(A, c) print('Using solve:') print('v:\n', v)
Using solve: v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
And from the help for `lstsq`: numpy.linalg.lstsq(a, b, rcond=-1)[source] Return the least-squares solution to a linear matrix equation. Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the num...
v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v)
Using lstsq: v: [ 2. 1.]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Same solutions, of course. When a system of equations has a unique solution, the determinant of the **square** matrix associated to this system of equations is nonzero. When the determinant is zero there are either no solutions or many solutions to the system of equations.But if we have an overdetermined system:$$ x +...
A = np.array([[1, 2], [3, 4], [5, 6]]) print('A:\n', A) c = np.array([4, 10, 15]) print('c:\n', c);
A: [[1 2] [3 4] [5 6]] c: [ 4 10 15]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
Because the matrix $\mathbf{A}$ is not square, we can calculate its pseudo-inverse or use the function `linalg.lstsq`:
v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v)
Using lstsq: v: [ 1.3333 1.4167]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
The functions `inv` and `solve` failed because the matrix $\mathbf{A}$ was not square (overdetermined system). The function `lstsq` not only was able to handle an overdetermined system but also found the best approximate solution. And if the set of equations were underdetermined, `lstsq` would also work. For ...
A = np.array([[1, 2, 2], [3, 4, 1]]) print('A:\n', A) c = np.array([10, 13]) print('c:\n', c); v = np.linalg.lstsq(A, c)[0] print('Using lstsq:') print('v:\n', v);
Using lstsq: v: [ 0.8 2. 2.6]
MIT
notebooks/.ipynb_checkpoints/Matrix-checkpoint.ipynb
raissabthibes/bmc
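For the underdetermined case, `lstsq` returns the minimum-norm solution among the infinitely many exact solutions; the same solution comes out of the pseudo-inverse. A quick verification sketch of my own:

```python
import numpy as np

A = np.array([[1., 2., 2.],
              [3., 4., 1.]])
c = np.array([10., 13.])
v, *_ = np.linalg.lstsq(A, c, rcond=None)
print(np.allclose(A @ v, c))                  # an exact solution
print(np.allclose(v, np.linalg.pinv(A) @ c))  # and the minimum-norm one
```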
# Python program to generate embedding (word vectors) using Word2Vec # importing necessary modules for embedding !pip install --upgrade gensim !pip install rdflib import rdflib !pip uninstall -y numpy !pip install numpy # reinstall numpy, then hit RESTART RUNTIME import gensim from gensim.models import Word2Vec ...
_____no_output_____
MIT
embedding_word_clusters2.ipynb
mzkhan2000/KG-Embeddings
Inference from the analysis: All the above variables show positive skewness, while Age & Mean_distance_from_home are leptokurtic and all other variables are platykurtic. The Mean_Monthly_Income's IQR is at 54K, suggesting company-wide attrition across all income bands. Mean age forms a near normal distribution with 1...
box_plot=dataset1.Age plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Age is normally distributed without any outliers
box_plot=dataset1.MonthlyIncome plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Monthly Income is right-skewed with several outliers
box_plot=dataset1.YearsAtCompany plt.boxplot(box_plot)
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Years at Company is also right-skewed with several outliers observed. Attrition Vs Distance from Home
from scipy.stats import mannwhitneyu a1=dataset.DistanceFromHome_Yes a2=dataset.DistanceFromHome_No stat, p=mannwhitneyu(a1,a2) print(stat, p)  # output: 3132625.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value of 0.0 is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Distance From Home between attrition (Y) and attrition (N). Ha: There is a significant difference in Distance From Home between attrition (Y) and attrition (N). Attrition Vs Income
a1=dataset.MonthlyIncome_Yes a2=dataset.MonthlyIncome_No stat, p=mannwhitneyu(a1,a2) print(stat, p)  # output: 3085416.0 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
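The decision rule applied throughout this analysis (reject H0 when the p-value is below 0.05) can be wrapped in a small helper. This is my own sketch on synthetic data, not the notebook's dataset:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mw_decision(a, b, alpha=0.05):
    """Run a Mann-Whitney U test and report whether H0 (no difference) is rejected."""
    stat, p = mannwhitneyu(a, b)
    return "reject H0" if p < alpha else "fail to reject H0"

rng = np.random.default_rng(42)
same = mw_decision(rng.normal(0, 1, 200), rng.normal(0, 1, 200))
shifted = mw_decision(rng.normal(0, 1, 200), rng.normal(1, 1, 200))
print(same, "|", shifted)
```

With a large location shift between the two samples, the test essentially always rejects H0.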
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in income between attrition (Y) and attrition (N). Ha: There is a significant difference in income between attrition (Y) and attrition (N). Attrition Vs Total Working Years
a1=dataset.TotalWorkingYears_Yes a2=dataset.TotalWorkingYears_No stat, p=mannwhitneyu(a1,a2) print(stat, p)  # output: 2760982.0 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Total Working Years between attrition (Y) and attrition (N). Ha: There is a significant difference in Total Working Years between attrition (Y) and attrition (N). Attrition Vs Years...
a1=dataset.YearsAtCompany_Yes a2=dataset.YearsAtCompany_No stat, p=mannwhitneyu(a1,a2) print(stat, p)  # output: 2882047.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years At Company between attrition (Y) and attrition (N). Ha: There is a significant difference in Years At Company between attrition (Y) and attrition (N). Attrition Vs YearsWi...
a1=dataset.YearsWithCurrManager_Yes a2=dataset.YearsWithCurrManager_No stat, p=mannwhitneyu(a1,a2) print(stat, p)  # output: 3674749.5 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years With Current Manager between attrition (Y) and attrition (N). Ha: There is a significant difference in Years With Current Manager between attrition (Y) and attrition (N). Stat...
from scipy.stats import ttest_ind z1=dataset.DistanceFromHome_Yes z2=dataset.DistanceFromHome_No stat, p=ttest_ind(z2,z1) print(stat, p)  # output: 44.45445917636664 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Distance From Home between attrition (Y) and attrition (N). Ha: There is a significant difference in Distance From Home between attrition (Y) and attrition (N). Attrition Vs Income
z1=dataset.MonthlyIncome_Yes z2=dataset.MonthlyIncome_No stat, p=ttest_ind(z2, z1) print(stat, p)  # output: 52.09279408504947 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Monthly Income between attrition (Y) and attrition (N). Ha: There is a significant difference in Monthly Income between attrition (Y) and attrition (N). Attrition Vs Years At Co...
z1=dataset.YearsAtCompany_Yes z2=dataset.YearsAtCompany_No stat, p=ttest_ind(z2, z1) print(stat, p)  # output: 51.45296941515692 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
As the P value is again 0.0, which is < 0.05, H0 is rejected and Ha is accepted. H0: There is no significant difference in Years At Company between attrition (Y) and attrition (N). Ha: There is a significant difference in Years At Company between attrition (Y) and attrition (N). Attrition Vs Years With ...
z1=dataset.YearsWithCurrManager_Yes z2=dataset.YearsWithCurrManager_No stat, p=ttest_ind(z2, z1) print(stat, p)  # output: 53.02424349024521 0.0
_____no_output_____
MIT
DAY-12/DAY-12.ipynb
BhuvaneshHingal/LetsUpgrade-AI-ML
Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art result...
# As usual, a bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.cnn import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * from cs231n.fast_layers import * from cs2...
X_train: (49000, 3, 32, 32) y_train: (49000,) X_val: (1000, 3, 32, 32) y_val: (1000,) X_test: (1000, 3, 32, 32) y_test: (1000,)
MIT
assignment2/ConvolutionalNetworks.ipynb
pranav-s/Stanford_CS234_CV_2017
Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file `cs231n/layers.py`, implement the forward pass for the convolution layer in the function `conv_forward_naive`. You don't have to worry too much about efficiency at this point; just write the code in whatever wa...
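A naive forward pass can be written with explicit loops. The sketch below is my own illustration of the idea (not the assignment's reference solution): pad the input, then slide each filter over the padded volume with the given stride.

```python
import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    """x: (N, C, H, W), w: (F, C, HH, WW), b: (F,). Returns out: (N, F, H', W')."""
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    # Zero-pad only the spatial dimensions
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # each image
        for f in range(F):              # each filter
            for i in range(H_out):      # each output row
                for j in range(W_out):  # each output column
                    window = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```

For example, convolving an all-ones 3x3 input with an all-ones 3x3 filter (stride 1, no padding) gives a single output value of 9.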
x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.arra...
_____no_output_____
MIT
assignment2/ConvolutionalNetworks.ipynb
pranav-s/Stanford_CS234_CV_2017